Move rebased CONNECTORS-674-2 branch into place

git-svn-id: https://svn.apache.org/repos/asf/manifoldcf/branches/CONNECTORS-674@1554204 13f79535-47bb-0310-9956-ffa450edef68
diff --git a/CHANGES.txt b/CHANGES.txt
index 82ea7b5..a2c0948 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,7 +1,521 @@
 ManifoldCF Change Log
 $Id$
 
-======================= 1.2-dev =====================
+======================= 1.5-dev =====================
+
+CONNECTORS-849: Upgrade to Hadoop 2.2.0.
+(Karl Wright)
+
+CONNECTORS-847: Upgrade to SolrJ 4.6.0.
+(Karl Wright)
+
+CONNECTORS-814: Make File System output connector handle
+directory/file conflicts.
+(Karl Wright)
+
+CONNECTORS-733: Write a test which exercises database connection
+interruption logic, and debug the same.
+(Ahmet Arslan, Karl Wright)
+
+CONNECTORS-816: Document revisions to SharePoint connector,
+and new SharePoint authorities.
+(Karl Wright)
+
+CONNECTORS-553: Add email connector.
+(Tishan DahanaYakage, Piergiorgio Lucidi, Karl Wright)
+
+CONNECTORS-846: Once a service had grabbed all connector
+instances but was no longer using them, it would not release
+them for other cluster members to use.
+(Karl Wright)
+
+CONNECTORS-844: Remove "per JVM" from connection maximum
+messages in crawler UI.
+(Karl Wright)
+
+CONNECTORS-842: Update documentation to describe the functioning
+of the org.apache.manifoldcf.mysql.client property.
+(Chris Griffin, Karl Wright)
+
+CONNECTORS-843: Solr Connector methods do not call getSession()
+when they should, leading to unpredictable results.
+(Markus Schuch, Karl Wright)
+
+CONNECTORS-841: Always reset document schedules on job start.
+(David Morana, Karl Wright)
+
+CONNECTORS-839: Fix our CloudSolrServer usage to use multipart
+post instead of putting everything in the URL.
+(Alessandro Benedetti, Raymond Wiker, Karl Wright)
+
+CONNECTORS-838: Character-stuff names that ZooKeeper sees, to
+prevent issues with '/' characters.
+(Karl Wright)
+
+CONNECTORS-837: Specifying 1 connection causes hang.
+(Karl Wright)
+
+CONNECTORS-836: Use the same thread context in the registered
+shutdown objects.
+(Karl Wright)
+
+CONNECTORS-835: Fix busted ZooKeeper implementation.
+(Karl Wright)
+
+CONNECTORS-833: Fix Null Pointer exception in LiveLink connector.
+(David Morana, Karl Wright)
+
+CONNECTORS-828: New stylesheet!  Apparently brings the
+crawler UI into the modern world.
+(Eranda Bandaranaike)
+
+CONNECTORS-827: Add a feature which removes all ingeststatus
+records for an output connection, in case the target index
+is gone forever and is inaccessible.
+(Swami Rajamohan, Karl Wright)
+
+CONNECTORS-795, CONNECTORS-821: Release new versions of
+Solr 3.x, Solr 4.x, ElasticSearch, and SharePoint 2010 plugins.
+(Karl Wright)
+
+CONNECTORS-832: Add example second agents process to all
+multiprocess examples.
+(Karl Wright)
+
+CONNECTORS-831: Refactor ReprioritizationTracker so that it uses
+the interface/factory paradigm.
+(Karl Wright)
+
+CONNECTORS-830: Stuffer thread throttling now works globally, not
+just locally.
+(Karl Wright)
+
+CONNECTORS-781: Add support for multiple agents processes.  This
+ticket involved changing much of the underlying infrastructure for
+the framework, including adding lock manager features, revamping the
+agents framework, building tests, and adding schema features that
+allow for an agents process failure.  All in all, a very significant change,
+which could well have introduced various instabilities.  Please don't
+hesitate to note any issues.
+(Graeme Seaton, Karl Wright)
+
+CONNECTORS-822: Use data size as Long value, not int.
+(David Morana, Karl Wright)
+
+CONNECTORS-819: Add missing metadata to JCIFS connector.
+(Graeme Seaton)
+
+CONNECTORS-817: Improve SharePoint logging.
+(Mark Libucha, Karl Wright)
+
+CONNECTORS-813: For list items, obtain the url from the EncodedAbsURL
+metadata field.  This should allow list items to be clicked on in search results.
+(Mark Libucha, Karl Wright)
+
+CONNECTORS-812: SharePoint connector needs to ignore constructs
+that it doesn't understand when enumerating list contents, because
+under some configurations, SharePoint includes entries that aren't
+actually list items.
+(Mark Libucha, Karl Wright)
+
+CONNECTORS-810: Update JDBC connector documentation, and add
+documentation for the JDBC authority connector.
+(Karl Wright)
+
+CONNECTORS-805: Add author name/email support to RSS connector.
+(Benjamin Brandmeier, Karl Wright)
+
+CONNECTORS-13: Provide a ZooKeeper method of coordinating
+various ManifoldCF processes, along with an example.
+(Karl Wright)
+
+CONNECTORS-808: Add raw jdbc string so smart users can override the
+default behavior and supply gobbledegook Oracle connect strings.
+(Marcello Lorenzi, Karl Wright)
+
+CONNECTORS-807: Handle the case when there are no documents being
+deleted.
+(Adrian Conlon, Karl Wright)
+
+CONNECTORS-804: Web connector documentation is out of date.
+(Shigeki Kobayashi, Karl Wright)
+
+CONNECTORS-806: Web connector session passwords were not being
+handled in the UI properly.
+(Karl Wright)
+
+CONNECTORS-803: Add automatic and manual ability to clean out
+repohistory rows.
+(Marcello Lorenzi, Karl Wright)
+
+CONNECTORS-801: Refactor JDBC authority to bring it into compliance with current
+conventions in JDBC connector.  Warning: this change is not backwards
+compatible!  JDBC authority queries now require explicit return column
+names for the user id and token queries!
+(Karl Wright)
+
+CONNECTORS-798: Bring JDBC connector support for CLOBs into the
+modern era.  This also required extension of the CharacterInputFile
+paradigm slightly - a new method to find the UTF-8 byte length was needed.
+(Karl Wright)
+
+CONNECTORS-797: Bad query when maxcount was exceeded in the
+database table for the jobstatus screen.
+(Graeme Seaton, Karl Wright)
+
+CONNECTORS-796: Fix NPE in RSS connector having to do with
+tags that aren't nested as expected.  CDATA parsing was also broken.
+(Benjamin Brandmeier, Karl Wright)
+
+CONNECTORS-792: Introduce concepts of authorization domain
+and authority group, and appropriate UI, API, etc. changes.  This
+is to support federated authorization models such as what SharePoint
+Claim Space fundamentally does, where there are multiple authorities
+per repository.
+WARNING: Schema change!
+(Karl Wright)
+
+======================= Release 1.4 =====================
+
+CONNECTORS-791: Broken WHERE clause in job status query.
+(Karl Wright)
+
+CONNECTORS-790: Fix NPE when reading job status via API.
+(Erlend Garåsen, Karl Wright)
+
+CONNECTORS-789: Fix broken metadata field display in SharePoint
+connector UI.
+(Mark Libucha, Karl Wright)
+
+CONNECTORS-786: Fix broken Discussion Groups in SharePoint.
+(Mark Libucha, Karl Wright)
+
+CONNECTORS-785: Turn off debug logging for Axis configuration
+exception, to remove spurious log noise.
+(Karl Wright)
+
+CONNECTORS-771: Include new versions of SharePoint plugins in
+distribution.
+(Karl Wright)
+
+CONNECTORS-784: Fix incorrect no-day-of-month-selected message
+when viewing a schedule record.
+(Adrian Conlon, Karl Wright)
+
+CONNECTORS-783: Add support for limiting the amount of counting
+needed for the job status page.
+(Hiroshi Tatsumi, Karl Wright)
+
+CONNECTORS-782: Add unique-ID metadata in SharePoint connector.
+(Dmitry Goldenberg, Karl Wright)
+
+CONNECTORS-778: Add support for attachments in SharePoint
+connector.
+(Dmitry Goldenberg, Karl Wright)
+
+CONNECTORS-772: SharePoint connector wasn't working against
+certain SharePoint instances.  The reason was that some paths
+were relative and some absolute.  Fixed ALL path management to
+have stronger check logic so bad paths are detected, warned about, and
+skipped.
+(Dmitry Goldenberg, Karl Wright)
+
+CONNECTORS-775: Use httpclient 4.2.6.
+(Karl Wright)
+
+CONNECTORS-774: Add support for proxies to Wiki connector.
+(Karl Wright)
+
+CONNECTORS-773: Add support for proxies to Jira connectors.
+(Karl Wright)
+
+CONNECTORS-770: Java7 no longer allows an interface to be
+interchangeable with a class.  Modified FileNet stub accordingly.
+(Yann Barraud, Karl Wright)
+
+CONNECTORS-767: Add filename support to Web connector.
+(Shinichiro Abe)
+
+CONNECTORS-769: Include general_parentid in Livelink metadata.
+(David Morana, Karl Wright)
+
+CONNECTORS-768: Add more extensions to ExtensionMimeMap.
+(Shinichiro Abe)
+
+CONNECTORS-766: When using mcf-api-service, multiple query
+parameters are not parsed correctly.
+(Shinichiro Abe)
+
+CONNECTORS-765: Increase crawler-ui session timeout to 30 minutes.
+(Erlend Garåsen, Karl Wright)
+
+CONNECTORS-764: HOPCOUNTREMOVED records need to be reset when
+a job's hopcount limits change.  It also makes sense to reset them
+when the set of documents is changed.
+(Karl Wright)
+
+CONNECTORS-750: Skip files when catching FileNotFoundException,
+e.g. access/permission denied files in FileConnector.
+(Shinichiro Abe)
+
+======================= Release 1.3 =====================
+
+CONNECTORS-761: Fix broken tab in Jira authority connector, and
+don't use startsWith in javascript since IE doesn't recognize it.
+(Shinichiro Abe, Karl Wright)
+
+CONNECTORS-760: HDFSRepositoryConnector's version string always starts with '-'.
+(Minoru Osuka)
+
+CONNECTORS-759: Fix broken content type for login page.
+(Shinichiro Abe, Karl Wright)
+
+CONNECTORS-757: NPE's from GoogleDrive connector when crawling
+documents that don't have a file length.
+(Piergiorgio Lucidi, Karl Wright)
+
+CONNECTORS-756: Fix broken JDBC authority UI.
+(Karl Wright)
+
+CONNECTORS-755: ant rat-sources finds unlicensed files in alfresco connector
+(Karl Wright, Piergiorgio Lucidi)
+
+CONNECTORS-753: Japanese translations needed for ManifoldCF crawler UI login page and logout link
+(Minoru Osuka, Karl Wright)
+
+CONNECTORS-737: Make crawler-UI more secure.  This ticket requires
+that all users log in to the MCF UI, and also prevents passwords from
+appearing in HTML pages.
+(Maciej Lizewski, Karl Wright)
+
+CONNECTORS-751: RequestMinimum field not handled properly in API.
+(Maciej Lizewski, Karl Wright)
+
+CONNECTORS-749: HDFSRepositoryConnector can not fetch files.
+(Minoru Osuka)
+
+CONNECTORS-748: Fix broken regular expressions in file connector, and
+modify file output connector to deal with colons in the file name in an
+acceptable way (so it can be used on Windows too).
+(Minoru Osuka, Karl Wright)
+
+CONNECTORS-729: Break up Jira URL into components, with proper
+javascript checking.
+(Karl Wright)
+
+CONNECTORS-744: Use background threads in HDFS output connector.
+(Karl Wright)
+
+CONNECTORS-742: Document HDFS connector.
+(Karl Wright)
+
+CONNECTORS-741: Document HDFS output connector.
+(Karl Wright)
+
+CONNECTORS-743: Document user mapping functionality.
+(Karl Wright)
+
+CONNECTORS-703: Add mapper plugin functionality, and regular expression
+mapper.  WARNING: This change represents a schema change!!
+(Maciej Lizewski, Karl Wright)
+
+CONNECTORS-745: Translate FileSystem and HDFS output connector's end-user documentation.
+(Minoru Osuka)
+
+CONNECTORS-740: Add documentation for filesystem connector changes
+and for filesystem output connector.  Also refactor filesystem connector
+for better UI.
+(Karl Wright)
+
+CONNECTORS-738: Update how-to-build-and-deploy to reflect new
+connectors.
+(Karl Wright)
+
+CONNECTORS-739: Update included-connectors on site.
+(Karl Wright)
+
+CONNECTORS-735: Include crawl date in output.  Modified Solr connector
+to allow you to specify indexeddate attribute name.
+(Stephane Gamard, Karl Wright)
+
+CONNECTORS-728: Add HDFS connector
+(Minoru Osuka, Karl Wright)
+
+CONNECTORS-727: Implemented generic API connector
+(Maciej Lizewski, Karl Wright)
+
+CONNECTORS-734: Catch deadlock error with EXPLAIN ANALYZE in
+postgresql, and ignore it.
+(Ahmet Arslan, Karl Wright)
+
+CONNECTORS-732: Fix crawler-ui's message for 'Waiting' job status.
+(Ahmet Arslan)
+
+CONNECTORS-723: Add a Jira connector.  This connector does not yet
+handle security or comments, but may be extended in the future to do
+so.
+(Andrew Janowczyk, Karl Wright)
+
+CONNECTORS-725: Update to SolrJ 4.3.1.
+(Karl Wright)
+
+CONNECTORS-722: Fix hang during seeding in DropBox connector and
+GoogleDrive connector.
+(Andrew Janowczyk, Karl Wright)
+
+CONNECTORS-721: Fix a refactoring error in DropBox connector.
+(Andrew Janowczyk)
+
+CONNECTORS-715: Add sufficient logging to the Web Connector so that
+people can diagnose their own "why didn't it index" questions.
+(TC Tobin-Campbell, Karl Wright)
+
+CONNECTORS-717: Alfresco Connector needs a new parameter for the Socket Timeout
+(Piergiorgio Lucidi)
+
+CONNECTORS-714: Allow LiveLink connector to use LAPI for document fetches.
+(David Morana, Karl Wright)
+
+CONNECTORS-718: Alfresco Connector must throw exceptions with handler methods
+(Piergiorgio Lucidi, Karl Wright)
+
+CONNECTORS-713: Alfresco connector needs to deal with IOExceptions better
+(Piergiorgio Lucidi, Karl Wright)
+
+CONNECTORS-623: Use UTF-8 for encoding headers in SolrJ post.
+(Shinichiro Abe, Karl Wright)
+
+CONNECTORS-716: RepositoryDocument.addField() would go into an
+infinite loop if field value was null.
+(Piergiorgio Lucidi, Karl Wright)
+
+CONNECTORS-710: FileConnector should have option of outputting a full http url based on Wget conventions, not just a file:/ url
+(Minoru Osuka, Karl Wright)
+
+CONNECTORS-667: Refix NPE problem with Livelink authority.  This time
+we were seeing it when SSL was not on.
+(David Morana, Karl Wright)
+
+CONNECTORS-712: ArrayIndexOutOfBoundsException in the Alfresco Connector
+(Karl Wright)
+
+CONNECTORS-711: Wrong date parsing of the Alfresco Connector
+(Piergiorgio Lucidi, Karl Wright)
+
+CONNECTORS-705: Upgrade the CMIS Connector to OpenCMIS 0.9.0
+(Piergiorgio Lucidi)
+
+CONNECTORS-706: Update documentation to require JDK 1.7.
+(Karl Wright)
+
+CONNECTORS-635: Alfresco test sometimes fails; upgrade to Alfresco 4.0 recommended
+(Piergiorgio Lucidi)
+
+CONNECTORS-707: Treat special character "." as meaning "no extension",
+for ElasticSearch and OpenSearchServer output connectors.
+(TC Tobin-Campbell, Karl Wright)
+
+CONNECTORS-708: Make JDBC connector check mime type for indexability,
+if it is present.
+(Richard Nichols, Karl Wright)
+
+CONNECTORS-709: Escape \r, \n, \f, and \b in ElasticSearch connector.
+(Richard Nichols, Karl Wright)
+
+CONNECTORS-701: Add forced ACLs to DropBox connector.  Fixed a
+few UI-related other problems as well.
+WARNING: This is a non-backwards-compatible change!!
+(Karl Wright)
+
+CONNECTORS-702: Add forced ACLs to GoogleDrive connector.  Also
+fixed a number of UI-related issues.
+(Karl Wright)
+
+CONNECTORS-684: Add Dropbox connector end-user documentation in
+Japanese.
+(Shinichiro Abe)
+
+CONNECTORS-687: Fix the way documents are indexed via DropBox, so
+that dangling threads are not left unjoined, and all common metadata
+is set in RepositoryDocument.
+(Karl Wright)
+
+CONNECTORS-700: Fix ISO8601 date parsing to handle timezones with
+colons in them, e.g. -08:00
+(Stephane Gamard, Karl Wright)
+
+CONNECTORS-698: Add various required metadata values to the
+GoogleDrive connector.
+(Karl Wright)
+
+CONNECTORS-693: Support for gzip and deflate encoding for web
+connector.
+(Maciej Liżewski, Karl Wright)
+
+CONNECTORS-694: Add Google Drive connector.
+(Andrew Janowczyk, Karl Wright)
+
+CONNECTORS-690: For ElasticSearch connector, include _name and
+_content_type field within "file" portion of JSON, so it will work properly
+with the Mapper Attachment Plugin.
+(Richard Nichols, Karl Wright)
+
+CONNECTORS-692: Add support for basic auth in wiki connector.
+(TC Tobin-Campbell, Karl Wright)
+
+CONNECTORS-689: Add ability to crawl user workspaces in LiveLink
+connector.
+(David Morana, Karl Wright)
+
+CONNECTORS-696: FileSystem Output Connector.
+(Minoru Osuka)
+
+======================= Release 1.2 =====================
+
+CONNECTORS-682: Fix expect-continue issues with Solr when there is
+a Solr delay of more than 3 seconds.
+(Oleg Kalnichevski, Erlend Garåsen, Karl Wright)
+
+CONNECTORS-683: Add Dropbox connector end-user documentation.
+(Andrew Janowczyk)
+
+CONNECTORS-586: Fix native2ascii incompatibility with Java 7.
+(Christian Ziech, Karl Wright)
+
+CONNECTORS-685: Handle the case when connector model is
+ADD_CHANGE_DELETE and you change configuration data.
+(Maciej Liżewski, Karl Wright)
+
+CONNECTORS-676: Include DropBox connector.
+(Andrew Janowczyk, Karl Wright)
+
+CONNECTORS-678: Add missing noteModification() method calls, so
+that jobqueue is reanalyzed more frequently.
+(Karl Wright)
+
+CONNECTORS-681: ElasticSearch and OpenSearchServer connectors
+both misused the File object passed to them in checkFileIndexable()
+in order to see if the extension was a supported one.  Instead they
+should have been checking the URL.  Added that code as well as changed
+the JCIFS connector to check indexability using the URL as well.
+(konrad, Karl Wright)
+
+CONNECTORS-679: Web connector hangs during throttling.  Reason
+appears to be that it is possible to interrupt the beginRead() method
+after it goes into "obtain estimate" mode.  Added code to make it clean
+up in that case.  Also applied to RSS connector.
+(Erlend Garåsen, Karl Wright)
+
+CONNECTORS-677: Close body streams where required.
+(Karl Wright)
+
+CONNECTORS-599: Upgrade to a new version of Derby which doesn't
+stall.
+(Karl Wright)
+
+CONNECTORS-675: Fix for json UTF-8 encoding, elastic search
+connector.
+(Karl Wright)
 
 CONNECTORS-649: Adopt httpclient 4.2.4.
 (Karl Wright)
diff --git a/DEPENDENCIES.txt b/DEPENDENCIES.txt
index 8ffdcaf..e68ef68 100644
--- a/DEPENDENCIES.txt
+++ b/DEPENDENCIES.txt
@@ -1,6 +1,6 @@
 ManifoldCF requires
 ------------------
-* JRE 1.6 or above
+* JRE 1.7 or above
 * SVN client, version 1.7 or above
 * Apache Ant, version 1.7 or above
 * Apache Forrest, version 0.9-dev or above
diff --git a/README.txt b/README.txt
index 3d154b0..dd0ead2 100644
--- a/README.txt
+++ b/README.txt
@@ -32,10 +32,10 @@
    
 3. Copy the lib folder in the lib distribution into the source distribution.
 
-4. Download the Java SE 6 JDK (Java Development Kit), or greater, from http://www.oracle.com/technetwork/java/index.html.
+4. Download the Java SE 7 JDK (Java Development Kit), or greater, from http://www.oracle.com/technetwork/java/index.html.
    You will need the JDK installed, and the %JAVA_HOME%\bin directory included
    on your command path.  To test this, issue a "java -version" command from your
-   shell and verify that the Java version is 1.6 or greater.
+   shell and verify that the Java version is 1.7 or greater.
 
 5. Download the Apache Ant binary distribution (1.7.0 or greater) from http://ant.apache.org.
    You will need Ant installed and the %ANT_HOME%\bin directory included on your
diff --git a/build.xml b/build.xml
index 4f6ea7a..2e004e7 100644
--- a/build.xml
+++ b/build.xml
@@ -58,12 +58,18 @@
         <ant dir="framework" target="clean"/>
         <ant dir="connectors/alfresco" target="clean"/>
         <ant dir="connectors/cmis" target="clean"/>
+        <ant dir="connectors/dropbox" target="clean"/>
+        <ant dir="connectors/email" target="clean"/>
+        <ant dir="connectors/generic" target="clean"/>
+        <ant dir="connectors/googledrive" target="clean"/>
+        <ant dir="connectors/jira" target="clean"/>
         <ant dir="connectors/activedirectory" target="clean"/>
         <ant dir="connectors/ldap" target="clean"/>
         <ant dir="connectors/documentum" target="clean"/>
         <ant dir="connectors/filenet" target="clean"/>
         <ant dir="connectors/filesystem" target="clean"/>
         <ant dir="connectors/gts" target="clean"/>
+        <ant dir="connectors/hdfs" target="clean"/>
         <ant dir="connectors/jcifs" target="clean"/>
         <ant dir="connectors/jdbc" target="clean"/>
         <ant dir="connectors/livelink" target="clean"/>
@@ -75,6 +81,7 @@
         <ant dir="connectors/nullauthority" target="clean"/>
         <ant dir="connectors/nulloutput" target="clean"/>
         <ant dir="connectors/rss" target="clean"/>
+        <ant dir="connectors/regexpmapper" target="clean"/>
         <ant dir="connectors/sharepoint" target="clean"/>
         <ant dir="connectors/webcrawler" target="clean"/>
         <ant dir="connectors/wiki" target="clean"/>
@@ -84,6 +91,7 @@
         <ant dir="tests/cmis" target="clean"/>
         <ant dir="tests/filesystem" target="clean"/>
         <ant dir="tests/gts" target="clean"/>
+        <ant dir="tests/hdfs" target="clean"/>
         <ant dir="tests/opensearchserver" target="clean"/>
         <ant dir="tests/rss" target="clean"/>
         <ant dir="tests/solr" target="clean"/>
@@ -108,12 +116,18 @@
         <ant dir="framework" target="clean"/>
         <ant dir="connectors/alfresco" target="clean"/>
         <ant dir="connectors/cmis" target="clean"/>
+        <ant dir="connectors/dropbox" target="clean"/>
+        <ant dir="connectors/email" target="clean"/>
+        <ant dir="connectors/generic" target="clean"/>
+        <ant dir="connectors/googledrive" target="clean"/>
+        <ant dir="connectors/jira" target="clean"/>
         <ant dir="connectors/activedirectory" target="clean"/>
         <ant dir="connectors/ldap" target="clean"/>
         <ant dir="connectors/documentum" target="clean"/>
         <ant dir="connectors/filenet" target="clean"/>
         <ant dir="connectors/filesystem" target="clean"/>
         <ant dir="connectors/gts" target="clean"/>
+        <ant dir="connectors/hdfs" target="clean"/>
         <ant dir="connectors/jcifs" target="clean"/>
         <ant dir="connectors/jdbc" target="clean"/>
         <ant dir="connectors/livelink" target="clean"/>
@@ -125,6 +139,7 @@
         <ant dir="connectors/nullauthority" target="clean"/>
         <ant dir="connectors/nulloutput" target="clean"/>
         <ant dir="connectors/rss" target="clean"/>
+        <ant dir="connectors/regexpmapper" target="clean"/>
         <ant dir="connectors/sharepoint" target="clean"/>
         <ant dir="connectors/webcrawler" target="clean"/>
         <ant dir="connectors/wiki" target="clean"/>
@@ -134,6 +149,7 @@
         <ant dir="tests/cmis" target="clean"/>
         <ant dir="tests/filesystem" target="clean"/>
         <ant dir="tests/gts" target="clean"/>
+        <ant dir="tests/hdfs" target="clean"/>
         <ant dir="tests/opensearchserver" target="clean"/>
         <ant dir="tests/rss" target="clean"/>
         <ant dir="tests/solr" target="clean"/>
@@ -278,6 +294,14 @@
 
     <target name="setup-cmis-connector" depends="build-framework" if="downloaded"/>
     
+    <target name="setup-dropbox-connector" depends="build-framework" if="downloaded"/>
+
+    <target name="setup-generic-connector" depends="build-framework" if="downloaded"/>
+
+    <target name="setup-googledrive-connector" depends="build-framework" if="downloaded"/>
+    
+    <target name="setup-jira-connector" depends="build-framework" if="downloaded"/>
+    
     <target name="setup-alfresco-connector-tests" depends="build-tests-framework" if="downloaded"/>
 
     <target name="setup-cmis-connector-tests" depends="build-tests-framework" if="downloaded"/>
@@ -294,6 +318,39 @@
         <ant dir="connectors/cmis" target="build"/>
     </target>
     
+    <target name="build-dropbox-connector" depends="setup-dropbox-connector" if="downloaded">
+        <ant dir="connectors/dropbox" target="build"/>
+    </target>
+
+    <target name="build-generic-connector" depends="setup-generic-connector" if="downloaded">
+        <ant dir="connectors/generic" target="build"/>
+    </target>
+
+    <target name="build-googledrive-connector" depends="setup-googledrive-connector" if="downloaded">
+        <ant dir="connectors/googledrive" target="build"/>
+    </target>
+
+    <target name="build-jira-connector" depends="setup-jira-connector" if="downloaded">
+        <ant dir="connectors/jira" target="build"/>
+    </target>
+
+
+    <target name="doc-dropbox-connector" depends="setup-dropbox-connector" if="downloaded">
+        <ant dir="connectors/dropbox" target="doc"/>
+    </target>
+
+    <target name="doc-generic-connector" depends="setup-generic-connector" if="downloaded">
+        <ant dir="connectors/generic" target="doc"/>
+    </target>
+
+    <target name="doc-googledrive-connector" depends="setup-googledrive-connector" if="downloaded">
+        <ant dir="connectors/googledrive" target="doc"/>
+    </target>
+
+    <target name="doc-jira-connector" depends="setup-jira-connector" if="downloaded">
+        <ant dir="connectors/jira" target="doc"/>
+    </target>
+
     <target name="doc-alfresco-connector" depends="setup-alfresco-connector" if="downloaded">
         <ant dir="connectors/alfresco" target="doc"/>
     </target>
@@ -494,6 +551,42 @@
         <ant dir="connectors/gts" target="run-tests-HSQLDB"/>
     </target>
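+    <!-- HDFS connector targets: setup, build, doc, and per-database test runs (Derby, PostgreSQL, MySQL, HSQLDB), following the same layout as the other connectors. -->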
 
+    <target name="setup-hdfs-connector" depends="build-framework" if="downloaded"/>
+
+    <target name="setup-hdfs-connector-tests" depends="build-tests-framework" if="downloaded"/>
+
+    <target name="build-hdfs-connector" depends="setup-hdfs-connector" if="downloaded">
+        <ant dir="connectors/hdfs" target="build"/>
+    </target>
+
+    <target name="doc-hdfs-connector" depends="setup-hdfs-connector" if="downloaded">
+      <ant dir="connectors/hdfs" target="doc"/>
+    </target>
+
+    <target name="build-tests-hdfs-connector" depends="setup-hdfs-connector,setup-hdfs-connector-tests" if="downloaded">
+        <ant dir="connectors/hdfs" target="build-tests"/>
+    </target>
+
+    <target name="run-tests-hdfs-connector" depends="setup-hdfs-connector,setup-hdfs-connector-tests" if="downloaded">
+        <ant dir="connectors/hdfs" target="run-tests"/>
+    </target>
+
+    <target name="run-tests-derby-hdfs-connector" depends="setup-hdfs-connector,setup-hdfs-connector-tests" if="downloaded">
+        <ant dir="connectors/hdfs" target="run-tests-derby"/>
+    </target>
+
+    <target name="run-tests-postgresql-hdfs-connector" depends="setup-hdfs-connector,setup-hdfs-connector-tests" if="downloaded">
+        <ant dir="connectors/hdfs" target="run-tests-postgresql"/>
+    </target>
+
+    <target name="run-tests-mysql-hdfs-connector" depends="setup-hdfs-connector,setup-hdfs-connector-tests" if="downloaded">
+        <ant dir="connectors/hdfs" target="run-tests-mysql"/>
+    </target>
+
+    <target name="run-tests-HSQLDB-hdfs-connector" depends="setup-hdfs-connector,setup-hdfs-connector-tests" if="downloaded">
+        <ant dir="connectors/hdfs" target="run-tests-HSQLDB"/>
+    </target>
+
     <target name="setup-jcifs-connector" depends="build-framework" if="downloaded"/>
 
     <target name="setup-jcifs-connector-tests" depends="build-tests-framework" if="downloaded"/>
@@ -892,6 +985,42 @@
         <ant dir="connectors/rss" target="run-tests-HSQLDB"/>
     </target>
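+    <!-- Regular expression mapper targets; the mapper builds and tests like a connector but is registered as a mapping connector at delivery time (see general-add-mapping-connector). -->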
 
+    <target name="setup-regexp-mapper" depends="build-framework" if="downloaded"/>
+
+    <target name="setup-regexp-mapper-tests" depends="build-tests-framework" if="downloaded"/>
+
+    <target name="build-regexp-mapper" depends="setup-regexp-mapper" if="downloaded">
+        <ant dir="connectors/regexpmapper" target="build"/>
+    </target>
+
+    <target name="doc-regexp-mapper" depends="setup-regexp-mapper" if="downloaded">
+        <ant dir="connectors/regexpmapper" target="doc"/>
+    </target>
+
+    <target name="build-tests-regexp-mapper" depends="setup-regexp-mapper,setup-regexp-mapper-tests" if="downloaded">
+        <ant dir="connectors/regexpmapper" target="build-tests"/>
+    </target>
+
+    <target name="run-tests-regexp-mapper" depends="setup-regexp-mapper,setup-regexp-mapper-tests" if="downloaded">
+        <ant dir="connectors/regexpmapper" target="run-tests"/>
+    </target>
+
+    <target name="run-tests-derby-regexp-mapper" depends="setup-regexp-mapper,setup-regexp-mapper-tests" if="downloaded">
+        <ant dir="connectors/regexpmapper" target="run-tests-derby"/>
+    </target>
+
+    <target name="run-tests-postgresql-regexp-mapper" depends="setup-regexp-mapper,setup-regexp-mapper-tests" if="downloaded">
+        <ant dir="connectors/regexpmapper" target="run-tests-postgresql"/>
+    </target>
+
+    <target name="run-tests-mysql-regexp-mapper" depends="setup-regexp-mapper,setup-regexp-mapper-tests" if="downloaded">
+        <ant dir="connectors/regexpmapper" target="run-tests-mysql"/>
+    </target>
+
+    <target name="run-tests-HSQLDB-regexp-mapper" depends="setup-regexp-mapper,setup-regexp-mapper-tests" if="downloaded">
+        <ant dir="connectors/regexpmapper" target="run-tests-HSQLDB"/>
+    </target>
+
     <target name="setup-sharepoint-connector" depends="build-framework" if="downloaded"/>
     
     <target name="setup-sharepoint-connector-tests" depends="build-tests-framework" if="downloaded"/>
@@ -1000,6 +1129,41 @@
         <ant dir="connectors/wiki" target="run-tests-HSQLDB"/>
     </target>
 
+    <target name="setup-email-connector" depends="build-framework" if="downloaded"/>
+    
+    <target name="setup-email-connector-tests" depends="build-tests-framework" if="downloaded"/>
+
+    <target name="build-email-connector" depends="setup-email-connector" if="downloaded">
+        <ant dir="connectors/email" target="build"/>
+    </target>
+
+    <target name="doc-email-connector" depends="setup-email-connector" if="downloaded">
+        <ant dir="connectors/email" target="doc"/>
+    </target>
+
+    <target name="build-tests-email-connector" depends="setup-email-connector,setup-email-connector-tests" if="downloaded">
+        <ant dir="connectors/email" target="build-tests"/>
+    </target>
+
+    <target name="run-tests-email-connector" depends="setup-email-connector,setup-email-connector-tests" if="downloaded">
+        <ant dir="connectors/email" target="run-tests"/>
+    </target>
+
+    <target name="run-tests-derby-email-connector" depends="setup-email-connector,setup-email-connector-tests" if="downloaded">
+        <ant dir="connectors/email" target="run-tests-derby"/>
+    </target>
+
+    <target name="run-tests-postgresql-email-connector" depends="setup-email-connector,setup-email-connector-tests" if="downloaded">
+        <ant dir="connectors/email" target="run-tests-postgresql"/>
+    </target>
+
+    <target name="run-tests-mysql-email-connector" depends="setup-email-connector,setup-email-connector-tests" if="downloaded">
+        <ant dir="connectors/email" target="run-tests-mysql"/>
+    </target>
+
+    <target name="run-tests-HSQLDB-email-connector" depends="setup-email-connector,setup-email-connector-tests" if="downloaded">
+        <ant dir="connectors/email" target="run-tests-HSQLDB"/>
+    </target>
 
     <target name="deliver-site-doc" depends="presite-check" if="site-exists">
           <mkdir dir="dist/doc"/>
@@ -1018,15 +1182,19 @@
     <target name="preclean-for-delivery">
         <mkdir dir="dist"/>
         <mkdir dir="dist/example"/>
-        <mkdir dir="dist/multiprocess-example"/>
         <mkdir dir="dist/example-proprietary"/>
-        <mkdir dir="dist/multiprocess-example-proprietary"/>
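+        <!-- The single multiprocess example is replaced by separate file-based and ZooKeeper-based examples, each with a proprietary variant. -->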
+        <mkdir dir="dist/multiprocess-file-example"/>
+        <mkdir dir="dist/multiprocess-zk-example"/>
+        <mkdir dir="dist/multiprocess-file-example-proprietary"/>
+        <mkdir dir="dist/multiprocess-zk-example-proprietary"/>
         <delete file="dist/connectors.xml"/>
         <delete file="dist/connectors-proprietary.xml"/>
         <delete file="dist/example/properties.xml"/>
         <delete file="dist/example-proprietary/properties.xml"/>
-        <delete file="dist/multiprocess-example/properties.xml"/>
-        <delete file="dist/multiprocess-example-proprietary/properties.xml"/>
+        <delete file="dist/multiprocess-file-example/properties.xml"/>
+        <delete file="dist/multiprocess-file-example-proprietary/properties.xml"/>
+        <delete file="dist/multiprocess-zk-example/properties.xml"/>
+        <delete file="dist/multiprocess-zk-example-proprietary/properties.xml"/>
         <delete file="dist/NOTICE.txt"/>
         <delete file="dist/LICENSE.txt"/>
     </target>
@@ -1040,13 +1208,21 @@
         <copy todir="dist/web-proprietary">
             <fileset dir="framework/dist/web-proprietary"/>
         </copy>
-        <mkdir dir="dist/multiprocess-example"/>
-        <copy todir="dist/multiprocess-example">
-            <fileset dir="framework/dist/multiprocess-example"/>
+        <mkdir dir="dist/multiprocess-file-example"/>
+        <copy todir="dist/multiprocess-file-example">
+            <fileset dir="framework/dist/multiprocess-file-example"/>
         </copy>
-        <mkdir dir="dist/multiprocess-example-proprietary"/>
-        <copy todir="dist/multiprocess-example-proprietary">
-            <fileset dir="framework/dist/multiprocess-example-proprietary"/>
+        <mkdir dir="dist/multiprocess-file-example-proprietary"/>
+        <copy todir="dist/multiprocess-file-example-proprietary">
+            <fileset dir="framework/dist/multiprocess-file-example-proprietary"/>
+        </copy>
+        <mkdir dir="dist/multiprocess-zk-example"/>
+        <copy todir="dist/multiprocess-zk-example">
+            <fileset dir="framework/dist/multiprocess-zk-example"/>
+        </copy>
+        <mkdir dir="dist/multiprocess-zk-example-proprietary"/>
+        <copy todir="dist/multiprocess-zk-example-proprietary">
+            <fileset dir="framework/dist/multiprocess-zk-example-proprietary"/>
         </copy>
         <mkdir dir="dist/example"/>
         <copy todir="dist/example">
@@ -1078,8 +1254,10 @@
         <chmod dir="dist/script-engine" perm="a+x" includes="**/*.sh"/>
         <chmod dir="dist/example" perm="a+x" includes="**/*.sh"/>
         <chmod dir="dist/example-proprietary" perm="a+x" includes="**/*.sh"/>
-        <chmod dir="dist/multiprocess-example" perm="a+x" includes="**/*.sh"/>
-        <chmod dir="dist/multiprocess-example-proprietary" perm="a+x" includes="**/*.sh"/>
+        <chmod dir="dist/multiprocess-file-example" perm="a+x" includes="**/*.sh"/>
+        <chmod dir="dist/multiprocess-file-example-proprietary" perm="a+x" includes="**/*.sh"/>
+        <chmod dir="dist/multiprocess-zk-example" perm="a+x" includes="**/*.sh"/>
+        <chmod dir="dist/multiprocess-zk-example-proprietary" perm="a+x" includes="**/*.sh"/>
         <copy todir="dist">
             <fileset dir="dist-license" includes="*.txt"/>
         </copy>
@@ -1172,7 +1350,25 @@
         </condition>
     </target>
     
-    
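+    <!-- Registration targets for mapping connectors: they splice a mappingconnector entry into dist/connectors.xml (or connectors-proprietary.xml) at the "Add your mapping connectors here" placeholder, emitting it commented out when the connector is not runnable. -->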
+    <target name="general-add-mapping-connector-commented" depends="general-connector-runnable-check" unless="${connector-name}.is-runnable">
+        <replace file="dist/connectors.xml" token="&lt;!-- Add your mapping connectors here --&gt;" value="&lt;!-- Add your mapping connectors here --&gt;&#0010;  &lt;!--mappingconnector name=&quot;${connector-label}&quot; class=&quot;${connector-class}&quot;/--&gt;"/>
+    </target>
+
+    <target name="general-add-mapping-connector-non-commented" depends="general-connector-runnable-check" if="${connector-name}.is-runnable">
+        <replace file="dist/connectors.xml" token="&lt;!-- Add your mapping connectors here --&gt;" value="&lt;!-- Add your mapping connectors here --&gt;&#0010;  &lt;mappingconnector name=&quot;${connector-label}&quot; class=&quot;${connector-class}&quot;/&gt;"/>
+    </target>
+
+    <target name="general-add-mapping-connector-proprietary-commented" depends="general-connector-proprietary-runnable-check" unless="${connector-name}.is-proprietary-runnable">
+        <replace file="dist/connectors-proprietary.xml" token="&lt;!-- Add your mapping connectors here --&gt;" value="&lt;!-- Add your mapping connectors here --&gt;&#0010;  &lt;!--mappingconnector name=&quot;${connector-label}&quot; class=&quot;${connector-class}&quot;/--&gt;"/>
+    </target>
+
+    <target name="general-add-mapping-connector-proprietary-non-commented" depends="general-connector-proprietary-runnable-check" if="${connector-name}.is-proprietary-runnable">
+        <replace file="dist/connectors-proprietary.xml" token="&lt;!-- Add your mapping connectors here --&gt;" value="&lt;!-- Add your mapping connectors here --&gt;&#0010;  &lt;mappingconnector name=&quot;${connector-label}&quot; class=&quot;${connector-class}&quot;/&gt;"/>
+    </target>
+
+    <target name="general-add-mapping-connector" depends="general-add-mapping-connector-commented,general-add-mapping-connector-non-commented,general-add-mapping-connector-proprietary-commented,general-add-mapping-connector-proprietary-non-commented">
+    </target>
+
     <target name="general-add-authority-connector-commented" depends="general-connector-runnable-check" unless="${connector-name}.is-runnable">
         <replace file="dist/connectors.xml" token="&lt;!-- Add your authority connectors here --&gt;" value="&lt;!-- Add your authority connectors here --&gt;&#0010;  &lt;!--authorityconnector name=&quot;${connector-label}&quot; class=&quot;${connector-class}&quot;/--&gt;"/>
     </target>
@@ -1315,7 +1511,7 @@
     </target>
 
     <target name="calculate-alfresco-testmaterials-condition" depends="calculate-alfresco-condition,build-alfresco-connector-testmaterials">
-        <available file="connectors/alfresco/build/alfresco-war" type="dir" property="alfresco-testmaterials.exists"/>
+        <available file="connectors/alfresco/build/alfresco-4-war" type="dir" property="alfresco-testmaterials.exists"/>
         <condition property="alfresco-testmaterials.include">
           <and>
               <isset property="alfresco-testmaterials.exists"/>
@@ -1361,6 +1557,89 @@
             </and>
         </condition>
     </target>
+    
+    
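+    <!-- Inclusion conditions for the new connectors: each is delivered only when its dist/lib (or dist/doc for documentation) directory exists and the "downloaded" property is set. -->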
+    <target name="calculate-dropbox-condition" depends="build-dropbox-connector">
+        <available file="connectors/dropbox/dist/lib" type="dir" property="dropbox.exists"/>
+        <condition property="dropbox.include">
+            <and>
+                <isset property="dropbox.exists"/>
+                <isset property="downloaded"/>
+            </and>
+        </condition>
+    </target>
+
+    <target name="calculate-dropbox-doc-condition" depends="doc-dropbox-connector">
+        <available file="connectors/dropbox/dist/doc" type="dir" property="dropbox-doc.exists"/>
+        <condition property="dropbox-doc.include">
+            <and>
+                <isset property="dropbox-doc.exists"/>
+                <isset property="downloaded"/>
+            </and>
+        </condition>
+    </target>
+
+    <target name="calculate-generic-condition" depends="build-generic-connector">
+        <available file="connectors/generic/dist/lib" type="dir" property="generic.exists"/>
+        <condition property="generic.include">
+            <and>
+                <isset property="generic.exists"/>
+                <isset property="downloaded"/>
+            </and>
+        </condition>
+    </target>
+
+    <target name="calculate-generic-doc-condition" depends="doc-generic-connector">
+        <available file="connectors/generic/dist/doc" type="dir" property="generic-doc.exists"/>
+        <condition property="generic-doc.include">
+            <and>
+                <isset property="generic-doc.exists"/>
+                <isset property="downloaded"/>
+            </and>
+        </condition>
+    </target>
+
+    <target name="calculate-googledrive-condition" depends="build-googledrive-connector">
+        <available file="connectors/googledrive/dist/lib" type="dir" property="googledrive.exists"/>
+        <condition property="googledrive.include">
+            <and>
+                <isset property="googledrive.exists"/>
+                <isset property="downloaded"/>
+            </and>
+        </condition>
+    </target>
+
+    <target name="calculate-googledrive-doc-condition" depends="doc-googledrive-connector">
+        <available file="connectors/googledrive/dist/doc" type="dir" property="googledrive-doc.exists"/>
+        <condition property="googledrive-doc.include">
+            <and>
+                <isset property="googledrive-doc.exists"/>
+                <isset property="downloaded"/>
+            </and>
+        </condition>
+    </target>
+
+    <target name="calculate-jira-condition" depends="build-jira-connector">
+        <available file="connectors/jira/dist/lib" type="dir" property="jira.exists"/>
+        <condition property="jira.include">
+            <and>
+                <isset property="jira.exists"/>
+                <isset property="downloaded"/>
+            </and>
+        </condition>
+    </target>
+
+    <target name="calculate-jira-doc-condition" depends="doc-jira-connector">
+        <available file="connectors/jira/dist/doc" type="dir" property="jira-doc.exists"/>
+        <condition property="jira-doc.include">
+            <and>
+                <isset property="jira-doc.exists"/>
+                <isset property="downloaded"/>
+            </and>
+        </condition>
+    </target>
+
 
     <target name="calculate-cmis-doc-condition" depends="doc-cmis-connector">
         <available file="connectors/cmis/dist/doc" type="dir" property="cmis-doc.exists"/>
@@ -1388,6 +1667,85 @@
         </antcall>
     </target>
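+    <!-- Delivery targets for the new connectors: each invokes general-connector-delivery and then registers the connector's repository (and, where applicable, authority) classes in connectors.xml. -->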
 
+    <target name="deliver-dropbox-connector" depends="calculate-dropbox-condition" if="dropbox.include">
+        <antcall target="general-connector-delivery">
+            <param name="connector-name" value="dropbox"/>
+        </antcall>
+        <antcall target="general-add-repository-connector">
+            <param name="connector-name" value="dropbox"/>
+            <param name="connector-label" value="DropBox"/>
+            <param name="connector-class" value="org.apache.manifoldcf.crawler.connectors.dropbox.DropboxRepositoryConnector"/>
+        </antcall>
+    </target>
+    
+    <target name="deliver-dropbox-connector-doc" depends="calculate-dropbox-doc-condition" if="dropbox-doc.include">
+        <antcall target="general-connector-doc-delivery">
+            <param name="connector-name" value="dropbox"/>
+        </antcall>
+    </target>
+
+    <target name="deliver-generic-connector" depends="calculate-generic-condition" if="generic.include">
+        <antcall target="general-connector-delivery">
+            <param name="connector-name" value="generic"/>
+        </antcall>
+        <antcall target="general-add-repository-connector">
+            <param name="connector-name" value="generic"/>
+            <param name="connector-label" value="Generic"/>
+            <param name="connector-class" value="org.apache.manifoldcf.crawler.connectors.generic.GenericConnector"/>
+        </antcall>
+        <antcall target="general-add-authority-connector">
+            <param name="connector-name" value="generic"/>
+            <param name="connector-label" value="Generic"/>
+            <param name="connector-class" value="org.apache.manifoldcf.authorities.authorities.generic.GenericAuthority"/>
+        </antcall>
+    </target>
+    
+    <target name="deliver-generic-connector-doc" depends="calculate-generic-doc-condition" if="generic-doc.include">
+        <antcall target="general-connector-doc-delivery">
+            <param name="connector-name" value="generic"/>
+        </antcall>
+    </target>
+
+    <target name="deliver-googledrive-connector" depends="calculate-googledrive-condition" if="googledrive.include">
+        <antcall target="general-connector-delivery">
+            <param name="connector-name" value="googledrive"/>
+        </antcall>
+        <antcall target="general-add-repository-connector">
+            <param name="connector-name" value="googledrive"/>
+            <param name="connector-label" value="GoogleDrive"/>
+            <param name="connector-class" value="org.apache.manifoldcf.crawler.connectors.googledrive.GoogleDriveRepositoryConnector"/>
+        </antcall>
+    </target>
+
+    <target name="deliver-googledrive-connector-doc" depends="calculate-googledrive-doc-condition" if="googledrive-doc.include">
+        <antcall target="general-connector-doc-delivery">
+            <param name="connector-name" value="googledrive"/>
+        </antcall>
+    </target>
+
+    <target name="deliver-jira-connector" depends="calculate-jira-condition" if="jira.include">
+        <antcall target="general-connector-delivery">
+            <param name="connector-name" value="jira"/>
+        </antcall>
+        <antcall target="general-add-repository-connector">
+            <param name="connector-name" value="jira"/>
+            <param name="connector-label" value="Jira"/>
+            <param name="connector-class" value="org.apache.manifoldcf.crawler.connectors.jira.JiraRepositoryConnector"/>
+        </antcall>
+        <antcall target="general-add-authority-connector">
+            <param name="connector-name" value="jira"/>
+            <param name="connector-label" value="Jira"/>
+            <param name="connector-class" value="org.apache.manifoldcf.authorities.authorities.jira.JiraAuthorityConnector"/>
+        </antcall>
+    </target>
+
+    <target name="deliver-jira-connector-doc" depends="calculate-jira-doc-condition" if="jira-doc.include">
+        <antcall target="general-connector-doc-delivery">
+            <param name="connector-name" value="jira"/>
+        </antcall>
+    </target>
+
+
     <target name="deliver-cmis-connector-doc" depends="calculate-cmis-doc-condition" if="cmis-doc.include">
         <antcall target="general-connector-doc-delivery">
             <param name="connector-name" value="cmis"/>
@@ -1502,6 +1860,11 @@
             <param name="connector-label" value="File system"/>
             <param name="connector-class" value="org.apache.manifoldcf.crawler.connectors.filesystem.FileConnector"/>
         </antcall>
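+        <!-- CONNECTORS-696: the file system connector now also provides an output connector, registered here. -->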
+        <antcall target="general-add-output-connector">
+            <param name="connector-name" value="filesystem"/>
+            <param name="connector-label" value="File system"/>
+            <param name="connector-class" value="org.apache.manifoldcf.agents.output.filesystem.FileOutputConnector"/>
+        </antcall>
     </target>
     
     <target name="deliver-filesystem-connector-doc" depends="calculate-filesystem-doc-condition" if="filesystem-doc.include">
@@ -1557,6 +1920,48 @@
         </condition>
     </target>
 
+    <target name="calculate-hdfs-condition" depends="build-hdfs-connector">
+        <available file="connectors/hdfs/dist/lib" type="dir" property="hdfs.exists"/>
+        <condition property="hdfs.include">
+            <and>
+                <isset property="hdfs.exists"/>
+                <isset property="downloaded"/>
+            </and>
+        </condition>
+    </target>
+
+    <target name="calculate-hdfs-doc-condition" depends="doc-hdfs-connector">
+        <available file="connectors/hdfs/dist/doc" type="dir" property="hdfs-doc.exists"/>
+        <condition property="hdfs-doc.include">
+            <and>
+                <isset property="hdfs-doc.exists"/>
+                <isset property="downloaded"/>
+            </and>
+        </condition>
+    </target>
+
+    <target name="deliver-hdfs-connector" depends="calculate-hdfs-condition" if="hdfs.include">
+        <antcall target="general-connector-delivery">
+            <param name="connector-name" value="hdfs"/>
+        </antcall>
+        <antcall target="general-add-repository-connector">
+            <param name="connector-name" value="hdfs"/>
+            <param name="connector-label" value="HDFS"/>
+            <param name="connector-class" value="org.apache.manifoldcf.crawler.connectors.hdfs.HDFSRepositoryConnector"/>
+        </antcall>
+        <antcall target="general-add-output-connector">
+            <param name="connector-name" value="hdfs"/>
+            <param name="connector-label" value="HDFS"/>
+            <param name="connector-class" value="org.apache.manifoldcf.agents.output.hdfs.HDFSOutputConnector"/>
+        </antcall>
+    </target>
+
+    <target name="deliver-hdfs-connector-doc" depends="calculate-hdfs-doc-condition" if="hdfs-doc.include">
+        <antcall target="general-connector-doc-delivery">
+            <param name="connector-name" value="hdfs"/>
+        </antcall>
+    </target>
+
     <target name="calculate-jdbc-doc-condition" depends="doc-jdbc-connector">
         <available file="connectors/jdbc/dist/doc" type="dir" property="jdbc-doc.exists"/>
         <condition property="jdbc-doc.include">
@@ -1992,6 +2397,43 @@
         </antcall>
     </target>
 
+    <target name="calculate-regexpmapper-condition" depends="build-regexp-mapper">
+        <available file="connectors/regexpmapper/dist/lib" type="dir" property="regexpmapper.exists"/>
+        <condition property="regexpmapper.include">
+            <and>
+                <isset property="regexpmapper.exists"/>
+                <isset property="downloaded"/>
+            </and>
+        </condition>
+    </target>
+
+    <target name="calculate-regexpmapper-doc-condition" depends="doc-regexp-mapper">
+        <available file="connectors/regexpmapper/dist/doc" type="dir" property="regexpmapper-doc.exists"/>
+        <condition property="regexpmapper-doc.include">
+            <and>
+                <isset property="regexpmapper-doc.exists"/>
+                <isset property="downloaded"/>
+            </and>
+        </condition>
+    </target>
+
+    <target name="deliver-regexp-mapper" depends="calculate-regexpmapper-condition" if="regexpmapper.include">
+        <antcall target="general-connector-delivery">
+            <param name="connector-name" value="regexpmapper"/>
+        </antcall>
+        <antcall target="general-add-mapping-connector">
+            <param name="connector-name" value="regexpmapper"/>
+            <param name="connector-label" value="Regular expression mapper"/>
+            <param name="connector-class" value="org.apache.manifoldcf.authorities.mappers.regexp.RegexpMapper"/>
+        </antcall>
+    </target>
+    
+    <target name="deliver-regexp-mapper-doc" depends="calculate-regexpmapper-doc-condition" if="regexpmapper-doc.include">
+        <antcall target="general-connector-doc-delivery">
+            <param name="connector-name" value="regexpmapper"/>
+        </antcall>
+    </target>
+
     <target name="calculate-sharepoint-condition" depends="build-sharepoint-connector">
         <available file="connectors/sharepoint/dist/lib" type="dir" property="sharepoint.exists"/>
         <condition property="sharepoint.include">
@@ -2021,6 +2463,16 @@
             <param name="connector-label" value="SharePoint"/>
             <param name="connector-class" value="org.apache.manifoldcf.crawler.connectors.sharepoint.SharePointRepository"/>
         </antcall>
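+        <!-- Register the two new SharePoint authority connectors (SharePoint/ActiveDirectory and SharePoint/Native) alongside the repository connector. -->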
+        <antcall target="general-add-authority-connector">
+            <param name="connector-name" value="sharepoint"/>
+            <param name="connector-label" value="SharePoint/ActiveDirectory"/>
+            <param name="connector-class" value="org.apache.manifoldcf.authorities.authorities.sharepoint.SharePointADAuthority"/>
+        </antcall>
+        <antcall target="general-add-authority-connector">
+            <param name="connector-name" value="sharepoint"/>
+            <param name="connector-label" value="SharePoint/Native"/>
+            <param name="connector-class" value="org.apache.manifoldcf.authorities.authorities.sharepoint.SharePointAuthority"/>
+        </antcall>
     </target>
     
     <target name="deliver-sharepoint-connector-doc" depends="calculate-sharepoint-doc-condition" if="sharepoint-doc.include">
@@ -2103,6 +2555,43 @@
         </antcall>
     </target>
 
+    <target name="calculate-email-condition" depends="build-email-connector">
+        <available file="connectors/email/dist/lib" type="dir" property="email.exists"/>
+        <condition property="email.include">
+            <and>
+                <isset property="email.exists"/>
+                <isset property="downloaded"/>
+            </and>
+        </condition>
+    </target>
+
+    <target name="calculate-email-doc-condition" depends="doc-email-connector">
+        <available file="connectors/email/dist/doc" type="dir" property="email-doc.exists"/>
+        <condition property="email-doc.include">
+            <and>
+                <isset property="email-doc.exists"/>
+                <isset property="downloaded"/>
+            </and>
+        </condition>
+    </target>
+
+    <target name="deliver-email-connector" depends="calculate-email-condition" if="email.include">
+        <antcall target="general-connector-delivery">
+            <param name="connector-name" value="email"/>
+        </antcall>
+        <antcall target="general-add-repository-connector">
+            <param name="connector-name" value="email"/>
+            <param name="connector-label" value="EMail"/>
+            <param name="connector-class" value="org.apache.manifoldcf.crawler.connectors.email.EmailConnector"/>
+        </antcall>
+    </target>
+    
+    <target name="deliver-email-connector-doc" depends="calculate-email-doc-condition" if="email-doc.include">
+        <antcall target="general-connector-doc-delivery">
+            <param name="connector-name" value="email"/>
+        </antcall>
+    </target>
+
     <target name="calculate-filesystem-tests-condition" depends="calculate-filesystem-condition,calculate-nulloutput-condition">
       <condition property="filesystem-tests.include">
         <and>
@@ -2112,7 +2601,16 @@
       </condition>
     </target>
 
-    <target name="calculate-jcifs-tests-condition" depends="calculate-jcifs-condition,calculate-nulloutput-condition">
+    <target name="calculate-hdfs-tests-condition" depends="calculate-hdfs-condition,calculate-nulloutput-condition">
+      <condition property="hdfs-tests.include">
+        <and>
+            <isset property="hdfs.include"/>
+            <isset property="nulloutput.include"/>
+        </and>
+      </condition>
+    </target>
+
+    <target name="calculate-jcifs-tests-condition" depends="calculate-jcifs-condition,calculate-nulloutput-condition">
       <condition property="jcifs-tests.include">
         <and>
             <isset property="jcifs.include"/>
@@ -2247,6 +2745,10 @@
         <ant dir="tests/filesystem" target="run-load-derby"/>
     </target>
 
+    <target name="run-hdfs-UI-tests-derby" depends="build-tests-framework,build-tests-hdfs-connector,build-tests-nulloutput-connector,calculate-hdfs-tests-condition" if="hdfs-tests.include">
+        <ant dir="tests/hdfs" target="run-UI-derby"/>
+    </target>
+
     <target name="run-jcifs-UI-tests-derby" depends="build-tests-framework,build-tests-jcifs-connector,build-tests-nulloutput-connector,calculate-jcifs-tests-condition" if="jcifs-tests.include">
         <ant dir="tests/jcifs" target="run-UI-derby"/>
     </target>
@@ -2307,6 +2809,10 @@
         <ant dir="tests/webcrawler" target="run-postgresql"/>
     </target>
 
+    <target name="run-webcrawler-loadtests-derby" depends="build-tests-framework,build-tests-webcrawler-connector,build-tests-nulloutput-connector,calculate-webcrawler-tests-condition" if="webcrawler-tests.include">
+        <ant dir="tests/webcrawler" target="run-load-derby"/>
+    </target>
+
     <target name="run-webcrawler-loadtests-postgresql" depends="build-tests-framework,build-tests-webcrawler-connector,build-tests-nulloutput-connector,calculate-webcrawler-tests-condition" if="webcrawler-tests.include">
         <ant dir="tests/webcrawler" target="run-load-postgresql"/>
     </target>
@@ -2387,6 +2893,22 @@
         <ant dir="tests/filesystem" target="run-load-mysql"/>
     </target>
 
+    <target name="run-hdfs-tests-postgresql" depends="build-tests-framework,build-tests-hdfs-connector,build-tests-nulloutput-connector,calculate-hdfs-tests-condition" if="hdfs-tests.include">
+        <ant dir="tests/hdfs" target="run-postgresql"/>
+    </target>
+
+    <target name="run-hdfs-tests-mysql" depends="build-tests-framework,build-tests-hdfs-connector,build-tests-nulloutput-connector,calculate-hdfs-tests-condition" if="hdfs-tests.include">
+        <ant dir="tests/hdfs" target="run-mysql"/>
+    </target>
+
+    <target name="run-hdfs-loadtests-postgresql" depends="build-tests-framework,build-tests-hdfs-connector,build-tests-nulloutput-connector,calculate-hdfs-tests-condition" if="hdfs-tests.include">
+        <ant dir="tests/hdfs" target="run-load-postgresql"/>
+    </target>
+
+    <target name="run-hdfs-loadtests-mysql" depends="build-tests-framework,build-tests-hdfs-connector,build-tests-nulloutput-connector,calculate-hdfs-tests-condition" if="hdfs-tests.include">
+        <ant dir="tests/hdfs" target="run-load-mysql"/>
+    </target>
+
     <target name="run-wiki-tests-postgresql" depends="build-tests-framework,build-tests-wiki-connector,build-tests-nulloutput-connector,calculate-wiki-tests-condition" if="wiki-tests.include">
         <ant dir="tests/wiki" target="run-postgresql"/>
     </target>
@@ -2451,6 +2973,18 @@
         <ant dir="tests/filesystem" target="run-load-HSQLDB"/>
     </target>
 
+    <target name="run-hdfs-tests-HSQLDB" depends="build-tests-framework,build-tests-hdfs-connector,build-tests-nulloutput-connector,calculate-hdfs-tests-condition" if="hdfs-tests.include">
+        <ant dir="tests/hdfs" target="run-HSQLDB"/>
+    </target>
+
+    <target name="run-hdfs-UI-tests-HSQLDB" depends="build-tests-framework,build-tests-hdfs-connector,build-tests-nulloutput-connector,calculate-hdfs-tests-condition" if="hdfs-tests.include">
+        <ant dir="tests/hdfs" target="run-UI-HSQLDB"/>
+    </target>
+
+    <target name="run-hdfs-loadtests-HSQLDB" depends="build-tests-framework,build-tests-hdfs-connector,build-tests-nulloutput-connector,calculate-hdfs-tests-condition" if="hdfs-tests.include">
+        <ant dir="tests/hdfs" target="run-load-HSQLDB"/>
+    </target>
+
     <target name="run-wiki-tests-HSQLDB" depends="build-tests-framework,build-tests-wiki-connector,build-tests-nulloutput-connector,calculate-wiki-tests-condition" if="wiki-tests.include">
         <ant dir="tests/wiki" target="run-HSQLDB"/>
     </target>
@@ -2499,63 +3033,65 @@
         <ant dir="tests/sharepoint" target="run-load-HSQLDB"/>
     </target>
 
-    <target name="run-tests-open-connectors" depends="run-tests-activedirectory-connector,run-tests-ldap-connector,run-tests-alfresco-connector,run-tests-cmis-connector,run-tests-filesystem-connector,run-tests-nullauthority-connector,run-tests-nulloutput-connector,run-tests-rss-connector,run-tests-solr-connector,run-tests-webcrawler-connector,run-tests-wiki-connector,run-tests-jdbc-connector"/>
+    <target name="run-tests-open-connectors" depends="run-tests-activedirectory-connector,run-tests-ldap-connector,run-tests-alfresco-connector,run-tests-cmis-connector,run-tests-filesystem-connector,run-tests-nullauthority-connector,run-tests-nulloutput-connector,run-tests-rss-connector,run-tests-solr-connector,run-tests-webcrawler-connector,run-tests-wiki-connector,run-tests-jdbc-connector,run-tests-hdfs-connector"/>
     <target name="run-tests-lgpl-connectors" depends="run-tests-jcifs-connector"/>
     <target name="run-tests-proprietary-connectors" depends="run-tests-documentum-connector,run-tests-filenet-connector,run-tests-livelink-connector,run-tests-memex-connector,run-tests-meridio-connector,run-tests-sharepoint-connector"/>
 
-    <target name="run-tests-derby-open-connectors" depends="run-tests-derby-activedirectory-connector,run-tests-derby-ldap-connector,run-tests-derby-alfresco-connector,run-tests-derby-cmis-connector,run-tests-derby-filesystem-connector,run-tests-derby-nullauthority-connector,run-tests-derby-nulloutput-connector,run-tests-derby-rss-connector,run-tests-derby-solr-connector,run-tests-derby-webcrawler-connector,run-tests-derby-wiki-connector,run-tests-derby-jdbc-connector"/>
+    <target name="run-tests-derby-open-connectors" depends="run-tests-derby-activedirectory-connector,run-tests-derby-ldap-connector,run-tests-derby-alfresco-connector,run-tests-derby-cmis-connector,run-tests-derby-filesystem-connector,run-tests-derby-hdfs-connector,run-tests-derby-nullauthority-connector,run-tests-derby-nulloutput-connector,run-tests-derby-rss-connector,run-tests-derby-solr-connector,run-tests-derby-webcrawler-connector,run-tests-derby-wiki-connector,run-tests-derby-jdbc-connector"/>
     <target name="run-tests-derby-lgpl-connectors" depends="run-tests-derby-jcifs-connector"/>
     <target name="run-tests-derby-proprietary-connectors" depends="run-tests-derby-documentum-connector,run-tests-derby-filenet-connector,run-tests-derby-livelink-connector,run-tests-derby-memex-connector,run-tests-derby-meridio-connector,run-tests-derby-sharepoint-connector"/>
     
     <target name="end-to-end-tests-derby" depends="run-filesystem-tests-derby,run-webcrawler-tests-derby,run-rss-tests-derby,run-solr-tests-derby,run-wiki-tests-derby,run-alfresco-tests-derby,run-cmis-tests-derby,run-sharepoint-tests-derby"/>
 
-    <target name="run-tests-postgresql-open-connectors" depends="run-tests-postgresql-activedirectory-connector,run-tests-postgresql-ldap-connector,run-tests-postgresql-alfresco-connector,run-tests-postgresql-cmis-connector,run-tests-postgresql-filesystem-connector,run-tests-postgresql-nullauthority-connector,run-tests-postgresql-nulloutput-connector,run-tests-postgresql-rss-connector,run-tests-postgresql-solr-connector,run-tests-postgresql-webcrawler-connector,run-tests-postgresql-wiki-connector,run-tests-postgresql-jdbc-connector,run-tests-postgresql-opensearchserver-connector,run-tests-postgresql-elasticsearch-connector"/>
+    <target name="run-tests-postgresql-open-connectors" depends="run-tests-postgresql-activedirectory-connector,run-tests-postgresql-ldap-connector,run-tests-postgresql-alfresco-connector,run-tests-postgresql-cmis-connector,run-tests-postgresql-filesystem-connector,run-tests-postgresql-hdfs-connector,run-tests-postgresql-nullauthority-connector,run-tests-postgresql-nulloutput-connector,run-tests-postgresql-rss-connector,run-tests-postgresql-solr-connector,run-tests-postgresql-webcrawler-connector,run-tests-postgresql-wiki-connector,run-tests-postgresql-jdbc-connector,run-tests-postgresql-opensearchserver-connector,run-tests-postgresql-elasticsearch-connector"/>
     <target name="run-tests-postgresql-lgpl-connectors" depends="run-tests-postgresql-jcifs-connector"/>
     <target name="run-tests-postgresql-proprietary-connectors" depends="run-tests-postgresql-documentum-connector,run-tests-postgresql-filenet-connector,run-tests-postgresql-livelink-connector,run-tests-postgresql-memex-connector,run-tests-postgresql-meridio-connector,run-tests-postgresql-sharepoint-connector"/>
     
-    <target name="end-to-end-tests-postgresql" depends="run-filesystem-tests-postgresql,run-webcrawler-tests-postgresql,run-wiki-tests-postgresql,run-alfresco-tests-postgresql,run-cmis-tests-postgresql,run-sharepoint-tests-postgresql"/>
+    <target name="end-to-end-tests-postgresql" depends="run-filesystem-tests-postgresql,run-hdfs-tests-postgresql,run-webcrawler-tests-postgresql,run-wiki-tests-postgresql,run-alfresco-tests-postgresql,run-cmis-tests-postgresql,run-sharepoint-tests-postgresql"/>
 
-    <target name="run-tests-mysql-open-connectors" depends="run-tests-mysql-activedirectory-connector,run-tests-mysql-ldap-connector,run-tests-mysql-alfresco-connector,run-tests-mysql-cmis-connector,run-tests-mysql-filesystem-connector,run-tests-mysql-nullauthority-connector,run-tests-mysql-nulloutput-connector,run-tests-mysql-rss-connector,run-tests-mysql-solr-connector,run-tests-mysql-webcrawler-connector,run-tests-mysql-wiki-connector,run-tests-mysql-jdbc-connector,run-tests-mysql-opensearchserver-connector,run-tests-mysql-elasticsearch-connector"/>
+    <target name="run-tests-mysql-open-connectors" depends="run-tests-mysql-activedirectory-connector,run-tests-mysql-ldap-connector,run-tests-mysql-alfresco-connector,run-tests-mysql-cmis-connector,run-tests-mysql-filesystem-connector,run-tests-mysql-hdfs-connector,run-tests-mysql-nullauthority-connector,run-tests-mysql-nulloutput-connector,run-tests-mysql-rss-connector,run-tests-mysql-solr-connector,run-tests-mysql-webcrawler-connector,run-tests-mysql-wiki-connector,run-tests-mysql-jdbc-connector,run-tests-mysql-opensearchserver-connector,run-tests-mysql-elasticsearch-connector"/>
     <target name="run-tests-mysql-lgpl-connectors" depends="run-tests-mysql-jcifs-connector"/>
     <target name="run-tests-mysql-proprietary-connectors" depends="run-tests-mysql-documentum-connector,run-tests-mysql-filenet-connector,run-tests-mysql-livelink-connector,run-tests-mysql-memex-connector,run-tests-mysql-meridio-connector,run-tests-mysql-sharepoint-connector"/>
     
-    <target name="end-to-end-tests-mysql" depends="run-filesystem-tests-mysql,run-webcrawler-tests-mysql,run-wiki-tests-mysql,run-alfresco-tests-mysql,run-cmis-tests-mysql,run-sharepoint-tests-mysql"/>
+    <target name="end-to-end-tests-mysql" depends="run-filesystem-tests-mysql,run-hdfs-tests-mysql,run-webcrawler-tests-mysql,run-wiki-tests-mysql,run-alfresco-tests-mysql,run-cmis-tests-mysql,run-sharepoint-tests-mysql"/>
 
-    <target name="run-tests-HSQLDB-open-connectors" depends="run-tests-HSQLDB-activedirectory-connector,run-tests-HSQLDB-ldap-connector,run-tests-HSQLDB-alfresco-connector,run-tests-HSQLDB-cmis-connector,run-tests-HSQLDB-filesystem-connector,run-tests-HSQLDB-nullauthority-connector,run-tests-HSQLDB-nulloutput-connector,run-tests-HSQLDB-rss-connector,run-tests-HSQLDB-solr-connector,run-tests-HSQLDB-webcrawler-connector,run-tests-HSQLDB-wiki-connector,run-tests-HSQLDB-jdbc-connector,run-tests-HSQLDB-opensearchserver-connector,run-tests-HSQLDB-elasticsearch-connector"/>
+    <target name="run-tests-HSQLDB-open-connectors" depends="run-tests-HSQLDB-activedirectory-connector,run-tests-HSQLDB-ldap-connector,run-tests-HSQLDB-alfresco-connector,run-tests-HSQLDB-cmis-connector,run-tests-HSQLDB-filesystem-connector,run-tests-HSQLDB-hdfs-connector,run-tests-HSQLDB-nullauthority-connector,run-tests-HSQLDB-nulloutput-connector,run-tests-HSQLDB-rss-connector,run-tests-HSQLDB-solr-connector,run-tests-HSQLDB-webcrawler-connector,run-tests-HSQLDB-wiki-connector,run-tests-HSQLDB-jdbc-connector,run-tests-HSQLDB-opensearchserver-connector,run-tests-HSQLDB-elasticsearch-connector"/>
     <target name="run-tests-HSQLDB-lgpl-connectors" depends="run-tests-HSQLDB-jcifs-connector"/>
     <target name="run-tests-HSQLDB-proprietary-connectors" depends="run-tests-HSQLDB-documentum-connector,run-tests-HSQLDB-filenet-connector,run-tests-HSQLDB-livelink-connector,run-tests-HSQLDB-memex-connector,run-tests-HSQLDB-meridio-connector,run-tests-HSQLDB-sharepoint-connector"/>
     
-    <target name="end-to-end-tests-HSQLDB" depends="run-filesystem-tests-HSQLDB,run-rss-tests-HSQLDB,run-wiki-tests-HSQLDB,run-alfresco-tests-HSQLDB,run-cmis-tests-HSQLDB,run-sharepoint-tests-HSQLDB"/>
+    <target name="end-to-end-tests-HSQLDB" depends="run-filesystem-tests-HSQLDB,run-hdfs-tests-HSQLDB,run-rss-tests-HSQLDB,run-wiki-tests-HSQLDB,run-alfresco-tests-HSQLDB,run-cmis-tests-HSQLDB,run-sharepoint-tests-HSQLDB"/>
 
     <target name="end-to-end-loadtests-derby" depends="run-filesystem-loadtests-derby,run-rss-loadtests-derby,run-wiki-loadtests-derby,run-alfresco-loadtests-derby,run-cmis-loadtests-derby,run-sharepoint-loadtests-derby"/>
 
-    <target name="end-to-end-loadtests-postgresql" depends="run-filesystem-loadtests-postgresql,run-rss-loadtests-postgresql,run-wiki-loadtests-postgresql,run-alfresco-loadtests-postgresql,run-cmis-loadtests-postgresql,run-sharepoint-loadtests-postgresql"/>
+    <target name="end-to-end-loadtests-postgresql" depends="run-filesystem-loadtests-postgresql,run-hdfs-loadtests-postgresql,run-rss-loadtests-postgresql,run-wiki-loadtests-postgresql,run-alfresco-loadtests-postgresql,run-cmis-loadtests-postgresql,run-sharepoint-loadtests-postgresql"/>
 
-    <target name="end-to-end-loadtests-mysql" depends="run-filesystem-loadtests-mysql,run-rss-loadtests-mysql,run-wiki-loadtests-mysql,run-alfresco-loadtests-mysql,run-cmis-loadtests-mysql,run-sharepoint-loadtests-mysql"/>
+    <target name="end-to-end-loadtests-mysql" depends="run-filesystem-loadtests-mysql,run-hdfs-loadtests-mysql,run-rss-loadtests-mysql,run-wiki-loadtests-mysql,run-alfresco-loadtests-mysql,run-cmis-loadtests-mysql,run-sharepoint-loadtests-mysql"/>
 
-    <target name="end-to-end-loadtests-HSQLDB" depends="run-filesystem-loadtests-HSQLDB,run-rss-loadtests-HSQLDB,run-wiki-loadtests-HSQLDB,run-alfresco-loadtests-HSQLDB,run-cmis-loadtests-HSQLDB,run-sharepoint-loadtests-HSQLDB"/>
+    <target name="end-to-end-loadtests-HSQLDB" depends="run-filesystem-loadtests-HSQLDB,run-hdfs-loadtests-HSQLDB,run-rss-loadtests-HSQLDB,run-wiki-loadtests-HSQLDB,run-alfresco-loadtests-HSQLDB,run-cmis-loadtests-HSQLDB,run-sharepoint-loadtests-HSQLDB"/>
 
+    <target name="deliver-open-connectors" depends="deliver-email-connector,deliver-generic-connector,deliver-jira-connector,deliver-googledrive-connector,deliver-dropbox-connector,deliver-nullauthority-connector,deliver-activedirectory-connector,deliver-ldap-connector,deliver-alfresco-connector,deliver-cmis-connector,deliver-filesystem-connector,deliver-hdfs-connector,deliver-rss-connector,deliver-webcrawler-connector,deliver-wiki-connector,deliver-jdbc-connector"/>
+    <target name="deliver-open-connectors-doc" depends="deliver-email-connector-doc,deliver-generic-connector-doc,deliver-jira-connector-doc,deliver-googledrive-connector-doc,deliver-dropbox-connector-doc,deliver-nullauthority-connector-doc,deliver-activedirectory-connector-doc,deliver-ldap-connector-doc,deliver-alfresco-connector-doc,deliver-cmis-connector-doc,deliver-filesystem-connector-doc,deliver-hdfs-connector-doc,deliver-rss-connector-doc,deliver-webcrawler-connector-doc,deliver-wiki-connector-doc,deliver-jdbc-connector-doc"/>
 
-    <target name="deliver-open-connectors" depends="deliver-nullauthority-connector,deliver-activedirectory-connector,deliver-ldap-connector,deliver-alfresco-connector,deliver-cmis-connector,deliver-filesystem-connector,deliver-rss-connector,deliver-webcrawler-connector,deliver-wiki-connector,deliver-jdbc-connector"/>
-    <target name="deliver-open-connectors-doc" depends="deliver-nullauthority-connector-doc,deliver-activedirectory-connector-doc,deliver-ldap-connector-doc,deliver-alfresco-connector-doc,deliver-cmis-connector-doc,deliver-filesystem-connector-doc,deliver-rss-connector-doc,deliver-webcrawler-connector-doc,deliver-wiki-connector-doc,deliver-jdbc-connector-doc"/>
-    
     <target name="deliver-output-connectors" depends="deliver-gts-connector,deliver-solr-connector,deliver-nulloutput-connector,deliver-opensearchserver-connector,deliver-elasticsearch-connector"/>
     <target name="deliver-output-connectors-doc" depends="deliver-gts-connector-doc,deliver-solr-connector-doc,deliver-nulloutput-connector-doc,deliver-opensearchserver-connector-doc,deliver-elasticsearch-connector-doc"/>
     
+    <target name="deliver-mapping-connectors" depends="deliver-regexp-mapper"/>
+    <target name="deliver-mapping-connectors-doc" depends="deliver-regexp-mapper-doc"/>
+    
     <target name="deliver-lgpl-connectors" depends="deliver-jcifs-connector"/>
     <target name="deliver-lgpl-connectors-doc" depends="deliver-jcifs-connector-doc"/>
     
     <target name="deliver-proprietary-connectors" depends="deliver-documentum-connector,deliver-filenet-connector,deliver-livelink-connector,deliver-memex-connector,deliver-meridio-connector,deliver-sharepoint-connector"/>
     <target name="deliver-proprietary-connectors-doc" depends="deliver-documentum-connector-doc,deliver-filenet-connector-doc,deliver-livelink-connector-doc,deliver-memex-connector-doc,deliver-meridio-connector-doc,deliver-sharepoint-connector-doc"/>
 
-    <target name="build" depends="deliver-framework,deliver-open-connectors,deliver-output-connectors,deliver-lgpl-connectors,deliver-proprietary-connectors"/>
+    <target name="build" depends="deliver-framework,deliver-open-connectors,deliver-output-connectors,deliver-mapping-connectors,deliver-lgpl-connectors,deliver-proprietary-connectors"/>
     <target name="tmpclean" depends="cleanup-afterbuild"/>
     <target name="buildcln" depends="build,tmpclean"/>
-    <target name="javadoc" depends="deliver-framework-doc,deliver-open-connectors-doc,deliver-output-connectors-doc,deliver-lgpl-connectors-doc,deliver-proprietary-connectors-doc"/>
+    <target name="javadoc" depends="deliver-framework-doc,deliver-open-connectors-doc,deliver-output-connectors-doc,deliver-mapping-connectors-doc,deliver-lgpl-connectors-doc,deliver-proprietary-connectors-doc"/>
     <target name="doc" depends="deliver-site-doc"/>
     
     <target name="set-version">
-      <property name="release-version" value="1.2-dev"/>
+      <property name="release-version" value="1.5-dev"/>
     </target>
     
     <target name="create-source-zip" depends="set-version">
@@ -2646,7 +3182,8 @@
           <exclude name="connector-lib-proprietary/*-PLACEHOLDER.txt"/>
           <exclude name="connectors-proprietary.xml"/>
           <exclude name="/example-proprietary/"/>
-          <exclude name="/multiprocess-example-proprietary/"/>
+          <exclude name="/multiprocess-file-example-proprietary/"/>
+          <exclude name="/multiprocess-zk-example-proprietary/"/>
           <exclude name="/web-proprietary/"/>
         </zipfileset>
       </zip>
@@ -2660,7 +3197,8 @@
           <exclude name="connector-lib-proprietary/*-PLACEHOLDER.txt"/>
           <exclude name="connectors-proprietary.xml"/>
           <exclude name="/example-proprietary/"/>
-          <exclude name="/multiprocess-example-proprietary/"/>
+          <exclude name="/multiprocess-file-example-proprietary/"/>
+          <exclude name="/multiprocess-zk-example-proprietary/"/>
           <exclude name="/web-proprietary/"/>
         </tarfileset>
       </tar>
@@ -2688,7 +3226,7 @@
 
     <target name="ldtest" depends="load-dr,load-hs"/>
 
-    <target name="uitest" depends="run-filesystem-UI-tests-derby,run-filesystem-UI-tests-HSQLDB,run-jcifs-UI-tests-derby,run-jdbc-UI-tests-derby,run-activedirectory-UI-tests-derby,run-ldap-UI-tests-derby,run-rss-UI-tests-derby,run-webcrawler-UI-tests-derby,run-wiki-UI-tests-derby,run-solr-UI-tests-derby,run-cmis-UI-tests-derby,run-gts-UI-tests-derby,run-opensearchserver-UI-tests-derby"/>
+    <target name="uitest" depends="run-filesystem-UI-tests-derby,run-hdfs-UI-tests-derby,run-filesystem-UI-tests-HSQLDB,run-hdfs-UI-tests-HSQLDB,run-jcifs-UI-tests-derby,run-jdbc-UI-tests-derby,run-activedirectory-UI-tests-derby,run-ldap-UI-tests-derby,run-rss-UI-tests-derby,run-webcrawler-UI-tests-derby,run-wiki-UI-tests-derby,run-solr-UI-tests-derby,run-cmis-UI-tests-derby,run-gts-UI-tests-derby,run-opensearchserver-UI-tests-derby"/>
     
     <target name="all" depends="build,javadoc,doc,image,test-dr,test-hs"/>
 
@@ -2830,14 +3368,14 @@
 
         <antcall target="download-via-maven">
             <param name="project-path" value="org/apache/httpcomponents"/>
-            <param name="artifact-version" value="4.2.4"/>
+            <param name="artifact-version" value="4.2.5"/>
             <param name="target" value="lib"/>
             <param name="artifact-name" value="httpcore"/>
             <param name="artifact-type" value="jar"/>
         </antcall>
         <antcall target="download-via-maven">
             <param name="project-path" value="org/apache/httpcomponents"/>
-            <param name="artifact-version" value="4.2.4"/>
+            <param name="artifact-version" value="4.2.6"/>
             <param name="target" value="lib"/>
             <param name="artifact-name" value="httpclient"/>
             <param name="artifact-type" value="jar"/>
@@ -2845,7 +3383,7 @@
         <antcall target="download-via-maven">
             <param name="target" value="lib"/>
             <param name="project-path" value="org/apache/httpcomponents"/>
-            <param name="artifact-version" value="4.2.4"/>
+            <param name="artifact-version" value="4.2.6"/>
             <param name="artifact-name" value="httpmime"/>
             <param name="artifact-type" value="jar"/>
         </antcall>
@@ -2864,7 +3402,7 @@
     
     <target name="download-derby">
         <mkdir dir="lib"/>
-        <property name="derby-version" value="10.8.2.2"/>
+        <property name="derby-version" value="10.10.1.1"/>
         <property name="derby-package" value="org/apache/derby"/>
         <antcall target="download-via-maven"><param name="project-path" value="${derby-package}"/><param name="artifact-version" value="${derby-version}"/><param name="target" value="lib"/>
             <param name="artifact-name" value="derby"/>
@@ -2947,14 +3485,14 @@
             <param name="target" value="hsqldb"/>
         </antcall>
         <copy todir="lib" file="build/download/hsqldb/lib/hsqldb.jar"/ -->
-        <!-- antcall target="download-via-mvn">
+        <antcall target="download-via-maven">
             <param name="project-path" value="org/hsqldb"/>
-            <param name="artifact-version" value="2.2.9-snapshot"/>
+            <param name="artifact-version" value="2.3.1"/>
             <param name="target" value="lib"/>
             <param name="artifact-name" value="hsqldb"/>
             <param name="artifact-type" value="jar"/>
-        </antcall -->
-        <get src="http://www.hsqldb.org/repos/org/hsqldb/hsqldb/2.2.9/hsqldb-2.2.9.jar" dest="lib/hsqldb.jar"/>
+        </antcall>
+        <!-- get src="http://www.hsqldb.org/repos/org/hsqldb/hsqldb/2.2.9/hsqldb-2.2.9.jar" dest="lib/hsqldb.jar"/-->
     </target>
     
     <target name="download-postgresql">
@@ -3085,6 +3623,12 @@
             <param name="artifact-name" value="commons-logging"/>
             <param name="artifact-type" value="jar"/>
         </antcall>
+        <antcall target="download-via-maven"><param name="target" value="lib"/>
+            <param name="project-path" value="commons-configuration"/>
+            <param name="artifact-version" value="1.6"/>
+            <param name="artifact-name" value="commons-configuration"/>
+            <param name="artifact-type" value="jar"/>
+        </antcall>
     </target>
   
     <target name="download-slf4j">
@@ -3312,7 +3856,7 @@
     
     <target name="download-chemistry">
         <mkdir dir="lib"/>
-        <property name="chemistry-version" value="0.8.0"/>
+        <property name="chemistry-version" value="0.9.0"/>
         <property name="chemistry-package" value="org/apache/chemistry/opencmis"/>
         <antcall target="download-via-maven"><param name="project-path" value="${chemistry-package}"/><param name="artifact-version" value="${chemistry-version}"/><param name="target" value="lib"/>
             <param name="artifact-name" value="chemistry-opencmis-client-impl"/>
@@ -3520,57 +4064,179 @@
         <mkdir dir="lib/elasticsearch"/>
         <!-- Download and unpack binary artifact -->
         <mkdir dir="build/download"/>
-        <get src="http://archive.apache.org/dist/manifoldcf/apache-manifoldcf-elasticsearch-plugin-0.1-bin.zip" dest="build/download/apache-manifoldcf-elasticsearch-plugin-bin.zip"/>
+        <get src="http://archive.apache.org/dist/manifoldcf/apache-manifoldcf-elasticsearch-plugin-1.1-bin.zip" dest="build/download/apache-manifoldcf-elasticsearch-plugin-bin.zip"/>
         <unzip src="build/download/apache-manifoldcf-elasticsearch-plugin-bin.zip" dest="build/download/apache-manifoldcf-elasticsearch-plugin-bin"/>
         <copy todir="lib/elasticsearch">
-            <fileset dir="build/download/apache-manifoldcf-elasticsearch-plugin-bin/elasticsearch-plugin-mcf-0.1"/>
+            <fileset dir="build/download/apache-manifoldcf-elasticsearch-plugin-bin/elasticsearch-plugin-mcf-1.1"/>
         </copy>
     </target>
-    
+
+    <target name="download-dropbox-client">
+        <mkdir dir="lib"/>
+        <antcall target="download-via-maven">
+            <param name="target" value="lib"/>
+            <param name="project-path" value="org/syncloud"/>
+            <param name="artifact-version" value="1.5.3"/>
+            <param name="artifact-name" value="dropbox-client"/>
+            <param name="artifact-type" value="jar"/>
+        </antcall>
+        <antcall target="download-via-maven">
+            <param name="target" value="lib"/>
+            <param name="project-path" value="com/googlecode/json-simple"/>
+            <param name="artifact-version" value="1.1"/>
+            <param name="artifact-name" value="json-simple"/>
+            <param name="artifact-type" value="jar"/>
+        </antcall>
+    </target>
+
+    <target name="download-jira-client">
+        <mkdir dir="lib"/>
+        <antcall target="download-via-maven">
+            <param name="target" value="lib"/>
+            <param name="project-path" value="com/googlecode/json-simple"/>
+            <param name="artifact-version" value="1.1"/>
+            <param name="artifact-name" value="json-simple"/>
+            <param name="artifact-type" value="jar"/>
+        </antcall>
+        <antcall target="download-via-maven">
+            <param name="target" value="lib"/>
+            <param name="project-path" value="commons-codec"/>
+            <param name="artifact-version" value="1.8"/>
+            <param name="artifact-name" value="commons-codec"/>
+            <param name="artifact-type" value="jar"/>
+        </antcall>
+    </target>
+
+    <target name="download-google-api-client">
+        <mkdir dir="lib"/>
+        <antcall target="download-via-maven">
+            <param name="target" value="lib"/>
+            <param name="project-path" value="com/google/apis"/>
+            <param name="artifact-version" value="v2-rev64-1.14.1-beta"/>
+            <param name="artifact-name" value="google-api-services-drive"/>
+            <param name="artifact-type" value="jar"/>
+        </antcall>
+        <antcall target="download-via-maven">
+            <param name="target" value="lib"/>
+            <param name="project-path" value="com/google/http-client"/>
+            <param name="artifact-version" value="1.14.1-beta"/>
+            <param name="artifact-name" value="google-http-client"/>
+            <param name="artifact-type" value="jar"/>
+        </antcall>
+        <antcall target="download-via-maven">
+            <param name="target" value="lib"/>
+            <param name="project-path" value="com/google/http-client"/>
+            <param name="artifact-version" value="1.14.1-beta"/>
+            <param name="artifact-name" value="google-http-client-jackson2"/>
+            <param name="artifact-type" value="jar"/>
+        </antcall>
+        <antcall target="download-via-maven">
+            <param name="target" value="lib"/>
+            <param name="project-path" value="com/google/oauth-client"/>
+            <param name="artifact-version" value="1.14.1-beta"/>
+            <param name="artifact-name" value="google-oauth-client"/>
+            <param name="artifact-type" value="jar"/>
+        </antcall>
+        <antcall target="download-via-maven">
+            <param name="target" value="lib"/>
+            <param name="project-path" value="com/fasterxml/jackson/core/"/>
+            <param name="artifact-version" value="2.1.3"/>
+            <param name="artifact-name" value="jackson-core"/>
+            <param name="artifact-type" value="jar"/>
+        </antcall> 
+        <antcall target="download-via-maven">
+            <param name="target" value="lib"/>
+            <param name="project-path" value="com/google/api-client"/>
+            <param name="artifact-version" value="1.14.1-beta"/>
+            <param name="artifact-name" value="google-api-client"/>
+            <param name="artifact-type" value="jar"/>
+        </antcall>
+
+    </target>
+
     <target name="download-sharepoint-plugins">
         <mkdir dir="lib/sharepoint-2007"/>
         <!-- Download and unpack binary artifact -->
         <mkdir dir="build/download"/>
-        <get src="http://archive.apache.org/dist/manifoldcf/apache-manifoldcf-sharepoint-2007-plugin-0.4-bin.zip" dest="build/download/apache-manifoldcf-sharepoint-2007-plugin-bin.zip"/>
+        <get src="http://archive.apache.org/dist/manifoldcf/apache-manifoldcf-sharepoint-2007-plugin-0.5-bin.zip" dest="build/download/apache-manifoldcf-sharepoint-2007-plugin-bin.zip"/>
         <unzip src="build/download/apache-manifoldcf-sharepoint-2007-plugin-bin.zip" dest="build/download/apache-manifoldcf-sharepoint-2007-plugin-bin"/>
         <copy todir="lib/sharepoint-2007">
-            <fileset dir="build/download/apache-manifoldcf-sharepoint-2007-plugin-bin/apache-manifoldcf-sharepoint-2007-plugin-0.4"/>
+            <fileset dir="build/download/apache-manifoldcf-sharepoint-2007-plugin-bin/apache-manifoldcf-sharepoint-2007-plugin-0.5"/>
         </copy>
         <mkdir dir="lib/sharepoint-2010"/>
         <!-- Download and unpack binary artifact -->
         <mkdir dir="build/download"/>
-        <get src="http://archive.apache.org/dist/manifoldcf/apache-manifoldcf-sharepoint-2010-plugin-0.2-bin.zip" dest="build/download/apache-manifoldcf-sharepoint-2010-plugin-bin.zip"/>
+        <get src="http://archive.apache.org/dist/manifoldcf/apache-manifoldcf-sharepoint-2010-plugin-0.4-bin.zip" dest="build/download/apache-manifoldcf-sharepoint-2010-plugin-bin.zip"/>
         <unzip src="build/download/apache-manifoldcf-sharepoint-2010-plugin-bin.zip" dest="build/download/apache-manifoldcf-sharepoint-2010-plugin-bin"/>
         <copy todir="lib/sharepoint-2010">
-            <fileset dir="build/download/apache-manifoldcf-sharepoint-2010-plugin-bin/apache-manifoldcf-sharepoint-2010-plugin-0.2"/>
+            <fileset dir="build/download/apache-manifoldcf-sharepoint-2010-plugin-bin/apache-manifoldcf-sharepoint-2010-plugin-0.4"/>
         </copy>
     </target>
 
     <target name="download-solr-plugins">
         <mkdir dir="lib/solr-3.x"/>
         <mkdir dir="build/download"/>
-        <get src="http://archive.apache.org/dist/manifoldcf/apache-manifoldcf-solr-3.x-plugin-0.4-bin.zip" dest="build/download/apache-manifoldcf-solr-3.x-plugin-bin.zip"/>
+        <get src="http://archive.apache.org/dist/manifoldcf/apache-manifoldcf-solr-3.x-plugin-1.1-bin.zip" dest="build/download/apache-manifoldcf-solr-3.x-plugin-bin.zip"/>
         <unzip src="build/download/apache-manifoldcf-solr-3.x-plugin-bin.zip" dest="build/download/apache-manifoldcf-solr-3.x-plugin-bin"/>
         <copy todir="lib/solr-3.x">
-            <fileset dir="build/download/apache-manifoldcf-solr-3.x-plugin-bin/apache-manifoldcf-solr-3.x-plugin-0.4"/>
+            <fileset dir="build/download/apache-manifoldcf-solr-3.x-plugin-bin/apache-manifoldcf-solr-3.x-plugin-1.1"/>
         </copy>
         <mkdir dir="lib/solr-4.x"/>
-        <get src="http://archive.apache.org/dist/manifoldcf/apache-manifoldcf-solr-4.x-plugin-0.4-bin.zip" dest="build/download/apache-manifoldcf-solr-4.x-plugin-bin.zip"/>
+        <get src="http://archive.apache.org/dist/manifoldcf/apache-manifoldcf-solr-4.x-plugin-1.1.1-bin.zip" dest="build/download/apache-manifoldcf-solr-4.x-plugin-bin.zip"/>
         <unzip src="build/download/apache-manifoldcf-solr-4.x-plugin-bin.zip" dest="build/download/apache-manifoldcf-solr-4.x-plugin-bin"/>
         <copy todir="lib/solr-4.x">
-            <fileset dir="build/download/apache-manifoldcf-solr-4.x-plugin-bin/apache-manifoldcf-solr-4.x-plugin-0.4"/>
+            <fileset dir="build/download/apache-manifoldcf-solr-4.x-plugin-bin/apache-manifoldcf-solr-4.x-plugin-1.1.1"/>
         </copy>
     </target>
 
+    <target name="download-hadoop" depends="download-guava">
+        <mkdir dir="lib"/>
+        <antcall target="download-via-maven">
+            <param name="target" value="lib"/>
+            <param name="project-path" value="org/apache/hadoop"/>
+            <param name="artifact-version" value="2.2.0"/>
+            <param name="artifact-name" value="hadoop-common"/>
+            <param name="artifact-type" value="jar"/>
+        </antcall>
+        <antcall target="download-via-maven">
+            <param name="target" value="lib"/>
+            <param name="project-path" value="org/apache/hadoop"/>
+            <param name="artifact-version" value="2.2.0"/>
+            <param name="artifact-name" value="hadoop-annotations"/>
+            <param name="artifact-type" value="jar"/>
+        </antcall>
+        <antcall target="download-via-maven">
+            <param name="target" value="lib"/>
+            <param name="project-path" value="org/apache/hadoop"/>
+            <param name="artifact-version" value="2.2.0"/>
+            <param name="artifact-name" value="hadoop-auth"/>
+            <param name="artifact-type" value="jar"/>
+        </antcall>
+    </target>
+    
+    <target name="download-guava">
+        <antcall target="download-via-maven">
+            <param name="target" value="lib"/>
+            <param name="project-path" value="com/google/guava"/>
+            <param name="artifact-version" value="r09"/>
+            <param name="artifact-name" value="guava"/>
+            <param name="artifact-type" value="jar"/>
+        </antcall>
+    </target>
+
     <target name="download-solrj">
         <mkdir dir="lib"/>
         <antcall target="download-via-maven">
             <param name="target" value="lib"/>
             <param name="project-path" value="org/apache/solr"/>
-            <param name="artifact-version" value="4.3.0"/>
+            <param name="artifact-version" value="4.6.0"/>
             <param name="artifact-name" value="solr-solrj"/>
             <param name="artifact-type" value="jar"/>
         </antcall>
+    </target>
+    
+    <target name="download-zookeeper">
+        <mkdir dir="lib"/>
         <antcall target="download-via-maven">
             <param name="target" value="lib"/>
             <param name="project-path" value="org/apache/zookeeper"/>
@@ -3579,8 +4245,8 @@
             <param name="artifact-type" value="jar"/>
         </antcall>
     </target>
-
-    <target name="make-core-deps" depends="download-solrj,download-httpcomponents,download-json,download-hsqldb,download-xerces,download-commons,download-elasticsearch-plugin,download-solr-plugins,download-sharepoint-plugins,download-jstl,download-xmlgraphics-commons,download-wstx-asl,download-xmlsec,download-xml-apis,download-wss4j,download-velocity,download-streambuffer,download-stax,download-servlet-api,download-xml-resolver,download-osgi,download-opensaml,download-mimepull,download-mail,download-log4j,download-junit,download-jaxws,download-glassfish,download-jaxb,download-tomcat,download-h2,download-h2-support,download-geronimo-specs,download-fop,download-derby,download-postgresql,download-axis,download-saaj,download-wsdl4j,download-castor,download-jetty,download-slf4j,download-xalan,download-activation,download-avalon-framework,download-poi,download-chemistry,download-ecj">
+    
+    <target name="make-core-deps" depends="download-jira-client,download-google-api-client,download-dropbox-client,download-solrj,download-zookeeper,download-httpcomponents,download-json,download-hsqldb,download-xerces,download-commons,download-elasticsearch-plugin,download-solr-plugins,download-sharepoint-plugins,download-jstl,download-xmlgraphics-commons,download-wstx-asl,download-xmlsec,download-xml-apis,download-wss4j,download-velocity,download-streambuffer,download-stax,download-servlet-api,download-xml-resolver,download-osgi,download-opensaml,download-mimepull,download-mail,download-log4j,download-junit,download-jaxws,download-glassfish,download-jaxb,download-tomcat,download-h2,download-h2-support,download-geronimo-specs,download-fop,download-derby,download-postgresql,download-axis,download-saaj,download-wsdl4j,download-castor,download-jetty,download-slf4j,download-xalan,download-activation,download-avalon-framework,download-poi,download-chemistry,download-ecj,download-hadoop">
         <copy todir="lib">
             <fileset dir="lib-license" includes="*.txt"/>
         </copy>
@@ -3609,12 +4275,18 @@
     <target name="make-deps" depends="download-proprietary-dependencies">
         <ant dir="connectors/alfresco" target="download-dependencies"/>
         <ant dir="connectors/cmis" target="download-dependencies"/>
+        <ant dir="connectors/generic" target="download-dependencies"/>
+        <ant dir="connectors/dropbox" target="download-dependencies"/>
+        <ant dir="connectors/googledrive" target="download-dependencies"/>
+        <ant dir="connectors/jira" target="download-dependencies"/>
         <ant dir="connectors/activedirectory" target="download-dependencies"/>
         <ant dir="connectors/ldap" target="download-dependencies"/>
         <ant dir="connectors/documentum" target="download-dependencies"/>
+        <ant dir="connectors/email" target="download-dependencies"/>
         <ant dir="connectors/filenet" target="download-dependencies"/>
         <ant dir="connectors/filesystem" target="download-dependencies"/>
         <ant dir="connectors/gts" target="download-dependencies"/>
+        <ant dir="connectors/hdfs" target="download-dependencies"/>
         <ant dir="connectors/jcifs" target="download-dependencies"/>
         <ant dir="connectors/jdbc" target="download-dependencies"/>
         <ant dir="connectors/livelink" target="download-dependencies"/>
@@ -3645,12 +4317,17 @@
     <target name="clean-deps" depends="download-proprietary-cleanup">
         <ant dir="connectors/alfresco" target="download-cleanup"/>
         <ant dir="connectors/cmis" target="download-cleanup"/>
+        <ant dir="connectors/generic" target="download-cleanup"/>        
+        <ant dir="connectors/dropbox" target="download-cleanup"/>        
+        <ant dir="connectors/googledrive" target="download-cleanup"/>
+        <ant dir="connectors/jira" target="download-cleanup"/>
         <ant dir="connectors/activedirectory" target="download-cleanup"/>
         <ant dir="connectors/ldap" target="download-cleanup"/>
         <ant dir="connectors/documentum" target="download-cleanup"/>
         <ant dir="connectors/filenet" target="download-cleanup"/>
         <ant dir="connectors/filesystem" target="download-cleanup"/>
         <ant dir="connectors/gts" target="download-cleanup"/>
+        <ant dir="connectors/hdfs" target="download-cleanup"/>
         <ant dir="connectors/jcifs" target="download-cleanup"/>
         <ant dir="connectors/jdbc" target="download-cleanup"/>
         <ant dir="connectors/livelink" target="download-cleanup"/>
diff --git a/connectors/activedirectory/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/activedirectory/ActiveDirectoryAuthority.java b/connectors/activedirectory/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/activedirectory/ActiveDirectoryAuthority.java
index dff312c..7bc7e58 100644
--- a/connectors/activedirectory/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/activedirectory/ActiveDirectoryAuthority.java
+++ b/connectors/activedirectory/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/activedirectory/ActiveDirectoryAuthority.java
@@ -63,14 +63,6 @@
   /** The length of time in milliseconds that the connection remains idle before expiring.  Currently 5 minutes. */
   private static final long expirationInterval = 300000L;
   
-  /** This is the active directory global deny token.  This should be ingested with all documents. */
-  private static final String globalDenyToken = "DEAD_AUTHORITY";
-  
-  private static final AuthorizationResponse unreachableResponse = new AuthorizationResponse(new String[]{globalDenyToken},
-    AuthorizationResponse.RESPONSE_UNREACHABLE);
-  private static final AuthorizationResponse userNotFoundResponse = new AuthorizationResponse(new String[]{globalDenyToken},
-    AuthorizationResponse.RESPONSE_USERNOTFOUND);
-  
   /** Constructor.
   */
   public ActiveDirectoryAuthority()
@@ -222,7 +214,21 @@
     super.poll();
   }
   
-  
+  /** This method is called to assess whether this connector instance should
+  * actually be counted as being connected.
+  *@return true if the connector instance is actually connected.
+  */
+  @Override
+  public boolean isConnected()
+  {
+    for (Map.Entry<String,DCSessionInfo> sessionEntry : sessionInfo.entrySet())
+    {
+      if (sessionEntry.getValue().isOpen())
+        return true;
+    }
+    return false;
+  }
+
   /** Close the connection.  Call this before discarding the repository connector.
   */
   @Override
@@ -318,7 +324,7 @@
     if (domainController == null)
     {
       // No domain controller found for the user, so return "user not found".
-      return userNotFoundResponse;
+      return RESPONSE_USERNOTFOUND;
     }
     
     // Look up connection parameters
@@ -326,7 +332,7 @@
     if (dcParams == null)
     {
       // No domain controller, even though it's mentioned in a rule
-      return userNotFoundResponse;
+      return RESPONSE_USERNOTFOUND;
     }
     
     // Use the complete fqn if the field is the "userPrincipalName"
@@ -361,7 +367,7 @@
       //Get DistinguishedName (for this method we are using DomainPart as a searchBase ie: DC=qa-ad-76,DC=metacarta,DC=com")
       String searchBase = getDistinguishedName(ctx, userPart, domainsb.toString(), userACLsUsername);
       if (searchBase == null)
-        return userNotFoundResponse;
+        return RESPONSE_USERNOTFOUND;
 
       //specify the LDAP search filter
       String searchFilter = "(objectClass=user)";
@@ -412,7 +418,7 @@
       }
 
       if (theGroups.size() == 0)
-        return userNotFoundResponse;
+        return RESPONSE_USERNOTFOUND;
       
       // All users get certain well-known groups
       theGroups.add("S-1-1-0");
@@ -431,12 +437,12 @@
     catch (NameNotFoundException e)
     {
       // This means that the user doesn't exist
-      return userNotFoundResponse;
+      return RESPONSE_USERNOTFOUND;
     }
     catch (NamingException e)
     {
       // Unreachable
-      return unreachableResponse;
+      return RESPONSE_UNREACHABLE;
     }
   }
 
@@ -448,7 +454,7 @@
   public AuthorizationResponse getDefaultAuthorizationResponse(String userName)
   {
     // The default response if the getConnection method fails
-    return unreachableResponse;
+    return RESPONSE_UNREACHABLE;
   }
 
   // UI support methods.
@@ -488,13 +494,13 @@
   {
     Map<String,Object> velocityContext = new HashMap<String,Object>();
     velocityContext.put("TabName",tabName);
-    fillInDomainControllerTab(velocityContext,parameters);
-    fillInCacheTab(velocityContext,parameters);
+    fillInDomainControllerTab(velocityContext,out,parameters);
+    fillInCacheTab(velocityContext,out,parameters);
     Messages.outputResourceWithVelocity(out,locale,"editConfiguration_DomainController.html",velocityContext);
     Messages.outputResourceWithVelocity(out,locale,"editConfiguration_Cache.html",velocityContext);
   }
   
-  protected static void fillInDomainControllerTab(Map<String,Object> velocityContext, ConfigParams parameters)
+  protected static void fillInDomainControllerTab(Map<String,Object> velocityContext, IPasswordMapperActivity mapper, ConfigParams parameters)
   {
     String domainControllerName = parameters.getParameter(ActiveDirectoryConfig.PARAM_DOMAINCONTROLLER);
     String userName = parameters.getParameter(ActiveDirectoryConfig.PARAM_USERNAME);
@@ -506,7 +512,7 @@
     // Backwards compatibility: if domain controller parameter is set, create an entry in the map.
     if (domainControllerName != null)
     {
-      domainControllers.add(createDomainControllerMap("",domainControllerName,userName,password,authentication,userACLsUsername));
+      domainControllers.add(createDomainControllerMap(mapper,"",domainControllerName,userName,password,authentication,userACLsUsername));
     }
     else
     {
@@ -524,14 +530,14 @@
           String dcPassword = deobfuscate(cn.getAttributeValue(ActiveDirectoryConfig.ATTR_PASSWORD));
           String dcAuthentication = cn.getAttributeValue(ActiveDirectoryConfig.ATTR_AUTHENTICATION);
           String dcUserACLsUsername = cn.getAttributeValue(ActiveDirectoryConfig.ATTR_USERACLsUSERNAME);
-          domainControllers.add(createDomainControllerMap(dcSuffix,dcDomainController,dcUserName,dcPassword,dcAuthentication,dcUserACLsUsername));
+          domainControllers.add(createDomainControllerMap(mapper,dcSuffix,dcDomainController,dcUserName,dcPassword,dcAuthentication,dcUserACLsUsername));
         }
       }
     }
     velocityContext.put("DOMAINCONTROLLERS",domainControllers);
   }
 
-  protected static Map<String,String> createDomainControllerMap(String suffix, String domainControllerName,
+  protected static Map<String,String> createDomainControllerMap(IPasswordMapperActivity mapper, String suffix, String domainControllerName,
     String userName, String password, String authentication, String userACLsUsername)
   {
     Map<String,String> defaultMap = new HashMap<String,String>();
@@ -542,7 +548,7 @@
     if (userName != null)
       defaultMap.put("USERNAME",userName);
     if (password != null)
-      defaultMap.put("PASSWORD",password);
+      defaultMap.put("PASSWORD",mapper.mapPasswordToKey(password));
     if (authentication != null)
       defaultMap.put("AUTHENTICATION",authentication);
     if (userACLsUsername != null)
@@ -550,7 +556,7 @@
     return defaultMap;
   }
   
-  protected static void fillInCacheTab(Map<String,Object> velocityContext, ConfigParams parameters)
+  protected static void fillInCacheTab(Map<String,Object> velocityContext, IPasswordMapperActivity mapper, ConfigParams parameters)
   {
     String cacheLifetime = parameters.getParameter(ActiveDirectoryConfig.PARAM_CACHELIFETIME);
     if (cacheLifetime == null)
@@ -611,7 +617,7 @@
             variableContext.getParameter("dcrecord_suffix"),
             variableContext.getParameter("dcrecord_domaincontrollername"),
             variableContext.getParameter("dcrecord_username"),
-            variableContext.getParameter("dcrecord_password"),
+            variableContext.mapKeyToPassword(variableContext.getParameter("dcrecord_password")),
             variableContext.getParameter("dcrecord_authentication"),
             variableContext.getParameter("dcrecord_userACLsUsername"));
         }
@@ -622,7 +628,7 @@
             variableContext.getParameter("dcrecord_suffix_"+i),
             variableContext.getParameter("dcrecord_domaincontrollername_"+i),
             variableContext.getParameter("dcrecord_username_"+i),
-            variableContext.getParameter("dcrecord_password_"+i),
+            variableContext.mapKeyToPassword(variableContext.getParameter("dcrecord_password_"+i)),
             variableContext.getParameter("dcrecord_authentication_"+i),
             variableContext.getParameter("dcrecord_userACLsUsername_"+i));
         }
@@ -683,8 +689,8 @@
     throws ManifoldCFException, IOException
   {
     Map<String,Object> velocityContext = new HashMap<String,Object>();
-    fillInDomainControllerTab(velocityContext,parameters);
-    fillInCacheTab(velocityContext,parameters);
+    fillInDomainControllerTab(velocityContext,out,parameters);
+    fillInCacheTab(velocityContext,out,parameters);
     Messages.outputResourceWithVelocity(out,locale,"viewConfiguration.html",velocityContext);
   }
 
@@ -925,6 +931,11 @@
         closeConnection();
     }
 
+    /** Check if open */
+    protected boolean isOpen()
+    {
+      return ctx != null;
+    }
   }
 
   /** Class describing a domain suffix and corresponding domain controller name rule.
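
Note on the UI changes in this file: editConfiguration and viewConfiguration now hand the page output object to the fill-in methods as an IPasswordMapperActivity, so the stored password is replaced by an opaque key before it reaches the browser, and processConfigurationPost maps that key back to the real value. The following is a minimal sketch of that round trip, assuming the stock org.apache.manifoldcf.core.interfaces.IPasswordMapperActivity interface (mapPasswordToKey/mapKeyToPassword); the class and the sample values are purely illustrative and not part of this change.

    // Illustrative sketch only: the password-key round trip used by the connector UI above.
    // Assumes IPasswordMapperActivity as provided by the ManifoldCF core framework.
    import org.apache.manifoldcf.core.interfaces.IPasswordMapperActivity;

    public class PasswordMappingSketch
    {
      /** Render phase: turn the stored password into an opaque key for the HTML form. */
      public static String toFormValue(IPasswordMapperActivity mapper, String storedPassword)
      {
        return mapper.mapPasswordToKey(storedPassword);
      }

      /** Post phase: turn the posted form value (key or newly typed password) back into the real value. */
      public static String fromFormValue(IPasswordMapperActivity mapper, String postedValue)
      {
        return mapper.mapKeyToPassword(postedValue);
      }
    }
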
diff --git a/connectors/activedirectory/pom.xml b/connectors/activedirectory/pom.xml
index c4b7d96..8517cac 100644
--- a/connectors/activedirectory/pom.xml
+++ b/connectors/activedirectory/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.manifoldcf</groupId>
     <artifactId>mcf-connectors</artifactId>
-    <version>1.2-SNAPSHOT</version>
+    <version>1.5-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
 
@@ -30,14 +30,29 @@
   <build>
     <sourceDirectory>${basedir}/connector/src/main/java</sourceDirectory>
     <testSourceDirectory>${basedir}/connector/src/test/java</testSourceDirectory>
+    <resources>
+      <resource>
+        <directory>${basedir}/connector/src/main/resources</directory>
+        <includes>
+          <include>**/*.html</include>
+          <include>**/*.js</include>
+        </includes>
+      </resource>
+      <resource>
+        <directory>${basedir}/connector/src/main/native2ascii</directory>
+        <includes>
+          <include>**/*.properties</include>
+        </includes>
+      </resource>
+    </resources> 
+
     <plugins>
       <plugin>
         <groupId>org.codehaus.mojo</groupId>
         <artifactId>native2ascii-maven-plugin</artifactId>
-        <version>1.0-alpha-1</version>
+        <version>1.0-beta-1</version>
         <configuration>
-            <dest>target/classes</dest>
-            <src>connector/src/main/native2ascii</src>
+            <workDir>target/classes</workDir>
         </configuration>
         <executions>
             <execution>
@@ -47,7 +62,9 @@
                 </goals>
                 <configuration>
                     <encoding>UTF8</encoding>
-                    <includes>**/*.properties</includes>
+                    <includes>
+                      <include>**/*.properties</include>
+                    </includes>
                 </configuration>
             </execution>
         </executions>
diff --git a/connectors/alfresco/build.xml b/connectors/alfresco/build.xml
index 57f44e1..82d45bd 100644
--- a/connectors/alfresco/build.xml
+++ b/connectors/alfresco/build.xml
@@ -21,7 +21,7 @@
 
   
     <target name="calculate-condition">
-        <available file="lib-proprietary/alfresco-web-service-client-3.4.e.jar" property="alfrescoStatus"/>
+        <available file="lib-proprietary/alfresco-web-service-client-4.2.c.jar" property="alfrescoStatus"/>
         <condition property="build-present">
             <isset property="alfrescoStatus"/>
         </condition>
@@ -35,7 +35,7 @@
     </target>
 
     <target name="precompile-warn" depends="calculate-condition" unless="build-present">
-        <echo message="Alfresco Connector cannot be built without alfresco-web-service-client-3.4.e.jar"/>
+        <echo message="Alfresco Connector cannot be built without alfresco-web-service-client-4.2.c.jar"/>
     </target>
 
     <target name="pretest-warn" depends="calculate-testcode-condition" unless="tests-present">
@@ -50,7 +50,6 @@
             <include name="saaj*.jar"/>	
             <include name="commons-discovery*.jar"/>
             <include name="jaxrpc*.jar"/>
-            <include name="mail*.jar"/>
             <include name="opensaml*.jar"/>
             <include name="wsdl4j*.jar"/>
             <include name="wss4j*.jar"/>
@@ -73,7 +72,6 @@
                 <include name="saaj*.jar"/>	
                 <include name="commons-discovery*.jar"/>
                 <include name="jaxrpc*.jar"/>
-                <include name="mail*.jar"/>
                 <include name="opensaml*.jar"/>
                 <include name="wsdl4j*.jar"/>
                 <include name="wss4j*.jar"/>
@@ -91,8 +89,8 @@
     </target>
 
     <target name="build-test-materials" depends="pretest-check" if="canTest">
-        <mkdir dir="build/alfresco-war"/>
-        <copy todir="build/alfresco-war">
+        <mkdir dir="build/alfresco-4-war"/>
+        <copy todir="build/alfresco-4-war">
             <fileset dir="test-materials-proprietary">
                 <include name="alfresco*.war"/>
             </fileset>
@@ -100,7 +98,7 @@
     </target>
 
     <target name="download-alfresco-ws-client">
-      <get src="https://artifacts.alfresco.com/nexus/service/local/repositories/releases/content/org/alfresco/alfresco-web-service-client/3.4.e/alfresco-web-service-client-3.4.e.jar" dest="lib-proprietary"/>
+      <get src="https://artifacts.alfresco.com/nexus/service/local/repositories/releases/content/org/alfresco/alfresco-web-service-client/4.2.c/alfresco-web-service-client-4.2.c.jar" dest="lib-proprietary"/>
     </target>
 	
     <target name="download-dependencies" depends="download-alfresco-ws-client"/>
diff --git a/connectors/alfresco/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/alfresco/AlfrescoConfig.java b/connectors/alfresco/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/alfresco/AlfrescoConfig.java
index 6fb9484..b618c00 100644
--- a/connectors/alfresco/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/alfresco/AlfrescoConfig.java
+++ b/connectors/alfresco/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/alfresco/AlfrescoConfig.java
@@ -51,6 +51,9 @@
   /** Separator for the username field dedicated to the tenant domain */
   public static final String TENANT_DOMAIN_SEP = "@";
   
+  /** Socket Timeout parameter for the Alfresco Web Service Client */
+  public static final String SOCKET_TIMEOUT_PARAM = "socketTimeout";
+  
   //default values
   public static final String USERNAME_DEFAULT_VALUE = "admin";
   public static final String PASSWORD_DEFAULT_VALUE = "admin";
@@ -58,5 +61,6 @@
   public static final String SERVER_DEFAULT_VALUE = "localhost";
   public static final String PORT_DEFAULT_VALUE = "8080";
   public static final String PATH_DEFAULT_VALUE = "/alfresco/api";
+  public static final int SOCKET_TIMEOUT_DEFAULT_VALUE = 120000;
   
 }
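
The new SOCKET_TIMEOUT_PARAM and SOCKET_TIMEOUT_DEFAULT_VALUE above are consumed in the repository connector's connect path below, falling back to the 120000 ms default when the parameter is absent. The following is a minimal sketch of how the parameter would be set and resolved through ConfigParams; the helper class and the 90000 ms value are illustrative only, not part of this change.

    // Illustrative sketch only: storing and resolving the Alfresco socket timeout parameter.
    // Assumes org.apache.manifoldcf.core.interfaces.ConfigParams and the AlfrescoConfig
    // constants introduced above; the concrete value is an example, not a recommendation.
    import org.apache.manifoldcf.core.interfaces.ConfigParams;
    import org.apache.manifoldcf.crawler.connectors.alfresco.AlfrescoConfig;

    public class AlfrescoTimeoutSketch
    {
      public static int resolveSocketTimeout(ConfigParams params)
      {
        String raw = params.getParameter(AlfrescoConfig.SOCKET_TIMEOUT_PARAM);
        // Fall back to the connector default (120000 ms) when the parameter is not set.
        return raw != null ? Integer.parseInt(raw) : AlfrescoConfig.SOCKET_TIMEOUT_DEFAULT_VALUE;
      }

      public static void main(String[] args)
      {
        ConfigParams params = new ConfigParams();
        params.setParameter(AlfrescoConfig.SOCKET_TIMEOUT_PARAM, "90000"); // 90-second example
        System.out.println(resolveSocketTimeout(params));
      }
    }
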
diff --git a/connectors/alfresco/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/alfresco/AlfrescoRepositoryConnector.java b/connectors/alfresco/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/alfresco/AlfrescoRepositoryConnector.java
index fe102d9..0af2640 100644
--- a/connectors/alfresco/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/alfresco/AlfrescoRepositoryConnector.java
+++ b/connectors/alfresco/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/alfresco/AlfrescoRepositoryConnector.java
@@ -46,6 +46,7 @@
 import org.apache.manifoldcf.agents.interfaces.ServiceInterruption;
 import org.apache.manifoldcf.core.interfaces.ConfigParams;
 import org.apache.manifoldcf.core.interfaces.IHTTPOutput;
+import org.apache.manifoldcf.core.interfaces.IPasswordMapperActivity;
 import org.apache.manifoldcf.core.interfaces.IPostParameters;
 import org.apache.manifoldcf.core.interfaces.IThreadContext;
 import org.apache.manifoldcf.core.interfaces.ManifoldCFException;
@@ -79,9 +80,15 @@
   /** Endpoint context path of the Alfresco webapp */
   protected String path = null;
   
+  /** Endpoint with all the details */
+  protected String endpoint = null;
+  
   /** Alfresco Tenant domain */
   protected String tenantDomain = null;
   
+  /** Socket Timeout for the Alfresco Web Service Client */
+  protected int socketTimeout = -1;
+  
   protected AuthenticationDetails session = null;
 
   protected static final long timeToRelease = 300000L;
@@ -91,6 +98,9 @@
 
   // Tabs
   
+  /** Tab name parameter for managing the view of the Web UI */
+  private static final String TAB_NAME_PARAM = "TabName";
+  
   /** The Lucene Query label for the configuration tab of the job settings */
   private static final String TAB_LABEL_LUCENE_QUERY_RESOURCE = "AlfrescoConnector.LuceneQuery";
   /** Alfresco Server configuration tab name */
@@ -164,7 +174,12 @@
   @Override
   public void disconnect() throws ManifoldCFException {
     if (session != null) {
-      AuthenticationUtils.endSession();
+      try {
+        AuthenticationUtils.endSession();
+      } catch (Exception e) {
+        Logging.connectors.error("Alfresco: error during disconnect: "+e.getMessage(), e);
+        throw new ManifoldCFException("Alfresco: error during disconnect: "+e.getMessage(), e);
+      }
       session = null;
       lastSessionFetch = -1L;
     }
@@ -175,7 +190,9 @@
     server = null;
     port = null;
     path = null;
+    endpoint = null;
     tenantDomain = null;
+    socketTimeout = AlfrescoConfig.SOCKET_TIMEOUT_DEFAULT_VALUE;
 
   }
 
@@ -195,10 +212,25 @@
     path = params.getParameter(AlfrescoConfig.PATH_PARAM);
     tenantDomain = params.getParameter(AlfrescoConfig.TENANT_DOMAIN_PARAM);
     
+    if(params.getParameter(AlfrescoConfig.SOCKET_TIMEOUT_PARAM)!=null){
+      socketTimeout = Integer.parseInt(params.getParameter(AlfrescoConfig.SOCKET_TIMEOUT_PARAM));
+    } else {
+      socketTimeout = AlfrescoConfig.SOCKET_TIMEOUT_DEFAULT_VALUE;
+    }
+    
+    //endpoint
+    if(StringUtils.isNotEmpty(protocol)
+        && StringUtils.isNotEmpty(server)
+        && StringUtils.isNotEmpty(port)
+        && StringUtils.isNotEmpty(path)){
+      endpoint = protocol+"://"+server+":"+port+path;
+    }
+    
     //tenant domain (optional parameter). Pattern: username@tenantDomain
     if(StringUtils.isNotEmpty(tenantDomain)){
       username += AlfrescoConfig.TENANT_DOMAIN_SEP + tenantDomain;
     }
+    
   }
 
   /** Test the connection.  Returns a string describing the connection integrity.
@@ -250,29 +282,31 @@
         throw new ManifoldCFException("Parameter " + AlfrescoConfig.PATH_PARAM
             + " required but not set");
     
-      
-    String endpoint = protocol+"://"+server+":"+port+path;
-    WebServiceFactory.setEndpointAddress(endpoint);
+    endpoint = protocol+"://"+server+":"+port+path;
     try {
+    
+      WebServiceFactory.setEndpointAddress(endpoint);
+      WebServiceFactory.setTimeoutMilliseconds(socketTimeout);
       AuthenticationUtils.startSession(username, password);
       session = AuthenticationUtils.getAuthenticationDetails();
-    } catch (AuthenticationFault e) {
-      Logging.connectors.warn(
-          "Alfresco: Error during authentication. Username: "+username + ", endpoint: "+endpoint+". "
-              + e.getMessage(), e);
-      throw new ManifoldCFException("Alfresco: Error during authentication. Username: "+username + ", endpoint: "+endpoint+". "
-          + e.getMessage(), e);
-    } catch (WebServiceException e){
-      Logging.connectors.warn(
-          "Alfresco: Error during trying to authenticate the user. Username: "+username + ", endpoint: "+endpoint
-          +". Please check the connector parameters. " 
-          + e.getMessage(), e);
-      throw new ManifoldCFException("Alfresco: Error during trying to authenticate the user. Username: "+username + ", endpoint: "+endpoint
-          +". Please check the connector parameters. "
-          + e.getMessage(), e);
-    }
+      
+    } catch (AuthenticationFault e) {
+      Logging.connectors.warn(
+          "Alfresco: Error during authentication. Username: "+username + ", endpoint: "+endpoint+". "
+              + e.getMessage(), e);
+      handleIOException(e);
+    } catch (WebServiceException e){
+      Logging.connectors.warn(
+          "Alfresco: Error trying to authenticate the user. Username: "+username + ", endpoint: "+endpoint
+          +". Please check the connector parameters. "
+          + e.getMessage(), e);
+      throw new ManifoldCFException("Alfresco: Error trying to authenticate the user. Username: "+username + ", endpoint: "+endpoint
+          +". Please check the connector parameters. "
+          + e.getMessage(), e);
+    }
     
     lastSessionFetch = System.currentTimeMillis();
+    
     }
   }
 
@@ -285,7 +319,13 @@
 
     long currentTime = System.currentTimeMillis();
     if (currentTime >= lastSessionFetch + timeToRelease) {
-        AuthenticationUtils.endSession();
+        try {
+          AuthenticationUtils.endSession();
+        } catch (Exception e) {
+          Logging.connectors.error(
+              "Alfresco: Error releasing the connection: "+e.getMessage(), e);
+          throw new ManifoldCFException("Alfresco: Error releasing the connection: "+e.getMessage(), e);
+        }
         session = null;
         lastSessionFetch = -1L;
     }
@@ -294,16 +334,22 @@
   protected void checkConnection() throws ManifoldCFException,
       ServiceInterruption {
     while (true) {
-      getSession();
-      String ticket = AuthenticationUtils.getTicket();
-      if(StringUtils.isEmpty(ticket)){
-        Logging.connectors.error(
-            "Alfresco: Error during checking the connection.");
-        throw new ManifoldCFException( "Alfresco: Error during checking the connection.");
-      }
-      AuthenticationUtils.endSession();
-      session=null;
-      return;
+      try {
+        getSession();
+        String ticket = AuthenticationUtils.getTicket();
+        if(StringUtils.isEmpty(ticket)){
+          Logging.connectors.error(
+              "Alfresco: Error checking the connection: the session ticket is empty.");
+          throw new ManifoldCFException("Alfresco: Error checking the connection: the session ticket is empty.");
+        }
+        AuthenticationUtils.endSession();
+      } catch (Exception e) {
+        Logging.connectors.error(
+            "Alfresco: Error checking the connection: "+e.getMessage(), e);
+        throw new ManifoldCFException("Alfresco: Error checking the connection: "+e.getMessage(), e);
+      }
+      session=null;
+      return;
     }
   }
 
@@ -331,6 +377,16 @@
     }
   }
 
+  /** This method is called to assess whether this connector instance should
+  * actually be counted as being connected.
+  *@return true if the connector instance is actually connected.
+  */
+  @Override
+  public boolean isConnected()
+  {
+    return session != null;
+  }
+
   /** Queue "seed" documents.  Seed documents are the starting places for crawling activity.  Documents
    * are seeded when this method calls appropriate methods in the passed in ISeedingActivity object.
    *
@@ -372,25 +428,30 @@
       }
       i++;
     }
-
-    QueryResult queryResult = null;
-    if (StringUtils.isEmpty(luceneQuery)) {
-      // get documents from the root of the Alfresco Repository
-      queryResult = SearchUtils.getChildrenFromCompanyHome(username, password, session);
-    } else {
-      // execute a Lucene query against the repository
-      queryResult = SearchUtils.luceneSearch(username, password, session, luceneQuery);
-    }
-
-    if(queryResult!=null){
-      ResultSet resultSet = queryResult.getResultSet();
-      ResultSetRow[] resultSetRows = resultSet.getRows();
-      for (ResultSetRow resultSetRow : resultSetRows) {
-          NamedValue[] properties = resultSetRow.getColumns();
-          String nodeReference = PropertiesUtils.getNodeReference(properties);
-          activities.addSeedDocument(nodeReference);
-        }
+    
+    try{
+      QueryResult queryResult = null;
+      if (StringUtils.isEmpty(luceneQuery)) {
+        // get documents from the root of the Alfresco Repository
+        queryResult = SearchUtils.getChildrenFromCompanyHome(endpoint, username, password, socketTimeout, session);
+      } else {
+        // execute a Lucene query against the repository
+        queryResult = SearchUtils.luceneSearch(endpoint, username, password, socketTimeout, session, luceneQuery);
       }
+  
+      if(queryResult!=null){
+        ResultSet resultSet = queryResult.getResultSet();
+        ResultSetRow[] resultSetRows = resultSet.getRows();
+        for (ResultSetRow resultSetRow : resultSetRows) {
+            NamedValue[] properties = resultSetRow.getColumns();
+            String nodeReference = PropertiesUtils.getNodeReference(properties);
+            activities.addSeedDocument(nodeReference);
+          }
+      }
+    } catch(IOException e){
+      Logging.connectors.warn("Alfresco: IOException: " + e.getMessage(), e);
+      handleIOException(e);
+    }
   }
 
   /** Get the maximum number of documents to amalgamate together into one batch, for this connector.
@@ -426,7 +487,7 @@
 
   /** Fill in Velocity parameters for the Server tab.
   */
-  private static void fillInServerParameters(Map<String,String> paramMap, ConfigParams parameters)
+  private static void fillInServerParameters(Map<String,String> paramMap, IPasswordMapperActivity mapper, ConfigParams parameters)
   {
     String username = parameters.getParameter(AlfrescoConfig.USERNAME_PARAM);
     if (username == null)
@@ -436,6 +497,8 @@
     String password = parameters.getParameter(AlfrescoConfig.PASSWORD_PARAM);
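+    // the real password is mapped to an opaque key before it is handed to the configuration UI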
     if (password == null) 
       password = AlfrescoConfig.PASSWORD_DEFAULT_VALUE;
+    else
+      password = mapper.mapPasswordToKey(password);
     paramMap.put(AlfrescoConfig.PASSWORD_PARAM, password);
     
     String protocol = parameters.getParameter(AlfrescoConfig.PROTOCOL_PARAM);
@@ -460,8 +523,14 @@
     
     String tenantDomain = parameters.getParameter(AlfrescoConfig.TENANT_DOMAIN_PARAM);
     if (tenantDomain == null)
-      tenantDomain = "";
+      tenantDomain = StringUtils.EMPTY;
     paramMap.put(AlfrescoConfig.TENANT_DOMAIN_PARAM, tenantDomain);
+    
+    String socketTimeout = parameters.getParameter(AlfrescoConfig.SOCKET_TIMEOUT_PARAM);
+    if (socketTimeout == null)
+      socketTimeout = String.valueOf(AlfrescoConfig.SOCKET_TIMEOUT_DEFAULT_VALUE);
+    paramMap.put(AlfrescoConfig.SOCKET_TIMEOUT_PARAM, socketTimeout);
+    
   }
 
   /**
@@ -488,7 +557,7 @@
     // Fill in parameters for all tabs
 
     // Server tab
-    fillInServerParameters(paramMap, parameters);
+    fillInServerParameters(paramMap, out, parameters);
   
     outputResource(VIEW_CONFIG_FORWARD, out, locale, paramMap);
   }
@@ -520,7 +589,7 @@
     Map<String,String> paramMap = new HashMap<String,String>();
         
     // Fill in parameters for all tabs
-    fillInServerParameters(paramMap, parameters);
+    fillInServerParameters(paramMap, out, parameters);
 
     outputResource(EDIT_CONFIG_HEADER_FORWARD, out, locale, paramMap);
   }
@@ -541,8 +610,8 @@
     
     // Do the Server tab
     Map<String,String> paramMap = new HashMap<String,String>();
-    paramMap.put("TabName", tabName);
-    fillInServerParameters(paramMap, parameters);
+    paramMap.put(TAB_NAME_PARAM, tabName);
+    fillInServerParameters(paramMap, out, parameters);
     outputResource(EDIT_CONFIG_FORWARD_SERVER, out, locale, paramMap);  
   }
 
@@ -577,7 +646,7 @@
 
     String password = variableContext.getParameter(AlfrescoConfig.PASSWORD_PARAM);
     if (password != null) {
-      parameters.setParameter(AlfrescoConfig.PASSWORD_PARAM, password);
+      parameters.setParameter(AlfrescoConfig.PASSWORD_PARAM, variableContext.mapKeyToPassword(password));
     }
     
     String protocol = variableContext.getParameter(AlfrescoConfig.PROTOCOL_PARAM);
@@ -605,6 +674,11 @@
       parameters.setParameter(AlfrescoConfig.TENANT_DOMAIN_PARAM, tenantDomain);
     }
     
+    String socketTimeout = variableContext.getParameter(AlfrescoConfig.SOCKET_TIMEOUT_PARAM);
+    if (socketTimeout != null){
+      parameters.setParameter(AlfrescoConfig.SOCKET_TIMEOUT_PARAM, socketTimeout);
+    }
+    
     return null;
   }
 
@@ -707,7 +781,7 @@
         
     // LuceneQuery tab
     Map<String,String> paramMap = new HashMap<String,String>();
-    paramMap.put("TabName", tabName);
+    paramMap.put(TAB_NAME_PARAM, tabName);
     fillInLuceneQueryParameters(paramMap, ds);
     outputResource(EDIT_SPEC_FORWARD_LUCENEQUERY, out, locale, paramMap);
   }
@@ -782,7 +856,15 @@
       predicate.setNodes(new Reference[] { reference });
 
       // getting properties
-      Node resultNode = NodeUtils.get(username, password, session, predicate);
+      Node resultNode = null;
+      try {
+        resultNode = NodeUtils.get(endpoint, username, password, socketTimeout, session, predicate);
+      } catch (IOException e) {
+        Logging.connectors.warn(
+            "Alfresco: IOException closing file input stream: "
+                + e.getMessage(), e);
+        handleIOException(e);
+      }
       
       String errorCode = "OK";
       String errorDesc = StringUtils.EMPTY;
@@ -790,20 +872,29 @@
       NamedValue[] properties = resultNode.getProperties();
       boolean isDocument = ContentModelUtils.isDocument(properties);
       
-      boolean isFolder = ContentModelUtils.isFolder(username, password, session, reference);
-      
-      //a generic node in Alfresco could have child-associations
-      if (isFolder) {
-        // ingest all the children of the folder
-        QueryResult queryResult = SearchUtils.getChildren(username, password, session, reference);
-        ResultSet resultSet = queryResult.getResultSet();
-        ResultSetRow[] resultSetRows = resultSet.getRows();
-        for (ResultSetRow resultSetRow : resultSetRows) {
-          NamedValue[] childProperties = resultSetRow.getColumns();
-          String childNodeReference = PropertiesUtils.getNodeReference(childProperties);
-          activities.addDocumentReference(childNodeReference, nodeReference, RELATIONSHIP_CHILD);
-        }
-      } 
+      try {
+
+        boolean isFolder = ContentModelUtils.isFolder(endpoint, username, password, socketTimeout, session, reference);
+
+        //a generic node in Alfresco could have child-associations
+        if (isFolder) {
+          // ingest all the children of the folder
+          QueryResult queryResult = SearchUtils.getChildren(endpoint, username, password, socketTimeout, session, reference);
+          ResultSet resultSet = queryResult.getResultSet();
+          ResultSetRow[] resultSetRows = resultSet.getRows();
+          for (ResultSetRow resultSetRow : resultSetRows) {
+            NamedValue[] childProperties = resultSetRow.getColumns();
+            String childNodeReference = PropertiesUtils.getNodeReference(childProperties);
+            activities.addDocumentReference(childNodeReference, nodeReference, RELATIONSHIP_CHILD);
+          }
+        }
+
+      } catch (IOException e) {
+        Logging.connectors.warn(
+            "Alfresco: IOException getting the children of the node: "
+                + e.getMessage(), e);
+        handleIOException(e);
+      }
       
       //a generic node in Alfresco could also have binaries content
       if (isDocument) {
@@ -819,9 +910,9 @@
           // binaries ingestion - in Alfresco we could have more than one binary for each node (custom content models)
           for (NamedValue contentProperty : contentProperties) {
             //we are ingesting all the binaries defined as d:content property in the Alfresco content model
-            Content binary = ContentReader.read(username, password, session, predicate, contentProperty.getName());
+            Content binary = ContentReader.read(endpoint, username, password, socketTimeout, session, predicate, contentProperty.getName());
             fileLength = binary.getLength();
-            is = ContentReader.getBinary(binary, username, password, session);
+            is = ContentReader.getBinary(endpoint, binary, username, password, socketTimeout, session);
             rd.setBinary(is, fileLength);
             
             //id is the node reference only if the node has an unique content stream
@@ -843,12 +934,20 @@
             activities.ingestDocument(id, version, documentURI, rd);
           }
           
+          AuthenticationUtils.endSession();
+          
         } catch (ParseException e) {
           errorCode = "IO ERROR";
           errorDesc = e.getMessage();
           Logging.connectors.warn(
               "Alfresco: Error during the reading process of dates: "
                   + e.getMessage(), e);
+          handleParseException(e);
+        } catch (IOException e) {
+          Logging.connectors.warn(
+              "Alfresco: IOException: "
+                  + e.getMessage(), e);
+          handleIOException(e);
         } finally {
           try {
             if(is!=null){
@@ -865,9 +964,9 @@
             Logging.connectors.warn(
                 "Alfresco: IOException closing file input stream: "
                     + e.getMessage(), e);
+            handleIOException(e);
           }
-          
-          AuthenticationUtils.endSession();
+                    
           session = null;
           
           activities.recordActivity(new Long(startTime), ACTIVITY_READ,
@@ -878,7 +977,7 @@
       i++;
     }
   }
-  
+
   /** The short version of getDocumentVersions.
    * Get document versions given an array of document identifiers.
    * This method is called for EVERY document that is considered. It is
@@ -909,7 +1008,16 @@
       predicate.setStore(SearchUtils.STORE);
       predicate.setNodes(new Reference[]{reference});
       
-      Node node = NodeUtils.get(username, password, session, predicate);
+      Node node = null;
+      try {
+        node = NodeUtils.get(endpoint, username, password, socketTimeout, session, predicate);
+      } catch (IOException e) {
+        Logging.connectors.warn(
+            "Alfresco: IOException closing file input stream: "
+                + e.getMessage(), e);
+        handleIOException(e);
+      }
+      
       if(node.getProperties()!=null){
         NamedValue[] properties = node.getProperties();
         boolean isDocument = ContentModelUtils.isDocument(properties);
@@ -930,5 +1038,22 @@
     }
     return rval;
   }
+  
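+  /** Convert an IOException thrown by the Alfresco web services client into the proper
+  * framework exception: a thread interrupt becomes a ManifoldCFException flagged as
+  * INTERRUPTED, while socket timeouts and other transient IO errors become a
+  * ServiceInterruption that retries after 5 minutes and gives up after 3 hours.
+  */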
+  private static void handleIOException(IOException e)
+    throws ManifoldCFException, ServiceInterruption {
+    if (!(e instanceof java.net.SocketTimeoutException) && (e instanceof InterruptedIOException)) {
+      throw new ManifoldCFException("Interrupted: " + e.getMessage(), e,
+        ManifoldCFException.INTERRUPTED);
+    }
+    long currentTime = System.currentTimeMillis();
+    throw new ServiceInterruption("IO exception: "+e.getMessage(), e, currentTime + 300000L,
+      currentTime + 3 * 60 * 60000L,-1,false);
+  }
+  
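+  /** A ParseException from the date handling indicates a malformed repository date value;
+  * surface it as a hard ManifoldCFException rather than retrying.
+  */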
+  private void handleParseException(ParseException e) 
+      throws ManifoldCFException {
+    throw new ManifoldCFException(
+        "Alfresco: Error during parsing date values. This should never happen: "+e.getMessage(),e);
+  }
 
 }
diff --git a/connectors/alfresco/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/alfresco/ContentModelUtils.java b/connectors/alfresco/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/alfresco/ContentModelUtils.java
index b1f6ccf..a6b5836 100644
--- a/connectors/alfresco/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/alfresco/ContentModelUtils.java
+++ b/connectors/alfresco/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/alfresco/ContentModelUtils.java
@@ -29,6 +29,7 @@
 import org.alfresco.webservice.util.AuthenticationDetails;
 import org.alfresco.webservice.util.AuthenticationUtils;
 import org.alfresco.webservice.util.WebServiceFactory;
+import org.apache.manifoldcf.core.interfaces.ManifoldCFException;
 import org.apache.manifoldcf.crawler.system.Logging;
 
 public class ContentModelUtils {
@@ -54,37 +55,52 @@
    * @param node
    * @return TRUE if the reference contains a node that is an Alfresco space, otherwise FALSE
    */
-  public static boolean isFolder(String username, String password, AuthenticationDetails session, Reference node){
+  public static boolean isFolder(String endpoint, String username, String password, int socketTimeout, AuthenticationDetails session, Reference node) throws ManifoldCFException {
     QueryResult queryResult = null;
     try {
+      WebServiceFactory.setEndpointAddress(endpoint);
+      WebServiceFactory.setTimeoutMilliseconds(socketTimeout);
       AuthenticationUtils.startSession(username, password);
       session = AuthenticationUtils.getAuthenticationDetails();
       queryResult = WebServiceFactory.getRepositoryService().queryChildren(node);
+      if(queryResult!=null){
+        ResultSet rs = queryResult.getResultSet();
+        if(rs!=null){
+          ResultSetRow[] rows = rs.getRows();
+          if(rows!=null){
+            if(rows.length>0){
+              return true;
+            }
+          }
+        }
+      }
+      AuthenticationUtils.endSession();
     } catch (RepositoryFault e) {
       Logging.connectors.warn(
           "Alfresco: Repository Error during the queryChildren: "
               + e.getMessage(), e);
+      ContentModelUtils.handleRepositoryFaultException(e);
     } catch (RemoteException e) {
       Logging.connectors.warn(
           "Alfresco: Remote Error during the queryChildren: "
               + e.getMessage(), e);
+      ContentModelUtils.handleRemoteException(e);
     } finally {
-      AuthenticationUtils.endSession();
       session = null;
     }
-    
-    if(queryResult!=null){
-      ResultSet rs = queryResult.getResultSet();
-      if(rs!=null){
-        ResultSetRow[] rows = rs.getRows();
-        if(rows!=null){
-          if(rows.length>0){
-            return true;
-          }
-        }
-      }
-    }
     return false;
   }
   
-}
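+  /** Wrap a RepositoryFault raised while querying children in a ManifoldCFException,
+  * so that isFolder() exposes a single checked exception type to its callers.
+  */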
+  public static void handleRepositoryFaultException(RepositoryFault e) 
+      throws ManifoldCFException {
+    throw new ManifoldCFException(
+        "Alfresco: Error during getting children: "+e.getMessage(),e);
+  }
+  
+  public static void handleRemoteException(RemoteException e) 
+      throws ManifoldCFException {
+    throw new ManifoldCFException(
+        "Alfresco: Error during getting children: "+e.getMessage(),e);
+  }
+  
+}
\ No newline at end of file
diff --git a/connectors/alfresco/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/alfresco/ContentReader.java b/connectors/alfresco/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/alfresco/ContentReader.java
index 07c1222..7cd376e 100644
--- a/connectors/alfresco/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/alfresco/ContentReader.java
+++ b/connectors/alfresco/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/alfresco/ContentReader.java
@@ -18,13 +18,13 @@
  */
 package org.apache.manifoldcf.crawler.connectors.alfresco;
 
+import java.io.IOException;
 import java.io.InputStream;
 import java.rmi.RemoteException;
 
 import org.alfresco.webservice.authentication.AuthenticationFault;
 import org.alfresco.webservice.content.Content;
 import org.alfresco.webservice.content.ContentFault;
-import org.alfresco.webservice.content.ContentServiceSoapBindingStub;
 import org.alfresco.webservice.types.Predicate;
 import org.alfresco.webservice.util.AuthenticationDetails;
 import org.alfresco.webservice.util.AuthenticationUtils;
@@ -39,35 +39,42 @@
    * @param predicate
    * @return an unique binary for content
    */
-  public static Content read(String username, String password, AuthenticationDetails session, Predicate predicate, String contentProperty) {
+  public static Content read(String endpoint, String username, String password, int socketTimeout, AuthenticationDetails session, Predicate predicate, String contentProperty) throws IOException {
     Content[] resultBinary = null;
     try {
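+      // configure the shared WebServiceFactory with this connection's endpoint and socket timeout before opening a session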
+      WebServiceFactory.setEndpointAddress(endpoint);
+      WebServiceFactory.setTimeoutMilliseconds(socketTimeout);
       AuthenticationUtils.startSession(username, password);
       session = AuthenticationUtils.getAuthenticationDetails();
-      ContentServiceSoapBindingStub contentService = WebServiceFactory.getContentService();
-      resultBinary = contentService.read(predicate, contentProperty);
+      resultBinary = WebServiceFactory.getContentService().read(predicate, contentProperty);
+      AuthenticationUtils.endSession();
     } catch (ContentFault e) {
         Logging.connectors
         .error(
             "Alfresco: Content fault exception error during getting the content binary in processDocuments. " +
             "Node: "+predicate.getNodes()[0].getPath() + ". "
                 + e.getMessage(), e);
+        throw new IOException("Alfresco: Content fault exception error during getting the content binary in processDocuments. " +
+            "Node: "+predicate.getNodes()[0].getPath() + ". "
+            + e.getMessage(), e);
     } catch (RemoteException e) {
         Logging.connectors
         .error(
             "Alfresco: Remote exception error during getting the content binary in processDocuments. " +
             "Node: "+predicate.getNodes()[0].getPath() + ". "
                 + e.getMessage(), e);
+        throw e;
     } finally{
-      AuthenticationUtils.endSession();
       session = null;
     }
     return resultBinary[0];
   }
   
-  public static InputStream getBinary(Content binary, String username, String password, AuthenticationDetails session){
+  public static InputStream getBinary(String endpoint, Content binary, String username, String password, int socketTimeout, AuthenticationDetails session) throws IOException {
     InputStream is = null;
-    try {
+    try {
+      WebServiceFactory.setEndpointAddress(endpoint);
+      WebServiceFactory.setTimeoutMilliseconds(socketTimeout);
       AuthenticationUtils.startSession(username, password);
       session = AuthenticationUtils.getAuthenticationDetails();
       is = ContentUtils.getContentAsInputStream(binary);
@@ -76,8 +83,9 @@
       .error(
           "Alfresco: Error during getting the binary for the node: "+binary.getNode().getPath()+"."
               + e.getMessage(), e);
+      throw e;
     }
     return is;
   }
   
-}
+}
\ No newline at end of file
diff --git a/connectors/alfresco/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/alfresco/NodeUtils.java b/connectors/alfresco/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/alfresco/NodeUtils.java
index 6a0a196..51d8687 100644
--- a/connectors/alfresco/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/alfresco/NodeUtils.java
+++ b/connectors/alfresco/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/alfresco/NodeUtils.java
@@ -18,6 +18,7 @@
  */
 package org.apache.manifoldcf.crawler.connectors.alfresco;
 
+import java.io.IOException;
 import java.rmi.RemoteException;
 
 import org.alfresco.webservice.repository.RepositoryFault;
@@ -50,23 +51,28 @@
    * @param predicate
    * @return the Node object instance of the current content
    */
-  public static Node get(String username, String password, AuthenticationDetails session, Predicate predicate){
+  public static Node get(String endpoint, String username, String password, int socketTimeout, AuthenticationDetails session, Predicate predicate) throws IOException {
     Node[] resultNodes = null;
     try {
+      WebServiceFactory.setEndpointAddress(endpoint);
+      WebServiceFactory.setTimeoutMilliseconds(socketTimeout);
       AuthenticationUtils.startSession(username, password);
       session = AuthenticationUtils.getAuthenticationDetails();
       resultNodes = WebServiceFactory.getRepositoryService().get(predicate);
+      AuthenticationUtils.endSession();
     } catch (RepositoryFault e) {
       Logging.connectors.error(
           "Alfresco: RepositoryFault during getting a node in processDocuments. Node: "
               + predicate.getNodes()[0].getPath() + ". " + e.getMessage(), e);
+      throw new IOException("Alfresco: RepositoryFault during getting a node in processDocuments. Node: "
+          + predicate.getNodes()[0].getPath() + ". " + e.getMessage(), e);
     } catch (RemoteException e) {
       Logging.connectors
           .error(
               "Alfresco: Remote exception error during getting a node in processDocuments. Node: "
                   + predicate.getNodes()[0].getPath() + ". " + e.getMessage(), e);
+      throw e;
     } finally {
-      AuthenticationUtils.endSession();
       session = null;
     }
     if(resultNodes!=null && resultNodes.length>0){
diff --git a/connectors/alfresco/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/alfresco/PropertiesUtils.java b/connectors/alfresco/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/alfresco/PropertiesUtils.java
index ccbc84a..a26c7be 100644
--- a/connectors/alfresco/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/alfresco/PropertiesUtils.java
+++ b/connectors/alfresco/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/alfresco/PropertiesUtils.java
@@ -19,7 +19,6 @@
 package org.apache.manifoldcf.crawler.connectors.alfresco;
 
 import java.text.ParseException;
-import java.text.SimpleDateFormat;
 import java.util.ArrayList;
 import java.util.Date;
 import java.util.Iterator;
@@ -28,6 +27,7 @@
 import org.alfresco.webservice.types.NamedValue;
 import org.apache.commons.lang.StringUtils;
 import org.apache.manifoldcf.agents.interfaces.RepositoryDocument;
+import org.apache.manifoldcf.core.common.DateParser;
 import org.apache.manifoldcf.core.interfaces.ManifoldCFException;
 
 /**
@@ -37,17 +37,9 @@
  */
 public class PropertiesUtils {
 
-  private static final String PROP_CONTENT_PREFIX_1 = "contentUrl";
-  private static final String PROP_CONTENT_PREFIX_2 = "ContentData";
+  private static final String PROP_CONTENT_PREFIX = "contentUrl";
   private static final String PROP_CONTENT_SEP = "|";
   private static final String PROP_MIMETYPE_SEP = "=";
-  
-  private final static ThreadLocal<SimpleDateFormat> ISO8601_DATE_FORMAT =
-      new ThreadLocal<SimpleDateFormat>() {
-             protected SimpleDateFormat initialValue() {
-                  return new SimpleDateFormat("yyyy-MM-dd'T'hh:mm:ss.mmm+hh:mm");
-              }
-       };
 
   private static final String PROP_MODIFIED = Constants.createQNameString(Constants.NAMESPACE_CONTENT_MODEL, "modified");
   
@@ -67,27 +59,49 @@
   
   public static void ingestProperties(RepositoryDocument rd, NamedValue[] properties, List<NamedValue> contentProperties) throws ManifoldCFException, ParseException{
     for(NamedValue property : properties){
-      if(property.getIsMultiValue()){
-        String[] values = property.getValues();
-        if(values!=null){
-          for (String value : values) {
-            rd.addField(property.getName(), value);
+      if(property!=null && StringUtils.isNotEmpty(property.getName())){
+        if(property.getIsMultiValue()){
+          String[] values = property.getValues();
+          if(values!=null){
+            for (String value : values) {
+              if(StringUtils.isNotEmpty(value)){
+                rd.addField(property.getName(), value);
+              }
+            }
+          }
+        } else {
+          if(StringUtils.isNotEmpty(property.getValue())){
+            rd.addField(property.getName(), property.getValue());
           }
         }
-      } else {
-        rd.addField(property.getName(), property.getValue());
       }
     }
     
-    String fileName = PropertiesUtils.getPropertyValues(properties, Constants.PROP_NAME)[0];
+    String fileName = StringUtils.EMPTY;
+    String[] propertyValues = PropertiesUtils.getPropertyValues(properties, Constants.PROP_NAME);
+    if(propertyValues!=null && propertyValues.length>0){
+      fileName = propertyValues[0];
+    }
+    
     String mimeType = PropertiesUtils.getMimeType(contentProperties);
     Date createdDate = PropertiesUtils.getDatePropertyValue(properties, Constants.PROP_CREATED);
     Date modifiedDate = PropertiesUtils.getDatePropertyValue(properties, PROP_MODIFIED);
-     
-    rd.setFileName(fileName);
-    rd.setMimeType(mimeType);
-    rd.setCreatedDate(createdDate);
-    rd.setModifiedDate(modifiedDate);
+    
+    if(StringUtils.isNotEmpty(fileName)){
+      rd.setFileName(fileName);
+    }
+    
+    if(StringUtils.isNotEmpty(mimeType)){
+      rd.setMimeType(mimeType);
+    }
+    
+    if(createdDate!=null){
+      rd.setCreatedDate(createdDate);
+    }
+    
+    if(modifiedDate!=null){
+      rd.setModifiedDate(modifiedDate);
+    }
   }
   
   /**
@@ -100,14 +114,10 @@
     if(properties!=null){
       for (NamedValue property : properties) {
         if(property!=null){
-          if(property.getIsMultiValue()!=null){
-            if(!property.getIsMultiValue()){
-              if(StringUtils.isNotEmpty(property.getValue())){
-                if(property.getValue().startsWith(PROP_CONTENT_PREFIX_1)
-                    || property.getValue().startsWith(PROP_CONTENT_PREFIX_2)){
-                    contentProperties.add(property);
-                }
-              }
+          if(property.getIsMultiValue()!=null && !property.getIsMultiValue()){
+            if(StringUtils.isNotEmpty(property.getValue()) 
+                && property.getValue().startsWith(PROP_CONTENT_PREFIX)){
+                  contentProperties.add(property);
             }
           }
         }
@@ -174,8 +184,11 @@
         if(Constants.PROP_CONTENT.equals(contentProperty.getName())){
           String defaultContentPropertyValue = contentProperty.getValue();
           String[] contentSplitted = StringUtils.split(defaultContentPropertyValue, PROP_CONTENT_SEP);
-          String[] mimeTypeSplitted = StringUtils.split(contentSplitted[1], PROP_MIMETYPE_SEP);
-          return mimeTypeSplitted[1];
+          if (contentSplitted.length > 1) {
+            String[] mimeTypeSplitted = StringUtils.split(contentSplitted[1], PROP_MIMETYPE_SEP);
+            return mimeTypeSplitted[1];
+          }
+          return contentSplitted[0];
         }
       }
     }
@@ -189,9 +202,17 @@
    * @throws ParseException 
    */
   public static Date getDatePropertyValue(NamedValue[] properties, String qname) throws ParseException{
-    String dateString = PropertiesUtils.getPropertyValues(properties, qname)[0];
-    //String finalDateString = dateString.replaceAll(ISO8601_REPLACE, ISO8601_REPLACE_TO);
-    return ISO8601_DATE_FORMAT.get().parse(dateString);
+    Date date = null;
+    if(properties!=null && properties.length>0){
+      String[] propertyValues = PropertiesUtils.getPropertyValues(properties, qname);
+      if(propertyValues!=null && propertyValues.length>0){
+        String dateString = propertyValues[0];
+        if(StringUtils.isNotEmpty(dateString)){
+          date = DateParser.parseISO8601Date(dateString);
+        }
+      }
+    }
+    return date;
   }
   
 }
diff --git a/connectors/alfresco/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/alfresco/SearchUtils.java b/connectors/alfresco/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/alfresco/SearchUtils.java
index f92f342..d9b6c42 100644
--- a/connectors/alfresco/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/alfresco/SearchUtils.java
+++ b/connectors/alfresco/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/alfresco/SearchUtils.java
@@ -18,6 +18,7 @@
  */
 package org.apache.manifoldcf.crawler.connectors.alfresco;
 
+import java.io.IOException;
 import java.rmi.RemoteException;
 import java.util.ArrayList;
 import java.util.List;
@@ -54,44 +55,56 @@
     "{http://www.alfresco.org/model/site/1.0}sites"};
   
 
-  public static QueryResult luceneSearch(String username, String password, AuthenticationDetails session, String luceneQuery){
+  public static QueryResult luceneSearch(String endpoint, String username, String password, int socketTimeout, AuthenticationDetails session, String luceneQuery) throws IOException {
     QueryResult queryResult = null;
     Query query = new Query(Constants.QUERY_LANG_LUCENE, luceneQuery);
     try {
+      WebServiceFactory.setEndpointAddress(endpoint);
+      WebServiceFactory.setTimeoutMilliseconds(socketTimeout);
       AuthenticationUtils.startSession(username, password);
       session = AuthenticationUtils.getAuthenticationDetails();
       queryResult = WebServiceFactory.getRepositoryService().query(STORE, query, false);
+      AuthenticationUtils.endSession();
     } catch (RepositoryFault e) {
       Logging.connectors.error(
           "Alfresco: Repository fault during addSeedDocuments: "
               + e.getMessage(), e);
+      throw new IOException("Alfresco: Repository fault during addSeedDocuments: "
+          + e.getMessage(), e);
     } catch (RemoteException e) {
       Logging.connectors.error(
           "Alfresco: Remote exception during addSeedDocuments: "
               + e.getMessage(), e);
-    } finally{
-      AuthenticationUtils.endSession();
+      throw e;
+    } finally {
+      session = null;
     }
     return queryResult;
   }
   
-  public static QueryResult getChildren(String username, String password, AuthenticationDetails session, Reference reference){
+  public static QueryResult getChildren(String endpoint, String username, String password, int socketTimeout, AuthenticationDetails session, Reference reference) throws IOException {
     QueryResult queryResult = null;
     try {
+      WebServiceFactory.setEndpointAddress(endpoint);
+      WebServiceFactory.setTimeoutMilliseconds(socketTimeout);
       AuthenticationUtils.startSession(username, password);
-      session = AuthenticationUtils.getAuthenticationDetails();
+      session = AuthenticationUtils.getAuthenticationDetails();  
       queryResult = WebServiceFactory.getRepositoryService().queryChildren(reference);
+      AuthenticationUtils.endSession();
     } catch (RepositoryFault e) {
       Logging.connectors.error(
           "Alfresco: RepositoryFault during getting a node in processDocuments. Node: "
               + reference.getPath() + ". " + e.getMessage(), e);
+      throw new IOException("Alfresco: RepositoryFault during getting a node in processDocuments. Node: "
+              + reference.getPath() + ". " + e.getMessage(), e);
+      
     } catch (RemoteException e) {
       Logging.connectors
           .error(
               "Alfresco: Remote exception error during getting a node in processDocuments. Node: "
                   + reference.getPath() + ". " + e.getMessage(), e);
+      throw e;
     } finally {
-      AuthenticationUtils.endSession();
       session = null;
     }
     return queryResult;
@@ -104,9 +117,9 @@
    * @param session
    * @return filtered children of the Company Home without all the special spaces
    */
-  public static QueryResult getChildrenFromCompanyHome(String username, String password, AuthenticationDetails session){
+  public static QueryResult getChildrenFromCompanyHome(String endpoint, String username, String password, int socketTimeout, AuthenticationDetails session) throws IOException {
     Reference companyHome = new Reference(STORE, null, XPATH_COMPANY_HOME);
-    QueryResult queryResult = SearchUtils.getChildren(username,password,session,companyHome);
+    QueryResult queryResult = SearchUtils.getChildren(endpoint, username, password, socketTimeout, session, companyHome);
     ResultSet rs = queryResult.getResultSet();
     ResultSetRow[] rows = rs.getRows();
     List<ResultSetRow> filteredRows = new ArrayList<ResultSetRow>();
diff --git a/connectors/alfresco/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/alfresco/common_en_US.properties b/connectors/alfresco/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/alfresco/common_en_US.properties
index 0b2cc6e..03e384e 100644
--- a/connectors/alfresco/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/alfresco/common_en_US.properties
+++ b/connectors/alfresco/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/alfresco/common_en_US.properties
@@ -26,6 +26,7 @@
 AlfrescoConnector.ServerColon=Server:
 AlfrescoConnector.PortColon=Port:
 AlfrescoConnector.PathColon=Path:
+AlfrescoConnector.SocketTimeoutColon=Socket Timeout:
 
 AlfrescoConnector.TenantDomainEquals=tenantDomain=
 AlfrescoConnector.UserNameEquals=username=
@@ -34,6 +35,7 @@
 AlfrescoConnector.ServerEquals=server=
 AlfrescoConnector.PortEquals=port=
 AlfrescoConnector.PathEquals=path=
+AlfrescoConnector.SocketTimeoutEquals=socketTimeout=
 
 AlfrescoConnector.TheUsernameMustNotBeNull=The username must not be null
 AlfrescoConnector.ThePasswordMustNotBeNull=The password must not be null
@@ -42,6 +44,8 @@
 AlfrescoConnector.ThePortMustNotBeNull=The port must not be null
 AlfrescoConnector.TheServerPortMustBeAValidInteger=The server port must be a valid integer
 AlfrescoConnector.PathMustNotBeNull=Path must not be null
+AlfrescoConnector.TheSocketTimeoutMustNotBeNull=The connector socket timeout must not be null
+AlfrescoConnector.TheSocketTimeoutMustBeAValidInteger=The connector socket timeout must be a valid integer
 
 AlfrescoConnector.LuceneQuery=Lucene Query
 
diff --git a/connectors/alfresco/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/alfresco/common_ja_JP.properties b/connectors/alfresco/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/alfresco/common_ja_JP.properties
index a579732..3d964bc 100644
--- a/connectors/alfresco/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/alfresco/common_ja_JP.properties
+++ b/connectors/alfresco/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/alfresco/common_ja_JP.properties
@@ -26,6 +26,7 @@
 AlfrescoConnector.ServerColon=サーバ:
 AlfrescoConnector.PortColon=ポート:
 AlfrescoConnector.PathColon=パス:
+AlfrescoConnector.SocketTimeoutColon=Socket Timeout:
 
 AlfrescoConnector.TenantDomainEquals=テナントドメイン=
 AlfrescoConnector.UserNameEquals=ユーザ名=
@@ -34,6 +35,7 @@
 AlfrescoConnector.ServerEquals=サーバ=
 AlfrescoConnector.PortEquals=ポート=
 AlfrescoConnector.PathEquals=パス=
+AlfrescoConnector.SocketTimeoutEquals=socketTimeout=
 
 AlfrescoConnector.TheUsernameMustNotBeNull=ユーザ名を入力してください
 AlfrescoConnector.ThePasswordMustNotBeNull=パスワードを入力してください
@@ -42,6 +44,8 @@
 AlfrescoConnector.ThePortMustNotBeNull=ポート番号を入力してください
 AlfrescoConnector.TheServerPortMustBeAValidInteger=サーバポートには整数を入力してください
 AlfrescoConnector.PathMustNotBeNull=パスを入力してください
+AlfrescoConnector.TheSocketTimeoutMustNotBeNull=The connector socket timeout must not be null
+AlfrescoConnector.TheSocketTimeoutMustBeAValidInteger=The connector socket timeout must be a valid integer
 
 AlfrescoConnector.LuceneQuery=Luceneクエリー
 
diff --git a/connectors/alfresco/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/alfresco/editConfiguration.js b/connectors/alfresco/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/alfresco/editConfiguration.js
index 3611e5e..64af925 100644
--- a/connectors/alfresco/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/alfresco/editConfiguration.js
+++ b/connectors/alfresco/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/alfresco/editConfiguration.js
@@ -73,6 +73,20 @@
     editconnection.path.focus();
     return false;
   }
+  if (editconnection.socketTimeout.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('AlfrescoConnector.TheSocketTimeoutMustNotBeNull'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('AlfrescoConnector.Server'))");
+    editconnection.socketTimeout.focus();
+    return false;
+  }
+  else if (!isInteger(editconnection.socketTimeout.value))
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('AlfrescoConnector.TheSocketTimeoutMustBeAValidInteger'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('AlfrescoConnector.Server'))");
+    editconnection.socketTimeout.focus();
+    return false;
+  }
   return true;
 }
 // -->
diff --git a/connectors/alfresco/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/alfresco/editConfiguration_Server.html b/connectors/alfresco/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/alfresco/editConfiguration_Server.html
index 1cc5843..d1b8c41 100644
--- a/connectors/alfresco/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/alfresco/editConfiguration_Server.html
+++ b/connectors/alfresco/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/alfresco/editConfiguration_Server.html
@@ -104,6 +104,16 @@
         <input id="path" name="path" type="text" size="32" value="$Encoder.attributeEscape($PATH)" />
     </td>
   </tr>
+  <tr>
+    <td class="description">
+      <nobr>
+        $Encoder.bodyEscape($ResourceBundle.getString('AlfrescoConnector.SocketTimeoutColon'))
+      </nobr>
+    </td>
+    <td class="value">
+        <input id="socketTimeout" name="socketTimeout" type="text" size="10" value="$Encoder.attributeEscape($SOCKETTIMEOUT)" />
+    </td>
+  </tr>
 </table>
 
 #else
@@ -115,6 +125,7 @@
 <input type="hidden" name="port" value="$Encoder.attributeEscape($PORT)" />
 <input type="hidden" name="path" value="$Encoder.attributeEscape($PATH)" />
 <input type="hidden" name="tenantDomain" value="$Encoder.attributeEscape($TENANTDOMAIN)" />
+<input type="hidden" name="socketTimeout" value="$Encoder.attributeEscape($SOCKETTIMEOUT)" />
 
 #end
 
diff --git a/connectors/alfresco/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/alfresco/viewConfiguration.html b/connectors/alfresco/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/alfresco/viewConfiguration.html
index 4f1b0e0..c943fd7 100644
--- a/connectors/alfresco/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/alfresco/viewConfiguration.html
+++ b/connectors/alfresco/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/alfresco/viewConfiguration.html
@@ -51,6 +51,10 @@
         $Encoder.bodyEscape($ResourceBundle.getString('AlfrescoConnector.PathEquals'))$Encoder.bodyEscape($PATH)
       </nobr>
       <br />
+      <nobr>
+        $Encoder.bodyEscape($ResourceBundle.getString('AlfrescoConnector.SocketTimeoutEquals'))$Encoder.bodyEscape($SOCKETTIMEOUT)
+      </nobr>
+      <br />
     </td>
   </tr>
 </table>
diff --git a/connectors/alfresco/pom.xml b/connectors/alfresco/pom.xml
index 4809cc5..7940efb 100644
--- a/connectors/alfresco/pom.xml
+++ b/connectors/alfresco/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.manifoldcf</groupId>
     <artifactId>mcf-connectors</artifactId>
-    <version>1.2-SNAPSHOT</version>
+    <version>1.5-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
   
@@ -53,6 +53,7 @@
         <url>https://artifacts.alfresco.com/nexus/content/repositories/alfresco-docs</url>
     </repository>
   </repositories>
+  <!-- 
   <pluginRepositories>
     <pluginRepository>
       <id>alfresco-release</id>
@@ -66,7 +67,7 @@
       </snapshots>
     </pluginRepository>
   </pluginRepositories>
-
+-->
   <build>
     <sourceDirectory>${basedir}/connector/src/main/java</sourceDirectory>
     <testSourceDirectory>${basedir}/connector/src/test/java</testSourceDirectory>
@@ -78,15 +79,21 @@
           <include>**/*.js</include>
         </includes>
       </resource>
-    </resources>
+      <resource>
+        <directory>${basedir}/connector/src/main/native2ascii</directory>
+        <includes>
+          <include>**/*.properties</include>
+        </includes>
+      </resource>
+    </resources> 
+
     <plugins>
       <plugin>
         <groupId>org.codehaus.mojo</groupId>
         <artifactId>native2ascii-maven-plugin</artifactId>
-        <version>1.0-alpha-1</version>
+        <version>1.0-beta-1</version>
         <configuration>
-            <dest>target/classes</dest>
-            <src>connector/src/main/native2ascii</src>
+            <workDir>target/classes</workDir>
         </configuration>
         <executions>
             <execution>
@@ -96,7 +103,9 @@
                 </goals>
                 <configuration>
                     <encoding>UTF8</encoding>
-                    <includes>**/*.properties</includes>
+                    <includes>
+                      <include>**/*.properties</include>
+                    </includes>
                 </configuration>
             </execution>
         </executions>
@@ -215,7 +224,7 @@
     <dependency>
       <groupId>org.alfresco</groupId>
       <artifactId>alfresco-web-service-client</artifactId>
-      <version>3.4.e</version>
+      <version>4.2.c</version>
     </dependency>
     <dependency>
         <groupId>commons-lang</groupId>
diff --git a/connectors/cmis/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/cmis/CmisRepositoryConnector.java b/connectors/cmis/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/cmis/CmisRepositoryConnector.java
index 86e5c16..a3d53a5 100644
--- a/connectors/cmis/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/cmis/CmisRepositoryConnector.java
+++ b/connectors/cmis/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/cmis/CmisRepositoryConnector.java
@@ -55,6 +55,7 @@
 import org.apache.manifoldcf.agents.interfaces.ServiceInterruption;
 import org.apache.manifoldcf.core.interfaces.ConfigParams;
 import org.apache.manifoldcf.core.interfaces.IHTTPOutput;
+import org.apache.manifoldcf.core.interfaces.IPasswordMapperActivity;
 import org.apache.manifoldcf.core.interfaces.IPostParameters;
 import org.apache.manifoldcf.core.interfaces.IThreadContext;
 import org.apache.manifoldcf.core.interfaces.ManifoldCFException;
@@ -602,6 +603,16 @@
     }
   }
 
+  /** This method is called to assess whether this connector instance should
+  * actually be counted as being connected.
+  *@return true if the connector instance is actually connected.
+  */
+  @Override
+  public boolean isConnected()
+  {
+    return session != null;
+  }
+
   /** Queue "seed" documents.  Seed documents are the starting places for crawling activity.  Documents
    * are seeded when this method calls appropriate methods in the passed in ISeedingActivity object.
    *
@@ -699,7 +710,7 @@
   *@param newMap is the map to fill in
   *@param parameters is the current set of configuration parameters
   */
-  private static void fillInServerConfigurationMap(Map<String,String> newMap, ConfigParams parameters)
+  private static void fillInServerConfigurationMap(Map<String,String> newMap, IPasswordMapperActivity mapper, ConfigParams parameters)
   {
     String username = parameters.getParameter(CmisConfig.USERNAME_PARAM);
     String password = parameters.getParameter(CmisConfig.PASSWORD_PARAM);
@@ -714,6 +725,8 @@
       username = StringUtils.EMPTY;
     if(password == null)
       password = StringUtils.EMPTY;
+    else
+      password = mapper.mapPasswordToKey(password);
     if(protocol == null)
       protocol = CmisConfig.PROTOCOL_DEFAULT_VALUE;
     if(server == null)
@@ -758,7 +771,7 @@
     Map<String,String> paramMap = new HashMap<String,String>();
   
     // Fill in map from each tab
-    fillInServerConfigurationMap(paramMap, parameters);
+    fillInServerConfigurationMap(paramMap, out, parameters);
 
     outputResource(VIEW_CONFIG_FORWARD, out, locale, paramMap);
   }
@@ -791,7 +804,7 @@
     Map<String,String> paramMap = new HashMap<String,String>();
 
     // Fill in the parameters from each tab
-    fillInServerConfigurationMap(paramMap, parameters);
+    fillInServerConfigurationMap(paramMap, out, parameters);
 
     // Output the Javascript - only one Velocity template for all tabs
     outputResource(EDIT_CONFIG_HEADER_FORWARD, out, locale, paramMap);
@@ -809,7 +822,7 @@
     // Set the tab name
     paramMap.put("TabName", tabName);
     // Fill in the parameters
-    fillInServerConfigurationMap(paramMap, parameters);
+    fillInServerConfigurationMap(paramMap, out, parameters);
     outputResource(EDIT_CONFIG_FORWARD_SERVER, out, locale, paramMap);
   
   }
@@ -848,7 +861,7 @@
 
     String password = variableContext.getParameter(CmisConfig.PASSWORD_PARAM);
     if (password != null)
-      parameters.setParameter(CmisConfig.PASSWORD_PARAM, password);
+      parameters.setParameter(CmisConfig.PASSWORD_PARAM, variableContext.mapKeyToPassword(password));
 
     String protocol = variableContext.getParameter(CmisConfig.PROTOCOL_PARAM);
     if (protocol != null) {
diff --git a/connectors/cmis/pom.xml b/connectors/cmis/pom.xml
index 5d75777..e791fd2 100644
--- a/connectors/cmis/pom.xml
+++ b/connectors/cmis/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.manifoldcf</groupId>
     <artifactId>mcf-connectors</artifactId>
-    <version>1.2-SNAPSHOT</version>
+    <version>1.5-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
   
@@ -47,15 +47,21 @@
           <include>**/*.js</include>
         </includes>
       </resource>
-    </resources>
+      <resource>
+        <directory>${basedir}/connector/src/main/native2ascii</directory>
+        <includes>
+          <include>**/*.properties</include>
+        </includes>
+      </resource>
+    </resources> 
+
     <plugins>
       <plugin>
         <groupId>org.codehaus.mojo</groupId>
         <artifactId>native2ascii-maven-plugin</artifactId>
-        <version>1.0-alpha-1</version>
+        <version>1.0-beta-1</version>
         <configuration>
-            <dest>target/classes</dest>
-            <src>connector/src/main/native2ascii</src>
+            <workDir>target/classes</workDir>
         </configuration>
         <executions>
             <execution>
@@ -65,7 +71,9 @@
                 </goals>
                 <configuration>
                     <encoding>UTF8</encoding>
-                    <includes>**/*.properties</includes>
+                    <includes>
+                      <include>**/*.properties</include>
+                    </includes>
                 </configuration>
             </execution>
         </executions>
@@ -109,7 +117,7 @@
     <dependency>
         <groupId>org.apache.chemistry.opencmis</groupId>
         <artifactId>chemistry-opencmis-client-impl</artifactId>
-        <version>0.8.0</version>
+        <version>0.9.0</version>
      </dependency>
      <dependency>
         <groupId>commons-lang</groupId>
diff --git a/connectors/documentum/connector/src/main/java/org/apache/manifoldcf/crawler/authorities/DCTM/AuthorityConnector.java b/connectors/documentum/connector/src/main/java/org/apache/manifoldcf/crawler/authorities/DCTM/AuthorityConnector.java
index 0385773..a53f658 100644
--- a/connectors/documentum/connector/src/main/java/org/apache/manifoldcf/crawler/authorities/DCTM/AuthorityConnector.java
+++ b/connectors/documentum/connector/src/main/java/org/apache/manifoldcf/crawler/authorities/DCTM/AuthorityConnector.java
@@ -51,13 +51,8 @@
   protected boolean useSystemAcls = true;
 
   // Documentum has no "deny" tokens, and its document acls cannot be empty, so no local authority deny token is required.
-  // However, it is felt that we need to be suspenders-and-belt, so here is the deny token.
+  // However, it is felt that we need to be suspenders-and-belt, so we use the deny token.
   // The documentum tokens are of the form xxx:yyy, so they cannot collide with the standard deny token.
-  protected static final String denyToken = "DEAD_AUTHORITY";
-
-  protected static final AuthorizationResponse unreachableResponse = new AuthorizationResponse(new String[]{denyToken},AuthorizationResponse.RESPONSE_UNREACHABLE);
-  protected static final AuthorizationResponse userNotFoundResponse = new AuthorizationResponse(new String[]{denyToken},AuthorizationResponse.RESPONSE_USERNOTFOUND);
-  protected static final AuthorizationResponse userUnauthorizedResponse = new AuthorizationResponse(new String[]{denyToken},AuthorizationResponse.RESPONSE_USERUNAUTHORIZED);
 
     /** Cache manager. */
   protected ICacheManager cacheManager = null;
@@ -528,7 +523,7 @@
           {
             if (Logging.authorityConnectors.isDebugEnabled())
               Logging.authorityConnectors.debug("DCTM: No user found for username '" + strUserName + "'");
-            response = userNotFoundResponse;
+            response = RESPONSE_USERNOTFOUND;
             return;
           }
 
@@ -536,7 +531,7 @@
           {
             if (Logging.authorityConnectors.isDebugEnabled())
               Logging.authorityConnectors.debug("DCTM: User found for username '" + strUserName + "' but the account is not active.");
-            response = userUnauthorizedResponse;
+            response = RESPONSE_USERUNAUTHORIZED;
             return;
           }
 
@@ -741,7 +736,7 @@
           if (noSession)
           {
             Logging.authorityConnectors.warn("DCTM: Transient error checking authorization: "+e.getMessage(),e);
-            return unreachableResponse;
+            return RESPONSE_UNREACHABLE;
           }
           session = null;
           lastSessionFetch = -1L;
@@ -820,7 +815,7 @@
           if (noSession)
           {
             Logging.authorityConnectors.warn("DCTM: Transient error checking authorization: "+e.getMessage(),e);
-            return unreachableResponse;
+            return RESPONSE_UNREACHABLE;
           }
           session = null;
           lastSessionFetch = -1L;
@@ -835,7 +830,7 @@
       {
         Logging.authorityConnectors.warn("DCTM: Transient error checking authorization: "+e.getMessage(),e);
         // Transient: Treat as if user does not exist, not like credentials invalid.
-        return unreachableResponse;
+        return RESPONSE_UNREACHABLE;
       }
       throw new ManifoldCFException(e.getMessage(),e);
     }
@@ -848,7 +843,7 @@
   @Override
   public AuthorizationResponse getDefaultAuthorizationResponse(String userName)
   {
-    return unreachableResponse;
+    return RESPONSE_UNREACHABLE;
   }
 
   protected static String insensitiveMatch(boolean insensitive, String field, String value)
@@ -952,6 +947,16 @@
 
   }
 
+  /** This method is called to assess whether this connector instance should
+  * actually be counted as being connected.
+  *@return true if the connector instance is actually connected.
+  */
+  @Override
+  public boolean isConnected()
+  {
+    return session != null;
+  }
+
   /** Disconnect from Documentum.
   */
   @Override
@@ -1128,27 +1133,29 @@
     Locale locale, ConfigParams parameters, String tabName)
     throws ManifoldCFException, IOException
   {
-    String docbaseName = parameters.getParameter(org.apache.manifoldcf.crawler.authorities.DCTM.AuthorityConnector.CONFIG_PARAM_DOCBASE);
+    String docbaseName = parameters.getParameter(CONFIG_PARAM_DOCBASE);
     if (docbaseName == null)
       docbaseName = "";
 
-    String docbaseUserName = parameters.getParameter(org.apache.manifoldcf.crawler.authorities.DCTM.AuthorityConnector.CONFIG_PARAM_USERNAME);
+    String docbaseUserName = parameters.getParameter(CONFIG_PARAM_USERNAME);
     if (docbaseUserName == null)
       docbaseUserName = "";
 
-    String docbasePassword = parameters.getObfuscatedParameter(org.apache.manifoldcf.crawler.authorities.DCTM.AuthorityConnector.CONFIG_PARAM_PASSWORD);
+    String docbasePassword = parameters.getObfuscatedParameter(CONFIG_PARAM_PASSWORD);
     if (docbasePassword == null)
       docbasePassword = "";
+    else
+      docbasePassword = out.mapPasswordToKey(docbasePassword);
 
-    String docbaseDomain = parameters.getParameter(org.apache.manifoldcf.crawler.authorities.DCTM.AuthorityConnector.CONFIG_PARAM_DOMAIN);
+    String docbaseDomain = parameters.getParameter(CONFIG_PARAM_DOMAIN);
     if (docbaseDomain == null)
       docbaseDomain = "";
 
-    String caseInsensitiveUser = parameters.getParameter(org.apache.manifoldcf.crawler.authorities.DCTM.AuthorityConnector.CONFIG_PARAM_CASEINSENSITIVE);
+    String caseInsensitiveUser = parameters.getParameter(CONFIG_PARAM_CASEINSENSITIVE);
     if (caseInsensitiveUser == null)
       caseInsensitiveUser = "false";
 
-    String useSystemAcls = parameters.getParameter(org.apache.manifoldcf.crawler.authorities.DCTM.AuthorityConnector.CONFIG_PARAM_USESYSTEMACLS);
+    String useSystemAcls = parameters.getParameter(CONFIG_PARAM_USESYSTEMACLS);
     if (useSystemAcls == null)
       useSystemAcls = "true";
 
@@ -1303,27 +1310,27 @@
   {
     String docbaseName = variableContext.getParameter("docbasename");
     if (docbaseName != null)
-      parameters.setParameter(org.apache.manifoldcf.crawler.authorities.DCTM.AuthorityConnector.CONFIG_PARAM_DOCBASE,docbaseName);
+      parameters.setParameter(CONFIG_PARAM_DOCBASE,docbaseName);
 	
     String docbaseUserName = variableContext.getParameter("docbaseusername");
     if (docbaseUserName != null)
-      parameters.setParameter(org.apache.manifoldcf.crawler.authorities.DCTM.AuthorityConnector.CONFIG_PARAM_USERNAME,docbaseUserName);
+      parameters.setParameter(CONFIG_PARAM_USERNAME,docbaseUserName);
 	
     String docbasePassword = variableContext.getParameter("docbasepassword");
     if (docbasePassword != null)
-      parameters.setObfuscatedParameter(org.apache.manifoldcf.crawler.authorities.DCTM.AuthorityConnector.CONFIG_PARAM_PASSWORD,docbasePassword);
+      parameters.setObfuscatedParameter(CONFIG_PARAM_PASSWORD,variableContext.mapKeyToPassword(docbasePassword));
 	
     String docbaseDomain = variableContext.getParameter("docbasedomain");
     if (docbaseDomain != null)
-      parameters.setParameter(org.apache.manifoldcf.crawler.authorities.DCTM.AuthorityConnector.CONFIG_PARAM_DOMAIN,docbaseDomain);
+      parameters.setParameter(CONFIG_PARAM_DOMAIN,docbaseDomain);
 
     String caseInsensitiveUser = variableContext.getParameter("usernamecaseinsensitive");
     if (caseInsensitiveUser != null)
-      parameters.setParameter(org.apache.manifoldcf.crawler.authorities.DCTM.AuthorityConnector.CONFIG_PARAM_CASEINSENSITIVE,caseInsensitiveUser);
+      parameters.setParameter(CONFIG_PARAM_CASEINSENSITIVE,caseInsensitiveUser);
 
     String useSystemAcls = variableContext.getParameter("usesystemacls");
     if (useSystemAcls != null)
-      parameters.setParameter(org.apache.manifoldcf.crawler.authorities.DCTM.AuthorityConnector.CONFIG_PARAM_USESYSTEMACLS,useSystemAcls);
+      parameters.setParameter(CONFIG_PARAM_USESYSTEMACLS,useSystemAcls);
     
     String cacheLifetime = variableContext.getParameter("cachelifetime");
     if (cacheLifetime != null)
diff --git a/connectors/documentum/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/DCTM/DCTM.java b/connectors/documentum/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/DCTM/DCTM.java
index 0759393..90d86fb 100644
--- a/connectors/documentum/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/DCTM/DCTM.java
+++ b/connectors/documentum/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/DCTM/DCTM.java
@@ -65,7 +65,7 @@
   /** Documentum has no "deny" tokens, and its document acls cannot be empty, so no local authority deny token is required.
   * However, it is felt that we need to be suspenders-and-belt, so here is the deny token.
   * The documentum tokens are of the form xxx:yyy, so they cannot collide with the standard deny token. */
-  private static final String denyToken = "DEAD_AUTHORITY";
+  private static final String denyToken = GLOBAL_DENY_TOKEN;
 
   protected class GetSessionThread extends Thread
   {
@@ -653,6 +653,16 @@
     releaseCheck();
   }
 
+  /** This method is called to assess whether this connector instance should
+  * actually be counted as being connected.
+  *@return true if the connector instance is actually connected.
+  */
+  @Override
+  public boolean isConnected()
+  {
+    return session != null;
+  }
+
   /** Disconnect from Documentum.
   */
   @Override
@@ -2014,19 +2024,21 @@
     Locale locale, ConfigParams parameters, String tabName)
     throws ManifoldCFException, IOException
   {
-    String docbaseName = parameters.getParameter(org.apache.manifoldcf.crawler.connectors.DCTM.DCTM.CONFIG_PARAM_DOCBASE);
+    String docbaseName = parameters.getParameter(CONFIG_PARAM_DOCBASE);
     if (docbaseName == null)
       docbaseName = "";
-    String docbaseUserName = parameters.getParameter(org.apache.manifoldcf.crawler.connectors.DCTM.DCTM.CONFIG_PARAM_USERNAME);
+    String docbaseUserName = parameters.getParameter(CONFIG_PARAM_USERNAME);
     if (docbaseUserName == null)
       docbaseUserName = "";
-    String docbasePassword = parameters.getObfuscatedParameter(org.apache.manifoldcf.crawler.connectors.DCTM.DCTM.CONFIG_PARAM_PASSWORD);
+    String docbasePassword = parameters.getObfuscatedParameter(CONFIG_PARAM_PASSWORD);
     if (docbasePassword == null)
       docbasePassword = "";
-    String docbaseDomain = parameters.getParameter(org.apache.manifoldcf.crawler.connectors.DCTM.DCTM.CONFIG_PARAM_DOMAIN);
+    else
+      docbasePassword = out.mapPasswordToKey(docbasePassword);
+    String docbaseDomain = parameters.getParameter(CONFIG_PARAM_DOMAIN);
     if (docbaseDomain == null)
       docbaseDomain = "";
-    String webtopBaseUrl = parameters.getParameter(org.apache.manifoldcf.crawler.connectors.DCTM.DCTM.CONFIG_PARAM_WEBTOPBASEURL);
+    String webtopBaseUrl = parameters.getParameter(CONFIG_PARAM_WEBTOPBASEURL);
     if (webtopBaseUrl == null)
       webtopBaseUrl = "http://localhost/webtop/";
 
@@ -2099,23 +2111,23 @@
   {
     String docbaseName = variableContext.getParameter("docbasename");
     if (docbaseName != null)
-      parameters.setParameter(org.apache.manifoldcf.crawler.connectors.DCTM.DCTM.CONFIG_PARAM_DOCBASE,docbaseName);
+      parameters.setParameter(CONFIG_PARAM_DOCBASE,docbaseName);
 
     String docbaseUserName = variableContext.getParameter("docbaseusername");
     if (docbaseUserName != null)
-      parameters.setParameter(org.apache.manifoldcf.crawler.connectors.DCTM.DCTM.CONFIG_PARAM_USERNAME,docbaseUserName);
+      parameters.setParameter(CONFIG_PARAM_USERNAME,docbaseUserName);
 
     String docbasePassword = variableContext.getParameter("docbasepassword");
     if (docbasePassword != null)
-      parameters.setObfuscatedParameter(org.apache.manifoldcf.crawler.connectors.DCTM.DCTM.CONFIG_PARAM_PASSWORD,docbasePassword);
+      parameters.setObfuscatedParameter(CONFIG_PARAM_PASSWORD,variableContext.mapKeyToPassword(docbasePassword));
 
     String docbaseDomain = variableContext.getParameter("docbasedomain");
     if (docbaseDomain != null)
-      parameters.setParameter(org.apache.manifoldcf.crawler.connectors.DCTM.DCTM.CONFIG_PARAM_DOMAIN,docbaseDomain);
+      parameters.setParameter(CONFIG_PARAM_DOMAIN,docbaseDomain);
 
     String webtopBaseUrl = variableContext.getParameter("webtopbaseurl");
     if (webtopBaseUrl != null)
-      parameters.setParameter(org.apache.manifoldcf.crawler.connectors.DCTM.DCTM.CONFIG_PARAM_WEBTOPBASEURL,webtopBaseUrl);
+      parameters.setParameter(CONFIG_PARAM_WEBTOPBASEURL,webtopBaseUrl);
 
     return null;
   }
@@ -2282,7 +2294,7 @@
       while (i < ds.getChildCount())
       {
         SpecificationNode sn = ds.getChild(i++);
-        if (sn.getType().equals(org.apache.manifoldcf.crawler.connectors.DCTM.DCTM.CONFIG_PARAM_LOCATION))
+        if (sn.getType().equals(CONFIG_PARAM_LOCATION))
         {
           String pathDescription = "_" + Integer.toString(k);
           String pathOpName = "pathop" + pathDescription;
@@ -2397,7 +2409,7 @@
       while (i < ds.getChildCount())
       {
         SpecificationNode sn = ds.getChild(i++);
-        if (sn.getType().equals(org.apache.manifoldcf.crawler.connectors.DCTM.DCTM.CONFIG_PARAM_LOCATION))
+        if (sn.getType().equals(CONFIG_PARAM_LOCATION))
         {
           String pathDescription = "_" + Integer.toString(k);
           out.print(
@@ -2530,7 +2542,7 @@
     while (i < ds.getChildCount())
     {
       SpecificationNode sn = ds.getChild(i++);
-      if (sn.getType().equals(org.apache.manifoldcf.crawler.connectors.DCTM.DCTM.CONFIG_PARAM_OBJECTTYPE))
+      if (sn.getType().equals(CONFIG_PARAM_OBJECTTYPE))
       {
         String token = sn.getAttributeValue("token");
         if (token != null && token.length() > 0)
@@ -2546,7 +2558,7 @@
             while (kk < sn.getChildCount())
             {
               SpecificationNode dsn = sn.getChild(kk++);
-              if (dsn.getType().equals(org.apache.manifoldcf.crawler.connectors.DCTM.DCTM.CONFIG_PARAM_ATTRIBUTENAME))
+              if (dsn.getType().equals(CONFIG_PARAM_ATTRIBUTENAME))
               {
                 String attr = dsn.getAttributeValue("attrname");
                 attrMap.put(attr,attr);
@@ -2706,7 +2718,7 @@
     while (i < ds.getChildCount())
     {
       SpecificationNode sn = ds.getChild(i++);
-      if (sn.getType().equals(org.apache.manifoldcf.crawler.connectors.DCTM.DCTM.CONFIG_PARAM_FORMAT))
+      if (sn.getType().equals(CONFIG_PARAM_FORMAT))
       {
         String token = sn.getAttributeValue("value");
         if (token != null && token.length() > 0)
@@ -2808,7 +2820,7 @@
     while (i < ds.getChildCount())
     {
       SpecificationNode sn = ds.getChild(i++);
-      if (sn.getType().equals(org.apache.manifoldcf.crawler.connectors.DCTM.DCTM.CONFIG_PARAM_MAXLENGTH))
+      if (sn.getType().equals(CONFIG_PARAM_MAXLENGTH))
       {
         maxDocLength = sn.getAttributeValue("value");
       }
@@ -2846,7 +2858,7 @@
     while (i < ds.getChildCount())
     {
       SpecificationNode sn = ds.getChild(i++);
-      if (sn.getType().equals(org.apache.manifoldcf.crawler.connectors.DCTM.DCTM.CONFIG_PARAM_PATHNAMEATTRIBUTE))
+      if (sn.getType().equals(CONFIG_PARAM_PATHNAMEATTRIBUTE))
       {
         pathNameAttribute = sn.getAttributeValue("value");
       }
@@ -2858,7 +2870,7 @@
     while (i < ds.getChildCount())
     {
       SpecificationNode sn = ds.getChild(i++);
-      if (sn.getType().equals(org.apache.manifoldcf.crawler.connectors.DCTM.DCTM.CONFIG_PARAM_PATHMAP))
+      if (sn.getType().equals(CONFIG_PARAM_PATHMAP))
       {
         String pathMatch = sn.getAttributeValue("match");
         String pathReplace = sn.getAttributeValue("replace");
@@ -2963,7 +2975,7 @@
       while (i < ds.getChildCount())
       {
       	SpecificationNode sn = ds.getChild(i);
-      	if (sn.getType().equals(org.apache.manifoldcf.crawler.connectors.DCTM.DCTM.CONFIG_PARAM_LOCATION))
+      	if (sn.getType().equals(CONFIG_PARAM_LOCATION))
           ds.removeChild(i);
       	else
           i++;
@@ -2986,7 +2998,7 @@
       	}
       	// Path inserts won't happen until the very end
       	String path = variableContext.getParameter("specpath"+pathDescription);
-      	SpecificationNode node = new SpecificationNode(org.apache.manifoldcf.crawler.connectors.DCTM.DCTM.CONFIG_PARAM_LOCATION);
+      	SpecificationNode node = new SpecificationNode(CONFIG_PARAM_LOCATION);
       	node.setAttribute("path",path);
       	ds.addChild(ds.getChildCount(),node);
       	i++;
@@ -2997,7 +3009,7 @@
       if (op != null && op.equals("Add"))
       {
       	String path = variableContext.getParameter("specpath");
-      	SpecificationNode node = new SpecificationNode(org.apache.manifoldcf.crawler.connectors.DCTM.DCTM.CONFIG_PARAM_LOCATION);
+      	SpecificationNode node = new SpecificationNode(CONFIG_PARAM_LOCATION);
       	node.setAttribute("path",path);
       	ds.addChild(ds.getChildCount(),node);
       }
@@ -3100,7 +3112,7 @@
       while (i < ds.getChildCount())
       {
       	SpecificationNode sn = ds.getChild(i);
-      	if (sn.getType().equals(org.apache.manifoldcf.crawler.connectors.DCTM.DCTM.CONFIG_PARAM_OBJECTTYPE))
+      	if (sn.getType().equals(CONFIG_PARAM_OBJECTTYPE))
           ds.removeChild(i);
       	else
           i++;
@@ -3111,7 +3123,7 @@
       while (i < y.length)
       {
       	String fileType = y[i++];
-      	SpecificationNode node = new SpecificationNode(org.apache.manifoldcf.crawler.connectors.DCTM.DCTM.CONFIG_PARAM_OBJECTTYPE);
+      	SpecificationNode node = new SpecificationNode(CONFIG_PARAM_OBJECTTYPE);
       	node.setAttribute("token",fileType);
       	String isAll = variableContext.getParameter("specfileallattrs_"+fileType);
       	if (isAll != null)
@@ -3122,7 +3134,7 @@
           int k = 0;
           while (k < z.length)
           {
-            SpecificationNode attrNode = new SpecificationNode(org.apache.manifoldcf.crawler.connectors.DCTM.DCTM.CONFIG_PARAM_ATTRIBUTENAME);
+            SpecificationNode attrNode = new SpecificationNode(CONFIG_PARAM_ATTRIBUTENAME);
             attrNode.setAttribute("attrname",z[k++]);
             node.addChild(node.getChildCount(),attrNode);
           }
@@ -3139,7 +3151,7 @@
       while (i < ds.getChildCount())
       {
       	SpecificationNode sn = ds.getChild(i);
-      	if (sn.getType().equals(org.apache.manifoldcf.crawler.connectors.DCTM.DCTM.CONFIG_PARAM_FORMAT))
+      	if (sn.getType().equals(CONFIG_PARAM_FORMAT))
           ds.removeChild(i);
       	else
           i++;
@@ -3150,7 +3162,7 @@
       while (i < y.length)
       {
       	String fileType = y[i++];
-      	SpecificationNode node = new SpecificationNode(org.apache.manifoldcf.crawler.connectors.DCTM.DCTM.CONFIG_PARAM_FORMAT);
+      	SpecificationNode node = new SpecificationNode(CONFIG_PARAM_FORMAT);
       	node.setAttribute("value",fileType);
       	ds.addChild(ds.getChildCount(),node);
       }
@@ -3164,7 +3176,7 @@
       while (i < ds.getChildCount())
       {
       	SpecificationNode sn = ds.getChild(i);
-      	if (sn.getType().equals(org.apache.manifoldcf.crawler.connectors.DCTM.DCTM.CONFIG_PARAM_MAXLENGTH))
+      	if (sn.getType().equals(CONFIG_PARAM_MAXLENGTH))
           ds.removeChild(i);
       	else
           i++;
@@ -3172,7 +3184,7 @@
 
       if (x.length() > 0)
       {
-      	SpecificationNode node = new SpecificationNode(org.apache.manifoldcf.crawler.connectors.DCTM.DCTM.CONFIG_PARAM_MAXLENGTH);
+      	SpecificationNode node = new SpecificationNode(CONFIG_PARAM_MAXLENGTH);
       	node.setAttribute("value",x);
       	ds.addChild(ds.getChildCount(),node);
       }
@@ -3186,14 +3198,14 @@
       while (i < ds.getChildCount())
       {
       	SpecificationNode sn = ds.getChild(i);
-      	if (sn.getType().equals(org.apache.manifoldcf.crawler.connectors.DCTM.DCTM.CONFIG_PARAM_PATHNAMEATTRIBUTE))
+      	if (sn.getType().equals(CONFIG_PARAM_PATHNAMEATTRIBUTE))
           ds.removeChild(i);
       	else
           i++;
       }
       if (xc.length() > 0)
       {
-      	SpecificationNode node = new SpecificationNode(org.apache.manifoldcf.crawler.connectors.DCTM.DCTM.CONFIG_PARAM_PATHNAMEATTRIBUTE);
+      	SpecificationNode node = new SpecificationNode(CONFIG_PARAM_PATHNAMEATTRIBUTE);
       	node.setAttribute("value",xc);
       	ds.addChild(ds.getChildCount(),node);
       }
@@ -3207,7 +3219,7 @@
       while (i < ds.getChildCount())
       {
       	SpecificationNode sn = ds.getChild(i);
-      	if (sn.getType().equals(org.apache.manifoldcf.crawler.connectors.DCTM.DCTM.CONFIG_PARAM_PATHMAP))
+      	if (sn.getType().equals(CONFIG_PARAM_PATHMAP))
           ds.removeChild(i);
       	else
           i++;
@@ -3232,7 +3244,7 @@
       	// Inserts won't happen until the very end
       	String match = variableContext.getParameter("specmatch"+pathDescription);
       	String replace = variableContext.getParameter("specreplace"+pathDescription);
-      	SpecificationNode node = new SpecificationNode(org.apache.manifoldcf.crawler.connectors.DCTM.DCTM.CONFIG_PARAM_PATHMAP);
+      	SpecificationNode node = new SpecificationNode(CONFIG_PARAM_PATHMAP);
       	node.setAttribute("match",match);
       	node.setAttribute("replace",replace);
       	ds.addChild(ds.getChildCount(),node);
@@ -3245,7 +3257,7 @@
       {
       	String match = variableContext.getParameter("specmatch");
       	String replace = variableContext.getParameter("specreplace");
-      	SpecificationNode node = new SpecificationNode(org.apache.manifoldcf.crawler.connectors.DCTM.DCTM.CONFIG_PARAM_PATHMAP);
+      	SpecificationNode node = new SpecificationNode(CONFIG_PARAM_PATHMAP);
       	node.setAttribute("match",match);
       	node.setAttribute("replace",replace);
       	ds.addChild(ds.getChildCount(),node);
@@ -3273,7 +3285,7 @@
     while (i < ds.getChildCount())
     {
       SpecificationNode sn = ds.getChild(i++);
-      if (sn.getType().equals(org.apache.manifoldcf.crawler.connectors.DCTM.DCTM.CONFIG_PARAM_LOCATION))
+      if (sn.getType().equals(CONFIG_PARAM_LOCATION))
       {
         if (seenAny == false)
         {
@@ -3312,7 +3324,7 @@
     while (i < ds.getChildCount())
     {
       SpecificationNode sn = ds.getChild(i++);
-      if (sn.getType().equals(org.apache.manifoldcf.crawler.connectors.DCTM.DCTM.CONFIG_PARAM_OBJECTTYPE))
+      if (sn.getType().equals(CONFIG_PARAM_OBJECTTYPE))
       {
         if (seenAny == false)
         {
@@ -3342,7 +3354,7 @@
           while (k < sn.getChildCount())
           {
             SpecificationNode dsn = sn.getChild(k++);
-            if (dsn.getType().equals(org.apache.manifoldcf.crawler.connectors.DCTM.DCTM.CONFIG_PARAM_ATTRIBUTENAME))
+            if (dsn.getType().equals(CONFIG_PARAM_ATTRIBUTENAME))
             {
               String attrName = dsn.getAttributeValue("attrname");
               out.print(
@@ -3382,7 +3394,7 @@
     while (i < ds.getChildCount())
     {
       SpecificationNode sn = ds.getChild(i++);
-      if (sn.getType().equals(org.apache.manifoldcf.crawler.connectors.DCTM.DCTM.CONFIG_PARAM_FORMAT))
+      if (sn.getType().equals(CONFIG_PARAM_FORMAT))
       {
         if (seenAny == false)
         {
@@ -3421,7 +3433,7 @@
     while (i < ds.getChildCount())
     {
       SpecificationNode sn = ds.getChild(i++);
-      if (sn.getType().equals(org.apache.manifoldcf.crawler.connectors.DCTM.DCTM.CONFIG_PARAM_MAXLENGTH))
+      if (sn.getType().equals(CONFIG_PARAM_MAXLENGTH))
       {
         maxDocumentLength = sn.getAttributeValue("value");
       }
@@ -3511,7 +3523,7 @@
     while (i < ds.getChildCount())
     {
       SpecificationNode sn = ds.getChild(i++);
-      if (sn.getType().equals(org.apache.manifoldcf.crawler.connectors.DCTM.DCTM.CONFIG_PARAM_PATHNAMEATTRIBUTE))
+      if (sn.getType().equals(CONFIG_PARAM_PATHNAMEATTRIBUTE))
       {
         pathNameAttribute = sn.getAttributeValue("value");
       }
@@ -3548,7 +3560,7 @@
     while (i < ds.getChildCount())
     {
       SpecificationNode sn = ds.getChild(i++);
-      if (sn.getType().equals(org.apache.manifoldcf.crawler.connectors.DCTM.DCTM.CONFIG_PARAM_PATHMAP))
+      if (sn.getType().equals(CONFIG_PARAM_PATHMAP))
       {
         String pathMatch = sn.getAttributeValue("match");
         String pathReplace = sn.getAttributeValue("replace");
diff --git a/connectors/dropbox/build.xml b/connectors/dropbox/build.xml
new file mode 100644
index 0000000..bdfa11b
--- /dev/null
+++ b/connectors/dropbox/build.xml
@@ -0,0 +1,40 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<project name="dropbox" default="all">
+
+    <import file="../connector-build.xml"/>
+
+    <path id="connector-classpath">
+        <path refid="mcf-connector-build.connector-classpath"/>
+        <fileset dir="../../lib">
+            <include name="dropbox-client*.jar"/>
+            <include name="json-simple*.jar"/>
+        </fileset>
+    </path>
+
+    <target name="lib" depends="mcf-connector-build.lib,precompile-check" if="canBuild">
+        <mkdir dir="dist/lib"/>
+        <copy todir="dist/lib">
+            <fileset dir="../../lib">
+                <include name="dropbox*.jar"/>
+                <include name="json-simple*.jar"/>
+            </fileset>
+        </copy>
+    </target>
+
+</project>
diff --git a/connectors/dropbox/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/dropbox/DropboxConfig.java b/connectors/dropbox/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/dropbox/DropboxConfig.java
new file mode 100644
index 0000000..c33e3f8
--- /dev/null
+++ b/connectors/dropbox/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/dropbox/DropboxConfig.java
@@ -0,0 +1,55 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+
+package org.apache.manifoldcf.crawler.connectors.dropbox;
+
+/**
+ *
+ * @author andrew
+ */
+public class DropboxConfig {
+     
+  /** User key */
+  public static final String KEY_PARAM = "key";
+
+  /** User secret */
+  public static final String SECRET_PARAM = "secret";
+
+  /** Application key */
+  public static final String APP_KEY_PARAM = "app_key";
+
+  /** Application secret */
+  public static final String APP_SECRET_PARAM = "app_secret";
+
+  /** Repository id */
+  public static final String REPOSITORY_ID_PARAM = "repositoryId";
+  
+  //default values
+  public static final String PATH_DEFAULT_VALUE = "/";
+  public static final String REPOSITORY_ID_DEFAULT_VALUE = "dropbox";
+  
+  public static final String DROPBOX_PATH_PARAM = "dropboxpath";
+  public static final String DROPBOX_PATH_PARAM_DEFAULT_VALUE = "/";
+  
+}
diff --git a/connectors/dropbox/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/dropbox/DropboxRepositoryConnector.java b/connectors/dropbox/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/dropbox/DropboxRepositoryConnector.java
new file mode 100644
index 0000000..8ef8599
--- /dev/null
+++ b/connectors/dropbox/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/dropbox/DropboxRepositoryConnector.java
@@ -0,0 +1,1288 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+
+package org.apache.manifoldcf.crawler.connectors.dropbox;
+
+import org.apache.manifoldcf.core.common.*;
+
+import com.dropbox.client2.DropboxAPI;
+import com.dropbox.client2.exception.DropboxException;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.InterruptedIOException;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Locale;
+import java.util.Map;
+import java.util.Set;
+import java.util.Iterator;
+import org.apache.manifoldcf.crawler.system.Logging;
+import org.apache.manifoldcf.crawler.connectors.BaseRepositoryConnector;
+import org.apache.manifoldcf.agents.interfaces.ServiceInterruption;
+import org.apache.manifoldcf.core.interfaces.ConfigParams;
+import org.apache.manifoldcf.core.interfaces.ManifoldCFException;
+import org.apache.manifoldcf.core.common.XThreadInputStream;
+import org.apache.commons.lang.StringUtils;
+import org.apache.manifoldcf.agents.interfaces.RepositoryDocument;
+import org.apache.manifoldcf.core.interfaces.IHTTPOutput;
+import org.apache.manifoldcf.core.interfaces.IPasswordMapperActivity;
+import org.apache.manifoldcf.core.interfaces.IPostParameters;
+import org.apache.manifoldcf.core.interfaces.IThreadContext;
+import org.apache.manifoldcf.core.interfaces.SpecificationNode;
+import org.apache.manifoldcf.crawler.interfaces.DocumentSpecification;
+import org.apache.manifoldcf.crawler.interfaces.IProcessActivity;
+import org.apache.manifoldcf.crawler.interfaces.ISeedingActivity;
+import org.apache.log4j.Logger;
+
+/**
+ *
+ * @author andrew
+ */
+public class DropboxRepositoryConnector extends BaseRepositoryConnector {
+
+  protected final static String ACTIVITY_READ = "read document";
+  public final static String ACTIVITY_FETCH = "fetch";
+  protected static final String RELATIONSHIP_CHILD = "child";
+  
+  /** Deny access token for default authority */
+  private final static String defaultAuthorityDenyToken = "DEAD_AUTHORITY";
+
+  // Nodes and attributes
+  private static final String JOB_STARTPOINT_NODE_TYPE = "startpoint";
+  private static final String JOB_PATH_ATTRIBUTE = "path";
+  private static final String JOB_ACCESS_NODE_TYPE = "access";
+  private static final String JOB_TOKEN_ATTRIBUTE = "token";
+
+  // Tab properties
+  private static final String DROPBOX_SERVER_TAB_PROPERTY = "DropboxRepositoryConnector.Server";
+  private static final String DROPBOX_PATH_TAB_PROPERTY = "DropboxRepositoryConnector.DropboxPath";
+  private static final String DROPBOX_SECURITY_TAB_PROPERTY = "DropboxRepositoryConnector.Security";
+
+  // Template names
+  
+  /**
+   * Forward to the javascript to check the configuration parameters
+   */
+  private static final String EDIT_CONFIG_HEADER_FORWARD = "editConfiguration.js";
+  /**
+   * Server tab template
+   */
+  private static final String EDIT_CONFIG_FORWARD_SERVER = "editConfiguration_Server.html";
+  /**
+   * Forward to the HTML template to view the configuration parameters
+   */
+  private static final String VIEW_CONFIG_FORWARD = "viewConfiguration.html";
+  
+  /**
+   * Forward to the javascript to check the specification parameters for the
+   * job
+   */
+  private static final String EDIT_SPEC_HEADER_FORWARD = "editSpecification.js";
+  /**
+   * Forward to the template to edit the configuration parameters for the job
+   */
+  private static final String EDIT_SPEC_FORWARD_DROPBOXPATH = "editSpecification_DropboxPath.html";
+  /**
+   * Forward to the template to edit the configuration parameters for the job
+   */
+  private static final String EDIT_SPEC_FORWARD_SECURITY = "editSpecification_Security.html";
+  /**
+   * Forward to the template to view the specification parameters for the job
+   */
+  private static final String VIEW_SPEC_FORWARD = "viewSpecification.html";
+
+  /**
+   * Endpoint server name
+   */
+  protected String server = "dropbox";
+  protected DropboxSession session = null;
+  protected long lastSessionFetch = -1L;
+  protected static final long timeToRelease = 300000L;
+  
+  protected String app_key = null;
+  protected String app_secret = null;
+  protected String key = null;
+  protected String secret = null;
+
+  public DropboxRepositoryConnector() {
+    super();
+  }
+
+  /**
+   * Return the list of activities that this connector supports (i.e. writes
+   * into the log).
+   *
+   * @return the list.
+   */
+  @Override
+  public String[] getActivitiesList() {
+    return new String[]{ACTIVITY_FETCH, ACTIVITY_READ};
+  }
+
+  /**
+   * Get the bin name strings for a document identifier. The bin name
+   * describes the queue to which the document will be assigned for throttling
+   * purposes. Throttling controls the rate at which items in a given queue
+   * are fetched; it does not say anything about the overall fetch rate, which
+   * may operate on multiple queues or bins. For example, if you implement a
+   * web crawler, a good choice of bin name would be the server name, since
+   * that is likely to correspond to a real resource that will need real
+   * throttle protection.
+   *
+   * @param documentIdentifier is the document identifier.
+   * @return the set of bin names. If an empty array is returned, it is
+   * equivalent to there being no request rate throttling available for this
+   * identifier.
+   */
+  @Override
+  public String[] getBinNames(String documentIdentifier) {
+    return new String[]{server};
+  }
+
+  /**
+   * Close the connection. Call this before discarding the connection.
+   */
+  @Override
+  public void disconnect() throws ManifoldCFException {
+    if (session != null) {
+      session.close();
+      session = null;
+      lastSessionFetch = -1L;
+    }
+
+    app_key = null;
+    app_secret= null;
+    key = null;
+    secret = null;
+    
+  }
+
+  /**
+   * Connect.  This method records the configuration parameters (application
+   * key/secret and user key/secret) for the Dropbox connection; the actual
+   * Dropbox session is created lazily by getSession().
+   *
+   * @param configParams is the set of configuration parameters, which in
+   * this case describe the Dropbox application and user credentials.
+   * (This formerly came out of the ini file.)
+   */
+  @Override
+  public void connect(ConfigParams configParams) {
+    super.connect(configParams);
+
+    app_key=params.getParameter(DropboxConfig.APP_KEY_PARAM);
+    app_secret=params.getObfuscatedParameter(DropboxConfig.APP_SECRET_PARAM);
+    key = params.getParameter(DropboxConfig.KEY_PARAM);
+    secret = params.getObfuscatedParameter(DropboxConfig.SECRET_PARAM);
+    
+  }
+
+  /**
+   * Test the connection. Returns a string describing the connection
+   * integrity.
+   *
+   * @return the connection's status as a displayable string.
+   */
+  @Override
+  public String check() throws ManifoldCFException {
+    try {
+      checkConnection();
+      return super.check();
+    } catch (ServiceInterruption e) {
+      return "Connection temporarily failed: " + e.getMessage();
+    } catch (ManifoldCFException e) {
+      return "Connection failed: " + e.getMessage();
+    }
+  }
+
+  protected void checkConnection()
+    throws ManifoldCFException, ServiceInterruption {
+    getSession();
+    CheckConnectionThread t = new CheckConnectionThread();
+    try {
+      t.start();
+      t.join();
+      Throwable thr = t.getException();
+      if (thr != null) {
+        if (thr instanceof DropboxException) {
+          throw (DropboxException) thr;
+        } else if (thr instanceof RuntimeException) {
+          throw (RuntimeException) thr;
+        } else {
+          throw (Error) thr;
+        }
+      }
+      return;
+    } catch (InterruptedException e) {
+      t.interrupt();
+      throw new ManifoldCFException("Interrupted: " + e.getMessage(), e,
+        ManifoldCFException.INTERRUPTED);
+    } catch (DropboxException e) {
+      Logging.connectors.warn("DROPBOX: Error checking repository: " + e.getMessage(), e);
+      handleDropboxException(e);
+    }
+  }
+  
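+  // A sketch of the pattern used throughout this connector: the Dropbox client
+  // library performs blocking network I/O, so each call runs in a daemon worker
+  // thread (CheckConnectionThread below, and similar threads later in this
+  // class).  The calling thread joins the worker and rethrows any exception it
+  // captured, which keeps the framework thread interruptible.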
+  protected class CheckConnectionThread extends Thread {
+
+    protected Throwable exception = null;
+
+    public CheckConnectionThread() {
+      super();
+      setDaemon(true);
+    }
+
+    public void run() {
+      try {
+        session.getRepositoryInfo();
+      } catch (Throwable e) {
+        this.exception = e;
+      }
+    }
+
+    public Throwable getException() {
+      return exception;
+    }
+  }
+
+  /**
+   * Set up a session
+   */
+  protected void getSession() throws ManifoldCFException, ServiceInterruption {
+    if (session == null) {
+      // Check for parameter validity
+
+      if (StringUtils.isEmpty(app_key)) {
+        throw new ManifoldCFException("Parameter " + DropboxConfig.APP_KEY_PARAM
+            + " required but not set");
+      }
+      
+      if (StringUtils.isEmpty(app_secret)) {
+        throw new ManifoldCFException("Parameter " + DropboxConfig.APP_SECRET_PARAM
+            + " required but not set");
+      }
+      
+      
+      if (StringUtils.isEmpty(key)) {
+        throw new ManifoldCFException("Parameter " + DropboxConfig.KEY_PARAM
+            + " required but not set");
+      }
+
+      if (Logging.connectors.isDebugEnabled()) {
+        Logging.connectors.debug("DROPBOX: Username = '" + key + "'");
+      }
+
+      if (StringUtils.isEmpty(secret)) {
+        throw new ManifoldCFException("Parameter " + DropboxConfig.SECRET_PARAM
+            + " required but not set");
+      }
+
+      Logging.connectors.debug("DROPBOX: Password exists");
+
+      
+      // Create a session
+      session = new DropboxSession(app_key, app_secret, key, secret);
+      lastSessionFetch = System.currentTimeMillis();
+    }
+  }
+
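+  // poll() below implements idle-session cleanup: if nothing has refreshed
+  // lastSessionFetch for timeToRelease milliseconds (5 minutes), the session is
+  // closed so the underlying Dropbox client resources are released between
+  // polling cycles.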
+  @Override
+  public void poll() throws ManifoldCFException {
+    if (lastSessionFetch == -1L) {
+      return;
+    }
+
+    long currentTime = System.currentTimeMillis();
+    if (currentTime >= lastSessionFetch + timeToRelease) {
+      session.close();
+      session = null;
+      lastSessionFetch = -1L;
+    }
+
+  }
+
+  /** This method is called to assess whether this connector instance should
+  * actually be counted as being connected.
+  *@return true if the connector instance is actually connected.
+  */
+  @Override
+  public boolean isConnected()
+  {
+    return session != null;
+  }
+
+  /**
+   * Get the maximum number of documents to amalgamate together into one
+   * batch, for this connector.
+   *
+   * @return the maximum number. 0 indicates "unlimited".
+   */
+  @Override
+  public int getMaxDocumentRequest() {
+    return 1;
+  }
+
+  /**
+   * Return the list of relationship types that this connector recognizes.
+   *
+   * @return the list.
+   */
+  @Override
+  public String[] getRelationshipTypes() {
+    return new String[]{RELATIONSHIP_CHILD};
+  }
+
+  /**
+   * Fill in a Server tab configuration parameter map for calling a Velocity
+   * template.
+   *
+   * @param newMap is the map to fill in
+   * @param parameters is the current set of configuration parameters
+   */
+  private static void fillInServerConfigurationMap(Map<String, Object> newMap, IPasswordMapperActivity mapper, ConfigParams parameters) {
+    
+    String app_key = parameters.getParameter(DropboxConfig.APP_KEY_PARAM);
+    String app_secret = parameters.getObfuscatedParameter(DropboxConfig.APP_SECRET_PARAM);
+    
+    String username = parameters.getParameter(DropboxConfig.KEY_PARAM);
+    String password = parameters.getObfuscatedParameter(DropboxConfig.SECRET_PARAM);
+    
+    if (app_key == null) {
+      app_key = StringUtils.EMPTY;
+    }
+    
+    if (app_secret == null) {
+      app_secret = StringUtils.EMPTY;
+    } else {
+      app_secret = mapper.mapPasswordToKey(app_secret);
+    }
+    
+    if (username == null) {
+      username = StringUtils.EMPTY;
+    }
+    if (password == null) {
+      password = StringUtils.EMPTY;
+    } else {
+      password = mapper.mapPasswordToKey(password);
+    }
+    
+    newMap.put("APP_KEY", app_key);
+    newMap.put("APP_SECRET", app_secret);
+    newMap.put("KEY", username);
+    newMap.put("SECRET", password);
+    
+  }
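+
+  // Credential handling note: the obfuscated parameters (app_secret and secret)
+  // are never written back into the form as plain values.  mapPasswordToKey()
+  // substitutes an opaque key for display, and processConfigurationPost() below
+  // calls mapKeyToPassword() to recover the real value when the form is posted.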
+
+  /**
+   * View configuration. This method is called in the body section of the
+   * connector's view configuration page. Its purpose is to present the
+   * connection information to the user. The coder can presume that the HTML
+   * that is output from this configuration will be within appropriate <html>
+   * and <body> tags.
+   *
+   * @param threadContext is the local thread context.
+   * @param out is the output to which any HTML should be sent.
+   * @param parameters are the configuration parameters, as they currently
+   * exist, for this connection being configured.
+   */
+  @Override
+  public void viewConfiguration(IThreadContext threadContext, IHTTPOutput out,
+    Locale locale, ConfigParams parameters) throws ManifoldCFException, IOException {
+    Map<String, Object> paramMap = new HashMap<String, Object>();
+
+    // Fill in map from each tab
+    fillInServerConfigurationMap(paramMap, out, parameters);
+
+    Messages.outputResourceWithVelocity(out,locale,VIEW_CONFIG_FORWARD,paramMap);
+  }
+
+  /**
+   *
+   * Output the configuration header section. This method is called in the
+   * head section of the connector's configuration page. Its purpose is to add
+   * the required tabs to the list, and to output any javascript methods that
+   * might be needed by the configuration editing HTML.
+   *
+   * @param threadContext is the local thread context.
+   * @param out is the output to which any HTML should be sent.
+   * @param parameters are the configuration parameters, as they currently
+   * exist, for this connection being configured.
+   * @param tabsArray is an array of tab names. Add to this array any tab
+   * names that are specific to the connector.
+   */
+  @Override
+  public void outputConfigurationHeader(IThreadContext threadContext,
+    IHTTPOutput out, Locale locale, ConfigParams parameters, List<String> tabsArray)
+    throws ManifoldCFException, IOException {
+    // Add the Server tab
+    tabsArray.add(Messages.getString(locale, DROPBOX_SERVER_TAB_PROPERTY));
+
+    // Map the parameters
+    Map<String, Object> paramMap = new HashMap<String, Object>();
+
+    // Fill in the parameters from each tab
+    fillInServerConfigurationMap(paramMap, out, parameters);
+
+    // Output the Javascript - only one Velocity template for all tabs
+    Messages.outputResourceWithVelocity(out,locale,EDIT_CONFIG_HEADER_FORWARD,paramMap);
+  }
+
+  @Override
+  public void outputConfigurationBody(IThreadContext threadContext,
+    IHTTPOutput out, Locale locale, ConfigParams parameters, String tabName)
+    throws ManifoldCFException, IOException {
+
+    // Call the Velocity templates for each tab
+
+    // Server tab
+    Map<String, Object> paramMap = new HashMap<String, Object>();
+    // Set the tab name
+    paramMap.put("TabName", tabName);
+    // Fill in the parameters
+    fillInServerConfigurationMap(paramMap, out, parameters);
+    Messages.outputResourceWithVelocity(out,locale,EDIT_CONFIG_FORWARD_SERVER,paramMap);
+
+  }
+
+  /**
+   * Process a configuration post. This method is called at the start of the
+   * connector's configuration page, whenever there is a possibility that form
+   * data for a connection has been posted. Its purpose is to gather form
+   * information and modify the configuration parameters accordingly. The name
+   * of the posted form is "editconnection".
+   *
+   * @param threadContext is the local thread context.
+   * @param variableContext is the set of variables available from the post,
+   * including binary file post information.
+   * @param parameters are the configuration parameters, as they currently
+   * exist, for this connection being configured.
+   * @return null if all is well, or a string error message if there is an
+   * error that should prevent saving of the connection (and cause a
+   * redirection to an error page).
+   *
+   */
+  @Override
+  public String processConfigurationPost(IThreadContext threadContext,
+    IPostParameters variableContext, ConfigParams parameters)
+    throws ManifoldCFException {
+
+    
+    String app_key = variableContext.getParameter("app_key");
+    if (app_key != null) {
+      parameters.setParameter(DropboxConfig.APP_KEY_PARAM, app_key);
+    }
+    
+    String app_secret = variableContext.getParameter("app_secret");
+    if (app_secret != null) {
+      parameters.setObfuscatedParameter(DropboxConfig.APP_SECRET_PARAM, variableContext.mapKeyToPassword(app_secret));
+    }
+    
+    String key = variableContext.getParameter("key");
+    if (key != null) {
+      parameters.setParameter(DropboxConfig.KEY_PARAM, key);
+    }
+
+    String secret = variableContext.getParameter("secret");
+    if (secret != null) {
+      parameters.setObfuscatedParameter(DropboxConfig.SECRET_PARAM, variableContext.mapKeyToPassword(secret));
+    }
+
+    return null;
+  }
+
+  /**
+   * Fill in specification Velocity parameter map for DROPBOXPath tab.
+   */
+  private static void fillInDropboxPathSpecificationMap(Map<String, Object> newMap, DocumentSpecification ds) {
+    int i = 0;
+    String DropboxPath = DropboxConfig.DROPBOX_PATH_PARAM_DEFAULT_VALUE;
+    while (i < ds.getChildCount()) {
+      SpecificationNode sn = ds.getChild(i);
+      if (sn.getType().equals(JOB_STARTPOINT_NODE_TYPE)) {
+        DropboxPath = sn.getAttributeValue(JOB_PATH_ATTRIBUTE);
+      }
+      i++;
+    }
+    newMap.put("DROPBOXPATH", DropboxPath);
+  }
+
+  /**
+   * Fill in specification Velocity parameter map for Dropbox Security tab.
+   */
+  private static void fillInDropboxSecuritySpecificationMap(Map<String, Object> newMap, DocumentSpecification ds) {
+    List<Map<String,String>> accessTokenList = new ArrayList<Map<String,String>>();
+    for (int i = 0; i < ds.getChildCount(); i++) {
+      SpecificationNode sn = ds.getChild(i);
+      if (sn.getType().equals(JOB_ACCESS_NODE_TYPE)) {
+        String token = sn.getAttributeValue(JOB_TOKEN_ATTRIBUTE);
+        Map<String,String> accessMap = new HashMap<String,String>();
+        accessMap.put("TOKEN",token);
+        accessTokenList.add(accessMap);
+      }
+    }
+    newMap.put("ACCESSTOKENS", accessTokenList);
+  }
+
+  /**
+   * View specification. This method is called in the body section of a job's
+   * view page. Its purpose is to present the document specification
+   * information to the user. The coder can presume that the HTML that is
+   * output from this configuration will be within appropriate <html> and
+   * <body> tags.
+   *
+   * @param out is the output to which any HTML should be sent.
+   * @param ds is the current document specification for this job.
+   */
+  @Override
+  public void viewSpecification(IHTTPOutput out, Locale locale, DocumentSpecification ds)
+    throws ManifoldCFException, IOException {
+
+    Map<String, Object> paramMap = new HashMap<String, Object>();
+
+    // Fill in the map with data from all tabs
+    fillInDropboxPathSpecificationMap(paramMap, ds);
+    fillInDropboxSecuritySpecificationMap(paramMap, ds);
+      
+    Messages.outputResourceWithVelocity(out,locale,VIEW_SPEC_FORWARD,paramMap);
+  }
+
+  /**
+   * Process a specification post. This method is called at the start of job's
+   * edit or view page, whenever there is a possibility that form data for a
+   * connection has been posted. Its purpose is to gather form information and
+   * modify the document specification accordingly. The name of the posted
+   * form is "editjob".
+   *
+   * @param variableContext contains the post data, including binary
+   * file-upload information.
+   * @param ds is the current document specification for this job.
+   * @return null if all is well, or a string error message if there is an
+   * error that should prevent saving of the job (and cause a redirection to
+   * an error page).
+   */
+  @Override
+  public String processSpecificationPost(IPostParameters variableContext,
+    DocumentSpecification ds) throws ManifoldCFException {
+    String dropboxPath = variableContext.getParameter("dropboxpath");
+    if (dropboxPath != null) {
+      int i = 0;
+      while (i < ds.getChildCount()) {
+        SpecificationNode oldNode = ds.getChild(i);
+        if (oldNode.getType().equals(JOB_STARTPOINT_NODE_TYPE)) {
+          ds.removeChild(i);
+          break;
+        }
+        i++;
+      }
+      SpecificationNode node = new SpecificationNode(JOB_STARTPOINT_NODE_TYPE);
+      node.setAttribute(JOB_PATH_ATTRIBUTE, dropboxPath);
+      ds.addChild(ds.getChildCount(), node);
+    }
+    String xc = variableContext.getParameter("tokencount");
+    if (xc != null) {
+      // Delete all tokens first
+      int i = 0;
+      while (i < ds.getChildCount()) {
+        SpecificationNode sn = ds.getChild(i);
+        if (sn.getType().equals(JOB_ACCESS_NODE_TYPE))
+          ds.removeChild(i);
+        else
+          i++;
+      }
+
+      int accessCount = Integer.parseInt(xc);
+      i = 0;
+      while (i < accessCount) {
+        String accessDescription = "_"+Integer.toString(i);
+        String accessOpName = "accessop"+accessDescription;
+        xc = variableContext.getParameter(accessOpName);
+        if (xc != null && xc.equals("Delete")) {
+          // Next row
+          i++;
+          continue;
+        }
+        // Get the stuff we need
+        String accessSpec = variableContext.getParameter("spectoken"+accessDescription);
+        SpecificationNode node = new SpecificationNode(JOB_ACCESS_NODE_TYPE);
+        node.setAttribute(JOB_TOKEN_ATTRIBUTE,accessSpec);
+        ds.addChild(ds.getChildCount(),node);
+        i++;
+      }
+
+      String op = variableContext.getParameter("accessop");
+      if (op != null && op.equals("Add"))
+      {
+        String accessspec = variableContext.getParameter("spectoken");
+        SpecificationNode node = new SpecificationNode(JOB_ACCESS_NODE_TYPE);
+        node.setAttribute(JOB_TOKEN_ATTRIBUTE,accessspec);
+        ds.addChild(ds.getChildCount(),node);
+      }
+    }
+
+    return null;
+  }
+
+  /**
+   * Output the specification body section. This method is called in the body
+   * section of a job page which has selected a repository connection of the
+   * current type. Its purpose is to present the required form elements for
+   * editing. The coder can presume that the HTML that is output from this
+   * configuration will be within appropriate <html>, <body>, and <form> tags.
+   * The name of the form is "editjob".
+   *
+   * @param out is the output to which any HTML should be sent.
+   * @param ds is the current document specification for this job.
+   * @param tabName is the current tab name.
+   */
+  @Override
+  public void outputSpecificationBody(IHTTPOutput out,
+    Locale locale, DocumentSpecification ds, String tabName) throws ManifoldCFException,
+    IOException {
+
+    // Output DROPBOXPath tab
+    Map<String, Object> paramMap = new HashMap<String, Object>();
+    paramMap.put("TabName", tabName);
+    fillInDropboxPathSpecificationMap(paramMap, ds);
+    fillInDropboxSecuritySpecificationMap(paramMap, ds);
+
+    Messages.outputResourceWithVelocity(out,locale,EDIT_SPEC_FORWARD_DROPBOXPATH,paramMap);
+    Messages.outputResourceWithVelocity(out,locale,EDIT_SPEC_FORWARD_SECURITY,paramMap);
+  }
+
+  /**
+   * Output the specification header section. This method is called in the
+   * head section of a job page which has selected a repository connection of
+   * the current type. Its purpose is to add the required tabs to the list,
+   * and to output any javascript methods that might be needed by the job
+   * editing HTML.
+   *
+   * @param out is the output to which any HTML should be sent.
+   * @param ds is the current document specification for this job.
+   * @param tabsArray is an array of tab names. Add to this array any tab
+   * names that are specific to the connector.
+   */
+  @Override
+  public void outputSpecificationHeader(IHTTPOutput out,
+    Locale locale, DocumentSpecification ds, List<String> tabsArray)
+    throws ManifoldCFException, IOException {
+    tabsArray.add(Messages.getString(locale, DROPBOX_PATH_TAB_PROPERTY));
+    tabsArray.add(Messages.getString(locale, DROPBOX_SECURITY_TAB_PROPERTY));
+
+    Map<String, Object> paramMap = new HashMap<String, Object>();
+
+    // Fill in the specification header map, using data from all tabs.
+    fillInDropboxPathSpecificationMap(paramMap, ds);
+    fillInDropboxSecuritySpecificationMap(paramMap, ds);
+
+    Messages.outputResourceWithVelocity(out,locale,EDIT_SPEC_HEADER_FORWARD,paramMap);
+  }
+
+  /**
+   * Queue "seed" documents. Seed documents are the starting places for
+   * crawling activity. Documents are seeded when this method calls
+   * appropriate methods in the passed in ISeedingActivity object.
+   *
+   * This method can choose to find repository changes that happen only during
+   * the specified time interval. The seeds recorded by this method will be
+   * viewed by the framework based on what the getConnectorModel() method
+   * returns.
+   *
+   * It is not a big problem if the connector chooses to create more seeds
+   * than are strictly necessary; it is merely a question of overall work
+   * required.
+   *
+   * The times passed to this method may be interpreted for greatest
+   * efficiency. The time ranges any given job uses with this connector will
+   * not overlap, but will proceed starting at 0 and going to the "current
+   * time", each time the job is run. For continuous crawling jobs, this
+   * method will be called once, when the job starts, and at various periodic
+   * intervals as the job executes.
+   *
+   * When a job's specification is changed, the framework automatically resets
+   * the seeding start time to 0. The seeding start time may also be set to 0
+   * on each job run, depending on the connector model returned by
+   * getConnectorModel().
+   *
+   * Note that it is always ok to send MORE documents rather than less to this
+   * method.
+   *
+   * @param activities is the interface this method should use to perform
+   * whatever framework actions are desired.
+   * @param spec is a document specification (that comes from the job).
+   * @param startTime is the beginning of the time range to consider,
+   * inclusive.
+   * @param endTime is the end of the time range to consider, exclusive.
+   * @param jobMode is an integer describing how the job is being run, whether
+   * continuous or once-only.
+   */
+  @Override
+  public void addSeedDocuments(ISeedingActivity activities,
+    DocumentSpecification spec, long startTime, long endTime, int jobMode)
+    throws ManifoldCFException, ServiceInterruption {
+    
+      
+    String dropboxPath = StringUtils.EMPTY;
+    int i = 0;
+    while (i < spec.getChildCount()) {
+      SpecificationNode sn = spec.getChild(i);
+      if (sn.getType().equals(JOB_STARTPOINT_NODE_TYPE)) {
+        dropboxPath = sn.getAttributeValue(JOB_PATH_ATTRIBUTE);
+        break;
+      }
+      i++;
+    }
+    
+    getSession();
+    GetSeedsThread t = new GetSeedsThread(dropboxPath);
+    try {
+      t.start();
+      boolean wasInterrupted = false;
+      try {
+        XThreadStringBuffer seedBuffer = t.getBuffer();
+
+        // Pick up the paths, and add them to the activities, before we join with the child thread.
+        while (true) {
+          // The only kind of exceptions this can throw are going to shut the process down.
+          String docPath = seedBuffer.fetch();
+          if (docPath ==  null)
+            break;
+          // Add the pageID to the queue
+          activities.addSeedDocument(docPath);
+        }
+      } catch (InterruptedException e) {
+        wasInterrupted = true;
+        throw e;
+      } catch (ManifoldCFException e) {
+        if (e.getErrorCode() == ManifoldCFException.INTERRUPTED)
+          wasInterrupted = true;
+        throw e;
+      } finally {
+        if (!wasInterrupted)
+          t.finishUp();
+      }
+    } catch (InterruptedException e) {
+      t.interrupt();
+      throw new ManifoldCFException("Interrupted: " + e.getMessage(), e,
+        ManifoldCFException.INTERRUPTED);
+    } catch (DropboxException e) {
+      Logging.connectors.warn("DROPBOX: Error adding seed documents: " + e.getMessage(), e);
+      handleDropboxException(e);
+    }
+  }
+
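+  // GetSeedsThread below hands results back to addSeedDocuments() through an
+  // XThreadStringBuffer: the worker pushes each discovered path into the buffer
+  // and the calling thread drains it with fetch() until a null signals
+  // completion, so seeding can proceed without holding the full directory
+  // listing in memory.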
+  protected class GetSeedsThread extends Thread {
+
+    protected Throwable exception = null;
+    protected final String path;
+    protected final XThreadStringBuffer seedBuffer;
+    
+    public GetSeedsThread(String path) {
+      super();
+      this.path = path;
+      this.seedBuffer = new XThreadStringBuffer();
+      setDaemon(true);
+    }
+
+    @Override
+    public void run() {
+      try {
+        session.getSeeds(seedBuffer,path,25000); // 25000 is the upper limit on entries the Dropbox API returns for a single directory listing
+      } catch (Throwable e) {
+        this.exception = e;
+      } finally {
+        seedBuffer.signalDone();
+      }
+    }
+
+    public XThreadStringBuffer getBuffer() {
+      return seedBuffer;
+    }
+    
+    public void finishUp()
+      throws InterruptedException, DropboxException {
+      seedBuffer.abandon();
+      join();
+      Throwable thr = exception;
+      if (thr != null) {
+        if (thr instanceof DropboxException)
+          throw (DropboxException) thr;
+        else if (thr instanceof RuntimeException)
+          throw (RuntimeException) thr;
+        else if (thr instanceof Error)
+          throw (Error) thr;
+        else
+          throw new RuntimeException("Unhandled exception of type: "+thr.getClass().getName(),thr);
+      }
+    }
+  }
+
+  /**
+   * Process a set of documents. This is the method that should cause each
+   * document to be fetched, processed, and the results either added to the
+   * queue of documents for the current job, and/or entered into the
+   * incremental ingestion manager. The document specification allows this
+   * class to filter what is done based on the job.
+   *
+   * @param documentIdentifiers is the set of document identifiers to process.
+   * @param versions is the corresponding document versions to process, as
+   * returned by getDocumentVersions() above. The implementation may choose to
+   * ignore this parameter and always process the current version.
+   * @param activities is the interface this method should use to queue up new
+   * document references and ingest documents.
+   * @param spec is the document specification.
+   * @param scanOnly is an array corresponding to the document identifiers. It
+   * is set to true to indicate when the processing should only find other
+   * references, and should not actually call the ingestion methods.
+   * @param jobMode is an integer describing how the job is being run, whether
+   * continuous or once-only.
+   */
+  @SuppressWarnings("unchecked")
+  @Override
+  public void processDocuments(String[] documentIdentifiers, String[] versions,
+    IProcessActivity activities, DocumentSpecification spec,
+    boolean[] scanOnly) throws ManifoldCFException, ServiceInterruption {
+      
+    Logging.connectors.debug("DROPBOX: Inside processDocuments");
+      
+    for (int i = 0; i < documentIdentifiers.length; i++) {
+      long startTime = System.currentTimeMillis();
+      String errorCode = "FAILED";
+      String errorDesc = StringUtils.EMPTY;
+      Long fileSize = null;
+      boolean doLog = false;
+      String nodeId = documentIdentifiers[i];
+      String version = versions[i];
+      
+      try {
+        if (Logging.connectors.isDebugEnabled()) {
+          Logging.connectors.debug("DROPBOX: Processing document identifier '"
+              + nodeId + "'");
+        }
+
+        getSession();
+        GetObjectThread objt = new GetObjectThread(nodeId);
+        try {
+          objt.start();
+          objt.finishUp();
+        } catch (InterruptedException e) {
+          objt.interrupt();
+          throw new ManifoldCFException("Interrupted: " + e.getMessage(), e,
+            ManifoldCFException.INTERRUPTED);
+        } catch (DropboxException e) {
+          errorCode = "DROPBOX ERROR";
+          errorDesc = e.getMessage();
+          Logging.connectors.warn("DROPBOX: Error getting object: " + e.getMessage(), e);
+          handleDropboxException(e);
+        }
+
+        DropboxAPI.Entry dropboxObject = objt.getResponse();
+
+        if(dropboxObject.isDeleted){
+          continue;
+        }
+        
+        if (dropboxObject.isDir) {
+
+          // adding all the children + subdirs for a folder
+
+          List<DropboxAPI.Entry> children = dropboxObject.contents;
+          for (DropboxAPI.Entry child : children) {
+            activities.addDocumentReference(child.path, nodeId, RELATIONSHIP_CHILD);
+          }
+
+        } else {
+          // it's a file
+          if (!scanOnly[i]) {
+            doLog = true;
+            
+            // Unpack the version string
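+            // Format (built by getDocumentVersions below): packed acl list, then either '+'
+            // followed by the packed deny token or '-' when there are no acls, then the Dropbox rev.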
+            ArrayList acls = new ArrayList();
+            StringBuilder denyAclBuffer = new StringBuilder();
+            int index = unpackList(acls,version,0,'+');
+            if (index < version.length() && version.charAt(index++) == '+') {
+              index = unpack(denyAclBuffer,version,index,'+');
+            }
+
+            // content ingestion
+            RepositoryDocument rd = new RepositoryDocument();
+
+            // Turn into acls and add into description
+            String[] aclArray = new String[acls.size()];
+            for (int j = 0; j < aclArray.length; j++) {
+              aclArray[j] = (String)acls.get(j);
+            }
+            rd.setACL(aclArray);
+            if (denyAclBuffer.length() > 0) {
+              String[] denyAclArray = new String[]{denyAclBuffer.toString()};
+              rd.setDenyACL(denyAclArray);
+            }
+
+            // Length in bytes
+            long fileLength = dropboxObject.bytes;
+            //documentURI
+            String documentURI = dropboxObject.path;
+
+            if (dropboxObject.path != null)
+              rd.setFileName(dropboxObject.path);
+            if (dropboxObject.mimeType != null)
+              rd.setMimeType(dropboxObject.mimeType);
+            if (dropboxObject.modified != null)
+              rd.setModifiedDate(com.dropbox.client2.RESTUtility.parseDate(dropboxObject.modified));
+            // There doesn't appear to be a created date...
+              
+            rd.addField("Modified", dropboxObject.modified);
+            rd.addField("Size", dropboxObject.size);
+            rd.addField("Path", dropboxObject.path);
+            rd.addField("Root", dropboxObject.root);
+            rd.addField("ClientMtime", dropboxObject.clientMtime);
+            rd.addField("mimeType", dropboxObject.mimeType);
+            rd.addField("rev", dropboxObject.rev);
+            
+            getSession();
+            BackgroundStreamThread t = new BackgroundStreamThread(nodeId);
+            try {
+              t.start();
+              boolean wasInterrupted = false;
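+              // If the read below is interrupted, finishUp() (and its join) is deliberately
+              // skipped so that shutdown is not held up waiting for the background thread.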
+              try {
+                InputStream is = t.getSafeInputStream();
+                try {
+                  rd.setBinary(is, fileLength);
+                  activities.ingestDocument(nodeId, version, documentURI, rd);
+                } finally {
+                  is.close();
+                }
+              } catch (java.net.SocketTimeoutException e) {
+                throw e;
+              } catch (InterruptedIOException e) {
+                wasInterrupted = true;
+                throw e;
+              } catch (ManifoldCFException e) {
+                if (e.getErrorCode() == ManifoldCFException.INTERRUPTED)
+                  wasInterrupted = true;
+                throw e;
+              } finally {
+                if (!wasInterrupted)
+                  // This does a join
+                  t.finishUp();
+              }
+
+              // No errors.  Record the fact that we made it.
+              errorCode = "OK";
+              fileSize = new Long(fileLength);
+            } catch (InterruptedException e) {
+              // We were interrupted out of the join, most likely.  Before we abandon the thread,
+              // send a courtesy interrupt.
+              t.interrupt();
+              throw new ManifoldCFException("Interrupted: " + e.getMessage(), e,
+                ManifoldCFException.INTERRUPTED);
+            } catch (java.net.SocketTimeoutException e) {
+              errorCode = "IO ERROR";
+              errorDesc = e.getMessage();
+              handleIOException(e);
+            } catch (InterruptedIOException e) {
+              t.interrupt();
+              throw new ManifoldCFException("Interrupted: " + e.getMessage(), e,
+                ManifoldCFException.INTERRUPTED);
+            } catch (IOException e) {
+              errorCode = "IO ERROR";
+              errorDesc = e.getMessage();
+              handleIOException(e);
+            } catch (DropboxException e) {
+              Logging.connectors.warn("DROPBOX: Error getting stream: " + e.getMessage(), e);
+              errorCode = "DROPBOX ERROR";
+              errorDesc = e.getMessage();
+              handleDropboxException(e);
+            }
+          }
+        }
+      } finally {
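+        // Only actual file fetches are logged; directories and scan-only passes leave doLog false.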
+        if (doLog)
+          activities.recordActivity(new Long(startTime), ACTIVITY_READ,
+            fileSize, nodeId, errorCode, errorDesc, null);
+      }
+    }
+  }
+
+
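+  /** Background thread that fetches a single entry's metadata from Dropbox, so that the
+  * calling worker thread can be interrupted cleanly while the request is in flight.
+  */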
+  protected class GetObjectThread extends Thread {
+
+    protected final String nodeId;
+    protected Throwable exception = null;
+    protected DropboxAPI.Entry response = null;
+
+    public GetObjectThread(String nodeId) {
+      super();
+      setDaemon(true);
+      this.nodeId = nodeId;
+    }
+
+    public void run() {
+      try {
+        response = session.getObject(nodeId);
+      } catch (Throwable e) {
+        this.exception = e;
+      }
+    }
+
+    public void finishUp()
+      throws InterruptedException, DropboxException {
+      join();
+      Throwable thr = exception;
+      if (thr != null) {
+        if (thr instanceof DropboxException)
+          throw (DropboxException) thr;
+        else if (thr instanceof RuntimeException)
+          throw (RuntimeException) thr;
+        else if (thr instanceof Error)
+          throw (Error) thr;
+        else
+          throw new RuntimeException("Unhandled exception of type: "+thr.getClass().getName(),thr);
+      }
+    }
+
+    public DropboxAPI.Entry getResponse() {
+      return response;
+    }
+    
+    public Throwable getException() {
+      return exception;
+    }
+  }
+
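+  /** Background thread that opens the Dropbox file stream and pumps it through an
+  * XThreadInputStream, allowing the worker thread to consume the content incrementally
+  * and to abort the transfer if it is interrupted.
+  */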
+  protected class BackgroundStreamThread extends Thread
+  {
+    protected final String nodeId;
+    
+    protected boolean abortThread = false;
+    protected Throwable responseException = null;
+    protected InputStream sourceStream = null;
+    protected XThreadInputStream threadStream = null;
+    
+    public BackgroundStreamThread(String nodeId)
+    {
+      super();
+      setDaemon(true);
+      this.nodeId = nodeId;
+    }
+
+    public void run()
+    {
+      try {
+        try {
+          synchronized (this) {
+            if (!abortThread) {
+              sourceStream = session.getDropboxInputStream(nodeId);
+              threadStream = new XThreadInputStream(sourceStream);
+              this.notifyAll();
+            }
+          }
+          
+          if (threadStream != null)
+          {
+            // Stuff the content until we are done
+            threadStream.stuffQueue();
+          }
+        } finally {
+          if (sourceStream != null)
+            sourceStream.close();
+        }
+      } catch (Throwable e) {
+        responseException = e;
+      }
+    }
+
+    public InputStream getSafeInputStream()
+      throws InterruptedException, IOException, DropboxException
+    {
+      // Must wait until stream is created, or until we note an exception was thrown.
+      while (true)
+      {
+        synchronized (this)
+        {
+          // If the background thread failed before the stream was created, surface its
+          // exception here instead of throwing a generic IllegalStateException.
+          checkException(responseException);
+          if (threadStream != null)
+            return threadStream;
+          wait();
+        }
+      }
+    }
+    
+    public void finishUp()
+      throws InterruptedException, IOException, DropboxException
+    {
+      // This will be called during the finally
+      // block in the case where all is well (and
+      // the stream completed) and in the case where
+      // there were exceptions.
+      synchronized (this) {
+        if (threadStream != null) {
+          threadStream.abort();
+        }
+        abortThread = true;
+      }
+
+      join();
+
+      checkException(responseException);
+    }
+    
+    protected synchronized void checkException(Throwable exception)
+      throws IOException, DropboxException
+    {
+      if (exception != null)
+      {
+        Throwable e = exception;
+        if (e instanceof DropboxException)
+          throw (DropboxException)e;
+        else if (e instanceof IOException)
+          throw (IOException)e;
+        else if (e instanceof RuntimeException)
+          throw (RuntimeException)e;
+        else if (e instanceof Error)
+          throw (Error)e;
+        else
+          throw new RuntimeException("Unhandled exception of type: "+e.getClass().getName(),e);
+      }
+    }
+
+  }
+
+  /**
+   * The short version of getDocumentVersions. Get document versions given an
+   * array of document identifiers. This method is called for EVERY document
+   * that is considered. It is therefore important to perform as little work
+   * as possible here.
+   *
+   * @param documentIdentifiers is the array of local document identifiers, as
+   * understood by this connector.
+   * @param spec is the current document specification for the current job. If
+   * there is a dependency on this specification, then the version string
+   * should include the pertinent data, so that reingestion will occur when
+   * the specification changes. This is primarily useful for metadata.
+   * @return the corresponding version strings, with null in the places where
+   * the document no longer exists. Empty version strings indicate that there
+   * is no versioning ability for the corresponding document, and the document
+   * will always be processed.
+   */
+  @Override
+  public String[] getDocumentVersions(String[] documentIdentifiers,
+    DocumentSpecification spec) throws ManifoldCFException, ServiceInterruption {
+
+    // Forced acls
+    String[] acls = getAcls(spec);
+    // Sort it,
+    java.util.Arrays.sort(acls);
+
+    String[] rval = new String[documentIdentifiers.length];
+    for (int i = 0; i < rval.length; i++) {
+      getSession();
+      GetObjectThread objt = new GetObjectThread(documentIdentifiers[i]);
+      try {
+        objt.start();
+        objt.finishUp();
+      } catch (InterruptedException e) {
+        objt.interrupt();
+        throw new ManifoldCFException("Interrupted: " + e.getMessage(), e,
+          ManifoldCFException.INTERRUPTED);
+      } catch (DropboxException e) {
+        Logging.connectors.warn("DROPBOX: Error getting object: " + e.getMessage(), e);
+        handleDropboxException(e);
+      }
+
+      DropboxAPI.Entry dropboxObject = objt.getResponse();
+
+      if (!dropboxObject.isDir) {
+        if (dropboxObject.isDeleted) {
+          rval[i] = null;
+        } else if (StringUtils.isNotEmpty(dropboxObject.rev)) {
+          StringBuilder sb = new StringBuilder();
+
+          // Acls
+          packList(sb,acls,'+');
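+          // '+' flags that a packed deny token follows; '-' means there are no acls and no deny token.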
+          if (acls.length > 0) {
+            sb.append('+');
+            pack(sb,defaultAuthorityDenyToken,'+');
+          }
+          else
+            sb.append('-');
+
+          sb.append(dropboxObject.rev);
+          rval[i] = sb.toString();
+        } else {
+          //a document that doesn't contain versioning information will never be processed
+          rval[i] = null;
+        }
+      } else {
+        //a folder will always be processed
+        rval[i] = StringUtils.EMPTY;
+      }
+    }
+    return rval;
+  }
+  
+  /** Grab forced acl out of document specification.
+  *@param spec is the document specification.
+  *@return the acls.
+  */
+  protected static String[] getAcls(DocumentSpecification spec) {
+    Set<String> map = new HashSet<String>();
+    for (int i = 0; i < spec.getChildCount(); i++) {
+      SpecificationNode sn = spec.getChild(i);
+      if (sn.getType().equals(JOB_ACCESS_NODE_TYPE)) {
+        String token = sn.getAttributeValue(JOB_TOKEN_ATTRIBUTE);
+        map.add(token);
+      }
+    }
+
+    String[] rval = new String[map.size()];
+    Iterator<String> iter = map.iterator();
+    int i = 0;
+    while (iter.hasNext()) {
+      rval[i++] = (String)iter.next();
+    }
+    return rval;
+  }
+
+  /** Handle a dropbox exception. */
+  protected static void handleDropboxException(DropboxException e)
+    throws ManifoldCFException, ServiceInterruption {
+    // Right now I don't know enough, so throw Service Interruptions
+    long currentTime = System.currentTimeMillis();
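+    // Retry in 5 minutes, and keep retrying for up to 3 hours before giving up on the document.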
+    throw new ServiceInterruption("Dropbox exception: "+e.getMessage(), e, currentTime + 300000L,
+      currentTime + 3 * 60 * 60000L,-1,false);
+  }
+  
+  /** Handle an IO exception. */
+  protected static void handleIOException(IOException e)
+    throws ManifoldCFException, ServiceInterruption {
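+    // SocketTimeoutException extends InterruptedIOException but signals a network timeout rather
+    // than a thread interruption, so it is excluded here and handled as a retryable error below.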
+    if (!(e instanceof java.net.SocketTimeoutException) && (e instanceof InterruptedIOException)) {
+      throw new ManifoldCFException("Interrupted: " + e.getMessage(), e,
+        ManifoldCFException.INTERRUPTED);
+    }
+    long currentTime = System.currentTimeMillis();
+    throw new ServiceInterruption("IO exception: "+e.getMessage(), e, currentTime + 300000L,
+      currentTime + 3 * 60 * 60000L,-1,false);
+  }
+  
+}
diff --git a/connectors/dropbox/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/dropbox/DropboxSession.java b/connectors/dropbox/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/dropbox/DropboxSession.java
new file mode 100644
index 0000000..2009281
--- /dev/null
+++ b/connectors/dropbox/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/dropbox/DropboxSession.java
@@ -0,0 +1,106 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+
+/*
+ * To change this template, choose Tools | Templates
+ * and open the template in the editor.
+ */
+package org.apache.manifoldcf.crawler.connectors.dropbox;
+
+import org.apache.manifoldcf.core.common.*;
+
+import java.util.Map;
+import com.dropbox.client2.DropboxAPI;
+import com.dropbox.client2.DropboxAPI.DeltaEntry;
+import com.dropbox.client2.DropboxAPI.DropboxInputStream;
+import com.dropbox.client2.DropboxAPI.Entry;
+import com.dropbox.client2.exception.DropboxException;
+import com.dropbox.client2.jsonextract.JsonExtractionException;
+import com.dropbox.client2.jsonextract.JsonList;
+import com.dropbox.client2.jsonextract.JsonMap;
+import com.dropbox.client2.jsonextract.JsonThing;
+import com.dropbox.client2.session.AccessTokenPair;
+import com.dropbox.client2.session.AppKeyPair;
+import com.dropbox.client2.session.Session;
+import com.dropbox.client2.session.WebAuthSession;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.List;
+import org.json.simple.parser.ParseException;
+
+/** Thin wrapper around the Dropbox Java client, exposing just the calls the Dropbox
+ * repository connector needs: account info, folder listing, entry metadata, and file streams.
+ *
+ * @author andrew
+ */
+public class DropboxSession {
+
+  private DropboxAPI<?> client;
+  
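+  /** Build a session from the application key pair and a previously obtained OAuth access
+  * token pair; no interactive authorization step is performed here. */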
+  public DropboxSession(String app_key, String app_secret, String key, String secret) {
+    AppKeyPair appKeyPair = new AppKeyPair(app_key, app_secret);
+    WebAuthSession session = new WebAuthSession(appKeyPair, WebAuthSession.AccessType.DROPBOX);
+    AccessTokenPair ac = new AccessTokenPair(key, secret);
+    session.setAccessTokenPair(ac);
+    client = new DropboxAPI<WebAuthSession>(session);
+  }
+
+  public Map<String, String> getRepositoryInfo() throws DropboxException {
+    Map<String, String> info = new HashMap<String, String>();
+
+    info.put("Country", client.accountInfo().country);
+    info.put("Display Name", client.accountInfo().displayName);
+    info.put("Referral Link", client.accountInfo().referralLink);
+    info.put("Quota", String.valueOf(client.accountInfo().quota));
+    info.put("Quota Normal", String.valueOf(client.accountInfo().quotaNormal));
+    info.put("Quota Shared", String.valueOf(client.accountInfo().quotaShared));
+    info.put("Uid", String.valueOf(client.accountInfo().uid));
+    return info;
+  }
+
+  public void getSeeds(XThreadStringBuffer idBuffer, String path, int max_dirs)
+    throws DropboxException, InterruptedException {
+
+    idBuffer.add(path); //need to add root dir so that single files such as /file1 will still get read
+        
+        
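+    // metadata(path, fileLimit, hash, list, rev): passing list=true asks Dropbox to return the
+    // folder's immediate children in the entry's contents list.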
+    DropboxAPI.Entry root_entry = client.metadata(path, max_dirs, null, true, null);
+    List<DropboxAPI.Entry> entries = root_entry.contents; //gets a list of the contents of the entire folder: subfolders + files
+
+    // Apply the entries one by one.
+    for (DropboxAPI.Entry e : entries) {
+      if (e.isDir) { //only add the directories as seeds, we'll add the files later
+        idBuffer.add(e.path);
+      }
+    }
+  }
+  
+  public DropboxAPI.Entry getObject(String id) throws DropboxException {
+    return client.metadata(id, 25000, null, true, null);
+  }
+
+  public DropboxInputStream getDropboxInputStream(String id) throws DropboxException {
+    return client.getFileStream(id, null);
+  }
+  
+  public void close() {
+    // MHL
+  }
+}
diff --git a/connectors/dropbox/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/dropbox/Messages.java b/connectors/dropbox/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/dropbox/Messages.java
new file mode 100644
index 0000000..ee00b26
--- /dev/null
+++ b/connectors/dropbox/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/dropbox/Messages.java
@@ -0,0 +1,141 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.crawler.connectors.dropbox;
+
+import java.util.Locale;
+import java.util.Map;
+import org.apache.manifoldcf.core.interfaces.ManifoldCFException;
+import org.apache.manifoldcf.core.interfaces.IHTTPOutput;
+
+public class Messages extends org.apache.manifoldcf.ui.i18n.Messages
+{
+  public static final String DEFAULT_BUNDLE_NAME="org.apache.manifoldcf.crawler.connectors.dropbox.common";
+  public static final String DEFAULT_PATH_NAME="org.apache.manifoldcf.crawler.connectors.dropbox";
+  
+  /** Constructor - do not instantiate
+  */
+  protected Messages()
+  {
+  }
+  
+  public static String getString(Locale locale, String messageKey)
+  {
+    return getString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getAttributeString(Locale locale, String messageKey)
+  {
+    return getAttributeString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getBodyString(Locale locale, String messageKey)
+  {
+    return getBodyString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getAttributeJavascriptString(Locale locale, String messageKey)
+  {
+    return getAttributeJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getBodyJavascriptString(Locale locale, String messageKey)
+  {
+    return getBodyJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getString(Locale locale, String messageKey, Object[] args)
+  {
+    return getString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+
+  public static String getAttributeString(Locale locale, String messageKey, Object[] args)
+  {
+    return getAttributeString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+  
+  public static String getBodyString(Locale locale, String messageKey, Object[] args)
+  {
+    return getBodyString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+
+  public static String getAttributeJavascriptString(Locale locale, String messageKey, Object[] args)
+  {
+    return getAttributeJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+
+  public static String getBodyJavascriptString(Locale locale, String messageKey, Object[] args)
+  {
+    return getBodyJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+
+  // More general methods which allow bundlenames and class loaders to be specified.
+  
+  public static String getString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getString(Messages.class, bundleName, locale, messageKey, args);
+  }
+
+  public static String getAttributeString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getAttributeString(Messages.class, bundleName, locale, messageKey, args);
+  }
+
+  public static String getBodyString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getBodyString(Messages.class, bundleName, locale, messageKey, args);
+  }
+  
+  public static String getAttributeJavascriptString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getAttributeJavascriptString(Messages.class, bundleName, locale, messageKey, args);
+  }
+
+  public static String getBodyJavascriptString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getBodyJavascriptString(Messages.class, bundleName, locale, messageKey, args);
+  }
+
+  // Resource output
+  
+  public static void outputResource(IHTTPOutput output, Locale locale, String resourceKey,
+    Map<String,String> substitutionParameters, boolean mapToUpperCase)
+    throws ManifoldCFException
+  {
+    outputResource(output,Messages.class,DEFAULT_PATH_NAME,locale,resourceKey,
+      substitutionParameters,mapToUpperCase);
+  }
+  
+  public static void outputResourceWithVelocity(IHTTPOutput output, Locale locale, String resourceKey,
+    Map<String,String> substitutionParameters, boolean mapToUpperCase)
+    throws ManifoldCFException
+  {
+    outputResourceWithVelocity(output,Messages.class,DEFAULT_BUNDLE_NAME,DEFAULT_PATH_NAME,locale,resourceKey,
+      substitutionParameters,mapToUpperCase);
+  }
+
+  public static void outputResourceWithVelocity(IHTTPOutput output, Locale locale, String resourceKey,
+    Map<String,Object> contextObjects)
+    throws ManifoldCFException
+  {
+    outputResourceWithVelocity(output,Messages.class,DEFAULT_BUNDLE_NAME,DEFAULT_PATH_NAME,locale,resourceKey,
+      contextObjects);
+  }
+  
+}
+
diff --git a/connectors/dropbox/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/dropbox/common_en_US.properties b/connectors/dropbox/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/dropbox/common_en_US.properties
new file mode 100644
index 0000000..3a79cc4
--- /dev/null
+++ b/connectors/dropbox/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/dropbox/common_en_US.properties
@@ -0,0 +1,39 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+DropboxRepositoryConnector.Server=Server
+DropboxRepositoryConnector.Security=Security
+DropboxRepositoryConnector.DropboxPath=Folder
+
+DropboxRepositoryConnector.AppKeyColon=Application Key:
+DropboxRepositoryConnector.AppSecretColon=Application Secret:
+DropboxRepositoryConnector.KeyColon=Key:
+DropboxRepositoryConnector.SecretColon=Secret:
+
+DropboxRepositoryConnector.TheAppKeyMustNotBeNull=The Application Key must not be null
+DropboxRepositoryConnector.TheAppSecretMustNotBeNull=The Application Secret must not be null
+DropboxRepositoryConnector.TheKeyMustNotBeNull=The Key must not be null
+DropboxRepositoryConnector.TheSecretMustNotBeNull=The Secret must not be null
+DropboxRepositoryConnector.PathMustNotBeNull=Path must not be null
+DropboxRepositoryConnector.TypeInAnAccessToken=Type in an access token
+
+DropboxRepositoryConnector.DropboxPathColon=Dropbox folder to index:
+
+DropboxRepositoryConnector.NoAccessTokensPresent=No access tokens present
+DropboxRepositoryConnector.Add=Add
+DropboxRepositoryConnector.AddAccessToken=Add access token
+DropboxRepositoryConnector.Delete=Delete
+DropboxRepositoryConnector.DeleteToken=Delete token #
+DropboxRepositoryConnector.AccessTokensColon=Access tokens:
diff --git a/connectors/dropbox/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/dropbox/common_ja_JP.properties b/connectors/dropbox/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/dropbox/common_ja_JP.properties
new file mode 100644
index 0000000..e75479e
--- /dev/null
+++ b/connectors/dropbox/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/dropbox/common_ja_JP.properties
@@ -0,0 +1,39 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+DropboxRepositoryConnector.Server=Server
+DropboxRepositoryConnector.Security=Security
+DropboxRepositoryConnector.DropboxPath=Folder
+
+DropboxRepositoryConnector.AppKeyColon=Application Key:
+DropboxRepositoryConnector.AppSecretColon=Application Secret:
+DropboxRepositoryConnector.KeyColon=Key:
+DropboxRepositoryConnector.SecretColon=Secret:
+
+DropboxRepositoryConnector.TheAppKeyMustNotBeNull=The Application Key must not be null
+DropboxRepositoryConnector.TheAppSecretMustNotBeNull=The Application Secret must not be null
+DropboxRepositoryConnector.TheKeyMustNotBeNull=The Key must not be null
+DropboxRepositoryConnector.TheSecretMustNotBeNull=The Secret must not be null
+DropboxRepositoryConnector.PathMustNotBeNull=Path must not be null
+DropboxRepositoryConnector.TypeInAnAccessToken=Type in an access token
+
+DropboxRepositoryConnector.DropboxPathColon=Dropbox folder to index:
+
+DropboxRepositoryConnector.NoAccessTokensPresent=No access tokens present
+DropboxRepositoryConnector.Add=Add
+DropboxRepositoryConnector.AddAccessToken=Add access token
+DropboxRepositoryConnector.Delete=Delete
+DropboxRepositoryConnector.DeleteToken=Delete token #
+DropboxRepositoryConnector.AccessTokensColon=Access tokens:
diff --git a/connectors/dropbox/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/dropbox/editConfiguration.js b/connectors/dropbox/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/dropbox/editConfiguration.js
new file mode 100644
index 0000000..f6efe8a
--- /dev/null
+++ b/connectors/dropbox/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/dropbox/editConfiguration.js
@@ -0,0 +1,61 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<script type="text/javascript">
+<!--
+function checkConfig()
+{
+  return true;
+}
+
+function checkConfigForSave()
+{
+
+  if (editconnection.app_key.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('DropboxRepositoryConnector.TheAppKeyMustNotBeNull'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('DropboxRepositoryConnector.Server'))");
+    editconnection.app_key.focus();
+    return false;
+  }
+
+  if (editconnection.app_secret.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('DropboxRepositoryConnector.TheAppSecretMustNotBeNull'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('DropboxRepositoryConnector.Server'))");
+    editconnection.app_secret.focus();
+    return false;
+  }
+
+  if (editconnection.key.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('DropboxRepositoryConnector.TheKeyMustNotBeNull'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('DropboxRepositoryConnector.Server'))");
+    editconnection.key.focus();
+    return false;
+  }
+  if (editconnection.secret.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('DropboxRepositoryConnector.TheSecretMustNotBeNull'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('DropboxRepositoryConnector.Server'))");
+    editconnection.secret.focus();
+    return false;
+  }
+  return true;
+}
+//-->
+</script>
\ No newline at end of file
diff --git a/connectors/dropbox/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/dropbox/editConfiguration_Server.html b/connectors/dropbox/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/dropbox/editConfiguration_Server.html
new file mode 100644
index 0000000..2b7cb9e
--- /dev/null
+++ b/connectors/dropbox/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/dropbox/editConfiguration_Server.html
@@ -0,0 +1,67 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+#if($TabName == $ResourceBundle.getString('DropboxRepositoryConnector.Server'))
+
+<table class="displaytable">
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('DropboxRepositoryConnector.AppKeyColon'))</nobr>
+    </td>
+    <td class="value">
+      <input type="text" id="app_key" name="app_key" value="$Encoder.attributeEscape($APP_KEY)" />
+    </td>
+  </tr>
+
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('DropboxRepositoryConnector.AppSecretColon'))</nobr>
+    </td>
+    <td class="value">
+      <input type="password" id="app_secret" name="app_secret" value="$Encoder.attributeEscape($APP_SECRET)" />
+    </td>
+  </tr>
+
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('DropboxRepositoryConnector.KeyColon'))</nobr>
+    </td>
+    <td class="value">
+      <input type="text" id="key" name="key" value="$Encoder.attributeEscape($KEY)" />
+    </td>
+  </tr>
+
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('DropboxRepositoryConnector.SecretColon'))</nobr>
+    </td>
+    <td class="value">
+      <input type="password" id="secret" name="secret" value="$Encoder.attributeEscape($SECRET)" />
+    </td>
+  </tr>
+</table>
+
+#else
+
+<input type="hidden" name="app_key" value="$Encoder.attributeEscape($APP_KEY)" />
+<input type="hidden" name="app_secret" value="$Encoder.attributeEscape($APP_SECRET)" />
+<input type="hidden" name="key" value="$Encoder.attributeEscape($KEY)" />
+<input type="hidden" name="secret" value="$Encoder.attributeEscape($SECRET)" />
+
+#end
diff --git a/connectors/dropbox/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/dropbox/editSpecification.js b/connectors/dropbox/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/dropbox/editSpecification.js
new file mode 100644
index 0000000..9d9f26e
--- /dev/null
+++ b/connectors/dropbox/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/dropbox/editSpecification.js
@@ -0,0 +1,59 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<script type="text/javascript">
+<!--
+function checkSpecification()
+{
+  return true;
+}
+
+function checkSpecificationForSave()
+{
+  if(editjob.dropboxpath.value == "") {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('DropboxRepositoryConnector.PathMustNotBeNull'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('DropboxRepositoryConnector.DropboxPath'))");
+    editjob.dropboxpath.focus();
+    return false;
+  }
+  return true;
+}
+
+function SpecOp(n, opValue, anchorvalue)
+{
+  eval("editjob."+n+".value = \""+opValue+"\"");
+  postFormSetAnchor(anchorvalue);
+}
+
+function SpecDeleteToken(i)
+{
+  SpecOp("accessop_"+i,"Delete","token_"+i);
+}
+
+function SpecAddToken(i)
+{
+  if (editjob.spectoken.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('DropboxRepositoryConnector.TypeInAnAccessToken'))");
+    editjob.spectoken.focus();
+    return;
+  }
+  SpecOp("accessop","Add","token_"+i);
+}
+
+//-->
+</script>
\ No newline at end of file
diff --git a/connectors/dropbox/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/dropbox/editSpecification_DropboxPath.html b/connectors/dropbox/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/dropbox/editSpecification_DropboxPath.html
new file mode 100644
index 0000000..b9c235d
--- /dev/null
+++ b/connectors/dropbox/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/dropbox/editSpecification_DropboxPath.html
@@ -0,0 +1,36 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+#if($TabName == $ResourceBundle.getString('DropboxRepositoryConnector.DropboxPath'))
+
+<table class="displaytable">
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('DropboxRepositoryConnector.DropboxPathColon'))</nobr>
+    </td>
+    <td class="value">
+      <nobr><input type="text" size="80" name="dropboxpath" value="$Encoder.attributeEscape($DROPBOXPATH)" /></nobr>
+    </td>
+  </tr>
+</table>
+
+#else
+
+<input type="hidden" name="dropboxpath" value="$Encoder.attributeEscape($DROPBOXPATH)" />
+
+#end
+
diff --git a/connectors/dropbox/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/dropbox/editSpecification_Security.html b/connectors/dropbox/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/dropbox/editSpecification_Security.html
new file mode 100644
index 0000000..80a8501
--- /dev/null
+++ b/connectors/dropbox/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/dropbox/editSpecification_Security.html
@@ -0,0 +1,73 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+#if($TabName == $ResourceBundle.getString('DropboxRepositoryConnector.Security'))
+
+<table class="displaytable">
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+  
+  #set($atcounter = 0)
+  #foreach($atoken in $ACCESSTOKENS)
+
+  <tr>
+    <td class="description">
+      <input type="hidden" name="accessop_$atcounter" value=""/>
+      <input type="hidden" name="spectoken_$atcounter" value="$Encoder.attributeEscape($atoken.get('TOKEN'))"/>
+      <a name="token_$atcounter">
+        <input type="button" value="$Encoder.attributeEscape($ResourceBundle.getString('DropboxRepositoryConnector.Delete'))" onClick='Javascript:SpecDeleteToken($atcounter)' alt="$Encoder.attributeEscape($ResourceBundle.getString('DropboxRepositoryConnector.DeleteToken'))$atcounter"/>
+      </a>
+    </td>
+    <td class="value">$Encoder.bodyEscape($atoken.get('TOKEN'))</td>
+  </tr>
+
+    #set($atcounter = $atcounter + 1)
+  #end
+
+  #set($nexttoken = $atcounter + 1)
+
+  #if($atcounter == 0)
+  <tr>
+    <td class="message" colspan="2">$Encoder.bodyEscape($ResourceBundle.getString('DropboxRepositoryConnector.NoAccessTokensPresent'))</td>
+  </tr>
+  #end
+
+  <tr><td class="lightseparator" colspan="2"><hr/></td></tr>
+  
+  <tr>
+    <td class="description">
+      <input type="hidden" name="tokencount" value="$atcounter"/>
+      <input type="hidden" name="accessop" value=""/>
+      <a name="token_$atcounter">
+        <input type="button" value="$Encoder.attributeEscape($ResourceBundle.getString('DropboxRepositoryConnector.Add'))" onClick='Javascript:SpecAddToken($nexttoken)' alt="$Encoder.attributeEscape($ResourceBundle.getString('DropboxRepositoryConnector.AddAccessToken'))"/>
+      </a>
+    </td>
+    <td class="value">
+      <input type="text" size="30" name="spectoken" value=""/>
+    </td>
+  </tr>
+</table>
+
+#else
+
+  #set($atcounter = 0)
+  #foreach($atoken in $ACCESSTOKENS)
+<input type="hidden" name="spectoken_$atcounter" value="$Encoder.attributeEscape($atoken.get('TOKEN'))"/>
+    #set($atcounter = $atcounter + 1)
+  #end
+<input type="hidden" name="tokencount" value="$atcounter"/>
+
+#end
diff --git a/connectors/dropbox/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/dropbox/viewConfiguration.html b/connectors/dropbox/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/dropbox/viewConfiguration.html
new file mode 100644
index 0000000..3b4d74f
--- /dev/null
+++ b/connectors/dropbox/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/dropbox/viewConfiguration.html
@@ -0,0 +1,52 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<table class="displaytable">
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('DropboxRepositoryConnector.AppKeyColon'))</nobr>
+    </td>
+    <td class="value">
+      <nobr>$Encoder.bodyEscape($APP_KEY)</nobr>
+    </td>
+  </tr>
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('DropboxRepositoryConnector.AppSecretColon'))</nobr>
+    </td>
+    <td class="value">
+      <nobr>********</nobr>
+    </td>
+  </tr>
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('DropboxRepositoryConnector.KeyColon'))</nobr>
+    </td>
+    <td class="value">
+      <nobr>$Encoder.bodyEscape($KEY)</nobr>
+    </td>
+  </tr>
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('DropboxRepositoryConnector.SecretColon'))</nobr>
+    </td>
+    <td class="value">
+      <nobr>********</nobr>
+    </td>
+  </tr>
+</table>
+
diff --git a/connectors/dropbox/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/dropbox/viewSpecification.html b/connectors/dropbox/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/dropbox/viewSpecification.html
new file mode 100644
index 0000000..4e28a44
--- /dev/null
+++ b/connectors/dropbox/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/dropbox/viewSpecification.html
@@ -0,0 +1,46 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<table class="displaytable">
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('DropboxRepositoryConnector.DropboxPathColon'))</nobr>
+    </td>
+    <td class="value">
+      <nobr>$Encoder.bodyEscape($DROPBOXPATH)</nobr>
+    </td>
+  </tr>
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+  <tr>
+#if($ACCESSTOKENS.size() == 0)
+    <td class="message" colspan="2">
+      $Encoder.bodyEscape($ResourceBundle.getString('DropboxRepositoryConnector.NoAccessTokensPresent'))
+    </td>
+#else
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('DropboxRepositoryConnector.AccessTokensColon'))</nobr>
+    </td>
+    <td class="value">
+  #set($atcounter = 0)
+  #foreach($atoken in $ACCESSTOKENS)
+    <nobr>$Encoder.bodyEscape($atoken.get('TOKEN'))</nobr><br/>
+    #set($atcounter = $atcounter + 1)
+  #end
+    </td>
+#end
+  </tr>
+</table>
diff --git a/connectors/dropbox/pom.xml b/connectors/dropbox/pom.xml
new file mode 100644
index 0000000..c46e7a4
--- /dev/null
+++ b/connectors/dropbox/pom.xml
@@ -0,0 +1,158 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+
+    <parent>
+        <groupId>org.apache.manifoldcf</groupId>
+        <artifactId>mcf-connectors</artifactId>
+        <version>1.5-SNAPSHOT</version>
+    </parent>
+    <modelVersion>4.0.0</modelVersion>
+
+    <packaging>jar</packaging>
+
+    <developers>
+        <developer>
+            <name>Andrew Janowczyk</name>
+            <organization>Searchbox</organization>
+            <organizationUrl>http://www.searchbox.com</organizationUrl>
+            <url>http://www.searchbox.com</url>
+        </developer>
+    </developers>
+
+    <artifactId>mcf-dropbox-connector</artifactId>
+    <name>ManifoldCF - Connectors - Dropbox</name>
+
+    <properties>
+        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
+    </properties>
+
+    <build>
+        <sourceDirectory>${basedir}/connector/src/main/java</sourceDirectory>
+        <testSourceDirectory>${basedir}/connector/src/test/java</testSourceDirectory>
+        <resources>
+            <resource>
+                <directory>${basedir}/connector/src/main/resources</directory>
+                <includes>
+                    <include>**/*.html</include>
+                    <include>**/*.js</include>
+                </includes>
+            </resource>
+            <resource>
+                <directory>${basedir}/connector/src/main/native2ascii</directory>
+                <includes>
+                    <include>**/*.properties</include>
+                </includes>
+            </resource>
+        </resources>
+        <plugins>
+            <plugin>
+                <groupId>org.codehaus.mojo</groupId>
+                <artifactId>native2ascii-maven-plugin</artifactId>
+                <version>1.0-beta-1</version>
+                <configuration>
+                    <workDir>target/classes</workDir>
+                </configuration>
+                <executions>
+                    <execution>
+                        <id>native2ascii-utf8</id>
+                        <goals>
+                            <goal>native2ascii</goal>
+                        </goals>
+                        <configuration>
+                            <encoding>UTF8</encoding>
+                            <includes>
+                                <include>**/*.properties</include>
+                            </includes>
+                        </configuration>
+                    </execution>
+                </executions>
+            </plugin>
+
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-surefire-plugin</artifactId>
+                <configuration>
+                    <excludes>
+                        <exclude>**/*Postgresql*.java</exclude>
+                        <exclude>**/*MySQL*.java</exclude>
+                    </excludes>
+                    <forkMode>always</forkMode>
+                    <workingDirectory>target/test-output</workingDirectory>
+                </configuration>
+            </plugin>
+
+        </plugins>
+    </build>
+
+    <dependencies>
+        <dependency>
+            <groupId>${project.groupId}</groupId>
+            <artifactId>mcf-core</artifactId>
+            <version>${project.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>${project.groupId}</groupId>
+            <artifactId>mcf-pull-agent</artifactId>
+            <version>${project.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>${project.groupId}</groupId>
+            <artifactId>mcf-agents</artifactId>
+            <version>${project.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>${project.groupId}</groupId>
+            <artifactId>mcf-ui-core</artifactId>
+            <version>${project.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>commons-lang</groupId>
+            <artifactId>commons-lang</artifactId>
+            <version>${commons-lang.version}</version>
+            <type>jar</type>
+        </dependency>
+        <dependency>
+            <groupId>org.syncloud</groupId>
+            <artifactId>dropbox-client</artifactId>
+            <version>1.5.3</version>
+            <type>jar</type>
+        </dependency>
+        <dependency>
+            <groupId>com.googlecode.json-simple</groupId>
+            <artifactId>json-simple</artifactId>
+            <version>1.1</version>
+            <type>jar</type>
+        </dependency>
+        <dependency>
+            <groupId>commons-logging</groupId>
+            <artifactId>commons-logging</artifactId>
+            <version>${commons-logging.version}</version>
+            <scope>test</scope>
+        </dependency>
+        <dependency>
+            <groupId>log4j</groupId>
+            <artifactId>log4j</artifactId>
+            <version>${log4j.version}</version>
+            <type>jar</type>
+        </dependency>
+    </dependencies>
+</project>
diff --git a/connectors/elasticsearch/connector/src/main/java/org/apache/manifoldcf/agents/output/elasticsearch/ElasticSearchConnection.java b/connectors/elasticsearch/connector/src/main/java/org/apache/manifoldcf/agents/output/elasticsearch/ElasticSearchConnection.java
index a248b0f..3b2013f 100644
--- a/connectors/elasticsearch/connector/src/main/java/org/apache/manifoldcf/agents/output/elasticsearch/ElasticSearchConnection.java
+++ b/connectors/elasticsearch/connector/src/main/java/org/apache/manifoldcf/agents/output/elasticsearch/ElasticSearchConnection.java
@@ -74,8 +74,8 @@
 
   private String response;
 
-  protected String jsonStatus = "\"ok\"";
-  protected String jsonException = "\"error\"";
+  protected final static String jsonStatus = "\"ok\"";
+  protected final static String jsonException = "\"error\"";
 
   public enum Result
   {
diff --git a/connectors/elasticsearch/connector/src/main/java/org/apache/manifoldcf/agents/output/elasticsearch/ElasticSearchConnector.java b/connectors/elasticsearch/connector/src/main/java/org/apache/manifoldcf/agents/output/elasticsearch/ElasticSearchConnector.java
index 2494af4..fc82120 100644
--- a/connectors/elasticsearch/connector/src/main/java/org/apache/manifoldcf/agents/output/elasticsearch/ElasticSearchConnector.java
+++ b/connectors/elasticsearch/connector/src/main/java/org/apache/manifoldcf/agents/output/elasticsearch/ElasticSearchConnector.java
@@ -168,6 +168,16 @@
     expirationTime = -1L;
   }
   
+  /** This method is called to assess whether this connector instance should
+  * actually be counted as being connected.
+  *@return true if the connector instance is actually connected.
+  */
+  @Override
+  public boolean isConnected()
+  {
+    return connectionManager != null;
+  }
+
   @Override
   public void disconnect()
     throws ManifoldCFException
@@ -342,9 +352,23 @@
   public boolean checkDocumentIndexable(String outputDescription, File localFile)
       throws ManifoldCFException, ServiceInterruption
   {
+    // No filtering here; we don't look inside the file and don't know its extension.  That's done via the url
+    // filter
+    return true;
+  }
+  
+  /** Pre-determine whether a document's URL is indexable by this connector.  This method is used by participating repository connectors
+  * to help filter out documents that are not worth indexing.
+  *@param outputDescription is the document's output version.
+  *@param url is the URL of the document.
+  *@return true if the file is indexable.
+  */
+  @Override
+  public boolean checkURLIndexable(String outputDescription, String url)
+    throws ManifoldCFException, ServiceInterruption
+  {
     ElasticSearchSpecs specs = getSpecsCache(outputDescription);
-    return specs
-        .checkExtension(FilenameUtils.getExtension(localFile.getName()));
+    return specs.checkExtension(FilenameUtils.getExtension(url));
   }
 
   @Override
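For reference, a minimal standalone sketch (not part of this patch) of the URL-based extension check that checkURLIndexable() above relies on. Commons IO's FilenameUtils.getExtension() behaves the same way on a URL path as on a local file name; the class name, the allowed-extension set, and the sample URLs below are illustrative only.

    import org.apache.commons.io.FilenameUtils;

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;

    public class UrlExtensionCheckSketch
    {
      public static void main(String[] args)
      {
        // Hypothetical allowed-extension set; the real one comes from ElasticSearchSpecs.
        Set<String> allowed = new HashSet<String>(Arrays.asList("pdf", "html"));

        // getExtension() returns the text after the last '.' in the last path segment,
        // so it works on a URL just as it does on a file name.
        System.out.println(allowed.contains(FilenameUtils.getExtension("http://example.com/docs/manual.pdf")));  // true
        System.out.println(allowed.contains(FilenameUtils.getExtension("http://example.com/docs/archive.zip"))); // false
      }
    }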
diff --git a/connectors/elasticsearch/connector/src/main/java/org/apache/manifoldcf/agents/output/elasticsearch/ElasticSearchIndex.java b/connectors/elasticsearch/connector/src/main/java/org/apache/manifoldcf/agents/output/elasticsearch/ElasticSearchIndex.java
index f90d28c..42c911f 100644
--- a/connectors/elasticsearch/connector/src/main/java/org/apache/manifoldcf/agents/output/elasticsearch/ElasticSearchIndex.java
+++ b/connectors/elasticsearch/connector/src/main/java/org/apache/manifoldcf/agents/output/elasticsearch/ElasticSearchIndex.java
@@ -22,6 +22,7 @@
 import java.io.IOException;
 import java.io.InputStream;
 import java.io.OutputStream;
+import java.io.OutputStreamWriter;
 import java.io.PrintWriter;
 import java.util.Iterator;
 
@@ -96,7 +97,7 @@
     @Override
     public void writeTo(OutputStream out)
       throws IOException {
-      PrintWriter pw = new PrintWriter(out);
+      PrintWriter pw = new PrintWriter(new OutputStreamWriter(out, "utf-8"));
       try
       {
         pw.print("{");
@@ -115,17 +116,19 @@
           if(needComma){
             pw.print(",");
           }
-          pw.print("\"type\" : \"attachment\",");
+          // I'm told this is not necessary: see CONNECTORS-690
+          //pw.print("\"type\" : \"attachment\",");
+          pw.print("\"file\" : {");
           String contentType = document.getMimeType();
           if (contentType != null)
             pw.print("\"_content_type\" : "+jsonStringEscape(contentType)+",");
           String fileName = document.getFileName();
           if (fileName != null)
             pw.print("\"_name\" : "+jsonStringEscape(fileName)+",");
-          pw.print("\"file\" : \"");
+          pw.print(" \"content\" : \"");
           Base64 base64 = new Base64();
           base64.encodeStream(inputStream, pw);
-          pw.print("\"");
+          pw.print("\"}");
         }
         
         pw.print("}");
@@ -134,6 +137,7 @@
         throw new IOException(e.getMessage());
       } finally
       {
+        pw.flush();
         IOUtils.closeQuietly(pw);
       }
     }
@@ -196,9 +200,22 @@
     for (int i = 0; i < value.length(); i++)
     {
       char x = value.charAt(i);
-      if (x == '\"' || x == '\\' || x == '/')
-        sb.append('\\');
-      sb.append(x);
+      if (x == '\n')
+        sb.append('\\').append('n');
+      else if (x == '\r')
+        sb.append('\\').append('r');
+      else if (x == '\t')
+        sb.append('\\').append('t');
+      else if (x == '\b')
+        sb.append('\\').append('b');
+      else if (x == '\f')
+        sb.append('\\').append('f');
+      else
+      {
+        if (x == '\"' || x == '\\' || x == '/')
+          sb.append('\\');
+        sb.append(x);
+      }
     }
     sb.append("\"");
     return sb.toString();
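For reference, a minimal standalone sketch (not part of this patch) of the escaping rules the hunk above introduces: control characters become two-character escapes, and quote, backslash, and forward slash are backslash-escaped before the value is embedded in the JSON request body. The class and helper names are illustrative only.

    public class JsonEscapeSketch
    {
      // Same escaping rules as the jsonStringEscape() change above.
      static String escape(String value)
      {
        StringBuilder sb = new StringBuilder("\"");
        for (int i = 0; i < value.length(); i++)
        {
          char x = value.charAt(i);
          if (x == '\n') sb.append("\\n");
          else if (x == '\r') sb.append("\\r");
          else if (x == '\t') sb.append("\\t");
          else if (x == '\b') sb.append("\\b");
          else if (x == '\f') sb.append("\\f");
          else
          {
            if (x == '\"' || x == '\\' || x == '/')
              sb.append('\\');
            sb.append(x);
          }
        }
        return sb.append("\"").toString();
      }

      public static void main(String[] args)
      {
        // Prints: "line one\nline two \"quoted\" a\/b"
        System.out.println(escape("line one\nline two \"quoted\" a/b"));
      }
    }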
diff --git a/connectors/elasticsearch/connector/src/main/java/org/apache/manifoldcf/agents/output/elasticsearch/ElasticSearchSpecs.java b/connectors/elasticsearch/connector/src/main/java/org/apache/manifoldcf/agents/output/elasticsearch/ElasticSearchSpecs.java
index 4c25471..351bc5f 100644
--- a/connectors/elasticsearch/connector/src/main/java/org/apache/manifoldcf/agents/output/elasticsearch/ElasticSearchSpecs.java
+++ b/connectors/elasticsearch/connector/src/main/java/org/apache/manifoldcf/agents/output/elasticsearch/ElasticSearchSpecs.java
@@ -156,8 +156,9 @@
 
   public boolean checkExtension(String extension)
   {
-    if (extension == null)
-      extension = "";
+    if (extension == null || extension.length() == 0)
+      // Special character to match - see CONNECTORS-707
+      extension = ".";
     return extensionSet.contains(extension);
   }
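For reference, a minimal standalone sketch (not part of this patch) of the convention introduced above for CONNECTORS-707: a missing or empty extension is mapped to the special marker ".", so extension-less documents can be explicitly included or excluded in the specification. The names below are illustrative only.

    import java.util.Arrays;
    import java.util.HashSet;
    import java.util.Set;

    public class NoExtensionMarkerSketch
    {
      static boolean checkExtension(Set<String> extensionSet, String extension)
      {
        if (extension == null || extension.length() == 0)
          extension = ".";   // special marker meaning "no extension"
        return extensionSet.contains(extension);
      }

      public static void main(String[] args)
      {
        // Putting "." in the set allows documents that have no extension at all.
        Set<String> allowed = new HashSet<String>(Arrays.asList("pdf", "."));

        System.out.println(checkExtension(allowed, "pdf"));  // true
        System.out.println(checkExtension(allowed, ""));     // true, via the "." marker
        System.out.println(checkExtension(allowed, "exe"));  // false
      }
    }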
 
diff --git a/connectors/elasticsearch/pom.xml b/connectors/elasticsearch/pom.xml
index 7473a38..4d97b6d 100644
--- a/connectors/elasticsearch/pom.xml
+++ b/connectors/elasticsearch/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.manifoldcf</groupId>
     <artifactId>mcf-connectors</artifactId>
-    <version>1.2-SNAPSHOT</version>
+    <version>1.5-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
   
@@ -42,16 +42,26 @@
     <resources>
       <resource>
         <directory>${basedir}/connector/src/main/resources</directory>
+        <includes>
+          <include>**/*.html</include>
+          <include>**/*.js</include>
+        </includes>
       </resource>
-    </resources>
+      <resource>
+        <directory>${basedir}/connector/src/main/native2ascii</directory>
+        <includes>
+          <include>**/*.properties</include>
+        </includes>
+      </resource>
+    </resources> 
+
     <plugins>
       <plugin>
         <groupId>org.codehaus.mojo</groupId>
         <artifactId>native2ascii-maven-plugin</artifactId>
-        <version>1.0-alpha-1</version>
+        <version>1.0-beta-1</version>
         <configuration>
-            <dest>target/classes</dest>
-            <src>connector/src/main/native2ascii</src>
+            <workDir>target/classes</workDir>
         </configuration>
         <executions>
             <execution>
@@ -61,7 +71,9 @@
                 </goals>
                 <configuration>
                     <encoding>UTF8</encoding>
-                    <includes>**/*.properties</includes>
+                    <includes>
+                      <include>**/*.properties</include>
+                    </includes>
                 </configuration>
             </execution>
         </executions>
@@ -93,7 +105,7 @@
     <dependency>
       <groupId>org.apache.httpcomponents</groupId>
       <artifactId>httpclient</artifactId>
-      <version>${httpcomponent.version}</version>
+      <version>${httpcomponent.httpclient.version}</version>
     </dependency>
     <dependency>
       <groupId>commons-logging</groupId>
diff --git a/connectors/email/build.xml b/connectors/email/build.xml
new file mode 100644
index 0000000..7e7e045
--- /dev/null
+++ b/connectors/email/build.xml
@@ -0,0 +1,22 @@
+<!--
+Licensed to the Apache Software Foundation (ASF) under one or more
+contributor license agreements. See the NOTICE file distributed with
+this work for additional information regarding copyright ownership.
+The ASF licenses this file to You under the Apache License, Version 2.0
+(the "License"); you may not use this file except in compliance with
+the License. You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+-->
+
+<project name="email" default="all">
+
+    <import file="../connector-build.xml"/>
+
+</project>
\ No newline at end of file
diff --git a/connectors/email/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/email/EmailConfig.java b/connectors/email/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/email/EmailConfig.java
new file mode 100644
index 0000000..d05baab
--- /dev/null
+++ b/connectors/email/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/email/EmailConfig.java
@@ -0,0 +1,113 @@
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+
+package org.apache.manifoldcf.crawler.connectors.email;
+
+
+/**
+* Parameters data for the Email repository connector.
+*/
+public class EmailConfig {
+
+  /**
+  * Username
+  */
+  public static final String USERNAME_PARAM = "username";
+
+  /**
+  * Password
+  */
+  public static final String PASSWORD_PARAM = "password";
+
+  /**
+  * Protocol
+  */
+  public static final String PROTOCOL_PARAM = "protocol";
+
+  /**
+  * Server name
+  */
+  public static final String SERVER_PARAM = "server";
+
+  /**
+  * Port
+  */
+  public static final String PORT_PARAM = "port";
+
+  /**
+  * URL template
+  */
+  public static final String URL_PARAM = "url";
+  
+  // Protocol options
+  
+  public static final String PROTOCOL_IMAP = "IMAP";
+  public static final String PROTOCOL_IMAPS = "IMAP-SSL";
+  public static final String PROTOCOL_POP3 = "POP3";
+  public static final String PROTOCOL_POP3S = "POP3-SSL";
+  
+  // Protocol providers
+  
+  public static final String PROTOCOL_IMAP_PROVIDER = "imap";
+  public static final String PROTOCOL_IMAPS_PROVIDER = "imaps";
+  public static final String PROTOCOL_POP3_PROVIDER = "pop3";
+  public static final String PROTOCOL_POP3S_PROVIDER = "pop3s";
+  
+  // Default values and various other constants
+  
+  public static final String PROTOCOL_DEFAULT_VALUE = "IMAP";
+  public static final String PORT_DEFAULT_VALUE = "";
+  public static final String[] BASIC_METADATA = {"To","From","Subject","Body","Date","Encoding of Attachment","MIME type of attachment"};
+  public static final String[] BASIC_SEARCHABLE_ATTRIBUTES = {"To","From","Subject","Body","Date"};
+
+  // Specification nodes
+  
+  public static final String NODE_PROPERTIES = "properties";
+  public static final String NODE_METADATA = "metadata";
+  public static final String NODE_FILTER = "filter";
+  public static final String NODE_FOLDER = "folder";
+  
+  public static final String ATTRIBUTE_NAME = "name";
+  public static final String ATTRIBUTE_VALUE = "value";
+
+  // Metadata field names
+  
+  public static final String EMAIL_SUBJECT = "subject";
+  public static final String EMAIL_FROM = "from";
+  public static final String EMAIL_TO = "to";
+  public static final String EMAIL_BODY = "body";
+  public static final String EMAIL_DATE = "date";
+  public static final String EMAIL_ATTACHMENT_ENCODING = "encoding of attachment";
+  public static final String EMAIL_ATTACHMENT_MIMETYPE = "mime type of attachment";
+  public static final String EMAIL_VERSION = "1.0";
+  
+  // Mime types
+  
+  public static final String MIMETYPE_TEXT_PLAIN = "text/plain";
+  public static final String MIMETYPE_HTML = "text/html";
+  
+  // Fields
+  
+  public static final String ENCODING_FIELD = "encoding";
+  public static final String MIMETYPE_FIELD = "mimetype";
+  //public static final String TO = "To";
+  
+  // Activity names
+  
+  public final static String ACTIVITY_FETCH = "fetch";
+
+}
\ No newline at end of file
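For orientation, a minimal standalone sketch (not part of this patch) of how the protocol constants above line up with JavaMail providers when opening a mail store; the connector itself does this through EmailSession and the threads in EmailConnector. The host, port, credentials, and folder name below are placeholders, and the sketch assumes the javax.mail jar is on the classpath.

    import java.util.Properties;
    import javax.mail.Folder;
    import javax.mail.Session;
    import javax.mail.Store;

    public class MailStoreSketch
    {
      public static void main(String[] args) throws Exception
      {
        // EmailConfig.PROTOCOL_IMAPS ("IMAP-SSL") corresponds to the "imaps" JavaMail
        // provider, i.e. EmailConfig.PROTOCOL_IMAPS_PROVIDER.
        String provider = "imaps";

        Session session = Session.getInstance(new Properties());
        Store store = session.getStore(provider);
        store.connect("imap.example.com", 993, "user@example.com", "secret");
        try
        {
          Folder inbox = store.getFolder("INBOX");
          inbox.open(Folder.READ_ONLY);
          System.out.println("Messages in INBOX: " + inbox.getMessageCount());
          inbox.close(false);
        }
        finally
        {
          store.close();
        }
      }
    }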
diff --git a/connectors/email/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/email/EmailConnector.java b/connectors/email/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/email/EmailConnector.java
new file mode 100644
index 0000000..9d493c0
--- /dev/null
+++ b/connectors/email/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/email/EmailConnector.java
@@ -0,0 +1,1754 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+
+package org.apache.manifoldcf.crawler.connectors.email;
+
+import org.apache.commons.lang.StringUtils;
+import org.apache.manifoldcf.agents.interfaces.RepositoryDocument;
+import org.apache.manifoldcf.agents.interfaces.ServiceInterruption;
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.crawler.interfaces.*;
+import org.apache.manifoldcf.crawler.system.Logging;
+
+import java.io.*;
+import java.util.*;
+import javax.mail.*;
+import javax.mail.internet.MimeBodyPart;
+import javax.mail.internet.MimeMessage;
+import javax.mail.search.*;
+
+/**
+* This interface describes an instance of a connection between a repository and ManifoldCF's
+* standard "pull" ingestion agent.
+* <p/>
+* Each instance of this interface is used in only one thread at a time. Connection Pooling
+* on these kinds of objects is performed by the factory which instantiates repository connectors
+* from symbolic names and config parameters, and is pooled by these parameters. That is, a pooled connector
+* handle is used only if all the connection parameters for the handle match.
+* <p/>
+* Implementers of this interface should provide a default constructor which has this signature:
+* <p/>
+* xxx();
+* <p/>
+* Connectors are either configured or not. If configured, they will persist in a pool, and be
+* reused multiple times. Certain methods of a connector may be called before the connector is
+* configured. This includes basically all methods that permit inspection of the connector's
+* capabilities. The complete list is:
+* <p/>
+* <p/>
+* The purpose of the repository connector is to allow documents to be fetched from the repository.
+* <p/>
+* Each repository connector describes a set of documents that are known only to that connector.
+* It therefore establishes a space of document identifiers. Each connector will only ever be
+* asked to deal with identifiers that have in some way originated from the connector.
+* <p/>
+* Documents are fetched in three stages. First, the getDocuments() method is called in the connector
+* implementation. This returns a set of document identifiers. The document identifiers are used to
+* obtain the current document version strings in the second stage, using the getDocumentVersions() method.
+* The last stage is processDocuments(), which queues up any additional documents needed, and also ingests.
+* This method will not be called if the document version seems to indicate that no document change took
+* place.
+*/
+
+public class EmailConnector extends org.apache.manifoldcf.crawler.connectors.BaseRepositoryConnector {
+
+  protected final static long SESSION_EXPIRATION_MILLISECONDS = 300000L;
+  
+  // Local variables.
+  protected long sessionExpiration = -1L;
+  
+  // Parameters for establishing a session
+  
+  protected String server = null;
+  protected String portString = null;
+  protected String username = null;
+  protected String password = null;
+  protected String protocol = null;
+  protected Properties properties = null;
+  protected String urlTemplate = null;
+  
+  // Local session handle
+  protected EmailSession session = null;
+
+  private static Map<String,String> providerMap;
+  static
+  {
+    providerMap = new HashMap<String,String>();
+    providerMap.put(EmailConfig.PROTOCOL_POP3, EmailConfig.PROTOCOL_POP3_PROVIDER);
+    providerMap.put(EmailConfig.PROTOCOL_POP3S, EmailConfig.PROTOCOL_POP3S_PROVIDER);
+    providerMap.put(EmailConfig.PROTOCOL_IMAP, EmailConfig.PROTOCOL_IMAP_PROVIDER);
+    providerMap.put(EmailConfig.PROTOCOL_IMAPS, EmailConfig.PROTOCOL_IMAPS_PROVIDER);
+  }
+  //////////////////////////////////Start of Basic Connector Methods/////////////////////////
+
+  /**
+  * Connect.
+  *
+  * @param configParameters is the set of configuration parameters, which
+  * in this case describe the mail server connection.
+  */
+  @Override
+  public void connect(ConfigParams configParameters) {
+    super.connect(configParameters);
+    this.server = configParameters.getParameter(EmailConfig.SERVER_PARAM);
+    this.portString = configParameters.getParameter(EmailConfig.PORT_PARAM);
+    this.protocol = configParameters.getParameter(EmailConfig.PROTOCOL_PARAM);
+    this.username = configParameters.getParameter(EmailConfig.USERNAME_PARAM);
+    this.password = configParameters.getObfuscatedParameter(EmailConfig.PASSWORD_PARAM);
+    this.urlTemplate = configParameters.getParameter(EmailConfig.URL_PARAM);
+    this.properties = new Properties();
+    int i = 0;
+    while (i < configParameters.getChildCount()) //Each property in the posted property set is added as a configuration node
+    {
+      ConfigNode cn = configParameters.getChild(i++);
+      if (cn.getType().equals(EmailConfig.NODE_PROPERTIES)) {
+        String findParameterName = cn.getAttributeValue(EmailConfig.ATTRIBUTE_NAME);
+        String findParameterValue = cn.getAttributeValue(EmailConfig.ATTRIBUTE_VALUE);
+        this.properties.setProperty(findParameterName, findParameterValue);
+      }
+    }
+  }
+
+  /**
+  * Close the connection. Call this before discarding this instance of the
+  * repository connector.
+  */
+  @Override
+  public void disconnect()
+    throws ManifoldCFException {
+    this.urlTemplate = null;
+    this.server = null;
+    this.portString = null;
+    this.protocol = null;
+    this.username = null;
+    this.password = null;
+    this.properties = null;
+    finalizeConnection();
+    super.disconnect();
+  }
+
+  /**
+  * This method is periodically called for all connectors that are connected but not
+  * in active use.
+  */
+  @Override
+  public void poll() throws ManifoldCFException {
+    if (session != null)
+    {
+      if (System.currentTimeMillis() >= sessionExpiration)
+        finalizeConnection();
+    }
+  }
+
+  /**
+  * Test the connection. Returns a string describing the connection integrity.
+  *
+  * @return the connection's status as a displayable string.
+  */
+  @Override
+  public String check()
+      throws ManifoldCFException {
+    try {
+      checkConnection();
+      return super.check();
+    } catch (ServiceInterruption e) {
+      return "Connection temporarily failed: " + e.getMessage();
+    } catch (ManifoldCFException e) {
+      return "Connection failed: " + e.getMessage();
+    }
+  }
+
+  protected void checkConnection() throws ManifoldCFException, ServiceInterruption {
+    // Force a re-connection
+    finalizeConnection();
+    getSession();
+    try {
+      CheckConnectionThread cct = new CheckConnectionThread(session);
+      cct.start();
+      cct.finishUp();
+    } catch (InterruptedException e) {
+      throw new ManifoldCFException(e.getMessage(),ManifoldCFException.INTERRUPTED);
+    } catch (MessagingException e) {
+      handleMessagingException(e,"checking the connection");
+    }
+  }
+
+  ///////////////////////////////End of Basic Connector Methods////////////////////////////////////////
+
+  //////////////////////////////Start of Repository Connector Method///////////////////////////////////
+
+  @Override
+  public int getConnectorModel() {
+    return MODEL_ADD; //Change is not applicable in context of email
+  }
+
+  /**
+  * Return the list of activities that this connector supports (i.e. writes into the log).
+  *
+  * @return the list.
+  */
+  @Override
+  public String[] getActivitiesList() {
+    return new String[]{EmailConfig.ACTIVITY_FETCH};
+  }
+
+  /**
+  * Get the bin name strings for a document identifier. The bin name describes the queue to which the
+  * document will be assigned for throttling purposes. Throttling controls the rate at which items in a
+  * given queue are fetched; it does not say anything about the overall fetch rate, which may operate on
+  * multiple queues or bins.
+  * For example, if you implement a web crawler, a good choice of bin name would be the server name, since
+  * that is likely to correspond to a real resource that will need real throttle protection.
+  *
+  * @param documentIdentifier is the document identifier.
+  * @return the set of bin names. If an empty array is returned, it is equivalent to there being no request
+  * rate throttling available for this identifier.
+  */
+  @Override
+  public String[] getBinNames(String documentIdentifier) {
+    return new String[]{server};
+  }
+
+  /**
+  * Get the maximum number of documents to amalgamate together into one batch, for this connector.
+  *
+  * @return the maximum number. 0 indicates "unlimited".
+  */
+  @Override
+  public int getMaxDocumentRequest() {
+    return 10;
+  }
+
+  /**
+  * Queue "seed" documents. Seed documents are the starting places for crawling activity. Documents
+  * are seeded when this method calls appropriate methods in the passed in ISeedingActivity object.
+  * <p/>
+  * This method can choose to find repository changes that happen only during the specified time interval.
+  * The seeds recorded by this method will be viewed by the framework based on what the
+  * getConnectorModel() method returns.
+  * <p/>
+  * It is not a big problem if the connector chooses to create more seeds than are
+  * strictly necessary; it is merely a question of overall work required.
+  * <p/>
+  * The times passed to this method may be interpreted for greatest efficiency. The time ranges
+  * any given job uses with this connector will not overlap, but will proceed starting at 0 and going
+  * to the "current time", each time the job is run. For continuous crawling jobs, this method will
+  * be called once, when the job starts, and at various periodic intervals as the job executes.
+  * <p/>
+  * When a job's specification is changed, the framework automatically resets the seeding start time to 0. The
+  * seeding start time may also be set to 0 on each job run, depending on the connector model returned by
+  * getConnectorModel().
+  * <p/>
+  * Note that it is always ok to send MORE documents rather than less to this method.
+  *
+  * @param activities is the interface this method should use to perform whatever framework actions are desired.
+  * @param spec is a document specification (that comes from the job).
+  * @param startTime is the beginning of the time range to consider, inclusive.
+  * @param endTime is the end of the time range to consider, exclusive.
+  * @param jobMode is an integer describing how the job is being run, whether continuous or once-only.
+  */
+  @Override
+  public void addSeedDocuments(ISeedingActivity activities,
+    DocumentSpecification spec, long startTime, long endTime, int jobMode)
+    throws ManifoldCFException, ServiceInterruption {
+
+    getSession();
+
+    int i = 0;
+    Map<String,String> findMap = new HashMap<String,String>();
+    List<String> folderNames = new ArrayList<String>();
+    while (i < spec.getChildCount()) {
+      SpecificationNode sn = spec.getChild(i++);
+      if (sn.getType().equals(EmailConfig.NODE_FOLDER)) {
+        folderNames.add(sn.getAttributeValue(EmailConfig.ATTRIBUTE_NAME));
+      } else if (sn.getType().equals(EmailConfig.NODE_FILTER)) {
+        String findParameterName, findParameterValue;
+        findParameterName = sn.getAttributeValue(EmailConfig.ATTRIBUTE_NAME);
+        findParameterValue = sn.getAttributeValue(EmailConfig.ATTRIBUTE_VALUE);
+        findMap.put(findParameterName, findParameterValue);
+
+      }
+
+    }
+    
+    for (String folderName : folderNames)
+    {
+      try {
+        OpenFolderThread oft = new OpenFolderThread(session, folderName);
+        oft.start();
+        Folder folder = oft.finishUp();
+        try
+        {
+          Message[] messages = findMessages(folder, startTime, endTime, findMap);
+          for (Message message : messages) {
+            String emailID = ((MimeMessage) message).getMessageID();
+            activities.addSeedDocument(createDocumentIdentifier(folderName,emailID));
+          }
+        }
+        finally
+        {
+          CloseFolderThread cft = new CloseFolderThread(session, folder);
+          cft.start();
+          cft.finishUp();
+        }
+      } catch (InterruptedException e) {
+        throw new ManifoldCFException(e.getMessage(),ManifoldCFException.INTERRUPTED);
+      } catch (MessagingException e) {
+        handleMessagingException(e, "finding emails");
+      }
+    }
+
+  }
+
+  /*
+  This method returns the list of messages that match the given criteria.
+  */
+  private Message[] findMessages(Folder folder, long startTime, long endTime, Map<String,String> findMap)
+    throws MessagingException, InterruptedException {
+    String findParameterName;
+    String findParameterValue;
+    
+    SearchTerm searchTerm = null;
+    
+    Iterator<Map.Entry<String,String>> it = findMap.entrySet().iterator();
+    while (it.hasNext()) {
+      Map.Entry<String,String> pair = it.next();
+      findParameterName = pair.getKey().toLowerCase();
+      findParameterValue = pair.getValue();
+      if (Logging.connectors.isDebugEnabled())
+        Logging.connectors.debug("Email: Finding emails where '" + findParameterName +
+            "' = '" + findParameterValue + "'");
+      SearchTerm searchClause = null;
+      if (findParameterName.equals(EmailConfig.EMAIL_SUBJECT)) {
+        searchClause = new SubjectTerm(findParameterValue);
+      } else if (findParameterName.equals(EmailConfig.EMAIL_FROM)) {
+        searchClause = new FromStringTerm(findParameterValue);
+      } else if (findParameterName.equals(EmailConfig.EMAIL_TO)) {
+        searchClause = new RecipientStringTerm(Message.RecipientType.TO, findParameterValue);
+      } else if (findParameterName.equals(EmailConfig.EMAIL_BODY)) {
+        searchClause = new BodyTerm(findParameterValue);
+      }
+      
+      if (searchClause != null)
+      {
+        if (searchTerm == null)
+          searchTerm = searchClause;
+        else
+          searchTerm = new AndTerm(searchTerm, searchClause);
+      }
+      else
+      {
+        Logging.connectors.warn("Email: Unknown filter parameter name: '"+findParameterName+"'");
+      }
+    }
+    
+    Message[] result;
+    if (searchTerm == null)
+    {
+      GetMessagesThread gmt = new GetMessagesThread(session, folder);
+      gmt.start();
+      result = gmt.finishUp();
+    }
+    else
+    {
+      SearchMessagesThread smt = new SearchMessagesThread(session, folder, searchTerm);
+      smt.start();
+      result = smt.finishUp();
+    }
+    return result;
+  }
+
+  protected void getSession()
+    throws ManifoldCFException, ServiceInterruption {
+    if (session == null) {
+      
+      // Check that all the required parameters are there.
+      if (urlTemplate == null)
+        throw new ManifoldCFException("Missing url parameter");
+      if (server == null)
+        throw new ManifoldCFException("Missing server parameter");
+      if (properties == null)
+        throw new ManifoldCFException("Missing server properties");
+      if (protocol == null)
+        throw new ManifoldCFException("Missing protocol parameter");
+      
+      // Create a session.
+      int port;
+      if (portString != null && portString.length() > 0)
+      {
+        try
+        {
+          port = Integer.parseInt(portString);
+        }
+        catch (NumberFormatException e)
+        {
+          throw new ManifoldCFException("Port number has bad format: "+e.getMessage(),e);
+        }
+      }
+      else
+        port = -1;
+
+      try {
+        ConnectThread connectThread = new ConnectThread(server, port, username, password,
+          providerMap.get(protocol), properties);
+        connectThread.start();
+        session = connectThread.finishUp();
+      } catch (InterruptedException e) {
+        throw new ManifoldCFException(e.getMessage(),ManifoldCFException.INTERRUPTED);
+      } catch (MessagingException e) {
+        handleMessagingException(e, "connecting");
+      }
+    }
+    sessionExpiration = System.currentTimeMillis() + SESSION_EXPIRATION_MILLISECONDS;
+  }
+
+  protected void finalizeConnection() {
+    if (session != null) {
+      try {
+        CloseSessionThread closeSessionThread = new CloseSessionThread(session);
+        closeSessionThread.start();
+        closeSessionThread.finishUp();
+      } catch (InterruptedException e) {
+      } catch (MessagingException e) {
+        Logging.connectors.warn("Error while closing connection to server: " + e.getMessage(),e);
+      } finally {
+        session = null;
+      }
+    }
+  }
+
+  /**
+  * Get document versions given an array of document identifiers.
+  * This method is called for EVERY document that is considered. It is therefore important to perform
+  * as little work as possible here.
+  * The connector will be connected before this method can be called.
+  *
+  * @param documentIdentifiers is the array of local document identifiers, as understood by this connector.
+  * @param oldVersions is the corresponding array of version strings that have been saved for the document identifiers.
+  * A null value indicates that this is a first-time fetch, while an empty string indicates that the previous document
+  * had an empty version string.
+  * @param activities is the interface this method should use to perform whatever framework actions are desired.
+  * @param spec is the current document specification for the current job. If there is a dependency on this
+  * specification, then the version string should include the pertinent data, so that reingestion will occur
+  * when the specification changes. This is primarily useful for metadata.
+  * @param jobMode is an integer describing how the job is being run, whether continuous or once-only.
+  * @param usesDefaultAuthority will be true only if the authority in use for these documents is the default one.
+  * @return the corresponding version strings, with null in the places where the document no longer exists.
+  * Empty version strings indicate that there is no versioning ability for the corresponding document, and the document
+  * will always be processed.
+  */
+  @Override
+  public String[] getDocumentVersions(String[] documentIdentifiers, String[] oldVersions, IVersionActivity activities,
+    DocumentSpecification spec, int jobMode, boolean usesDefaultAuthority)
+    throws ManifoldCFException, ServiceInterruption {
+
+    String[] result = new String[documentIdentifiers.length];
+    for (int i = 0; i < documentIdentifiers.length; i++)
+    {
+      result[i] = "_" + urlTemplate;   // NOT empty; we need to make ManifoldCF understand that this is a document that will never change.
+    }
+    return result;
+
+  }
+
+  /**
+  * Process a set of documents.
+  * This is the method that should cause each document to be fetched, processed, and the results either added
+  * to the queue of documents for the current job, and/or entered into the incremental ingestion manager.
+  * The document specification allows this class to filter what is done based on the job.
+  * The connector will be connected before this method can be called.
+  *
+  * @param documentIdentifiers is the set of document identifiers to process.
+  * @param versions is the corresponding document versions to process, as returned by getDocumentVersions() above.
+  * The implementation may choose to ignore this parameter and always process the current version.
+  * @param activities is the interface this method should use to queue up new document references
+  * and ingest documents.
+  * @param spec is the document specification.
+  * @param scanOnly is an array corresponding to the document identifiers. It is set to true to indicate when the processing
+  * should only find other references, and should not actually call the ingestion methods.
+  * @param jobMode is an integer describing how the job is being run, whether continuous or once-only.
+  */
+  @Override
+  public void processDocuments(String[] documentIdentifiers, String[] versions, IProcessActivity activities,
+    DocumentSpecification spec, boolean[] scanOnly, int jobMode)
+    throws ManifoldCFException, ServiceInterruption {
+    getSession();
+    int i = 0;
+    List<String> requiredMetadata = new ArrayList<String>();
+    while (i < spec.getChildCount()) {
+      SpecificationNode sn = spec.getChild(i++);
+      if (sn.getType().equals(EmailConfig.NODE_METADATA)) {
+        String metadataAttribute = sn.getAttributeValue(EmailConfig.ATTRIBUTE_NAME);
+        requiredMetadata.add(metadataAttribute);
+      }
+    }
+    
+    // Keep a cached set of open folders
+    Map<String,Folder> openFolders = new HashMap<String,Folder>();
+    try {
+      i = 0;
+      while (i < documentIdentifiers.length) {
+        String compositeID = documentIdentifiers[i];
+        String version = versions[i];
+        String folderName = extractFolderNameFromDocumentIdentifier(compositeID);
+        String id = extractEmailIDFromDocumentIdentifier(compositeID);
+        try {
+          Folder folder = openFolders.get(folderName);
+          if (folder == null)
+          {
+            OpenFolderThread oft = new OpenFolderThread(session, folderName);
+            oft.start();
+            folder = oft.finishUp();
+            openFolders.put(folderName,folder);
+          }
+          
+          long startTime = System.currentTimeMillis();
+          InputStream is = null;
+          if (Logging.connectors.isDebugEnabled())
+            Logging.connectors.debug("Email: Processing document identifier '"
+              + compositeID + "'");
+          SearchTerm messageIDTerm = new MessageIDTerm(id);
+          
+          SearchMessagesThread smt = new SearchMessagesThread(session, folder, messageIDTerm);
+          smt.start();
+          Message[] message = smt.finishUp();
+
+          for (Message msg : message) {
+            RepositoryDocument rd = new RepositoryDocument();
+            Date setDate = msg.getSentDate();
+            rd.setFileName(msg.getFileName());
+            is = msg.getInputStream();
+            rd.setBinary(is, msg.getSize());
+            String subject = StringUtils.EMPTY;
+            for (String metadata : requiredMetadata) {
+              if (metadata.toLowerCase().equals(EmailConfig.EMAIL_TO)) {
+                Address[] to = msg.getRecipients(Message.RecipientType.TO);
+                String[] toStr = new String[to.length];
+                int j = 0;
+                for (Address address : to) {
+                  toStr[j++] = address.toString();
+                }
+                rd.addField(EmailConfig.EMAIL_TO, toStr);
+              } else if (metadata.toLowerCase().equals(EmailConfig.EMAIL_FROM)) {
+                Address[] from = msg.getFrom();
+                String[] fromStr = new String[from.length];
+                int j = 0;
+                for (Address address : from) {
+                  fromStr[j++] = address.toString();
+                }
+                rd.addField(EmailConfig.EMAIL_FROM, fromStr);
+
+              } else if (metadata.toLowerCase().equals(EmailConfig.EMAIL_SUBJECT)) {
+                subject = msg.getSubject();
+                rd.addField(EmailConfig.EMAIL_SUBJECT, subject);
+              } else if (metadata.toLowerCase().equals(EmailConfig.EMAIL_BODY)) {
+                Multipart mp = (Multipart) msg.getContent();
+                for (int j = 0, n = mp.getCount(); j < n; j++) {
+                  Part part = mp.getBodyPart(j);
+                  String disposition = part.getDisposition();
+                  if ((disposition == null)) {
+                    MimeBodyPart mbp = (MimeBodyPart) part;
+                    if (mbp.isMimeType(EmailConfig.MIMETYPE_TEXT_PLAIN)) {
+                      rd.addField(EmailConfig.EMAIL_BODY, mbp.getContent().toString());
+                    } else if (mbp.isMimeType(EmailConfig.MIMETYPE_HTML)) {
+                      rd.addField(EmailConfig.EMAIL_BODY, mbp.getContent().toString()); //handle html accordingly. Returns content with html tags
+                    }
+                  }
+                }
+              } else if (metadata.toLowerCase().equals(EmailConfig.EMAIL_DATE)) {
+                Date sentDate = msg.getSentDate();
+                rd.addField(EmailConfig.EMAIL_DATE, sentDate.toString());
+              } else if (metadata.toLowerCase().equals(EmailConfig.EMAIL_ATTACHMENT_ENCODING)) {
+                Multipart mp = (Multipart) msg.getContent();
+                if (mp != null) {
+                  String[] encoding = new String[mp.getCount()];
+                  for (int k = 0, n = mp.getCount(); k < n; k++) {
+                    Part part = mp.getBodyPart(k);
+                    String disposition = part.getDisposition();
+                    if ((disposition != null) &&
+                        ((disposition.equals(Part.ATTACHMENT) ||
+                            (disposition.equals(Part.INLINE))))) {
+                      encoding[k] = part.getFileName().split("\\?")[1];
+
+                    }
+                  }
+                  rd.addField(EmailConfig.ENCODING_FIELD, encoding);
+                }
+              } else if (metadata.toLowerCase().equals(EmailConfig.EMAIL_ATTACHMENT_MIMETYPE)) {
+                Multipart mp = (Multipart) msg.getContent();
+                String[] MIMEType = new String[mp.getCount()];
+                for (int k = 0, n = mp.getCount(); k < n; k++) {
+                  Part part = mp.getBodyPart(k);
+                  String disposition = part.getDisposition();
+                  if ((disposition != null) &&
+                      ((disposition.equals(Part.ATTACHMENT) ||
+                          (disposition.equals(Part.INLINE))))) {
+                    MIMEType[k] = part.getContentType();
+
+                  }
+                }
+                rd.addField(EmailConfig.MIMETYPE_FIELD, MIMEType);
+              }
+            }
+            String documentURI = makeDocumentURI(urlTemplate, folderName, id);
+            activities.ingestDocument(id, version, documentURI, rd);
+
+          }
+        } catch (InterruptedException e) {
+          throw new ManifoldCFException(e.getMessage(), ManifoldCFException.INTERRUPTED);
+        } catch (MessagingException e) {
+          handleMessagingException(e, "processing email");
+        } catch (IOException e) {
+          handleIOException(e, "processing email");
+          throw new ManifoldCFException(e.getMessage(), e);
+        }
+        
+        i++;
+      }
+    }
+    finally
+    {
+      for (Folder f : openFolders.values())
+      {
+        try
+        {
+          CloseFolderThread cft = new CloseFolderThread(session, f);
+          cft.start();
+          cft.finishUp();
+        }
+        catch (InterruptedException e)
+        {
+          throw new ManifoldCFException(e.getMessage(),ManifoldCFException.INTERRUPTED);
+        }
+        catch (MessagingException e)
+        {
+          handleMessagingException(e, "closing folders");
+        }
+      }
+    }
+  }
+
+  //////////////////////////////End of Repository Connector Methods///////////////////////////////////
+
+
+  ///////////////////////////////////////Start of Configuration UI/////////////////////////////////////
+
+  /**
+  * Output the configuration header section.
+  * This method is called in the head section of the connector's configuration page. Its purpose is to
+  * add the required tabs to the list, and to output any javascript methods that might be needed by
+  * the configuration editing HTML.
+  * The connector does not need to be connected for this method to be called.
+  *
+  * @param threadContext is the local thread context.
+  * @param out is the output to which any HTML should be sent.
+  * @param locale is the desired locale.
+  * @param parameters are the configuration parameters, as they currently exist, for this connection being configured.
+  * @param tabsArray is an array of tab names. Add to this array any tab names that are specific to the connector.
+  */
+  @Override
+  public void outputConfigurationHeader(IThreadContext threadContext, IHTTPOutput out,
+    Locale locale, ConfigParams parameters, List<String> tabsArray)
+    throws ManifoldCFException, IOException {
+    tabsArray.add(Messages.getString(locale, "EmailConnector.Server"));
+    tabsArray.add(Messages.getString(locale, "EmailConnector.URL"));
+    // Map the parameters
+    Map<String, Object> paramMap = new HashMap<String, Object>();
+
+    // Fill in the parameters from each tab
+    fillInServerConfigurationMap(paramMap, out, parameters);
+    fillInURLConfigurationMap(paramMap, out, parameters);
+
+    // Output the Javascript - only one Velocity template for all tabs
+    Messages.outputResourceWithVelocity(out, locale, "ConfigurationHeader.js", paramMap);
+  }
+
+  @Override
+  public void outputConfigurationBody(IThreadContext threadContext, IHTTPOutput out,
+    Locale locale, ConfigParams parameters, String tabName)
+    throws ManifoldCFException, IOException {
+    // Output the Server tab
+    Map<String, Object> paramMap = new HashMap<String, Object>();
+    // Set the tab name
+    paramMap.put("TabName", tabName);
+    // Fill in the parameters
+    fillInServerConfigurationMap(paramMap, out, parameters);
+    fillInURLConfigurationMap(paramMap, out, parameters);
+    Messages.outputResourceWithVelocity(out, locale, "Configuration_Server.html", paramMap);
+    Messages.outputResourceWithVelocity(out, locale, "Configuration_URL.html", paramMap);
+  }
+
+  private static void fillInServerConfigurationMap(Map<String, Object> paramMap, IPasswordMapperActivity mapper, ConfigParams parameters) {
+    int i = 0;
+    String username = parameters.getParameter(EmailConfig.USERNAME_PARAM);
+    String password = parameters.getObfuscatedParameter(EmailConfig.PASSWORD_PARAM);
+    String protocol = parameters.getParameter(EmailConfig.PROTOCOL_PARAM);
+    String server = parameters.getParameter(EmailConfig.SERVER_PARAM);
+    String port = parameters.getParameter(EmailConfig.PORT_PARAM);
+    List<Map<String, String>> list = new ArrayList<Map<String, String>>();
+    while (i < parameters.getChildCount()) //Each property in the posted property set is added as a configuration node
+    {
+      ConfigNode cn = parameters.getChild(i++);
+      if (cn.getType().equals(EmailConfig.NODE_PROPERTIES)) {
+        String findParameterName = cn.getAttributeValue(EmailConfig.ATTRIBUTE_NAME);
+        String findParameterValue = cn.getAttributeValue(EmailConfig.ATTRIBUTE_VALUE);
+        Map<String, String> row = new HashMap<String, String>();
+        row.put("name", findParameterName);
+        row.put("value", findParameterValue);
+        list.add(row);
+      }
+    }
+
+    if (username == null)
+      username = StringUtils.EMPTY;
+    if (password == null)
+      password = StringUtils.EMPTY;
+    else
+      password = mapper.mapPasswordToKey(password);
+    if (protocol == null)
+      protocol = EmailConfig.PROTOCOL_DEFAULT_VALUE;
+    if (server == null)
+      server = StringUtils.EMPTY;
+    if (port == null)
+      port = EmailConfig.PORT_DEFAULT_VALUE;
+
+    paramMap.put("USERNAME", username);
+    paramMap.put("PASSWORD", password);
+    paramMap.put("PROTOCOL", protocol);
+    paramMap.put("SERVER", server);
+    paramMap.put("PORT", port);
+    paramMap.put("PROPERTIES", list);
+
+  }
+
+  private static void fillInURLConfigurationMap(Map<String, Object> paramMap, IPasswordMapperActivity mapper, ConfigParams parameters) {
+    String urlTemplate = parameters.getParameter(EmailConfig.URL_PARAM);
+
+    if (urlTemplate == null)
+      urlTemplate = "http://sampleserver/$(FOLDERNAME)?id=$(MESSAGEID)";
+
+    paramMap.put("URL", urlTemplate);
+  }
+
+  /**
+  * Process a configuration post.
+  * This method is called at the start of the connector's configuration page, whenever there is a possibility
+  * that form data for a connection has been posted. Its purpose is to gather form information and modify
+  * the configuration parameters accordingly.
+  * The name of the posted form is always "editconnection".
+  * The connector does not need to be connected for this method to be called.
+  *
+  * @param threadContext is the local thread context.
+  * @param variableContext is the set of variables available from the post, including binary file post information.
+  * @param parameters are the configuration parameters, as they currently exist, for this connection being configured.
+  * @return null if all is well, or a string error message if there is an error that should prevent saving of the
+  * connection (and cause a redirection to an error page).
+  */
+  @Override
+  public String processConfigurationPost(IThreadContext threadContext, IPostParameters variableContext,
+    ConfigParams parameters) throws ManifoldCFException {
+
+    String urlTemplate = variableContext.getParameter("url");
+    if (urlTemplate != null)
+      parameters.setParameter(EmailConfig.URL_PARAM, urlTemplate);
+
+    String userName = variableContext.getParameter("username");
+    if (userName != null)
+      parameters.setParameter(EmailConfig.USERNAME_PARAM, userName);
+
+    String password = variableContext.getParameter("password");
+    if (password != null)
+      parameters.setObfuscatedParameter(EmailConfig.PASSWORD_PARAM, variableContext.mapKeyToPassword(password));
+
+    String protocol = variableContext.getParameter("protocol");
+    if (protocol != null)
+      parameters.setParameter(EmailConfig.PROTOCOL_PARAM, protocol);
+
+    String server = variableContext.getParameter("server");
+    if (server != null)
+      parameters.setParameter(EmailConfig.SERVER_PARAM, server);
+    String port = variableContext.getParameter("port");
+    if (port != null)
+      parameters.setParameter(EmailConfig.PORT_PARAM, port);
+    // Remove old find parameter document specification information
+    removeNodes(parameters, EmailConfig.NODE_PROPERTIES);
+
+    // Parse the number of records that were posted
+    String findCountString = variableContext.getParameter("findcount");
+    if (findCountString != null) {
+      int findCount = Integer.parseInt(findCountString);
+
+      // Loop through them and add new server properties
+      int i = 0;
+      while (i < findCount) {
+        String suffix = "_" + Integer.toString(i++);
+        // Only add the name/value if the item was not deleted.
+        String findParameterOp = variableContext.getParameter("findop" + suffix);
+        if (findParameterOp == null || !findParameterOp.equals("Delete")) {
+          String findParameterName = variableContext.getParameter("findname" + suffix);
+          String findParameterValue = variableContext.getParameter("findvalue" + suffix);
+          addFindParameterNode(parameters, findParameterName, findParameterValue);
+        }
+      }
+    }
+
+    // Now, look for a global "Add" operation
+    String operation = variableContext.getParameter("findop");
+    if (operation != null && operation.equals("Add")) {
+      // Pick up the global parameter name and value
+      String findParameterName = variableContext.getParameter("findname");
+      String findParameterValue = variableContext.getParameter("findvalue");
+      addFindParameterNode(parameters, findParameterName, findParameterValue);
+    }
+
+    return null;
+  }
+
+  /**
+  * View configuration. This method is called in the body section of the
+  * connector's view configuration page. Its purpose is to present the
+  * connection information to the user. The coder can presume that the HTML that
+  * is output from this configuration will be within appropriate <html> and
+  * <body> tags.
+  *
+  * @param threadContext is the local thread context.
+  * @param out is the output to which any HTML should be sent.
+  * @param parameters are the configuration parameters, as they currently exist, for
+  * this connection being configured.
+  */
+  @Override
+  public void viewConfiguration(IThreadContext threadContext, IHTTPOutput out,
+    Locale locale, ConfigParams parameters) throws ManifoldCFException, IOException {
+    Map<String, Object> paramMap = new HashMap<String, Object>();
+
+    // Fill in map from each tab
+    fillInServerConfigurationMap(paramMap, out, parameters);
+    fillInURLConfigurationMap(paramMap, out, parameters);
+
+    Messages.outputResourceWithVelocity(out, locale, "ConfigurationView.html", paramMap);
+  }
+
+
+  /////////////////////////////////End of configuration UI////////////////////////////////////////////////////
+
+
+  /////////////////////////////////Start of Specification UI//////////////////////////////////////////////////
+
+  /**
+  * Output the specification header section.
+  * This method is called in the head section of a job page which has selected a repository connection of the
+  * current type. Its purpose is to add the required tabs to the list, and to output any javascript methods
+  * that might be needed by the job editing HTML.
+  * The connector will be connected before this method can be called.
+  *
+  * @param out is the output to which any HTML should be sent.
+  * @param locale is the desired locale.
+  * @param ds is the current document specification for this job.
+  * @param tabsArray is an array of tab names. Add to this array any tab names that are specific to the connector.
+  */
+  @Override
+  public void outputSpecificationHeader(IHTTPOutput out, Locale locale,
+    DocumentSpecification ds, List<String> tabsArray)
+    throws ManifoldCFException, IOException {
+    // Add the tabs
+    tabsArray.add(Messages.getString(locale, "EmailConnector.Metadata"));
+    tabsArray.add(Messages.getString(locale, "EmailConnector.Filter"));
+    Messages.outputResourceWithVelocity(out, locale, "SpecificationHeader.js", null);
+  }
+
+  /**
+  * Output the specification body section.
+  * This method is called in the body section of a job page which has selected a repository connection of the
+  * current type. Its purpose is to present the required form elements for editing.
+  * The coder can presume that the HTML that is output from this configuration will be within appropriate
+  * <html>, <body>, and <form> tags. The name of the form is always "editjob".
+  * The connector will be connected before this method can be called.
+  *
+  * @param out is the output to which any HTML should be sent.
+  * @param locale is the desired locale.
+  * @param ds is the current document specification for this job.
+  * @param tabName is the current tab name.
+  */
+  @Override
+  public void outputSpecificationBody(IHTTPOutput out, Locale locale,
+    DocumentSpecification ds, String tabName)
+    throws ManifoldCFException, IOException {
+    outputFilterTab(out, locale, ds, tabName);
+    outputMetadataTab(out, locale, ds, tabName);
+  }
+
+  /**
+  * Take care of the "Metadata" tab.
+  */
+  protected void outputMetadataTab(IHTTPOutput out, Locale locale,
+                   DocumentSpecification ds, String tabName)
+      throws ManifoldCFException, IOException {
+    Map<String, Object> paramMap = new HashMap<String, Object>();
+    paramMap.put("TabName", tabName);
+    fillInMetadataTab(paramMap, ds);
+    fillInMetadataAttributes(paramMap);
+    Messages.outputResourceWithVelocity(out, locale, "Specification_Metadata.html", paramMap);
+  }
+
+  /**
+  * Fill in Velocity context for Metadata tab.
+  */
+  protected static void fillInMetadataTab(Map<String, Object> paramMap,
+    DocumentSpecification ds) {
+    Set<String> metadataSelections = new HashSet<String>();
+    int i = 0;
+    while (i < ds.getChildCount()) {
+      SpecificationNode sn = ds.getChild(i++);
+      if (sn.getType().equals(EmailConfig.NODE_METADATA)) {
+        String metadataName = sn.getAttributeValue(EmailConfig.ATTRIBUTE_NAME);
+        metadataSelections.add(metadataName);
+      }
+    }
+    paramMap.put("METADATASELECTIONS", metadataSelections);
+  }
+
+  /**
+  * Fill in Velocity context with data to permit attribute selection.
+  */
+  protected void fillInMetadataAttributes(Map<String, Object> paramMap) {
+    String[] matchNames = EmailConfig.BASIC_METADATA;
+    paramMap.put("METADATAATTRIBUTES", matchNames);
+  }
+
+  protected void outputFilterTab(IHTTPOutput out, Locale locale,
+    DocumentSpecification ds, String tabName)
+    throws ManifoldCFException, IOException {
+    Map<String, Object> paramMap = new HashMap<String, Object>();
+    paramMap.put("TabName", tabName);
+    fillInFilterTab(paramMap, ds);
+    if (tabName.equals(Messages.getString(locale, "EmailConnector.Filter")))
+      fillInSearchableAttributes(paramMap);
+    Messages.outputResourceWithVelocity(out, locale, "Specification_Filter.html", paramMap);
+  }
+
+  private void fillInSearchableAttributes(Map<String, Object> paramMap)
+  {
+    String[] attributes = EmailConfig.BASIC_SEARCHABLE_ATTRIBUTES;
+    paramMap.put("SEARCHABLEATTRIBUTES", attributes);
+    try
+    {
+      String[] folderNames = getFolderNames();
+      paramMap.put("FOLDERNAMES", folderNames);
+      paramMap.put("EXCEPTION", "");
+    }
+    catch (ManifoldCFException e)
+    {
+      paramMap.put("EXCEPTION", e.getMessage());
+    }
+    catch (ServiceInterruption e)
+    {
+      paramMap.put("EXCEPTION", e.getMessage());
+    }
+  }
+
+  protected static void fillInFilterTab(Map<String, Object> paramMap,
+    DocumentSpecification ds) {
+    List<Map<String, String>> filterList = new ArrayList<Map<String, String>>();
+    Set<String> folders = new HashSet<String>();
+    int i = 0;
+    while (i < ds.getChildCount()) {
+      SpecificationNode sn = ds.getChild(i++);
+      if (sn.getType().equals(EmailConfig.NODE_FILTER)) {
+
+        String findParameterName = sn.getAttributeValue(EmailConfig.ATTRIBUTE_NAME);
+        String findParameterValue = sn.getAttributeValue(EmailConfig.ATTRIBUTE_VALUE);
+        Map<String, String> row = new HashMap<String, String>();
+        row.put("name", findParameterName);
+        row.put("value", findParameterValue);
+        filterList.add(row);
+      }
+      else if (sn.getType().equals(EmailConfig.NODE_FOLDER)) {
+        String folderName = sn.getAttributeValue(EmailConfig.ATTRIBUTE_NAME);
+        folders.add(folderName);
+      }
+    }
+    paramMap.put("MATCHES", filterList);
+    paramMap.put("FOLDERS", folders);
+  }
+
+  /**
+  * Process a specification post.
+  * This method is called at the start of a job's edit or view page, whenever there is a possibility that form
+  * data for a connection has been posted. Its purpose is to gather form information and modify the
+  * document specification accordingly. The name of the posted form is always "editjob".
+  * The connector will be connected before this method can be called.
+  *
+  * @param variableContext contains the post data, including binary file-upload information.
+  * @param ds is the current document specification for this job.
+  * @return null if all is well, or a string error message if there is an error that should prevent saving of
+  * the job (and cause a redirection to an error page).
+  */
+  @Override
+  public String processSpecificationPost(IPostParameters variableContext, DocumentSpecification ds)
+      throws ManifoldCFException {
+
+    String result = processFilterTab(variableContext, ds);
+    if (result != null)
+      return result;
+    result = processMetadataTab(variableContext, ds);
+    return result;
+  }
+
+
+  protected String processFilterTab(IPostParameters variableContext, DocumentSpecification ds)
+      throws ManifoldCFException {
+
+    String findCountString = variableContext.getParameter("findcount");
+    if (findCountString != null) {
+      int findCount = Integer.parseInt(findCountString);
+      
+      // Remove old find parameter document specification information
+      removeNodes(ds, EmailConfig.NODE_FILTER);
+
+      int i = 0;
+      while (i < findCount) {
+        String suffix = "_" + Integer.toString(i++);
+        // Only add the name/value if the item was not deleted.
+        String findParameterOp = variableContext.getParameter("findop" + suffix);
+        if (findParameterOp == null || !findParameterOp.equals("Delete")) {
+          String findParameterName = variableContext.getParameter("findname" + suffix);
+          String findParameterValue = variableContext.getParameter("findvalue" + suffix);
+          addFindParameterNode(ds, findParameterName, findParameterValue);
+        }
+      }
+
+      String operation = variableContext.getParameter("findop");
+      if (operation != null && operation.equals("Add")) {
+        String findParameterName = variableContext.getParameter("findname");
+        String findParameterValue = variableContext.getParameter("findvalue");
+        addFindParameterNode(ds, findParameterName, findParameterValue);
+      }
+    }
+    
+    String[] folders = variableContext.getParameterValues("folders");
+    if (folders != null)
+    {
+      removeNodes(ds, EmailConfig.NODE_FOLDER);
+      for (String folder : folders)
+      {
+        addFolderNode(ds, folder);
+      }
+    }
+    return null;
+  }
+
+
+  protected String processMetadataTab(IPostParameters variableContext, DocumentSpecification ds)
+      throws ManifoldCFException {
+    // Remove old included metadata nodes
+    removeNodes(ds, EmailConfig.NODE_METADATA);
+
+    // Get the posted metadata values
+    String[] metadataNames = variableContext.getParameterValues("metadata");
+    if (metadataNames != null) {
+      // Add each metadata name as a node to the document specification
+      int i = 0;
+      while (i < metadataNames.length) {
+        String metadataName = metadataNames[i++];
+        addIncludedMetadataNode(ds, metadataName);
+      }
+    }
+
+    return null;
+  }
+
+  /**
+  * View specification.
+  * This method is called in the body section of a job's view page. Its purpose is to present the document
+  * specification information to the user. The coder can presume that the HTML that is output from
+  * this configuration will be within appropriate <html> and <body> tags.
+  * The connector will be connected before this method can be called.
+  *
+  * @param out is the output to which any HTML should be sent.
+  * @param locale is the desired locale.
+  * @param ds is the current document specification for this job.
+  */
+  @Override
+  public void viewSpecification(IHTTPOutput out, Locale locale, DocumentSpecification ds)
+      throws ManifoldCFException, IOException {
+    Map<String, Object> paramMap = new HashMap<String, Object>();
+    fillInFilterTab(paramMap, ds);
+    fillInMetadataTab(paramMap, ds);
+    Messages.outputResourceWithVelocity(out, locale, "SpecificationView.html", paramMap);
+  }
+
+  ///////////////////////////////////////End of specification UI///////////////////////////////////////////////
+  
+  /** Get a sorted list of folder names */
+  protected String[] getFolderNames()
+    throws ManifoldCFException, ServiceInterruption
+  {
+    getSession();
+    try
+    {
+      ListFoldersThread lft = new ListFoldersThread(session);
+      lft.start();
+      return lft.finishUp();
+    }
+    catch (InterruptedException e)
+    {
+      throw new ManifoldCFException(e.getMessage(),ManifoldCFException.INTERRUPTED);
+    }
+    catch (MessagingException e)
+    {
+      handleMessagingException(e,"getting folder list");
+      return null;
+    }
+  }
+
+  /** Create a document's URI given a template, a folder name, and a message ID */
+  protected static String makeDocumentURI(String urlTemplate, String folderName, String id)
+  {
+    try {
+      // First, URL encode folder name and id
+      String encodedFolderName = java.net.URLEncoder.encode(folderName, "utf-8");
+      String encodedId = java.net.URLEncoder.encode(id, "utf-8");
+      // The template is already URL encoded, except for the substitution points
+      Map<String,String> subsMap = new HashMap<String,String>();
+      subsMap.put("FOLDERNAME", encodedFolderName);
+      subsMap.put("MESSAGEID", encodedId);
+      return substitute(urlTemplate, subsMap);
+    } catch (UnsupportedEncodingException e) {
+      throw new RuntimeException("No utf-8 encoder found: "+e.getMessage(), e);
+    }
+  }
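+  // Illustrative example (the template below is hypothetical): with
+  //   urlTemplate = "http://mail.example.com/$(FOLDERNAME)/$(MESSAGEID)"
+  //   folderName  = "INBOX", id = "<123@example.com>"
+  // the substitution points $(FOLDERNAME) and $(MESSAGEID) are replaced with the URL-encoded
+  // values, yielding "http://mail.example.com/INBOX/%3C123%40example.com%3E".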
+
+  protected static String substitute(String template, Map<String,String> map)
+  {
+    StringBuilder sb = new StringBuilder();
+    int index = 0;
+    while (true)
+    {
+      int newIndex = template.indexOf("$(",index);
+      if (newIndex == -1)
+      {
+        sb.append(template.substring(index));
+        break;
+      }
+      sb.append(template.substring(index, newIndex));
+      int endIndex = template.indexOf(")",newIndex+2);
+      String varName;
+      if (endIndex == -1)
+        varName = template.substring(newIndex + 2);
+      else
+        varName = template.substring(newIndex + 2, endIndex);
+      String subsValue = map.get(varName);
+      if (subsValue == null)
+        subsValue = "";
+      sb.append(subsValue);
+      if (endIndex == -1)
+        break;
+      index = endIndex+1;
+    }
+    return sb.toString();
+  }
+  
+  protected static void addFindParameterNode(ConfigParams parameters, String findParameterName, String findParameterValue) {
+    ConfigNode cn = new ConfigNode(EmailConfig.NODE_PROPERTIES);
+    cn.setAttribute(EmailConfig.ATTRIBUTE_NAME, findParameterName);
+    cn.setAttribute(EmailConfig.ATTRIBUTE_VALUE, findParameterValue);
+    // Add to the end
+    parameters.addChild(parameters.getChildCount(), cn);
+  }
+
+  protected static void removeNodes(ConfigParams parameters,
+                    String nodeTypeName) {
+    int i = 0;
+    while (i < parameters.getChildCount()) {
+      ConfigNode cn = parameters.getChild(i);
+      if (cn.getType().equals(nodeTypeName))
+        parameters.removeChild(i);
+      else
+        i++;
+    }
+  }
+
+  protected static void removeNodes(DocumentSpecification ds,
+                    String nodeTypeName) {
+    int i = 0;
+    while (i < ds.getChildCount()) {
+      SpecificationNode sn = ds.getChild(i);
+      if (sn.getType().equals(nodeTypeName))
+        ds.removeChild(i);
+      else
+        i++;
+    }
+  }
+
+  protected static void addIncludedMetadataNode(DocumentSpecification ds,
+                          String metadataName) {
+    // Build the proper node
+    SpecificationNode sn = new SpecificationNode(EmailConfig.NODE_METADATA);
+    sn.setAttribute(EmailConfig.ATTRIBUTE_NAME, metadataName);
+    // Add to the end
+    ds.addChild(ds.getChildCount(), sn);
+  }
+
+  protected static void addFindParameterNode(DocumentSpecification ds, String findParameterName, String findParameterValue) {
+    SpecificationNode sn = new SpecificationNode(EmailConfig.NODE_FILTER);
+    sn.setAttribute(EmailConfig.ATTRIBUTE_NAME, findParameterName);
+    sn.setAttribute(EmailConfig.ATTRIBUTE_VALUE, findParameterValue);
+    // Add to the end
+    ds.addChild(ds.getChildCount(), sn);
+  }
+
+  protected static void addFolderNode(DocumentSpecification ds, String folderName)
+  {
+    SpecificationNode sn = new SpecificationNode(EmailConfig.NODE_FOLDER);
+    sn.setAttribute(EmailConfig.ATTRIBUTE_NAME, folderName);
+    ds.addChild(ds.getChildCount(), sn);
+  }
+  
+
+  /** Create a document identifier from a folder name and an email ID */
+  protected static String createDocumentIdentifier(String folderName, String emailID)
+  {
+    return makeSafeFolderName(folderName) + ":" + emailID;
+  }
+  
+  /** Find a folder name in a document identifier */
+  protected static String extractFolderNameFromDocumentIdentifier(String di)
+  {
+    int index = di.indexOf(":");
+    if (index == -1)
+      throw new RuntimeException("Bad document identifier: '"+di+"'");
+    return di.substring(0,index);
+  }
+
+  /** Find an email ID in a document identifier */
+  protected static String extractEmailIDFromDocumentIdentifier(String di)
+  {
+    int index = di.indexOf(":");
+    if (index == -1)
+      throw new RuntimeException("Bad document identifier: '"+di+"'");
+    return di.substring(index+1);
+  }
+  
+  /** Create a safe folder name (which doesn't contain colons) */
+  protected static String makeSafeFolderName(String folderName)
+  {
+    StringBuilder sb = new StringBuilder();
+    for (int i = 0; i < folderName.length(); i++)
+    {
+      char x = folderName.charAt(i);
+      if (x == '\\')
+        sb.append('\\').append('\\');
+      else if (x == ':')
+        sb.append('\\').append('0');
+      else
+        sb.append(x);
+    }
+    return sb.toString();
+  }
+  
+  /** Unpack a safe folder name */
+  protected static String unpackSafeFolderName(String packedFolderName)
+  {
+    StringBuilder sb = new StringBuilder();
+    int i = 0;
+    while (i < packedFolderName.length())
+    {
+      char x = packedFolderName.charAt(i++);
+      if (x == '\\')
+      {
+        if (i == packedFolderName.length())
+          throw new RuntimeException("Illegal packed folder name: '"+packedFolderName+"'");
+        x = packedFolderName.charAt(i++);
+        if (x == '\\')
+          sb.append('\\');
+        else if (x == '0')
+          sb.append(':');
+        else
+          throw new RuntimeException("Illegal packed folder name: '"+packedFolderName+"'");
+      }
+      else
+        sb.append(x);
+    }
+    return sb.toString();
+  }
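+  // Illustrative round trip (hypothetical folder and message values):
+  //   createDocumentIdentifier("Clients:2013", "<abc@example.com>") yields
+  //   "Clients\02013:<abc@example.com>", because the colon in the folder name is escaped as the
+  //   two characters '\' and '0'; the first unescaped ':' then separates the folder from the
+  //   message ID, and unpackSafeFolderName("Clients\02013") restores "Clients:2013".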
+  
+  /** Handle Messaging exceptions in a consistent global manner */
+  protected static void handleMessagingException(MessagingException e, String context)
+    throws ManifoldCFException, ServiceInterruption
+  {
+    Logging.connectors.error("Email: Error "+context+": "+e.getMessage(),e);
+    throw new ManifoldCFException("Error "+context+": "+e.getMessage(),e);
+  }
+  
+  /** Handle IO Exception */
+  protected static void handleIOException(IOException e, String context)
+    throws ManifoldCFException, ServiceInterruption
+  {
+    if (e instanceof java.net.SocketTimeoutException)
+    {
+      Logging.connectors.error("Email: Socket timeout "+context+": "+e.getMessage(),e);
+      throw new ManifoldCFException("Socket timeout: "+e.getMessage(),e);
+    }
+    else if (e instanceof InterruptedIOException)
+    {
+      throw new ManifoldCFException("Interrupted: "+e.getMessage(),ManifoldCFException.INTERRUPTED);
+    }
+    else
+    {
+      Logging.connectors.error("Email: IO error "+context+": "+e.getMessage(),e);
+      throw new ManifoldCFException("IO error "+context+": "+e.getMessage(),e);
+    }
+  }
+
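+  // The worker classes below all follow the same pattern: each JavaMail call runs on a daemon
+  // thread so that the calling crawler thread stays interruptible even if the underlying socket
+  // operation blocks, and finishUp() joins the thread and rethrows whatever the call produced.
+  // Illustrative use (see getFolderNames() above for a real caller):
+  //   CheckConnectionThread t = new CheckConnectionThread(session);
+  //   t.start();
+  //   t.finishUp(); // throws MessagingException or InterruptedException on failure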
+  /** Class to set up connection.
+  */
+  protected static class ConnectThread extends Thread
+  {
+    protected final String server;
+    protected final int port;
+    protected final String username;
+    protected final String password;
+    protected final String protocol;
+    protected final Properties properties;
+    
+    // Local session handle
+    protected EmailSession session = null;
+    protected Throwable exception = null;
+    
+    public ConnectThread(String server, int port, String username, String password, String protocol, Properties properties)
+    {
+      this.server = server;
+      this.port = port;
+      this.username = username;
+      this.password = password;
+      this.protocol = protocol;
+      this.properties = properties;
+      setDaemon(true);
+    }
+    
+    public void run()
+    {
+      try
+      {
+        session = new EmailSession(server, port, username, password, protocol, properties);
+      }
+      catch (Throwable e)
+      {
+        exception = e;
+      }
+    }
+    
+    public EmailSession finishUp()
+      throws MessagingException, InterruptedException
+    {
+      try
+      {
+        join();
+        if (exception != null)
+        {
+          if (exception instanceof RuntimeException)
+            throw (RuntimeException)exception;
+          else if (exception instanceof Error)
+            throw (Error)exception;
+          else if (exception instanceof MessagingException)
+            throw (MessagingException)exception;
+          else
+            throw new RuntimeException("Unknown exception type: "+exception.getClass().getName()+": "+exception.getMessage(),exception);
+        }
+        return session;
+      } catch (InterruptedException e) {
+        this.interrupt();
+        throw e;
+      }
+    }
+  }
+
+  /** Class to close the session.
+  */
+  protected static class CloseSessionThread extends Thread
+  {
+    protected final EmailSession session;
+    
+    protected Throwable exception = null;
+    
+    public CloseSessionThread(EmailSession session)
+    {
+      this.session = session;
+      setDaemon(true);
+    }
+    
+    public void run()
+    {
+      try
+      {
+        session.close();
+      }
+      catch (Throwable e)
+      {
+        exception = e;
+      }
+    }
+    
+    public void finishUp()
+      throws MessagingException, InterruptedException
+    {
+      try
+      {
+        join();
+        if (exception != null)
+        {
+          if (exception instanceof RuntimeException)
+            throw (RuntimeException)exception;
+          else if (exception instanceof Error)
+            throw (Error)exception;
+          else if (exception instanceof MessagingException)
+            throw (MessagingException)exception;
+          else
+            throw new RuntimeException("Unknown exception type: "+exception.getClass().getName()+": "+exception.getMessage(),exception);
+        }
+      } catch (InterruptedException e) {
+        this.interrupt();
+        throw e;
+      }
+    }
+  }
+
+  /** Class to list all folders.
+  */
+  protected static class ListFoldersThread extends Thread
+  {
+    protected final EmailSession session;
+    
+    protected String[] rval = null;
+    protected Throwable exception = null;
+    
+    public ListFoldersThread(EmailSession session)
+    {
+      this.session = session;
+      setDaemon(true);
+    }
+    
+    public void run()
+    {
+      try
+      {
+        rval = session.listFolders();
+      }
+      catch (Throwable e)
+      {
+        exception = e;
+      }
+    }
+    
+    public String[] finishUp()
+      throws MessagingException, InterruptedException
+    {
+      try
+      {
+        join();
+        if (exception != null)
+        {
+          if (exception instanceof RuntimeException)
+            throw (RuntimeException)exception;
+          else if (exception instanceof Error)
+            throw (Error)exception;
+          else if (exception instanceof MessagingException)
+            throw (MessagingException)exception;
+          else
+            throw new RuntimeException("Unknown exception type: "+exception.getClass().getName()+": "+exception.getMessage(),exception);
+        }
+        return rval;
+      } catch (InterruptedException e) {
+        this.interrupt();
+        throw e;
+      }
+    }
+  }
+
+  /** Class to check the connection.
+  */
+  protected static class CheckConnectionThread extends Thread
+  {
+    protected final EmailSession session;
+    
+    protected Throwable exception = null;
+    
+    public CheckConnectionThread(EmailSession session)
+    {
+      this.session = session;
+      setDaemon(true);
+    }
+    
+    public void run()
+    {
+      try
+      {
+        session.checkConnection();
+      }
+      catch (Throwable e)
+      {
+        exception = e;
+      }
+    }
+    
+    public void finishUp()
+      throws MessagingException, InterruptedException
+    {
+      try
+      {
+        join();
+        if (exception != null)
+        {
+          if (exception instanceof RuntimeException)
+            throw (RuntimeException)exception;
+          else if (exception instanceof Error)
+            throw (Error)exception;
+          else if (exception instanceof MessagingException)
+            throw (MessagingException)exception;
+          else
+            throw new RuntimeException("Unknown exception type: "+exception.getClass().getName()+": "+exception.getMessage(),exception);
+        }
+      } catch (InterruptedException e) {
+        this.interrupt();
+        throw e;
+      }
+    }
+  }
+
+  /** Class to open a folder.
+  */
+  protected static class OpenFolderThread extends Thread
+  {
+    protected final EmailSession session;
+    protected final String folderName;
+    
+    // Local folder
+    protected Folder folder = null;
+    protected Throwable exception = null;
+    
+    public OpenFolderThread(EmailSession session, String folderName)
+    {
+      this.session = session;
+      this.folderName = folderName;
+      setDaemon(true);
+    }
+    
+    public void run()
+    {
+      try
+      {
+        folder = session.openFolder(folderName);
+      }
+      catch (Throwable e)
+      {
+        exception = e;
+      }
+    }
+    
+    public Folder finishUp()
+      throws MessagingException, InterruptedException
+    {
+      try
+      {
+        join();
+        if (exception != null)
+        {
+          if (exception instanceof RuntimeException)
+            throw (RuntimeException)exception;
+          else if (exception instanceof Error)
+            throw (Error)exception;
+          else if (exception instanceof MessagingException)
+            throw (MessagingException)exception;
+          else
+            throw new RuntimeException("Unknown exception type: "+exception.getClass().getName()+": "+exception.getMessage(),exception);
+        }
+        return folder;
+      } catch (InterruptedException e) {
+        this.interrupt();
+        throw e;
+      }
+    }
+  }
+  
+  /** Class to close a folder.
+  */
+  protected static class CloseFolderThread extends Thread
+  {
+    protected final EmailSession session;
+    protected final Folder folder;
+    
+    // Local folder
+    protected Throwable exception = null;
+    
+    public CloseFolderThread(EmailSession session, Folder folder)
+    {
+      this.session = session;
+      this.folder = folder;
+      setDaemon(true);
+    }
+    
+    public void run()
+    {
+      try
+      {
+        session.closeFolder(folder);
+      }
+      catch (Throwable e)
+      {
+        exception = e;
+      }
+    }
+    
+    public void finishUp()
+      throws MessagingException, InterruptedException
+    {
+      try
+      {
+        join();
+        if (exception != null)
+        {
+          if (exception instanceof RuntimeException)
+            throw (RuntimeException)exception;
+          else if (exception instanceof Error)
+            throw (Error)exception;
+          else if (exception instanceof MessagingException)
+            throw (MessagingException)exception;
+          else
+            throw new RuntimeException("Unknown exception type: "+exception.getClass().getName()+": "+exception.getMessage(),exception);
+        }
+      } catch (InterruptedException e) {
+        this.interrupt();
+        throw e;
+      }
+    }
+  }
+
+  /** Class to get all messages from a folder.
+  */
+  protected static class GetMessagesThread extends Thread
+  {
+    protected final EmailSession session;
+    protected final Folder folder;
+    
+    // Local messages
+    protected Message[] messages = null;
+    protected Throwable exception = null;
+    
+    public GetMessagesThread(EmailSession session, Folder folder)
+    {
+      this.session = session;
+      this.folder = folder;
+      setDaemon(true);
+    }
+    
+    public void run()
+    {
+      try
+      {
+        messages = session.getMessages(folder);
+      }
+      catch (Throwable e)
+      {
+        exception = e;
+      }
+    }
+    
+    public Message[] finishUp()
+      throws MessagingException, InterruptedException
+    {
+      try
+      {
+        join();
+        if (exception != null)
+        {
+          if (exception instanceof RuntimeException)
+            throw (RuntimeException)exception;
+          else if (exception instanceof Error)
+            throw (Error)exception;
+          else if (exception instanceof MessagingException)
+            throw (MessagingException)exception;
+          else
+            throw new RuntimeException("Unknown exception type: "+exception.getClass().getName()+": "+exception.getMessage(),exception);
+        }
+        return messages;
+      } catch (InterruptedException e) {
+        this.interrupt();
+        throw e;
+      }
+    }
+  }
+
+  /** Class to search for messages in a folder.
+  */
+  protected static class SearchMessagesThread extends Thread
+  {
+    protected final EmailSession session;
+    protected final Folder folder;
+    protected final SearchTerm searchTerm;
+    
+    // Local messages
+    protected Message[] messages = null;
+    protected Throwable exception = null;
+    
+    public SearchMessagesThread(EmailSession session, Folder folder, SearchTerm searchTerm)
+    {
+      this.session = session;
+      this.folder = folder;
+      this.searchTerm = searchTerm;
+      setDaemon(true);
+    }
+    
+    public void run()
+    {
+      try
+      {
+        messages = session.search(folder, searchTerm);
+      }
+      catch (Throwable e)
+      {
+        exception = e;
+      }
+    }
+    
+    public Message[] finishUp()
+      throws MessagingException, InterruptedException
+    {
+      try
+      {
+        join();
+        if (exception != null)
+        {
+          if (exception instanceof RuntimeException)
+            throw (RuntimeException)exception;
+          else if (exception instanceof Error)
+            throw (Error)exception;
+          else if (exception instanceof MessagingException)
+            throw (MessagingException)exception;
+          else
+            throw new RuntimeException("Unknown exception type: "+exception.getClass().getName()+": "+exception.getMessage(),exception);
+        }
+        return messages;
+      } catch (InterruptedException e) {
+        this.interrupt();
+        throw e;
+      }
+    }
+  }
+
+}
\ No newline at end of file
diff --git a/connectors/email/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/email/EmailSession.java b/connectors/email/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/email/EmailSession.java
new file mode 100644
index 0000000..17908bb
--- /dev/null
+++ b/connectors/email/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/email/EmailSession.java
@@ -0,0 +1,133 @@
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+
+package org.apache.manifoldcf.crawler.connectors.email;
+
+import java.io.*;
+import java.util.*;
+import javax.mail.*;
+import javax.mail.internet.MimeBodyPart;
+import javax.mail.internet.MimeMessage;
+import javax.mail.search.*;
+
+/** This class represents a raw email session, without any protection
+* from threads waiting on sockets, etc.
+*/
+public class EmailSession
+{
+  protected final String server;
+  protected final int port;
+  protected final String username;
+  protected final String password;
+  protected final String protocol;
+  protected final Properties properties;
+
+  private Session session = null;
+  private Store store = null;
+
+  /** Create a session */
+  public EmailSession(String server, int port, String username, String password,
+    String protocol, Properties properties)
+    throws MessagingException
+  {
+    this.server = server;
+    this.port = port;
+    this.username = username;
+    this.password = password;
+    this.protocol = protocol;
+    this.properties = properties;
+    
+    // Now, try to connect
+    Session thisSession = Session.getDefaultInstance(properties, null);
+    Store thisStore = thisSession.getStore(protocol);
+    thisStore.connect(server, port, username, password);
+
+    session = thisSession;
+    store = thisStore;
+  }
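+  // Typical lifecycle (illustrative): construct to connect, then listFolders()/openFolder()/
+  // getMessages()/search(), closeFolder(), and finally close().  Every method here may block on
+  // the underlying socket, which is why EmailConnector wraps each call in a worker thread.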
+  
+  public String[] listFolders()
+    throws MessagingException
+  {
+    if (store != null)
+    {
+      List<String> folderList = new ArrayList<String>();
+      Folder[] folders = store.getDefaultFolder().list("*");
+      for (Folder folder : folders)
+      {
+        if ((folder.getType() & Folder.HOLDS_MESSAGES) != 0)
+          folderList.add(folder.getFullName());
+      }
+      String[] rval = folderList.toArray(new String[0]);
+      java.util.Arrays.sort(rval);
+      return rval;
+    }
+    return null;
+  }
+  
+  public void checkConnection()
+    throws MessagingException
+  {
+    if (store != null)
+    {
+      if (store.getDefaultFolder() == null) {
+        throw new MessagingException("Error checking the connection: No default folder.");
+      }
+    }
+  }
+
+  public Folder openFolder(String folderName)
+    throws MessagingException
+  {
+    if (store != null)
+    {
+      Folder thisFolder = store.getFolder(folderName);
+      thisFolder.open(Folder.READ_ONLY);
+      return thisFolder;
+    }
+    return null;
+  }
+  
+  public void closeFolder(Folder folder)
+    throws MessagingException
+  {
+    folder.close(false);
+  }
+  
+  public Message[] getMessages(Folder folder)
+    throws MessagingException
+  {
+    return folder.getMessages();
+  }
+  
+  public Message[] search(Folder folder, SearchTerm searchTerm)
+    throws MessagingException
+  {
+    return folder.search(searchTerm);
+  }
+  
+  public void close()
+    throws MessagingException
+  {
+    if (store != null)
+    {
+      store.close();
+      store = null;
+    }
+    session = null;
+  }
+}
\ No newline at end of file
diff --git a/connectors/email/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/email/Messages.java b/connectors/email/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/email/Messages.java
new file mode 100644
index 0000000..655fa6e
--- /dev/null
+++ b/connectors/email/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/email/Messages.java
@@ -0,0 +1,141 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.crawler.connectors.email;
+
+import java.util.Locale;
+import java.util.Map;
+import org.apache.manifoldcf.core.interfaces.ManifoldCFException;
+import org.apache.manifoldcf.core.interfaces.IHTTPOutput;
+
+public class Messages extends org.apache.manifoldcf.ui.i18n.Messages
+{
+  public static final String DEFAULT_BUNDLE_NAME="org.apache.manifoldcf.crawler.connectors.email.common";
+  public static final String DEFAULT_PATH_NAME="org.apache.manifoldcf.crawler.connectors.email";
+  
+  /** Constructor - do not instantiate
+  */
+  protected Messages()
+  {
+  }
+  
+  public static String getString(Locale locale, String messageKey)
+  {
+    return getString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getAttributeString(Locale locale, String messageKey)
+  {
+    return getAttributeString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getBodyString(Locale locale, String messageKey)
+  {
+    return getBodyString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getAttributeJavascriptString(Locale locale, String messageKey)
+  {
+    return getAttributeJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getBodyJavascriptString(Locale locale, String messageKey)
+  {
+    return getBodyJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getString(Locale locale, String messageKey, Object[] args)
+  {
+    return getString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+
+  public static String getAttributeString(Locale locale, String messageKey, Object[] args)
+  {
+    return getAttributeString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+  
+  public static String getBodyString(Locale locale, String messageKey, Object[] args)
+  {
+    return getBodyString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+
+  public static String getAttributeJavascriptString(Locale locale, String messageKey, Object[] args)
+  {
+    return getAttributeJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+
+  public static String getBodyJavascriptString(Locale locale, String messageKey, Object[] args)
+  {
+    return getBodyJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+
+  // More general methods which allow bundlenames and class loaders to be specified.
+  
+  public static String getString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getString(Messages.class, bundleName, locale, messageKey, args);
+  }
+
+  public static String getAttributeString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getAttributeString(Messages.class, bundleName, locale, messageKey, args);
+  }
+
+  public static String getBodyString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getBodyString(Messages.class, bundleName, locale, messageKey, args);
+  }
+  
+  public static String getAttributeJavascriptString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getAttributeJavascriptString(Messages.class, bundleName, locale, messageKey, args);
+  }
+
+  public static String getBodyJavascriptString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getBodyJavascriptString(Messages.class, bundleName, locale, messageKey, args);
+  }
+
+  // Resource output
+  
+  public static void outputResource(IHTTPOutput output, Locale locale, String resourceKey,
+    Map<String,String> substitutionParameters, boolean mapToUpperCase)
+    throws ManifoldCFException
+  {
+    outputResource(output,Messages.class,DEFAULT_PATH_NAME,locale,resourceKey,
+      substitutionParameters,mapToUpperCase);
+  }
+  
+  public static void outputResourceWithVelocity(IHTTPOutput output, Locale locale, String resourceKey,
+    Map<String,String> substitutionParameters, boolean mapToUpperCase)
+    throws ManifoldCFException
+  {
+    outputResourceWithVelocity(output,Messages.class,DEFAULT_BUNDLE_NAME,DEFAULT_PATH_NAME,locale,resourceKey,
+      substitutionParameters,mapToUpperCase);
+  }
+
+  public static void outputResourceWithVelocity(IHTTPOutput output, Locale locale, String resourceKey,
+    Map<String,Object> contextObjects)
+    throws ManifoldCFException
+  {
+    outputResourceWithVelocity(output,Messages.class,DEFAULT_BUNDLE_NAME,DEFAULT_PATH_NAME,locale,resourceKey,
+      contextObjects);
+  }
+  
+}
+
diff --git a/connectors/email/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/email/common_en_US.properties b/connectors/email/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/email/common_en_US.properties
new file mode 100644
index 0000000..c71afec
--- /dev/null
+++ b/connectors/email/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/email/common_en_US.properties
@@ -0,0 +1,57 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+EmailConnector.Server=Server
+EmailConnector.URL=URL
+EmailConnector.Metadata=Metadata
+EmailConnector.Filter=Filter
+
+EmailConnector.EnterAMailServerHostName=Enter a mail server host name
+EmailConnector.PleaseSelectAConfigurationParameterName=Please select a configuration parameter name
+EmailConnector.PleaseSelectAMetadataName=Please select a metadata name
+EmailConnector.ValueCannotBeBlank=Value cannot be blank
+EmailConnector.URLTemplateCannotBeBlank=URL template cannot be blank
+EmailConnector.PortMustBeIntegerOrBlank=Port must be an integer, or blank
+
+EmailConnector.URLTemplateColon=URL template:
+EmailConnector.ConfigurationPropertiesColon=Configuration properties:
+EmailConnector.ProtocolColon=Protocol:
+EmailConnector.HostNameColon=Host name:
+EmailConnector.PortColon=Port:
+EmailConnector.UserNameColon=User name:
+EmailConnector.PasswordColon=Password:
+EmailConnector.MatchesColon=Matches:
+EmailConnector.FoldersColon=Folders:
+EmailConnector.RecordFilterColon=Record filter:
+EmailConnector.ServerProperty=Server property
+EmailConnector.Value=Value
+EmailConnector.NoServerPropertiesSpecified=No server properties specified
+EmailConnector.AddNewMatch=Add new match
+EmailConnector.AddNewProperty=Add new property
+EmailConnector.Add=Add
+EmailConnector.DeleteMatchNumber=Delete match #
+EmailConnector.DeletePropertyNumber=Delete property #
+EmailConnector.Delete=Delete
+EmailConnector.MetadataName=Metadata name
+EmailConnector.NoMetadataSpecified=No metadata specified
+EmailConnector.SelectMetadataName=-- Select metadata name --
+EmailConnector.IncludedMetadataColon=Included metadata:
+
+
+
+
+
+
+
+
diff --git a/connectors/email/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/email/common_ja_JP.properties b/connectors/email/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/email/common_ja_JP.properties
new file mode 100644
index 0000000..c71afec
--- /dev/null
+++ b/connectors/email/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/email/common_ja_JP.properties
@@ -0,0 +1,57 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+EmailConnector.Server=Server
+EmailConnector.URL=URL
+EmailConnector.Metadata=Metadata
+EmailConnector.Filter=Filter
+
+EmailConnector.EnterAMailServerHostName=Enter a mail server host name
+EmailConnector.PleaseSelectAConfigurationParameterName=Please select a configuration parameter name
+EmailConnector.PleaseSelectAMetadataName=Please select a metadata name
+EmailConnector.ValueCannotBeBlank=Value cannot be blank
+EmailConnector.URLTemplateCannotBeBlank=URL template cannot be blank
+EmailConnector.PortMustBeIntegerOrBlank=Port must be an integer, or blank
+
+EmailConnector.URLTemplateColon=URL template:
+EmailConnector.ConfigurationPropertiesColon=Configuration properties:
+EmailConnector.ProtocolColon=Protocol:
+EmailConnector.HostNameColon=Host name:
+EmailConnector.PortColon=Port:
+EmailConnector.UserNameColon=User name:
+EmailConnector.PasswordColon=Password:
+EmailConnector.MatchesColon=Matches:
+EmailConnector.FoldersColon=Folders:
+EmailConnector.RecordFilterColon=Record filter:
+EmailConnector.ServerProperty=Server property
+EmailConnector.Value=Value
+EmailConnector.NoServerPropertiesSpecified=No server properties specified
+EmailConnector.AddNewMatch=Add new match
+EmailConnector.AddNewProperty=Add new property
+EmailConnector.Add=Add
+EmailConnector.DeleteMatchNumber=Delete match #
+EmailConnector.DeletePropertyNumber=Delete property #
+EmailConnector.Delete=Delete
+EmailConnector.MetadataName=Metadata name
+EmailConnector.NoMetadataSpecified=No metadata specified
+EmailConnector.SelectMetadataName=-- Select metadata name --
+EmailConnector.IncludedMetadataColon=Included metadata:
+
+
+
+
+
+
+
+
diff --git a/connectors/email/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/email/ConfigurationHeader.js b/connectors/email/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/email/ConfigurationHeader.js
new file mode 100644
index 0000000..0ff31f2
--- /dev/null
+++ b/connectors/email/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/email/ConfigurationHeader.js
@@ -0,0 +1,91 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements. See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License. You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<script type="text/javascript">
+<!--
+function checkConfig()
+{
+  if (editconnection.port.value != "" && !isInteger(editconnection.port.value))
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('EmailConnector.PortMustBeIntegerOrBlank'))");
+    editconnection.port.focus();
+    return false;
+  }
+  return true;
+}
+
+function checkConfigForSave()
+{
+  if (editconnection.server.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('EmailConnector.EnterAMailServerHostName'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('EmailConnector.Server'))");
+    editconnection.server.focus();
+    return false;
+  }
+  if (editconnection.port.value != "" && !isInteger(editconnection.port.value))
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('EmailConnector.PortMustBeIntegerOrBlank'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('EmailConnector.Server'))");
+    editconnection.port.focus();
+    return false;
+  }
+  if (editconnection.url.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('EmailConnector.URLTemplateCannotBeBlank'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('EmailConnector.URL'))");
+    editconnection.url.focus();
+    return false;
+  }
+  return true;
+}
+
+function addProperty()
+{
+  postFormSetAnchor("property"); //Repost the form and send the browser to property anchor
+}
+
+function SpecOp(n, opValue, anchorvalue)
+{
+  eval("editconnection."+n+".value = \""+opValue+"\"");
+  postFormSetAnchor(anchorvalue);
+}
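+// SpecOp writes the given operation ("Add" or "Delete") into the named hidden field
+// (for example "findop_2") and reposts the form; the connector's configuration post
+// processing (not shown in this file) then reads findcount/findop_N/findname_N/findvalue_N
+// to apply the change.  FindDelete and FindAdd below are thin wrappers around this.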
+
+function FindDelete(n)
+{
+  SpecOp("findop_"+n, "Delete", "find_"+n);
+}
+
+function FindAdd(n)
+{
+  if (editconnection.findname.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('EmailConnector.PleaseSelectAConfigurationParameterName'))");
+    editconnection.findname.focus();
+    return;
+  }
+  if (editconnection.findvalue.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('EmailConnector.ValueCannotBeBlank'))");
+    editconnection.findvalue.focus();
+    return;
+  }
+  SpecOp("findop", "Add", "find_"+n);
+}
+
+//-->
+</script>
diff --git a/connectors/email/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/email/ConfigurationView.html b/connectors/email/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/email/ConfigurationView.html
new file mode 100644
index 0000000..597e74e
--- /dev/null
+++ b/connectors/email/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/email/ConfigurationView.html
@@ -0,0 +1,96 @@
+<!--
+Licensed to the Apache Software Foundation (ASF) under one or more
+contributor license agreements. See the NOTICE file distributed with
+this work for additional information regarding copyright ownership.
+The ASF licenses this file to You under the Apache License, Version 2.0
+(the "License"); you may not use this file except in compliance with
+the License. You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+-->
+
+<table class="displaytable">
+  
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('EmailConnector.ProtocolColon'))</nobr>
+    </td>
+    <td class="value">
+      <nobr>$Encoder.bodyEscape($PROTOCOL)</nobr>
+    </td>
+  </tr>
+  
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('EmailConnector.HostNameColon'))</nobr>
+    </td>
+    <td class="value">
+      <nobr>$Encoder.bodyEscape($SERVER)</nobr>
+    </td>
+  </tr>
+  
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('EmailConnector.PortColon'))</nobr>
+    </td>
+    <td class="value">
+      <nobr>$Encoder.bodyEscape($PORT)</nobr>
+    </td>
+  </tr>
+
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+  
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('EmailConnector.UserNameColon'))</nobr>
+    </td>
+    <td class="value">
+      <nobr>$Encoder.bodyEscape($USERNAME)</nobr>
+    </td>
+  </tr>
+  
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('EmailConnector.PasswordColon'))</nobr>
+    </td>
+    <td class="value">
+      <nobr>********</nobr>
+    </td>
+  </tr>
+
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('EmailConnector.ConfigurationPropertiesColon'))</nobr>
+    </td>
+    <td class="value">
+#foreach( $property in $PROPERTIES )
+        <nobr>
+            $Encoder.bodyEscape($property.get('name')) : $Encoder.bodyEscape($property.get('value'))
+        </nobr>
+        <br/>
+#end
+
+    </td>
+  </tr>
+  
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('EmailConnector.URLTemplateColon'))</nobr>
+    </td>
+    <td class="value">
+      <nobr>$Encoder.bodyEscape($URL)</nobr>
+    </td>
+  </tr>
+
+</table>
+
diff --git a/connectors/email/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/email/Configuration_Server.html b/connectors/email/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/email/Configuration_Server.html
new file mode 100644
index 0000000..ee74393
--- /dev/null
+++ b/connectors/email/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/email/Configuration_Server.html
@@ -0,0 +1,189 @@
+<!--
+Licensed to the Apache Software Foundation (ASF) under one or more
+contributor license agreements. See the NOTICE file distributed with
+this work for additional information regarding copyright ownership.
+The ASF licenses this file to You under the Apache License, Version 2.0
+(the "License"); you may not use this file except in compliance with
+the License. You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+-->
+
+#if($TabName == $ResourceBundle.getString('EmailConnector.Server'))
+
+<table class="displaytable">
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('EmailConnector.ProtocolColon'))</nobr>
+    </td>
+    <td class="value">
+      <select id="protocol" name="protocol" size="2">
+  #if($PROTOCOL == 'IMAP')
+        <option value="IMAP" selected="selected">IMAP</option>
+  #else
+        <option value="IMAP">IMAP</option>
+  #end
+  #if($PROTOCOL == 'IMAP-SSL')
+        <option value="IMAP-SSL" selected="selected">IMAP-SSL</option>
+  #else
+        <option value="IMAP-SSL">IMAP-SSL</option>
+  #end
+  #if($PROTOCOL == 'POP3')
+        <option value="POP3" selected="selected">POP3</option>
+  #else
+        <option value="POP3">POP3</option>
+  #end
+  #if($PROTOCOL == 'POP3-SSL')
+        <option value="POP3-SSL" selected="selected">POP3-SSL</option>
+  #else
+        <option value="POP3-SSL">POP3-SSL</option>
+  #end
+      </select>
+    </td>
+  </tr>
+  
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('EmailConnector.HostNameColon'))</nobr>
+    </td>
+    <td class="value">
+      <input id="server" name="server" type="text" size="32" value="$Encoder.attributeEscape($SERVER)"/>
+    </td>
+  </tr>
+  
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('EmailConnector.PortColon'))</nobr>
+    </td>
+    <td class="value">
+      <input type="text" id="port" name="port" value="$Encoder.attributeEscape($PORT)"/>
+    </td>
+  </tr>
+
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('EmailConnector.UserNameColon'))</nobr>
+    </td>
+    <td class="value">
+      <input type="text" id="username" name="username" value="$Encoder.attributeEscape($USERNAME)"/>
+    </td>
+  </tr>
+
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('EmailConnector.PasswordColon'))</nobr>
+    </td>
+    <td class="value">
+      <input type="password" id="password" name="password" value="$Encoder.attributeEscape($PASSWORD)"/>
+    </td>
+  </tr>
+
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('EmailConnector.ConfigurationPropertiesColon'))</nobr>
+    </td>
+    <td class="boxcell">
+      <table class="formtable">
+        <tr class="formheaderrow">
+          <td class="formcolumnheader"></td>
+          <td class="formcolumnheader">
+            <nobr>$Encoder.bodyEscape($ResourceBundle.getString('EmailConnector.ServerProperty'))</nobr>
+          </td>
+          <td class="formcolumnheader">
+            <nobr>$Encoder.bodyEscape($ResourceBundle.getString('EmailConnector.Value'))</nobr>
+          </td>
+        </tr>
+
+  #set($k = 0)
+  #foreach($property in $PROPERTIES)
+
+    #if(($k % 2) == 0)
+        <tr class="evenformrow">
+    #else
+        <tr class="oddformrow">
+    #end
+          <td class="formcolumncell">
+            <input type="hidden" name="findop_$k" value=""/>
+            <input type="hidden" name="findname_$k" value="$Encoder.attributeEscape($property.get('name'))"/>
+            <input type="hidden" name="findvalue_$k" value="$Encoder.attributeEscape($property.get('value'))"/>
+            <a name="find_$k">
+              <input type="button" value="$Encoder.attributeEscape($ResourceBundle.getString('EmailConnector.Delete'))" onClick='Javascript:FindDelete("$k")' alt="$Encoder.attributeEscape($ResourceBundle.getString('EmailConnector.DeletePropertyNumber'))$k"/>
+            </a>
+          </td>
+          <td class="formcolumncell">
+            <nobr>$Encoder.bodyEscape($property.get('name'))</nobr>
+          </td>
+          <td class="formcolumncell">
+            <nobr>$Encoder.bodyEscape($property.get('value'))</nobr>
+          </td>
+        </tr>
+
+    #set($k = $k + 1)
+  #end
+
+  #if($k == 0)
+        <tr class="formrow">
+          <td class="formcolumnmessage" colspan="3">$Encoder.bodyEscape($ResourceBundle.getString('EmailConnector.NoServerPropertiesSpecified'))</td>
+        </tr>
+  #end
+
+        <tr class="formrow"><td class="formseparator" colspan="3"><hr/></td></tr>
+
+  #set($nextk = $k + 1)
+
+        <tr class="formrow">
+          <td class="formcolumncell">
+            <nobr>
+              <a name="find_$k">
+                <input type="button" value="$Encoder.attributeEscape($ResourceBundle.getString('EmailConnector.Add'))" onClick='Javascript:FindAdd("$nextk")' alt="$Encoder.attributeEscape($ResourceBundle.getString('EmailConnector.AddNewProperty'))"/>
+                <input type="hidden" name="findcount" value="$k"/>
+                <input type="hidden" name="findop" value=""/>
+              </a>
+            </nobr>
+          </td>
+          <td class="formcolumncell">
+            <nobr><input type="text" size="32" name="findname" value=""/></nobr>
+          </td>
+          <td class="formcolumncell">
+            <nobr><input type="text" size="32" name="findvalue" value=""/></nobr>
+          </td>
+        </tr>
+
+
+      </table>
+    </td>
+  </tr>
+
+</table>
+
+#else
+
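+## When some other tab is being displayed, the Server tab's values are carried along as hidden
+## fields so they survive the round trip and are still present when the form is finally posted.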
+<input type="hidden" name="username" value="$Encoder.attributeEscape($USERNAME)"/>
+<input type="hidden" name="password" value="$Encoder.attributeEscape($PASSWORD)"/>
+<input type="hidden" name="protocol" value="$Encoder.attributeEscape($PROTOCOL)"/>
+<input type="hidden" name="server" value="$Encoder.attributeEscape($SERVER)"/>
+<input type="hidden" name="port" value="$Encoder.attributeEscape($PORT)"/>
+
+  #set($k = 0)
+  #foreach($property in $PROPERTIES)
+
+<input type="hidden" name="findname_$k" value="$Encoder.attributeEscape($property.get('name'))"/>
+<input type="hidden" name="findvalue_$k" value="$Encoder.attributeEscape($property.get('value'))"/>
+
+    #set($k = $k + 1)
+  #end
+
+<input type="hidden" name="findcount" value="$k"/>
+
+#end
\ No newline at end of file
diff --git a/connectors/email/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/email/Configuration_URL.html b/connectors/email/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/email/Configuration_URL.html
new file mode 100644
index 0000000..100fc0a
--- /dev/null
+++ b/connectors/email/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/email/Configuration_URL.html
@@ -0,0 +1,36 @@
+<!--
+Licensed to the Apache Software Foundation (ASF) under one or more
+contributor license agreements. See the NOTICE file distributed with
+this work for additional information regarding copyright ownership.
+The ASF licenses this file to You under the Apache License, Version 2.0
+(the "License"); you may not use this file except in compliance with
+the License. You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+-->
+
+#if($TabName == $ResourceBundle.getString('EmailConnector.URL'))
+
+<table class="displaytable">
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('EmailConnector.URLTemplateColon'))</nobr>
+    </td>
+    <td class="value">
+      <nobr><input type="text" name="url" size="60" value="$Encoder.attributeEscape($URL)"/></nobr>
+    </td>
+  </tr>
+</table>
+
+#else
+
+<input type="hidden" name="url" value="$Encoder.attributeEscape($URL)"/>
+
+#end
\ No newline at end of file
diff --git a/connectors/email/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/email/SpecificationHeader.js b/connectors/email/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/email/SpecificationHeader.js
new file mode 100644
index 0000000..137acd2
--- /dev/null
+++ b/connectors/email/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/email/SpecificationHeader.js
@@ -0,0 +1,69 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements. See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License. You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<script type="text/javascript">
+<!--
+
+function checkSpecification()
+{
+  if (checkDocumentsTab() == false)
+    return false;
+  if (checkMetadataTab() == false)
+    return false;
+  return true;
+}
+ 
+function SpecOp(n, opValue, anchorvalue)
+{
+  eval("editjob."+n+".value = \""+opValue+"\"");
+  postFormSetAnchor(anchorvalue);
+}
+
+function checkDocumentsTab()
+{
+  return true;
+}
+
+function checkMetadataTab()
+{
+  return true;
+}
+
+function FindDelete(n)
+{
+  SpecOp("findop_"+n, "Delete", "find_"+n);
+}
+
+function FindAdd(n)
+{
+  if (editjob.findname.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('EmailConnector.PleaseSelectAMetadataName'))");
+    editjob.findname.focus();
+    return;
+  }
+  if (editjob.findvalue.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('EmailConnector.ValueCannotBeBlank'))");
+    editjob.findvalue.focus();
+    return;
+  }
+  SpecOp("findop", "Add", "find_"+n);
+}
+
+//-->
+</script>
diff --git a/connectors/email/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/email/SpecificationView.html b/connectors/email/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/email/SpecificationView.html
new file mode 100644
index 0000000..9a6538f
--- /dev/null
+++ b/connectors/email/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/email/SpecificationView.html
@@ -0,0 +1,73 @@
+<!--
+Licensed to the Apache Software Foundation (ASF) under one or more
+contributor license agreements. See the NOTICE file distributed with
+this work for additional information regarding copyright ownership.
+The ASF licenses this file to You under the Apache License, Version 2.0
+(the "License"); you may not use this file except in compliance with
+the License. You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+-->
+
+<table class="displaytable">
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('EmailConnector.FoldersColon'))</nobr></td>
+    <td class="value">
+  #foreach($folder in $FOLDERS)
+      <nobr>$Encoder.bodyEscape($folder)</nobr><br/>
+  #end
+    </td>
+  </tr>
+  
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('EmailConnector.RecordFilterColon'))</nobr></td>
+    <td class="boxcell">
+      <table class="formtable">
+        <tr class="formheaderrow">
+          <td class="formcolumnheader"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('EmailConnector.MetadataName'))</nobr></td>
+          <td class="formcolumnheader"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('EmailConnector.Value'))</nobr></td>
+        </tr>
+
+#set($k = 0)
+#foreach($match in $MATCHES)
+  #if(($k % 2) == 0)
+        <tr class="evenformrow">
+  #else
+        <tr class="oddformrow">
+  #end
+          <td class="formcolumncell">
+            <nobr>$Encoder.bodyEscape($match.get('name'))</nobr>
+          </td>
+          <td class="formcolumncell">
+            <nobr>$Encoder.bodyEscape($match.get('value'))</nobr>
+          </td>
+        </tr>
+  #set($k = $k + 1)
+#end
+    
+      </table>
+    </td>
+  </tr>
+
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('EmailConnector.IncludedMetadataColon'))</nobr></td>
+    <td class="value">
+
+#set($seendata = false)
+#foreach($metadataselection in $METADATASELECTIONS)
+  #if($seendata)
+      <br/>
+  #end
+  #set($seendata = true)
+      <nobr>$Encoder.bodyEscape($metadataselection)</nobr>
+#end
+
+    </td>
+  </tr>
+</table>
diff --git a/connectors/email/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/email/Specification_Filter.html b/connectors/email/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/email/Specification_Filter.html
new file mode 100644
index 0000000..8d64885
--- /dev/null
+++ b/connectors/email/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/email/Specification_Filter.html
@@ -0,0 +1,145 @@
+<!--
+Licensed to the Apache Software Foundation (ASF) under one or more
+contributor license agreements. See the NOTICE file distributed with
+this work for additional information regarding copyright ownership.
+The ASF licenses this file to You under the Apache License, Version 2.0
+(the "License"); you may not use this file except in compliance with
+the License. You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+-->
+
+#if($TabName == $ResourceBundle.getString('EmailConnector.Filter'))
+
+<table class="displaytable">
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+  <tr>
+  #if($EXCEPTION == '')
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('EmailConnector.FoldersColon'))</nobr>
+    </td>
+    <td class="value">
+      <select name="folders" multiple="true" size="4">
+    #foreach($name in $FOLDERNAMES)
+      #if($FOLDERS.contains($name))
+        <option value="$Encoder.attributeEscape($name)" selected="true">$Encoder.bodyEscape($name)</option>
+      #else
+        <option value="$Encoder.attributeEscape($name)">$Encoder.bodyEscape($name)</option>
+      #end
+    #end
+      </select>
+    </td>
+  #else
+    <td class="message" colspan="2">
+    #foreach($name in $FOLDERS)
+      <input type="hidden" name="folders" value="$Encoder.attributeEscape($name)"/>
+    #end
+      $Encoder.bodyEscape($EXCEPTION)
+    </td>
+  #end
+  </tr>
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('EmailConnector.RecordFilterColon'))</nobr>
+    </td>
+    <td class="boxcell">
+      <table class="formtable">
+        <tr class="formheaderrow">
+          <td class="formcolumnheader"></td>
+          <td class="formcolumnheader">
+            <nobr>$Encoder.bodyEscape($ResourceBundle.getString('EmailConnector.MetadataName'))</nobr>
+          </td>
+          <td class="formcolumnheader">
+            <nobr>$Encoder.bodyEscape($ResourceBundle.getString('EmailConnector.Value'))</nobr>
+          </td>
+        </tr>
+
+  #set($k = 0)
+  #foreach($match in $MATCHES)
+
+    #if(($k % 2) == 0)
+        <tr class="evenformrow">
+    #else
+        <tr class="oddformrow">
+    #end
+          <td class="formcolumncell">
+            <input type="hidden" name="findop_$k" value=""/>
+            <input type="hidden" name="findname_$k" value="$Encoder.attributeEscape($match.get('name'))"/>
+            <input type="hidden" name="findvalue_$k" value="$Encoder.attributeEscape($match.get('value'))"/>
+            <a name="find_$k">
+              <input type="button" value="$Encoder.attributeEscape($ResourceBundle.getString('EmailConnector.Delete'))" onClick='Javascript:FindDelete("$k")' alt="$Encoder.attributeEscape($ResourceBundle.getString('EmailConnector.DeleteMatchNumber'))$k"/>
+            </a>
+          </td>
+          <td class="formcolumncell">
+            <nobr>$Encoder.bodyEscape($match.get('name'))</nobr>
+          </td>
+          <td class="formcolumncell">
+            <nobr>$Encoder.bodyEscape($match.get('value'))</nobr>
+          </td>
+        </tr>
+
+    #set($k = $k + 1)
+  #end
+
+  #if($k == 0)
+        <tr class="formrow">
+          <td class="formcolumnmessage" colspan="3">$Encoder.bodyEscape($ResourceBundle.getString('EmailConnector.NoMetadataSpecified'))</td>
+        </tr>
+  #end
+
+        <tr class="formrow"><td class="formseparator" colspan="3"><hr/></td></tr>
+  #set($nextk = $k + 1)
+
+        <tr class="formrow">
+          <td class="formcolumncell">
+            <nobr>
+              <a name="find_$k">
+                <input type="button" value="$Encoder.attributeEscape($ResourceBundle.getString('EmailConnector.Add'))" onClick='Javascript:FindAdd("$nextk")' alt="$Encoder.attributeEscape($ResourceBundle.getString('EmailConnector.AddNewMatch'))"/>
+                <input type="hidden" name="findcount" value="$k"/>
+                <input type="hidden" name="findop" value=""/>
+              </a>
+            </nobr>
+          </td>
+          <td class="formcolumncell">
+            <select name="findname">
+              <option value="" selected="true">$Encoder.bodyEscape($ResourceBundle.getString('EmailConnector.SelectMetadataName'))</option>
+  #foreach($name in $SEARCHABLEATTRIBUTES)
+              <option value="$Encoder.attributeEscape($name)">$Encoder.bodyEscape($name)</option>
+  #end
+            </select>
+          </td>
+          <td class="formcolumncell">
+            <nobr><input type="text" size="32" name="findvalue" value=""/></nobr>
+          </td>
+        </tr>
+
+      </table>
+    </td>
+  </tr>
+</table>
+
+#else
+
+  #foreach($name in $FOLDERS)
+<input type="hidden" name="folders" value="$Encoder.attributeEscape($name)"/>
+  #end
+
+  #set($k = 0)
+  #foreach($match in $MATCHES)
+
+<input type="hidden" name="findname_$k" value="$Encoder.attributeEscape($match.get('name'))"/>
+<input type="hidden" name="findvalue_$k" value="$Encoder.attributeEscape($match.get('value'))"/>
+
+    #set($k = $k + 1)
+  #end
+
+<input type="hidden" name="findcount" value="$k"/>
+
+#end
diff --git a/connectors/email/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/email/Specification_Metadata.html b/connectors/email/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/email/Specification_Metadata.html
new file mode 100644
index 0000000..9d1006f
--- /dev/null
+++ b/connectors/email/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/email/Specification_Metadata.html
@@ -0,0 +1,44 @@
+<!--
+Licensed to the Apache Software Foundation (ASF) under one or more
+contributor license agreements. See the NOTICE file distributed with
+this work for additional information regarding copyright ownership.
+The ASF licenses this file to You under the Apache License, Version 2.0
+(the "License"); you may not use this file except in compliance with
+the License. You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+-->
+
+#if($TabName == $ResourceBundle.getString('EmailConnector.Metadata'))
+
+<table class="displaytable">
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('EmailConnector.IncludedMetadataColon'))</nobr></td>
+    <td class="value">
+  #foreach($metadataattribute in $METADATAATTRIBUTES)
+    #if($METADATASELECTIONS.contains($metadataattribute))
+      <input type="checkbox" name="metadata" value="$Encoder.attributeEscape($metadataattribute)" checked="true"/>
+    #else
+      <input type="checkbox" name="metadata" value="$Encoder.attributeEscape($metadataattribute)"/>
+    #end
+      <nobr>$Encoder.bodyEscape($metadataattribute)</nobr><br/>
+  #end
+    </td>
+
+  </tr>
+</table>
+
+#else
+
+  #foreach($metadataselection in $METADATASELECTIONS)
+<input type="hidden" name="metadata" value="$Encoder.attributeEscape($metadataselection)"/>
+  #end
+  
+#end
diff --git a/connectors/email/pom.xml b/connectors/email/pom.xml
new file mode 100644
index 0000000..da19ec3
--- /dev/null
+++ b/connectors/email/pom.xml
@@ -0,0 +1,165 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+  <parent>
+    <groupId>org.apache.manifoldcf</groupId>
+    <artifactId>mcf-connectors</artifactId>
+    <version>1.5-SNAPSHOT</version>
+  </parent>
+  <modelVersion>4.0.0</modelVersion>
+
+  <artifactId>mcf-email-connector</artifactId>
+  <name>ManifoldCF - Connectors - Email</name>
+
+  <build>
+    <sourceDirectory>${basedir}/connector/src/main/java</sourceDirectory>
+    <testSourceDirectory>${basedir}/connector/src/test/java</testSourceDirectory>
+    <resources>
+      <resource>
+        <directory>${basedir}/connector/src/main/resources</directory>
+        <includes>
+          <include>**/*.html</include>
+          <include>**/*.js</include>
+        </includes>
+      </resource>
+      <resource>
+        <directory>${basedir}/connector/src/main/native2ascii</directory>
+        <includes>
+          <include>**/*.properties</include>
+        </includes>
+      </resource>
+    </resources> 
+
+    <plugins>
+      <plugin>
+        <groupId>org.codehaus.mojo</groupId>
+        <artifactId>native2ascii-maven-plugin</artifactId>
+        <version>1.0-beta-1</version>
+        <configuration>
+            <workDir>target/classes</workDir>
+        </configuration>
+        <executions>
+            <execution>
+                <id>native2ascii-utf8</id>
+                <goals>
+                    <goal>native2ascii</goal>
+                </goals>
+                <configuration>
+                    <encoding>UTF8</encoding>
+                    <includes>
+                      <include>**/*.properties</include>
+                    </includes>
+                </configuration>
+            </execution>
+        </executions>
+      </plugin>
+      <plugin>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-surefire-plugin</artifactId>
+        <configuration>
+          <excludes>
+            <exclude>**/*Postgresql*.java</exclude>
+            <exclude>**/*MySQL*.java</exclude>
+          </excludes>
+          <forkMode>always</forkMode>
+          <workingDirectory>target/test-output</workingDirectory>
+        </configuration>
+      </plugin>
+    </plugins>
+  </build>
+  
+  <dependencies>
+    <dependency>
+      <groupId>${project.groupId}</groupId>
+      <artifactId>mcf-core</artifactId>
+      <version>${project.version}</version>
+    </dependency>
+    <dependency>
+      <groupId>${project.groupId}</groupId>
+      <artifactId>mcf-agents</artifactId>
+      <version>${project.version}</version>
+    </dependency>
+    <dependency>
+      <groupId>${project.groupId}</groupId>
+      <artifactId>mcf-pull-agent</artifactId>
+      <version>${project.version}</version>
+    </dependency>
+    <dependency>
+      <groupId>${project.groupId}</groupId>
+      <artifactId>mcf-ui-core</artifactId>
+      <version>${project.version}</version>
+    </dependency>
+    <dependency>
+      <groupId>junit</groupId>
+      <artifactId>junit</artifactId>
+      <version>${junit.version}</version>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>${project.groupId}</groupId>
+      <artifactId>mcf-core</artifactId>
+      <version>${project.version}</version>
+      <type>test-jar</type>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>${project.groupId}</groupId>
+      <artifactId>mcf-agents</artifactId>
+      <version>${project.version}</version>
+      <type>test-jar</type>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>${project.groupId}</groupId>
+      <artifactId>mcf-pull-agent</artifactId>
+      <version>${project.version}</version>
+      <type>test-jar</type>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>postgresql</groupId>
+      <artifactId>postgresql</artifactId>
+      <version>${postgresql.version}</version>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.hsqldb</groupId>
+      <artifactId>hsqldb</artifactId>
+      <version>${hsqldb.version}</version>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.derby</groupId>
+      <artifactId>derby</artifactId>
+      <version>${derby.version}</version>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>mysql</groupId>
+      <artifactId>mysql-connector-java</artifactId>
+      <version>${mysql.version}</version>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>javax.mail</groupId>
+      <artifactId>mail</artifactId>
+      <version>1.4</version>
+    </dependency> 
+  </dependencies>
+</project>
diff --git a/connectors/filenet/build-stub/src/main/java/com/filenet/api/collection/ContainableSet.java b/connectors/filenet/build-stub/src/main/java/com/filenet/api/collection/ContainableSet.java
new file mode 100644
index 0000000..2e31f98
--- /dev/null
+++ b/connectors/filenet/build-stub/src/main/java/com/filenet/api/collection/ContainableSet.java
@@ -0,0 +1,25 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package com.filenet.api.collection;
+
+/** Stub interface to allow the connector to build fully.
+*/
+public interface ContainableSet
+{
+}
diff --git a/connectors/filenet/build-stub/src/main/java/com/filenet/api/collection/FolderSet.java b/connectors/filenet/build-stub/src/main/java/com/filenet/api/collection/FolderSet.java
index 0bfe245..cd62da3 100644
--- a/connectors/filenet/build-stub/src/main/java/com/filenet/api/collection/FolderSet.java
+++ b/connectors/filenet/build-stub/src/main/java/com/filenet/api/collection/FolderSet.java
@@ -20,6 +20,6 @@
 
 /** Stub interface to allow the connector to build fully.
 */
-public interface FolderSet extends IndependentObjectSet
+public interface FolderSet extends ContainableSet, SubscribableSet, IndependentObjectSet
 {
 }
diff --git a/connectors/filenet/build-stub/src/main/java/com/filenet/api/collection/PropertyDescriptionList.java b/connectors/filenet/build-stub/src/main/java/com/filenet/api/collection/PropertyDescriptionList.java
index ad6b9b8..60d71e2 100644
--- a/connectors/filenet/build-stub/src/main/java/com/filenet/api/collection/PropertyDescriptionList.java
+++ b/connectors/filenet/build-stub/src/main/java/com/filenet/api/collection/PropertyDescriptionList.java
@@ -20,6 +20,6 @@
 
 /** Stub interface to allow the connector to build fully.
 */
-public interface PropertyDescriptionList extends DependentObjectList
+public interface PropertyDescriptionList extends DependentObjectList, EngineCollection
 {
 }
diff --git a/connectors/filenet/build-stub/src/main/java/com/filenet/api/collection/SubscribableSet.java b/connectors/filenet/build-stub/src/main/java/com/filenet/api/collection/SubscribableSet.java
new file mode 100644
index 0000000..b0d6a73
--- /dev/null
+++ b/connectors/filenet/build-stub/src/main/java/com/filenet/api/collection/SubscribableSet.java
@@ -0,0 +1,25 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package com.filenet.api.collection;
+
+/** Stub interface to allow the connector to build fully.
+*/
+public interface SubscribableSet
+{
+}
diff --git a/connectors/filenet/build-stub/src/main/java/com/filenet/api/constants/AccessLevel.java b/connectors/filenet/build-stub/src/main/java/com/filenet/api/constants/AccessLevel.java
index 2d69b3b..4a50812 100644
--- a/connectors/filenet/build-stub/src/main/java/com/filenet/api/constants/AccessLevel.java
+++ b/connectors/filenet/build-stub/src/main/java/com/filenet/api/constants/AccessLevel.java
@@ -18,11 +18,11 @@
 */
 package com.filenet.api.constants;
 
-/** Stub interface to allow the connector to build fully.
+/** Stub class to allow the connector to build fully.
 */
-public interface AccessLevel
+public class AccessLevel implements java.io.Serializable
 {
   public static final int VIEW_AS_INT = 131201;
-  
-  public int getValue();
+
+  public int getValue() { return 0; }
 }
diff --git a/connectors/filenet/build-stub/src/main/java/com/filenet/api/constants/AccessType.java b/connectors/filenet/build-stub/src/main/java/com/filenet/api/constants/AccessType.java
index c3caaef..1687d7a 100644
--- a/connectors/filenet/build-stub/src/main/java/com/filenet/api/constants/AccessType.java
+++ b/connectors/filenet/build-stub/src/main/java/com/filenet/api/constants/AccessType.java
@@ -18,12 +18,15 @@
 */
 package com.filenet.api.constants;
 
-/** Stub interface to allow the connector to build fully.
+/** Stub class to allow the connector to build fully.
 */
-public interface AccessType
+public class AccessType implements java.io.Serializable
 {
   public static final int ALLOW_AS_INT = 1;
   public static final int DENY_AS_INT = 2;
   
-  public int getValue();
+  public int getValue()
+  {
+    return 0;
+  }
 }
diff --git a/connectors/filenet/build-stub/src/main/java/com/filenet/api/constants/FilteredPropertyType.java b/connectors/filenet/build-stub/src/main/java/com/filenet/api/constants/FilteredPropertyType.java
index 01d31bb..b757cfb 100644
--- a/connectors/filenet/build-stub/src/main/java/com/filenet/api/constants/FilteredPropertyType.java
+++ b/connectors/filenet/build-stub/src/main/java/com/filenet/api/constants/FilteredPropertyType.java
@@ -18,9 +18,11 @@
 */
 package com.filenet.api.constants;
 
-/** Stub interface to allow the connector to build fully.
+/** Stub class to allow the connector to build fully.
 */
-public enum FilteredPropertyType
+public class FilteredPropertyType implements java.io.Serializable
 {
-  ANY
+  public static final FilteredPropertyType ANY = new FilteredPropertyType();
+
+  public int getValue() { return 0; }
 }
diff --git a/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/Connection.java b/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/Connection.java
index 2cdd0bc..04a7759 100644
--- a/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/Connection.java
+++ b/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/Connection.java
@@ -20,6 +20,6 @@
 
 /** Stub interface to allow the connector to build fully.
 */
-public interface Connection
+public interface Connection extends java.io.Serializable
 {
 }
diff --git a/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/Containable.java b/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/Containable.java
new file mode 100644
index 0000000..7ad332a
--- /dev/null
+++ b/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/Containable.java
@@ -0,0 +1,25 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package com.filenet.api.core;
+
+/** Stub interface to allow the connector to build fully.
+*/
+public interface Containable
+{
+}
diff --git a/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/ContentElement.java b/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/ContentElement.java
index 89170c6..dbbb7a3 100644
--- a/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/ContentElement.java
+++ b/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/ContentElement.java
@@ -20,6 +20,6 @@
 
 /** Stub interface to allow the connector to build fully.
 */
-public interface ContentElement
+public interface ContentElement extends RepositoryObject, EngineObject, DependentObject
 {
 }
diff --git a/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/ContentTransfer.java b/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/ContentTransfer.java
index 9caf1de..971d673 100644
--- a/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/ContentTransfer.java
+++ b/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/ContentTransfer.java
@@ -22,7 +22,7 @@
 
 /** Stub interface to allow the connector to build fully.
 */
-public interface ContentTransfer
+public interface ContentTransfer extends RepositoryObject, ContentElement, DependentObject
 {
   public InputStream accessContentStream();
 }
diff --git a/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/DependentObject.java b/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/DependentObject.java
new file mode 100644
index 0000000..984338e
--- /dev/null
+++ b/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/DependentObject.java
@@ -0,0 +1,25 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package com.filenet.api.core;
+
+/** Stub interface to allow the connector to build fully.
+*/
+public interface DependentObject
+{
+}
diff --git a/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/Document.java b/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/Document.java
index 1a74140..dc39bad 100644
--- a/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/Document.java
+++ b/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/Document.java
@@ -23,7 +23,7 @@
 
 /** Stub interface to allow the connector to build fully.
 */
-public interface Document extends Versionable, EngineObject
+public interface Document extends Versionable, Containable, Subscribable, RepositoryObject, IndependentlyPersistableObject
 {
   public ContentElementList get_ContentElements();
   public AccessPermissionList get_Permissions();
diff --git a/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/Domain.java b/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/Domain.java
index ae837c5..7bedf86 100644
--- a/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/Domain.java
+++ b/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/Domain.java
@@ -20,6 +20,6 @@
 
 /** Stub interface to allow the connector to build fully.
 */
-public interface Domain
+public interface Domain extends InstantiatingScope, IndependentlyPersistableObject
 {
 }
diff --git a/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/EngineObject.java b/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/EngineObject.java
index e5c3aaf..1ee3302 100644
--- a/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/EngineObject.java
+++ b/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/EngineObject.java
@@ -22,7 +22,7 @@
 
 /** Stub interface to allow the connector to build fully.
 */
-public interface EngineObject
+public interface EngineObject extends java.io.Serializable
 {
   public String getClassName();
   public Properties getProperties();
diff --git a/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/Folder.java b/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/Folder.java
index 075c025..9db2947 100644
--- a/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/Folder.java
+++ b/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/Folder.java
@@ -22,7 +22,7 @@
 
 /** Stub interface to allow the connector to build fully.
 */
-public interface Folder
+public interface Folder extends IndependentlyPersistableObject, Versionable, Containable, Subscribable
 {
   public FolderSet get_SubFolders();
   public String get_FolderName();
diff --git a/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/IndependentObject.java b/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/IndependentObject.java
new file mode 100644
index 0000000..91cd675
--- /dev/null
+++ b/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/IndependentObject.java
@@ -0,0 +1,25 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package com.filenet.api.core;
+
+/** Stub interface to allow the connector to build fully.
+*/
+public interface IndependentObject extends EngineObject
+{
+}
diff --git a/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/IndependentlyPersistableObject.java b/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/IndependentlyPersistableObject.java
new file mode 100644
index 0000000..be57cbf
--- /dev/null
+++ b/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/IndependentlyPersistableObject.java
@@ -0,0 +1,25 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package com.filenet.api.core;
+
+/** Stub interface to allow the connector to build fully.
+*/
+public interface IndependentlyPersistableObject extends IndependentObject
+{
+}
diff --git a/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/ObjectStore.java b/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/ObjectStore.java
index 0ed7ef1..21ea813 100644
--- a/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/ObjectStore.java
+++ b/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/ObjectStore.java
@@ -20,7 +20,7 @@
 
 /** Stub interface to allow the connector to build fully.
 */
-public interface ObjectStore extends InstantiatingScope
+public interface ObjectStore extends InstantiatingScope, IndependentlyPersistableObject
 {
   public Folder get_RootFolder();
 }
diff --git a/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/RepositoryObject.java b/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/RepositoryObject.java
new file mode 100644
index 0000000..7a59212
--- /dev/null
+++ b/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/RepositoryObject.java
@@ -0,0 +1,25 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package com.filenet.api.core;
+
+/** Stub interface to allow the connector to build fully.
+*/
+public interface RepositoryObject
+{
+}
diff --git a/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/Subscribable.java b/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/Subscribable.java
new file mode 100644
index 0000000..237b442
--- /dev/null
+++ b/connectors/filenet/build-stub/src/main/java/com/filenet/api/core/Subscribable.java
@@ -0,0 +1,25 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package com.filenet.api.core;
+
+/** Stub interface to allow the connector to build fully.
+*/
+public interface Subscribable
+{
+}
diff --git a/connectors/filenet/build-stub/src/main/java/com/filenet/api/meta/ClassDescription.java b/connectors/filenet/build-stub/src/main/java/com/filenet/api/meta/ClassDescription.java
index 3903f3d..37ba003 100644
--- a/connectors/filenet/build-stub/src/main/java/com/filenet/api/meta/ClassDescription.java
+++ b/connectors/filenet/build-stub/src/main/java/com/filenet/api/meta/ClassDescription.java
@@ -19,10 +19,11 @@
 package com.filenet.api.meta;
 
 import com.filenet.api.collection.PropertyDescriptionList;
+import com.filenet.api.core.IndependentObject;
 
 /** Stub interface to allow the connector to build fully.
 */
-public interface ClassDescription
+public interface ClassDescription extends Metadata, IndependentObject
 {
   public PropertyDescriptionList get_PropertyDescriptions();
 }
diff --git a/connectors/filenet/build-stub/src/main/java/com/filenet/api/meta/PropertyDescription.java b/connectors/filenet/build-stub/src/main/java/com/filenet/api/meta/PropertyDescription.java
index 8d6f79b..1218443 100644
--- a/connectors/filenet/build-stub/src/main/java/com/filenet/api/meta/PropertyDescription.java
+++ b/connectors/filenet/build-stub/src/main/java/com/filenet/api/meta/PropertyDescription.java
@@ -18,9 +18,11 @@
 */
 package com.filenet.api.meta;
 
+import com.filenet.api.core.*;
+
 /** Stub interface to allow the connector to build fully.
 */
-public interface PropertyDescription extends Metadata
+public interface PropertyDescription extends Metadata, EngineObject, DependentObject
 {
   public String get_SymbolicName();
 }
diff --git a/connectors/filenet/build-stub/src/main/java/com/filenet/api/property/Properties.java b/connectors/filenet/build-stub/src/main/java/com/filenet/api/property/Properties.java
index 6b60949..8c10573 100644
--- a/connectors/filenet/build-stub/src/main/java/com/filenet/api/property/Properties.java
+++ b/connectors/filenet/build-stub/src/main/java/com/filenet/api/property/Properties.java
@@ -22,7 +22,7 @@
 
 /** Stub interface to allow the connector to build fully.
 */
-public interface Properties
+public interface Properties extends java.io.Serializable
 {
   public Iterator<Property> iterator();
   public Object getObjectValue(String propName);
diff --git a/connectors/filenet/build-stub/src/main/java/com/filenet/api/property/Property.java b/connectors/filenet/build-stub/src/main/java/com/filenet/api/property/Property.java
index b429ac9..1994d1b 100644
--- a/connectors/filenet/build-stub/src/main/java/com/filenet/api/property/Property.java
+++ b/connectors/filenet/build-stub/src/main/java/com/filenet/api/property/Property.java
@@ -22,7 +22,7 @@
 
 /** Stub interface to allow the connector to build fully.
 */
-public interface Property
+public interface Property extends java.io.Serializable
 {
   public Object getObjectValue();
   public String getPropertyName();
diff --git a/connectors/filenet/build-stub/src/main/java/com/filenet/api/query/RepositoryRow.java b/connectors/filenet/build-stub/src/main/java/com/filenet/api/query/RepositoryRow.java
index 17cfa27..260e9c2 100644
--- a/connectors/filenet/build-stub/src/main/java/com/filenet/api/query/RepositoryRow.java
+++ b/connectors/filenet/build-stub/src/main/java/com/filenet/api/query/RepositoryRow.java
@@ -22,7 +22,7 @@
 
 /** Stub interface to allow the connector to build fully.
 */
-public interface RepositoryRow
+public interface RepositoryRow extends java.io.Serializable
 {
   public Properties getProperties();
 }
diff --git a/connectors/filenet/build-stub/src/main/java/com/filenet/api/security/AccessPermission.java b/connectors/filenet/build-stub/src/main/java/com/filenet/api/security/AccessPermission.java
index ef3c642..a96d6da 100644
--- a/connectors/filenet/build-stub/src/main/java/com/filenet/api/security/AccessPermission.java
+++ b/connectors/filenet/build-stub/src/main/java/com/filenet/api/security/AccessPermission.java
@@ -18,9 +18,11 @@
 */
 package com.filenet.api.security;
 
+import com.filenet.api.core.DependentObject;
+
 /** Stub interface to allow the connector to build fully.
 */
-public interface AccessPermission extends DiscretionaryPermission
+public interface AccessPermission extends DiscretionaryPermission, DependentObject
 {
   public Integer get_AccessMask();
 }
diff --git a/connectors/filenet/build-stub/src/main/java/com/filenet/api/security/DiscretionaryPermission.java b/connectors/filenet/build-stub/src/main/java/com/filenet/api/security/DiscretionaryPermission.java
index 114c8b3..82fe07a 100644
--- a/connectors/filenet/build-stub/src/main/java/com/filenet/api/security/DiscretionaryPermission.java
+++ b/connectors/filenet/build-stub/src/main/java/com/filenet/api/security/DiscretionaryPermission.java
@@ -19,10 +19,11 @@
 package com.filenet.api.security;
 
 import com.filenet.api.constants.AccessType;
+import com.filenet.api.core.DependentObject;
 
 /** Stub interface to allow the connector to build fully.
 */
-public interface DiscretionaryPermission extends Permission
+public interface DiscretionaryPermission extends Permission, DependentObject
 {
   public AccessType get_AccessType();
 }
diff --git a/connectors/filenet/build-stub/src/main/java/com/filenet/api/security/Permission.java b/connectors/filenet/build-stub/src/main/java/com/filenet/api/security/Permission.java
index 944f048..3d6e4fd 100644
--- a/connectors/filenet/build-stub/src/main/java/com/filenet/api/security/Permission.java
+++ b/connectors/filenet/build-stub/src/main/java/com/filenet/api/security/Permission.java
@@ -18,9 +18,11 @@
 */
 package com.filenet.api.security;
 
+import com.filenet.api.core.*;
+
 /** Stub interface to allow the connector to build fully.
 */
-public interface Permission
+public interface Permission extends EngineObject, DependentObject
 {
   public String get_GranteeName();
 }
diff --git a/connectors/filenet/build-stub/src/main/java/com/filenet/api/security/SecurityPrincipal.java b/connectors/filenet/build-stub/src/main/java/com/filenet/api/security/SecurityPrincipal.java
new file mode 100644
index 0000000..e0010a2
--- /dev/null
+++ b/connectors/filenet/build-stub/src/main/java/com/filenet/api/security/SecurityPrincipal.java
@@ -0,0 +1,27 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package com.filenet.api.security;
+
+import com.filenet.api.core.IndependentObject;
+
+/** Stub interface to allow the connector to build fully.
+*/
+public interface SecurityPrincipal extends IndependentObject
+{
+}
diff --git a/connectors/filenet/build-stub/src/main/java/com/filenet/api/security/User.java b/connectors/filenet/build-stub/src/main/java/com/filenet/api/security/User.java
index 410d85a..22b4009 100644
--- a/connectors/filenet/build-stub/src/main/java/com/filenet/api/security/User.java
+++ b/connectors/filenet/build-stub/src/main/java/com/filenet/api/security/User.java
@@ -20,7 +20,7 @@
 
 /** Stub interface to allow the connector to build fully.
 */
-public interface User
+public interface User extends SecurityPrincipal
 {
   public String get_Id();
 }
diff --git a/connectors/filenet/build-stub/src/main/java/com/filenet/api/util/ConfigurationParameters.java b/connectors/filenet/build-stub/src/main/java/com/filenet/api/util/ConfigurationParameters.java
index 0ca4e77..399609b 100644
--- a/connectors/filenet/build-stub/src/main/java/com/filenet/api/util/ConfigurationParameters.java
+++ b/connectors/filenet/build-stub/src/main/java/com/filenet/api/util/ConfigurationParameters.java
@@ -18,8 +18,8 @@
 */
 package com.filenet.api.util;
 
-/** Stub interface to allow the connector to build fully.
+/** Stub class to allow the connector to build fully.
 */
-public interface ConfigurationParameters
+public class ConfigurationParameters implements java.io.Serializable
 {
 }
diff --git a/connectors/filenet/build-stub/src/main/java/com/filenet/api/util/Id.java b/connectors/filenet/build-stub/src/main/java/com/filenet/api/util/Id.java
index dd36556..e6312ee 100644
--- a/connectors/filenet/build-stub/src/main/java/com/filenet/api/util/Id.java
+++ b/connectors/filenet/build-stub/src/main/java/com/filenet/api/util/Id.java
@@ -18,8 +18,9 @@
 */
 package com.filenet.api.util;
 
-/** Stub interface to allow the connector to build fully.
+/** Stub class to allow the connector to build fully.
 */
-public interface Id
+public class Id implements java.io.Serializable, Comparable
 {
+  public int compareTo(Object o) { return 0; }
 }
diff --git a/connectors/filenet/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/filenet/FilenetConnector.java b/connectors/filenet/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/filenet/FilenetConnector.java
index 6aa1894..80ba2f9 100644
--- a/connectors/filenet/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/filenet/FilenetConnector.java
+++ b/connectors/filenet/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/filenet/FilenetConnector.java
@@ -104,7 +104,7 @@
   protected String docURIPrefix = null;
 
   /** Deny access token for default authority */
-  private final static String defaultAuthorityDenyToken = "DEAD_AUTHORITY";
+  private final static String defaultAuthorityDenyToken = GLOBAL_DENY_TOKEN;
 
   protected class GetSessionThread extends Thread
   {
@@ -473,6 +473,16 @@
     releaseCheck();
   }
 
+  /** This method is called to assess whether this connector instance should
+  * actually be counted as being connected.
+  *@return true if the connector instance is actually connected.
+  */
+  @Override
+  public boolean isConnected()
+  {
+    return session != null;
+  }
+
   /** Disconnect from Filenet.
   */
   @Override
@@ -1567,44 +1577,46 @@
     Locale locale, ConfigParams parameters, String tabName)
     throws ManifoldCFException, IOException
   {
-    String userID = parameters.getParameter(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.CONFIG_PARAM_USERID);
+    String userID = parameters.getParameter(CONFIG_PARAM_USERID);
     if (userID == null)
       userID = "";
-    String password = parameters.getObfuscatedParameter(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.CONFIG_PARAM_PASSWORD);
+    String password = parameters.getObfuscatedParameter(CONFIG_PARAM_PASSWORD);
     if (password == null)
       password = "";
-    String filenetdomain = parameters.getParameter(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.CONFIG_PARAM_FILENETDOMAIN);
+    else
+      password = out.mapPasswordToKey(password);
+    String filenetdomain = parameters.getParameter(CONFIG_PARAM_FILENETDOMAIN);
     if (filenetdomain == null)
     {
-      filenetdomain = parameters.getParameter(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.CONFIG_PARAM_FILENETDOMAIN_OLD);
+      filenetdomain = parameters.getParameter(CONFIG_PARAM_FILENETDOMAIN_OLD);
       if (filenetdomain == null)
         filenetdomain = "";
     }
-    String objectstore = parameters.getParameter(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.CONFIG_PARAM_OBJECTSTORE);
+    String objectstore = parameters.getParameter(CONFIG_PARAM_OBJECTSTORE);
     if (objectstore == null)
       objectstore = "";
-    String serverprotocol = parameters.getParameter(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.CONFIG_PARAM_SERVERPROTOCOL);
+    String serverprotocol = parameters.getParameter(CONFIG_PARAM_SERVERPROTOCOL);
     if (serverprotocol == null)
       serverprotocol = "http";
-    String serverhostname = parameters.getParameter(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.CONFIG_PARAM_SERVERHOSTNAME);
+    String serverhostname = parameters.getParameter(CONFIG_PARAM_SERVERHOSTNAME);
     if (serverhostname == null)
       serverhostname = "";
-    String serverport = parameters.getParameter(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.CONFIG_PARAM_SERVERPORT);
+    String serverport = parameters.getParameter(CONFIG_PARAM_SERVERPORT);
     if (serverport == null)
       serverport = "";
-    String serverwsilocation = parameters.getParameter(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.CONFIG_PARAM_SERVERWSILOCATION);
+    String serverwsilocation = parameters.getParameter(CONFIG_PARAM_SERVERWSILOCATION);
     if (serverwsilocation == null)
       serverwsilocation = "wsi/FNCEWS40DIME";
-    String urlprotocol = parameters.getParameter(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.CONFIG_PARAM_URLPROTOCOL);
+    String urlprotocol = parameters.getParameter(CONFIG_PARAM_URLPROTOCOL);
     if (urlprotocol == null)
       urlprotocol = "http";
-    String urlhostname = parameters.getParameter(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.CONFIG_PARAM_URLHOSTNAME);
+    String urlhostname = parameters.getParameter(CONFIG_PARAM_URLHOSTNAME);
     if (urlhostname == null)
       urlhostname = "";
-    String urlport = parameters.getParameter(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.CONFIG_PARAM_URLPORT);
+    String urlport = parameters.getParameter(CONFIG_PARAM_URLPORT);
     if (urlport == null)
       urlport = "";
-    String urllocation = parameters.getParameter(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.CONFIG_PARAM_URLLOCATION);
+    String urllocation = parameters.getParameter(CONFIG_PARAM_URLLOCATION);
     if (urllocation == null)
       urllocation = "Workplace/Browse.jsp";
 
@@ -1751,51 +1763,51 @@
   {
     String serverprotocol = variableContext.getParameter("serverprotocol");
     if (serverprotocol != null)
-      parameters.setParameter(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.CONFIG_PARAM_SERVERPROTOCOL,serverprotocol);
+      parameters.setParameter(CONFIG_PARAM_SERVERPROTOCOL,serverprotocol);
 
     String serverhostname = variableContext.getParameter("serverhostname");
     if (serverhostname != null)
-      parameters.setParameter(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.CONFIG_PARAM_SERVERHOSTNAME,serverhostname);
+      parameters.setParameter(CONFIG_PARAM_SERVERHOSTNAME,serverhostname);
     
     String serverport = variableContext.getParameter("serverport");
     if (serverport != null && serverport.length() > 0)
-      parameters.setParameter(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.CONFIG_PARAM_SERVERPORT,serverport);
+      parameters.setParameter(CONFIG_PARAM_SERVERPORT,serverport);
 
     String serverwsilocation = variableContext.getParameter("serverwsilocation");
     if (serverwsilocation != null)
-      parameters.setParameter(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.CONFIG_PARAM_SERVERWSILOCATION,serverwsilocation);
+      parameters.setParameter(CONFIG_PARAM_SERVERWSILOCATION,serverwsilocation);
 
     String urlprotocol = variableContext.getParameter("urlprotocol");
     if (urlprotocol != null)
-      parameters.setParameter(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.CONFIG_PARAM_URLPROTOCOL,urlprotocol);
+      parameters.setParameter(CONFIG_PARAM_URLPROTOCOL,urlprotocol);
 
     String urlhostname = variableContext.getParameter("urlhostname");
     if (urlhostname != null)
-      parameters.setParameter(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.CONFIG_PARAM_URLHOSTNAME,urlhostname);
+      parameters.setParameter(CONFIG_PARAM_URLHOSTNAME,urlhostname);
 
     String urlport = variableContext.getParameter("urlport");
     if (urlport != null && urlport.length() > 0)
-      parameters.setParameter(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.CONFIG_PARAM_URLPORT,urlport);
+      parameters.setParameter(CONFIG_PARAM_URLPORT,urlport);
 
     String urllocation = variableContext.getParameter("urllocation");
     if (urllocation != null)
-      parameters.setParameter(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.CONFIG_PARAM_URLLOCATION,urllocation);
+      parameters.setParameter(CONFIG_PARAM_URLLOCATION,urllocation);
 
     String userID = variableContext.getParameter("userid");
     if (userID != null)
-      parameters.setParameter(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.CONFIG_PARAM_USERID,userID);
+      parameters.setParameter(CONFIG_PARAM_USERID,userID);
 
     String password = variableContext.getParameter("password");
     if (password != null)
-      parameters.setObfuscatedParameter(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.CONFIG_PARAM_PASSWORD,password);
+      parameters.setObfuscatedParameter(CONFIG_PARAM_PASSWORD,variableContext.mapKeyToPassword(password));
 
     String filenetdomain = variableContext.getParameter("filenetdomain");
     if (filenetdomain != null)
-      parameters.setParameter(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.CONFIG_PARAM_FILENETDOMAIN,filenetdomain);
+      parameters.setParameter(CONFIG_PARAM_FILENETDOMAIN,filenetdomain);
 
     String objectstore = variableContext.getParameter("objectstore");
     if (objectstore != null)
-      parameters.setParameter(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.CONFIG_PARAM_OBJECTSTORE,objectstore);
+      parameters.setParameter(CONFIG_PARAM_OBJECTSTORE,objectstore);
     return null;
   }
   
@@ -1949,9 +1961,9 @@
     while (i < ds.getChildCount())
     {
       SpecificationNode sn = ds.getChild(i++);
-      if (sn.getType().equals(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.SPEC_NODE_DOCUMENTCLASS))
+      if (sn.getType().equals(SPEC_NODE_DOCUMENTCLASS))
       {
-        String value = sn.getAttributeValue(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.SPEC_ATTRIBUTE_VALUE);
+        String value = sn.getAttributeValue(SPEC_ATTRIBUTE_VALUE);
         // Now, scan for metadata etc.
         org.apache.manifoldcf.crawler.connectors.filenet.DocClassSpec spec = new org.apache.manifoldcf.crawler.connectors.filenet.DocClassSpec(sn);
         documentClasses.put(value,spec);
@@ -2366,9 +2378,9 @@
     while (i < ds.getChildCount())
     {
       SpecificationNode sn = ds.getChild(i++);
-      if (sn.getType().equals(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.SPEC_NODE_MIMETYPE))
+      if (sn.getType().equals(SPEC_NODE_MIMETYPE))
       {
-        String value = sn.getAttributeValue(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.SPEC_ATTRIBUTE_VALUE);
+        String value = sn.getAttributeValue(SPEC_ATTRIBUTE_VALUE);
         if (mimeTypes == null)
           mimeTypes = new HashMap();
         mimeTypes.put(value,value);
@@ -2594,7 +2606,7 @@
       i = 0;
       while (i < ds.getChildCount())
       {
-        if (ds.getChild(i).getType().equals(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.SPEC_NODE_DOCUMENTCLASS))
+        if (ds.getChild(i).getType().equals(SPEC_NODE_DOCUMENTCLASS))
           ds.removeChild(i);
         else
           i++;
@@ -2606,14 +2618,14 @@
         while (i < x.length)
         {
           String value = x[i++];
-          SpecificationNode node = new SpecificationNode(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.SPEC_NODE_DOCUMENTCLASS);
-          node.setAttribute(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.SPEC_ATTRIBUTE_VALUE,value);
+          SpecificationNode node = new SpecificationNode(SPEC_NODE_DOCUMENTCLASS);
+          node.setAttribute(SPEC_ATTRIBUTE_VALUE,value);
           // Get the allmetadata value for this document class
           String allmetadata = variableContext.getParameter("allmetadata_"+value);
           if (allmetadata == null)
             allmetadata = "false";
           if (allmetadata.equals("true"))
-            node.setAttribute(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.SPEC_ATTRIBUTE_ALLMETADATA,allmetadata);
+            node.setAttribute(SPEC_ATTRIBUTE_ALLMETADATA,allmetadata);
           else
           {
             String[] fields = variableContext.getParameterValues("metadatafield_"+value);
@@ -2623,8 +2635,8 @@
               while (j < fields.length)
               {
                 String field = fields[j++];
-                SpecificationNode sp2 = new SpecificationNode(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.SPEC_NODE_METADATAFIELD);
-                sp2.setAttribute(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.SPEC_ATTRIBUTE_VALUE,field);
+                SpecificationNode sp2 = new SpecificationNode(SPEC_NODE_METADATAFIELD);
+                sp2.setAttribute(SPEC_ATTRIBUTE_VALUE,field);
                 node.addChild(node.getChildCount(),sp2);
               }
             }
@@ -2642,12 +2654,12 @@
             String matchValue = variableContext.getParameter("matchvalue_"+value+"_"+Integer.toString(q));
             if (matchOp == null || !matchOp.equals("Delete"))
             {
-              SpecificationNode matchNode = new SpecificationNode(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.SPEC_NODE_MATCH);
-              matchNode.setAttribute(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.SPEC_ATTRIBUTE_MATCHTYPE,matchType);
-              matchNode.setAttribute(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.SPEC_ATTRIBUTE_FIELDNAME,matchField);
+              SpecificationNode matchNode = new SpecificationNode(SPEC_NODE_MATCH);
+              matchNode.setAttribute(SPEC_ATTRIBUTE_MATCHTYPE,matchType);
+              matchNode.setAttribute(SPEC_ATTRIBUTE_FIELDNAME,matchField);
               if (matchValue == null)
                 matchValue = "";
-              matchNode.setAttribute(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.SPEC_ATTRIBUTE_VALUE,matchValue);
+              matchNode.setAttribute(SPEC_ATTRIBUTE_VALUE,matchValue);
               node.addChild(node.getChildCount(),matchNode);
             }
             q++;
@@ -2661,12 +2673,12 @@
             String matchType = variableContext.getParameter("matchtype_"+value);
             String matchField = variableContext.getParameter("matchfield_"+value);
             String matchValue = variableContext.getParameter("matchvalue_"+value);
-            SpecificationNode matchNode = new SpecificationNode(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.SPEC_NODE_MATCH);
-            matchNode.setAttribute(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.SPEC_ATTRIBUTE_MATCHTYPE,matchType);
-            matchNode.setAttribute(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.SPEC_ATTRIBUTE_FIELDNAME,matchField);
+            SpecificationNode matchNode = new SpecificationNode(SPEC_NODE_MATCH);
+            matchNode.setAttribute(SPEC_ATTRIBUTE_MATCHTYPE,matchType);
+            matchNode.setAttribute(SPEC_ATTRIBUTE_FIELDNAME,matchField);
             if (matchValue == null)
               matchValue = "";
-            matchNode.setAttribute(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.SPEC_ATTRIBUTE_VALUE,matchValue);
+            matchNode.setAttribute(SPEC_ATTRIBUTE_VALUE,matchValue);
             node.addChild(node.getChildCount(),matchNode);
           }
 			
@@ -2679,7 +2691,7 @@
       i = 0;
       while (i < ds.getChildCount())
       {
-        if (ds.getChild(i).getType().equals(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.SPEC_NODE_MIMETYPE))
+        if (ds.getChild(i).getType().equals(SPEC_NODE_MIMETYPE))
           ds.removeChild(i);
         else
           i++;
@@ -2691,8 +2703,8 @@
         while (i < x.length)
         {
           String value = x[i++];
-          SpecificationNode node = new SpecificationNode(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.SPEC_NODE_MIMETYPE);
-          node.setAttribute(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.SPEC_ATTRIBUTE_VALUE,value);
+          SpecificationNode node = new SpecificationNode(SPEC_NODE_MIMETYPE);
+          node.setAttribute(SPEC_ATTRIBUTE_VALUE,value);
           ds.addChild(ds.getChildCount(),node);
         }
       }
@@ -2861,9 +2873,9 @@
     while (i < ds.getChildCount())
     {
       SpecificationNode sn = ds.getChild(i++);
-      if (sn.getType().equals(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.SPEC_NODE_DOCUMENTCLASS))
+      if (sn.getType().equals(SPEC_NODE_DOCUMENTCLASS))
       {
-        String value = sn.getAttributeValue(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.SPEC_ATTRIBUTE_VALUE);
+        String value = sn.getAttributeValue(SPEC_ATTRIBUTE_VALUE);
         org.apache.manifoldcf.crawler.connectors.filenet.DocClassSpec spec = new org.apache.manifoldcf.crawler.connectors.filenet.DocClassSpec(sn);
         documentClasses.put(value,spec);
       }
@@ -2973,9 +2985,9 @@
     while (i < ds.getChildCount())
     {
       SpecificationNode sn = ds.getChild(i++);
-      if (sn.getType().equals(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.SPEC_NODE_MIMETYPE))
+      if (sn.getType().equals(SPEC_NODE_MIMETYPE))
       {
-        String value = sn.getAttributeValue(org.apache.manifoldcf.crawler.connectors.filenet.FilenetConnector.SPEC_ATTRIBUTE_VALUE);
+        String value = sn.getAttributeValue(SPEC_ATTRIBUTE_VALUE);
         if (mimeTypes == null)
           mimeTypes = new HashMap();
         mimeTypes.put(value,value);
diff --git a/connectors/filesystem/connector/src/main/java/org/apache/manifoldcf/agents/output/filesystem/FileOutputConfig.java b/connectors/filesystem/connector/src/main/java/org/apache/manifoldcf/agents/output/filesystem/FileOutputConfig.java
new file mode 100644
index 0000000..c34a7f4
--- /dev/null
+++ b/connectors/filesystem/connector/src/main/java/org/apache/manifoldcf/agents/output/filesystem/FileOutputConfig.java
@@ -0,0 +1,68 @@
+/* $Id: FileOutputConfig.java 1299512 2013-05-31 22:59:38Z minoru $ */
+
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.manifoldcf.agents.output.filesystem;
+
+import org.apache.manifoldcf.core.interfaces.ConfigParams;
+import org.apache.manifoldcf.core.interfaces.IPostParameters;
+
+
+public class FileOutputConfig extends FileOutputParam {
+
+  /**
+   * 
+   */
+  private static final long serialVersionUID = -2071290103498352538L;
+
+  /** Parameters used for the configuration */
+  final private static ParameterEnum[] CONFIGURATIONLIST = {};
+
+  /** Build the set of file output configuration parameters by reading ConfigParams. If the
+   * value returned by ConfigParams.getParameter is null, the default value is
+   * used instead.
+   * 
+   * @param params is the configuration parameter set to read from
+   */
+  public FileOutputConfig(ConfigParams params)
+  {
+    super(CONFIGURATIONLIST);
+    for (ParameterEnum param : CONFIGURATIONLIST) {
+      String value = params.getParameter(param.name());
+      if (value == null) {
+        value = param.defaultValue;
+      }
+      put(param, value);
+    }
+  }
+
+  /** Read posted configuration values from the variable context and copy them
+   * into the supplied configuration parameters.
+   * @param variableContext is the posted form data
+   * @param parameters is the configuration parameter set to update
+   */
+  public final static void contextToConfig(IPostParameters variableContext, ConfigParams parameters) {
+    for (ParameterEnum param : CONFIGURATIONLIST) {
+      String p = variableContext.getParameter(param.name().toLowerCase());
+      if (p != null) {
+        parameters.setParameter(param.name(), p);
+      }
+    }
+  }
+
+}
diff --git a/connectors/filesystem/connector/src/main/java/org/apache/manifoldcf/agents/output/filesystem/FileOutputConnector.java b/connectors/filesystem/connector/src/main/java/org/apache/manifoldcf/agents/output/filesystem/FileOutputConnector.java
new file mode 100644
index 0000000..291d8d5
--- /dev/null
+++ b/connectors/filesystem/connector/src/main/java/org/apache/manifoldcf/agents/output/filesystem/FileOutputConnector.java
@@ -0,0 +1,689 @@
+/* $Id: FileOutputConnector.java 991374 2013-05-31 23:04:08Z minoru $ */
+
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.manifoldcf.agents.output.filesystem;
+
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.FileNotFoundException;
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.io.InterruptedIOException;
+import java.io.InputStream;
+import java.io.UnsupportedEncodingException;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.URLEncoder;
+import java.nio.channels.ClosedChannelException;
+import java.nio.channels.FileChannel;
+import java.nio.channels.FileLock;
+import java.util.List;
+import java.util.Locale;
+import java.util.Map;
+
+import org.apache.manifoldcf.agents.interfaces.IOutputAddActivity;
+import org.apache.manifoldcf.agents.interfaces.IOutputRemoveActivity;
+import org.apache.manifoldcf.agents.interfaces.OutputSpecification;
+import org.apache.manifoldcf.agents.interfaces.RepositoryDocument;
+import org.apache.manifoldcf.agents.interfaces.ServiceInterruption;
+import org.apache.manifoldcf.agents.output.BaseOutputConnector;
+import org.apache.manifoldcf.agents.system.Logging;
+import org.apache.manifoldcf.core.interfaces.ConfigParams;
+import org.apache.manifoldcf.core.interfaces.ConfigurationNode;
+import org.apache.manifoldcf.core.interfaces.IHTTPOutput;
+import org.apache.manifoldcf.core.interfaces.IPostParameters;
+import org.apache.manifoldcf.core.interfaces.IThreadContext;
+import org.apache.manifoldcf.core.interfaces.ManifoldCFException;
+import org.apache.manifoldcf.core.interfaces.SpecificationNode;
+import org.json.JSONException;
+
+public class FileOutputConnector extends BaseOutputConnector {
+
+  public static final String _rcsid = "@(#)$Id: FileOutputConnector.java 988245 2010-08-23 18:39:35Z minoru $";
+
+  // Activities we log
+
+  /** Ingestion activity */
+  public final static String INGEST_ACTIVITY = "document ingest";
+  /** Document removal activity */
+  public final static String REMOVE_ACTIVITY = "document deletion";
+
+  // Activities list
+  protected static final String[] activitiesList = new String[]{INGEST_ACTIVITY, REMOVE_ACTIVITY};
+
+  /** Forward to the javascript to check the configuration parameters */
+  private static final String EDIT_CONFIGURATION_JS = "editConfiguration.js";
+
+  /** Forward to the HTML template to edit the configuration parameters */
+  private static final String EDIT_CONFIGURATION_HTML = "editConfiguration.html";
+
+  /** Forward to the HTML template to view the configuration parameters */
+  private static final String VIEW_CONFIGURATION_HTML = "viewConfiguration.html";
+
+  /** Forward to the javascript to check the specification parameters for the job */
+  private static final String EDIT_SPECIFICATION_JS = "editSpecification.js";
+
+  /** Forward to the template to edit the configuration parameters for the job */
+  private static final String EDIT_SPECIFICATION_HTML = "editSpecification.html";
+
+  /** Forward to the template to view the specification parameters for the job */
+  private static final String VIEW_SPECIFICATION_HTML = "viewSpecification.html";
+
+  /** Constructor.
+   */
+  public FileOutputConnector() {
+  }
+
+  /** Return the list of activities that this connector supports (i.e. writes into the log).
+   *@return the list.
+   */
+  @Override
+  public String[] getActivitiesList() {
+    return activitiesList;
+  }
+
+  /** Connect.
+   *@param configParameters is the set of configuration parameters, which
+   * in this case describe the target appliance, basic auth configuration, etc.  (This formerly came
+   * out of the ini file.)
+   */
+  @Override
+  public void connect(ConfigParams configParameters) {
+    super.connect(configParameters);
+  }
+
+  /** Close the connection.  Call this before discarding the connection.
+   */
+  @Override
+  public void disconnect() throws ManifoldCFException {
+    super.disconnect();
+  }
+
+  /** Set up a session */
+  protected void getSession() throws ManifoldCFException, ServiceInterruption {
+  }
+
+  /** Test the connection.  Returns a string describing the connection integrity.
+   *@return the connection's status as a displayable string.
+   */
+  @Override
+  public String check() throws ManifoldCFException {
+    try {
+      getSession();
+      return super.check();
+    } catch (ServiceInterruption e) {
+      return "Transient error: "+e.getMessage();
+    }
+  }
+
+  /** Get an output version string, given an output specification.  The output version string is used to uniquely describe the pertinent details of
+   * the output specification and the configuration, to allow the Connector Framework to determine whether a document will need to be output again.
+   * Note that the contents of the document cannot be considered by this method, and that a different version string (defined in IRepositoryConnector)
+   * is used to describe the version of the actual document.
+   *
+   * This method presumes that the connector object has been configured, and it is thus able to communicate with the output data store should that be
+   * necessary.
+   *@param spec is the current output specification for the job that is doing the crawling.
+   *@return a string, of unlimited length, which uniquely describes output configuration and specification in such a way that if two such strings are equal,
+   * the document will not need to be sent again to the output data store.
+   */
+  @Override
+  public String getOutputDescription(OutputSpecification spec) throws ManifoldCFException, ServiceInterruption {
+    FileOutputSpecs specs = new FileOutputSpecs(getSpecNode(spec));
+    return specs.toJson().toString();
+  }
+
+  /** Add (or replace) a document in the output data store using the connector.
+   * This method presumes that the connector object has been configured, and it is thus able to communicate with the output data store should that be
+   * necessary.
+   * The OutputSpecification is *not* provided to this method, because the goal is consistency, and if output is done it must be consistent with the
+   * output description, since that was what was partly used to determine if output should be taking place.  So it may be necessary for this method to decode
+   * an output description string in order to determine what should be done.
+   *@param documentURI is the URI of the document.  The URI is presumed to be the unique identifier which the output data store will use to process
+   * and serve the document.  This URI is constructed by the repository connector which fetches the document, and is thus universal across all output connectors.
+   *@param outputDescription is the description string that was constructed for this document by the getOutputDescription() method.
+   *@param document is the document data to be processed (handed to the output data store).
+   *@param authorityNameString is the name of the authority responsible for authorizing any access tokens passed in with the repository document.  May be null.
+   *@param activities is the handle to an object that the implementer of an output connector may use to perform operations, such as logging processing activity.
+   *@return the document status (accepted or permanently rejected).
+   */
+  @Override
+  public int addOrReplaceDocument(String documentURI, String outputDescription, RepositoryDocument document, String authorityNameString, IOutputAddActivity activities) throws ManifoldCFException, ServiceInterruption {
+    // Establish a session
+    getSession();
+
+    FileOutputConfig config = getConfigParameters(null);
+
+    FileOutputSpecs specs = null;
+    StringBuffer path = new StringBuffer();
+    try {
+      specs = new FileOutputSpecs(outputDescription);
+
+      /*
+       * Build the target file path, starting from the configured root path.
+       */
+      if (specs.getRootPath() != null) {
+        path.append(specs.getRootPath());
+      }
+      
+      // If the path does not yet exist at the root level, it is dangerous to create it.
+      File currentPath = new File(path.toString());
+      if (!currentPath.exists())
+        throw new ManifoldCFException("Root path does not yet exist: '"+currentPath+"'");
+      if (!currentPath.isDirectory())
+        throw new ManifoldCFException("Root path is not a directory: '"+currentPath+"'");
+      
+      String filePath = documentURItoFilePath(documentURI);
+      
+      // Build path one level at a time.  This is needed because there may be a collision at
+      // every level.
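+      // For example (illustrative), if a path segment "docs" already exists as a plain file,
+      // the directory is created as "docs.1" instead, then "docs.2", and so on.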
+      int index = 0;
+      while (true)
+      {
+        int currentIndex = filePath.indexOf("/",index);
+        if (currentIndex == -1)
+          break;
+        String dirName = filePath.substring(index,currentIndex);
+        File newPath = new File(currentPath, dirName);
+        index = currentIndex + 1;
+        int suffix = 1;
+        while (true)
+        {
+          if (newPath.exists() && newPath.isDirectory())
+            break;
+          // Try to create it.  If we fail, check if it now exists as a file.
+          if (newPath.mkdir())
+            break;
+          // Hmm, didn't create.  If it is a file, we suffered a collision, so try again with ".N" as a suffix.
+          if (newPath.exists())
+          {
+            if (newPath.isDirectory())
+              break;
+            newPath = new File(currentPath, dirName + "." + suffix);
+            suffix++;
+          }
+          else
+            throw new ManifoldCFException("Could not create directory '"+newPath+"'.  Permission issue?");
+        }
+        // Directory successfully created!
+        currentPath = newPath;
+        // Go on to the next one.
+      }
+      
+      // Path successfully created.  Now create file.
+      FileOutputStream output = null;
+      String fileName = filePath.substring(index);
+      File outputPath = new File(currentPath, fileName);
+      int fileSuffix = 1;
+      while (true)
+      {
+        try
+        {
+          output = new FileOutputStream(outputPath);
+          break;
+        }
+        catch (FileNotFoundException e)
+        {
+          // Figure out why it could not be created.
+          if (outputPath.exists() && !outputPath.isFile())
+          {
+            // try a new file
+            outputPath = new File(currentPath, fileName + "." + fileSuffix);
+            fileSuffix++;
+            continue;
+          }
+          // Probably some other error
+          throw new ManifoldCFException("Could not create file '"+outputPath+"': "+e.getMessage(),e);
+        }
+      }
+
+      try {
+        /*
+         * lock file
+         */
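+        // FileChannel.tryLock() returns null when another process already holds the lock; in that
+        // case a ServiceInterruption is thrown so the framework retries this document later.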
+        FileChannel channel = output.getChannel();
+        FileLock lock = channel.tryLock();
+        if (lock == null)
+          throw new ServiceInterruption("Could not lock file: '"+outputPath+"'",null,1000L,-1L,10,false);
+
+        try {
+
+          /*
+           * Stream the document content to the target file in 64K chunks.
+           */
+          InputStream input = document.getBinaryStream();
+          byte buf[] = new byte[65536];
+          int len;
+          while((len = input.read(buf)) != -1) {
+            output.write(buf, 0, len);
+          }
+          output.flush();
+        } finally {
+          // Unlock
+          try {
+            if (lock != null) {
+              lock.release();
+            }
+          } catch (ClosedChannelException e) {
+          }
+        }
+      } finally {
+        try {
+          output.close();
+        } catch (IOException e) {
+        }
+      }
+    } catch (JSONException e) {
+      handleJSONException(e);
+      return DOCUMENTSTATUS_REJECTED;
+    } catch (URISyntaxException e) {
+      handleURISyntaxException(e);
+      return DOCUMENTSTATUS_REJECTED;
+    } catch (FileNotFoundException e) {
+      handleFileNotFoundException(e);
+      return DOCUMENTSTATUS_REJECTED;
+    } catch (SecurityException e) {
+      handleSecurityException(e);
+      return DOCUMENTSTATUS_REJECTED;
+    } catch (IOException e) {
+      handleIOException(e);
+      return DOCUMENTSTATUS_REJECTED;
+    }
+
+    activities.recordActivity(null, INGEST_ACTIVITY, new Long(document.getBinaryLength()), documentURI, "OK", null);
+    return DOCUMENTSTATUS_ACCEPTED;
+  }
+
+  protected static void handleJSONException(JSONException e)
+    throws ManifoldCFException, ServiceInterruption {
+    Logging.agents.error("FileSystem: JSONException: "+e.getMessage(),e);
+    throw new ManifoldCFException(e.getMessage(),e);
+  }
+
+  protected static void handleURISyntaxException(URISyntaxException e)
+    throws ManifoldCFException, ServiceInterruption {
+    Logging.agents.error("FileSystem: URISyntaxException: "+e.getMessage(),e);
+    throw new ManifoldCFException(e.getMessage(),e);
+  }
+
+  protected static void handleSecurityException(SecurityException e)
+    throws ManifoldCFException, ServiceInterruption {
+    Logging.agents.error("FileSystem: SecurityException: "+e.getMessage(),e);
+    throw new ManifoldCFException(e.getMessage(),e);
+  }
+
+  protected static void handleFileNotFoundException(FileNotFoundException e)
+    throws ManifoldCFException, ServiceInterruption {
+    Logging.agents.error("FileSystem: Path is illegal: "+e.getMessage(),e);
+    throw new ManifoldCFException(e.getMessage(),e);
+  }
+
+  /** Handle IOException */
+  protected static void handleIOException(IOException e)
+    throws ManifoldCFException, ServiceInterruption
+  {
+    if (!(e instanceof java.net.SocketTimeoutException) && (e instanceof InterruptedIOException)) {
+      throw new ManifoldCFException("Interrupted: " + e.getMessage(), e, ManifoldCFException.INTERRUPTED);
+    }
+    long currentTime = System.currentTimeMillis();
+    Logging.agents.warn("FileSystem: IO exception: "+e.getMessage(),e);
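+    // The ServiceInterruption below asks the framework to retry in roughly five minutes and to
+    // keep retrying for up to three hours before giving up (assumed interpretation of the arguments).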
+    throw new ServiceInterruption("IO exception: "+e.getMessage(), e, currentTime + 300000L, currentTime + 3 * 60 * 60000L,-1,false);
+  }
+
+  /** Remove a document using the connector.
+   * Note that the last outputDescription is included, since it may be necessary for the connector to use such information to know how to properly remove the document.
+   *@param documentURI is the URI of the document.  The URI is presumed to be the unique identifier which the output data store will use to process
+   * and serve the document.  This URI is constructed by the repository connector which fetches the document, and is thus universal across all output connectors.
+   *@param outputDescription is the last description string that was constructed for this document by the getOutputDescription() method above.
+   *@param activities is the handle to an object that the implementer of an output connector may use to perform operations, such as logging processing activity.
+   */
+  @Override
+  public void removeDocument(String documentURI, String outputDescription, IOutputRemoveActivity activities) throws ManifoldCFException, ServiceInterruption {
+    // Establish a session
+    getSession();
+
+    FileOutputConfig config = getConfigParameters(null);
+
+    FileOutputSpecs specs = null;
+    StringBuffer path = new StringBuffer();
+    try {
+      specs = new FileOutputSpecs(outputDescription);
+
+      // We cannot remove documents, because it is unsafe to do so.
+      // Paths that were created when the document existed will not
+      // be found if it goes away.  So we have to leave a grave marker,
+      // in this case a zero-length file, instead.
+      
+      // If the path does not yet exist at the root level, it is dangerous to create it.
+      File currentPath = new File(path.toString());
+      if (!currentPath.exists())
+        return;
+      if (!currentPath.isDirectory())
+        return;
+      
+      String filePath = documentURItoFilePath(documentURI);
+      
+      // Build path one level at a time.  This is needed because there may be a collision at
+      // every level.  If we don't find a directory where we expect it, we just exit.
+      int index = 0;
+      while (true)
+      {
+        int currentIndex = filePath.indexOf("/",index);
+        if (currentIndex == -1)
+          break;
+        String dirName = filePath.substring(index,currentIndex);
+        File newPath = new File(currentPath, dirName);
+        index = currentIndex + 1;
+        int suffix = 1;
+        while (true)
+        {
+          if (!newPath.exists())
+            return;
+          if (newPath.isDirectory())
+            break;
+          // It's a file.  Move on to the next one.
+          newPath = new File(currentPath, dirName + "." + suffix);
+          suffix++;
+        }
+        // Directory successfully created!
+        currentPath = newPath;
+        // Go on to the next level.
+      }
+      
+      // Path found.  Now, see if we can find the file to null out.
+      FileOutputStream output = null;
+      String fileName = filePath.substring(index);
+      File outputPath = new File(currentPath, fileName);
+      int fileSuffix = 1;
+      while (true)
+      {
+        if (!outputPath.exists())
+          return;
+        if (!outputPath.isFile())
+        {
+          // Try a new one
+          outputPath = new File(currentPath, fileName + "." + fileSuffix);
+          fileSuffix++;
+          continue;
+        }
+        // Null it out!
+        try
+        {
+          output = new FileOutputStream(outputPath);
+          break;
+        }
+        catch (FileNotFoundException e)
+        {
+          // Probably some other error
+          throw new ManifoldCFException("Could not zero out file '"+outputPath+"': "+e.getMessage(),e);
+        }
+      }
+      // Just close it, to make a zero-length grave marker.
+      output.close();
+    } catch (JSONException e) {
+      handleJSONException(e);
+    } catch (URISyntaxException e) {
+      handleURISyntaxException(e);
+    } catch (FileNotFoundException e) {
+      handleFileNotFoundException(e);
+    } catch (SecurityException e) {
+      handleSecurityException(e);
+    } catch (IOException e) {
+      handleIOException(e);
+    }
+
+    activities.recordActivity(null, REMOVE_ACTIVITY, null, documentURI, "OK", null);
+  }
+
+  /** Output the configuration header section.
+   * This method is called in the head section of the connector's configuration page.  Its purpose is to add the required tabs to the list, and to output any
+   * javascript methods that might be needed by the configuration editing HTML.
+   *@param threadContext is the local thread context.
+   *@param out is the output to which any HTML should be sent.
+   *@param parameters are the configuration parameters, as they currently exist, for this connection being configured.
+   *@param tabsArray is an array of tab names.  Add to this array any tab names that are specific to the connector.
+   */
+  @Override
+  public void outputConfigurationHeader(IThreadContext threadContext, IHTTPOutput out, Locale locale, ConfigParams parameters, List<String> tabsArray) throws ManifoldCFException, IOException {
+  }
+
+  /** Output the configuration body section.
+   * This method is called in the body section of the connector's configuration page.  Its purpose is to present the required form elements for editing.
+   * The coder can presume that the HTML that is output from this configuration will be within appropriate <html>, <body>, and <form> tags.  The name of the
+   * form is "editconnection".
+   *@param threadContext is the local thread context.
+   *@param out is the output to which any HTML should be sent.
+   *@param parameters are the configuration parameters, as they currently exist, for this connection being configured.
+   *@param tabName is the current tab name.
+   */
+  @Override
+  public void outputConfigurationBody(IThreadContext threadContext, IHTTPOutput out, Locale locale, ConfigParams parameters, String tabName) throws ManifoldCFException, IOException {
+  }
+
+  /** Process a configuration post.
+   * This method is called at the start of the connector's configuration page, whenever there is a possibility that form data for a connection has been
+   * posted.  Its purpose is to gather form information and modify the configuration parameters accordingly.
+   * The name of the posted form is "editconnection".
+   *@param threadContext is the local thread context.
+   *@param variableContext is the set of variables available from the post, including binary file post information.
+   *@param parameters are the configuration parameters, as they currently exist, for this connection being configured.
+   *@return null if all is well, or a string error message if there is an error that should prevent saving of the connection (and cause a redirection to an error page).
+   */
+  @Override
+  public String processConfigurationPost(IThreadContext threadContext, IPostParameters variableContext, Locale locale, ConfigParams parameters) throws ManifoldCFException {
+    return null;
+  }
+
+  /** View configuration.
+   * This method is called in the body section of the connector's view configuration page.  Its purpose is to present the connection information to the user.
+   * The coder can presume that the HTML that is output from this configuration will be within appropriate <html> and <body> tags.
+   *@param threadContext is the local thread context.
+   *@param out is the output to which any HTML should be sent.
+   *@param parameters are the configuration parameters, as they currently exist, for this connection being configured.
+   */
+  @Override
+  public void viewConfiguration(IThreadContext threadContext, IHTTPOutput out, Locale locale, ConfigParams parameters) throws ManifoldCFException, IOException {
+  }
+
+  /** Output the specification header section.
+   * This method is called in the head section of a job page which has selected an output connection of the current type.  Its purpose is to add the required tabs
+   * to the list, and to output any javascript methods that might be needed by the job editing HTML.
+   *@param out is the output to which any HTML should be sent.
+   *@param os is the current output specification for this job.
+   *@param tabsArray is an array of tab names.  Add to this array any tab names that are specific to the connector.
+   */
+  @Override
+  public void outputSpecificationHeader(IHTTPOutput out, Locale locale, OutputSpecification os, List<String> tabsArray) throws ManifoldCFException, IOException {
+    super.outputSpecificationHeader(out, locale, os, tabsArray);
+    tabsArray.add(Messages.getString(locale, "FileConnector.PathTabName"));
+    outputResource(EDIT_SPECIFICATION_JS, out, locale, null, null);
+  }
+
+  /** Output the specification body section.
+   * This method is called in the body section of a job page which has selected an output connection of the current type.  Its purpose is to present the required form elements for editing.
+   * The coder can presume that the HTML that is output from this configuration will be within appropriate <html>, <body>, and <form> tags.  The name of the
+   * form is "editjob".
+   *@param out is the output to which any HTML should be sent.
+   *@param os is the current output specification for this job.
+   *@param tabName is the current tab name.
+   */
+  @Override
+  public void outputSpecificationBody(IHTTPOutput out, Locale locale, OutputSpecification os, String tabName) throws ManifoldCFException, IOException {
+    super.outputSpecificationBody(out, locale, os, tabName);
+    FileOutputSpecs specs = getSpecParameters(os);
+    outputResource(EDIT_SPECIFICATION_HTML, out, locale, specs, tabName);
+  }
+
+  /** Process a specification post.
+   * This method is called at the start of job's edit or view page, whenever there is a possibility that form data for a connection has been
+   * posted.  Its purpose is to gather form information and modify the output specification accordingly.
+   * The name of the posted form is "editjob".
+   *@param variableContext contains the post data, including binary file-upload information.
+   *@param os is the current output specification for this job.
+   *@return null if all is well, or a string error message if there is an error that should prevent saving of the job (and cause a redirection to an error page).
+   */
+  @Override
+  public String processSpecificationPost(IPostParameters variableContext, Locale locale, OutputSpecification os) throws ManifoldCFException {
+    ConfigurationNode specNode = getSpecNode(os);
+    boolean bAdd = (specNode == null);
+    if (bAdd) {
+      specNode = new SpecificationNode(FileOutputConstant.PARAM_ROOTPATH);
+    }
+    FileOutputSpecs.contextToSpecNode(variableContext, specNode);
+    if (bAdd) {
+      os.addChild(os.getChildCount(), specNode);
+    }
+
+    return null;
+  }
+
+  /** View specification.
+   * This method is called in the body section of a job's view page.  Its purpose is to present the output specification information to the user.
+   * The coder can presume that the HTML that is output from this configuration will be within appropriate <html> and <body> tags.
+   *@param out is the output to which any HTML should be sent.
+   *@param os is the current output specification for this job.
+   */
+  @Override
+  public void viewSpecification(IHTTPOutput out, Locale locale, OutputSpecification os) throws ManifoldCFException, IOException {
+    outputResource(VIEW_SPECIFICATION_HTML, out, locale, getSpecParameters(os), null);
+  }
+
+  /** Locate the specification node carrying the root path, if any.
+   * @param os is the output specification to scan
+   * @return the root path node, or null if none is present
+   */
+  final private SpecificationNode getSpecNode(OutputSpecification os)
+  {
+    int l = os.getChildCount();
+    for (int i = 0; i < l; i++) {
+      SpecificationNode node = os.getChild(i);
+      if (node.getType().equals(FileOutputConstant.PARAM_ROOTPATH)) {
+        return node;
+      }
+    }
+    return null;
+  }
+
+  /** Build a FileOutputSpecs object from an output specification.
+   * @param os is the output specification to read
+   * @return the parsed specification parameters
+   * @throws ManifoldCFException
+   */
+  final private FileOutputSpecs getSpecParameters(OutputSpecification os) throws ManifoldCFException {
+    return new FileOutputSpecs(getSpecNode(os));
+  }
+
+  /** Build a FileOutputConfig object from configuration parameters, falling back
+   * to the connection's current configuration when null is passed.
+   * @param configParams is the configuration parameter set, or null
+   * @return the parsed configuration parameters
+   */
+  final private FileOutputConfig getConfigParameters(ConfigParams configParams) {
+    if (configParams == null)
+      configParams = getConfiguration();
+    return new FileOutputConfig(configParams);
+  }
+
+  /** Read the content of a resource, replace each ${PARAMNAME} variable with its
+   * value, and copy the result to the output.
+   * 
+   * @param resName is the name of the Velocity resource to render
+   * @param out is the HTTP output to write to
+   * @throws ManifoldCFException */
+  private static void outputResource(String resName, IHTTPOutput out, Locale locale, FileOutputParam params, String tabName) throws ManifoldCFException {
+    Map<String,String> paramMap = null;
+    if (params != null) {
+      paramMap = params.buildMap();
+      if (tabName != null) {
+        paramMap.put("TabName", tabName);
+      }
+    }
+    Messages.outputResourceWithVelocity(out, locale, resName, paramMap, true);
+  }
+
+  /** Convert a document URI into a relative file path rooted under the configured root path.
+   * @param documentURI is the URI of the document being written
+   * @return the relative file path derived from the URI
+   * @throws URISyntaxException
+   * @throws NullPointerException
+   */
+  final private String documentURItoFilePath(String documentURI) throws URISyntaxException, NullPointerException {
+    StringBuffer path = new StringBuffer();
+    URI uri = null;
+
+    uri = new URI(documentURI);
+
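+    // Illustrative example (assumed): "http://example.com:8080/docs/a.html" maps to
+    // the relative path "http/example.com:8080/docs/a.html" under the rules below.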
+    if (uri.getScheme() != null) {
+      path.append(uri.getScheme());
+      path.append("/");
+    }
+
+    if (uri.getHost() != null) {
+      path.append(uri.getHost());
+      if (uri.getPort() != -1) {
+        path.append(":");
+        path.append(uri.getPort());
+      }
+      if (uri.getRawPath() != null) {
+        if (uri.getRawPath().length() == 0) {
+          path.append("/");
+        } else if (uri.getRawPath().equals("/")) {
+          path.append(uri.getRawPath());
+        } else {
+          for (String name : uri.getRawPath().split("/")) {
+            if (name.length() > 0) {
+              path.append("/");
+              path.append(convertString(name));
+            }
+          }
+        }
+      }
+      if (uri.getRawQuery() != null) {
+        path.append("?");
+        path.append(convertString(uri.getRawQuery()));
+      }
+    } else {
+      if (uri.getRawSchemeSpecificPart() != null) {
+        for (String name : uri.getRawSchemeSpecificPart().split("/")) {
+          if (name.length() > 0) {
+            path.append("/");
+            path.append(convertString(name));
+          }
+        }
+      }
+    }
+
+    if (path.toString().endsWith("/")) {
+      path.append(".content");
+    }
+    return path.toString();
+  }
+  
+  final private String convertString(final String input) {
+    StringBuilder sb = new StringBuilder();
+    for (int i = 0; i < input.length(); i++) {
+      char c = input.charAt(i);
+      // Handle filename disallowed special characters!
+      if (c == ':') {
+        // MHL for what really happens to colons
+      }
+      else
+        sb.append(c);
+    }
+    return sb.toString();
+  }
+}
diff --git a/connectors/filesystem/connector/src/main/java/org/apache/manifoldcf/agents/output/filesystem/FileOutputConstant.java b/connectors/filesystem/connector/src/main/java/org/apache/manifoldcf/agents/output/filesystem/FileOutputConstant.java
new file mode 100644
index 0000000..6627b92
--- /dev/null
+++ b/connectors/filesystem/connector/src/main/java/org/apache/manifoldcf/agents/output/filesystem/FileOutputConstant.java
@@ -0,0 +1,33 @@
+/* $Id: FileOutputConstant.java 991374 2013-05-31 23:01:08Z minoru $ */
+
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.manifoldcf.agents.output.filesystem;
+
+
+/** Parameters and output data for the file system output connector.
+ */
+public class FileOutputConstant
+{
+  public static final String _rcsid = "@(#)$Id: FileOutputConstant.java 991374 2010-08-31 22:32:08Z minoru $";
+
+  // Configuration parameters
+
+  /** Root path */
+  public static final String PARAM_ROOTPATH = "rootpath";
+
+}
diff --git a/connectors/filesystem/connector/src/main/java/org/apache/manifoldcf/agents/output/filesystem/FileOutputParam.java b/connectors/filesystem/connector/src/main/java/org/apache/manifoldcf/agents/output/filesystem/FileOutputParam.java
new file mode 100644
index 0000000..8621e3c
--- /dev/null
+++ b/connectors/filesystem/connector/src/main/java/org/apache/manifoldcf/agents/output/filesystem/FileOutputParam.java
@@ -0,0 +1,45 @@
+/* $Id: FileOutputParam.java 1299512 2013-05-31 22:59:38Z minoru $ */
+
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.manifoldcf.agents.output.filesystem;
+
+import java.util.HashMap;
+import java.util.Map;
+
+/** 
+ * Parameter data for the file system output connector.
+ */
+public class FileOutputParam extends HashMap<ParameterEnum, String>
+{
+  private static final long serialVersionUID = -140994685772720029L;
+
+
+  protected FileOutputParam(ParameterEnum[] params) {
+    super(params.length);
+  }
+
+  final public Map<String, String> buildMap() {
+    Map<String, String> rval = new HashMap<String, String>();
+    for (Map.Entry<ParameterEnum, String> entry : this.entrySet()) {
+      rval.put(entry.getKey().name(), entry.getValue());
+    }
+    return rval;
+  }
+
+}
diff --git a/connectors/filesystem/connector/src/main/java/org/apache/manifoldcf/agents/output/filesystem/FileOutputSpecs.java b/connectors/filesystem/connector/src/main/java/org/apache/manifoldcf/agents/output/filesystem/FileOutputSpecs.java
new file mode 100644
index 0000000..08dd152
--- /dev/null
+++ b/connectors/filesystem/connector/src/main/java/org/apache/manifoldcf/agents/output/filesystem/FileOutputSpecs.java
@@ -0,0 +1,153 @@
+/* $Id: FileOutputSpecs.java 1299512 2013-05-31 22:58:38Z minoru $ */
+
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.manifoldcf.agents.output.filesystem;
+
+import java.io.BufferedReader;
+import java.io.IOException;
+import java.io.StringReader;
+import java.util.Set;
+import java.util.TreeSet;
+
+import org.apache.commons.io.IOUtils;
+import org.apache.manifoldcf.core.interfaces.ConfigurationNode;
+import org.apache.manifoldcf.core.interfaces.IPostParameters;
+import org.apache.manifoldcf.core.interfaces.ManifoldCFException;
+import org.json.JSONException;
+import org.json.JSONObject;
+
+public class FileOutputSpecs extends FileOutputParam {
+  /**
+   * 
+   */
+  private static final long serialVersionUID = 1859652730572662025L;
+
+  final public static ParameterEnum[] SPECIFICATIONLIST = {
+    ParameterEnum.ROOTPATH
+  };
+
+  private String rootPath;
+
+  /** Build a set of file output specification parameters by reading a JSON string
+   * 
+   * @param json is the JSON representation of the specification
+   * @throws JSONException
+   * @throws ManifoldCFException
+   */
+  public FileOutputSpecs(String json) throws JSONException, ManifoldCFException {
+    this(new JSONObject(json));
+  }
+
+  /** Build a set of file output specification parameters by reading a JSON object
+   * 
+   * @param json is the JSON object holding the specification values
+   * @throws JSONException
+   * @throws ManifoldCFException
+   */
+  public FileOutputSpecs(JSONObject json) throws JSONException, ManifoldCFException {
+    super(SPECIFICATIONLIST);
+    rootPath = null;
+    for (ParameterEnum param : SPECIFICATIONLIST) {
+      String value = null;
+      value = json.getString(param.name());
+      if (value == null) {
+        value = param.defaultValue;
+      }
+      put(param, value);
+    }
+    rootPath = getRootPath();
+  }
+
+  /** Build a set of file output specification parameters by reading an instance of
+   * SpecificationNode.
+   * 
+   * @param node is the specification node to read, or null for defaults
+   * @throws ManifoldCFException
+   */
+  public FileOutputSpecs(ConfigurationNode node) throws ManifoldCFException {
+    super(SPECIFICATIONLIST);
+    rootPath = null;
+    for (ParameterEnum param : SPECIFICATIONLIST) {
+      String value = null;
+      if (node != null) {
+        value = node.getAttributeValue(param.name());
+      }
+      if (value == null) {
+        value = param.defaultValue;
+      }
+      put(param, value);
+    }
+    rootPath = getRootPath();
+  }
+
+  /** Read posted specification values from the variable context and copy them
+   * onto the supplied specification node.
+   * @param variableContext is the posted form data
+   * @param specNode is the specification node to update
+   */
+  public static void contextToSpecNode(IPostParameters variableContext, ConfigurationNode specNode) {
+    for (ParameterEnum param : SPECIFICATIONLIST) {
+      String p = variableContext.getParameter(param.name().toLowerCase());
+      if (p != null) {
+        specNode.setAttribute(param.name(), p);
+      }
+    }
+  }
+
+  /** @return a JSON representation of the parameter list */
+  public JSONObject toJson() {
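+    // This JSON form is what getOutputDescription() returns as the output version string,
+    // and what the FileOutputSpecs(String) constructor parses back on ingestion and removal.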
+    return new JSONObject(this);
+  }
+
+  /**
+   * @return the configured root path
+   */
+  public String getRootPath() {
+    return get(ParameterEnum.ROOTPATH);
+  }
+
+  /** Split a newline-separated string into a sorted set of non-empty, trimmed lines.
+   * @param content is the multi-line string to split
+   * @return the resulting set of lines
+   * @throws ManifoldCFException
+   */
+  private final static TreeSet<String> createStringSet(String content) throws ManifoldCFException {
+    TreeSet<String> set = new TreeSet<String>();
+    BufferedReader br = null;
+    StringReader sr = null;
+    try {
+      sr = new StringReader(content);
+      br = new BufferedReader(sr);
+      String line = null;
+      while ((line = br.readLine()) != null) {
+        line = line.trim();
+        if (line.length() > 0) {
+          set.add(line);
+        }
+      }
+      return set;
+    } catch (IOException e) {
+      throw new ManifoldCFException(e);
+    } finally {
+      if (br != null) {
+        IOUtils.closeQuietly(br);
+      }
+    }
+  }
+
+}
diff --git a/connectors/filesystem/connector/src/main/java/org/apache/manifoldcf/agents/output/filesystem/Messages.java b/connectors/filesystem/connector/src/main/java/org/apache/manifoldcf/agents/output/filesystem/Messages.java
new file mode 100644
index 0000000..967501c
--- /dev/null
+++ b/connectors/filesystem/connector/src/main/java/org/apache/manifoldcf/agents/output/filesystem/Messages.java
@@ -0,0 +1,141 @@
+/* $Id: Messages.java 1295926 2013-05-31 23:00:00Z minoru $ */
+
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.manifoldcf.agents.output.filesystem;
+
+import java.util.Locale;
+import java.util.Map;
+import org.apache.manifoldcf.core.interfaces.ManifoldCFException;
+import org.apache.manifoldcf.core.interfaces.IHTTPOutput;
+
+public class Messages extends org.apache.manifoldcf.ui.i18n.Messages
+{
+  public static final String DEFAULT_BUNDLE_NAME="org.apache.manifoldcf.agents.output.filesystem.common";
+  public static final String DEFAULT_PATH_NAME="org.apache.manifoldcf.agents.output.filesystem";
+
+  /** Constructor - do not instantiate
+   */
+  protected Messages()
+  {
+  }
+
+  public static String getString(Locale locale, String messageKey)
+  {
+    return getString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getAttributeString(Locale locale, String messageKey)
+  {
+    return getAttributeString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getBodyString(Locale locale, String messageKey)
+  {
+    return getBodyString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getAttributeJavascriptString(Locale locale, String messageKey)
+  {
+    return getAttributeJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getBodyJavascriptString(Locale locale, String messageKey)
+  {
+    return getBodyJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getString(Locale locale, String messageKey, Object[] args)
+  {
+    return getString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+
+  public static String getAttributeString(Locale locale, String messageKey, Object[] args)
+  {
+    return getAttributeString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+
+  public static String getBodyString(Locale locale, String messageKey, Object[] args)
+  {
+    return getBodyString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+
+  public static String getAttributeJavascriptString(Locale locale, String messageKey, Object[] args)
+  {
+    return getAttributeJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+
+  public static String getBodyJavascriptString(Locale locale, String messageKey, Object[] args)
+  {
+    return getBodyJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+
+  // More general methods which allow bundlenames and class loaders to be specified.
+
+  public static String getString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getString(Messages.class, bundleName, locale, messageKey, args);
+  }
+
+  public static String getAttributeString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getAttributeString(Messages.class, bundleName, locale, messageKey, args);
+  }
+
+  public static String getBodyString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getBodyString(Messages.class, bundleName, locale, messageKey, args);
+  }
+
+  public static String getAttributeJavascriptString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getAttributeJavascriptString(Messages.class, bundleName, locale, messageKey, args);
+  }
+
+  public static String getBodyJavascriptString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getBodyJavascriptString(Messages.class, bundleName, locale, messageKey, args);
+  }
+
+  // Resource output
+
+  public static void outputResource(IHTTPOutput output, Locale locale, String resourceKey,
+      Map<String,String> substitutionParameters, boolean mapToUpperCase)
+    throws ManifoldCFException
+  {
+    outputResource(output,Messages.class,DEFAULT_PATH_NAME,locale,resourceKey,
+        substitutionParameters,mapToUpperCase);
+  }
+
+  public static void outputResourceWithVelocity(IHTTPOutput output, Locale locale, String resourceKey,
+      Map<String,String> substitutionParameters, boolean mapToUpperCase)
+    throws ManifoldCFException
+  {
+    outputResourceWithVelocity(output,Messages.class,DEFAULT_BUNDLE_NAME,DEFAULT_PATH_NAME,locale,resourceKey,
+        substitutionParameters,mapToUpperCase);
+  }
+
+  public static void outputResourceWithVelocity(IHTTPOutput output, Locale locale, String resourceKey,
+      Map<String,Object> contextObjects)
+    throws ManifoldCFException
+  {
+    outputResourceWithVelocity(output,Messages.class,DEFAULT_BUNDLE_NAME,DEFAULT_PATH_NAME,locale,resourceKey,
+        contextObjects);
+  }
+
+}
+
diff --git a/connectors/filesystem/connector/src/main/java/org/apache/manifoldcf/agents/output/filesystem/ParameterEnum.java b/connectors/filesystem/connector/src/main/java/org/apache/manifoldcf/agents/output/filesystem/ParameterEnum.java
new file mode 100644
index 0000000..4131d49
--- /dev/null
+++ b/connectors/filesystem/connector/src/main/java/org/apache/manifoldcf/agents/output/filesystem/ParameterEnum.java
@@ -0,0 +1,31 @@
+/* $Id$ */
+
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.manifoldcf.agents.output.filesystem;
+
+/** Parameters constants */
+public enum ParameterEnum {
+  ROOTPATH("");
+
+  final protected String defaultValue;
+
+  private ParameterEnum(String defaultValue) {
+    this.defaultValue = defaultValue;
+  }
+}
diff --git a/connectors/filesystem/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/filesystem/FileConnector.java b/connectors/filesystem/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/filesystem/FileConnector.java
index e8ea8b8..e185db9 100644
--- a/connectors/filesystem/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/filesystem/FileConnector.java
+++ b/connectors/filesystem/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/filesystem/FileConnector.java
@@ -25,6 +25,10 @@
 import org.apache.manifoldcf.core.extmimemap.ExtensionMimeMap;
 import java.util.*;
 import java.io.*;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.URLDecoder;
+import java.net.URLEncoder;
 
 /** This is the "repository connector" for a file system.  It's a relative of the share crawler, and should have
 * comparable basic functionality, with the exception of the ability to use ActiveDirectory and look at other shares.
@@ -103,10 +107,52 @@
 
   /** Convert a document identifier to a URI.  The URI is the URI that will be the unique key from
   * the search index, and will be presented to the user as part of the search results.
+  *@param path is the document path, in scheme/host/path form.
+  *@return the document uri.
+  */
+  protected static String convertToWGETURI(String path)
+    throws ManifoldCFException
+  {
+    //
+    // Note well:  This MUST be a legal URI!!!
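+    // For example (illustrative), the stored path "http/localhost:8080/docs/a.html" is
+    // reassembled into the URI "http://localhost:8080/docs/a.html".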
+    try
+    {
+      StringBuffer sb = new StringBuffer();
+      String[] tmp = path.split("/", 3);
+      String scheme = "";
+      String host = "";
+      String other = "";
+      if (tmp.length >= 1)
+        scheme = tmp[0];
+      else
+        scheme = "http";
+      if (tmp.length >= 2)
+        host = tmp[1];
+      else
+        host = "localhost";
+      if (tmp.length >= 3)
+        other = "/" + tmp[2];
+      else
+        other = "/";
+      return new URI(scheme + "://" + host + other).toURL().toString();
+    }
+    catch (java.net.MalformedURLException e)
+    {
+      throw new ManifoldCFException("Bad url: "+e.getMessage(),e);
+    }
+    catch (URISyntaxException e)
+    {
+      throw new ManifoldCFException("Bad url: "+e.getMessage(),e);
+    }
+  }
+  
+  /** Convert a document identifier to a URI.  The URI is the URI that will be the unique key from
+  * the search index, and will be presented to the user as part of the search results.
   *@param documentIdentifier is the document identifier.
   *@return the document uri.
   */
-  protected String convertToURI(String documentIdentifier)
+  protected static String convertToURI(String documentIdentifier)
     throws ManifoldCFException
   {
     //
@@ -164,11 +210,14 @@
     DocumentSpecification spec, int jobMode, boolean usesDefaultAuthority)
     throws ManifoldCFException, ServiceInterruption
   {
-    String[] rval = new String[documentIdentifiers.length];
     int i = 0;
+    
+    String[] rval = new String[documentIdentifiers.length];
+    i = 0;
     while (i < rval.length)
     {
-      File file = new File(documentIdentifiers[i]);
+      String documentIdentifier = documentIdentifiers[i];
+      File file = new File(documentIdentifier);
       if (file.exists())
       {
         if (file.isDirectory())
@@ -189,7 +238,19 @@
           {
             // Get the file's modified date.
             long lastModified = file.lastModified();
+            
+            // Check if the path is to be converted.  We record that info in the version string so that we'll reindex documents whose
+            // URIs change.
+            String convertPath = findConvertPath(spec, file);
             StringBuilder sb = new StringBuilder();
+            if (convertPath != null)
+            {
+              // Record the path.
+              sb.append("+");
+              pack(sb,convertPath,'+');
+            }
+            else
+              sb.append("-");
             sb.append(new Long(lastModified).toString()).append(":").append(new Long(fileLength).toString());
             rval[i] = sb.toString();
           }
@@ -223,7 +284,9 @@
     int i = 0;
     while (i < documentIdentifiers.length)
     {
-      File file = new File(documentIdentifiers[i]);
+      String version = versions[i];
+      String documentIdentifier = documentIdentifiers[i];
+      File file = new File(documentIdentifier);
       if (file.exists())
       {
         if (file.isDirectory())
@@ -232,7 +295,6 @@
           long startTime = System.currentTimeMillis();
           String errorCode = "OK";
           String errorDesc = null;
-          String documentIdentifier = documentIdentifiers[i];
           String entityReference = documentIdentifier;
           try
           {
@@ -271,12 +333,22 @@
             // We still need to check based on file data.
             if (checkIngest(file,spec))
             {
+              
+              // Get the convert-path (if any) that was packed into the
+              // version string, so the external URI can be reconstructed
+              // at ingestion time.
+              String convertPath = null;
+              if (version.length() > 0 && version.startsWith("+"))
+              {
+                StringBuilder unpack = new StringBuilder();
+                unpack(unpack, version, 1, '+');
+                convertPath = unpack.toString();
+              }
+              
               long startTime = System.currentTimeMillis();
               String errorCode = "OK";
               String errorDesc = null;
               Long fileLength = null;
-              String documentIdentifier = documentIdentifiers[i];
-              String version = versions[i];
               String entityDescription = documentIdentifier;
               try
               {
@@ -293,9 +365,17 @@
                     data.setFileName(fileName);
                     data.setMimeType(mapExtensionToMimeType(fileName));
                     data.setModifiedDate(new Date(file.lastModified()));
-                    data.addField("uri",file.toString());
+                    String uri;
+                    if (convertPath != null) {
+                      // WGET-compatible input; convert back to external URI
+                      uri = convertToWGETURI(convertPath);
+                      data.addField("uri",uri);
+                    } else {
+                      // No conversion requested: ingest under the standard document URI,
+                      // but keep the raw file path in the "uri" field, as before this change.
+                      uri = convertToURI(documentIdentifier);
+                      data.addField("uri",file.toString());
+                    }
                     // MHL for other metadata
-                    activities.ingestDocument(documentIdentifier,version,convertToURI(documentIdentifier),data);
+                    activities.ingestDocument(documentIdentifier,version,uri,data);
                     fileLength = new Long(fileBytes);
                   }
                   finally
@@ -303,6 +383,11 @@
                     is.close();
                   }
                 }
+                catch (FileNotFoundException e)
+                {
+                  // The file disappeared since it was versioned; skip it without raising an error.
+                  Logging.connectors.debug("Skipping file due to " +e.getMessage());
+                }
                 catch (IOException e)
                 {
                   errorCode = "IO ERROR";
@@ -322,6 +407,34 @@
     }
   }
 
+  /** This method finds the part of the path that should be converted to a URI.
+  * Returns null if the path should not be converted.
+  *@param spec is the document specification.
+  *@param theFile is the file whose absolute path is matched against the start points.
+  *@return the part of the path to be converted, or null.
+  */
+  protected static String findConvertPath(DocumentSpecification spec, File theFile)
+  {
+    String fullpath = theFile.getAbsolutePath().replaceAll("\\\\","/");
+    for (int j = 0; j < spec.getChildCount(); j++)
+    {
+      SpecificationNode sn = spec.getChild(j);
+      if (sn.getType().equals("startpoint"))
+      {
+        String path = sn.getAttributeValue("path").replaceAll("\\\\","/");
+        String convertToURI = sn.getAttributeValue("converttouri");
+        if (path.length() > 0 && convertToURI != null && convertToURI.equals("true"))
+        {
+          if (!path.endsWith("/"))
+            path += "/";
+          if (fullpath.startsWith(path))
+            return fullpath.substring(path.length());
+        }
+      }
+    }
+    return null;
+  }
+
   /** Map an extension to a mime type */
   protected static String mapExtensionToMimeType(String fileName)
   {
@@ -470,6 +583,7 @@
 "        <tr class=\"formheaderrow\">\n"+
 "          <td class=\"formcolumnheader\"></td>\n"+
 "          <td class=\"formcolumnheader\"><nobr>" + Messages.getBodyString(locale,"FileConnector.RootPath") + "</nobr></td>\n"+
+"          <td class=\"formcolumnheader\"><nobr>" + Messages.getBodyString(locale,"FileConnector.ConvertToURI") + "<br/>" + Messages.getBodyString(locale,"FileConnector.ConvertToURIExample")+ "</nobr></td>\n"+
 "          <td class=\"formcolumnheader\"><nobr>" + Messages.getBodyString(locale,"FileConnector.Rules") + "</nobr></td>\n"+
 "        </tr>\n"
       );
@@ -482,6 +596,14 @@
         {
           String pathDescription = "_"+Integer.toString(k);
           String pathOpName = "specop"+pathDescription;
+
+          String path = sn.getAttributeValue("path");
+          String convertToURIString = sn.getAttributeValue("converttouri");
+
+          boolean convertToURI = false;
+          if (convertToURIString != null && convertToURIString.equals("true"))
+            convertToURI = true;
+
           out.print(
 "        <tr class=\""+(((k % 2)==0)?"evenformrow":"oddformrow")+"\">\n"+
 "          <td class=\"formcolumncell\">\n"+
@@ -493,7 +615,13 @@
 "          </td>\n"+
 "          <td class=\"formcolumncell\">\n"+
 "            <nobr>\n"+
-"              "+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(sn.getAttributeValue("path"))+" \n"+
+"              "+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(path)+" \n"+
+"            </nobr>\n"+
+"          </td>\n"+
+"          <td class=\"formcolumncell\">\n"+
+"            <input type=\"hidden\" name=\"converttouri"+pathDescription+"\" value=\""+(convertToURI?"true":"false")+"\">\n"+
+"            <nobr>\n"+
+"              "+(convertToURI?Messages.getBodyString(locale,"FileConnector.Yes"):Messages.getBodyString(locale,"FileConnector.No"))+" \n"+
 "            </nobr>\n"+
 "          </td>\n"+
 "          <td class=\"boxcell\">\n"+
@@ -579,7 +707,7 @@
           if (j == 0)
           {
             out.print(
-"              <tr class=\"formrow\"><td class=\"message\" colspan=\"4\">" + Messages.getBodyString(locale,"FileConnector.NoRulesDefined") + "</td></tr>\n"
+"              <tr class=\"formrow\"><td class=\"formcolumnmessage\" colspan=\"4\">" + Messages.getBodyString(locale,"FileConnector.NoRulesDefined") + "</td></tr>\n"
             );
           }
           out.print(
@@ -622,11 +750,11 @@
       if (k == 0)
       {
         out.print(
-"        <tr class=\"formrow\"><td class=\"message\" colspan=\"3\">" + Messages.getBodyString(locale,"FileConnector.NoDocumentsSpecified") + "</td></tr>\n"
+"        <tr class=\"formrow\"><td class=\"formcolumnmessage\" colspan=\"4\">" + Messages.getBodyString(locale,"FileConnector.NoDocumentsSpecified") + "</td></tr>\n"
         );
       }
       out.print(
-"        <tr class=\"formrow\"><td class=\"lightseparator\" colspan=\"3\"><hr/></td></tr>\n"+
+"        <tr class=\"formrow\"><td class=\"lightseparator\" colspan=\"4\"><hr/></td></tr>\n"+
 "        <tr class=\"formrow\">\n"+
 "          <td class=\"formcolumncell\">\n"+
 "            <nobr>\n"+
@@ -639,7 +767,12 @@
 "          </td>\n"+
 "          <td class=\"formcolumncell\">\n"+
 "            <nobr>\n"+
-"              <input type=\"text\" size=\"80\" name=\"specpath\" value=\"\"/>\n"+
+"              <input type=\"text\" size=\"30\" name=\"specpath\" value=\"\"/>\n"+
+"            </nobr>\n"+
+"          </td>\n"+
+"          <td class=\"formcolumncell\">\n"+
+"            <nobr>\n"+
+"              <input name=\"converttouri\" type=\"checkbox\" value=\"true\"/>\n"+
 "            </nobr>\n"+
 "          </td>\n"+
 "          <td class=\"formcolumncell\">\n"+
@@ -661,8 +794,17 @@
         if (sn.getType().equals("startpoint"))
         {
           String pathDescription = "_"+Integer.toString(k);
+
+          String path = sn.getAttributeValue("path");
+          String convertToURIString = sn.getAttributeValue("converttouri");
+
+          boolean convertToURI = false;
+          if (convertToURIString != null && convertToURIString.equals("true"))
+            convertToURI = true;
+
           out.print(
-"<input type=\"hidden\" name=\"specpath"+pathDescription+"\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(sn.getAttributeValue("path"))+"\"/>\n"+
+"<input type=\"hidden\" name=\"specpath"+pathDescription+"\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(path)+"\"/>\n"+
+"<input type=\"hidden\" name=\"converttouri"+pathDescription+"\" value=\""+(convertToURI?"true":"false")+"\">\n"+
 "<input type=\"hidden\" name=\"specchildcount"+pathDescription+"\" value=\""+Integer.toString(sn.getChildCount())+"\"/>\n"
           );
 
@@ -689,6 +831,7 @@
 "<input type=\"hidden\" name=\"pathcount\" value=\""+Integer.toString(k)+"\"/>\n"
       );
     }
+    
   }
   
   /** Process a specification post.
@@ -725,8 +868,13 @@
         }
         // Path inserts won't happen until the very end
         String path = variableContext.getParameter("specpath"+pathDescription);
+        String convertToURI = variableContext.getParameter("converttouri"+pathDescription);
+
         SpecificationNode node = new SpecificationNode("startpoint");
         node.setAttribute("path",path);
+        if (convertToURI != null)
+          node.setAttribute("converttouri",convertToURI);
+
         // Now, get the number of children
         String y = variableContext.getParameter("specchildcount"+pathDescription);
         int childCount = Integer.parseInt(y);
@@ -788,9 +936,13 @@
       if (op != null && op.equals("Add"))
       {
         String path = variableContext.getParameter("specpath");
+        String convertToURI = variableContext.getParameter("converttouri");
+
         SpecificationNode node = new SpecificationNode("startpoint");
         node.setAttribute("path",path);
-        
+        if (convertToURI != null)
+          node.setAttribute("converttouri",convertToURI);
+
         // Now add in the defaults; these will be "include all directories" and "include all files".
         SpecificationNode sn = new SpecificationNode("include");
         sn.setAttribute("type","file");
@@ -804,6 +956,7 @@
         ds.addChild(k,node);
       }
     }
+    
     return null;
   }
   
@@ -818,52 +971,119 @@
     throws ManifoldCFException, IOException
   {
     out.print(
-"<table class=\"displaytable\">\n"
+"<table class=\"displaytable\">\n"+
+"  <tr>\n"+
+"    <td class=\"description\">" + Messages.getAttributeString(locale,"FileConnector.Paths2") + "</td>\n"+    
+"    <td class=\"boxcell\">\n"+
+"      <table class=\"formtable\">\n"+
+"        <tr class=\"formheaderrow\">\n"+
+"          <td class=\"formcolumnheader\"><nobr>" + Messages.getBodyString(locale,"FileConnector.RootPath") + "</nobr></td>\n"+
+"          <td class=\"formcolumnheader\"><nobr>" + Messages.getBodyString(locale,"FileConnector.ConvertToURI") + "<br/>" + Messages.getBodyString(locale,"FileConnector.ConvertToURIExample")+ "</nobr></td>\n"+
+"          <td class=\"formcolumnheader\"><nobr>" + Messages.getBodyString(locale,"FileConnector.Rules") + "</nobr></td>\n"+
+"        </tr>\n"
     );
-
-    int i = 0;
-    boolean seenAny = false;
-    while (i < ds.getChildCount())
+    
+    int k = 0;
+    for (int i = 0; i < ds.getChildCount(); i++)
     {
-      SpecificationNode sn = ds.getChild(i++);
+      SpecificationNode sn = ds.getChild(i);
       if (sn.getType().equals("startpoint"))
       {
-        if (seenAny == false)
-        {
-          seenAny = true;
-        }
+        String path = sn.getAttributeValue("path");
+        String convertToURIString = sn.getAttributeValue("converttouri");
+        boolean convertToURI = false;
+        if (convertToURIString != null && convertToURIString.equals("true"))
+          convertToURI = true;
+        
         out.print(
-"  <tr>\n"+
-"    <td class=\"description\">"+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(sn.getAttributeValue("path"))+":"+"</td>\n"+
-"    <td class=\"value\">\n"
+"        <tr class=\""+(((k % 2)==0)?"evenformrow":"oddformrow")+"\">\n"+
+"          <td class=\"formcolumncell\">\n"+
+"            <nobr>\n"+
+"              "+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(path)+" \n"+
+"            </nobr>\n"+
+"          </td>\n"+
+"          <td class=\"formcolumncell\">\n"+
+"            <nobr>\n"+
+"              "+(convertToURI?Messages.getBodyString(locale,"FileConnector.Yes"):Messages.getBodyString(locale,"FileConnector.No"))+" \n"+
+"            </nobr>\n"+
+"          </td>\n"+
+"          <td class=\"boxcell\">\n"+
+"            <table class=\"formtable\">\n"+
+"              <tr class=\"formheaderrow\">\n"+
+"                <td class=\"formcolumnheader\"><nobr>" + Messages.getBodyString(locale,"FileConnector.IncludeExclude") + "</nobr></td>\n"+
+"                <td class=\"formcolumnheader\"><nobr>" + Messages.getBodyString(locale,"FileConnector.FileDirectory") + "</nobr></td>\n"+
+"                <td class=\"formcolumnheader\"><nobr>" + Messages.getBodyString(locale,"FileConnector.Match") + "</nobr></td>\n"+
+"              </tr>\n"
         );
-        int j = 0;
-        while (j < sn.getChildCount())
+        
+        int l = 0;
+        for (int j = 0; j < sn.getChildCount(); j++)
         {
-          SpecificationNode excludeNode = sn.getChild(j++);
+          SpecificationNode excludeNode = sn.getChild(j);
+
+          String nodeFlavor = excludeNode.getType();
+          String nodeType = excludeNode.getAttributeValue("type");
+          String nodeMatch = excludeNode.getAttributeValue("match");
           out.print(
-"      "+(excludeNode.getType().equals("include")?"Include ":"")+"\n"+
-"      "+(excludeNode.getType().equals("exclude")?"Exclude ":"")+"\n"+
-"      "+(excludeNode.getAttributeValue("type").equals("file")?"file ":"")+"\n"+
-"      "+(excludeNode.getAttributeValue("type").equals("directory")?"directory ":"")+"\n"+
-"      "+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(excludeNode.getAttributeValue("match"))+"<br/>\n"
+"              <tr class=\""+(((l % 2)==0)?"evenformrow":"oddformrow")+"\">\n"+
+"                <td class=\"formcolumncell\">\n"+
+"                  <nobr>\n"+
+"                    "+nodeFlavor+"\n"+
+"                  </nobr>\n"+
+"                </td>\n"+
+"                <td class=\"formcolumncell\">\n"+
+"                  <nobr>\n"+
+"                    "+nodeType+"\n"+
+"                  </nobr>\n"+
+"                </td>\n"+
+"                <td class=\"formcolumncell\">\n"+
+"                  <nobr>\n"+
+"                    "+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(nodeMatch)+"\n"+
+"                  </nobr>\n"+
+"                </td>\n"+
+"              </tr>\n"
+          );
+          l++;
+        }
+
+        if (l == 0)
+        {
+          out.print(
+"              <tr><td class=\"formcolumnmessage\" colspan=\"3\">" + Messages.getBodyString(locale,"FileConnector.NoRulesDefined") + "</td></tr>\n"
           );
         }
+
         out.print(
-"    </td>\n"+
-"  </tr>\n"
+"            </table>\n"+
+"           </td>\n"
         );
+
+        out.print(
+"        </tr>\n"
+        );
+
+        k++;
       }
+      
     }
-    if (seenAny == false)
+
+    if (k == 0)
     {
       out.print(
-"  <tr><td class=\"message\">" + Messages.getBodyString(locale,"FileConnector.NoDocumentsSpecified") + "</td></tr>\n"
+"        <tr><td class=\"formcolumnmessage\" colspan=\"3\">" + Messages.getBodyString(locale,"FileConnector.NoDocumentsSpecified") + "</td></tr>\n"
       );
     }
+    
+    out.print(
+"      </table>\n"+
+"    </td>\n"+
+"  </tr>\n"
+    );
+
     out.print(
 "</table>\n"
     );
+    
   }
 
   // Protected static methods
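
For reference, a standalone sketch of the path-to-URI mapping that the new convertToWGETURI() method and the "converttouri" start-point attribute provide (illustrative only, not part of the patch; the class and method names below are made up). When a start point is flagged with converttouri="true", the part of the file path below that start point is treated as scheme/host/rest and indexed under the corresponding URI; the decision is also recorded in the version string (a leading '+' plus the packed path, or '-' otherwise), so changing the flag forces reindexing.

import java.net.URI;
import java.net.URISyntaxException;

public class WgetUriSketch {

  // Behaves like convertToWGETURI() above: "scheme/host/rest" becomes "scheme://host/rest".
  static String toWgetUri(String path) throws URISyntaxException {
    String[] tmp = path.split("/", 3);
    String scheme = tmp.length >= 1 ? tmp[0] : "http";
    String host = tmp.length >= 2 ? tmp[1] : "localhost";
    String other = tmp.length >= 3 ? "/" + tmp[2] : "/";
    return new URI(scheme + "://" + host + other).toString();
  }

  public static void main(String[] args) throws URISyntaxException {
    // A file whose path relative to a converttouri="true" start point is
    // "http/xyz/index.html" ends up indexed under "http://xyz/index.html".
    System.out.println(toWgetUri("http/xyz/index.html"));
  }
}
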
diff --git a/connectors/filesystem/connector/src/main/native2ascii/org/apache/manifoldcf/agents/output/filesystem/common_en_US.properties b/connectors/filesystem/connector/src/main/native2ascii/org/apache/manifoldcf/agents/output/filesystem/common_en_US.properties
new file mode 100644
index 0000000..8564f43
--- /dev/null
+++ b/connectors/filesystem/connector/src/main/native2ascii/org/apache/manifoldcf/agents/output/filesystem/common_en_US.properties
@@ -0,0 +1,18 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+FileConnector.PathTabName=Output Path
+FileConnector.RootPath=Root path:
+FileConnector.RootPathCannotBeNull=Root path cannot be null
diff --git a/connectors/filesystem/connector/src/main/native2ascii/org/apache/manifoldcf/agents/output/filesystem/common_ja_JP.properties b/connectors/filesystem/connector/src/main/native2ascii/org/apache/manifoldcf/agents/output/filesystem/common_ja_JP.properties
new file mode 100644
index 0000000..006a450
--- /dev/null
+++ b/connectors/filesystem/connector/src/main/native2ascii/org/apache/manifoldcf/agents/output/filesystem/common_ja_JP.properties
@@ -0,0 +1,18 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+FileConnector.PathTabName=出力パス
+FileConnector.RootPath=ルートパス
+FileConnector.RootPathCannotBeNull=Root path cannot be null
diff --git a/connectors/filesystem/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/filesystem/common_en_US.properties b/connectors/filesystem/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/filesystem/common_en_US.properties
index cd889a1..e6f15be 100644
--- a/connectors/filesystem/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/filesystem/common_en_US.properties
+++ b/connectors/filesystem/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/filesystem/common_en_US.properties
@@ -13,9 +13,13 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-FileConnector.Paths=Paths
-FileConnector.Paths2=Paths:
+FileConnector.Paths=Repository Paths
+FileConnector.Paths2=Repository Paths:
 FileConnector.RootPath=Root path
+FileConnector.ConvertToURI=Convert path to URI?
+FileConnector.ConvertToURIExample= (e.g. http/xyz/index.html => http://xyz/index.html)
+FileConnector.Yes=Yes
+FileConnector.No=No
 FileConnector.Rules=Rules
 FileConnector.Delete=Delete
 FileConnector.DeletePath=Delete path #
@@ -35,4 +39,3 @@
 FileConnector.DeletePath=Delete path #
 FileConnector.AddNewMatchForPath=Add new match for path #
 FileConnector.AddNewPath=Add new path
-
diff --git a/connectors/filesystem/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/filesystem/common_ja_JP.properties b/connectors/filesystem/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/filesystem/common_ja_JP.properties
index bedd395..79ec2ba 100644
--- a/connectors/filesystem/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/filesystem/common_ja_JP.properties
+++ b/connectors/filesystem/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/filesystem/common_ja_JP.properties
@@ -13,9 +13,13 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-FileConnector.Paths=パス
-FileConnector.Paths2=パス:
+FileConnector.Paths=リポジトリパス
+FileConnector.Paths2=リポジトリパス:
 FileConnector.RootPath=ルートパス
+FileConnector.ConvertToURI=Convert path to URI?
+FileConnector.ConvertToURIExample= 例) http/xyz/index.html => http://xyz/index.html
+FileConnector.Yes=Yes
+FileConnector.No=No
 FileConnector.Rules=ルール
 FileConnector.Delete=削除
 FileConnector.DeletePath=パスを削除 #
diff --git a/connectors/filesystem/connector/src/main/resources/org/apache/manifoldcf/agents/output/filesystem/editConfiguration.html b/connectors/filesystem/connector/src/main/resources/org/apache/manifoldcf/agents/output/filesystem/editConfiguration.html
new file mode 100644
index 0000000..bd1c7e4
--- /dev/null
+++ b/connectors/filesystem/connector/src/main/resources/org/apache/manifoldcf/agents/output/filesystem/editConfiguration.html
@@ -0,0 +1,16 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
diff --git a/connectors/filesystem/connector/src/main/resources/org/apache/manifoldcf/agents/output/filesystem/editConfiguration.js b/connectors/filesystem/connector/src/main/resources/org/apache/manifoldcf/agents/output/filesystem/editConfiguration.js
new file mode 100644
index 0000000..f37efd4
--- /dev/null
+++ b/connectors/filesystem/connector/src/main/resources/org/apache/manifoldcf/agents/output/filesystem/editConfiguration.js
@@ -0,0 +1,21 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<script type="text/javascript">
+<!--
+//-->
+</script>
\ No newline at end of file
diff --git a/connectors/filesystem/connector/src/main/resources/org/apache/manifoldcf/agents/output/filesystem/editSpecification.html b/connectors/filesystem/connector/src/main/resources/org/apache/manifoldcf/agents/output/filesystem/editSpecification.html
new file mode 100644
index 0000000..ca2a5e3
--- /dev/null
+++ b/connectors/filesystem/connector/src/main/resources/org/apache/manifoldcf/agents/output/filesystem/editSpecification.html
@@ -0,0 +1,32 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+#if($TABNAME == $ResourceBundle.getString('FileConnector.PathTabName'))
+
+<table class="displaytable">
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('FileConnector.RootPath'))</nobr></td>
+    <td class="value"><input type="text" name="rootpath" size="64" value="$Encoder.attributeEscape($ROOTPATH)" /></td>
+  </tr>
+</table>
+
+#else
+
+<input type="hidden" name="rootpath" value="$Encoder.attributeEscape($ROOTPATH)" />
+
+#end
diff --git a/connectors/filesystem/connector/src/main/resources/org/apache/manifoldcf/agents/output/filesystem/editSpecification.js b/connectors/filesystem/connector/src/main/resources/org/apache/manifoldcf/agents/output/filesystem/editSpecification.js
new file mode 100644
index 0000000..9ce70ec
--- /dev/null
+++ b/connectors/filesystem/connector/src/main/resources/org/apache/manifoldcf/agents/output/filesystem/editSpecification.js
@@ -0,0 +1,32 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<script type="text/javascript">
+<!--
+function checkOutputSpecificationForSave()
+{
+  if (editjob.rootpath.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('FileConnector.RootPathCannotBeNull'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('FileConnector.PathTabName'))");
+    editjob.rootpath.focus();
+    return false;
+  }
+  return true;
+}
+//-->
+</script>
\ No newline at end of file
diff --git a/connectors/filesystem/connector/src/main/resources/org/apache/manifoldcf/agents/output/filesystem/viewConfiguration.html b/connectors/filesystem/connector/src/main/resources/org/apache/manifoldcf/agents/output/filesystem/viewConfiguration.html
new file mode 100644
index 0000000..bd1c7e4
--- /dev/null
+++ b/connectors/filesystem/connector/src/main/resources/org/apache/manifoldcf/agents/output/filesystem/viewConfiguration.html
@@ -0,0 +1,16 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
diff --git a/connectors/filesystem/connector/src/main/resources/org/apache/manifoldcf/agents/output/filesystem/viewSpecification.html b/connectors/filesystem/connector/src/main/resources/org/apache/manifoldcf/agents/output/filesystem/viewSpecification.html
new file mode 100644
index 0000000..94d26e8
--- /dev/null
+++ b/connectors/filesystem/connector/src/main/resources/org/apache/manifoldcf/agents/output/filesystem/viewSpecification.html
@@ -0,0 +1,23 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<table class="displaytable">
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('FileConnector.RootPath'))</nobr></td>
+    <td class="value">$Encoder.bodyEscape($ROOTPATH)</td>
+  </tr>
+</table>
\ No newline at end of file
diff --git a/connectors/filesystem/connector/src/test/java/org/apache/manifoldcf/agents/output/filesystem/BaseDerby.java b/connectors/filesystem/connector/src/test/java/org/apache/manifoldcf/agents/output/filesystem/BaseDerby.java
new file mode 100644
index 0000000..565e677
--- /dev/null
+++ b/connectors/filesystem/connector/src/test/java/org/apache/manifoldcf/agents/output/filesystem/BaseDerby.java
@@ -0,0 +1,44 @@
+/* $Id: TestBase.java 988245 2010-08-23 18:39:35Z kwright $ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.agents.output.filesystem;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+import org.apache.manifoldcf.crawler.interfaces.*;
+import org.apache.manifoldcf.crawler.system.ManifoldCF;
+
+import java.io.*;
+import java.util.*;
+import org.junit.*;
+
+/** This is a testing base class that is responsible for setting up/tearing down the agents framework. */
+public class BaseDerby extends org.apache.manifoldcf.crawler.tests.ConnectorBaseDerby
+{
+  
+  protected String[] getConnectorNames()
+  {
+    return new String[]{"File Connector"};
+  }
+  
+  protected String[] getConnectorClasses()
+  {
+    return new String[]{"org.apache.manifoldcf.crawler.connectors.filesystem.FileConnector"};
+  }
+
+}
diff --git a/connectors/filesystem/connector/src/test/java/org/apache/manifoldcf/agents/output/filesystem/BaseHSQLDB.java b/connectors/filesystem/connector/src/test/java/org/apache/manifoldcf/agents/output/filesystem/BaseHSQLDB.java
new file mode 100644
index 0000000..3855190
--- /dev/null
+++ b/connectors/filesystem/connector/src/test/java/org/apache/manifoldcf/agents/output/filesystem/BaseHSQLDB.java
@@ -0,0 +1,44 @@
+/* $Id: BaseHSQLDB.java 1147086 2011-07-15 10:58:30Z kwright $ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.agents.output.filesystem;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+import org.apache.manifoldcf.crawler.interfaces.*;
+import org.apache.manifoldcf.crawler.system.ManifoldCF;
+
+import java.io.*;
+import java.util.*;
+import org.junit.*;
+
+/** This is a testing base class that is responsible for setting up/tearing down the agents framework. */
+public class BaseHSQLDB extends org.apache.manifoldcf.crawler.tests.ConnectorBaseHSQLDB
+{
+  
+  protected String[] getConnectorNames()
+  {
+    return new String[]{"File Connector"};
+  }
+  
+  protected String[] getConnectorClasses()
+  {
+    return new String[]{"org.apache.manifoldcf.crawler.connectors.filesystem.FileConnector"};
+  }
+
+}
diff --git a/connectors/filesystem/connector/src/test/java/org/apache/manifoldcf/agents/output/filesystem/BaseMySQL.java b/connectors/filesystem/connector/src/test/java/org/apache/manifoldcf/agents/output/filesystem/BaseMySQL.java
new file mode 100644
index 0000000..eabd07a
--- /dev/null
+++ b/connectors/filesystem/connector/src/test/java/org/apache/manifoldcf/agents/output/filesystem/BaseMySQL.java
@@ -0,0 +1,44 @@
+/* $Id: BaseMySQL.java 1221585 2011-12-21 03:10:03Z kwright $ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.agents.output.filesystem;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+import org.apache.manifoldcf.crawler.interfaces.*;
+import org.apache.manifoldcf.crawler.system.ManifoldCF;
+
+import java.io.*;
+import java.util.*;
+import org.junit.*;
+
+/** This is a testing base class that is responsible for setting up/tearing down the agents framework. */
+public class BaseMySQL extends org.apache.manifoldcf.crawler.tests.ConnectorBaseMySQL
+{
+  
+  protected String[] getConnectorNames()
+  {
+    return new String[]{"File Connector"};
+  }
+  
+  protected String[] getConnectorClasses()
+  {
+    return new String[]{"org.apache.manifoldcf.crawler.connectors.filesystem.FileConnector"};
+  }
+
+}
diff --git a/connectors/filesystem/connector/src/test/java/org/apache/manifoldcf/agents/output/filesystem/BasePostgresql.java b/connectors/filesystem/connector/src/test/java/org/apache/manifoldcf/agents/output/filesystem/BasePostgresql.java
new file mode 100644
index 0000000..d8dfb16
--- /dev/null
+++ b/connectors/filesystem/connector/src/test/java/org/apache/manifoldcf/agents/output/filesystem/BasePostgresql.java
@@ -0,0 +1,44 @@
+/* $Id: TestBase.java 988245 2010-08-23 18:39:35Z kwright $ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.agents.output.filesystem;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+import org.apache.manifoldcf.crawler.interfaces.*;
+import org.apache.manifoldcf.crawler.system.ManifoldCF;
+
+import java.io.*;
+import java.util.*;
+import org.junit.*;
+
+/** This is a testing base class that is responsible for setting up/tearing down the agents framework. */
+public class BasePostgresql extends org.apache.manifoldcf.crawler.tests.ConnectorBasePostgresql
+{
+  
+  protected String[] getConnectorNames()
+  {
+    return new String[]{"File Connector"};
+  }
+  
+  protected String[] getConnectorClasses()
+  {
+    return new String[]{"org.apache.manifoldcf.crawler.connectors.filesystem.FileConnector"};
+  }
+
+}
diff --git a/connectors/filesystem/connector/src/test/java/org/apache/manifoldcf/agents/output/filesystem/SanityDerbyTest.java b/connectors/filesystem/connector/src/test/java/org/apache/manifoldcf/agents/output/filesystem/SanityDerbyTest.java
new file mode 100644
index 0000000..7694499
--- /dev/null
+++ b/connectors/filesystem/connector/src/test/java/org/apache/manifoldcf/agents/output/filesystem/SanityDerbyTest.java
@@ -0,0 +1,42 @@
+/* $Id: Sanity.java 988245 2010-08-23 18:39:35Z kwright $ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.agents.output.filesystem;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+import org.apache.manifoldcf.crawler.interfaces.*;
+import org.apache.manifoldcf.crawler.system.ManifoldCF;
+
+import java.io.*;
+import java.util.*;
+import org.junit.*;
+
+/** This is a very basic sanity check */
+public class SanityDerbyTest extends BaseDerby
+{
+  
+  @Test
+  public void sanityCheck()
+    throws Exception
+  {
+    // If we get this far, it must mean that the setup was successful, which is all that I'm shooting for in this test.
+  }
+  
+
+}
diff --git a/connectors/filesystem/connector/src/test/java/org/apache/manifoldcf/agents/output/filesystem/SanityHSQLDBTest.java b/connectors/filesystem/connector/src/test/java/org/apache/manifoldcf/agents/output/filesystem/SanityHSQLDBTest.java
new file mode 100644
index 0000000..af6bb79
--- /dev/null
+++ b/connectors/filesystem/connector/src/test/java/org/apache/manifoldcf/agents/output/filesystem/SanityHSQLDBTest.java
@@ -0,0 +1,42 @@
+/* $Id: SanityHSQLDBTest.java 1147086 2011-07-15 10:58:30Z kwright $ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.agents.output.filesystem;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+import org.apache.manifoldcf.crawler.interfaces.*;
+import org.apache.manifoldcf.crawler.system.ManifoldCF;
+
+import java.io.*;
+import java.util.*;
+import org.junit.*;
+
+/** This is a very basic sanity check */
+public class SanityHSQLDBTest extends BaseHSQLDB
+{
+  
+  @Test
+  public void sanityCheck()
+    throws Exception
+  {
+    // If we get this far, it must mean that the setup was successful, which is all that I'm shooting for in this test.
+  }
+  
+
+}
diff --git a/connectors/filesystem/connector/src/test/java/org/apache/manifoldcf/agents/output/filesystem/SanityMySQLTest.java b/connectors/filesystem/connector/src/test/java/org/apache/manifoldcf/agents/output/filesystem/SanityMySQLTest.java
new file mode 100644
index 0000000..cd08b3e
--- /dev/null
+++ b/connectors/filesystem/connector/src/test/java/org/apache/manifoldcf/agents/output/filesystem/SanityMySQLTest.java
@@ -0,0 +1,42 @@
+/* $Id: SanityMySQLTest.java 1221585 2011-12-21 03:10:03Z kwright $ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.agents.output.filesystem;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+import org.apache.manifoldcf.crawler.interfaces.*;
+import org.apache.manifoldcf.crawler.system.ManifoldCF;
+
+import java.io.*;
+import java.util.*;
+import org.junit.*;
+
+/** This is a very basic sanity check */
+public class SanityMySQLTest extends BaseMySQL
+{
+  
+  @Test
+  public void sanityCheck()
+    throws Exception
+  {
+    // If we get this far, it must mean that the setup was successful, which is all that I'm shooting for in this test.
+  }
+  
+
+}
diff --git a/connectors/filesystem/connector/src/test/java/org/apache/manifoldcf/agents/output/filesystem/SanityPostgresqlTest.java b/connectors/filesystem/connector/src/test/java/org/apache/manifoldcf/agents/output/filesystem/SanityPostgresqlTest.java
new file mode 100644
index 0000000..1803f44
--- /dev/null
+++ b/connectors/filesystem/connector/src/test/java/org/apache/manifoldcf/agents/output/filesystem/SanityPostgresqlTest.java
@@ -0,0 +1,42 @@
+/* $Id: Sanity.java 988245 2010-08-23 18:39:35Z kwright $ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.agents.output.filesystem;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+import org.apache.manifoldcf.crawler.interfaces.*;
+import org.apache.manifoldcf.crawler.system.ManifoldCF;
+
+import java.io.*;
+import java.util.*;
+import org.junit.*;
+
+/** This is a very basic sanity check */
+public class SanityPostgresqlTest extends BasePostgresql
+{
+  
+  @Test
+  public void sanityCheck()
+    throws Exception
+  {
+    // If we get this far, it must mean that the setup was successful, which is all that I'm shooting for in this test.
+  }
+  
+
+}
diff --git a/connectors/filesystem/pom.xml b/connectors/filesystem/pom.xml
index 6a9c749..09b7519 100644
--- a/connectors/filesystem/pom.xml
+++ b/connectors/filesystem/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.manifoldcf</groupId>
     <artifactId>mcf-connectors</artifactId>
-    <version>1.2-SNAPSHOT</version>
+    <version>1.5-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
 
@@ -30,14 +30,29 @@
   <build>
     <sourceDirectory>${basedir}/connector/src/main/java</sourceDirectory>
     <testSourceDirectory>${basedir}/connector/src/test/java</testSourceDirectory>
+    <resources>
+      <resource>
+        <directory>${basedir}/connector/src/main/resources</directory>
+        <includes>
+          <include>**/*.html</include>
+          <include>**/*.js</include>
+        </includes>
+      </resource>
+      <resource>
+        <directory>${basedir}/connector/src/main/native2ascii</directory>
+        <includes>
+          <include>**/*.properties</include>
+        </includes>
+      </resource>
+    </resources> 
+
     <plugins>
       <plugin>
         <groupId>org.codehaus.mojo</groupId>
         <artifactId>native2ascii-maven-plugin</artifactId>
-        <version>1.0-alpha-1</version>
+        <version>1.0-beta-1</version>
         <configuration>
-            <dest>target/classes</dest>
-            <src>connector/src/main/native2ascii</src>
+            <workDir>target/classes</workDir>
         </configuration>
         <executions>
             <execution>
@@ -47,7 +62,9 @@
                 </goals>
                 <configuration>
                     <encoding>UTF8</encoding>
-                    <includes>**/*.properties</includes>
+                    <includes>
+                      <include>**/*.properties</include>
+                    </includes>
                 </configuration>
             </execution>
         </executions>
diff --git a/connectors/generic/API.txt b/connectors/generic/API.txt
new file mode 100644
index 0000000..ac6c24c
--- /dev/null
+++ b/connectors/generic/API.txt
@@ -0,0 +1,71 @@
+The API should be implemented as a web page (entry point) returning results based on the provided GET params. The API can be secured with HTTP basic authentication.
+There are 4 actions:
+- check
+- seed
+- items
+- item
+
+The action is passed as the "action" GET param to the entrypoint.
+
+
+----------------------------------------------------------------------------------------------------
+[entrypoint]?action=check
+----------------------------------------------------------------------------------------------------
+should return HTTP status code 200, indicating that the entrypoint is working properly.
+
+----------------------------------------------------------------------------------------------------
+[entrypoint]?action=seed&startDate=YYYY-MM-DDTHH:mm:ssZ&endDate=YYYY-MM-DDTHH:mm:ssZ
+----------------------------------------------------------------------------------------------------
+parameters:
+startDate - the start of the time frame which should be applied to the returned seeds. On the first run this parameter is not provided, meaning that all documents should be returned.
+endDate - the end of the time frame. Always provided.
+
+The startDate and endDate parameters are encoded as YYYY-MM-DD'T'HH:mm:ss'Z'. The result should be valid XML of the form:
+<seeds>
+   <seed id="document_id_1" />
+   <seed id="document_id_2" />
+   ...
+</seeds>
+
+The "id" attributes are required.
+
+----------------------------------------------------------------------------------------------------
+[entrypoint]?action=items&id[]=document_id_1&id[]=document_id_2
+----------------------------------------------------------------------------------------------------
+parameters:
+id[] - array of document IDs that should be returned
+
+The result should be valid XML of the form:
+<items>
+   <item id="document_id_1">
+      <url>[http://document_uri]</url>
+      <version>[document_version]</version>
+      <created>2013-11-11T21:00:00Z</created>
+      <updated>2013-11-11T21:00:00Z</updated>
+      <filename>filename.ext</filename>
+      <mimetype>mime/type</mimetype>
+      <metadata>
+         <meta name="meta_name_1">meta_value_1</meta>
+         <meta name="meta_name_2">meta_value_2</meta>
+         ...
+      </metadata>
+      <auth>
+         <token>auth_token_1</token>
+         <token>auth_token_2</token>
+         ...
+      </auth>
+      <content>Document content</content>
+   </item>
+   ...
+</items>
+
+id, url and version are required; the rest is optional. If the "auth" tag is provided, the document will be treated as non-public with the defined access tokens; if it is omitted, the document will be public.
+If the content tag is omitted, the connector will ask for the document content with a separate "action=item" API call.
+
+----------------------------------------------------------------------------------------------------
+[entrypoint]?action=item&id=document_id
+----------------------------------------------------------------------------------------------------
+parameters:
+id - requested document ID
+
+The result should be the document content. It does not have to be XML - you may return binary data (PDF, DOC, etc.) that represents the document.
\ No newline at end of file
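
To make the contract above concrete, here is a minimal servlet-style sketch of an entry point that answers the "check" and "seed" actions. It is illustrative only: the class name, the findChangedDocumentIds() helper, and the hard-coded ids are hypothetical and not part of this patch; a real entry point would also XML-escape the ids and implement the "items" and "item" actions.

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class GenericEntryPointSketch extends HttpServlet {

  @Override
  protected void doGet(HttpServletRequest req, HttpServletResponse resp)
    throws ServletException, IOException {
    String action = req.getParameter("action");
    if ("check".equals(action)) {
      // action=check: a plain 200 response is all the connector needs.
      resp.setStatus(HttpServletResponse.SC_OK);
    } else if ("seed".equals(action)) {
      // action=seed: list every document changed between startDate and endDate.
      String startDate = req.getParameter("startDate"); // absent on the first run
      String endDate = req.getParameter("endDate");     // always provided
      resp.setContentType("text/xml");
      PrintWriter out = resp.getWriter();
      out.println("<seeds>");
      for (String id : findChangedDocumentIds(startDate, endDate)) {
        out.println("   <seed id=\"" + id + "\" />");
      }
      out.println("</seeds>");
    } else {
      resp.sendError(HttpServletResponse.SC_BAD_REQUEST, "Unknown action: " + action);
    }
  }

  // Hypothetical lookup; a real implementation would query its own repository here.
  private java.util.List<String> findChangedDocumentIds(String startDate, String endDate) {
    return java.util.Arrays.asList("document_id_1", "document_id_2");
  }
}
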
diff --git a/connectors/generic/build.xml b/connectors/generic/build.xml
new file mode 100644
index 0000000..03ad71e
--- /dev/null
+++ b/connectors/generic/build.xml
@@ -0,0 +1,38 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<project name="generic" default="all">
+
+    <import file="../connector-build.xml"/>
+
+    <path id="connector-classpath">
+        <path refid="mcf-connector-build.connector-classpath"/>
+        <fileset dir="../../lib">
+            <include name="jaxb-impl*.jar"/>
+        </fileset>
+    </path>
+
+    <target name="lib" depends="mcf-connector-build.lib,precompile-check" if="canBuild">
+        <mkdir dir="dist/lib"/>
+        <copy todir="dist/lib">
+            <fileset dir="../../lib">
+                <include name="jaxb-impl*.jar"/>
+            </fileset>
+        </copy>
+    </target>
+
+</project>
diff --git a/connectors/generic/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/generic/GenericAuthority.java b/connectors/generic/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/generic/GenericAuthority.java
new file mode 100644
index 0000000..cffc4cc
--- /dev/null
+++ b/connectors/generic/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/generic/GenericAuthority.java
@@ -0,0 +1,705 @@
+/*
+ * Copyright 2013 The Apache Software Foundation.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.manifoldcf.authorities.authorities.generic;
+
+import java.io.IOException;
+import java.io.UnsupportedEncodingException;
+import java.net.MalformedURLException;
+import java.net.URL;
+import java.net.URLEncoder;
+import java.util.List;
+import java.util.Locale;
+import javax.xml.bind.JAXBContext;
+import javax.xml.bind.JAXBException;
+import javax.xml.bind.Unmarshaller;
+import org.apache.http.HttpException;
+import org.apache.http.HttpRequest;
+import org.apache.http.HttpRequestInterceptor;
+import org.apache.http.HttpResponse;
+import org.apache.http.HttpStatus;
+import org.apache.http.auth.AuthScope;
+import org.apache.http.auth.Credentials;
+import org.apache.http.auth.UsernamePasswordCredentials;
+import org.apache.http.client.HttpClient;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.impl.auth.BasicScheme;
+import org.apache.http.impl.client.DefaultHttpClient;
+import org.apache.http.params.HttpConnectionParams;
+import org.apache.http.protocol.HttpContext;
+import org.apache.http.util.EntityUtils;
+import org.apache.manifoldcf.authorities.interfaces.AuthorizationResponse;
+import org.apache.manifoldcf.authorities.system.ManifoldCF;
+import org.apache.manifoldcf.core.interfaces.CacheManagerFactory;
+import org.apache.manifoldcf.core.interfaces.ConfigParams;
+import org.apache.manifoldcf.core.interfaces.ICacheCreateHandle;
+import org.apache.manifoldcf.core.interfaces.ICacheDescription;
+import org.apache.manifoldcf.core.interfaces.ICacheHandle;
+import org.apache.manifoldcf.core.interfaces.ICacheManager;
+import org.apache.manifoldcf.core.interfaces.IHTTPOutput;
+import org.apache.manifoldcf.core.interfaces.IPostParameters;
+import org.apache.manifoldcf.core.interfaces.IThreadContext;
+import org.apache.manifoldcf.core.interfaces.ManifoldCFException;
+import org.apache.manifoldcf.core.interfaces.StringSet;
+import org.apache.manifoldcf.crawler.connectors.generic.api.Auth;
+import org.apache.manifoldcf.ui.util.Encoder;
+
+/**
+ *
+ * @author krycek
+ */
+public class GenericAuthority extends org.apache.manifoldcf.authorities.authorities.BaseAuthorityConnector {
+
+  public static final String _rcsid = "@(#)$Id: GenericAuthority.java 1496653 2013-06-25 22:05:04Z mlizewski $";
+
+  /**
+   * This is the global deny token. This should be ingested
+   * with all documents.
+   */
+  private static final String globalDenyToken = "DEAD_AUTHORITY";
+
+  private static final AuthorizationResponse unreachableResponse = new AuthorizationResponse(new String[]{globalDenyToken},
+    AuthorizationResponse.RESPONSE_UNREACHABLE);
+
+  private static final AuthorizationResponse userNotFoundResponse = new AuthorizationResponse(new String[]{globalDenyToken},
+    AuthorizationResponse.RESPONSE_USERNOTFOUND);
+
+  private final static String ACTION_PARAM_NAME = "action";
+
+  private final static String ACTION_AUTH = "auth";
+
+  private final static String ACTION_CHECK = "check";
+
+  private String genericLogin = null;
+
+  private String genericPassword = null;
+
+  private String genericEntryPoint = null;
+
+  private int connectionTimeoutMillis = 60 * 1000;
+
+  private int socketTimeoutMillis = 30 * 60 * 1000;
+
+  private long responseLifetime = 60000L; //60sec
+
+  private int LRUsize = 1000;
+
+  private DefaultHttpClient client = null;
+
+  private long sessionExpirationTime = -1L;
+
+  /**
+   * Cache manager.
+   */
+  private ICacheManager cacheManager = null;
+
+  /**
+   * Constructor.
+   */
+  public GenericAuthority() {
+  }
+
+  /**
+   * Set thread context.
+   */
+  @Override
+  public void setThreadContext(IThreadContext tc)
+    throws ManifoldCFException {
+    super.setThreadContext(tc);
+    cacheManager = CacheManagerFactory.make(tc);
+  }
+
+  /**
+   * Connect. The configuration parameters are included.
+   *
+   * @param configParams are the configuration parameters for this connection.
+   */
+  @Override
+  public void connect(ConfigParams configParams) {
+    super.connect(configParams);
+    genericEntryPoint = getParam(configParams, "genericEntryPoint", null);
+    genericLogin = getParam(configParams, "genericLogin", null);
+    genericPassword = "";
+    try {
+      genericPassword = ManifoldCF.deobfuscate(getParam(configParams, "genericPassword", ""));
+    } catch (ManifoldCFException ignore) {
+    }
+    connectionTimeoutMillis = Integer.parseInt(getParam(configParams, "genericConnectionTimeout", "60000"));
+    if (connectionTimeoutMillis == 0) {
+      connectionTimeoutMillis = 60000;
+    }
+    socketTimeoutMillis = Integer.parseInt(getParam(configParams, "genericSocketTimeout", "1800000"));
+    if (socketTimeoutMillis == 0) {
+      socketTimeoutMillis = 1800000;
+    }
+    responseLifetime = Long.parseLong(getParam(configParams, "genericResponseLifetime", "60000"));
+    if (responseLifetime == 0) {
+      responseLifetime = 60000;
+    }
+  }
+
+  protected DefaultHttpClient getClient() throws ManifoldCFException {
+    synchronized (this) {
+      if (client != null) {
+        return client;
+      }
+      DefaultHttpClient cl = new DefaultHttpClient();
+      if (genericLogin != null && !genericLogin.isEmpty()) {
+        try {
+          URL url = new URL(genericEntryPoint);
+          Credentials credentials = new UsernamePasswordCredentials(genericLogin, genericPassword);
+          cl.getCredentialsProvider().setCredentials(new AuthScope(url.getHost(), url.getPort() > 0 ? url.getPort() : 80, AuthScope.ANY_REALM), credentials);
+          cl.addRequestInterceptor(new PreemptiveAuth(credentials), 0);
+        } catch (MalformedURLException ex) {
+          client = null;
+          sessionExpirationTime = -1L;
+          throw new ManifoldCFException("getClient exception: " + ex.getMessage(), ex);
+        }
+      }
+      HttpConnectionParams.setConnectionTimeout(cl.getParams(), connectionTimeoutMillis);
+      HttpConnectionParams.setSoTimeout(cl.getParams(), socketTimeoutMillis);
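+      // Treat the cached client as stale after five minutes; poll() shuts it down once this deadline has passed.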
+      sessionExpirationTime = System.currentTimeMillis() + 300000L;
+      client = cl;
+      return cl;
+    }
+  }
+
+  /**
+   * Poll. The connection should be closed if it has been idle for too long.
+   */
+  @Override
+  public void poll()
+    throws ManifoldCFException {
+    if (client != null && System.currentTimeMillis() > sessionExpirationTime) {
+      disconnectSession();
+    }
+    super.poll();
+  }
+
+  /** This method is called to assess whether this connector instance should
+  * actually be counted as being connected.
+  *@return true if the connector instance is actually connected.
+  */
+  @Override
+  public boolean isConnected()
+  {
+    return client != null;
+  }
+
+  /**
+   * Check connection for sanity.
+   */
+  @Override
+  public String check()
+    throws ManifoldCFException {
+    HttpClient client = getClient();
+    try {
+      CheckThread checkThread = new CheckThread(client, genericEntryPoint + "?" + ACTION_PARAM_NAME + "=" + ACTION_CHECK);
+      checkThread.start();
+      checkThread.join();
+      if (checkThread.getException() != null) {
+        Throwable thr = checkThread.getException();
+        return "Check exception: " + thr.getMessage();
+      }
+      return checkThread.getResult();
+    } catch (InterruptedException ex) {
+      throw new ManifoldCFException(ex.getMessage(), ex, ManifoldCFException.INTERRUPTED);
+    }
+  }
+
+  /**
+   * Close the connection. Call this before discarding the authority connector.
+   */
+  @Override
+  public void disconnect()
+    throws ManifoldCFException {
+    disconnectSession();
+    super.disconnect();
+
+    // Zero out all the stuff that we want to be sure we don't use again
+    genericEntryPoint = null;
+    genericLogin = null;
+    genericPassword = null;
+  }
+
+  protected String createCacheConnectionString() {
+    StringBuilder sb = new StringBuilder();
+    sb.append(genericEntryPoint).append("#").append(genericLogin);
+    return sb.toString();
+  }
+
+  /**
+   * Obtain the access tokens for a given user name.
+   *
+   * @param userName is the user name or identifier.
+   * @return the response tokens (according to the current authority). (Should
+   * throw an exception only when a condition cannot be properly described
+   * within the authorization response object.)
+   */
+  @Override
+  public AuthorizationResponse getAuthorizationResponse(String userName)
+    throws ManifoldCFException {
+
+    HttpClient client = getClient();
+    // Construct a cache description object
+    ICacheDescription objectDescription = new GenericAuthorizationResponseDescription(userName,
+      createCacheConnectionString(), this.responseLifetime, this.LRUsize);
+
+    // Enter the cache
+    ICacheHandle ch = cacheManager.enterCache(new ICacheDescription[]{objectDescription}, null, null);
+    try {
+      ICacheCreateHandle createHandle = cacheManager.enterCreateSection(ch);
+      try {
+        // Lookup the object
+        AuthorizationResponse response = (AuthorizationResponse) cacheManager.lookupObject(createHandle, objectDescription);
+        if (response != null) {
+          return response;
+        }
+        // Create the object.
+        response = getAuthorizationResponseUncached(client, userName);
+        // Save it in the cache
+        cacheManager.saveObject(createHandle, objectDescription, response);
+        // And return it...
+        return response;
+      } finally {
+        cacheManager.leaveCreateSection(createHandle);
+      }
+    } finally {
+      cacheManager.leaveCache(ch);
+    }
+  }
+
+  protected AuthorizationResponse getAuthorizationResponseUncached(HttpClient client, String userName)
+    throws ManifoldCFException {
+    StringBuilder url = new StringBuilder(genericEntryPoint);
+    try {
+      url.append("?").append(ACTION_PARAM_NAME).append("=").append(ACTION_AUTH);
+      url.append("&username=").append(URLEncoder.encode(userName, "UTF-8"));
+    } catch (UnsupportedEncodingException ex) {
+      throw new ManifoldCFException("getAuthorizationResponseUncached error: " + ex.getMessage(), ex);
+    }
+
+    try {
+      FetchTokensThread t = new FetchTokensThread(client, url.toString());
+      t.start();
+      t.join();
+      if (t.getException() != null) {
+        return unreachableResponse;
+      }
+      Auth auth = t.getAuthResponse();
+      if (auth == null) {
+        return userNotFoundResponse;
+      }
+      if (!auth.exists) {
+        return userNotFoundResponse;
+      }
+      if (auth.tokens == null) {
+        return new AuthorizationResponse(new String[]{}, AuthorizationResponse.RESPONSE_OK);
+      }
+
+      String[] tokens = new String[auth.tokens.size()];
+      int k = 0;
+      while (k < tokens.length) {
+        tokens[k] = (String) auth.tokens.get(k);
+        k++;
+      }
+
+      return new AuthorizationResponse(tokens, AuthorizationResponse.RESPONSE_OK);
+    } catch (InterruptedException ex) {
+      throw new ManifoldCFException(ex.getMessage(), ex, ManifoldCFException.INTERRUPTED);
+    }
+  }
+
+  /**
+   * Obtain the default access tokens for a given user name.
+   *
+   * @param userName is the user name or identifier.
+   * @return the default response tokens, presuming that the connect method
+   * fails.
+   */
+  @Override
+  public AuthorizationResponse getDefaultAuthorizationResponse(String userName) {
+    // The default response if the getConnection method fails
+    return unreachableResponse;
+  }
+
+  // UI support methods.
+  //
+  // These support methods are involved in setting up authority connection configuration information. The configuration methods cannot assume that the
+  // current authority object is connected.  That is why they receive a thread context argument.
+  @Override
+  public void outputConfigurationHeader(IThreadContext threadContext, IHTTPOutput out,
+    Locale locale, ConfigParams parameters, List<String> tabsArray)
+    throws ManifoldCFException, IOException {
+    tabsArray.add(Messages.getString(locale, "generic.EntryPoint"));
+
+    out.print(
+      "<script type=\"text/javascript\">\n"
+      + "<!--\n"
+      + "function checkConfig() {\n"
+      + "  return true;\n"
+      + "}\n"
+      + "\n"
+      + "function checkConfigForSave() {\n"
+      + "  if (editconnection.genericEntryPoint.value == \"\") {\n"
+      + "    alert(\"" + Messages.getBodyJavascriptString(locale, "generic.EntryPointCannotBeBlank") + "\");\n"
+      + "    SelectTab(\"" + Messages.getBodyJavascriptString(locale, "generic.EntryPoint") + "\");\n"
+      + "    editconnection.genericEntryPoint.focus();\n"
+      + "    return false;\n"
+      + "  }\n"
+      + "  return true;\n"
+      + "}\n"
+      + "//-->\n"
+      + "</script>\n");
+  }
+
+  @Override
+  public void outputConfigurationBody(IThreadContext threadContext, IHTTPOutput out,
+    Locale locale, ConfigParams parameters, String tabName)
+    throws ManifoldCFException, IOException {
+
+    String server = getParam(parameters, "genericEntryPoint", "");
+    String login = getParam(parameters, "genericLogin", "");
+    String password = "";
+    try {
+      password = ManifoldCF.deobfuscate(getParam(parameters, "genericPassword", ""));
+    } catch (ManifoldCFException ignore) {
+    }
+    String conTimeout = getParam(parameters, "genericConnectionTimeout", "60000");
+    String soTimeout = getParam(parameters, "genericSocketTimeout", "1800000");
+    String respLifetime = getParam(parameters, "genericResponseLifetime", "60000");
+
+    if (tabName.equals(Messages.getString(locale, "generic.EntryPoint"))) {
+      out.print(
+        "<table class=\"displaytable\">\n"
+        + " <tr><td class=\"separator\" colspan=\"2\"><hr/></td></tr>\n"
+        + " <tr>\n"
+        + "  <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "generic.EntryPointColon") + "</nobr></td>\n"
+        + "  <td class=\"value\"><input type=\"text\" size=\"32\" name=\"genericEntryPoint\" value=\"" + Encoder.attributeEscape(server) + "\"/></td>\n"
+        + " </tr>\n"
+        + " <tr>\n"
+        + "  <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "generic.LoginColon") + "</nobr></td>\n"
+        + "  <td class=\"value\"><input type=\"text\" size=\"32\" name=\"genericLogin\" value=\"" + Encoder.attributeEscape(login) + "\"/></td>\n"
+        + " </tr>\n"
+        + " <tr>\n"
+        + "  <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "generic.PasswordColon") + "</nobr></td>\n"
+        + "  <td class=\"value\"><input type=\"password\" size=\"32\" name=\"genericPassword\" value=\"" + Encoder.attributeEscape(password) + "\"/></td>\n"
+        + " </tr>\n"
+        + " <tr>\n"
+        + "  <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "generic.ConnectionTimeoutColon") + "</nobr></td>\n"
+        + "  <td class=\"value\"><input type=\"text\" size=\"32\" name=\"genericConTimeout\" value=\"" + Encoder.attributeEscape(conTimeout) + "\"/></td>\n"
+        + " </tr>\n"
+        + " <tr>\n"
+        + "  <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "generic.SocketTimeoutColon") + "</nobr></td>\n"
+        + "  <td class=\"value\"><input type=\"text\" size=\"32\" name=\"genericSoTimeout\" value=\"" + Encoder.attributeEscape(soTimeout) + "\"/></td>\n"
+        + " </tr>\n"
+        + " <tr>\n"
+        + "  <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "generic.ResponseLifetimeColon") + "</nobr></td>\n"
+        + "  <td class=\"value\"><input type=\"text\" size=\"32\" name=\"genericResponseLifetime\" value=\"" + Encoder.attributeEscape(respLifetime) + "\"/></td>\n"
+        + " </tr>\n"
+        + "</table>\n");
+    } else {
+      out.print("<input type=\"hidden\" name=\"genericEntryPoint\" value=\"" + Encoder.attributeEscape(server) + "\"/>\n");
+      out.print("<input type=\"hidden\" name=\"genericLogin\" value=\"" + Encoder.attributeEscape(login) + "\"/>\n");
+      out.print("<input type=\"hidden\" name=\"genericPassword\" value=\"" + Encoder.attributeEscape(password) + "\"/>\n");
+      out.print("<input type=\"hidden\" name=\"genericConTimeout\" value=\"" + Encoder.attributeEscape(conTimeout) + "\"/>\n");
+      out.print("<input type=\"hidden\" name=\"genericSoTimeout\" value=\"" + Encoder.attributeEscape(soTimeout) + "\"/>\n");
+      out.print("<input type=\"hidden\" name=\"genericResponseLifetime\" value=\"" + Encoder.attributeEscape(respLifetime) + "\"/>\n");
+    }
+  }
+
+  @Override
+  public String processConfigurationPost(IThreadContext threadContext, IPostParameters variableContext,
+    Locale locale, ConfigParams parameters)
+    throws ManifoldCFException {
+
+    copyParam(variableContext, parameters, "genericLogin");
+    copyParam(variableContext, parameters, "genericEntryPoint");
+    copyParam(variableContext, parameters, "genericConTimeout");
+    copyParam(variableContext, parameters, "genericSoTimeout");
+    copyParam(variableContext, parameters, "genericResponseLifetime");
+
+    String password = variableContext.getParameter("genericPassword");
+    if (password == null) {
+      password = "";
+    }
+    parameters.setParameter("genericPassword", org.apache.manifoldcf.core.system.ManifoldCF.obfuscate(password));
+    return null;
+  }
+
+  @Override
+  public void viewConfiguration(IThreadContext threadContext, IHTTPOutput out,
+    Locale locale, ConfigParams parameters)
+    throws ManifoldCFException, IOException {
+    String login = getParam(parameters, "genericLogin", "");
+    String server = getParam(parameters, "genericEntryPoint", "");
+    String conTimeout = getParam(parameters, "genericConnectionTimeout", "60000");
+    String soTimeout = getParam(parameters, "genericSocketTimeout", "1800000");
+    String respLifetime = getParam(parameters, "genericResponseLifetime", "60000");
+
+    out.print(
+      "<table class=\"displaytable\">\n"
+      + " <tr><td class=\"separator\" colspan=\"2\"><hr/></td></tr>\n"
+      + " <tr>\n"
+      + "  <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "generic.EntryPointColon") + "</nobr></td>\n"
+      + "  <td class=\"value\">" + Encoder.bodyEscape(server) + "</td>\n"
+      + " </tr>\n"
+      + " <tr>\n"
+      + "  <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "generic.LoginColon") + "</nobr></td>\n"
+      + "  <td class=\"value\">" + Encoder.bodyEscape(login) + "</td>\n"
+      + " </tr>\n"
+      + " <tr>\n"
+      + "  <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "generic.PasswordColon") + "</nobr></td>\n"
+      + "  <td class=\"value\">**********</td>\n"
+      + " </tr>\n"
+      + " <tr>\n"
+      + "  <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "generic.ConnectionTimeoutColon") + "</nobr></td>\n"
+      + "  <td class=\"value\">" + Encoder.bodyEscape(conTimeout) + "</td>\n"
+      + " </tr>\n"
+      + " <tr>\n"
+      + "  <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "generic.SocketTimeoutColon") + "</nobr></td>\n"
+      + "  <td class=\"value\">" + Encoder.bodyEscape(soTimeout) + "</td>\n"
+      + " </tr>\n"
+      + " <tr>\n"
+      + "  <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "generic.ResponseLifetimeColon") + "</nobr></td>\n"
+      + "  <td class=\"value\">" + Encoder.bodyEscape(respLifetime) + "</td>\n"
+      + " </tr>\n"
+      + "</table>\n");
+  }
+
+  private String getParam(ConfigParams parameters, String name, String def) {
+    return parameters.getParameter(name) != null ? parameters.getParameter(name) : def;
+  }
+
+  private boolean copyParam(IPostParameters variableContext, ConfigParams parameters, String name) {
+    String val = variableContext.getParameter(name);
+    if (val == null) {
+      return false;
+    }
+    parameters.setParameter(name, val);
+    return true;
+  }
+
+  // Protected methods
+  protected static StringSet emptyStringSet = new StringSet();
+
+  private void disconnectSession() {
+    synchronized (this) {
+      // Guard against a disconnect() that arrives before any client was ever created.
+      if (client != null) {
+        client.getConnectionManager().shutdown();
+        client = null;
+      }
+    }
+  }
+
+  /**
+   * This is the cache object descriptor for cached access tokens from this
+   * connector.
+   */
+  protected class GenericAuthorizationResponseDescription extends org.apache.manifoldcf.core.cachemanager.BaseDescription {
+
+    /**
+     * The user name
+     */
+    protected String userName;
+
+    /**
+     * Connection string (entry point and login) identifying this connection
+     */
+    protected String connectionString;
+
+    /**
+     * The response lifetime
+     */
+    protected long responseLifetime;
+
+    /**
+     * The expiration time
+     */
+    protected long expirationTime = -1;
+
+    /**
+     * Constructor.
+     */
+    public GenericAuthorizationResponseDescription(String userName, String connectionString, long responseLifetime, int LRUsize) {
+      super("LDAPAuthority", LRUsize);
+      this.userName = userName;
+      this.connectionString = connectionString;
+      this.responseLifetime = responseLifetime;
+    }
+
+    /**
+     * Return the invalidation keys for this object.
+     */
+    @Override
+    public StringSet getObjectKeys() {
+      return emptyStringSet;
+    }
+
+    /**
+     * Get the critical section name, used for synchronizing the creation of the
+     * object
+     */
+    @Override
+    public String getCriticalSectionName() {
+      StringBuilder sb = new StringBuilder(getClass().getName());
+      sb.append("-").append(userName).append("-").append(connectionString);
+      return sb.toString();
+    }
+
+    /**
+     * Return the object expiration interval
+     */
+    @Override
+    public long getObjectExpirationTime(long currentTime) {
+      if (expirationTime == -1) {
+        expirationTime = currentTime + responseLifetime;
+      }
+      return expirationTime;
+    }
+
+    @Override
+    public int hashCode() {
+      return userName.hashCode() + connectionString.hashCode();
+    }
+
+    @Override
+    public boolean equals(Object o) {
+      if (!(o instanceof GenericAuthorizationResponseDescription)) {
+        return false;
+      }
+      GenericAuthorizationResponseDescription ard = (GenericAuthorizationResponseDescription) o;
+      if (!ard.userName.equals(userName)) {
+        return false;
+      }
+      if (!ard.connectionString.equals(connectionString)) {
+        return false;
+      }
+      return true;
+    }
+  }
+
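+  /**
+   * Request interceptor that attaches a Basic Authorization header to every outgoing
+   * request, so credentials are presented preemptively rather than after a 401 challenge.
+   */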
+  static class PreemptiveAuth implements HttpRequestInterceptor {
+
+    private Credentials credentials;
+
+    public PreemptiveAuth(Credentials creds) {
+      this.credentials = creds;
+    }
+
+    @Override
+    public void process(final HttpRequest request, final HttpContext context) throws HttpException, IOException {
+      request.addHeader(BasicScheme.authenticate(credentials, "US-ASCII", false));
+    }
+  }
+
+  protected static class CheckThread extends Thread {
+
+    protected HttpClient client;
+
+    protected String url;
+
+    protected Throwable exception = null;
+
+    protected String result = "Unknown";
+
+    public CheckThread(HttpClient client, String url) {
+      super();
+      setDaemon(true);
+      this.client = client;
+      this.url = url;
+    }
+
+    @Override
+    public void run() {
+      HttpGet method = new HttpGet(url);
+      try {
+        HttpResponse response = client.execute(method);
+        try {
+          if (response.getStatusLine().getStatusCode() != HttpStatus.SC_OK) {
+            result = "Connection failed: " + response.getStatusLine().getReasonPhrase();
+            return;
+          }
+          EntityUtils.consume(response.getEntity());
+          result = "Connection OK";
+        } finally {
+          EntityUtils.consume(response.getEntity());
+          method.releaseConnection();
+        }
+      } catch (IOException ex) {
+        exception = ex;
+      }
+    }
+
+    public Throwable getException() {
+      return exception;
+    }
+
+    public String getResult() {
+      return result;
+    }
+  }
+
+  protected static class FetchTokensThread extends Thread {
+
+    protected HttpClient client;
+
+    protected String url;
+
+    protected Throwable exception = null;
+
+    protected Auth auth;
+
+    public FetchTokensThread(HttpClient client, String url) {
+      super();
+      setDaemon(true);
+      this.client = client;
+      this.url = url;
+      this.auth = null;
+    }
+
+    @Override
+    public void run() {
+      try {
+        HttpGet method = new HttpGet(url.toString());
+
+        HttpResponse response = client.execute(method);
+        try {
+          if (response.getStatusLine().getStatusCode() != HttpStatus.SC_OK) {
+            exception = new ManifoldCFException("FetchTokensThread error - interface returned incorrect return code for: " + url + " - " + response.getStatusLine().toString());
+            return;
+          }
+          JAXBContext context;
+          context = JAXBContext.newInstance(Auth.class);
+          Unmarshaller m = context.createUnmarshaller();
+          auth = (Auth) m.unmarshal(response.getEntity().getContent());
+        } catch (JAXBException ex) {
+          exception = ex;
+        } finally {
+          EntityUtils.consume(response.getEntity());
+          method.releaseConnection();
+        }
+      } catch (Exception ex) {
+        exception = ex;
+      }
+    }
+
+    public Throwable getException() {
+      return exception;
+    }
+
+    public Auth getAuthResponse() {
+      return auth;
+    }
+  }
+}
diff --git a/connectors/generic/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/generic/Messages.java b/connectors/generic/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/generic/Messages.java
new file mode 100644
index 0000000..6c60406
--- /dev/null
+++ b/connectors/generic/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/generic/Messages.java
@@ -0,0 +1,119 @@
+/* $Id: Messages.java 1295926 2012-03-01 21:56:27Z kwright $ */
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to You under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.manifoldcf.authorities.authorities.generic;
+
+import java.util.Locale;
+import java.util.Map;
+import org.apache.manifoldcf.core.interfaces.IHTTPOutput;
+import org.apache.manifoldcf.core.interfaces.ManifoldCFException;
+
+public class Messages extends org.apache.manifoldcf.ui.i18n.Messages {
+
+  public static final String DEFAULT_BUNDLE_NAME = "org.apache.manifoldcf.authorities.authorities.generic.common";
+
+  public static final String DEFAULT_PATH_NAME = "org.apache.manifoldcf.authorities.authorities.generic";
+
+  /**
+   * Constructor - do not instantiate
+   */
+  protected Messages() {
+  }
+
+  public static String getString(Locale locale, String messageKey) {
+    return getString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getAttributeString(Locale locale, String messageKey) {
+    return getAttributeString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getBodyString(Locale locale, String messageKey) {
+    return getBodyString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getAttributeJavascriptString(Locale locale, String messageKey) {
+    return getAttributeJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getBodyJavascriptString(Locale locale, String messageKey) {
+    return getBodyJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getString(Locale locale, String messageKey, Object[] args) {
+    return getString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+
+  public static String getAttributeString(Locale locale, String messageKey, Object[] args) {
+    return getAttributeString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+
+  public static String getBodyString(Locale locale, String messageKey, Object[] args) {
+    return getBodyString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+
+  public static String getAttributeJavascriptString(Locale locale, String messageKey, Object[] args) {
+    return getAttributeJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+
+  public static String getBodyJavascriptString(Locale locale, String messageKey, Object[] args) {
+    return getBodyJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+
+  // More general methods which allow bundle names and class loaders to be specified.
+  public static String getString(String bundleName, Locale locale, String messageKey, Object[] args) {
+    return getString(Messages.class, bundleName, locale, messageKey, args);
+  }
+
+  public static String getAttributeString(String bundleName, Locale locale, String messageKey, Object[] args) {
+    return getAttributeString(Messages.class, bundleName, locale, messageKey, args);
+  }
+
+  public static String getBodyString(String bundleName, Locale locale, String messageKey, Object[] args) {
+    return getBodyString(Messages.class, bundleName, locale, messageKey, args);
+  }
+
+  public static String getAttributeJavascriptString(String bundleName, Locale locale, String messageKey, Object[] args) {
+    return getAttributeJavascriptString(Messages.class, bundleName, locale, messageKey, args);
+  }
+
+  public static String getBodyJavascriptString(String bundleName, Locale locale, String messageKey, Object[] args) {
+    return getBodyJavascriptString(Messages.class, bundleName, locale, messageKey, args);
+  }
+
+  // Resource output
+  public static void outputResource(IHTTPOutput output, Locale locale, String resourceKey,
+    Map<String, String> substitutionParameters, boolean mapToUpperCase)
+    throws ManifoldCFException {
+    outputResource(output, Messages.class, DEFAULT_PATH_NAME, locale, resourceKey,
+      substitutionParameters, mapToUpperCase);
+  }
+
+  public static void outputResourceWithVelocity(IHTTPOutput output, Locale locale, String resourceKey,
+    Map<String, String> substitutionParameters, boolean mapToUpperCase)
+    throws ManifoldCFException {
+    outputResourceWithVelocity(output, Messages.class, DEFAULT_BUNDLE_NAME, DEFAULT_PATH_NAME, locale, resourceKey,
+      substitutionParameters, mapToUpperCase);
+  }
+
+  public static void outputResourceWithVelocity(IHTTPOutput output, Locale locale, String resourceKey,
+    Map<String, Object> contextObjects)
+    throws ManifoldCFException {
+    outputResourceWithVelocity(output, Messages.class, DEFAULT_BUNDLE_NAME, DEFAULT_PATH_NAME, locale, resourceKey,
+      contextObjects);
+  }
+}
diff --git a/connectors/generic/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/generic/GenericConnector.java b/connectors/generic/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/generic/GenericConnector.java
new file mode 100644
index 0000000..43b7093
--- /dev/null
+++ b/connectors/generic/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/generic/GenericConnector.java
@@ -0,0 +1,1412 @@
+/* $Id: SvnConnector.java 994959 2010-09-08 10:04:42Z krycek $ */

+/**

+ * Licensed to the Apache Software Foundation (ASF) under one or more

+ * contributor license agreements. See the NOTICE file distributed with this

+ * work for additional information regarding copyright ownership. The ASF

+ * licenses this file to You under the Apache License, Version 2.0 (the

+ * "License"); you may not use this file except in compliance with the License.

+ * You may obtain a copy of the License at

+ *

+ * http://www.apache.org/licenses/LICENSE-2.0

+ *

+ * Unless required by applicable law or agreed to in writing, software

+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT

+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the

+ * License for the specific language governing permissions and limitations under

+ * the License.

+ */

+package org.apache.manifoldcf.crawler.connectors.generic;

+

+import java.io.*;

+import java.net.MalformedURLException;

+import java.net.URL;

+import java.net.URLEncoder;

+import java.text.SimpleDateFormat;

+import java.util.*;

+import java.util.concurrent.ConcurrentHashMap;

+import javax.xml.bind.JAXBContext;

+import javax.xml.bind.JAXBException;

+import javax.xml.bind.Unmarshaller;

+import javax.xml.parsers.FactoryConfigurationError;

+import javax.xml.parsers.ParserConfigurationException;

+import javax.xml.parsers.SAXParser;

+import javax.xml.parsers.SAXParserFactory;

+import org.apache.http.HttpException;

+import org.apache.http.HttpRequest;

+import org.apache.http.HttpRequestInterceptor;

+import org.apache.http.HttpResponse;

+import org.apache.http.HttpStatus;

+import org.apache.http.auth.AuthScope;

+import org.apache.http.auth.Credentials;

+import org.apache.http.auth.UsernamePasswordCredentials;

+import org.apache.http.client.HttpClient;

+import org.apache.http.client.methods.HttpGet;

+import org.apache.http.impl.auth.BasicScheme;

+import org.apache.http.impl.client.DefaultHttpClient;

+import org.apache.http.params.HttpConnectionParams;

+import org.apache.http.protocol.HttpContext;

+import org.apache.http.util.EntityUtils;

+import org.apache.manifoldcf.agents.interfaces.*;

+import org.apache.manifoldcf.core.common.XThreadInputStream;

+import org.apache.manifoldcf.core.common.XThreadStringBuffer;

+import org.apache.manifoldcf.core.interfaces.*;

+import org.apache.manifoldcf.core.system.ManifoldCF;

+import org.apache.manifoldcf.crawler.connectors.BaseRepositoryConnector;

+import org.apache.manifoldcf.crawler.connectors.generic.api.Item;

+import org.apache.manifoldcf.crawler.connectors.generic.api.Items;

+import org.apache.manifoldcf.crawler.connectors.generic.api.Meta;

+import org.apache.manifoldcf.crawler.interfaces.*;

+import org.apache.manifoldcf.ui.util.Encoder;

+import org.xml.sax.Attributes;

+import org.xml.sax.SAXException;

+import org.xml.sax.helpers.DefaultHandler;

+

+public class GenericConnector extends BaseRepositoryConnector {

+

+  public static final String _rcsid = "@(#)$Id: GenericConnector.java 994959 2010-09-08 10:04:42Z redguy $";

+

+  /**

+   * Deny access token for default authority

+   */

+  private final static String defaultAuthorityDenyToken = "DEAD_AUTHORITY";

+

+  private final static String ACTION_PARAM_NAME = "action";

+

+  private final static String ACTION_CHECK = "check";

+

+  private final static String ACTION_SEED = "seed";

+

+  private final static String ACTION_ITEMS = "items";

+

+  private final static String ACTION_ITEM = "item";

+

+  private String genericLogin = null;

+

+  private String genericPassword = null;

+

+  private String genericEntryPoint = null;

+

+  private int connectionTimeoutMillis = 60 * 1000;

+

+  private int socketTimeoutMillis = 30 * 60 * 1000;

+

+  protected static final String RELATIONSHIP_RELATED = "related";

+

+  private ConcurrentHashMap<String, Item> documentCache = new ConcurrentHashMap<String, Item>(10);

+

+  /**

+   * Constructor.

+   */

+  public GenericConnector() {

+  }

+

+  @Override

+  public int getMaxDocumentRequest() {

+    return 10;

+  }

+

+  @Override

+  public String[] getRelationshipTypes() {

+    return new String[]{RELATIONSHIP_RELATED};

+  }

+

+  @Override

+  public int getConnectorModel() {

+    return GenericConnector.MODEL_ADD_CHANGE;

+  }

+

+  /**

+   * For any given document, list the bins that it is a member of.

+   */

+  @Override

+  public String[] getBinNames(String documentIdentifier) {

+    // Use the entry point URL as the single bin name

+    return new String[]{genericEntryPoint};

+  }

+

+  // All methods below this line will ONLY be called if a connect() call succeeded

+  // on this instance!

+  /**

+   * Connect. The configuration parameters are included.

+   *

+   * @param configParams are the configuration parameters for this connection.

+   * Note well: There are no exceptions allowed from this call, since it is

+   * expected to mainly establish connection parameters.

+   */

+  @Override

+  public void connect(ConfigParams configParams) {

+    super.connect(configParams);

+    genericEntryPoint = getParam(configParams, "genericEntryPoint", null);

+    genericLogin = getParam(configParams, "genericLogin", null);

+    genericPassword = "";

+    try {

+      genericPassword = ManifoldCF.deobfuscate(getParam(configParams, "genericPassword", ""));

+    } catch (ManifoldCFException ignore) {

+    }

+    connectionTimeoutMillis = Integer.parseInt(getParam(configParams, "genericConnectionTimeout", "60000"));

+    if (connectionTimeoutMillis == 0) {

+      connectionTimeoutMillis = 60000;

+    }

+    socketTimeoutMillis = Integer.parseInt(getParam(configParams, "genericSocketTimeout", "1800000"));

+    if (socketTimeoutMillis == 0) {

+      socketTimeoutMillis = 1800000;

+    }

+  }

+

+  protected DefaultHttpClient getClient() throws ManifoldCFException {

+    DefaultHttpClient cl = new DefaultHttpClient();

+    if (genericLogin != null && !genericLogin.isEmpty()) {

+      try {

+        URL url = new URL(genericEntryPoint);

+        Credentials credentials = new UsernamePasswordCredentials(genericLogin, genericPassword);

+        cl.getCredentialsProvider().setCredentials(new AuthScope(url.getHost(), url.getPort() > 0 ? url.getPort() : 80, AuthScope.ANY_REALM), credentials);

+        cl.addRequestInterceptor(new PreemptiveAuth(credentials), 0);

+      } catch (MalformedURLException ex) {

+        throw new ManifoldCFException("getClient exception: " + ex.getMessage(), ex);

+      }

+    }

+    HttpConnectionParams.setConnectionTimeout(cl.getParams(), connectionTimeoutMillis);

+    HttpConnectionParams.setSoTimeout(cl.getParams(), socketTimeoutMillis);

+    return cl;

+  }

+

+  @Override

+  public String check() throws ManifoldCFException {

+    HttpClient client = getClient();

+    try {

+      CheckThread checkThread = new CheckThread(client, genericEntryPoint + "?" + ACTION_PARAM_NAME + "=" + ACTION_CHECK);

+      checkThread.start();

+      checkThread.join();

+      if (checkThread.getException() != null) {

+        Throwable thr = checkThread.getException();

+        return "Check exception: " + thr.getMessage();

+      }

+      return checkThread.getResult();

+    } catch (InterruptedException ex) {

+      throw new ManifoldCFException(ex.getMessage(), ex, ManifoldCFException.INTERRUPTED);

+    }

+  }

+

+  @Override

+  public void addSeedDocuments(ISeedingActivity activities, DocumentSpecification spec,

+    long startTime, long endTime)

+    throws ManifoldCFException, ServiceInterruption {

+

+    HttpClient client = getClient();

+    SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss'Z'");

+

+    StringBuilder url = new StringBuilder(genericEntryPoint);

+    url.append("?").append(ACTION_PARAM_NAME).append("=").append(ACTION_SEED);

+    if (startTime > 0) {

+      url.append("&startTime=").append(sdf.format(new Date(startTime)));

+    }

+    url.append("&endTime=").append(sdf.format(new Date(endTime)));

+    for (int i = 0; i < spec.getChildCount(); i++) {

+      SpecificationNode sn = spec.getChild(i);

+      if (sn.getType().equals("param")) {

+        try {

+          String paramName = sn.getAttributeValue("name");

+          String paramValue = sn.getValue();

+          url.append("&").append(URLEncoder.encode(paramName, "UTF-8")).append("=").append(URLEncoder.encode(paramValue, "UTF-8"));

+        } catch (UnsupportedEncodingException ex) {

+          throw new ManifoldCFException("addSeedDocuments error: " + ex.getMessage(), ex);

+        }

+      }

+    }
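+    // Seeding runs on a background thread; document identifiers stream back through an XThreadStringBuffer and are added as seeds before the thread is joined.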

+    ExecuteSeedingThread t = new ExecuteSeedingThread(client, url.toString());

+    try {

+      t.start();

+      boolean wasInterrupted = false;

+      try {

+        XThreadStringBuffer seedBuffer = t.getBuffer();

+

+        // Pick up the paths, and add them to the activities, before we join with the child thread.

+        while (true) {

+          // The only kind of exceptions this can throw are going to shut the process down.

+          String docPath = seedBuffer.fetch();

+          if (docPath == null) {

+            break;

+          }

+          // Add the document identifier to the seeding queue

+          activities.addSeedDocument(docPath);

+        }

+      } catch (InterruptedException e) {

+        wasInterrupted = true;

+        throw e;

+      } catch (ManifoldCFException e) {

+        if (e.getErrorCode() == ManifoldCFException.INTERRUPTED) {

+          wasInterrupted = true;

+        }

+        throw e;

+      } finally {

+        if (!wasInterrupted) {

+          t.finishUp();

+        }

+      }

+    } catch (InterruptedException e) {

+      t.interrupt();

+      throw new ManifoldCFException("Interrupted: " + e.getMessage(), e,

+        ManifoldCFException.INTERRUPTED);

+    }

+  }

+

+  @Override

+  public String[] getDocumentVersions(String[] documentIdentifiers, String[] oldVersions, IVersionActivity activities,

+    DocumentSpecification spec, int jobType, boolean usesDefaultAuthority)

+    throws ManifoldCFException, ServiceInterruption {

+

+    // Forced acls

+    String[] acls = getAcls(spec);

+    // Sort it,

+    java.util.Arrays.sort(acls);

+    String rights = java.util.Arrays.toString(acls);

+

+    String genericAuthMode = "provided";

+    for (int i = 0; i < spec.getChildCount(); i++) {

+      SpecificationNode sn = spec.getChild(i);

+      if (sn.getType().equals("genericAuthMode")) {

+        genericAuthMode = sn.getValue();

+        break;

+      }

+    }

+

+    HttpClient client = getClient();

+    StringBuilder url = new StringBuilder(genericEntryPoint);

+    try {

+      url.append("?").append(ACTION_PARAM_NAME).append("=").append(ACTION_ITEMS);

+      for (int i = 0; i < documentIdentifiers.length; i++) {

+        url.append("&id[]=").append(URLEncoder.encode(documentIdentifiers[i], "UTF-8"));

+      }

+      for (int i = 0; i < spec.getChildCount(); i++) {

+        SpecificationNode sn = spec.getChild(i);

+        if (sn.getType().equals("param")) {

+          String paramName = sn.getAttributeValue("name");

+          String paramValue = sn.getValue();

+          url.append("&").append(URLEncoder.encode(paramName, "UTF-8")).append("=").append(URLEncoder.encode(paramValue, "UTF-8"));

+        }

+      }

+    } catch (UnsupportedEncodingException ex) {

+      throw new ManifoldCFException("getDocumentVersions error: " + ex.getMessage(), ex);

+    }

+    try {
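+      // A single "items" request covers the whole batch; the worker thread is handed documentCache so item details are available later to processDocuments().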

+      DocumentVersionThread versioningThread = new DocumentVersionThread(client, url.toString(), documentIdentifiers, genericAuthMode, rights, documentCache);

+      versioningThread.start();

+      versioningThread.join();

+      if (versioningThread.getException() != null) {

+        Throwable thr = versioningThread.getException();

+        if (thr instanceof ManifoldCFException) {

+          if (((ManifoldCFException) thr).getErrorCode() == ManifoldCFException.INTERRUPTED) {

+            throw new InterruptedException(thr.getMessage());

+          }

+          throw (ManifoldCFException) thr;

+        } else if (thr instanceof ServiceInterruption) {

+          throw (ServiceInterruption) thr;

+        } else if (thr instanceof IOException) {

+          handleIOException((IOException) thr);

+        } else if (thr instanceof RuntimeException) {

+          throw (RuntimeException) thr;

+        }

+        throw new ManifoldCFException("getDocumentVersions error: " + thr.getMessage(), thr);

+      }

+      return versioningThread.getVersions();

+    } catch (InterruptedException ex) {

+      throw new ManifoldCFException(ex.getMessage(), ex, ManifoldCFException.INTERRUPTED);

+    }

+  }

+

+  @Override

+  public void processDocuments(String[] documentIdentifiers, String[] versions, IProcessActivity activities,

+    DocumentSpecification spec, boolean[] scanOnly, int jobType)

+    throws ManifoldCFException, ServiceInterruption {

+

+    // Forced acls

+    String[] acls = getAcls(spec);

+

+    String genericAuthMode = "provided";

+    for (int i = 0; i < spec.getChildCount(); i++) {

+      SpecificationNode sn = spec.getChild(i);

+      if (sn.getType().equals("genericAuthMode")) {

+        genericAuthMode = sn.getValue();

+        break;

+      }

+    }

+

+    HttpClient client = getClient();

+    for (int i = 0; i < documentIdentifiers.length; i++) {

+      activities.checkJobStillActive();

+
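+      // The item was cached during the getDocumentVersions() phase; a missing entry here is treated as an error.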

+      Item item = documentCache.get(documentIdentifiers[i]);

+      if (item == null) {

+        throw new ManifoldCFException("processDocuments error - no cache entry for: " + documentIdentifiers[i]);

+      }

+

+      if (item.related != null) {

+        for (String rel : item.related) {

+          activities.addDocumentReference(rel, documentIdentifiers[i], RELATIONSHIP_RELATED);

+        }

+      }

+      if (scanOnly[i]) {

+        continue;

+      }

+

+      RepositoryDocument doc = new RepositoryDocument();

+      if (item.mimeType != null) {

+        doc.setMimeType(item.mimeType);

+      }

+      if (item.created != null) {

+        doc.setCreatedDate(item.created);

+      }

+      if (item.updated != null) {

+        doc.setModifiedDate(item.updated);

+      }

+      if (item.fileName != null) {

+        doc.setFileName(item.fileName);

+      }

+      if (item.metadata != null) {

+        HashMap<String, List<String>> meta = new HashMap<String, List<String>>();

+        for (Meta m : item.metadata) {

+          if (meta.containsKey(m.name)) {

+            meta.get(m.name).add(m.value);

+          } else {

+            List<String> list = new ArrayList<String>(1);

+            list.add(m.value);

+            meta.put(m.name, list);

+          }

+        }

+        for (String name : meta.keySet()) {

+          List<String> values = meta.get(name);

+          if (values.size() > 1) {

+            String[] svals = new String[values.size()];

+            for (int j = 0; j < values.size(); j++) {

+              svals[j] = values.get(j);

+            }

+            doc.addField(name, svals);

+          } else {

+            doc.addField(name, values.get(0));

+          }

+        }

+      }

+      if ("provided".equals(genericAuthMode)) {

+        if (item.auth != null) {

+          String[] acl = new String[item.auth.size()];

+          for (int j = 0; j < item.auth.size(); j++) {

+            acl[j] = item.auth.get(j);

+          }

+          doc.setACL(acl);

+          doc.setDenyACL(new String[]{defaultAuthorityDenyToken});

+        }

+      } else {

+        if (acls.length > 0) {

+          doc.setACL(acls);

+          doc.setDenyACL(new String[]{defaultAuthorityDenyToken});

+        }

+      }

+      if (item.content != null) {

+        try {

+          byte[] content = item.content.getBytes("UTF-8");

+          ByteArrayInputStream is = new ByteArrayInputStream(content);

+          try {

+            doc.setBinary(is, content.length);

+            activities.ingestDocument(documentIdentifiers[i], versions[i], item.url, doc);

+            is.close();

+          } finally {

+            is.close();

+          }

+        } catch (IOException ex) {

+          handleIOException(ex);

+        }

+      } else {

+        StringBuilder url = new StringBuilder(genericEntryPoint);

+        try {

+          url.append("?").append(ACTION_PARAM_NAME).append("=").append(ACTION_ITEM);

+          url.append("&id=").append(URLEncoder.encode(documentIdentifiers[i], "UTF-8"));

+          for (int j = 0; j < spec.getChildCount(); j++) {

+            SpecificationNode sn = spec.getChild(j);

+            if (sn.getType().equals("param")) {

+              String paramName = sn.getAttributeValue("name");

+              String paramValue = sn.getValue();

+              url.append("&").append(URLEncoder.encode(paramName, "UTF-8")).append("=").append(URLEncoder.encode(paramValue, "UTF-8"));

+            }

+          }

+        } catch (UnsupportedEncodingException ex) {

+          throw new ManifoldCFException("processDocuments error: " + ex.getMessage(), ex);

+        }

+

+        ExecuteProcessThread t = new ExecuteProcessThread(client, url.toString());

+        try {

+          t.start();

+          boolean wasInterrupted = false;

+          try {

+            InputStream is = t.getSafeInputStream();

+            long fileLength = t.getStreamLength();

+            try {

+              // Can only index while background thread is running!

+              doc.setBinary(is, fileLength);

+              activities.ingestDocument(documentIdentifiers[i], versions[i], item.url, doc);

+            } finally {

+              is.close();

+            }

+          } catch (ManifoldCFException e) {

+            if (e.getErrorCode() == ManifoldCFException.INTERRUPTED) {

+              wasInterrupted = true;

+            }

+            throw e;

+          } catch (java.net.SocketTimeoutException e) {

+            throw e;

+          } catch (InterruptedIOException e) {

+            wasInterrupted = true;

+            throw e;

+          } finally {

+            if (!wasInterrupted) {

+              t.finishUp();

+            }

+          }

+        } catch (InterruptedException e) {

+          t.interrupt();

+          throw new ManifoldCFException("Interrupted: " + e.getMessage(), e, ManifoldCFException.INTERRUPTED);

+        } catch (InterruptedIOException e) {

+          t.interrupt();

+          throw new ManifoldCFException("Interrupted: " + e.getMessage(), e, ManifoldCFException.INTERRUPTED);

+        } catch (IOException e) {

+          handleIOException(e);

+        }

+      }

+    }

+  }

+

+  @Override

+  public void releaseDocumentVersions(String[] documentIdentifiers, String[] versions) throws ManifoldCFException {

+    for (int i = 0; i < documentIdentifiers.length; i++) {

+      if (documentCache.containsKey(documentIdentifiers[i])) {

+        documentCache.remove(documentIdentifiers[i]);

+      }

+    }

+    super.releaseDocumentVersions(documentIdentifiers, versions);

+  }

+

+  @Override

+  public void outputConfigurationHeader(IThreadContext threadContext, IHTTPOutput out,

+    Locale locale, ConfigParams parameters, List<String> tabsArray)

+    throws ManifoldCFException, IOException {

+    tabsArray.add(Messages.getString(locale, "generic.EntryPoint"));

+

+    out.print(

+      "<script type=\"text/javascript\">\n"

+      + "<!--\n"

+      + "function checkConfig() {\n"

+      + "  return true;\n"

+      + "}\n"

+      + "\n"

+      + "function checkConfigForSave() {\n"

+      + "  return true;\n"

+      + "}\n"

+      + "//-->\n"

+      + "</script>\n");

+  }

+

+  @Override

+  public void outputConfigurationBody(IThreadContext threadContext, IHTTPOutput out,

+    Locale locale, ConfigParams parameters, String tabName)

+    throws ManifoldCFException, IOException {

+

+    String server = getParam(parameters, "genericEntryPoint", "");

+    String login = getParam(parameters, "genericLogin", "");

+    String password = "";

+    try {

+      password = ManifoldCF.deobfuscate(getParam(parameters, "genericPassword", ""));

+    } catch (ManifoldCFException ignore) {

+    }

+    String conTimeout = getParam(parameters, "genericConnectionTimeout", "60000");

+    String soTimeout = getParam(parameters, "genericSocketTimeout", "1800000");

+

+    if (tabName.equals(Messages.getString(locale, "generic.EntryPoint"))) {

+      out.print(

+        "<table class=\"displaytable\">\n"

+        + " <tr><td class=\"separator\" colspan=\"2\"><hr/></td></tr>\n"

+        + " <tr>\n"

+        + "  <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "generic.EntryPointColon") + "</nobr></td>\n"

+        + "  <td class=\"value\"><input type=\"text\" size=\"32\" name=\"genericEntryPoint\" value=\"" + Encoder.attributeEscape(server) + "\"/></td>\n"

+        + " </tr>\n"

+        + " <tr>\n"

+        + "  <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "generic.LoginColon") + "</nobr></td>\n"

+        + "  <td class=\"value\"><input type=\"text\" size=\"32\" name=\"genericLogin\" value=\"" + Encoder.attributeEscape(login) + "\"/></td>\n"

+        + " </tr>\n"

+        + " <tr>\n"

+        + "  <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "generic.PasswordColon") + "</nobr></td>\n"

+        + "  <td class=\"value\"><input type=\"password\" size=\"32\" name=\"genericPassword\" value=\"" + Encoder.attributeEscape(password) + "\"/></td>\n"

+        + " </tr>\n"

+        + " <tr>\n"

+        + "  <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "generic.ConnectionTimeoutColon") + "</nobr></td>\n"

+        + "  <td class=\"value\"><input type=\"text\" size=\"32\" name=\"genericConTimeout\" value=\"" + Encoder.attributeEscape(conTimeout) + "\"/></td>\n"

+        + " </tr>\n"

+        + " <tr>\n"

+        + "  <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "generic.SocketTimeoutColon") + "</nobr></td>\n"

+        + "  <td class=\"value\"><input type=\"text\" size=\"32\" name=\"genericSoTimeout\" value=\"" + Encoder.attributeEscape(soTimeout) + "\"/></td>\n"

+        + " </tr>\n"

+        + "</table>\n");

+    } else {

+      out.print("<input type=\"hidden\" name=\"genericEntryPoint\" value=\"" + Encoder.attributeEscape(server) + "\"/>\n");

+      out.print("<input type=\"hidden\" name=\"genericLogin\" value=\"" + Encoder.attributeEscape(login) + "\"/>\n");

+      out.print("<input type=\"hidden\" name=\"genericPassword\" value=\"" + Encoder.attributeEscape(password) + "\"/>\n");

+      out.print("<input type=\"hidden\" name=\"genericConTimeout\" value=\"" + Encoder.attributeEscape(conTimeout) + "\"/>\n");

+      out.print("<input type=\"hidden\" name=\"genericSoTimeout\" value=\"" + Encoder.attributeEscape(soTimeout) + "\"/>\n");

+    }

+  }

+

+  @Override

+  public String processConfigurationPost(IThreadContext threadContext, IPostParameters variableContext,

+    Locale locale, ConfigParams parameters)

+    throws ManifoldCFException {

+

+    copyParam(variableContext, parameters, "genericLogin");

+    copyParam(variableContext, parameters, "genericEntryPoint");

+    copyParam(variableContext, parameters, "genericConTimeout");

+    copyParam(variableContext, parameters, "genericSoTimeout");

+

+    String password = variableContext.getParameter("genericPassword");

+    if (password == null) {

+      password = "";

+    }

+    parameters.setParameter("genericPassword", ManifoldCF.obfuscate(password));

+    return null;

+  }

+

+  @Override

+  public void viewConfiguration(IThreadContext threadContext, IHTTPOutput out,

+    Locale locale, ConfigParams parameters)

+    throws ManifoldCFException, IOException {

+    String login = getParam(parameters, "genericLogin", "");

+    String server = getParam(parameters, "genericEntryPoint", "");

+    String conTimeout = getParam(parameters, "genericConnectionTimeout", "60000");

+    String soTimeout = getParam(parameters, "genericSocketTimeout", "1800000");

+    

+    out.print(

+      "<table class=\"displaytable\">\n"

+      + " <tr><td class=\"separator\" colspan=\"2\"><hr/></td></tr>\n"

+      + " <tr>\n"

+      + "  <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "generic.EntryPointColon") + "</nobr></td>\n"

+      + "  <td class=\"value\">" + Encoder.bodyEscape(server) + "</td>\n"

+      + " </tr>\n"

+      + " <tr>\n"

+      + "  <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "generic.LoginColon") + "</nobr></td>\n"

+      + "  <td class=\"value\">" + Encoder.bodyEscape(login) + "</td>\n"

+      + " </tr>\n"

+      + " <tr>\n"

+      + "  <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "generic.PasswordColon") + "</nobr></td>\n"

+      + "  <td class=\"value\">**********</td>\n"

+      + " </tr>\n"

+      + " <tr>\n"

+      + "  <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "generic.ConnectionTimeoutColon") + "</nobr></td>\n"

+      + "  <td class=\"value\">" + Encoder.bodyEscape(conTimeout) + "</td>\n"

+      + " </tr>\n"

+      + " <tr>\n"

+      + "  <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "generic.SocketTimeoutColon") + "</nobr></td>\n"

+      + "  <td class=\"value\">" + Encoder.bodyEscape(soTimeout) + "</td>\n"

+      + " </tr>\n"

+      + "</table>\n");

+  }

+

+  @Override

+  public void outputSpecificationHeader(IHTTPOutput out, Locale locale, DocumentSpecification ds, List<String> tabsArray)

+    throws ManifoldCFException, IOException {

+    tabsArray.add(Messages.getString(locale, "generic.Parameters"));

+    tabsArray.add(Messages.getString(locale, "generic.Security"));

+

+    out.print(

+      "<script type=\"text/javascript\">\n"

+      + "<!--\n"

+      + "function SpecOp(n, opValue, anchorvalue) {\n"

+      + "  eval(\"editjob.\"+n+\".value = \\\"\"+opValue+\"\\\"\");\n"

+      + "  postFormSetAnchor(anchorvalue);\n"

+      + "}\n"

+      + "\n"

+      + "function checkSpecification() {\n"

+      + "  return true;\n"

+      + "}\n"

+      + "\n"

+      + "function SpecAddToken(anchorvalue) {\n"

+      + "  if (editjob.spectoken.value == \"\")\n"

+      + "  {\n"

+      + "    alert(\"" + Messages.getBodyJavascriptString(locale, "generic.TypeInAnAccessToken") + "\");\n"

+      + "    editjob.spectoken.focus();\n"

+      + "    return;\n"

+      + "  }\n"

+      + "  SpecOp(\"accessop\",\"Add\",anchorvalue);\n"

+      + "}\n"

+      + "function SpecAddParam(anchorvalue) {\n"

+      + "  if (editjob.specparamname.value == \"\")\n"

+      + "  {\n"

+      + "    alert(\"" + Messages.getBodyJavascriptString(locale, "generic.TypeInParamName") + "\");\n"

+      + "    editjob.specparamname.focus();\n"

+      + "    return;\n"

+      + "  }\n"

+      + "  SpecOp(\"paramop\",\"Add\",anchorvalue);\n"

+      + "}\n"

+      + "//-->\n"

+      + "</script>\n");

+  }

+

+  @Override

+  public void outputSpecificationBody(IHTTPOutput out, Locale locale, DocumentSpecification ds, String tabName)

+    throws ManifoldCFException, IOException {

+

+    int k, i;

+

+    if (tabName.equals(Messages.getString(locale, "generic.Parameters"))) {

+

+      out.print("<table class=\"displaytable\">"

+        + "<tr><td class=\"description\"><nobr>" + Messages.getBodyString(locale, "generic.ParametersColon") + "</nobr></td>"

+        + "<td class=\"value\">");

+

+      out.print("<table class=\"formtable\">\n"

+        + "<tr class=\"formheaderrow\">"

+        + "<td class=\"formcolumnheader\"></td>"

+        + "<td class=\"formcolumnheader\">" + Messages.getBodyString(locale, "generic.ParameterName") + "</td>"

+        + "<td class=\"formcolumnheader\">" + Messages.getBodyString(locale, "generic.ParameterValue") + "</td>"

+        + "</tr>");

+

+      i = 0;

+      k = 0;

+      while (i < ds.getChildCount()) {

+        SpecificationNode sn = ds.getChild(i++);

+        if (sn.getType().equals("param")) {

+          String paramDescription = "_" + Integer.toString(k);

+          String paramOpName = "paramop" + paramDescription;

+          String paramName = sn.getAttributeValue("name");

+          String paramValue = sn.getValue();

+          out.print(

+            "  <tr class=\"evenformrow\">\n"

+            + "    <td class=\"formcolumncell\">\n"

+            + "      <input type=\"hidden\" name=\"" + paramOpName + "\" value=\"\"/>\n"

+            + "      <a name=\"" + "param_" + Integer.toString(k) + "\">\n"

+            + "        <input type=\"button\" value=\"" + Messages.getAttributeString(locale, "generic.Delete") + "\" onClick='Javascript:SpecOp(\"" + paramOpName + "\",\"Delete\",\"param" + paramDescription + "\")' alt=\"" + Messages.getAttributeString(locale, "generic.DeleteParameter") + Integer.toString(k) + "\"/>\n"

+            + "      </a>&nbsp;\n"

+            + "    </td>\n"

+            + "    <td class=\"formcolumncell\">\n"

+            + "      <input type=\"text\" name=\"specparamname" + paramDescription + "\" value=\"" + Encoder.attributeEscape(paramName) + "\"/>\n"

+            + "    </td>\n"

+            + "    <td class=\"formcolumncell\">\n"

+            + "      <input type=\"text\" name=\"specparamvalue" + paramDescription + "\" value=\"" + Encoder.attributeEscape(paramValue) + "\"/>\n"

+            + "    </td>\n"

+            + "  </tr>\n");

+          k++;

+        }

+      }

+      if (k == 0) {

+        out.print(

+          "  <tr>\n"

+          + "    <td class=\"message\" colspan=\"3\">" + Messages.getBodyString(locale, "generic.NoParametersSpecified") + "</td>\n"

+          + "  </tr>\n");

+      }

+      out.print(

+        "  <tr><td class=\"lightseparator\" colspan=\"3\"><hr/></td></tr>\n"

+        + "  <tr class=\"evenformrow\">\n"

+        + "    <td class=\"formcolumncell\">\n"

+        + "      <input type=\"hidden\" name=\"paramcount\" value=\"" + Integer.toString(k) + "\"/>\n"

+        + "      <input type=\"hidden\" name=\"paramop\" value=\"\"/>\n"

+        + "      <a name=\"param_" + Integer.toString(k) + "\">\n"

+        + "        <input type=\"button\" value=\"" + Messages.getAttributeString(locale, "generic.Add") + "\" onClick='Javascript:SpecAddParam(\"param_" + Integer.toString(k + 1) + "\")' alt=\"" + Messages.getAttributeString(locale, "generic.AddParameter") + "\"/>\n"

+        + "      </a>&nbsp;\n"

+        + "    </td>\n"

+        + "    <td class=\"formcolumncell\">\n"

+        + "      <input type=\"text\" size=\"30\" name=\"specparamname\" value=\"\"/>\n"

+        + "    </td>\n"

+        + "    <td class=\"formcolumncell\">\n"

+        + "      <input type=\"text\" size=\"30\" name=\"specparamvalue\" value=\"\"/>\n"

+        + "    </td>\n"

+        + "  </tr>\n"

+        + "</table>\n");

+      out.print("</td></tr></table>");

+    } else {

+      i = 0;

+      k = 0;

+      while (i < ds.getChildCount()) {

+        SpecificationNode sn = ds.getChild(i++);

+        if (sn.getType().equals("param")) {

+          String accessDescription = "_" + Integer.toString(k);

+          String paramName = sn.getAttributeValue("name");

+          String paramValue = sn.getValue();

+          out.print(

+            "<input type=\"hidden\" name=\"" + "specparamname" + accessDescription + "\" value=\"" + Encoder.attributeEscape(paramName) + "\"/>\n"

+            + "<input type=\"hidden\" name=\"" + "specparamvalue" + accessDescription + "\" value=\"" + Encoder.attributeEscape(paramValue) + "\"/>\n");

+          k++;

+        }

+      }

+      out.print("<input type=\"hidden\" name=\"paramcount\" value=\"" + Integer.toString(k) + "\"/>\n");

+    }

+

+    // Security tab

+    String genericAuthMode = "provided";

+    for (i = 0; i < ds.getChildCount(); i++) {

+      SpecificationNode sn = ds.getChild(i);

+      if (sn.getType().equals("genericAuthMode")) {

+        genericAuthMode = sn.getValue();

+      }

+    }

+    if (tabName.equals(Messages.getString(locale, "generic.Security"))) {

+      out.print(

+        "<table class=\"displaytable\">\n"

+        + "  <tr><td class=\"separator\" colspan=\"2\"><hr/></td></tr>\n");

+

+      out.print("  <tr>\n"

+        + "    <td class=\"description\">" + Messages.getBodyString(locale, "generic.AuthMode") + "</td>\n"

+        + "    <td class=\"value\" >\n"

+        + "      <input type=\"radio\" name=\"genericAuthMode\" value=\"provided\" " + ("provided".equals(genericAuthMode) ? "checked=\"checked\"" : "") + "/>" + Messages.getBodyString(locale, "generic.AuthModeProvided") + "<br/>\n"

+        + "      <input type=\"radio\" name=\"genericAuthMode\" value=\"forced\" " + ("forced".equals(genericAuthMode) ? "checked=\"checked\"" : "") + "/>" + Messages.getBodyString(locale, "generic.AuthModeForced") + "<br/>\n"

+        + "    </td>\n"

+        + "  </tr>\n");

+      // Go through forced ACL

+      out.print("<tr><td class=\"description\"><nobr>" + Messages.getBodyString(locale, "generic.TokensColon") + "</nobr></td>"

+        + "<td class=\"value\">");

+      out.print("<table class=\"formtable\">\n"

+        + "<tr class=\"formheaderrow\">"

+        + "<td class=\"formcolumnheader\"></td>"

+        + "<td class=\"formcolumnheader\">" + Messages.getBodyString(locale, "generic.Token") + "</td>"

+        + "</tr>");

+      i = 0;

+      k = 0;

+      while (i < ds.getChildCount()) {

+        SpecificationNode sn = ds.getChild(i++);

+        if (sn.getType().equals("access")) {

+          String accessDescription = "_" + Integer.toString(k);

+          String accessOpName = "accessop" + accessDescription;

+          String token = sn.getAttributeValue("token");

+          out.print(

+            "  <tr class=\"evenformrow\">\n"

+            + "    <td class=\"formcolumncell\">\n"

+            + "      <input type=\"hidden\" name=\"" + accessOpName + "\" value=\"\"/>\n"

+            + "      <input type=\"hidden\" name=\"" + "spectoken" + accessDescription + "\" value=\"" + Encoder.attributeEscape(token) + "\"/>\n"

+            + "      <a name=\"" + "token_" + Integer.toString(k) + "\">\n"

+            + "        <input type=\"button\" value=\"" + Messages.getAttributeString(locale, "generic.Delete") + "\" onClick='Javascript:SpecOp(\"" + accessOpName + "\",\"Delete\",\"token_" + Integer.toString(k) + "\")' alt=\"" + Messages.getAttributeString(locale, "generic.DeleteToken") + Integer.toString(k) + "\"/>\n"

+            + "      </a>&nbsp;\n"

+            + "    </td>\n"

+            + "    <td class=\"formcolumncell\">\n"

+            + "      " + Encoder.bodyEscape(token) + "\n"

+            + "    </td>\n"

+            + "  </tr>\n");

+          k++;

+        }

+      }

+      if (k == 0) {

+        out.print(

+          "  <tr>\n"

+          + "    <td class=\"message\" colspan=\"2\">" + Messages.getBodyString(locale, "generic.NoAccessTokensSpecified") + "</td>\n"

+          + "  </tr>\n");

+      }

+      out.print(

+        "  <tr><td class=\"lightseparator\" colspan=\"2\"><hr/></td></tr>\n"

+        + "  <tr class=\"evenformrow\">\n"

+        + "    <td class=\"formcolumncell\">\n"

+        + "      <input type=\"hidden\" name=\"tokencount\" value=\"" + Integer.toString(k) + "\"/>\n"

+        + "      <input type=\"hidden\" name=\"accessop\" value=\"\"/>\n"

+        + "      <a name=\"" + "token_" + Integer.toString(k) + "\">\n"

+        + "        <input type=\"button\" value=\"" + Messages.getAttributeString(locale, "generic.Add") + "\" onClick='Javascript:SpecAddToken(\"token_" + Integer.toString(k + 1) + "\")' alt=\"" + Messages.getAttributeString(locale, "generic.AddAccessToken") + "\"/>\n"

+        + "      </a>&nbsp;\n"

+        + "    </td>\n"

+        + "    <td class=\"formcolumncell\">\n"

+        + "      <input type=\"text\" size=\"30\" name=\"spectoken\" value=\"\"/>\n"

+        + "    </td>\n"

+        + "  </tr>\n"

+        + "</table>\n");

+      out.print("</td></tr></table>");

+    } else {

+      // Finally, go through forced ACL

+      i = 0;

+      k = 0;

+      while (i < ds.getChildCount()) {

+        SpecificationNode sn = ds.getChild(i++);

+        if (sn.getType().equals("access")) {

+          String accessDescription = "_" + Integer.toString(k);

+          String token = "" + sn.getAttributeValue("token");

+          out.print(

+            "<input type=\"hidden\" name=\"" + "spectoken" + accessDescription + "\" value=\"" + Encoder.attributeEscape(token) + "\"/>\n");

+          k++;

+        }

+      }

+      out.print("<input type=\"hidden\" name=\"tokencount\" value=\"" + Integer.toString(k) + "\"/>\n");

+      out.print("<input type=\"hidden\" name=\"genericAuthMode\" value=\"" + Encoder.attributeEscape(genericAuthMode) + "\"/>\n");

+    }

+  }

+

+  @Override

+  public String processSpecificationPost(IPostParameters variableContext, Locale locale, DocumentSpecification ds)

+    throws ManifoldCFException {

+

+    String xc = variableContext.getParameter("paramcount");

+    if (xc != null) {

+      // Delete all existing parameter nodes first

+      int i = 0;

+      while (i < ds.getChildCount()) {

+        SpecificationNode sn = ds.getChild(i);

+        if (sn.getType().equals("param")) {

+          ds.removeChild(i);

+        } else {

+          i++;

+        }

+      }

+

+      int accessCount = Integer.parseInt(xc);

+      i = 0;

+      while (i < accessCount) {

+        String paramDescription = "_" + Integer.toString(i);

+        String paramOpName = "paramop" + paramDescription;

+        xc = variableContext.getParameter(paramOpName);

+        if (xc != null && xc.equals("Delete")) {

+          // Next row

+          i++;

+          continue;

+        }

+        // Get the stuff we need

+        String paramName = variableContext.getParameter("specparamname" + paramDescription);

+        String paramValue = variableContext.getParameter("specparamvalue" + paramDescription);

+        SpecificationNode node = new SpecificationNode("param");

+        node.setAttribute("name", paramName);

+        node.setValue(paramValue);

+        ds.addChild(ds.getChildCount(), node);

+        i++;

+      }

+

+      String op = variableContext.getParameter("paramop");

+      if (op != null && op.equals("Add")) {

+        String paramName = variableContext.getParameter("specparamname");

+        String paramValue = variableContext.getParameter("specparamvalue");

+        SpecificationNode node = new SpecificationNode("param");

+        node.setAttribute("name", paramName);

+        node.setValue(paramValue);

+        ds.addChild(ds.getChildCount(), node);

+      }

+    }

+

+    String genericAuthMode = variableContext.getParameter("genericAuthMode");

+    if (genericAuthMode != null) {

+      // Delete any existing genericAuthMode nodes first

+      int i = 0;

+      while (i < ds.getChildCount()) {

+        SpecificationNode sn = ds.getChild(i);

+        if (sn.getType().equals("genericAuthMode")) {

+          ds.removeChild(i);

+        } else {

+          i++;

+        }

+      }

+      SpecificationNode cn = new SpecificationNode("genericAuthMode");

+      cn.setValue(genericAuthMode);

+      ds.addChild(ds.getChildCount(), cn);

+    }

+

+    xc = variableContext.getParameter("tokencount");

+    if (xc != null) {

+      // Delete all tokens first

+      int i = 0;

+      while (i < ds.getChildCount()) {

+        SpecificationNode sn = ds.getChild(i);

+        if (sn.getType().equals("access")) {

+          ds.removeChild(i);

+        } else {

+          i++;

+        }

+      }

+

+      int accessCount = Integer.parseInt(xc);

+      i = 0;

+      while (i < accessCount) {

+        String accessDescription = "_" + Integer.toString(i);

+        String accessOpName = "accessop" + accessDescription;

+        xc = variableContext.getParameter(accessOpName);

+        if (xc != null && xc.equals("Delete")) {

+          // Next row

+          i++;

+          continue;

+        }

+        // Get the stuff we need

+        String accessSpec = variableContext.getParameter("spectoken" + accessDescription);

+        SpecificationNode node = new SpecificationNode("access");

+        node.setAttribute("token", accessSpec);

+        ds.addChild(ds.getChildCount(), node);

+        i++;

+      }

+

+      String op = variableContext.getParameter("accessop");

+      if (op != null && op.equals("Add")) {

+        String accessspec = variableContext.getParameter("spectoken");

+        SpecificationNode node = new SpecificationNode("access");

+        node.setAttribute("token", accessspec);

+        ds.addChild(ds.getChildCount(), node);

+      }

+    }

+

+    return null;

+  }

+

+  @Override

+  public void viewSpecification(IHTTPOutput out, Locale locale, DocumentSpecification ds)

+    throws ManifoldCFException, IOException {

+    boolean seenAny;

+    int i;

+

+    i = 0;

+    seenAny = false;

+    while (i < ds.getChildCount()) {

+      SpecificationNode sn = ds.getChild(i++);

+      if (sn.getType().equals("param")) {

+        if (seenAny == false) {

+          out.print(

+            "  <tr>\n"

+            + "    <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "generic.Parameters") + "</nobr></td>\n"

+            + "    <td class=\"value\">\n");

+          seenAny = true;

+        }

+        String paramName = sn.getAttributeValue("name");

+        String paramValue = sn.getValue();

+        out.print(Encoder.bodyEscape(paramName) + " = " + Encoder.bodyEscape(paramValue) + "<br/>\n");

+      }

+    }

+

+    if (seenAny) {

+      out.print(

+        "    </td>\n"

+        + "  </tr>\n");

+    } else {

+      out.print(

+        "  <tr><td class=\"message\" colspan=\"4\"><nobr>" + Messages.getBodyString(locale, "generic.NoParametersSpecified") + "</nobr></td></tr>\n");

+    }

+

+    out.print(

+      "  <tr><td class=\"separator\" colspan=\"4\"><hr/></td></tr>\n");

+

+    // Go through looking for access tokens

+    i = 0;

+    seenAny = false;

+    while (i < ds.getChildCount()) {

+      SpecificationNode sn = ds.getChild(i++);

+      if (sn.getType().equals("access")) {

+        if (seenAny == false) {

+          out.print(

+            "  <tr>\n"

+            + "    <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "generic.AccessTokens") + "</nobr></td>\n"

+            + "    <td class=\"value\">\n");

+          seenAny = true;

+        }

+        String token = sn.getAttributeValue("token");

+        out.print(Encoder.bodyEscape(token) + "<br/>\n");

+      }

+    }

+

+    if (seenAny) {

+      out.print(

+        "    </td>\n"

+        + "  </tr>\n");

+    } else {

+      out.print(

+        "  <tr><td class=\"message\" colspan=\"4\"><nobr>" + Messages.getBodyString(locale, "generic.NoAccessTokensSpecified") + "</nobr></td></tr>\n");

+    }

+    out.print(

+      "  <tr><td class=\"separator\" colspan=\"4\"><hr/></td></tr>\n");

+  }

+

+  private String getParam(ConfigParams parameters, String name, String def) {

+    return parameters.getParameter(name) != null ? parameters.getParameter(name) : def;

+  }

+

+  private boolean copyParam(IPostParameters variableContext, ConfigParams parameters, String name) {

+    String val = variableContext.getParameter(name);

+    if (val == null) {

+      return false;

+    }

+    parameters.setParameter(name, val);

+    return true;

+  }

+

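+  /** Collect the distinct access tokens declared in the document specification. */
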
+  protected static String[] getAcls(DocumentSpecification spec) {

+    HashMap map = new HashMap();

+    int i = 0;

+    while (i < spec.getChildCount()) {

+      SpecificationNode sn = spec.getChild(i++);

+      if (sn.getType().equals("access")) {

+        String token = sn.getAttributeValue("token");

+        map.put(token, token);

+      }

+    }

+

+    String[] rval = new String[map.size()];

+    Iterator iter = map.keySet().iterator();

+    i = 0;

+    while (iter.hasNext()) {

+      rval[i++] = (String) iter.next();

+    }

+    return rval;

+  }

+

+  protected static void handleIOException(IOException e)

+    throws ManifoldCFException, ServiceInterruption {

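+    // Interrupted I/O (other than a socket timeout) aborts the thread; any other IOException is retried as a service interruption, retrying after 5 minutes and giving up after 3 hours.
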
+    if (!(e instanceof java.net.SocketTimeoutException) && (e instanceof InterruptedIOException)) {

+      throw new ManifoldCFException("Interrupted: " + e.getMessage(), e, ManifoldCFException.INTERRUPTED);

+    }

+    long currentTime = System.currentTimeMillis();

+    throw new ServiceInterruption("IO exception: " + e.getMessage(), e, currentTime + 300000L,

+      currentTime + 3 * 60 * 60000L, -1, false);

+  }

+

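+  /** Request interceptor that adds a Basic Authorization header to every request, so credentials are sent preemptively. */
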
+  static class PreemptiveAuth implements HttpRequestInterceptor {

+

+    private Credentials credentials;

+

+    public PreemptiveAuth(Credentials creds) {

+      this.credentials = creds;

+    }

+

+    @Override

+    public void process(final HttpRequest request, final HttpContext context) throws HttpException, IOException {

+      request.addHeader(BasicScheme.authenticate(credentials, "US-ASCII", false));

+    }

+  }

+

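+  /** Connection-check worker thread: issues a GET against the entry point and records either a result message or the thrown exception. */
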
+  protected static class CheckThread extends Thread {

+

+    protected HttpClient client;

+

+    protected String url;

+

+    protected Throwable exception = null;

+

+    protected String result = "Unknown";

+

+    public CheckThread(HttpClient client, String url) {

+      super();

+      setDaemon(true);

+      this.client = client;

+      this.url = url;

+    }

+

+    @Override

+    public void run() {

+      HttpGet method = new HttpGet(url);

+      try {

+        HttpResponse response = client.execute(method);

+        try {

+          if (response.getStatusLine().getStatusCode() != HttpStatus.SC_OK) {

+            result = "Connection failed: " + response.getStatusLine().getReasonPhrase();

+            return;

+          }

+          EntityUtils.consume(response.getEntity());

+          result = "Connection OK";

+        } finally {

+          EntityUtils.consume(response.getEntity());

+          method.releaseConnection();

+        }

+      } catch (IOException ex) {

+        exception = ex;

+      }

+    }

+

+    public Throwable getException() {

+      return exception;

+    }

+

+    public String getResult() {

+      return result;

+    }

+  }

+

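+  /** Seeding worker thread: fetches the seeding URL and streams each seed id to the caller through an XThreadStringBuffer, parsing the response with SAX. */
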
+  protected static class ExecuteSeedingThread extends Thread {

+

+    protected HttpClient client;

+

+    protected String url;

+

+    protected final XThreadStringBuffer seedBuffer;

+

+    protected Throwable exception = null;

+

+    public ExecuteSeedingThread(HttpClient client, String url) {

+      super();

+      setDaemon(true);

+      this.client = client;

+      this.url = url;

+      seedBuffer = new XThreadStringBuffer();

+    }

+

+    public XThreadStringBuffer getBuffer() {

+      return seedBuffer;

+    }

+

+    public void finishUp()

+      throws InterruptedException {

+      seedBuffer.abandon();

+      join();

+      Throwable thr = exception;

+      if (thr != null) {

+        if (thr instanceof RuntimeException) {

+          throw (RuntimeException) thr;

+        } else if (thr instanceof Error) {

+          throw (Error) thr;

+        } else {

+          throw new RuntimeException("Unhandled exception of type: " + thr.getClass().getName(), thr);

+        }

+      }

+    }

+

+    @Override

+    public void run() {

+      HttpGet method = new HttpGet(url.toString());

+

+      try {

+        HttpResponse response = client.execute(method);

+        try {

+          if (response.getStatusLine().getStatusCode() != HttpStatus.SC_OK) {

+            exception = new ManifoldCFException("addSeedDocuments error - interface returned incorrect return code for: " + url + " - " + response.getStatusLine().toString());

+            return;

+          }

+

+          try {

+            SAXParserFactory factory = SAXParserFactory.newInstance();

+            factory.setNamespaceAware(true);

+            SAXParser parser = factory.newSAXParser();

+            DefaultHandler handler = new SAXSeedingHandler(seedBuffer);

+            parser.parse(response.getEntity().getContent(), handler);

+          } catch (FactoryConfigurationError ex) {

+            exception = new ManifoldCFException("addSeedDocuments error: " + ex.getMessage(), ex);

+          } catch (ParserConfigurationException ex) {

+            exception = new ManifoldCFException("addSeedDocuments error: " + ex.getMessage(), ex);

+          } catch (SAXException ex) {

+            exception = new ManifoldCFException("addSeedDocuments error: " + ex.getMessage(), ex);

+          }

+          seedBuffer.signalDone();

+        } finally {

+          EntityUtils.consume(response.getEntity());

+          method.releaseConnection();

+        }

+      } catch (IOException ex) {

+        exception = ex;

+      }

+    }

+

+    public Throwable getException() {

+      return exception;

+    }

+  }

+

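+  /** Document-version worker thread: fetches the item list, caches each item, and records a version string for each requested document identifier found in the response. */
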
+  protected static class DocumentVersionThread extends Thread {

+

+    protected HttpClient client;

+

+    protected String url;

+

+    protected Throwable exception = null;

+

+    protected String[] versions;

+

+    protected ConcurrentHashMap<String, Item> documentCache;

+

+    protected String[] documentIdentifiers;

+

+    protected String genericAuthMode;

+

+    protected String defaultRights;

+

+    public DocumentVersionThread(HttpClient client, String url, String[] documentIdentifiers, String genericAuthMode, String defaultRights, ConcurrentHashMap<String, Item> documentCache) {

+      super();

+      setDaemon(true);

+      this.client = client;

+      this.url = url;

+      this.documentCache = documentCache;

+      this.documentIdentifiers = documentIdentifiers;

+      this.genericAuthMode = genericAuthMode;

+      this.defaultRights = defaultRights;

+      this.versions = new String[documentIdentifiers.length];

+      for (int i = 0; i < versions.length; i++) {

+        versions[i] = null;

+      }

+    }

+

+    @Override

+    public void run() {

+      try {

+        HttpGet method = new HttpGet(url.toString());

+

+        HttpResponse response = client.execute(method);

+        try {

+          if (response.getStatusLine().getStatusCode() != HttpStatus.SC_OK) {

+            exception = new ManifoldCFException("addSeedDocuments error - interface returned incorrect return code for: " + url + " - " + response.getStatusLine().toString());

+            return;

+          }

+          JAXBContext context;

+          context = JAXBContext.newInstance(Items.class);

+          Unmarshaller m = context.createUnmarshaller();

+          Items items = (Items) m.unmarshal(response.getEntity().getContent());

+          if (items.items != null) {

+            for (Item item : items.items) {

+              documentCache.put(item.id, item);

+              for (int i = 0; i < versions.length; i++) {

+                if (documentIdentifiers[i].equals(item.id)) {

+                  if ("provided".equals(genericAuthMode)) {

+                    versions[i] = item.getVersionString();

+                  } else {

+                    versions[i] = item.version + defaultRights;

+                  }

+                  break;

+                }

+              }

+            }

+          }

+        } catch (JAXBException ex) {

+          exception = ex;

+        } finally {

+          EntityUtils.consume(response.getEntity());

+          method.releaseConnection();

+        }

+      } catch (Exception ex) {

+        exception = ex;

+      }

+    }

+

+    public Throwable getException() {

+      return exception;

+    }

+

+    public String[] getVersions() {

+      return versions;

+    }

+  }

+

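+  /** Document-fetch worker thread: retrieves a document over HTTP and hands its content to the caller through an XThreadInputStream. */
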
+  protected static class ExecuteProcessThread extends Thread {

+

+    protected HttpClient client;

+

+    protected String url;

+

+    protected Throwable exception = null;

+

+    protected XThreadInputStream threadStream;

+

+    protected boolean abortThread = false;

+

+    protected long streamLength = 0;

+

+    public ExecuteProcessThread(HttpClient client, String url) {

+      super();

+      setDaemon(true);

+      this.client = client;

+      this.url = url;

+    }

+

+    @Override

+    public void run() {

+      try {

+        HttpGet method = new HttpGet(url);

+        HttpResponse response = client.execute(method);

+        try {

+          if (response.getStatusLine().getStatusCode() != HttpStatus.SC_OK) {

+            exception = new ManifoldCFException("processDocuments error - interface returned incorrect return code for: " + url + " - " + response.getStatusLine().toString());

+            return;

+          }

+          synchronized (this) {

+            if (!abortThread) {

+              streamLength = response.getEntity().getContentLength();

+              threadStream = new XThreadInputStream(response.getEntity().getContent());

+              this.notifyAll();

+            }

+          }

+

+          if (threadStream != null) {

+            // Stuff the content until we are done

+            threadStream.stuffQueue();

+          }

+        } catch (Throwable ex) {

+          exception = ex;

+        } finally {

+          EntityUtils.consume(response.getEntity());

+          method.releaseConnection();

+        }

+      } catch (Throwable e) {

+        exception = e;

+      }

+    }

+

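+    // Blocks until the fetch thread has either produced the response stream or recorded an exception.
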
+    public InputStream getSafeInputStream() throws InterruptedException, IOException {

+      while (true) {

+        synchronized (this) {

+          if (exception != null) {

+            throw new IllegalStateException("Check for response before getting stream");

+          }

+          checkException(exception);

+          if (threadStream != null) {

+            return threadStream;

+          }

+          wait();

+        }

+      }

+    }

+

+    public long getStreamLength() throws IOException, InterruptedException {

+      while (true) {

+        synchronized (this) {

+          if (exception != null) {

+            throw new IllegalStateException("Check for response before getting stream");

+          }

+          checkException(exception);

+          if (threadStream != null) {

+            return streamLength;

+          }

+          wait();

+        }

+      }

+    }

+

+    protected synchronized void checkException(Throwable exception)

+      throws IOException {

+      if (exception != null) {

+        Throwable e = exception;

+        if (e instanceof IOException) {

+          throw (IOException) e;

+        } else if (e instanceof RuntimeException) {

+          throw (RuntimeException) e;

+        } else if (e instanceof Error) {

+          throw (Error) e;

+        } else {

+          throw new RuntimeException("Unhandled exception of type: " + e.getClass().getName(), e);

+        }

+      }

+    }

+

+    public void finishUp()

+      throws InterruptedException, IOException {

+      // This will be called during the finally

+      // block in the case where all is well (and

+      // the stream completed) and in the case where

+      // there were exceptions.

+      synchronized (this) {

+        if (threadStream != null) {

+          threadStream.abort();

+        }

+        abortThread = true;

+      }

+      join();

+      checkException(exception);

+    }

+

+    public Throwable getException() {

+      return exception;

+    }

+  }

+

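+  /** SAX handler that pushes the id attribute of each "seed" element onto the shared seed buffer. */
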
+  static public class SAXSeedingHandler extends DefaultHandler {

+

+    protected XThreadStringBuffer seedBuffer;

+

+    public SAXSeedingHandler(XThreadStringBuffer seedBuffer) {

+      this.seedBuffer = seedBuffer;

+    }

+

+    @Override

+    public void startElement(String uri, String localName, String qName, Attributes attributes) throws SAXException {

+      if ("seed".equals(localName) && attributes.getValue("id") != null) {

+        try {

+          seedBuffer.add(attributes.getValue("id"));

+        } catch (InterruptedException ex) {

+          throw new SAXException("Adding seed failed: " + ex.getMessage(), ex);

+        }

+      }

+    }

+  }

+}

diff --git a/connectors/generic/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/generic/Messages.java b/connectors/generic/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/generic/Messages.java
new file mode 100644
index 0000000..cb7a7ae
--- /dev/null
+++ b/connectors/generic/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/generic/Messages.java
@@ -0,0 +1,119 @@
+/* $Id: Messages.java 1295926 2012-03-01 21:56:27Z kwright $ */

+/**

+ * Licensed to the Apache Software Foundation (ASF) under one or more

+ * contributor license agreements. See the NOTICE file distributed with this

+ * work for additional information regarding copyright ownership. The ASF

+ * licenses this file to You under the Apache License, Version 2.0 (the

+ * "License"); you may not use this file except in compliance with the License.

+ * You may obtain a copy of the License at

+ * 

+ * http://www.apache.org/licenses/LICENSE-2.0

+ * 

+ * Unless required by applicable law or agreed to in writing, software

+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT

+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the

+ * License for the specific language governing permissions and limitations under

+ * the License.

+ */

+package org.apache.manifoldcf.crawler.connectors.generic;

+

+import java.util.Locale;

+import java.util.Map;

+import org.apache.manifoldcf.core.interfaces.IHTTPOutput;

+import org.apache.manifoldcf.core.interfaces.ManifoldCFException;

+

+public class Messages extends org.apache.manifoldcf.ui.i18n.Messages {

+

+  public static final String DEFAULT_BUNDLE_NAME = "org.apache.manifoldcf.crawler.connectors.generic.common";

+

+  public static final String DEFAULT_PATH_NAME = "org.apache.manifoldcf.crawler.connectors.generic";

+

+  /**

+   * Constructor - do not instantiate

+   */

+  protected Messages() {

+  }

+

+  public static String getString(Locale locale, String messageKey) {

+    return getString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);

+  }

+

+  public static String getAttributeString(Locale locale, String messageKey) {

+    return getAttributeString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);

+  }

+

+  public static String getBodyString(Locale locale, String messageKey) {

+    return getBodyString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);

+  }

+

+  public static String getAttributeJavascriptString(Locale locale, String messageKey) {

+    return getAttributeJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);

+  }

+

+  public static String getBodyJavascriptString(Locale locale, String messageKey) {

+    return getBodyJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);

+  }

+

+  public static String getString(Locale locale, String messageKey, Object[] args) {

+    return getString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);

+  }

+

+  public static String getAttributeString(Locale locale, String messageKey, Object[] args) {

+    return getAttributeString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);

+  }

+

+  public static String getBodyString(Locale locale, String messageKey, Object[] args) {

+    return getBodyString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);

+  }

+

+  public static String getAttributeJavascriptString(Locale locale, String messageKey, Object[] args) {

+    return getAttributeJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);

+  }

+

+  public static String getBodyJavascriptString(Locale locale, String messageKey, Object[] args) {

+    return getBodyJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);

+  }

+

+  // More general methods which allow bundlenames and class loaders to be specified.

+  public static String getString(String bundleName, Locale locale, String messageKey, Object[] args) {

+    return getString(Messages.class, bundleName, locale, messageKey, args);

+  }

+

+  public static String getAttributeString(String bundleName, Locale locale, String messageKey, Object[] args) {

+    return getAttributeString(Messages.class, bundleName, locale, messageKey, args);

+  }

+

+  public static String getBodyString(String bundleName, Locale locale, String messageKey, Object[] args) {

+    return getBodyString(Messages.class, bundleName, locale, messageKey, args);

+  }

+

+  public static String getAttributeJavascriptString(String bundleName, Locale locale, String messageKey, Object[] args) {

+    return getAttributeJavascriptString(Messages.class, bundleName, locale, messageKey, args);

+  }

+

+  public static String getBodyJavascriptString(String bundleName, Locale locale, String messageKey, Object[] args) {

+    return getBodyJavascriptString(Messages.class, bundleName, locale, messageKey, args);

+  }

+

+  // Resource output

+  public static void outputResource(IHTTPOutput output, Locale locale, String resourceKey,

+    Map<String, String> substitutionParameters, boolean mapToUpperCase)

+    throws ManifoldCFException {

+    outputResource(output, Messages.class, DEFAULT_PATH_NAME, locale, resourceKey,

+      substitutionParameters, mapToUpperCase);

+  }

+

+  public static void outputResourceWithVelocity(IHTTPOutput output, Locale locale, String resourceKey,

+    Map<String, String> substitutionParameters, boolean mapToUpperCase)

+    throws ManifoldCFException {

+    outputResourceWithVelocity(output, Messages.class, DEFAULT_BUNDLE_NAME, DEFAULT_PATH_NAME, locale, resourceKey,

+      substitutionParameters, mapToUpperCase);

+  }

+

+  public static void outputResourceWithVelocity(IHTTPOutput output, Locale locale, String resourceKey,

+    Map<String, Object> contextObjects)

+    throws ManifoldCFException {

+    outputResourceWithVelocity(output, Messages.class, DEFAULT_BUNDLE_NAME, DEFAULT_PATH_NAME, locale, resourceKey,

+      contextObjects);

+  }

+}

diff --git a/connectors/generic/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/generic/api/Auth.java b/connectors/generic/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/generic/api/Auth.java
new file mode 100644
index 0000000..cecd8e1
--- /dev/null
+++ b/connectors/generic/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/generic/api/Auth.java
@@ -0,0 +1,35 @@
+/*
+ * Copyright 2013 The Apache Software Foundation.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.manifoldcf.crawler.connectors.generic.api;
+
+import java.util.List;
+import javax.xml.bind.annotation.XmlAttribute;
+import javax.xml.bind.annotation.XmlElement;
+import javax.xml.bind.annotation.XmlElements;
+import javax.xml.bind.annotation.XmlRootElement;
+import javax.xml.bind.annotation.adapters.XmlJavaTypeAdapter;
+
+@XmlRootElement(name = "auth")
+public class Auth {
+
+  @XmlAttribute(name = "exists", required = true)
+  @XmlJavaTypeAdapter(BooleanAdapter.class)
+  public Boolean exists;
+
+  @XmlElements({
+    @XmlElement(name = "token", type = String.class)})
+  public List<String> tokens;
+}
\ No newline at end of file
diff --git a/connectors/generic/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/generic/api/BooleanAdapter.java b/connectors/generic/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/generic/api/BooleanAdapter.java
new file mode 100644
index 0000000..c78d160
--- /dev/null
+++ b/connectors/generic/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/generic/api/BooleanAdapter.java
@@ -0,0 +1,35 @@
+/*
+ * Copyright 2013 The Apache Software Foundation.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.manifoldcf.crawler.connectors.generic.api;
+
+import javax.xml.bind.annotation.adapters.XmlAdapter;
+
+public class BooleanAdapter extends XmlAdapter<String, Boolean> {
+
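+  /** Treats "true", "1", "on" and "y" (case-insensitive) as true when unmarshalling; any other value maps to false. */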
+  @Override
+  public Boolean unmarshal(String v) throws Exception {
+    v = v.toLowerCase();
+    return "true".equals(v) || "1".equals(v) || "on".equals(v) || "y".equals(v);
+  }
+
+  @Override
+  public String marshal(Boolean v) throws Exception {
+    if (v) {
+      return "true";
+    }
+    return "false";
+  }
+}
\ No newline at end of file
diff --git a/connectors/generic/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/generic/api/DateAdapter.java b/connectors/generic/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/generic/api/DateAdapter.java
new file mode 100644
index 0000000..0d60ff0
--- /dev/null
+++ b/connectors/generic/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/generic/api/DateAdapter.java
@@ -0,0 +1,37 @@
+/*
+ * Copyright 2013 The Apache Software Foundation.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.manifoldcf.crawler.connectors.generic.api;
+
+import java.util.Date;
+import javax.xml.bind.annotation.adapters.XmlAdapter;
+import org.apache.manifoldcf.core.common.DateParser;
+
+/**
+ *
+ * @author krycek
+ */
+public class DateAdapter extends XmlAdapter<String, Date> {
+
+  @Override
+  public Date unmarshal(String v) throws Exception {
+    return DateParser.parseISO8601Date(v);
+  }
+
+  @Override
+  public String marshal(Date v) throws Exception {
+    return DateParser.formatISO8601Date(v);
+  }
+}
diff --git a/connectors/generic/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/generic/api/Item.java b/connectors/generic/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/generic/api/Item.java
new file mode 100644
index 0000000..3b25ab4
--- /dev/null
+++ b/connectors/generic/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/generic/api/Item.java
@@ -0,0 +1,87 @@
+/*
+ * Copyright 2013 The Apache Software Foundation.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.manifoldcf.crawler.connectors.generic.api;
+
+import java.util.Date;
+import java.util.List;
+import javax.xml.bind.annotation.XmlAttribute;
+import javax.xml.bind.annotation.XmlElement;
+import javax.xml.bind.annotation.XmlElementWrapper;
+import javax.xml.bind.annotation.XmlElements;
+import javax.xml.bind.annotation.XmlRootElement;
+import javax.xml.bind.annotation.adapters.XmlJavaTypeAdapter;
+
+/**
+ *
+ * @author krycek
+ */
+@XmlRootElement(name = "item")
+public class Item {
+
+  @XmlAttribute(name = "id", required = true)
+  public String id;
+
+  @XmlElement(name = "url", required = true)
+  public String url;
+
+  @XmlElement(name = "version", required = true)
+  public String version;
+
+  @XmlElement(name = "content")
+  public String content;
+
+  @XmlElement(name = "mimetype")
+  public String mimeType;
+
+  @XmlElement(name = "created")
+  @XmlJavaTypeAdapter(value = DateAdapter.class)
+  public Date created;
+
+  @XmlElement(name = "updated")
+  @XmlJavaTypeAdapter(value = DateAdapter.class)
+  public Date updated;
+
+  @XmlElement(name = "filename")
+  public String fileName;
+
+  @XmlElementWrapper(name = "metadata")
+  @XmlElements(value = {
+    @XmlElement(name = "meta", type = Meta.class)})
+  public List<Meta> metadata;
+
+  @XmlElementWrapper(name = "auth")
+  @XmlElements(value = {
+    @XmlElement(name = "token", type = String.class)})
+  public List<String> auth;
+
+  @XmlElementWrapper(name = "related")
+  @XmlElements(value = {
+    @XmlElement(name = "id", type = String.class)})
+  public List<String> related;
+
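+  /** Version string used by the crawler: the item version followed by any auth tokens, separated by "|". */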
+  public String getVersionString() {
+    if (version == null) {
+      return "";
+    }
+    StringBuilder sb = new StringBuilder(version);
+    if (auth != null) {
+      for (String t : auth) {
+        sb.append("|").append(t);
+      }
+    }
+    return sb.toString();
+  }
+}
diff --git a/connectors/generic/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/generic/api/Items.java b/connectors/generic/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/generic/api/Items.java
new file mode 100644
index 0000000..0ee3ea5
--- /dev/null
+++ b/connectors/generic/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/generic/api/Items.java
@@ -0,0 +1,34 @@
+/*
+ * Copyright 2013 The Apache Software Foundation.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.manifoldcf.crawler.connectors.generic.api;
+
+import java.util.List;
+import javax.xml.bind.annotation.XmlElement;
+import javax.xml.bind.annotation.XmlElements;
+import javax.xml.bind.annotation.XmlRootElement;
+
+/**
+ *
+ * @author krycek
+ */
+@XmlRootElement(name = "items")
+public class Items {
+
+  @XmlElements(value = {
+    @XmlElement(name = "item", type = Item.class)})
+  public List<Item> items;
+}
diff --git a/connectors/generic/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/generic/api/Meta.java b/connectors/generic/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/generic/api/Meta.java
new file mode 100644
index 0000000..649de85
--- /dev/null
+++ b/connectors/generic/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/generic/api/Meta.java
@@ -0,0 +1,35 @@
+/*
+ * Copyright 2013 The Apache Software Foundation.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.manifoldcf.crawler.connectors.generic.api;
+
+import javax.xml.bind.annotation.XmlAttribute;
+import javax.xml.bind.annotation.XmlRootElement;
+import javax.xml.bind.annotation.XmlValue;
+
+/**
+ *
+ * @author krycek
+ */
+@XmlRootElement(name = "meta")
+public class Meta {
+
+  @XmlAttribute(name = "name")
+  public String name;
+
+  @XmlValue
+  public String value;
+}
diff --git a/connectors/generic/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/generic/api/Seed.java b/connectors/generic/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/generic/api/Seed.java
new file mode 100644
index 0000000..c515e14
--- /dev/null
+++ b/connectors/generic/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/generic/api/Seed.java
@@ -0,0 +1,26 @@
+/*
+ * Copyright 2013 The Apache Software Foundation.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.manifoldcf.crawler.connectors.generic.api;
+
+import javax.xml.bind.annotation.XmlAttribute;
+import javax.xml.bind.annotation.XmlRootElement;
+
+@XmlRootElement(name = "seed")
+public class Seed {
+
+  @XmlAttribute(name = "id", required = true)
+  public String id;
+}
diff --git a/connectors/generic/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/generic/api/Seeds.java b/connectors/generic/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/generic/api/Seeds.java
new file mode 100644
index 0000000..7175148
--- /dev/null
+++ b/connectors/generic/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/generic/api/Seeds.java
@@ -0,0 +1,29 @@
+/*
+ * Copyright 2013 The Apache Software Foundation.
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *      http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.manifoldcf.crawler.connectors.generic.api;
+
+import java.util.List;
+import javax.xml.bind.annotation.XmlElement;
+import javax.xml.bind.annotation.XmlElements;
+import javax.xml.bind.annotation.XmlRootElement;
+
+@XmlRootElement(name = "seeds")
+public class Seeds {
+
+  @XmlElements({
+    @XmlElement(name = "seed", type = Seed.class)})
+  public List<Seed> seeds;
+}
diff --git a/connectors/generic/connector/src/main/native2ascii/org/apache/manifoldcf/authorities/authorities/generic/common_en_US.properties b/connectors/generic/connector/src/main/native2ascii/org/apache/manifoldcf/authorities/authorities/generic/common_en_US.properties
new file mode 100644
index 0000000..8d0fc22
--- /dev/null
+++ b/connectors/generic/connector/src/main/native2ascii/org/apache/manifoldcf/authorities/authorities/generic/common_en_US.properties
@@ -0,0 +1,23 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more

+# contributor license agreements.  See the NOTICE file distributed with

+# this work for additional information regarding copyright ownership.

+# The ASF licenses this file to You under the Apache License, Version 2.0

+# (the "License"); you may not use this file except in compliance with

+# the License.  You may obtain a copy of the License at

+#

+#     http://www.apache.org/licenses/LICENSE-2.0

+#

+# Unless required by applicable law or agreed to in writing, software

+# distributed under the License is distributed on an "AS IS" BASIS,

+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

+# See the License for the specific language governing permissions and

+# limitations under the License.

+

+generic.EntryPoint=Entry Point

+generic.EntryPointColon=Entry Point:

+generic.LoginColon=Login:

+generic.PasswordColon=Password:

+generic.EntryPointCannotBeBlank=Entry point cannot be blank

+generic.ConnectionTimeoutColon=Connection timeout (millis):

+generic.SocketTimeoutColon=Socket timeout (millis):

+generic.ResponseLifetimeColon=Cache responses (millis):
\ No newline at end of file
diff --git a/connectors/generic/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/generic/common_en_US.properties b/connectors/generic/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/generic/common_en_US.properties
new file mode 100644
index 0000000..89aa80a
--- /dev/null
+++ b/connectors/generic/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/generic/common_en_US.properties
@@ -0,0 +1,42 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more

+# contributor license agreements.  See the NOTICE file distributed with

+# this work for additional information regarding copyright ownership.

+# The ASF licenses this file to You under the Apache License, Version 2.0

+# (the "License"); you may not use this file except in compliance with

+# the License.  You may obtain a copy of the License at

+#

+#     http://www.apache.org/licenses/LICENSE-2.0

+#

+# Unless required by applicable law or agreed to in writing, software

+# distributed under the License is distributed on an "AS IS" BASIS,

+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

+# See the License for the specific language governing permissions and

+# limitations under the License.

+

+generic.EntryPoint=Entry Point

+generic.EntryPointColon=Entry Point:

+generic.LoginColon=Login:

+generic.PasswordColon=Password:

+generic.ConnectionTimeoutColon=Connection timeout (millis):

+generic.SocketTimeoutColon=Socket timeout (millis):

+

+generic.Parameters=Parameters

+generic.ParametersColon=Parameters:

+generic.Token=Access token

+generic.TokensColon=Tokens:

+generic.Security=Security

+generic.Delete=Delete

+generic.Add=Add

+generic.DeleteToken=Delete token

+generic.DeleteParameter=Delete parameter

+generic.AddParameter=Add parameter

+generic.AddAccessToken=Add access token

+generic.NoAccessTokensSpecified=No access tokens defined.

+generic.NoParametersSpecified=No parameters specified.

+generic.TypeInAnAccessToken=Access token cannot be empty

+generic.TypeInParamName=Parameter name cannot be empty

+generic.AuthMode=Authorization type

+generic.AuthModeForced=forced

+generic.AuthModeProvided=provided from API

+generic.AccessTokens=Access tokens

+

+generic.ParameterName=Parameter name

+generic.ParameterValue=Parameter value

diff --git a/connectors/generic/pom.xml b/connectors/generic/pom.xml
new file mode 100644
index 0000000..8ce577e
--- /dev/null
+++ b/connectors/generic/pom.xml
@@ -0,0 +1,118 @@
+<?xml version="1.0" encoding="UTF-8"?>

+<!--

+ Licensed to the Apache Software Foundation (ASF) under one or more

+ contributor license agreements.  See the NOTICE file distributed with

+ this work for additional information regarding copyright ownership.

+ The ASF licenses this file to You under the Apache License, Version 2.0

+ (the "License"); you may not use this file except in compliance with

+ the License.  You may obtain a copy of the License at

+

+     http://www.apache.org/licenses/LICENSE-2.0

+

+ Unless required by applicable law or agreed to in writing, software

+ distributed under the License is distributed on an "AS IS" BASIS,

+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

+ See the License for the specific language governing permissions and

+ limitations under the License.

+-->

+

+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">

+    <parent>

+        <groupId>org.apache.manifoldcf</groupId>

+        <artifactId>mcf-connectors</artifactId>

+        <version>1.5-SNAPSHOT</version>

+    </parent>

+    <modelVersion>4.0.0</modelVersion>

+

+    <artifactId>mcf-generic-connector</artifactId>

+    <name>ManifoldCF - Connectors - Generic</name>

+

+    <build>

+        <sourceDirectory>${basedir}/connector/src/main/java</sourceDirectory>

+        <testSourceDirectory>${basedir}/connector/src/test/java</testSourceDirectory>

+        <resources>

+            <resource>

+                <directory>${basedir}/connector/src/main/native2ascii</directory>

+                <includes>

+                    <include>**/*.properties</include>

+                </includes>

+            </resource>

+        </resources>

+        <plugins>

+            <plugin>

+                <groupId>org.codehaus.mojo</groupId>

+                <artifactId>native2ascii-maven-plugin</artifactId>

+                <version>1.0-beta-1</version>

+                <configuration>

+                    <dest>target/classes</dest>

+                    <src>connector/src/main/native2ascii</src>

+                </configuration>

+                <executions>

+                    <execution>

+                        <id>native2ascii-utf8</id>

+                        <goals>

+                            <goal>native2ascii</goal>

+                        </goals>

+                        <configuration>

+                            <encoding>UTF8</encoding>

+                            <includes>

+                                <include>**/*.properties</include>

+                            </includes>

+                        </configuration>

+                    </execution>

+                </executions>

+            </plugin>

+        </plugins>

+    </build>

+

+    <dependencies>

+        <dependency>

+            <groupId>${project.groupId}</groupId>

+            <artifactId>mcf-core</artifactId>

+            <version>${project.version}</version>

+        </dependency>

+        <dependency>

+            <groupId>${project.groupId}</groupId>

+            <artifactId>mcf-agents</artifactId>

+            <version>${project.version}</version>

+        </dependency>

+        <dependency>

+            <groupId>${project.groupId}</groupId>

+            <artifactId>mcf-pull-agent</artifactId>

+            <version>${project.version}</version>

+        </dependency>

+        <dependency>

+            <groupId>${project.groupId}</groupId>

+            <artifactId>mcf-ui-core</artifactId>

+            <version>${project.version}</version>

+        </dependency>

+        <dependency>

+            <groupId>commons-logging</groupId>

+            <artifactId>commons-logging</artifactId>

+            <version>${commons-logging.version}</version>

+        </dependency>

+        <dependency>

+            <groupId>junit</groupId>

+            <artifactId>junit</artifactId>

+            <version>${junit.version}</version>

+            <scope>test</scope>

+        </dependency>

+        

+        <dependency>

+            <groupId>xerces</groupId>

+            <artifactId>xercesImpl</artifactId>

+            <version>${xerces.version}</version>

+        </dependency>

+        <dependency>

+            <groupId>com.sun.xml.bind</groupId>

+            <artifactId>jaxb-impl</artifactId>

+            <version>${jaxb.version}</version>

+        </dependency>

+        <dependency>

+            <groupId>org.apache.httpcomponents</groupId>

+            <artifactId>httpclient</artifactId>

+            <version>${httpcomponent.httpclient.version}</version>

+        </dependency>

+    </dependencies>

+</project>

+

diff --git a/connectors/googledrive/build.xml b/connectors/googledrive/build.xml
new file mode 100644
index 0000000..36824c2
--- /dev/null
+++ b/connectors/googledrive/build.xml
@@ -0,0 +1,40 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<project name="googledrive" default="all">
+
+    <import file="../connector-build.xml"/>
+
+    <path id="connector-classpath">
+        <path refid="mcf-connector-build.connector-classpath"/>
+        <fileset dir="../../lib">
+            <include name="google-*.jar"/>
+            <include name="jackson-core.jar"/>
+        </fileset>
+    </path>
+
+    <target name="lib" depends="mcf-connector-build.lib,precompile-check" if="canBuild">
+        <mkdir dir="dist/lib"/>
+        <copy todir="dist/lib">
+            <fileset dir="../../lib">
+                <include name="google-*.jar"/>
+                <include name="jackson-core.jar"/>
+            </fileset>
+        </copy>
+    </target>
+
+</project>
diff --git a/connectors/googledrive/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/googledrive/GoogleDriveConfig.java b/connectors/googledrive/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/googledrive/GoogleDriveConfig.java
new file mode 100644
index 0000000..2d52fad
--- /dev/null
+++ b/connectors/googledrive/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/googledrive/GoogleDriveConfig.java
@@ -0,0 +1,34 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+
+package org.apache.manifoldcf.crawler.connectors.googledrive;
+
+/**
+ *
+ * @author andrew
+ */
+public class GoogleDriveConfig {
+
+  public static final String CLIENT_ID_PARAM = "clientid";
+  public static final String CLIENT_SECRET_PARAM = "clientsecret";
+  public static final String REFRESH_TOKEN_PARAM = "refreshtoken";
+  public static final String REPOSITORY_ID_DEFAULT_VALUE = "googledrive";
+  public static final String GOOGLEDRIVE_QUERY_PARAM = "googledriveQuery";
+  public static final String GOOGLEDRIVE_QUERY_DEFAULT = "mimeType='application/vnd.google-apps.folder' and trashed=false";
+}
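
The GoogleDriveConfig constants above are the configuration keys that the rest of this patch reads and writes through ConfigParams (see connect() and processConfigurationPost() in GoogleDriveRepositoryConnector below). The following is a minimal sketch, not part of the commit, of populating a connection with these keys; it assumes ConfigParams offers a no-argument constructor, reuses the setParameter/setObfuscatedParameter/getParameter calls that appear later in this patch, and uses placeholder credential values:

    package org.apache.manifoldcf.crawler.connectors.googledrive;

    import org.apache.manifoldcf.core.interfaces.ConfigParams;

    /** Illustrative sketch only; not part of this patch. */
    public class GoogleDriveConfigExample {
      public static void main(String[] args) throws Exception {
        ConfigParams parameters = new ConfigParams();
        // Placeholder credentials; real values come from the Google API console.
        parameters.setParameter(GoogleDriveConfig.CLIENT_ID_PARAM, "my-client-id");
        parameters.setObfuscatedParameter(GoogleDriveConfig.CLIENT_SECRET_PARAM, "my-client-secret");
        parameters.setParameter(GoogleDriveConfig.REFRESH_TOKEN_PARAM, "my-refresh-token");
        System.out.println("clientid = " + parameters.getParameter(GoogleDriveConfig.CLIENT_ID_PARAM));
      }
    }
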
diff --git a/connectors/googledrive/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/googledrive/GoogleDriveRepositoryConnector.java b/connectors/googledrive/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/googledrive/GoogleDriveRepositoryConnector.java
new file mode 100644
index 0000000..dcc60d8
--- /dev/null
+++ b/connectors/googledrive/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/googledrive/GoogleDriveRepositoryConnector.java
@@ -0,0 +1,1394 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+
+package org.apache.manifoldcf.crawler.connectors.googledrive;
+
+import org.apache.manifoldcf.core.common.*;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.InterruptedIOException;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Locale;
+import java.util.Map;
+import java.util.Date;
+import java.util.Set;
+import java.util.Iterator;
+import org.apache.manifoldcf.crawler.system.Logging;
+import org.apache.manifoldcf.crawler.connectors.BaseRepositoryConnector;
+import org.apache.manifoldcf.agents.interfaces.ServiceInterruption;
+import org.apache.manifoldcf.core.interfaces.ConfigParams;
+import org.apache.manifoldcf.core.interfaces.ManifoldCFException;
+import org.apache.commons.lang.StringUtils;
+import org.apache.manifoldcf.agents.interfaces.RepositoryDocument;
+import org.apache.manifoldcf.core.interfaces.IHTTPOutput;
+import org.apache.manifoldcf.core.interfaces.IPasswordMapperActivity;
+import org.apache.manifoldcf.core.interfaces.IPostParameters;
+import org.apache.manifoldcf.core.interfaces.IThreadContext;
+import org.apache.manifoldcf.core.interfaces.SpecificationNode;
+import org.apache.manifoldcf.crawler.interfaces.DocumentSpecification;
+import org.apache.manifoldcf.crawler.interfaces.IProcessActivity;
+import org.apache.manifoldcf.crawler.interfaces.ISeedingActivity;
+import org.apache.log4j.Logger;
+import com.google.api.services.drive.model.File;
+import com.google.api.client.util.DateTime;
+import java.io.ByteArrayInputStream;
+import java.io.ByteArrayOutputStream;
+import java.io.FileOutputStream;
+import java.io.OutputStream;
+import java.util.Map.Entry;
+import java.security.GeneralSecurityException;
+/**
+ *
+ * @author andrew
+ */
+public class GoogleDriveRepositoryConnector extends BaseRepositoryConnector {
+
+  protected final static String ACTIVITY_READ = "read document";
+  public final static String ACTIVITY_FETCH = "fetch";
+  protected static final String RELATIONSHIP_CHILD = "child";
+  
+  /** Deny access token for default authority */
+  private final static String defaultAuthorityDenyToken = "DEAD_AUTHORITY";
+
+  // Nodes
+  private static final String JOB_STARTPOINT_NODE_TYPE = "startpoint";
+  private static final String JOB_QUERY_ATTRIBUTE = "query";
+  private static final String JOB_ACCESS_NODE_TYPE = "access";
+  private static final String JOB_TOKEN_ATTRIBUTE = "token";
+
+  // Configuration tabs
+  private static final String GOOGLEDRIVE_SERVER_TAB_PROPERTY = "GoogleDriveRepositoryConnector.Server";
+  
+  // Specification tabs
+  private static final String GOOGLEDRIVE_QUERY_TAB_PROPERTY = "GoogleDriveRepositoryConnector.GoogleDriveQuery";
+  private static final String GOOGLEDRIVE_SECURITY_TAB_PROPERTY = "GoogleDriveRepositoryConnector.Security";
+  
+  // Template names for configuration
+  /**
+   * Forward to the javascript to check the configuration parameters
+   */
+  private static final String EDIT_CONFIG_HEADER_FORWARD = "editConfiguration_google_server.js";
+  /**
+   * Server tab template
+   */
+  private static final String EDIT_CONFIG_FORWARD_SERVER = "editConfiguration_google_server.html";
+  
+  /**
+   * Forward to the HTML template to view the configuration parameters
+   */
+  private static final String VIEW_CONFIG_FORWARD = "viewConfiguration_googledrive.html";
+   
+  // Template names for specification
+  /**
+   * Forward to the javascript to check the specification parameters for the job
+   */
+  private static final String EDIT_SPEC_HEADER_FORWARD = "editSpecification_googledrive.js";
+  /**
+   * Forward to the template to edit the query for the job
+   */
+  private static final String EDIT_SPEC_FORWARD_GOOGLEDRIVEQUERY = "editSpecification_googledriveQuery.html";
+  /**
+   * Forward to the template to edit the security parameters for the job
+   */
+  private static final String EDIT_SPEC_FORWARD_SECURITY = "editSpecification_googledriveSecurity.html";
+  
+  /**
+   * Forward to the template to view the specification parameters for the job
+   */
+  private static final String VIEW_SPEC_FORWARD = "viewSpecification_googledrive.html";
+  
+  /**
+   * Endpoint server name
+   */
+  protected String server = "googledrive";
+  protected GoogleDriveSession session = null;
+  protected long lastSessionFetch = -1L;
+  protected static final long timeToRelease = 300000L;
+  protected String clientid = null;
+  protected String clientsecret = null;
+  protected String refreshtoken = null;
+
+  public GoogleDriveRepositoryConnector() {
+    super();
+  }
+
+  /**
+   * Return the list of activities that this connector supports (i.e. writes
+   * into the log).
+   *
+   * @return the list.
+   */
+  @Override
+  public String[] getActivitiesList() {
+    return new String[]{ACTIVITY_FETCH, ACTIVITY_READ};
+  }
+
+  /**
+   * Get the bin name strings for a document identifier. The bin name
+   * describes the queue to which the document will be assigned for throttling
+   * purposes. Throttling controls the rate at which items in a given queue
+   * are fetched; it does not say anything about the overall fetch rate, which
+   * may operate on multiple queues or bins. For example, if you implement a
+   * web crawler, a good choice of bin name would be the server name, since
+   * that is likely to correspond to a real resource that will need real
+   * throttle protection.
+   *
+   * @param documentIdentifier is the document identifier.
+   * @return the set of bin names. If an empty array is returned, it is
+   * equivalent to there being no request rate throttling available for this
+   * identifier.
+   */
+  @Override
+  public String[] getBinNames(String documentIdentifier) {
+    return new String[]{server};
+  }
+
+  /**
+   * Close the connection. Call this before discarding the connection.
+   */
+  @Override
+  public void disconnect() throws ManifoldCFException {
+    if (session != null) {
+      session.close();
+      session = null;
+      lastSessionFetch = -1L;
+    }
+
+    clientid = null;
+    clientsecret = null;
+    refreshtoken = null;
+  }
+
+  /**
+   * This method creates a new GOOGLEDRIVE session for a GOOGLEDRIVE
+   * repository. If the repositoryId is not provided in the configuration, the
+   * connector will retrieve all the repositories exposed for this endpoint
+   * and then start to use the first one.
+   *
+   * @param configParameters is the set of configuration parameters, which in
+   * this case describe the target appliance, basic auth configuration, etc.
+   * (This formerly came out of the ini file.)
+   */
+  @Override
+  public void connect(ConfigParams configParams) {
+    super.connect(configParams);
+
+    clientid = params.getParameter(GoogleDriveConfig.CLIENT_ID_PARAM);
+    clientsecret = params.getObfuscatedParameter(GoogleDriveConfig.CLIENT_SECRET_PARAM);
+    refreshtoken = params.getParameter(GoogleDriveConfig.REFRESH_TOKEN_PARAM);
+  }
+
+  /**
+   * Test the connection. Returns a string describing the connection
+   * integrity.
+   *
+   * @return the connection's status as a displayable string.
+   */
+  @Override
+  public String check() throws ManifoldCFException {
+    try {
+      checkConnection();
+      return super.check();
+    } catch (ServiceInterruption e) {
+      return "Connection temporarily failed: " + e.getMessage();
+    } catch (ManifoldCFException e) {
+      return "Connection failed: " + e.getMessage();
+    }
+  }
+
+  protected class CheckConnectionThread extends Thread {
+
+    protected Throwable exception = null;
+
+    public CheckConnectionThread() {
+      super();
+      setDaemon(true);
+    }
+
+    public void run() {
+      try {
+        session.getRepositoryInfo();
+      } catch (Throwable e) {
+        this.exception = e;
+      }
+    }
+
+    public Throwable getException() {
+      return exception;
+    }
+  }
+
+  protected void checkConnection() throws ManifoldCFException, ServiceInterruption {
+    getSession();
+    CheckConnectionThread t = new CheckConnectionThread();
+    try {
+      t.start();
+      t.join();
+      Throwable thr = t.getException();
+      if (thr != null) {
+        if (thr instanceof IOException) {
+          throw (IOException) thr;
+        } else if (thr instanceof RuntimeException) {
+          throw (RuntimeException) thr;
+        } else {
+          throw (Error) thr;
+        }
+      }
+      return;
+    } catch (InterruptedException e) {
+      t.interrupt();
+      throw new ManifoldCFException("Interrupted: " + e.getMessage(), e,
+        ManifoldCFException.INTERRUPTED);
+    } catch (java.net.SocketTimeoutException e) {
+      Logging.connectors.warn("GOOGLEDRIVE: Socket timeout: " + e.getMessage(), e);
+      handleIOException(e);
+    } catch (InterruptedIOException e) {
+      t.interrupt();
+      throw new ManifoldCFException("Interrupted: " + e.getMessage(), e,
+        ManifoldCFException.INTERRUPTED);
+    } catch (IOException e) {
+      Logging.connectors.warn("GOOGLEDRIVE: Error checking repository: " + e.getMessage(), e);
+      handleIOException(e);
+    }
+  }
+
+  /**
+   * Set up a session
+   */
+  protected void getSession() throws ManifoldCFException, ServiceInterruption {
+    if (session == null) {
+      // Check for parameter validity
+
+      if (StringUtils.isEmpty(clientid)) {
+        throw new ManifoldCFException("Parameter " + GoogleDriveConfig.CLIENT_ID_PARAM
+            + " required but not set");
+      }
+
+      if (Logging.connectors.isDebugEnabled()) {
+        Logging.connectors.debug("GOOGLEDRIVE: Clientid = '" + clientid + "'");
+      }
+
+      if (StringUtils.isEmpty(clientsecret)) {
+        throw new ManifoldCFException("Parameter " + GoogleDriveConfig.CLIENT_SECRET_PARAM
+            + " required but not set");
+      }
+
+      if (Logging.connectors.isDebugEnabled()) {
+        Logging.connectors.debug("GOOGLEDRIVE: Clientsecret = '" + clientsecret + "'");
+      }
+
+      if (StringUtils.isEmpty(refreshtoken)) {
+        throw new ManifoldCFException("Parameter " + GoogleDriveConfig.REFRESH_TOKEN_PARAM
+            + " required but not set");
+      }
+
+      if (Logging.connectors.isDebugEnabled()) {
+        Logging.connectors.debug("GOOGLEDRIVE: refreshtoken = '" + refreshtoken + "'");
+      }
+
+
+
+      long currentTime;
+      GetSessionThread t = new GetSessionThread();
+      try {
+        t.start();
+        t.join();
+        Throwable thr = t.getException();
+        if (thr != null) {
+          if (thr instanceof IOException) {
+            throw (IOException) thr;
+          } else if (thr instanceof GeneralSecurityException) {
+            throw (GeneralSecurityException) thr;
+          } else {
+            throw (Error) thr;
+          }
+
+        }
+      } catch (InterruptedException e) {
+        t.interrupt();
+        throw new ManifoldCFException("Interrupted: " + e.getMessage(), e,
+            ManifoldCFException.INTERRUPTED);
+      } catch (java.net.SocketTimeoutException e) {
+        Logging.connectors.warn("GOOGLEDRIVE: Socket timeout: " + e.getMessage(), e);
+        handleIOException(e);
+      } catch (InterruptedIOException e) {
+        t.interrupt();
+        throw new ManifoldCFException("Interrupted: " + e.getMessage(), e,
+            ManifoldCFException.INTERRUPTED);
+      } catch (GeneralSecurityException e) {
+        Logging.connectors.error("GOOGLEDRIVE: " +  "General security error initializing transport: " + e.getMessage(), e);
+        handleGeneralSecurityException(e);
+      } catch (IOException e) {
+        Logging.connectors.warn("GOOGLEDRIVE: IO error: " + e.getMessage(), e);
+        handleIOException(e);
+      }
+
+    }
+    lastSessionFetch = System.currentTimeMillis();
+  }
+
+  protected class GetSessionThread extends Thread {
+
+    protected Throwable exception = null;
+
+    public GetSessionThread() {
+      super();
+      setDaemon(true);
+    }
+
+    public void run() {
+      try {
+        // Create a session
+        session = new GoogleDriveSession(clientid, clientsecret, refreshtoken);
+      } catch (Throwable e) {
+        this.exception = e;
+      }
+    }
+
+    public Throwable getException() {
+      return exception;
+    }
+  }
+
+  @Override
+  public void poll() throws ManifoldCFException {
+    if (lastSessionFetch == -1L) {
+      return;
+    }
+
+    long currentTime = System.currentTimeMillis();
+    if (currentTime >= lastSessionFetch + timeToRelease) {
+      session.close();
+      session = null;
+      lastSessionFetch = -1L;
+    }
+  }
+
+  /** This method is called to assess whether this connector instance should
+  * actually be counted as being connected.
+  *@return true if the connector instance is actually connected.
+  */
+  @Override
+  public boolean isConnected()
+  {
+    return session != null;
+  }
+
+  /**
+   * Get the maximum number of documents to amalgamate together into one
+   * batch, for this connector.
+   *
+   * @return the maximum number. 0 indicates "unlimited".
+   */
+  @Override
+  public int getMaxDocumentRequest() {
+    return 1;
+  }
+
+  /**
+   * Return the list of relationship types that this connector recognizes.
+   *
+   * @return the list.
+   */
+  @Override
+  public String[] getRelationshipTypes() {
+    return new String[]{RELATIONSHIP_CHILD};
+  }
+
+  /**
+   * Fill in a Server tab configuration parameter map for calling a Velocity
+   * template.
+   *
+   * @param newMap is the map to fill in
+   * @param parameters is the current set of configuration parameters
+   */
+  private static void fillInServerConfigurationMap(Map<String, Object> newMap, IPasswordMapperActivity mapper, ConfigParams parameters) {
+    String clientid = parameters.getParameter(GoogleDriveConfig.CLIENT_ID_PARAM);
+    String clientsecret = parameters.getObfuscatedParameter(GoogleDriveConfig.CLIENT_SECRET_PARAM);
+    String refreshtoken = parameters.getParameter(GoogleDriveConfig.REFRESH_TOKEN_PARAM);
+
+    if (clientid == null) {
+      clientid = StringUtils.EMPTY;
+    }
+    
+    if (clientsecret == null) {
+      clientsecret = StringUtils.EMPTY;
+    } else {
+      clientsecret = mapper.mapPasswordToKey(clientsecret);
+    }
+
+    if (refreshtoken == null) {
+      refreshtoken = StringUtils.EMPTY;
+    }
+
+    newMap.put("CLIENTID", clientid);
+    newMap.put("CLIENTSECRET", clientsecret);
+    newMap.put("REFRESHTOKEN", refreshtoken);
+  }
+
+  /**
+   * View configuration. This method is called in the body section of the
+   * connector's view configuration page. Its purpose is to present the
+   * connection information to the user. The coder can presume that the HTML
+   * that is output from this configuration will be within appropriate <html>
+   * and <body> tags.
+   *
+   * @param threadContext is the local thread context.
+   * @param out is the output to which any HTML should be sent.
+   * @param parameters are the configuration parameters, as they currently
+   * exist, for this connection being configured.
+   */
+  @Override
+  public void viewConfiguration(IThreadContext threadContext, IHTTPOutput out,
+      Locale locale, ConfigParams parameters) throws ManifoldCFException, IOException {
+    Map<String, Object> paramMap = new HashMap<String, Object>();
+
+    // Fill in map from each tab
+    fillInServerConfigurationMap(paramMap, out, parameters);
+
+    Messages.outputResourceWithVelocity(out,locale,VIEW_CONFIG_FORWARD,paramMap);
+  }
+
+  /**
+   *
+   * Output the configuration header section. This method is called in the
+   * head section of the connector's configuration page. Its purpose is to add
+   * the required tabs to the list, and to output any javascript methods that
+   * might be needed by the configuration editing HTML.
+   *
+   * @param threadContext is the local thread context.
+   * @param out is the output to which any HTML should be sent.
+   * @param parameters are the configuration parameters, as they currently
+   * exist, for this connection being configured.
+   * @param tabsArray is an array of tab names. Add to this array any tab
+   * names that are specific to the connector.
+   */
+  @Override
+  public void outputConfigurationHeader(IThreadContext threadContext,
+      IHTTPOutput out, Locale locale, ConfigParams parameters, List<String> tabsArray)
+      throws ManifoldCFException, IOException {
+    // Add the Server tab
+    tabsArray.add(Messages.getString(locale, GOOGLEDRIVE_SERVER_TAB_PROPERTY));
+    // Map the parameters
+    Map<String, Object> paramMap = new HashMap<String, Object>();
+
+    // Fill in the parameters from each tab
+    fillInServerConfigurationMap(paramMap, out, parameters);
+
+    // Output the Javascript - only one Velocity template for all tabs
+    Messages.outputResourceWithVelocity(out,locale,EDIT_CONFIG_HEADER_FORWARD,paramMap);
+  }
+
+  @Override
+  public void outputConfigurationBody(IThreadContext threadContext,
+      IHTTPOutput out, Locale locale, ConfigParams parameters, String tabName)
+      throws ManifoldCFException, IOException {
+
+
+    // Call the Velocity templates for each tab
+    Map<String, Object> paramMap = new HashMap<String, Object>();
+    // Set the tab name
+    paramMap.put("TabName", tabName);
+
+    // Server tab
+    // Fill in the parameters
+    fillInServerConfigurationMap(paramMap, out, parameters);
+    Messages.outputResourceWithVelocity(out,locale,EDIT_CONFIG_FORWARD_SERVER,paramMap);
+  }
+
+  /**
+   * Process a configuration post. This method is called at the start of the
+   * connector's configuration page, whenever there is a possibility that form
+   * data for a connection has been posted. Its purpose is to gather form
+   * information and modify the configuration parameters accordingly. The name
+   * of the posted form is "editconnection".
+   *
+   * @param threadContext is the local thread context.
+   * @param variableContext is the set of variables available from the post,
+   * including binary file post information.
+   * @param parameters are the configuration parameters, as they currently
+   * exist, for this connection being configured.
+   * @return null if all is well, or a string error message if there is an
+   * error that should prevent saving of the connection (and cause a
+   * redirection to an error page).
+   *
+   */
+  @Override
+  public String processConfigurationPost(IThreadContext threadContext,
+      IPostParameters variableContext, ConfigParams parameters)
+      throws ManifoldCFException {
+
+    String clientid = variableContext.getParameter(GoogleDriveConfig.CLIENT_ID_PARAM);
+    if (clientid != null) {
+      parameters.setParameter(GoogleDriveConfig.CLIENT_ID_PARAM, clientid);
+    }
+
+    String clientsecret = variableContext.getParameter(GoogleDriveConfig.CLIENT_SECRET_PARAM);
+    if (clientsecret != null) {
+      parameters.setObfuscatedParameter(GoogleDriveConfig.CLIENT_SECRET_PARAM, variableContext.mapKeyToPassword(clientsecret));
+    }
+
+    String refreshtoken = variableContext.getParameter(GoogleDriveConfig.REFRESH_TOKEN_PARAM);
+    if (refreshtoken != null) {
+      parameters.setParameter(GoogleDriveConfig.REFRESH_TOKEN_PARAM, refreshtoken);
+    }
+
+    return null;
+  }
+
+  /**
+   * Fill in specification Velocity parameter map for GOOGLEDRIVEQuery tab.
+   */
+  private static void fillInGOOGLEDRIVEQuerySpecificationMap(Map<String, Object> newMap, DocumentSpecification ds) {
+    String GoogleDriveQuery = GoogleDriveConfig.GOOGLEDRIVE_QUERY_DEFAULT;
+    for (int i = 0; i < ds.getChildCount(); i++) {
+      SpecificationNode sn = ds.getChild(i);
+      if (sn.getType().equals(JOB_STARTPOINT_NODE_TYPE)) {
+        GoogleDriveQuery = sn.getAttributeValue(JOB_QUERY_ATTRIBUTE);
+      }
+    }
+    newMap.put("GOOGLEDRIVEQUERY", GoogleDriveQuery);
+  }
+
+  /**
+   * Fill in specification Velocity parameter map for GOOGLEDRIVESecurity tab.
+   */
+  private static void fillInGOOGLEDRIVESecuritySpecificationMap(Map<String, Object> newMap, DocumentSpecification ds) {
+    List<Map<String,String>> accessTokenList = new ArrayList<Map<String,String>>();
+    for (int i = 0; i < ds.getChildCount(); i++) {
+      SpecificationNode sn = ds.getChild(i);
+      if (sn.getType().equals(JOB_ACCESS_NODE_TYPE)) {
+        String token = sn.getAttributeValue(JOB_TOKEN_ATTRIBUTE);
+        Map<String,String> accessMap = new HashMap<String,String>();
+        accessMap.put("TOKEN",token);
+        accessTokenList.add(accessMap);
+      }
+    }
+    newMap.put("ACCESSTOKENS", accessTokenList);
+  }
+
+  /**
+   * View specification. This method is called in the body section of a job's
+   * view page. Its purpose is to present the document specification
+   * information to the user. The coder can presume that the HTML that is
+   * output from this configuration will be within appropriate <html> and
+   * <body> tags.
+   *
+   * @param out is the output to which any HTML should be sent.
+   * @param ds is the current document specification for this job.
+   */
+  @Override
+  public void viewSpecification(IHTTPOutput out, Locale locale, DocumentSpecification ds)
+      throws ManifoldCFException, IOException {
+
+    Map<String, Object> paramMap = new HashMap<String, Object>();
+
+    // Fill in the map with data from all tabs
+    fillInGOOGLEDRIVEQuerySpecificationMap(paramMap, ds);
+    fillInGOOGLEDRIVESecuritySpecificationMap(paramMap, ds);
+
+    Messages.outputResourceWithVelocity(out,locale,VIEW_SPEC_FORWARD,paramMap);
+  }
+
+  /**
+   * Process a specification post. This method is called at the start of a job's
+   * edit or view page, whenever there is a possibility that form data for a
+   * connection has been posted. Its purpose is to gather form information and
+   * modify the document specification accordingly. The name of the posted
+   * form is "editjob".
+   *
+   * @param variableContext contains the post data, including binary
+   * file-upload information.
+   * @param ds is the current document specification for this job.
+   * @return null if all is well, or a string error message if there is an
+   * error that should prevent saving of the job (and cause a redirection to
+   * an error page).
+   */
+  @Override
+  public String processSpecificationPost(IPostParameters variableContext,
+      DocumentSpecification ds) throws ManifoldCFException {
+
+    String googleDriveQuery = variableContext.getParameter("googledrivequery");
+    if (googleDriveQuery != null) {
+      int i = 0;
+      while (i < ds.getChildCount()) {
+        SpecificationNode oldNode = ds.getChild(i);
+        if (oldNode.getType().equals(JOB_STARTPOINT_NODE_TYPE)) {
+          ds.removeChild(i);
+          break;
+        }
+        i++;
+      }
+      SpecificationNode node = new SpecificationNode(JOB_STARTPOINT_NODE_TYPE);
+      node.setAttribute(JOB_QUERY_ATTRIBUTE, googleDriveQuery);
+      ds.addChild(ds.getChildCount(), node);
+    }
+    
+    String xc = variableContext.getParameter("tokencount");
+    if (xc != null) {
+      // Delete all tokens first
+      int i = 0;
+      while (i < ds.getChildCount()) {
+        SpecificationNode sn = ds.getChild(i);
+        if (sn.getType().equals(JOB_ACCESS_NODE_TYPE))
+          ds.removeChild(i);
+        else
+          i++;
+      }
+
+      int accessCount = Integer.parseInt(xc);
+      i = 0;
+      while (i < accessCount) {
+        String accessDescription = "_"+Integer.toString(i);
+        String accessOpName = "accessop"+accessDescription;
+        xc = variableContext.getParameter(accessOpName);
+        if (xc != null && xc.equals("Delete")) {
+          // Next row
+          i++;
+          continue;
+        }
+        // Get the stuff we need
+        String accessSpec = variableContext.getParameter("spectoken"+accessDescription);
+        SpecificationNode node = new SpecificationNode(JOB_ACCESS_NODE_TYPE);
+        node.setAttribute(JOB_TOKEN_ATTRIBUTE,accessSpec);
+        ds.addChild(ds.getChildCount(),node);
+        i++;
+      }
+
+      String op = variableContext.getParameter("accessop");
+      if (op != null && op.equals("Add"))
+      {
+        String accessspec = variableContext.getParameter("spectoken");
+        SpecificationNode node = new SpecificationNode(JOB_ACCESS_NODE_TYPE);
+        node.setAttribute(JOB_TOKEN_ATTRIBUTE,accessspec);
+        ds.addChild(ds.getChildCount(),node);
+      }
+    }
+
+    return null;
+  }
+
+  /**
+   * Output the specification body section. This method is called in the body
+   * section of a job page which has selected a repository connection of the
+   * current type. Its purpose is to present the required form elements for
+   * editing. The coder can presume that the HTML that is output from this
+   * configuration will be within appropriate <html>, <body>, and <form> tags.
+   * The name of the form is "editjob".
+   *
+   * @param out is the output to which any HTML should be sent.
+   * @param ds is the current document specification for this job.
+   * @param tabName is the current tab name.
+   */
+  @Override
+  public void outputSpecificationBody(IHTTPOutput out,
+      Locale locale, DocumentSpecification ds, String tabName) throws ManifoldCFException,
+      IOException {
+
+    // Output GOOGLEDRIVEQuery tab
+    Map<String, Object> paramMap = new HashMap<String, Object>();
+    paramMap.put("TabName", tabName);
+    fillInGOOGLEDRIVEQuerySpecificationMap(paramMap, ds);
+    fillInGOOGLEDRIVESecuritySpecificationMap(paramMap, ds);
+    Messages.outputResourceWithVelocity(out,locale,EDIT_SPEC_FORWARD_GOOGLEDRIVEQUERY,paramMap);
+    Messages.outputResourceWithVelocity(out,locale,EDIT_SPEC_FORWARD_SECURITY,paramMap);
+  }
+
+  /**
+   * Output the specification header section. This method is called in the
+   * head section of a job page which has selected a repository connection of
+   * the current type. Its purpose is to add the required tabs to the list,
+   * and to output any javascript methods that might be needed by the job
+   * editing HTML.
+   *
+   * @param out is the output to which any HTML should be sent.
+   * @param ds is the current document specification for this job.
+   * @param tabsArray is an array of tab names. Add to this array any tab
+   * names that are specific to the connector.
+   */
+  @Override
+  public void outputSpecificationHeader(IHTTPOutput out,
+      Locale locale, DocumentSpecification ds, List<String> tabsArray)
+      throws ManifoldCFException, IOException {
+
+    tabsArray.add(Messages.getString(locale, GOOGLEDRIVE_QUERY_TAB_PROPERTY));
+    tabsArray.add(Messages.getString(locale, GOOGLEDRIVE_SECURITY_TAB_PROPERTY));
+
+    Map<String, Object> paramMap = new HashMap<String, Object>();
+
+    // Fill in the specification header map, using data from all tabs.
+    fillInGOOGLEDRIVEQuerySpecificationMap(paramMap, ds);
+    fillInGOOGLEDRIVESecuritySpecificationMap(paramMap, ds);
+
+    Messages.outputResourceWithVelocity(out,locale,EDIT_SPEC_HEADER_FORWARD,paramMap);
+  }
+
+  /**
+   * Queue "seed" documents. Seed documents are the starting places for
+   * crawling activity. Documents are seeded when this method calls
+   * appropriate methods in the passed in ISeedingActivity object.
+   *
+   * This method can choose to find repository changes that happen only during
+   * the specified time interval. The seeds recorded by this method will be
+   * viewed by the framework based on what the getConnectorModel() method
+   * returns.
+   *
+   * It is not a big problem if the connector chooses to create more seeds
+   * than are strictly necessary; it is merely a question of overall work
+   * required.
+   *
+   * The times passed to this method may be interpreted for greatest
+   * efficiency. The time ranges any given job uses with this connector will
+   * not overlap, but will proceed starting at 0 and going to the "current
+   * time", each time the job is run. For continuous crawling jobs, this
+   * method will be called once, when the job starts, and at various periodic
+   * intervals as the job executes.
+   *
+   * When a job's specification is changed, the framework automatically resets
+   * the seeding start time to 0. The seeding start time may also be set to 0
+   * on each job run, depending on the connector model returned by
+   * getConnectorModel().
+   *
+   * Note that it is always ok to send MORE documents rather than less to this
+   * method.
+   *
+   * @param activities is the interface this method should use to perform
+   * whatever framework actions are desired.
+   * @param spec is a document specification (that comes from the job).
+   * @param startTime is the beginning of the time range to consider,
+   * inclusive.
+   * @param endTime is the end of the time range to consider, exclusive.
+   * @param jobMode is an integer describing how the job is being run, whether
+   * continuous or once-only.
+   */
+  @Override
+  public void addSeedDocuments(ISeedingActivity activities,
+      DocumentSpecification spec, long startTime, long endTime, int jobMode)
+      throws ManifoldCFException, ServiceInterruption {
+
+    String googleDriveQuery = GoogleDriveConfig.GOOGLEDRIVE_QUERY_DEFAULT;
+    int i = 0;
+    while (i < spec.getChildCount()) {
+      SpecificationNode sn = spec.getChild(i);
+      if (sn.getType().equals(JOB_STARTPOINT_NODE_TYPE)) {
+        googleDriveQuery = sn.getAttributeValue(JOB_QUERY_ATTRIBUTE);
+        break;
+      }
+      i++;
+    }
+
+    getSession();
+    GetSeedsThread t = new GetSeedsThread(googleDriveQuery);
+    try {
+      t.start();
+      boolean wasInterrupted = false;
+      try {
+        XThreadStringBuffer seedBuffer = t.getBuffer();
+        // Pick up the paths, and add them to the activities, before we join with the child thread.
+        while (true) {
+          // The only kind of exceptions this can throw are going to shut the process down.
+          String docPath = seedBuffer.fetch();
+          if (docPath ==  null)
+            break;
+          // Add the pageID to the queue
+          activities.addSeedDocument(docPath);
+        }
+      } catch (InterruptedException e) {
+        wasInterrupted = true;
+        throw e;
+      } catch (ManifoldCFException e) {
+        if (e.getErrorCode() == ManifoldCFException.INTERRUPTED)
+          wasInterrupted = true;
+        throw e;
+      } finally {
+        if (!wasInterrupted)
+          t.finishUp();
+      }
+    } catch (InterruptedException e) {
+      t.interrupt();
+      throw new ManifoldCFException("Interrupted: " + e.getMessage(), e,
+        ManifoldCFException.INTERRUPTED);
+    } catch (java.net.SocketTimeoutException e) {
+      Logging.connectors.warn("GOOGLEDRIVE: Socket timeout adding seed documents: " + e.getMessage(), e);
+      handleIOException(e);
+    } catch (InterruptedIOException e) {
+      t.interrupt();
+      throw new ManifoldCFException("Interrupted: " + e.getMessage(), e,
+        ManifoldCFException.INTERRUPTED);
+    } catch (IOException e) {
+      Logging.connectors.warn("GOOGLEDRIVE: Error adding seed documents: " + e.getMessage(), e);
+      handleIOException(e);
+    }
+  }
+  
+  protected class GetSeedsThread extends Thread {
+
+    protected Throwable exception = null;
+    protected final String googleDriveQuery;
+    protected final XThreadStringBuffer seedBuffer;
+    
+    public GetSeedsThread(String googleDriveQuery) {
+      super();
+      this.googleDriveQuery = googleDriveQuery;
+      this.seedBuffer = new XThreadStringBuffer();
+      setDaemon(true);
+    }
+
+    @Override
+    public void run() {
+      try {
+        session.getSeeds(seedBuffer, googleDriveQuery);
+      } catch (Throwable e) {
+        this.exception = e;
+      } finally {
+        seedBuffer.signalDone();
+      }
+    }
+
+    public XThreadStringBuffer getBuffer() {
+      return seedBuffer;
+    }
+    
+    public void finishUp()
+      throws InterruptedException, IOException {
+      seedBuffer.abandon();
+      join();
+      Throwable thr = exception;
+      if (thr != null) {
+        if (thr instanceof IOException)
+          throw (IOException) thr;
+        else if (thr instanceof RuntimeException)
+          throw (RuntimeException) thr;
+        else if (thr instanceof Error)
+          throw (Error) thr;
+        else
+          throw new RuntimeException("Unhandled exception of type: "+thr.getClass().getName(),thr);
+      }
+    }
+  }
+
+  protected File getObject(String nodeId)
+    throws ManifoldCFException, ServiceInterruption {
+    getSession();
+    GetObjectThread t = new GetObjectThread(nodeId);
+    try {
+      t.start();
+      t.join();
+      Throwable thr = t.getException();
+      if (thr != null) {
+        if (thr instanceof IOException) {
+          throw (IOException) thr;
+        } else if (thr instanceof RuntimeException) {
+          throw (RuntimeException) thr;
+        } else {
+          throw (Error) thr;
+        }
+      }
+    } catch (InterruptedException e) {
+      t.interrupt();
+      throw new ManifoldCFException("Interrupted: " + e.getMessage(), e,
+        ManifoldCFException.INTERRUPTED);
+    } catch (java.net.SocketTimeoutException e) {
+      Logging.connectors.warn("GOOGLEDRIVE: Socket timeout getting object: " + e.getMessage(), e);
+      handleIOException(e);
+    } catch (InterruptedIOException e) {
+      t.interrupt();
+      throw new ManifoldCFException("Interrupted: " + e.getMessage(), e,
+        ManifoldCFException.INTERRUPTED);
+    } catch (IOException e) {
+      Logging.connectors.warn("GOOGLEDRIVE: Error getting object: " + e.getMessage(), e);
+      handleIOException(e);
+    }
+    return t.getResponse();
+  }
+  
+  protected class GetObjectThread extends Thread {
+
+    protected final String nodeId;
+    protected Throwable exception = null;
+    protected File response = null;
+
+    public GetObjectThread(String nodeId) {
+      super();
+      setDaemon(true);
+      this.nodeId = nodeId;
+    }
+
+    public void run() {
+      try {
+        response = session.getObject(nodeId);
+      } catch (Throwable e) {
+        this.exception = e;
+      }
+    }
+
+    public File getResponse() {
+      return response;
+    }
+    
+    public Throwable getException() {
+      return exception;
+    }
+  }
+  
+  /**
+   * Process a set of documents. This is the method that should cause each
+   * document to be fetched, processed, and the results either added to the
+   * queue of documents for the current job, and/or entered into the
+   * incremental ingestion manager. The document specification allows this
+   * class to filter what is done based on the job.
+   *
+   * @param documentIdentifiers is the set of document identifiers to process.
+   * @param versions is the corresponding document versions to process, as
+   * returned by getDocumentVersions() above. The implementation may choose to
+   * ignore this parameter and always process the current version.
+   * @param activities is the interface this method should use to queue up new
+   * document references and ingest documents.
+   * @param spec is the document specification.
+   * @param scanOnly is an array corresponding to the document identifiers. It
+   * is set to true to indicate when the processing should only find other
+   * references, and should not actually call the ingestion methods.
+   * @param jobMode is an integer describing how the job is being run, whether
+   * continuous or once-only.
+   */
+  @SuppressWarnings("unchecked")
+  @Override
+  public void processDocuments(String[] documentIdentifiers, String[] versions,
+      IProcessActivity activities, DocumentSpecification spec,
+      boolean[] scanOnly) throws ManifoldCFException, ServiceInterruption {
+
+    Logging.connectors.debug("GOOGLEDRIVE: Inside processDocuments");
+
+    for (int i = 0; i < documentIdentifiers.length; i++) {
+      // MHL for access tokens
+      long startTime = System.currentTimeMillis();
+      String errorCode = "FAILED";
+      String errorDesc = StringUtils.EMPTY;
+      Long fileSize = null;
+      boolean doLog = false;
+      String nodeId = documentIdentifiers[i];
+      String version = versions[i];
+      
+      try {
+        if (Logging.connectors.isDebugEnabled()) {
+          Logging.connectors.debug("GOOGLEDRIVE: Processing document identifier '"
+              + nodeId + "'");
+        }
+
+        File googleFile = getObject(nodeId);
+        if (googleFile == null || (googleFile.containsKey("explicitlyTrashed") && googleFile.getExplicitlyTrashed())) {
+          // it's deleted, move on
+          continue;
+        }
+
+
+        if (Logging.connectors.isDebugEnabled()) {
+          Logging.connectors.debug("GOOGLEDRIVE: have this file:\t" + googleFile.getTitle());
+        }
+
+        if ("application/vnd.google-apps.folder".equals(googleFile.getMimeType())) {
+          //if directory add its children
+
+          if (Logging.connectors.isDebugEnabled()) {
+            Logging.connectors.debug("GOOGLEDRIVE: its a directory");
+          }
+
+          // adding all the children + subdirs for a folder
+
+          getSession();
+          GetChildrenThread t = new GetChildrenThread(nodeId);
+          try {
+            t.start();
+            boolean wasInterrupted = false;
+            try {
+              XThreadStringBuffer childBuffer = t.getBuffer();
+              // Pick up the paths, and add them to the activities, before we join with the child thread.
+              while (true) {
+                // The only kind of exceptions this can throw are going to shut the process down.
+                String child = childBuffer.fetch();
+                if (child ==  null)
+                  break;
+                // Add the pageID to the queue
+                activities.addDocumentReference(child, nodeId, RELATIONSHIP_CHILD);
+              }
+            } catch (InterruptedException e) {
+              wasInterrupted = true;
+              throw e;
+            } catch (ManifoldCFException e) {
+              if (e.getErrorCode() == ManifoldCFException.INTERRUPTED)
+                wasInterrupted = true;
+              throw e;
+            } finally {
+              if (!wasInterrupted)
+                t.finishUp();
+            }
+          } catch (InterruptedException e) {
+            t.interrupt();
+            throw new ManifoldCFException("Interrupted: " + e.getMessage(), e,
+              ManifoldCFException.INTERRUPTED);
+          } catch (java.net.SocketTimeoutException e) {
+            Logging.connectors.warn("GOOGLEDRIVE: Socket timeout adding child documents: " + e.getMessage(), e);
+            handleIOException(e);
+          } catch (InterruptedIOException e) {
+            t.interrupt();
+            throw new ManifoldCFException("Interrupted: " + e.getMessage(), e,
+              ManifoldCFException.INTERRUPTED);
+          } catch (IOException e) {
+            Logging.connectors.warn("GOOGLEDRIVE: Error adding child documents: " + e.getMessage(), e);
+            handleIOException(e);
+          }
+
+        } else {
+          // it's a file
+          if (!scanOnly[i]) {
+            doLog = true;
+
+            if (Logging.connectors.isDebugEnabled()) {
+              Logging.connectors.debug("GOOGLEDRIVE: its a file");
+            }
+
+            // We always direct to the PDF
+            String documentURI = getUrl(googleFile, "application/pdf");
+
+            // Get the file length
+            Long fileLength = googleFile.getFileSize();
+            if (fileLength != null) {
+
+              // Unpack the version string
+              ArrayList acls = new ArrayList();
+              StringBuilder denyAclBuffer = new StringBuilder();
+              int index = unpackList(acls,version,0,'+');
+              if (index < version.length() && version.charAt(index++) == '+') {
+                index = unpack(denyAclBuffer,version,index,'+');
+              }
+
+              //otherwise process
+              RepositoryDocument rd = new RepositoryDocument();
+
+              // Turn into acls and add into description
+              String[] aclArray = new String[acls.size()];
+              for (int j = 0; j < aclArray.length; j++) {
+                aclArray[j] = (String)acls.get(j);
+              }
+              rd.setACL(aclArray);
+              if (denyAclBuffer.length() > 0) {
+                String[] denyAclArray = new String[]{denyAclBuffer.toString()};
+                rd.setDenyACL(denyAclArray);
+              }
+
+              // Now do standard stuff
+              String mimeType = googleFile.getMimeType();
+              DateTime createdDate = googleFile.getCreatedDate();
+              DateTime modifiedDate = googleFile.getModifiedDate();
+              String extension = googleFile.getFileExtension();
+              String title = googleFile.getTitle();
+              
+              if (mimeType != null)
+                rd.setMimeType(mimeType);
+              if (createdDate != null)
+                rd.setCreatedDate(new Date(createdDate.getValue()));
+              if (modifiedDate != null)
+                rd.setModifiedDate(new Date(modifiedDate.getValue()));
+              if (extension != null)
+              {
+                if (title == null)
+                  title = "";
+                rd.setFileName(title + "." + extension);
+              }
+
+              // Get general document metadata
+              for (Entry<String, Object> entry : googleFile.entrySet()) {
+                rd.addField(entry.getKey(), entry.getValue().toString());
+              }
+
+              // Fire up the document reading thread
+              DocumentReadingThread t = new DocumentReadingThread(documentURI);
+              try {
+                t.start();
+                boolean wasInterrupted = false;
+                try {
+                  InputStream is = t.getSafeInputStream();
+                  try {
+                    // Can only index while background thread is running!
+                    rd.setBinary(is, fileLength);
+                    activities.ingestDocument(nodeId, version, documentURI, rd);
+                  } finally {
+                    is.close();
+                  }
+                } catch (ManifoldCFException e) {
+                  if (e.getErrorCode() == ManifoldCFException.INTERRUPTED)
+                    wasInterrupted = true;
+                  throw e;
+                } catch (java.net.SocketTimeoutException e) {
+                  throw e;
+                } catch (InterruptedIOException e) {
+                  wasInterrupted = true;
+                  throw e;
+                } finally {
+                  if (!wasInterrupted)
+                    t.finishUp();
+                }
+
+                // No errors.  Record the fact that we made it.
+                errorCode = "OK";
+                fileSize = new Long(fileLength);
+              } catch (InterruptedException e) {
+                t.interrupt();
+                throw new ManifoldCFException("Interrupted: " + e.getMessage(), e,
+                  ManifoldCFException.INTERRUPTED);
+              } catch (java.net.SocketTimeoutException e) {
+                Logging.connectors.warn("GOOGLEDRIVE: Socket timeout reading document: " + e.getMessage(), e);
+                handleIOException(e);
+              } catch (InterruptedIOException e) {
+                t.interrupt();
+                throw new ManifoldCFException("Interrupted: " + e.getMessage(), e,
+                  ManifoldCFException.INTERRUPTED);
+              } catch (IOException e) {
+                errorCode = "IO ERROR";
+                errorDesc = e.getMessage();
+                Logging.connectors.warn("GOOGLEDRIVE: Error reading document: " + e.getMessage(), e);
+                handleIOException(e);
+              }
+            } else {
+              errorCode = "NO LENGTH";
+              errorDesc = "Document "+nodeId+" had no length; skipping";
+            }
+          }
+        }
+      } finally {
+        if (doLog)
+          activities.recordActivity(new Long(startTime), ACTIVITY_READ,
+            fileSize, nodeId, errorCode, errorDesc, null);
+      }
+    }
+  }
+
+  protected class DocumentReadingThread extends Thread {
+
+    protected Throwable exception = null;
+    protected final String fileURL;
+    protected final XThreadInputStream stream;
+    
+    public DocumentReadingThread(String fileURL) {
+      super();
+      this.fileURL = fileURL;
+      this.stream = new XThreadInputStream();
+      setDaemon(true);
+    }
+
+    @Override
+    public void run() {
+      try {
+        session.getGoogleDriveOutputStream(stream, fileURL);
+      } catch (Throwable e) {
+        this.exception = e;
+      }
+    }
+
+    public InputStream getSafeInputStream() {
+      return stream;
+    }
+    
+    public void finishUp()
+      throws InterruptedException, IOException
+    {
+      // This will be called during the finally
+      // block in the case where all is well (and
+      // the stream completed) and in the case where
+      // there were exceptions.
+      stream.abort();
+      join();
+      Throwable thr = exception;
+      if (thr != null) {
+        if (thr instanceof IOException)
+          throw (IOException) thr;
+        else if (thr instanceof RuntimeException)
+          throw (RuntimeException) thr;
+        else if (thr instanceof Error)
+          throw (Error) thr;
+        else
+          throw new RuntimeException("Unhandled exception of type: "+thr.getClass().getName(),thr);
+      }
+    }
+
+  }
+
+  /** Get the URL of a file in google land.
+  */
+  protected static String getUrl(File googleFile, String exportType) {
+    if (googleFile.containsKey("fileSize")) {
+      return googleFile.getDownloadUrl();
+    } else {
+      return googleFile.getExportLinks().get(exportType);
+    }
+  }
+
+  protected class GetChildrenThread extends Thread {
+
+    protected Throwable exception = null;
+    protected final String nodeId;
+    protected final XThreadStringBuffer childBuffer;
+    
+    public GetChildrenThread(String nodeId) {
+      super();
+      this.nodeId = nodeId;
+      this.childBuffer = new XThreadStringBuffer();
+      setDaemon(true);
+    }
+
+    @Override
+    public void run() {
+      try {
+        session.getChildren(childBuffer, nodeId);
+      } catch (Throwable e) {
+        this.exception = e;
+      } finally {
+        childBuffer.signalDone();
+      }
+    }
+
+    public XThreadStringBuffer getBuffer() {
+      return childBuffer;
+    }
+    
+    public void finishUp()
+      throws InterruptedException, IOException {
+      childBuffer.abandon();
+      join();
+      Throwable thr = exception;
+      if (thr != null) {
+        if (thr instanceof IOException)
+          throw (IOException) thr;
+        else if (thr instanceof RuntimeException)
+          throw (RuntimeException) thr;
+        else if (thr instanceof Error)
+          throw (Error) thr;
+        else
+          throw new RuntimeException("Unhandled exception of type: "+thr.getClass().getName(),thr);
+      }
+    }
+  }
+
+  /**
+   * The short version of getDocumentVersions. Get document versions given an
+   * array of document identifiers. This method is called for EVERY document
+   * that is considered. It is therefore important to perform as little work
+   * as possible here.
+   *
+   * @param documentIdentifiers is the array of local document identifiers, as
+   * understood by this connector.
+   * @param spec is the current document specification for the current job. If
+   * there is a dependency on this specification, then the version string
+   * should include the pertinent data, so that reingestion will occur when
+   * the specification changes. This is primarily useful for metadata.
+   * @return the corresponding version strings, with null in the places where
+   * the document no longer exists. Empty version strings indicate that there
+   * is no versioning ability for the corresponding document, and the document
+   * will always be processed.
+   */
+  @Override
+  public String[] getDocumentVersions(String[] documentIdentifiers,
+      DocumentSpecification spec) throws ManifoldCFException,
+      ServiceInterruption {
+
+    // Forced acls
+    String[] acls = getAcls(spec);
+    // Sort it,
+    java.util.Arrays.sort(acls);
+
+    String[] rval = new String[documentIdentifiers.length];
+    for (int i = 0; i < rval.length; i++) {
+      File googleFile = getObject(documentIdentifiers[i]);
+      if (!isDir(googleFile)) {
+        String rev = googleFile.getModifiedDate().toStringRfc3339();
+        if (StringUtils.isNotEmpty(rev)) {
+          StringBuilder sb = new StringBuilder();
+
+          // Acls
+          packList(sb,acls,'+');
+          if (acls.length > 0) {
+            sb.append('+');
+            pack(sb,defaultAuthorityDenyToken,'+');
+          }
+          else
+            sb.append('-');
+
+          sb.append(rev);
+          rval[i] = sb.toString();
+        } else {
+          //a google document that doesn't contain versioning information will NEVER be processed.
+          // I don't know what this means, and whether it can ever occur.
+          rval[i] = null;
+        }
+      } else {
+        //a google folder will always be processed
+        rval[i] = StringUtils.EMPTY;
+      }
+    }
+    return rval;
+  }
+
+  /** Grab forced acl out of document specification.
+  *@param spec is the document specification.
+  *@return the acls.
+  */
+  protected static String[] getAcls(DocumentSpecification spec) {
+    Set<String> map = new HashSet<String>();
+    for (int i = 0; i < spec.getChildCount(); i++) {
+      SpecificationNode sn = spec.getChild(i);
+      if (sn.getType().equals(JOB_ACCESS_NODE_TYPE)) {
+        String token = sn.getAttributeValue(JOB_TOKEN_ATTRIBUTE);
+        map.add(token);
+      }
+    }
+
+    String[] rval = new String[map.size()];
+    Iterator<String> iter = map.iterator();
+    int i = 0;
+    while (iter.hasNext()) {
+      rval[i++] = (String)iter.next();
+    }
+    return rval;
+  }
+
+  private boolean isDir(File f) {
+    return f.getMimeType().compareToIgnoreCase("application/vnd.google-apps.folder") == 0;
+  }
+  
+  private static void handleIOException(IOException e)
+    throws ManifoldCFException, ServiceInterruption {
+    if (!(e instanceof java.net.SocketTimeoutException) && (e instanceof InterruptedIOException)) {
+      throw new ManifoldCFException("Interrupted: " + e.getMessage(), e,
+        ManifoldCFException.INTERRUPTED);
+    }
+    long currentTime = System.currentTimeMillis();
+    throw new ServiceInterruption("IO exception: "+e.getMessage(), e, currentTime + 300000L,
+      currentTime + 3 * 60 * 60000L,-1,false);
+  }
+  
+  private static void handleGeneralSecurityException(GeneralSecurityException e)
+    throws ManifoldCFException, ServiceInterruption {
+    // Permanent problem: can't initialize transport layer
+    throw new ManifoldCFException("GoogleDrive exception: "+e.getMessage(), e);
+  }
+}
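
A note on the version strings built by getDocumentVersions() above: they combine the sorted forced ACL tokens with the file's RFC 3339 modification date, so a change to either triggers reingestion. The following self-contained sketch only illustrates the general shape of such a string; buildVersion() is a simplified, hypothetical stand-in for the connector's use of ManifoldCF's pack/packList helpers, not code from this patch.

import java.util.Arrays;

public class VersionStringSketch {
  // Simplified stand-in for packList/pack: sorted ACLs are joined with a '+'
  // delimiter so that an identical ACL set plus modification date yields an
  // identical version string, and any change forces reingestion.
  static String buildVersion(String[] acls, String denyToken, String modifiedDate) {
    String[] sorted = acls.clone();
    Arrays.sort(sorted);
    StringBuilder sb = new StringBuilder();
    sb.append(sorted.length);
    for (String acl : sorted) {
      sb.append('+').append(acl);
    }
    if (sorted.length > 0) {
      sb.append('+').append(denyToken);   // a deny token accompanies any forced ACLs
    } else {
      sb.append('-');                     // marker meaning "no ACLs packed"
    }
    sb.append(modifiedDate);              // RFC 3339 date makes the string change on edit
    return sb.toString();
  }

  public static void main(String[] args) {
    System.out.println(buildVersion(new String[]{"groupA", "groupB"},
        "DEAD_AUTHORITY", "2013-05-31T23:59:59.000Z"));
  }
}
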
diff --git a/connectors/googledrive/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/googledrive/GoogleDriveSession.java b/connectors/googledrive/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/googledrive/GoogleDriveSession.java
new file mode 100644
index 0000000..a304a3e
--- /dev/null
+++ b/connectors/googledrive/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/googledrive/GoogleDriveSession.java
@@ -0,0 +1,150 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+
+package org.apache.manifoldcf.crawler.connectors.googledrive;
+
+import org.apache.manifoldcf.core.common.*;
+
+import com.google.api.client.googleapis.auth.oauth2.GoogleCredential;
+import com.google.api.client.googleapis.javanet.GoogleNetHttpTransport;
+import com.google.api.client.googleapis.media.MediaHttpDownloader;
+import com.google.api.client.http.GenericUrl;
+import com.google.api.client.http.HttpTransport;
+import com.google.api.client.json.JsonFactory;
+import com.google.api.client.json.jackson2.JacksonFactory;
+import com.google.api.services.drive.Drive;
+import java.util.Map;
+
+
+
+import java.util.HashMap;
+import java.util.HashSet;
+import com.google.api.services.drive.model.File;
+import com.google.api.services.drive.model.FileList;
+import java.io.IOException;
+import java.io.InterruptedIOException;
+import java.io.OutputStream;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.logging.Level;
+import java.util.logging.Logger;
+
+import java.security.GeneralSecurityException;
+
+/**
+ *
+ * @author andrew
+ */
+public class GoogleDriveSession {
+
+  private static String APPNAME = "ManifoldCF GoogleDrive Connector";
+  
+  private Drive drive;
+  private HttpTransport HTTP_TRANSPORT;
+  
+  private static final JsonFactory JSON_FACTORY = new JacksonFactory();
+  
+  /** Constructor.  Create a session.
+  */
+  public GoogleDriveSession(String clientId, String clientSecret, String refreshToken)
+    throws IOException, GeneralSecurityException {
+    HTTP_TRANSPORT = GoogleNetHttpTransport.newTrustedTransport();
+
+    GoogleCredential credentials = new GoogleCredential.Builder().setClientSecrets(clientId, clientSecret)
+        .setJsonFactory(JSON_FACTORY).setTransport(HTTP_TRANSPORT).build().setRefreshToken(refreshToken);
+
+    drive = new Drive.Builder(HTTP_TRANSPORT, JSON_FACTORY, credentials).setApplicationName(APPNAME).build();
+  }
+
+  /** Close session.
+  */
+  public void close() {
+    // MHL - figure out what is needed
+  }
+
+  /** Obtain repository information.
+  */
+  public Map<String, String> getRepositoryInfo() throws IOException {
+    Map<String, String> info = new HashMap<String, String>();
+    info.put("Application Name", drive.getApplicationName());
+    info.put("Base URL", drive.getBaseUrl());
+    // We need something that will actually cause a back-and-forth to the server!
+    drive.files().get("").execute();
+    return info;
+  }
+
+  /** Get the list of matching root documents, e.g. seeds.
+  */
+  public void getSeeds(XThreadStringBuffer idBuffer, String googleDriveQuery)
+    throws IOException, InterruptedException {
+    Drive.Files.List request;
+
+    request = drive.files().list().setQ(googleDriveQuery);
+
+    do {
+      FileList files = request.execute();
+      for (File f : files.getItems()) {
+        idBuffer.add(f.getId());
+      }
+      request.setPageToken(files.getNextPageToken());
+    } while (request.getPageToken() != null
+        && request.getPageToken().length() > 0);
+  }
+
+  /** Get an individual document.
+  */
+  public File getObject(String id) throws IOException {
+    File file = drive.files().get(id).execute();
+    return file;
+  }
+
+  /** Get the list of child documents for a document.
+  */
+  public void getChildren(XThreadStringBuffer idBuffer, String nodeId)
+    throws IOException, InterruptedException {
+    Drive.Files.List request = drive.files().list().setQ("'" + nodeId + "' in parents");
+
+    do {
+      FileList files = request.execute();
+      for (File f : files.getItems()) {
+        idBuffer.add(f.getId());
+      }
+      request.setPageToken(files.getNextPageToken());
+    } while (request.getPageToken() != null
+        && request.getPageToken().length() > 0);
+  }
+
+
+  /** Get a stream representing the specified document.
+  */
+  public void getGoogleDriveOutputStream(XThreadInputStream inputStream, String documentURI) throws IOException {
+    // Create an object that implements outputstream but pushes everything through to the designated input stream
+    OutputStream outputStream = new XThreadOutputStream(inputStream);
+    try {
+      MediaHttpDownloader downloader =
+          new MediaHttpDownloader(HTTP_TRANSPORT, drive.getRequestFactory().getInitializer());
+      downloader.setDirectDownloadEnabled(false);
+      downloader.download(new GenericUrl(documentURI), outputStream);
+    } finally {
+      // Make sure it is closed and flushed
+      outputStream.close();
+    }
+  }
+  
+}
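
To illustrate how the session above is meant to be driven, here is a minimal, hypothetical sketch of a connection check that uses only the constructor, getRepositoryInfo(), and close() defined in this file. The credential values are placeholders, and the class is assumed to live in the same package as GoogleDriveSession.

import java.io.IOException;
import java.security.GeneralSecurityException;
import java.util.Map;

public class GoogleDriveSessionCheck {
  public static void main(String[] args) {
    // Placeholder OAuth2 values; real ones come from the connection configuration page.
    String clientId = "my-client-id.apps.googleusercontent.com";
    String clientSecret = "my-client-secret";
    String refreshToken = "my-refresh-token";
    try {
      GoogleDriveSession session = new GoogleDriveSession(clientId, clientSecret, refreshToken);
      // getRepositoryInfo() forces a round trip to the Drive service, so failures
      // (bad credentials, no network) surface here rather than later during crawling.
      Map<String, String> info = session.getRepositoryInfo();
      for (Map.Entry<String, String> entry : info.entrySet()) {
        System.out.println(entry.getKey() + ": " + entry.getValue());
      }
      session.close();
    } catch (GeneralSecurityException e) {
      // Mirrors handleGeneralSecurityException(): treated as a permanent configuration problem.
      System.err.println("Cannot initialize transport: " + e.getMessage());
    } catch (IOException e) {
      // Mirrors handleIOException(): typically retried as a service interruption.
      System.err.println("Drive communication failed: " + e.getMessage());
    }
  }
}
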
diff --git a/connectors/googledrive/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/googledrive/Messages.java b/connectors/googledrive/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/googledrive/Messages.java
new file mode 100644
index 0000000..628ba4a
--- /dev/null
+++ b/connectors/googledrive/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/googledrive/Messages.java
@@ -0,0 +1,141 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.crawler.connectors.googledrive;
+
+import java.util.Locale;
+import java.util.Map;
+import org.apache.manifoldcf.core.interfaces.ManifoldCFException;
+import org.apache.manifoldcf.core.interfaces.IHTTPOutput;
+
+public class Messages extends org.apache.manifoldcf.ui.i18n.Messages
+{
+  public static final String DEFAULT_BUNDLE_NAME="org.apache.manifoldcf.crawler.connectors.googledrive.common";
+  public static final String DEFAULT_PATH_NAME="org.apache.manifoldcf.crawler.connectors.googledrive";
+  
+  /** Constructor - do not instantiate
+  */
+  protected Messages()
+  {
+  }
+  
+  public static String getString(Locale locale, String messageKey)
+  {
+    return getString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getAttributeString(Locale locale, String messageKey)
+  {
+    return getAttributeString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getBodyString(Locale locale, String messageKey)
+  {
+    return getBodyString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getAttributeJavascriptString(Locale locale, String messageKey)
+  {
+    return getAttributeJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getBodyJavascriptString(Locale locale, String messageKey)
+  {
+    return getBodyJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getString(Locale locale, String messageKey, Object[] args)
+  {
+    return getString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+
+  public static String getAttributeString(Locale locale, String messageKey, Object[] args)
+  {
+    return getAttributeString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+  
+  public static String getBodyString(Locale locale, String messageKey, Object[] args)
+  {
+    return getBodyString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+
+  public static String getAttributeJavascriptString(Locale locale, String messageKey, Object[] args)
+  {
+    return getAttributeJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+
+  public static String getBodyJavascriptString(Locale locale, String messageKey, Object[] args)
+  {
+    return getBodyJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+
+  // More general methods which allow bundle names and class loaders to be specified.
+  
+  public static String getString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getString(Messages.class, bundleName, locale, messageKey, args);
+  }
+
+  public static String getAttributeString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getAttributeString(Messages.class, bundleName, locale, messageKey, args);
+  }
+
+  public static String getBodyString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getBodyString(Messages.class, bundleName, locale, messageKey, args);
+  }
+  
+  public static String getAttributeJavascriptString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getAttributeJavascriptString(Messages.class, bundleName, locale, messageKey, args);
+  }
+
+  public static String getBodyJavascriptString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getBodyJavascriptString(Messages.class, bundleName, locale, messageKey, args);
+  }
+
+  // Resource output
+  
+  public static void outputResource(IHTTPOutput output, Locale locale, String resourceKey,
+    Map<String,String> substitutionParameters, boolean mapToUpperCase)
+    throws ManifoldCFException
+  {
+    outputResource(output,Messages.class,DEFAULT_PATH_NAME,locale,resourceKey,
+      substitutionParameters,mapToUpperCase);
+  }
+  
+  public static void outputResourceWithVelocity(IHTTPOutput output, Locale locale, String resourceKey,
+    Map<String,String> substitutionParameters, boolean mapToUpperCase)
+    throws ManifoldCFException
+  {
+    outputResourceWithVelocity(output,Messages.class,DEFAULT_BUNDLE_NAME,DEFAULT_PATH_NAME,locale,resourceKey,
+      substitutionParameters,mapToUpperCase);
+  }
+
+  public static void outputResourceWithVelocity(IHTTPOutput output, Locale locale, String resourceKey,
+    Map<String,Object> contextObjects)
+    throws ManifoldCFException
+  {
+    outputResourceWithVelocity(output,Messages.class,DEFAULT_BUNDLE_NAME,DEFAULT_PATH_NAME,locale,resourceKey,
+      contextObjects);
+  }
+  
+}
+
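
As a sketch of how this Messages class is typically used, the snippet below renders one of the Velocity templates added later in this patch, passing the context values that the template's $TabName, $CLIENTID, $CLIENTSECRET, and $REFRESHTOKEN references expect. The helper class and method name are hypothetical, and it is assumed (as elsewhere in ManifoldCF) that the resource key is the template file name and that the class sits in the same package as Messages.

import java.util.HashMap;
import java.util.Locale;
import java.util.Map;
import org.apache.manifoldcf.core.interfaces.IHTTPOutput;
import org.apache.manifoldcf.core.interfaces.ManifoldCFException;

public class ServerTabSketch {
  /** Render the Server tab, filling in the variables referenced by
   * editConfiguration_google_server.html. */
  public static void renderServerTab(IHTTPOutput out, Locale locale, String tabName,
      String clientId, String clientSecret, String refreshToken)
      throws ManifoldCFException {
    Map<String, Object> context = new HashMap<String, Object>();
    context.put("TabName", tabName);          // the template compares $TabName against the Server tab label
    context.put("CLIENTID", clientId);
    context.put("CLIENTSECRET", clientSecret);
    context.put("REFRESHTOKEN", refreshToken);
    // outputResourceWithVelocity(IHTTPOutput, Locale, String, Map<String,Object>) is defined above.
    Messages.outputResourceWithVelocity(out, locale, "editConfiguration_google_server.html", context);
  }
}
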
diff --git a/connectors/googledrive/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/googledrive/common_en_US.properties b/connectors/googledrive/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/googledrive/common_en_US.properties
new file mode 100644
index 0000000..f824667
--- /dev/null
+++ b/connectors/googledrive/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/googledrive/common_en_US.properties
@@ -0,0 +1,38 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+GoogleDriveRepositoryConnector.Server=Server
+GoogleDriveRepositoryConnector.GoogleDriveQuery=Seed Query
+GoogleDriveRepositoryConnector.Security=Security
+
+GoogleDriveRepositoryConnector.RefreshTokenColon=Refresh Token:
+GoogleDriveRepositoryConnector.ClientIDColon=Client ID:
+GoogleDriveRepositoryConnector.ClientSecretColon=Client Secret:
+
+
+GoogleDriveRepositoryConnector.RefreshTokenMustNotBeNull=Refresh Token must not be null
+GoogleDriveRepositoryConnector.ClientSecretMustNotBeNull=Client Secret must not be null
+GoogleDriveRepositoryConnector.ClientMustNotBeNull=Client must not be null
+
+GoogleDriveRepositoryConnector.SeedQueryCannotBeNull=Seed query cannot be null
+GoogleDriveRepositoryConnector.GoogleDriveQueryColon=Google Drive seed query:
+
+GoogleDriveRepositoryConnector.NoAccessTokensPresent=No access tokens present
+GoogleDriveRepositoryConnector.Add=Add
+GoogleDriveRepositoryConnector.AddAccessToken=Add access token
+GoogleDriveRepositoryConnector.Delete=Delete
+GoogleDriveRepositoryConnector.DeleteToken=Delete token #
+GoogleDriveRepositoryConnector.AccessTokensColon=Access tokens:
+GoogleDriveRepositoryConnector.TypeInAnAccessToken=Type in an access token
diff --git a/connectors/googledrive/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/googledrive/common_ja_JP.properties b/connectors/googledrive/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/googledrive/common_ja_JP.properties
new file mode 100644
index 0000000..2f07696
--- /dev/null
+++ b/connectors/googledrive/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/googledrive/common_ja_JP.properties
@@ -0,0 +1,38 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+GoogleDriveRepositoryConnector.Server=Server
+GoogleDriveRepositoryConnector.GoogleDriveQuery=Seed Query
+GoogleDriveRepositoryConnector.Security=Security
+
+GoogleDriveRepositoryConnector.RefreshTokenColon=Refresh Token:
+GoogleDriveRepositoryConnector.ClientIDColon=Client ID:
+GoogleDriveRepositoryConnector.ClientSecretColon=Client Secret:
+
+
+GoogleDriveRepositoryConnector.RefreshTokenMustNotBeNull=Refresh Token must not be null
+GoogleDriveRepositoryConnector.ClientSecretMustNotBeNull=Client Secret must not be null
+GoogleDriveRepositoryConnector.ClientMustNotBeNull=Client must not be null
+
+GoogleDriveRepositoryConnector.SeedQueryCannotBeNull=Seed query cannot be null
+GoogleDriveRepositoryConnector.GoogleDriveQueryColon=Google Drive seed query:
+
+GoogleDriveRepositoryConnector.NoAccessTokensPresent=No access tokens present
+GoogleDriveRepositoryConnector.Add=Add
+GoogleDriveRepositoryConnector.AddAccessToken=Add access token
+GoogleDriveRepositoryConnector.Delete=Delete
+GoogleDriveRepositoryConnector.DeleteToken=Delete token #
+GoogleDriveRepositoryConnector.AccessTokensColon=Access tokens:
+GoogleDriveRepositoryConnector.TypeInAnAccessToken=Type in an access token
diff --git a/connectors/googledrive/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/googledrive/editConfiguration_google_server.html b/connectors/googledrive/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/googledrive/editConfiguration_google_server.html
new file mode 100644
index 0000000..94c8673
--- /dev/null
+++ b/connectors/googledrive/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/googledrive/editConfiguration_google_server.html
@@ -0,0 +1,64 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+#if($TabName == $ResourceBundle.getString('GoogleDriveRepositoryConnector.Server'))
+
+<table class="displaytable">
+  <tr>
+    <td class="separator" colspan="2">
+      <hr />
+    </td>
+  </tr>
+  <tr>
+    <td class="description">
+      <nobr>
+        $Encoder.bodyEscape($ResourceBundle.getString('GoogleDriveRepositoryConnector.RefreshTokenColon'))
+      </nobr>
+    </td>
+    <td class="value">
+      <input type="text" id="refreshtoken" name="refreshtoken" value="$Encoder.attributeEscape($REFRESHTOKEN)" />
+    </td>
+  </tr>
+  <tr>
+    <td class="description">
+      <nobr>
+        $Encoder.bodyEscape($ResourceBundle.getString('GoogleDriveRepositoryConnector.ClientIDColon'))
+      </nobr>
+    </td>
+    <td class="value">
+      <input type="text" id="clientid" name="clientid" value="$Encoder.attributeEscape($CLIENTID)" />
+    </td>
+  </tr>
+  <tr>
+    <td class="description">
+      <nobr>
+        $Encoder.bodyEscape($ResourceBundle.getString('GoogleDriveRepositoryConnector.ClientSecretColon'))
+      </nobr>
+    </td>
+    <td class="value">
+      <input type="password" id="clientsecret" name="clientsecret" value="$Encoder.attributeEscape($CLIENTSECRET)" />
+    </td>
+  </tr>
+</table>
+
+#else
+
+<input type="hidden" name="clientid" value="$Encoder.attributeEscape($CLIENTID)" />
+<input type="hidden" name="clientsecret" value="$Encoder.attributeEscape($CLIENTSECRET)" />
+<input type="hidden" name="refreshtoken" value="$Encoder.attributeEscape($REFRESHTOKEN)" />
+
+#end
diff --git a/connectors/googledrive/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/googledrive/editConfiguration_google_server.js b/connectors/googledrive/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/googledrive/editConfiguration_google_server.js
new file mode 100644
index 0000000..5fccbed
--- /dev/null
+++ b/connectors/googledrive/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/googledrive/editConfiguration_google_server.js
@@ -0,0 +1,52 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<script type="text/javascript">
+<!--
+function checkConfig()
+{
+  return true;
+}
+
+function checkConfigForSave()
+{
+
+  if (editconnection.refreshtoken.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('GoogleDriveRepositoryConnector.RefreshTokenMustNotBeNull'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('GoogleDriveRepositoryConnector.Server'))");
+    editconnection.refreshtoken.focus();
+    return false;
+  }
+  if (editconnection.clientid.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('GoogleDriveRepositoryConnector.ClientMustNotBeNull'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('GoogleDriveRepositoryConnector.Server'))");
+    editconnection.clientid.focus();
+    return false;
+  }
+  if (editconnection.clientsecret.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('GoogleDriveRepositoryConnector.ClientSecretMustNotBeNull'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('GoogleDriveRepositoryConnector.Server'))");
+    editconnection.clientsecret.focus();
+    return false;
+  }
+  return true;
+}
+//-->
+</script>
\ No newline at end of file
diff --git a/connectors/googledrive/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/googledrive/editSpecification_googledrive.js b/connectors/googledrive/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/googledrive/editSpecification_googledrive.js
new file mode 100644
index 0000000..4e94f1b
--- /dev/null
+++ b/connectors/googledrive/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/googledrive/editSpecification_googledrive.js
@@ -0,0 +1,54 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<script type="text/javascript">
+<!--
+function checkSpecificationForSave()
+{
+  if (editjob.googledrivequery.value == "") {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('GoogleDriveRepositoryConnector.SeedQueryCannotBeNull'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('GoogleDriveRepositoryConnector.GoogleDriveQuery'))");
+    editjob.googledrivequery.focus();
+    return false;
+  }
+  return true;
+}
+
+function SpecOp(n, opValue, anchorvalue)
+{
+  eval("editjob."+n+".value = \""+opValue+"\"");
+  postFormSetAnchor(anchorvalue);
+}
+
+function SpecDeleteToken(i)
+{
+  SpecOp("accessop_"+i,"Delete","token_"+i);
+}
+
+function SpecAddToken(i)
+{
+  if (editjob.spectoken.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('GoogleDriveRepositoryConnector.TypeInAnAccessToken'))");
+    editjob.spectoken.focus();
+    return;
+  }
+  SpecOp("accessop","Add","token_"+i);
+}
+
+//-->
+</script>
\ No newline at end of file
diff --git a/connectors/googledrive/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/googledrive/editSpecification_googledriveQuery.html b/connectors/googledrive/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/googledrive/editSpecification_googledriveQuery.html
new file mode 100644
index 0000000..a0248d9
--- /dev/null
+++ b/connectors/googledrive/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/googledrive/editSpecification_googledriveQuery.html
@@ -0,0 +1,40 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+#if($TabName == $ResourceBundle.getString('GoogleDriveRepositoryConnector.GoogleDriveQuery'))
+
+<table class="displaytable">
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+  <tr>
+    <td class="description">
+      <nobr>
+        $Encoder.bodyEscape($ResourceBundle.getString('GoogleDriveRepositoryConnector.GoogleDriveQueryColon'))
+      </nobr>
+    </td>
+    <td class="value">
+      <nobr>
+        <input type="text" size="120" name="googledrivequery" value="$Encoder.attributeEscape($GOOGLEDRIVEQUERY)" />
+      </nobr>
+    </td>
+  </tr>
+</table>
+
+#else
+
+<input type="hidden" name="googledrivequery" value="$Encoder.attributeEscape($GOOGLEDRIVEQUERY)" />
+
+#end
diff --git a/connectors/googledrive/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/googledrive/editSpecification_googledriveSecurity.html b/connectors/googledrive/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/googledrive/editSpecification_googledriveSecurity.html
new file mode 100644
index 0000000..a4c9bd8
--- /dev/null
+++ b/connectors/googledrive/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/googledrive/editSpecification_googledriveSecurity.html
@@ -0,0 +1,73 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+#if($TabName == $ResourceBundle.getString('GoogleDriveRepositoryConnector.Security'))
+
+<table class="displaytable">
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+  
+  #set($atcounter = 0)
+  #foreach($atoken in $ACCESSTOKENS)
+
+  <tr>
+    <td class="description">
+      <input type="hidden" name="accessop_$atcounter" value=""/>
+      <input type="hidden" name="spectoken_$atcounter" value="$Encoder.attributeEscape($atoken.get('TOKEN'))"/>
+      <a name="token_$atcounter">
+        <input type="button" value="$Encoder.attributeEscape($ResourceBundle.getString('GoogleDriveRepositoryConnector.Delete'))" onClick='Javascript:SpecDeleteToken($atcounter)' alt="$Encoder.attributeEscape($ResourceBundle.getString('GoogleDriveRepositoryConnector.DeleteToken'))$atcounter"/>
+      </a>
+    </td>
+    <td class="value">$Encoder.bodyEscape($atoken.get('TOKEN'))</td>
+  </tr>
+
+    #set($atcounter = $atcounter + 1)
+  #end
+
+  #set($nexttoken = $atcounter + 1)
+
+  #if($atcounter == 0)
+  <tr>
+    <td class="message" colspan="2">$Encoder.bodyEscape($ResourceBundle.getString('GoogleDriveRepositoryConnector.NoAccessTokensPresent'))</td>
+  </tr>
+  #end
+
+  <tr><td class="lightseparator" colspan="2"><hr/></td></tr>
+  
+  <tr>
+    <td class="description">
+      <input type="hidden" name="tokencount" value="$atcounter"/>
+      <input type="hidden" name="accessop" value=""/>
+      <a name="token_$atcounter">
+        <input type="button" value="$Encoder.attributeEscape($ResourceBundle.getString('GoogleDriveRepositoryConnector.Add'))" onClick='Javascript:SpecAddToken($nexttoken)' alt="$Encoder.attributeEscape($ResourceBundle.getString('GoogleDriveRepositoryConnector.AddAccessToken'))"/>
+      </a>
+    </td>
+    <td class="value">
+      <input type="text" size="30" name="spectoken" value=""/>
+    </td>
+  </tr>
+</table>
+
+#else
+
+  #set($atcounter = 0)
+  #foreach($atoken in $ACCESSTOKENS)
+<input type="hidden" name="spectoken_$atcounter" value="$Encoder.attributeEscape($atoken.get('TOKEN'))"/>
+    #set($atcounter = $atcounter + 1)
+  #end
+<input type="hidden" name="tokencount" value="$atcounter"/>
+
+#end
diff --git a/connectors/googledrive/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/googledrive/viewConfiguration_googledrive.html b/connectors/googledrive/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/googledrive/viewConfiguration_googledrive.html
new file mode 100644
index 0000000..b4db034
--- /dev/null
+++ b/connectors/googledrive/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/googledrive/viewConfiguration_googledrive.html
@@ -0,0 +1,44 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<table class="displaytable">
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('GoogleDriveRepositoryConnector.ClientIDColon'))</nobr>
+    </td>
+    <td class="value">
+      <nobr>$Encoder.bodyEscape($CLIENTID)</nobr>
+    </td>
+  </tr>
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('GoogleDriveRepositoryConnector.ClientSecretColon'))</nobr>
+    </td>
+    <td class="value">
+      <nobr>********</nobr>
+    </td>
+  </tr>
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('GoogleDriveRepositoryConnector.RefreshTokenColon'))</nobr>
+    </td>
+    <td class="value">
+      <nobr>$Encoder.bodyEscape($REFRESHTOKEN)</nobr>
+    </td>
+  </tr>
+</table>
+
diff --git a/connectors/googledrive/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/googledrive/viewSpecification_googledrive.html b/connectors/googledrive/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/googledrive/viewSpecification_googledrive.html
new file mode 100644
index 0000000..5b36b2b
--- /dev/null
+++ b/connectors/googledrive/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/googledrive/viewSpecification_googledrive.html
@@ -0,0 +1,47 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<table class="displaytable">
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('GoogleDriveRepositoryConnector.GoogleDriveQueryColon'))</nobr>
+    </td>
+    <td class="value">
+      <nobr>$Encoder.bodyEscape($GOOGLEDRIVEQUERY)</nobr>
+    </td>
+  </tr>
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+  <tr>
+#if($ACCESSTOKENS.size() == 0)
+    <td class="message" colspan="2">
+      $Encoder.bodyEscape($ResourceBundle.getString('GoogleDriveRepositoryConnector.NoAccessTokensPresent'))
+    </td>
+#else
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('GoogleDriveRepositoryConnector.AccessTokensColon'))</nobr>
+    </td>
+    <td class="value">
+  #set($atcounter = 0)
+  #foreach($atoken in $ACCESSTOKENS)
+    <nobr>$Encoder.bodyEscape($atoken.get('TOKEN'))</nobr><br/>
+    #set($atcounter = $atcounter + 1)
+  #end
+    </td>
+#end
+  </tr>
+
+</table>
diff --git a/connectors/googledrive/pom.xml b/connectors/googledrive/pom.xml
new file mode 100644
index 0000000..9e94633
--- /dev/null
+++ b/connectors/googledrive/pom.xml
@@ -0,0 +1,171 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+
+    <parent>
+        <groupId>org.apache.manifoldcf</groupId>
+        <artifactId>mcf-connectors</artifactId>
+        <version>1.5-SNAPSHOT</version>
+    </parent>
+    <modelVersion>4.0.0</modelVersion>
+
+    <packaging>jar</packaging>
+
+    <developers>
+        <developer>
+            <name>Andrew Janowczyk</name>
+            <organization>Searchbox</organization>
+            <organizationUrl>http://www.searchbox.com</organizationUrl>
+            <url>http://www.searchbox.com</url>
+        </developer>
+    </developers>
+
+    <artifactId>mcf-googledrive-connector</artifactId>
+    <name>ManifoldCF - Connectors - GoogleDrive</name>
+
+
+    <properties>
+        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
+        <project.http.version>1.14.1-beta</project.http.version>
+        <project.oauth.version>1.14.1-beta</project.oauth.version>
+    </properties>
+
+    <build>
+        <sourceDirectory>${basedir}/connector/src/main/java</sourceDirectory>
+        <testSourceDirectory>${basedir}/connector/src/test/java</testSourceDirectory>
+        <resources>
+            <resource>
+                <directory>${basedir}/connector/src/main/resources</directory>
+                <includes>
+                    <include>**/*.html</include>
+                    <include>**/*.js</include>
+                </includes>
+            </resource>
+            <resource>
+                <directory>${basedir}/connector/src/main/native2ascii</directory>
+                <includes>
+                    <include>**/*.properties</include>
+                </includes>
+            </resource>
+        </resources>
+        <plugins>
+            <plugin>
+                <groupId>org.codehaus.mojo</groupId>
+                <artifactId>native2ascii-maven-plugin</artifactId>
+                <version>1.0-beta-1</version>
+                <configuration>
+                    <workDir>target/classes</workDir>
+                </configuration>
+                <executions>
+                    <execution>
+                        <id>native2ascii-utf8</id>
+                        <goals>
+                            <goal>native2ascii</goal>
+                        </goals>
+                        <configuration>
+                            <encoding>UTF8</encoding>
+                            <includes>
+                                <include>**/*.properties</include>
+                            </includes>
+                        </configuration>
+                    </execution>
+                </executions>
+            </plugin>
+
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-surefire-plugin</artifactId>
+                <configuration>
+                    <excludes>
+                        <exclude>**/*Postgresql*.java</exclude>
+                        <exclude>**/*MySQL*.java</exclude>
+                    </excludes>
+                    <forkMode>always</forkMode>
+                    <workingDirectory>target/test-output</workingDirectory>
+                </configuration>
+            </plugin>
+
+        </plugins>
+    </build>
+
+    <dependencies>
+        <dependency>
+            <groupId>${project.groupId}</groupId>
+            <artifactId>mcf-core</artifactId>
+            <version>${project.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>${project.groupId}</groupId>
+            <artifactId>mcf-pull-agent</artifactId>
+            <version>${project.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>${project.groupId}</groupId>
+            <artifactId>mcf-agents</artifactId>
+            <version>${project.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>${project.groupId}</groupId>
+            <artifactId>mcf-ui-core</artifactId>
+            <version>${project.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>commons-lang</groupId>
+            <artifactId>commons-lang</artifactId>
+            <version>${commons-lang.version}</version>
+            <type>jar</type>
+        </dependency>
+
+
+        <dependency>
+            <groupId>commons-logging</groupId>
+            <artifactId>commons-logging</artifactId>
+            <version>1.1.1</version>
+            <scope>test</scope>
+        </dependency>
+        <dependency>
+            <groupId>log4j</groupId>
+            <artifactId>log4j</artifactId>
+            <version>1.2.16</version>
+            <scope>provided</scope>
+            <type>jar</type>
+        </dependency>
+        <dependency>
+            <groupId>com.google.apis</groupId>
+            <artifactId>google-api-services-drive</artifactId>
+            <version>v2-rev64-1.14.1-beta</version>
+        </dependency>
+        <dependency>
+            <groupId>com.google.http-client</groupId>
+            <artifactId>google-http-client</artifactId>
+            <version>${project.http.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>com.google.http-client</groupId>
+            <artifactId>google-http-client-jackson2</artifactId>
+            <version>${project.http.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>com.google.oauth-client</groupId>
+            <artifactId>google-oauth-client</artifactId>
+            <version>${project.oauth.version}</version>
+        </dependency>
+    </dependencies>
+</project>
diff --git a/connectors/gts/connector/src/main/java/org/apache/manifoldcf/agents/output/gts/GTSConnector.java b/connectors/gts/connector/src/main/java/org/apache/manifoldcf/agents/output/gts/GTSConnector.java
index 5eda558..390bdc8 100644
--- a/connectors/gts/connector/src/main/java/org/apache/manifoldcf/agents/output/gts/GTSConnector.java
+++ b/connectors/gts/connector/src/main/java/org/apache/manifoldcf/agents/output/gts/GTSConnector.java
@@ -114,7 +114,7 @@
       String userID = params.getParameter(GTSConfig.PARAM_USERID);
       String password = params.getObfuscatedParameter(GTSConfig.PARAM_PASSWORD);
       String realm = params.getParameter(GTSConfig.PARAM_REALM);
-      poster = new HttpPoster(realm,userID,password,ingestURI);
+      poster = new HttpPoster(currentContext,realm,userID,password,ingestURI);
     }
   }
 
diff --git a/connectors/gts/connector/src/main/java/org/apache/manifoldcf/agents/output/gts/HttpPoster.java b/connectors/gts/connector/src/main/java/org/apache/manifoldcf/agents/output/gts/HttpPoster.java
index 8a72ab8..2a9a41a 100644
--- a/connectors/gts/connector/src/main/java/org/apache/manifoldcf/agents/output/gts/HttpPoster.java
+++ b/connectors/gts/connector/src/main/java/org/apache/manifoldcf/agents/output/gts/HttpPoster.java
@@ -63,14 +63,16 @@
   private int port = 80;
   private String protocol = null;
 
-  private int buffersize = 32768;  // default buffer size
-  double sizeCoefficient = 0.0005;    // 20 ms additional timeout per 2000 bytes, pulled out of my butt
+  /** Default buffer size */
+  private final int buffersize;
+  /** Size coefficient */
+  private static double sizeCoefficient = 0.0005;    // 20 ms additional timeout per 2000 bytes, pulled out of my butt
   /** the number of times we should poll for the response */
-  int responseRetries = 9000;         // Long basic wait: 3 minutes.  This will also be added to by a term based on the size of the request.
+  private final int responseRetries;
   /** how long we should wait before checking for a new stream */
-  long responseRetryWait = 20L;
+  private final long responseRetryWait;
   /** How long to wait before retrying a failed ingestion */
-  long interruptionRetryTime = 60000L;
+  private final long interruptionRetryTime;
 
   /** This is the secure socket factory we will use.  I'm presuming it's thread-safe, but
   * if not, synchronization blocks are in order when it's used. */
@@ -95,7 +97,7 @@
   * @param password is the unencoded password, or null.
   * @param postURI the uri to post the request to
   */
-  public HttpPoster(String realm, String userID, String password, String postURI)
+  public HttpPoster(IThreadContext threadContext, String realm, String userID, String password, String postURI)
     throws ManifoldCFException
   {
     if (userID != null && userID.length() > 0 && password != null)
@@ -136,18 +138,10 @@
         port = 80;
     }
 
-    String x = ManifoldCF.getProperty(ingestBufferSizeProperty);
-    if (x != null && x.length() > 0)
-      buffersize = new Integer(x).intValue();
-    x = ManifoldCF.getProperty(ingestResponseRetryCount);
-    if (x != null && x.length() > 0)
-      responseRetries = new Integer(x).intValue();
-    x = ManifoldCF.getProperty(ingestResponseRetryInterval);
-    if (x != null && x.length() > 0)
-      responseRetryWait = new Long(x).longValue();
-    x = ManifoldCF.getProperty(ingestRescheduleInterval);
-    if (x != null && x.length() > 0)
-      interruptionRetryTime = new Long(x).longValue();
+    buffersize = LockManagerFactory.getIntProperty(threadContext,ingestBufferSizeProperty,32768);
+    responseRetries = LockManagerFactory.getIntProperty(threadContext,ingestResponseRetryCount,9000);
+    responseRetryWait = LockManagerFactory.getIntProperty(threadContext,ingestResponseRetryInterval,20);
+    interruptionRetryTime = LockManagerFactory.getIntProperty(threadContext,ingestRescheduleInterval,60000);
   }
 
   /**
diff --git a/connectors/gts/pom.xml b/connectors/gts/pom.xml
index 4b50baa..364f72c 100644
--- a/connectors/gts/pom.xml
+++ b/connectors/gts/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.manifoldcf</groupId>
     <artifactId>mcf-connectors</artifactId>
-    <version>1.2-SNAPSHOT</version>
+    <version>1.5-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
 
@@ -30,14 +30,22 @@
   <build>
     <sourceDirectory>${basedir}/connector/src/main/java</sourceDirectory>
     <testSourceDirectory>${basedir}/connector/src/test/java</testSourceDirectory>
+    <resources>
+      <resource>
+        <directory>${basedir}/connector/src/main/native2ascii</directory>
+        <includes>
+          <include>**/*.properties</include>
+        </includes>
+      </resource>
+    </resources> 
+
     <plugins>
       <plugin>
         <groupId>org.codehaus.mojo</groupId>
         <artifactId>native2ascii-maven-plugin</artifactId>
-        <version>1.0-alpha-1</version>
+        <version>1.0-beta-1</version>
         <configuration>
-            <dest>target/classes</dest>
-            <src>connector/src/main/native2ascii</src>
+            <workDir>target/classes</workDir>
         </configuration>
         <executions>
             <execution>
@@ -47,7 +55,9 @@
                 </goals>
                 <configuration>
                     <encoding>UTF8</encoding>
-                    <includes>**/*.properties</includes>
+                    <includes>
+                      <include>**/*.properties</include>
+                    </includes>
                 </configuration>
             </execution>
         </executions>
diff --git a/connectors/hdfs/build.xml b/connectors/hdfs/build.xml
new file mode 100644
index 0000000..b4bbfe4
--- /dev/null
+++ b/connectors/hdfs/build.xml
@@ -0,0 +1,45 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<project name="hdfs" default="all">
+
+    <import file="../connector-build.xml"/>
+
+    <path id="connector-classpath">
+        <path refid="mcf-connector-build.connector-classpath"/>
+        <fileset dir="../../lib">
+            <include name="commons-configuration*.jar"/>
+            <include name="hadoop-common*.jar"/>
+            <include name="hadoop-annotations*.jar"/>
+            <include name="hadoop-auth*.jar"/>
+            <include name="guava*.jar"/>
+        </fileset>
+    </path>
+
+    <target name="lib" depends="mcf-connector-build.lib,precompile-check" if="canBuild">
+        <mkdir dir="dist/lib"/>
+        <copy todir="dist/lib">
+            <fileset dir="../../lib">
+                <include name="commons-configuration*.jar"/>
+                <include name="hadoop-common*.jar"/>
+                <include name="hadoop-annotations*.jar"/>
+                <include name="hadoop-auth*.jar"/>
+                <include name="guava*.jar"/>
+            </fileset>
+        </copy>
+    </target>
+</project>
diff --git a/connectors/hdfs/connector/src/main/java/org/apache/manifoldcf/agents/output/hdfs/HDFSOutputConfig.java b/connectors/hdfs/connector/src/main/java/org/apache/manifoldcf/agents/output/hdfs/HDFSOutputConfig.java
new file mode 100644
index 0000000..cf84f23
--- /dev/null
+++ b/connectors/hdfs/connector/src/main/java/org/apache/manifoldcf/agents/output/hdfs/HDFSOutputConfig.java
@@ -0,0 +1,72 @@
+/* $Id: FileOutputConfig.java 1299512 2013-05-31 22:59:38Z minoru $ */
+
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.manifoldcf.agents.output.hdfs;
+
+import org.apache.manifoldcf.core.interfaces.ConfigParams;
+import org.apache.manifoldcf.core.interfaces.IPostParameters;
+
+
+public class HDFSOutputConfig extends HDFSOutputParam {
+
+  /**
+   * 
+   */
+  private static final long serialVersionUID = -2062295503498352538L;
+
+  /** Parameters used for the configuration */
+  final private static ParameterEnum[] CONFIGURATIONLIST = {
+    ParameterEnum.namenodehost,
+    ParameterEnum.namenodeport,
+    ParameterEnum.user
+  };
+
+  /** Build the set of HDFS output parameters by reading ConfigParams. If the
+   * value returned by ConfigParams.getParameter is null, the default value
+   * defined by the corresponding ParameterEnum is used instead.
+   *
+   * @param params the configuration parameters for this connection
+   */
+  public HDFSOutputConfig(ConfigParams params)
+  {
+    super(CONFIGURATIONLIST);
+    for (ParameterEnum param : CONFIGURATIONLIST) {
+      String value = params.getParameter(param.name());
+      if (value == null) {
+        value = param.defaultValue;
+      }
+      put(param, value);
+    }
+  }
+
+  /**
+   * @param variableContext
+   * @param parameters
+   */
+  public final static void contextToConfig(IPostParameters variableContext, ConfigParams parameters) {
+    for (ParameterEnum param : CONFIGURATIONLIST) {
+      String p = variableContext.getParameter(param.name().toLowerCase());
+      if (p != null) {
+        parameters.setParameter(param.name(), p);
+      }
+    }
+  }
+
+}
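
A small, hypothetical sketch of how HDFSOutputConfig might be exercised: parameters present in ConfigParams are copied through, and anything missing falls back to the defaultValue carried by its ParameterEnum constant. The no-argument ConfigParams constructor and the Map-style get() inherited from HDFSOutputParam are assumptions, not shown in this patch, and the class is assumed to live in the same package.

import org.apache.manifoldcf.core.interfaces.ConfigParams;

public class HDFSOutputConfigSketch {
  public static void main(String[] args) {
    // Hypothetical connection parameters for illustration only.
    ConfigParams params = new ConfigParams();
    params.setParameter(ParameterEnum.namenodehost.name(), "localhost");
    params.setParameter(ParameterEnum.namenodeport.name(), "9000");
    // 'user' is deliberately left unset to exercise the default-value path.

    HDFSOutputConfig config = new HDFSOutputConfig(params);
    // A Map-style get(...) inherited from HDFSOutputParam is assumed here.
    System.out.println("namenodehost = " + config.get(ParameterEnum.namenodehost));
    System.out.println("user = " + config.get(ParameterEnum.user));
  }
}
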
diff --git a/connectors/hdfs/connector/src/main/java/org/apache/manifoldcf/agents/output/hdfs/HDFSOutputConnector.java b/connectors/hdfs/connector/src/main/java/org/apache/manifoldcf/agents/output/hdfs/HDFSOutputConnector.java
new file mode 100644
index 0000000..590ea48
--- /dev/null
+++ b/connectors/hdfs/connector/src/main/java/org/apache/manifoldcf/agents/output/hdfs/HDFSOutputConnector.java
@@ -0,0 +1,817 @@
+/* $Id: FileOutputConnector.java 991374 2013-05-31 23:04:08Z minoru $ */
+
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.manifoldcf.agents.output.hdfs;
+
+import java.io.File;
+import java.io.FileInputStream;
+import java.io.FileNotFoundException;
+import java.io.FileOutputStream;
+import java.io.IOException;
+import java.io.InterruptedIOException;
+import java.io.InputStream;
+import java.io.UnsupportedEncodingException;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.net.URLEncoder;
+import java.util.List;
+import java.util.Locale;
+import java.util.Map;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.manifoldcf.agents.interfaces.IOutputAddActivity;
+import org.apache.manifoldcf.agents.interfaces.IOutputRemoveActivity;
+import org.apache.manifoldcf.agents.interfaces.OutputSpecification;
+import org.apache.manifoldcf.agents.interfaces.RepositoryDocument;
+import org.apache.manifoldcf.agents.interfaces.ServiceInterruption;
+import org.apache.manifoldcf.agents.system.Logging;
+import org.apache.manifoldcf.agents.output.BaseOutputConnector;
+import org.apache.manifoldcf.core.interfaces.ConfigParams;
+import org.apache.manifoldcf.core.interfaces.ConfigurationNode;
+import org.apache.manifoldcf.core.interfaces.IHTTPOutput;
+import org.apache.manifoldcf.core.interfaces.IPostParameters;
+import org.apache.manifoldcf.core.interfaces.IThreadContext;
+import org.apache.manifoldcf.core.interfaces.ManifoldCFException;
+import org.apache.manifoldcf.core.interfaces.SpecificationNode;
+import org.json.JSONException;
+
+public class HDFSOutputConnector extends BaseOutputConnector {
+
+  public static final String _rcsid = "@(#)$Id: FileOutputConnector.java 988245 2010-08-23 18:39:35Z minoru $";
+
+  // Activities we log
+
+  /** Ingestion activity */
+  public final static String INGEST_ACTIVITY = "document ingest";
+  /** Document removal activity */
+  public final static String REMOVE_ACTIVITY = "document deletion";
+
+  // Activities list
+  protected static final String[] activitiesList = new String[]{INGEST_ACTIVITY, REMOVE_ACTIVITY};
+
+  /** Forward to the javascript to check the configuration parameters */
+  private static final String EDIT_CONFIGURATION_JS = "editConfiguration.js";
+
+  /** Forward to the HTML template to edit the configuration parameters */
+  private static final String EDIT_CONFIGURATION_HTML = "editConfiguration.html";
+
+  /** Forward to the HTML template to view the configuration parameters */
+  private static final String VIEW_CONFIGURATION_HTML = "viewConfiguration.html";
+
+  /** Forward to the javascript to check the specification parameters for the job */
+  private static final String EDIT_SPECIFICATION_JS = "editSpecification.js";
+
+  /** Forward to the template to edit the configuration parameters for the job */
+  private static final String EDIT_SPECIFICATION_HTML = "editSpecification.html";
+
+  /** Forward to the template to view the specification parameters for the job */
+  private static final String VIEW_SPECIFICATION_HTML = "viewSpecification.html";
+
+  protected String nameNodeHost = null;
+  protected String nameNodePort = null;
+  protected String user = null;
+  protected HDFSSession session = null;
+  protected long lastSessionFetch = -1L;
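+  /** Idle time, in milliseconds, after which poll() closes the session (300000 ms = five minutes). */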
+  protected static final long timeToRelease = 300000L;
+
+  /** Constructor.
+   */
+  public HDFSOutputConnector() {
+  }
+
+  /** Return the list of activities that this connector supports (i.e. writes into the log).
+   *@return the list.
+   */
+  @Override
+  public String[] getActivitiesList() {
+    return activitiesList;
+  }
+
+  /** Connect.
+   *@param configParams is the set of configuration parameters, which
+   * in this case describe the target HDFS name node host, port, and user.  (This formerly came
+   * out of the ini file.)
+   */
+  @Override
+  public void connect(ConfigParams configParams) {
+    super.connect(configParams);
+    nameNodeHost = configParams.getParameter(ParameterEnum.namenodehost.name());
+    nameNodePort = configParams.getParameter(ParameterEnum.namenodeport.name());
+    user = configParams.getParameter(ParameterEnum.user.name());
+  }
+
+  /** This method is called to assess whether this connector instance should
+  * actually be counted as being connected.
+  *@return true if the connector instance is actually connected.
+  */
+  @Override
+  public boolean isConnected()
+  {
+    return session != null;
+  }
+
+  /** Close the connection.  Call this before discarding the connection.
+   */
+  @Override
+  public void disconnect() throws ManifoldCFException {
+    closeSession();
+    nameNodeHost = null;
+    nameNodePort = null;
+    user = null;
+    super.disconnect();
+  }
+
+  /** Poll the connector, releasing the session if it has been idle for longer than the allowed time.
+   * @throws ManifoldCFException
+   */
+  @Override
+  public void poll() throws ManifoldCFException {
+    if (lastSessionFetch == -1L) {
+      return;
+    }
+
+    long currentTime = System.currentTimeMillis();
+    if (currentTime >= lastSessionFetch + timeToRelease) {
+      closeSession();
+    }
+  }
+
+  protected void closeSession()
+    throws ManifoldCFException {
+    if (session != null) {
+      try {
+        // This can in theory throw an IOException, so it is possible it is doing socket
+        // communication.  In practice, it's unlikely that there's any real IO, so I'm
+        // NOT putting it in a background thread for now.
+        session.close();
+      } catch (InterruptedIOException e) {
+        throw new ManifoldCFException(e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+      } catch (IOException e) {
+        Logging.agents.warn("HDFS: Error closing connection: "+e.getMessage(),e);
+        // Eat the exception
+      } finally {
+        session = null;
+        lastSessionFetch = -1L;
+      }
+    }
+  }
+
+  /** Set up a session */
+  protected HDFSSession getSession() throws ManifoldCFException, ServiceInterruption {
+    if (session == null) {
+      String nameNodeHost = params.getParameter(ParameterEnum.namenodehost.name());
+      if (nameNodeHost == null)
+        throw new ManifoldCFException("Namenodehost must be specified");
+
+      String nameNodePort = params.getParameter(ParameterEnum.namenodeport.name());
+      if (nameNodePort == null)
+        throw new ManifoldCFException("Namenodeport must be specified");
+      
+      String user = params.getParameter(ParameterEnum.user.name());
+      if (user == null)
+        throw new ManifoldCFException("User must be specified");
+      
+      String nameNode = "hdfs://"+nameNodeHost+":"+nameNodePort;
+      //System.out.println("Namenode = '"+nameNode+"'");
+
+      /*
+       * make Configuration
+       */
+      Configuration config = null;
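+      // The context classloader is temporarily switched to the Hadoop Configuration classloader below,
+      // presumably so that Hadoop can find its bundled default resources (e.g. core-default.xml)
+      // regardless of which classloader loaded this connector.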
+      ClassLoader ocl = Thread.currentThread().getContextClassLoader();
+      try {
+        Thread.currentThread().setContextClassLoader(org.apache.hadoop.conf.Configuration.class.getClassLoader());
+        config = new Configuration();
+        config.set("fs.default.name", nameNode);
+      } finally {
+        Thread.currentThread().setContextClassLoader(ocl);
+      }
+      
+      /*
+       * get connection to HDFS
+       */
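+      // The FileSystem handle is obtained on a separate daemon thread (GetSessionThread) so that the
+      // calling worker thread can still be interrupted even if the underlying Hadoop call blocks.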
+      GetSessionThread t = new GetSessionThread(nameNode,config,user);
+      try {
+        t.start();
+        t.finishUp();
+      } catch (InterruptedException e) {
+        t.interrupt();
+        throw new ManifoldCFException("Interrupted: " + e.getMessage(), e, ManifoldCFException.INTERRUPTED);
+      } catch (java.net.SocketTimeoutException e) {
+        handleIOException(e);
+      } catch (InterruptedIOException e) {
+        t.interrupt();
+        handleIOException(e);
+      } catch (URISyntaxException e) {
+        handleURISyntaxException(e);
+      } catch (IOException e) {
+        handleIOException(e);
+      }
+      
+      session = t.getResult();
+    }
+    lastSessionFetch = System.currentTimeMillis();
+    return session;
+  }
+
+  /** Test the connection.  Returns a string describing the connection integrity.
+   *@return the connection's status as a displayable string.
+   */
+  @Override
+  public String check() throws ManifoldCFException {
+    try {
+      checkConnection();
+      return super.check();
+    } catch (ServiceInterruption e) {
+      return "Connection temporarily failed: " + e.getMessage();
+    } catch (ManifoldCFException e) {
+      return "Connection failed: " + e.getMessage();
+    }
+  }
+
+  /** Get an output version string, given an output specification.  The output version string is used to uniquely describe the pertinent details of
+   * the output specification and the configuration, to allow the Connector Framework to determine whether a document will need to be output again.
+   * Note that the contents of the document cannot be considered by this method, and that a different version string (defined in IRepositoryConnector)
+   * is used to describe the version of the actual document.
+   *
+   * This method presumes that the connector object has been configured, and it is thus able to communicate with the output data store should that be
+   * necessary.
+   *@param spec is the current output specification for the job that is doing the crawling.
+   *@return a string, of unlimited length, which uniquely describes output configuration and specification in such a way that if two such strings are equal,
+   * the document will not need to be sent again to the output data store.
+   */
+  @Override
+  public String getOutputDescription(OutputSpecification spec) throws ManifoldCFException, ServiceInterruption {
+    HDFSOutputSpecs specs = new HDFSOutputSpecs(getSpecNode(spec));
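+    // Illustrative example (hypothetical value): with a root path of "/ingest", the returned
+    // description string would be {"rootpath":"/ingest"}.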
+    return specs.toJson().toString();
+  }
+
+  /** Add (or replace) a document in the output data store using the connector.
+   * This method presumes that the connector object has been configured, and it is thus able to communicate with the output data store should that be
+   * necessary.
+   * The OutputSpecification is *not* provided to this method, because the goal is consistency, and if output is done it must be consistent with the
+   * output description, since that was what was partly used to determine if output should be taking place.  So it may be necessary for this method to decode
+   * an output description string in order to determine what should be done.
+   *@param documentURI is the URI of the document.  The URI is presumed to be the unique identifier which the output data store will use to process
+   * and serve the document.  This URI is constructed by the repository connector which fetches the document, and is thus universal across all output connectors.
+   *@param outputDescription is the description string that was constructed for this document by the getOutputDescription() method.
+   *@param document is the document data to be processed (handed to the output data store).
+   *@param authorityNameString is the name of the authority responsible for authorizing any access tokens passed in with the repository document.  May be null.
+   *@param activities is the handle to an object that the implementer of an output connector may use to perform operations, such as logging processing activity.
+   *@return the document status (accepted or permanently rejected).
+   */
+  @Override
+  public int addOrReplaceDocument(String documentURI, String outputDescription, RepositoryDocument document, String authorityNameString, IOutputAddActivity activities) throws ManifoldCFException, ServiceInterruption {
+
+    try {
+      HDFSOutputSpecs specs = new HDFSOutputSpecs(outputDescription);
+
+      /*
+       * make file path
+       */
+      StringBuffer strBuff = new StringBuffer();
+      if (specs.getRootPath() != null) {
+        strBuff.append(specs.getRootPath());
+      }
+      strBuff.append("/");
+      strBuff.append(documentURItoFilePath(documentURI));
+      Path path = new Path(strBuff.toString());
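+      // Illustrative example (hypothetical values): with root path "/ingest" and document URI
+      // "http://example.com/docs/a.html", the target path becomes "/ingest/http/example.com/docs/a.html".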
+
+      Long startTime = new Long(System.currentTimeMillis());
+      createFile(path, document.getBinaryStream());
+      activities.recordActivity(startTime, INGEST_ACTIVITY, new Long(document.getBinaryLength()), documentURI, "OK", null);
+      return DOCUMENTSTATUS_ACCEPTED;
+    } catch (JSONException e) {
+      handleJSONException(e);
+      return DOCUMENTSTATUS_REJECTED;
+    } catch (URISyntaxException e) {
+      handleURISyntaxException(e);
+      return DOCUMENTSTATUS_REJECTED;
+    }
+
+  }
+
+  /** Remove a document using the connector.
+   * Note that the last outputDescription is included, since it may be necessary for the connector to use such information to know how to properly remove the document.
+   *@param documentURI is the URI of the document.  The URI is presumed to be the unique identifier which the output data store will use to process
+   * and serve the document.  This URI is constructed by the repository connector which fetches the document, and is thus universal across all output connectors.
+   *@param outputDescription is the last description string that was constructed for this document by the getOutputDescription() method above.
+   *@param activities is the handle to an object that the implementer of an output connector may use to perform operations, such as logging processing activity.
+   */
+  @Override
+  public void removeDocument(String documentURI, String outputDescription, IOutputRemoveActivity activities) throws ManifoldCFException, ServiceInterruption {
+
+    try {
+      HDFSOutputSpecs specs = new HDFSOutputSpecs(outputDescription);
+
+      /*
+       * make path
+       */
+      StringBuffer strBuff = new StringBuffer();
+      if (specs.getRootPath() != null) {
+        strBuff.append(specs.getRootPath());
+      }
+      strBuff.append("/");
+      strBuff.append(documentURItoFilePath(documentURI));
+      Path path = new Path(strBuff.toString());
+      Long startTime = new Long(System.currentTimeMillis());
+      deleteFile(path);
+      activities.recordActivity(startTime, REMOVE_ACTIVITY, null, documentURI, "OK", null);
+    } catch (JSONException e) {
+      handleJSONException(e);
+    } catch (URISyntaxException e) {
+      handleURISyntaxException(e);
+    }
+  }
+
+  /** Output the configuration header section.
+   * This method is called in the head section of the connector's configuration page.  Its purpose is to add the required tabs to the list, and to output any
+   * javascript methods that might be needed by the configuration editing HTML.
+   *@param threadContext is the local thread context.
+   *@param out is the output to which any HTML should be sent.
+   *@param parameters are the configuration parameters, as they currently exist, for this connection being configured.
+   *@param tabsArray is an array of tab names.  Add to this array any tab names that are specific to the connector.
+   */
+  @Override
+  public void outputConfigurationHeader(IThreadContext threadContext, IHTTPOutput out, Locale locale, ConfigParams parameters, List<String> tabsArray) throws ManifoldCFException, IOException {
+    super.outputConfigurationHeader(threadContext, out, locale, parameters, tabsArray);
+    tabsArray.add(Messages.getString(locale,"HDFSOutputConnector.ServerTabName"));
+    outputResource(EDIT_CONFIGURATION_JS, out, locale, null, null);
+  }
+
+  /** Output the configuration body section.
+   * This method is called in the body section of the connector's configuration page.  Its purpose is to present the required form elements for editing.
+   * The coder can presume that the HTML that is output from this configuration will be within appropriate <html>, <body>, and <form> tags.  The name of the
+   * form is "editconnection".
+   *@param threadContext is the local thread context.
+   *@param out is the output to which any HTML should be sent.
+   *@param parameters are the configuration parameters, as they currently exist, for this connection being configured.
+   *@param tabName is the current tab name.
+   */
+  @Override
+  public void outputConfigurationBody(IThreadContext threadContext, IHTTPOutput out, Locale locale, ConfigParams parameters, String tabName) throws ManifoldCFException, IOException {
+    super.outputConfigurationBody(threadContext, out, locale, parameters, tabName);
+    HDFSOutputConfig config = this.getConfigParameters(parameters);
+    outputResource(EDIT_CONFIGURATION_HTML, out, locale, config, tabName);
+  }
+
+  /** Process a configuration post.
+   * This method is called at the start of the connector's configuration page, whenever there is a possibility that form data for a connection has been
+   * posted.  Its purpose is to gather form information and modify the configuration parameters accordingly.
+   * The name of the posted form is "editconnection".
+   *@param threadContext is the local thread context.
+   *@param variableContext is the set of variables available from the post, including binary file post information.
+   *@param parameters are the configuration parameters, as they currently exist, for this connection being configured.
+   *@return null if all is well, or a string error message if there is an error that should prevent saving of the connection (and cause a redirection to an error page).
+   */
+  @Override
+  public String processConfigurationPost(IThreadContext threadContext, IPostParameters variableContext, Locale locale, ConfigParams parameters) throws ManifoldCFException {
+    HDFSOutputConfig.contextToConfig(variableContext, parameters);
+    return null;
+  }
+
+  /** View configuration.
+   * This method is called in the body section of the connector's view configuration page.  Its purpose is to present the connection information to the user.
+   * The coder can presume that the HTML that is output from this configuration will be within appropriate <html> and <body> tags.
+   *@param threadContext is the local thread context.
+   *@param out is the output to which any HTML should be sent.
+   *@param parameters are the configuration parameters, as they currently exist, for this connection being configured.
+   */
+  @Override
+  public void viewConfiguration(IThreadContext threadContext, IHTTPOutput out, Locale locale, ConfigParams parameters) throws ManifoldCFException, IOException {
+    outputResource(VIEW_CONFIGURATION_HTML, out, locale, getConfigParameters(parameters), null);
+  }
+
+  /** Output the specification header section.
+   * This method is called in the head section of a job page which has selected an output connection of the current type.  Its purpose is to add the required tabs
+   * to the list, and to output any javascript methods that might be needed by the job editing HTML.
+   *@param out is the output to which any HTML should be sent.
+   *@param os is the current output specification for this job.
+   *@param tabsArray is an array of tab names.  Add to this array any tab names that are specific to the connector.
+   */
+  @Override
+  public void outputSpecificationHeader(IHTTPOutput out, Locale locale, OutputSpecification os, List<String> tabsArray) throws ManifoldCFException, IOException {
+    super.outputSpecificationHeader(out, locale, os, tabsArray);
+    tabsArray.add(Messages.getString(locale, "HDFSOutputConnector.PathTabName"));
+    outputResource(EDIT_SPECIFICATION_JS, out, locale, null, null);
+  }
+
+  /** Output the specification body section.
+   * This method is called in the body section of a job page which has selected an output connection of the current type.  Its purpose is to present the required form elements for editing.
+   * The coder can presume that the HTML that is output from this configuration will be within appropriate <html>, <body>, and <form> tags.  The name of the
+   * form is "editjob".
+   *@param out is the output to which any HTML should be sent.
+   *@param os is the current output specification for this job.
+   *@param tabName is the current tab name.
+   */
+  @Override
+  public void outputSpecificationBody(IHTTPOutput out, Locale locale, OutputSpecification os, String tabName) throws ManifoldCFException, IOException {
+    super.outputSpecificationBody(out, locale, os, tabName);
+    HDFSOutputSpecs specs = getSpecParameters(os);
+    outputResource(EDIT_SPECIFICATION_HTML, out, locale, specs, tabName);
+  }
+
+  /** Process a specification post.
+   * This method is called at the start of job's edit or view page, whenever there is a possibility that form data for a connection has been
+   * posted.  Its purpose is to gather form information and modify the output specification accordingly.
+   * The name of the posted form is "editjob".
+   *@param variableContext contains the post data, including binary file-upload information.
+   *@param os is the current output specification for this job.
+   *@return null if all is well, or a string error message if there is an error that should prevent saving of the job (and cause a redirection to an error page).
+   */
+  @Override
+  public String processSpecificationPost(IPostParameters variableContext, Locale locale, OutputSpecification os) throws ManifoldCFException {
+    ConfigurationNode specNode = getSpecNode(os);
+    boolean bAdd = (specNode == null);
+    if (bAdd) {
+      specNode = new SpecificationNode(ParameterEnum.rootpath.name());
+    }
+    HDFSOutputSpecs.contextToSpecNode(variableContext, specNode);
+    if (bAdd) {
+      os.addChild(os.getChildCount(), specNode);
+    }
+
+    return null;
+  }
+
+  /** View specification.
+   * This method is called in the body section of a job's view page.  Its purpose is to present the output specification information to the user.
+   * The coder can presume that the HTML that is output from this configuration will be within appropriate <html> and <body> tags.
+   *@param out is the output to which any HTML should be sent.
+   *@param os is the current output specification for this job.
+   */
+  @Override
+  public void viewSpecification(IHTTPOutput out, Locale locale, OutputSpecification os) throws ManifoldCFException, IOException {
+    outputResource(VIEW_SPECIFICATION_HTML, out, locale, getSpecParameters(os), null);
+  }
+
+  /** Locate the rootpath specification node, if any, within an output specification.
+   * @param os is the output specification to search.
+   * @return the matching specification node, or null if none is present.
+   */
+  final private SpecificationNode getSpecNode(OutputSpecification os)
+  {
+    int l = os.getChildCount();
+    for (int i = 0; i < l; i++) {
+      SpecificationNode node = os.getChild(i);
+      if (node.getType().equals(ParameterEnum.rootpath.name())) {
+        return node;
+      }
+    }
+    return null;
+  }
+
+  /** Build an HDFSOutputSpecs object from an output specification.
+   * @param os is the output specification.
+   * @return the populated specification parameters.
+   * @throws ManifoldCFException
+   */
+  final private HDFSOutputSpecs getSpecParameters(OutputSpecification os) throws ManifoldCFException {
+    return new HDFSOutputSpecs(getSpecNode(os));
+  }
+
+  /** Build an HDFSOutputConfig object from a set of configuration parameters.
+   * @param configParams is the parameter set; if null, the connection's current configuration is used.
+   * @return the populated configuration parameters.
+   */
+  final private HDFSOutputConfig getConfigParameters(ConfigParams configParams) {
+    if (configParams == null)
+      configParams = getConfiguration();
+    return new HDFSOutputConfig(configParams);
+  }
+
+  /** Read the content of a resource, replace the variable ${PARAMNAME} with the
+   * corresponding parameter value, and copy the result to the output.
+   * 
+   * @param resName is the name of the resource to render.
+   * @param out is the output to which the rendered resource is written.
+   * @param locale is the preferred locale.
+   * @param params supplies the substitution values, or null if there are none.
+   * @param tabName is the current tab name, or null.
+   * @throws ManifoldCFException */
+  private static void outputResource(String resName, IHTTPOutput out, Locale locale, HDFSOutputParam params, String tabName) throws ManifoldCFException {
+    Map<String,String> paramMap = null;
+    if (params != null) {
+      paramMap = params.buildMap();
+      if (tabName != null) {
+        paramMap.put("TabName", tabName);
+      }
+    }
+    Messages.outputResourceWithVelocity(out, locale, resName, paramMap, true);
+  }
+
+  /** Convert a document URI into a relative file path suitable for HDFS.
+   * @param documentURI is the URI of the document.
+   * @return a file path built from the URI's scheme, host, port, path, and query.
+   * @throws URISyntaxException if the document URI is malformed.
+   */
+  final private String documentURItoFilePath(String documentURI) throws URISyntaxException {
+    StringBuffer path = new StringBuffer();
+    URI uri = null;
+
+    uri = new URI(documentURI);
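+    // Examples of the mapping performed below (hypothetical URIs):
+    //   "http://example.com:8080/docs/a.html?x=1" -> "http/example.com:8080/docs/a.html?x=1"
+    //   "http://example.com/"                     -> "http/example.com/.content"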
+
+    if (uri.getScheme() != null) {
+      path.append(uri.getScheme());
+      path.append("/");
+    }
+
+    if (uri.getHost() != null) {
+      path.append(uri.getHost());
+      if (uri.getPort() != -1) {
+        path.append(":");
+        path.append(Integer.toString(uri.getPort()));
+      }
+      if (uri.getRawPath() != null) {
+        if (uri.getRawPath().length() == 0) {
+          path.append("/");
+        } else if (uri.getRawPath().equals("/")) {
+          path.append(uri.getRawPath());
+        } else {
+          for (String name : uri.getRawPath().split("/")) {
+            if (name != null && name.length() > 0) {
+              path.append("/");
+              path.append(name);
+            }
+          }
+        }
+      }
+      if (uri.getRawQuery() != null) {
+        path.append("?");
+        path.append(uri.getRawQuery());
+      }
+    } else {
+      if (uri.getRawSchemeSpecificPart() != null) {
+        for (String name : uri.getRawSchemeSpecificPart().split("/")) {
+          if (name != null && name.length() > 0) {
+            path.append("/");
+            path.append(name);
+          }
+        }
+      }
+    }
+
+    if (path.toString().endsWith("/")) {
+      path.append(".content");
+    }
+    return path.toString();
+  }
+  
+  /** Handle URISyntaxException */
+  protected static void handleURISyntaxException(URISyntaxException e)
+    throws ManifoldCFException, ServiceInterruption
+  {
+    Logging.agents.error("Namenode URI is malformed: "+e.getMessage(),e);
+    throw new ManifoldCFException("Namenode URI is malformed: "+e.getMessage(),e);
+  }
+  
+  /** Handle JSONException */
+  protected static void handleJSONException(JSONException e)
+    throws ManifoldCFException, ServiceInterruption
+  {
+    Logging.agents.error("JSON parsing error: "+e.getMessage(),e);
+    throw new ManifoldCFException("JSON parsing error: "+e.getMessage(),e);
+  }
+  
+  /** Handle IOException */
+  protected static void handleIOException(IOException e)
+    throws ManifoldCFException, ServiceInterruption
+  {
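+    // SocketTimeoutException extends InterruptedIOException but indicates a network timeout rather than a
+    // thread interrupt, so it is excluded here and treated as a retryable service interruption below.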
+    if (!(e instanceof java.net.SocketTimeoutException) && (e instanceof InterruptedIOException)) {
+      throw new ManifoldCFException("Interrupted: " + e.getMessage(), e, ManifoldCFException.INTERRUPTED);
+    }
+    long currentTime = System.currentTimeMillis();
+    Logging.agents.warn("HDFS output connection: IO exception: "+e.getMessage(),e);
+    throw new ServiceInterruption("IO exception: "+e.getMessage(), e, currentTime + 300000L, currentTime + 3 * 60 * 60000L,-1,false);
+  }
+
+  protected static class CreateFileThread extends Thread {
+    protected final HDFSSession session;
+    protected final Path path;
+    protected final InputStream input;
+    protected Throwable exception = null;
+
+    public CreateFileThread(HDFSSession session, Path path, InputStream input) {
+      super();
+      this.session = session;
+      this.path = path;
+      this.input = input;
+      setDaemon(true);
+    }
+
+    public void run() {
+      try {
+        session.createFile(path,input);
+      } catch (Throwable e) {
+        this.exception = e;
+      }
+    }
+
+    public void finishUp() throws InterruptedException, IOException {
+      join();
+      Throwable thr = exception;
+      if (thr != null) {
+        if (thr instanceof IOException) {
+          throw (IOException) thr;
+        } else if (thr instanceof RuntimeException) {
+          throw (RuntimeException) thr;
+        } else {
+          throw (Error) thr;
+        }
+      }
+    }
+  }
+
+  protected void createFile(Path path, InputStream input)
+    throws ManifoldCFException, ServiceInterruption {
+    CreateFileThread t = new CreateFileThread(getSession(), path, input);
+    try {
+      t.start();
+      t.finishUp();
+    } catch (InterruptedException e) {
+      t.interrupt();
+      throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+    } catch (java.net.SocketTimeoutException e) {
+      handleIOException(e);
+    } catch (InterruptedIOException e) {
+      t.interrupt();
+      handleIOException(e);
+    } catch (IOException e) {
+      handleIOException(e);
+    }
+  }
+
+  protected static class DeleteFileThread extends Thread {
+    protected final HDFSSession session;
+    protected final Path path;
+    protected Throwable exception = null;
+
+    public DeleteFileThread(HDFSSession session, Path path) {
+      super();
+      this.session = session;
+      this.path = path;
+      setDaemon(true);
+    }
+
+    public void run() {
+      try {
+        session.deleteFile(path);
+      } catch (Throwable e) {
+        this.exception = e;
+      }
+    }
+
+    public void finishUp() throws InterruptedException, IOException {
+      join();
+      Throwable thr = exception;
+      if (thr != null) {
+        if (thr instanceof IOException) {
+          throw (IOException) thr;
+        } else if (thr instanceof RuntimeException) {
+          throw (RuntimeException) thr;
+        } else {
+          throw (Error) thr;
+        }
+      }
+    }
+  }
+
+  protected void deleteFile(Path path)
+    throws ManifoldCFException, ServiceInterruption {
+    // Establish a session
+    DeleteFileThread t = new DeleteFileThread(getSession(),path);
+    try {
+      t.start();
+      t.finishUp();
+    } catch (InterruptedException e) {
+      t.interrupt();
+      throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+    } catch (java.net.SocketTimeoutException e) {
+      handleIOException(e);
+    } catch (InterruptedIOException e) {
+      t.interrupt();
+      handleIOException(e);
+    } catch (IOException e) {
+      handleIOException(e);
+    }
+  }
+
+
+  protected static class CheckConnectionThread extends Thread {
+    protected final HDFSSession session;
+    protected Throwable exception = null;
+
+    public CheckConnectionThread(HDFSSession session) {
+      super();
+      this.session = session;
+      setDaemon(true);
+    }
+
+    public void run() {
+      try {
+        session.getRepositoryInfo();
+      } catch (Throwable e) {
+        this.exception = e;
+      }
+    }
+
+    public void finishUp() throws InterruptedException, IOException {
+      join();
+      Throwable thr = exception;
+      if (thr != null) {
+        if (thr instanceof IOException) {
+          throw (IOException) thr;
+        } else if (thr instanceof RuntimeException) {
+          throw (RuntimeException) thr;
+        } else {
+          throw (Error) thr;
+        }
+      }
+    }
+  }
+
+  /** Verify that the HDFS connection works by fetching repository information on a background thread.
+   * @throws ManifoldCFException
+   * @throws ServiceInterruption
+   */
+  protected void checkConnection() throws ManifoldCFException, ServiceInterruption {
+    CheckConnectionThread t = new CheckConnectionThread(getSession());
+    try {
+      t.start();
+      t.finishUp();
+      return;
+    } catch (InterruptedException e) {
+      t.interrupt();
+      throw new ManifoldCFException("Interrupted: " + e.getMessage(), e, ManifoldCFException.INTERRUPTED);
+    } catch (java.net.SocketTimeoutException e) {
+      handleIOException(e);
+    } catch (InterruptedIOException e) {
+      t.interrupt();
+      handleIOException(e);
+    } catch (IOException e) {
+      handleIOException(e);
+    }
+  }
+
+  protected static class GetSessionThread extends Thread {
+    protected final String nameNode;
+    protected final Configuration config;
+    protected final String user;
+    protected Throwable exception = null;
+    protected HDFSSession session = null;
+
+    public GetSessionThread(String nameNode, Configuration config, String user) {
+      super();
+      this.nameNode = nameNode;
+      this.config = config;
+      this.user = user;
+      setDaemon(true);
+    }
+
+    public void run() {
+      try {
+        // Create a session
+        session = new HDFSSession(nameNode, config, user);
+      } catch (Throwable e) {
+        this.exception = e;
+      }
+    }
+
+    public void finishUp()
+      throws InterruptedException, IOException, URISyntaxException {
+      join();
+      Throwable thr = exception;
+      if (thr != null) {
+        if (thr instanceof IOException) {
+          throw (IOException) thr;
+        } else if (thr instanceof URISyntaxException) {
+          throw (URISyntaxException) thr;
+        } else if (thr instanceof RuntimeException) {
+          throw (RuntimeException) thr;
+        } else {
+          throw (Error) thr;
+        }
+      }
+    }
+    
+    public HDFSSession getResult() {
+      return session;
+    }
+  }
+
+}
diff --git a/connectors/hdfs/connector/src/main/java/org/apache/manifoldcf/agents/output/hdfs/HDFSOutputParam.java b/connectors/hdfs/connector/src/main/java/org/apache/manifoldcf/agents/output/hdfs/HDFSOutputParam.java
new file mode 100644
index 0000000..ac2a76f
--- /dev/null
+++ b/connectors/hdfs/connector/src/main/java/org/apache/manifoldcf/agents/output/hdfs/HDFSOutputParam.java
@@ -0,0 +1,45 @@
+/* $Id: FileOutputParam.java 1299512 2013-05-31 22:59:38Z minoru $ */
+
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.manifoldcf.agents.output.hdfs;
+
+import java.util.HashMap;
+import java.util.Map;
+
+/** 
+ * Parameter data for the HDFS output connector.
+ */
+public class HDFSOutputParam extends HashMap<ParameterEnum, String>
+{
+
+  private static final long serialVersionUID = -140994685772720029L;
+
+  protected HDFSOutputParam(ParameterEnum[] params) {
+    super(params.length);
+  }
+
+  final public Map<String, String> buildMap() {
+    Map<String, String> rval = new HashMap<String, String>();
+    for (Map.Entry<ParameterEnum, String> entry : this.entrySet()) {
+      rval.put(entry.getKey().name(), entry.getValue());
+    }
+    return rval;
+  }
+
+}
diff --git a/connectors/hdfs/connector/src/main/java/org/apache/manifoldcf/agents/output/hdfs/HDFSOutputSpecs.java b/connectors/hdfs/connector/src/main/java/org/apache/manifoldcf/agents/output/hdfs/HDFSOutputSpecs.java
new file mode 100644
index 0000000..a6b470d
--- /dev/null
+++ b/connectors/hdfs/connector/src/main/java/org/apache/manifoldcf/agents/output/hdfs/HDFSOutputSpecs.java
@@ -0,0 +1,153 @@
+/* $Id: FileOutputSpecs.java 1299512 2013-05-31 22:58:38Z minoru $ */
+
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.manifoldcf.agents.output.hdfs;
+
+import java.io.BufferedReader;
+import java.io.IOException;
+import java.io.StringReader;
+import java.util.Set;
+import java.util.TreeSet;
+
+import org.apache.commons.io.IOUtils;
+import org.apache.manifoldcf.core.interfaces.ConfigurationNode;
+import org.apache.manifoldcf.core.interfaces.IPostParameters;
+import org.apache.manifoldcf.core.interfaces.ManifoldCFException;
+import org.json.JSONException;
+import org.json.JSONObject;
+
+public class HDFSOutputSpecs extends HDFSOutputParam {
+  /**
+   * 
+   */
+  private static final long serialVersionUID = 1145652730572662025L;
+
+  final public static ParameterEnum[] SPECIFICATIONLIST = {
+    ParameterEnum.rootpath
+  };
+
+  private String rootPath;
+
+  /** Build a set of HDFS parameters by reading a JSON string.
+   * 
+   * @param json
+   * @throws JSONException
+   * @throws ManifoldCFException
+   */
+  public HDFSOutputSpecs(String json) throws JSONException, ManifoldCFException {
+    this(new JSONObject(json));
+  }
+
+  /** Build a set of HDFS parameters by reading a JSON object.
+   * 
+   * @param json
+   * @throws JSONException
+   * @throws ManifoldCFException
+   */
+  public HDFSOutputSpecs(JSONObject json) throws JSONException, ManifoldCFException {
+    super(SPECIFICATIONLIST);
+    rootPath = null;
+    for (ParameterEnum param : SPECIFICATIONLIST) {
+      String value = null;
+      value = json.getString(param.name());
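+      // Note: JSONObject.getString() throws a JSONException when the key is absent, so a missing
+      // parameter surfaces as an exception here rather than falling through to the default value.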
+      if (value == null) {
+        value = param.defaultValue;
+      }
+      put(param, value);
+    }
+    rootPath = getRootPath();
+  }
+
+  /** Build a set of HDFS parameters by reading an instance of
+   * ConfigurationNode.
+   * 
+   * @param node
+   * @throws ManifoldCFException
+   */
+  public HDFSOutputSpecs(ConfigurationNode node) throws ManifoldCFException {
+    super(SPECIFICATIONLIST);
+    rootPath = null;
+    for (ParameterEnum param : SPECIFICATIONLIST) {
+      String value = null;
+      if (node != null) {
+        value = node.getAttributeValue(param.name());
+      }
+      if (value == null) {
+        value = param.defaultValue;
+      }
+      put(param, value);
+    }
+    rootPath = getRootPath();
+  }
+
+  /** Copy posted form values into a specification node.
+   * @param variableContext contains the posted form data.
+   * @param specNode is the specification node to update.
+   */
+  public static void contextToSpecNode(IPostParameters variableContext, ConfigurationNode specNode) {
+    for (ParameterEnum param : SPECIFICATIONLIST) {
+      String p = variableContext.getParameter(param.name().toLowerCase());
+      if (p != null) {
+        specNode.setAttribute(param.name(), p);
+      }
+    }
+  }
+
+  /** @return a JSON representation of the parameter list */
+  public JSONObject toJson() {
+    return new JSONObject(this);
+  }
+
+  /**
+   * @return the configured root path.
+   */
+  public String getRootPath() {
+    return get(ParameterEnum.rootpath);
+  }
+
+  /** Split multi-line content into a set of trimmed, non-empty lines.
+   * @param content is the text to split.
+   * @return the resulting set of lines.
+   * @throws ManifoldCFException
+   */
+  private final static TreeSet<String> createStringSet(String content) throws ManifoldCFException {
+    TreeSet<String> set = new TreeSet<String>();
+    BufferedReader br = null;
+    StringReader sr = null;
+    try {
+      sr = new StringReader(content);
+      br = new BufferedReader(sr);
+      String line = null;
+      while ((line = br.readLine()) != null) {
+        line = line.trim();
+        if (line.length() > 0) {
+          set.add(line);
+        }
+      }
+      return set;
+    } catch (IOException e) {
+      throw new ManifoldCFException(e.getMessage(),e);
+    } finally {
+      if (br != null) {
+        IOUtils.closeQuietly(br);
+      }
+    }
+  }
+
+}
diff --git a/connectors/hdfs/connector/src/main/java/org/apache/manifoldcf/agents/output/hdfs/HDFSSession.java b/connectors/hdfs/connector/src/main/java/org/apache/manifoldcf/agents/output/hdfs/HDFSSession.java
new file mode 100644
index 0000000..8b77138
--- /dev/null
+++ b/connectors/hdfs/connector/src/main/java/org/apache/manifoldcf/agents/output/hdfs/HDFSSession.java
@@ -0,0 +1,119 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+
+package org.apache.manifoldcf.agents.output.hdfs;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataOutputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.manifoldcf.core.common.*;
+
+import java.io.InputStream;
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.util.Map;
+import java.util.HashMap;
+
+/** A thin wrapper around an HDFS FileSystem instance, providing the file operations and
+ * status information used by the HDFS connectors.
+ */
+public class HDFSSession {
+
+  private FileSystem fileSystem;
+  private final String nameNode;
+  private final Configuration config;
+  private final String user;
+  
+  public HDFSSession(String nameNode, Configuration config, String user) throws URISyntaxException, IOException, InterruptedException {
+    this.nameNode = nameNode;
+    this.config = config;
+    this.user = user;
+    fileSystem = FileSystem.get(new URI(nameNode), config, user);
+  }
+
+  public Map<String, String> getRepositoryInfo() {
+    Map<String, String> info = new HashMap<String, String>();
+
+    info.put("Name Node", nameNode);
+    info.put("Config", config.toString());
+    info.put("User", user);
+    info.put("Canonical Service Name", fileSystem.getCanonicalServiceName());
+    info.put("Default Block Size", Long.toString(fileSystem.getDefaultBlockSize()));
+    info.put("Default Replication", Short.toString(fileSystem.getDefaultReplication()));
+    info.put("Home Directory", fileSystem.getHomeDirectory().toUri().toString());
+    info.put("Working Directory", fileSystem.getWorkingDirectory().toUri().toString());
+    return info;
+  }
+
+  public void deleteFile(Path path)
+    throws IOException {
+    if (fileSystem.exists(path)) {
+      fileSystem.delete(path, true);
+    }
+  }
+
+  public void createFile(Path path, InputStream input)
+    throws IOException {
+    /*
+     * make directory
+     */
+    if (!fileSystem.exists(path.getParent())) {
+      fileSystem.mkdirs(path.getParent());
+    }
+
+    /*
+     * delete old file
+     */
+    if (fileSystem.exists(path)) {
+      fileSystem.delete(path, true);
+    }
+
+    FSDataOutputStream output = fileSystem.create(path);
+    try {
+      /*
+       * write file
+       */
+      byte buf[] = new byte[65536];
+      int len;
+      while((len = input.read(buf)) != -1) {
+        output.write(buf, 0, len);
+      }
+      output.flush();
+    } finally {
+      output.close();
+    }
+
+    // Do NOT close input; it's closed by the caller.
+  }
+
+  public URI getUri() {
+    return fileSystem.getUri();
+  }
+
+  public void close() throws IOException {
+    fileSystem.close();
+  }
+}
diff --git a/connectors/hdfs/connector/src/main/java/org/apache/manifoldcf/agents/output/hdfs/Messages.java b/connectors/hdfs/connector/src/main/java/org/apache/manifoldcf/agents/output/hdfs/Messages.java
new file mode 100644
index 0000000..051b614
--- /dev/null
+++ b/connectors/hdfs/connector/src/main/java/org/apache/manifoldcf/agents/output/hdfs/Messages.java
@@ -0,0 +1,141 @@
+/* $Id: Messages.java 1295926 2013-05-31 23:00:00Z minoru $ */
+
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.manifoldcf.agents.output.hdfs;
+
+import java.util.Locale;
+import java.util.Map;
+import org.apache.manifoldcf.core.interfaces.ManifoldCFException;
+import org.apache.manifoldcf.core.interfaces.IHTTPOutput;
+
+public class Messages extends org.apache.manifoldcf.ui.i18n.Messages
+{
+  public static final String DEFAULT_BUNDLE_NAME="org.apache.manifoldcf.agents.output.hdfs.common";
+  public static final String DEFAULT_PATH_NAME="org.apache.manifoldcf.agents.output.hdfs";
+
+  /** Constructor - do no instantiate
+   */
+  protected Messages()
+  {
+  }
+
+  public static String getString(Locale locale, String messageKey)
+  {
+    return getString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getAttributeString(Locale locale, String messageKey)
+  {
+    return getAttributeString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getBodyString(Locale locale, String messageKey)
+  {
+    return getBodyString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getAttributeJavascriptString(Locale locale, String messageKey)
+  {
+    return getAttributeJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getBodyJavascriptString(Locale locale, String messageKey)
+  {
+    return getBodyJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getString(Locale locale, String messageKey, Object[] args)
+  {
+    return getString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+
+  public static String getAttributeString(Locale locale, String messageKey, Object[] args)
+  {
+    return getAttributeString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+
+  public static String getBodyString(Locale locale, String messageKey, Object[] args)
+  {
+    return getBodyString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+
+  public static String getAttributeJavascriptString(Locale locale, String messageKey, Object[] args)
+  {
+    return getAttributeJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+
+  public static String getBodyJavascriptString(Locale locale, String messageKey, Object[] args)
+  {
+    return getBodyJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+
+  // More general methods which allow bundlenames and class loaders to be specified.
+
+  public static String getString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getString(Messages.class, bundleName, locale, messageKey, args);
+  }
+
+  public static String getAttributeString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getAttributeString(Messages.class, bundleName, locale, messageKey, args);
+  }
+
+  public static String getBodyString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getBodyString(Messages.class, bundleName, locale, messageKey, args);
+  }
+
+  public static String getAttributeJavascriptString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getAttributeJavascriptString(Messages.class, bundleName, locale, messageKey, args);
+  }
+
+  public static String getBodyJavascriptString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getBodyJavascriptString(Messages.class, bundleName, locale, messageKey, args);
+  }
+
+  // Resource output
+
+  public static void outputResource(IHTTPOutput output, Locale locale, String resourceKey,
+      Map<String,String> substitutionParameters, boolean mapToUpperCase)
+    throws ManifoldCFException
+  {
+    outputResource(output,Messages.class,DEFAULT_PATH_NAME,locale,resourceKey,
+        substitutionParameters,mapToUpperCase);
+  }
+
+  public static void outputResourceWithVelocity(IHTTPOutput output, Locale locale, String resourceKey,
+      Map<String,String> substitutionParameters, boolean mapToUpperCase)
+    throws ManifoldCFException
+  {
+    outputResourceWithVelocity(output,Messages.class,DEFAULT_BUNDLE_NAME,DEFAULT_PATH_NAME,locale,resourceKey,
+        substitutionParameters,mapToUpperCase);
+  }
+
+  public static void outputResourceWithVelocity(IHTTPOutput output, Locale locale, String resourceKey,
+      Map<String,Object> contextObjects)
+    throws ManifoldCFException
+  {
+    outputResourceWithVelocity(output,Messages.class,DEFAULT_BUNDLE_NAME,DEFAULT_PATH_NAME,locale,resourceKey,
+        contextObjects);
+  }
+
+}
+
diff --git a/connectors/hdfs/connector/src/main/java/org/apache/manifoldcf/agents/output/hdfs/ParameterEnum.java b/connectors/hdfs/connector/src/main/java/org/apache/manifoldcf/agents/output/hdfs/ParameterEnum.java
new file mode 100644
index 0000000..d54c2b4
--- /dev/null
+++ b/connectors/hdfs/connector/src/main/java/org/apache/manifoldcf/agents/output/hdfs/ParameterEnum.java
@@ -0,0 +1,37 @@
+/* $Id$ */
+
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.manifoldcf.agents.output.hdfs;
+
+import java.util.HashMap;
+import java.util.Map;
+
+/** Parameters constants */
+public enum ParameterEnum {
+  namenodehost("localhost"),
+  namenodeport("9000"),
+  user(""),
+  rootpath("");
+
+  final protected String defaultValue;
+
+  private ParameterEnum(String defaultValue) {
+    this.defaultValue = defaultValue;
+  }
+}
diff --git a/connectors/hdfs/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/hdfs/HDFSRepositoryConnector.java b/connectors/hdfs/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/hdfs/HDFSRepositoryConnector.java
new file mode 100644
index 0000000..bae4c6c
--- /dev/null
+++ b/connectors/hdfs/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/hdfs/HDFSRepositoryConnector.java
@@ -0,0 +1,1966 @@
+/* $Id: FileConnector.java 995085 2010-09-08 15:13:38Z kwright $ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.crawler.connectors.hdfs;
+
+import org.apache.commons.lang.StringUtils;
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.Path;
+import org.apache.manifoldcf.agents.interfaces.RepositoryDocument;
+import org.apache.manifoldcf.agents.interfaces.ServiceInterruption;
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.crawler.connectors.hdfs.HDFSSession;
+import org.apache.manifoldcf.crawler.connectors.hdfs.Messages;
+import org.apache.manifoldcf.crawler.interfaces.*;
+import org.apache.manifoldcf.crawler.system.Logging;
+import org.apache.manifoldcf.core.common.XThreadInputStream;
+import org.apache.manifoldcf.core.common.XThreadStringBuffer;
+import org.apache.manifoldcf.core.extmimemap.ExtensionMimeMap;
+
+import java.util.*;
+import java.io.*;
+import java.net.URI;
+import java.net.URISyntaxException;
+
+/** This is the "repository connector" for a file system.  It's a relative of the share crawler, and should have
+ * comparable basic functionality, with the exception of the ability to use ActiveDirectory and look at other shares.
+ */
+public class HDFSRepositoryConnector extends org.apache.manifoldcf.crawler.connectors.BaseRepositoryConnector
+{
+  public static final String _rcsid = "@(#)$Id: FileConnector.java 995085 2010-09-08 15:13:38Z kwright $";
+
+  // Activities that we know about
+  protected final static String ACTIVITY_READ = "read document";
+
+  // Relationships we know about
+  protected static final String RELATIONSHIP_CHILD = "child";
+
+  // Activities list
+  protected static final String[] activitiesList = new String[]{ACTIVITY_READ};
+
+  protected String nameNodeHost = null;
+  protected String nameNodePort = null;
+  protected String user = null;
+  protected HDFSSession session = null;
+  protected long lastSessionFetch = -1L;
+  protected static final long timeToRelease = 300000L;
+
+  /*
+   * Constructor.
+   */
+  public HDFSRepositoryConnector()
+  {
+  }
+
+  /** Tell the world what model this connector uses for getDocumentIdentifiers().
+   * This must return a model value as specified above.
+   *@return the model type value.
+   */
+  @Override
+  public int getConnectorModel()
+  {
+    return MODEL_CHAINED_ADD_CHANGE;
+  }
+
+  /** Return the list of relationship types that this connector recognizes.
+   *@return the list.
+   */
+  @Override
+  public String[] getRelationshipTypes()
+  {
+    return new String[]{RELATIONSHIP_CHILD};
+  }
+
+  /** List the activities we might report on.
+   */
+  @Override
+  public String[] getActivitiesList()
+  {
+    return activitiesList;
+  }
+
+  /** For any given document, list the bins that it is a member of.
+   */
+  @Override
+  public String[] getBinNames(String documentIdentifier)
+  {
+    return new String[]{"HDFS"};
+  }
+
+  /**
+   * Get the maximum number of documents to amalgamate together into one
+   * batch, for this connector.
+   *
+   * @return the maximum number. 0 indicates "unlimited".
+   */
+  @Override
+  public int getMaxDocumentRequest() {
+    return 1;
+  }
+
+  /* (non-Javadoc)
+   * @see org.apache.manifoldcf.core.connector.BaseConnector#connect(org.apache.manifoldcf.core.interfaces.ConfigParams)
+   */
+  @Override
+  public void connect(ConfigParams configParams) {
+    super.connect(configParams);
+
+    nameNodeHost = configParams.getParameter("namenodehost");
+    nameNodePort = configParams.getParameter("namenodeport");
+    user = configParams.getParameter("user");
+    
+  }
+
+  /* (non-Javadoc)
+   * @see org.apache.manifoldcf.core.connector.BaseConnector#disconnect()
+   */
+  @Override
+  public void disconnect() throws ManifoldCFException {
+    closeSession();
+    user = null;
+    nameNodeHost = null;
+    nameNodePort = null;
+    super.disconnect();
+  }
+
+  /**
+   * Set up a session
+   */
+  protected HDFSSession getSession() throws ManifoldCFException, ServiceInterruption {
+    if (session == null) {
+      if (StringUtils.isEmpty(nameNodeHost)) {
+        throw new ManifoldCFException("Parameter namenodehost required but not set");
+      }
+      if (Logging.connectors.isDebugEnabled()) {
+        Logging.connectors.debug("HDFS: NameNodeHost = '" + nameNodeHost + "'");
+      }
+      if (StringUtils.isEmpty(nameNodePort)) {
+        throw new ManifoldCFException("Parameter namenodeport required but not set");
+      }
+      if (Logging.connectors.isDebugEnabled()) {
+        Logging.connectors.debug("HDFS: NameNodePort = '" + nameNodePort + "'");
+      }
+
+      if (StringUtils.isEmpty(user)) {
+        throw new ManifoldCFException("Parameter user required but not set");
+      }
+      if (Logging.connectors.isDebugEnabled()) {
+        Logging.connectors.debug("HDFS: User = '" + user + "'");
+      }
+
+      String nameNode = "hdfs://"+nameNodeHost+":"+nameNodePort;
+
+      /*
+       * make Configuration
+       */
+      Configuration config = null;
+      ClassLoader ocl = Thread.currentThread().getContextClassLoader();
+      try {
+        Thread.currentThread().setContextClassLoader(org.apache.hadoop.conf.Configuration.class.getClassLoader());
+        config = new Configuration();
+        config.set("fs.default.name", nameNode);
+      } finally {
+        Thread.currentThread().setContextClassLoader(ocl);
+      }
+      
+      GetSessionThread t = new GetSessionThread(nameNode,config,user);
+      try {
+        t.start();
+        t.finishUp();
+      } catch (InterruptedException e) {
+        t.interrupt();
+        throw new ManifoldCFException("Interrupted: " + e.getMessage(), e, ManifoldCFException.INTERRUPTED);
+      } catch (java.net.SocketTimeoutException e) {
+        handleIOException(e);
+      } catch (InterruptedIOException e) {
+        t.interrupt();
+        handleIOException(e);
+      } catch (URISyntaxException e) {
+        handleURISyntaxException(e);
+      } catch (IOException e) {
+        handleIOException(e);
+      }
+      session = t.getResult();
+    }
+    lastSessionFetch = System.currentTimeMillis();
+    return session;
+  }
+
+  /**
+   * Test the connection. Returns a string describing the connection
+   * integrity.
+   *
+   * @return the connection's status as a displayable string.
+   */
+  @Override
+  public String check() throws ManifoldCFException {
+    try {
+      checkConnection();
+      return super.check();
+    } catch (ServiceInterruption e) {
+      return "Connection temporarily failed: " + e.getMessage();
+    } catch (ManifoldCFException e) {
+      return "Connection failed: " + e.getMessage();
+    }
+  }
+
+  /** This method is called to assess whether this connector instance should
+  * actually be counted as being connected.
+  *@return true if the connector instance is actually connected.
+  */
+  @Override
+  public boolean isConnected()
+  {
+    return session != null;
+  }
+
+  /** Poll the connector, releasing the session if it has been idle for longer than the allowed time.
+   * @throws ManifoldCFException
+   */
+  @Override
+  public void poll() throws ManifoldCFException {
+    if (lastSessionFetch == -1L) {
+      return;
+    }
+
+    long currentTime = System.currentTimeMillis();
+    if (currentTime >= lastSessionFetch + timeToRelease) {
+      closeSession();
+    }
+  }
+
+  protected void closeSession()
+    throws ManifoldCFException {
+    if (session != null) {
+      try {
+        // This can in theory throw an IOException, so it is possible it is doing socket
+        // communication.  In practice, it's unlikely that there's any real IO, so I'm
+        // NOT putting it in a background thread for now.
+        session.close();
+      } catch (InterruptedIOException e) {
+        throw new ManifoldCFException(e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+      } catch (IOException e) {
+        Logging.connectors.warn("HDFS: Error closing connection: "+e.getMessage(),e);
+        // Eat the exception
+      } finally {
+        session = null;
+        lastSessionFetch = -1L;
+      }
+    }
+  }
+
+  /**
+   * Queue "seed" documents. Seed documents are the starting places for
+   * crawling activity. Documents are seeded when this method calls
+   * appropriate methods in the passed in ISeedingActivity object.
+   *
+   * This method can choose to find repository changes that happen only during
+   * the specified time interval. The seeds recorded by this method will be
+   * viewed by the framework based on what the getConnectorModel() method
+   * returns.
+   *
+   * It is not a big problem if the connector chooses to create more seeds
+   * than are strictly necessary; it is merely a question of overall work
+   * required.
+   *
+   * The times passed to this method may be interpreted for greatest
+   * efficiency. The time ranges any given job uses with this connector will
+   * not overlap, but will proceed starting at 0 and going to the "current
+   * time", each time the job is run. For continuous crawling jobs, this
+   * method will be called once, when the job starts, and at various periodic
+   * intervals as the job executes.
+   *
+   * When a job's specification is changed, the framework automatically resets
+   * the seeding start time to 0. The seeding start time may also be set to 0
+   * on each job run, depending on the connector model returned by
+   * getConnectorModel().
+   *
+   * Note that it is always ok to send MORE documents rather than fewer to this
+   * method.
+   *
+   * @param activities is the interface this method should use to perform
+   * whatever framework actions are desired.
+   * @param spec is a document specification (that comes from the job).
+   * @param startTime is the beginning of the time range to consider,
+   * inclusive.
+   * @param endTime is the end of the time range to consider, exclusive.
+   * @param jobMode is an integer describing how the job is being run, whether
+   * continuous or once-only.
+   */
+  @Override
+  public void addSeedDocuments(ISeedingActivity activities,
+      DocumentSpecification spec, long startTime, long endTime, int jobMode)
+      throws ManifoldCFException, ServiceInterruption {
+
+    String path = StringUtils.EMPTY;
+    int i = 0;
+    while (i < spec.getChildCount()) {
+      SpecificationNode sn = spec.getChild(i);
+      if (sn.getType().equals("startpoint")) {
+        path = sn.getAttributeValue("path");
+        
+        // Only existing directories are seeded; files beneath them are discovered during processing.
+        FileStatus fileStatus = getObject(new Path(path));
+        if (fileStatus != null && fileStatus.isDir()) {
+          activities.addSeedDocument(fileStatus.getPath().toUri().toString());
+        }
+      }
+      i++;
+    }
+  }
+
+  /** Get document versions given an array of document identifiers.
+   * This method is called for EVERY document that is considered. It is therefore important to perform
+   * as little work as possible here.
+   * The connector will be connected before this method can be called.
+   *@param documentIdentifiers is the array of local document identifiers, as understood by this connector.
+   *@param oldVersions is the corresponding array of version strings that have been saved for the document identifiers.
+   *   A null value indicates that this is a first-time fetch, while an empty string indicates that the previous document
+   *   had an empty version string.
+   *@param activities is the interface this method should use to perform whatever framework actions are desired.
+   *@param spec is the current document specification for the current job.  If there is a dependency on this
+   * specification, then the version string should include the pertinent data, so that reingestion will occur
+   * when the specification changes.  This is primarily useful for metadata.
+   *@param jobMode is an integer describing how the job is being run, whether continuous or once-only.
+   *@param usesDefaultAuthority will be true only if the authority in use for these documents is the default one.
+   *@return the corresponding version strings, with null in the places where the document no longer exists.
+   * Empty version strings indicate that there is no versioning ability for the corresponding document, and the document
+   * will always be processed.
+   */
+  @Override
+  public String[] getDocumentVersions(String[] documentIdentifiers, String[] oldVersions, IVersionActivity activities,
+    DocumentSpecification spec, int jobMode, boolean usesDefaultAuthority)
+    throws ManifoldCFException, ServiceInterruption
+  {
+    String[] rval = new String[documentIdentifiers.length];
+    for (int i = 0; i < rval.length; i++) {
+      String documentIdentifier = documentIdentifiers[i];
+      
+      FileStatus fileStatus = getObject(new Path(documentIdentifier));
+      if (fileStatus != null) {
+        if (fileStatus.isDir()) {
+          long lastModified = fileStatus.getModificationTime();
+          rval[i] = new Long(lastModified).toString();
+        } else {
+          long fileLength = fileStatus.getLen();
+          if (activities.checkLengthIndexable(fileLength)) {
+            long lastModified = fileStatus.getModificationTime();
+            StringBuilder sb = new StringBuilder();
+            // Check if the path is to be converted.  We record that info in the version string so that we'll reindex documents whose
+            // URI's change.
+            String nameNode = "hdfs://" + nameNodeHost + ":" + nameNodePort;
+            String convertPath = findConvertPath(nameNode, spec, fileStatus.getPath());
+            if (convertPath != null)
+            {
+              // Record the path.
+              sb.append("+");
+              pack(sb,convertPath,'+');
+            }
+            else
+              sb.append("-");
+            sb.append(new Long(lastModified).toString()).append(":").append(new Long(fileLength).toString());
+            rval[i] = sb.toString();
+          } else {
+            rval[i] = null;
+          }
+        }
+      } else {
+        rval[i] = null;
+      }
+    }
+    
+    return rval;
+  }
+
+
+  /** Process a set of documents.
+   * This is the method that should cause each document to be fetched, processed, and the results either added
+   * to the queue of documents for the current job, and/or entered into the incremental ingestion manager.
+   * The document specification allows this class to filter what is done based on the job.
+   *@param documentIdentifiers is the set of document identifiers to process.
+   *@param activities is the interface this method should use to queue up new document references
+   * and ingest documents.
+   *@param spec is the document specification.
+   *@param scanOnly is an array corresponding to the document identifiers.  It is set to true to indicate when the processing
+   * should only find other references, and should not actually call the ingestion methods.
+   */
+  @Override
+  public void processDocuments(String[] documentIdentifiers, String[] versions, IProcessActivity activities, DocumentSpecification spec, boolean[] scanOnly)
+    throws ManifoldCFException, ServiceInterruption {
+    for (int i = 0; i < documentIdentifiers.length; i++) {
+      String version = versions[i];
+      String documentIdentifier = documentIdentifiers[i];
+        
+      if (Logging.connectors.isDebugEnabled()) {
+        Logging.connectors.debug("HDFS: Processing document identifier '" + documentIdentifier + "'");
+      }
+      FileStatus fileStatus = getObject(new Path(documentIdentifier));
+        
+      if (fileStatus == null) {
+        // It is no longer there, so delete it right away
+        activities.deleteDocument(documentIdentifier,version);
+        continue;
+      }
+        
+      if (fileStatus.isDir()) {
+        /*
+         * Queue up stuff for directory
+         */
+        String entityReference = documentIdentifier;
+        FileStatus[] fileStatuses = getChildren(fileStatus.getPath());
+        if (fileStatuses == null) {
+          // Directory was deleted, so remove
+          activities.deleteDocument(documentIdentifier,version);
+          continue;
+        }
+        for (int j = 0; j < fileStatuses.length; j++) {
+          FileStatus fs = fileStatuses[j];
+          String canonicalPath = fs.getPath().toString();
+          if (checkInclude(session.getUri().toString(),fs,canonicalPath,spec)) {
+            activities.addDocumentReference(canonicalPath,documentIdentifier,RELATIONSHIP_CHILD);
+          }
+        }
+      } else {
+        if (scanOnly[i])
+          continue;
+        if (!checkIngest(session.getUri().toString(),fileStatus,spec))
+          continue;
+
+        // Get the WGet conversion path out of the version string
+        String convertPath = null;
+        if (version.length() > 0 && version.startsWith("+"))
+        {
+          StringBuilder unpack = new StringBuilder();
+          unpack(unpack, version, 1, '+');
+          convertPath = unpack.toString();
+        }
+
+        // It is a file to be indexed.
+        
+        // Prepare the metadata part of RepositoryDocument
+        RepositoryDocument data = new RepositoryDocument();
+
+        data.setFileName(fileStatus.getPath().getName());
+        data.setMimeType(mapExtensionToMimeType(fileStatus.getPath().getName()));
+        data.setModifiedDate(new Date(fileStatus.getModificationTime()));
+
+        String uri;
+        if (convertPath != null) {
+          uri = convertToWGETURI(convertPath);
+        } else {
+          uri = fileStatus.getPath().toUri().toString();
+        }
+        data.addField("uri",uri);
+
+        // We will record document fetch as an activity
+        long startTime = System.currentTimeMillis();
+        String errorCode = "FAILED";
+        String errorDesc = StringUtils.EMPTY;
+        long fileSize = 0;
+
+        try {
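+          // Stream the file contents through a background thread; the handlers below take care of interruption and IO errors.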
+          BackgroundStreamThread t = new BackgroundStreamThread(getSession(),new Path(documentIdentifier));
+          try {
+            t.start();
+            boolean wasInterrupted = false;
+            try {
+              InputStream is = t.getSafeInputStream();
+              try {
+                data.setBinary(is, fileStatus.getLen());
+                activities.ingestDocument(documentIdentifier,version,uri,data);
+              } finally {
+                is.close();
+              }
+            } catch (java.net.SocketTimeoutException e) {
+              throw e;
+            } catch (InterruptedIOException e) {
+              wasInterrupted = true;
+              throw e;
+            } catch (ManifoldCFException e) {
+              if (e.getErrorCode() == ManifoldCFException.INTERRUPTED) {
+                wasInterrupted = true;
+              }
+              throw e;
+            } finally {
+              if (!wasInterrupted) {
+                // This does a join
+                t.finishUp();
+              }
+            }
+
+            // No errors.  Record the fact that we made it.
+            errorCode = "OK";
+            // Length we did in bytes
+            fileSize = fileStatus.getLen();
+
+          } catch (InterruptedException e) {
+            // We were interrupted out of the join, most likely.  Before we abandon the thread,
+            // send a courtesy interrupt.
+            t.interrupt();
+            throw new ManifoldCFException("Interrupted: " + e.getMessage(), e, ManifoldCFException.INTERRUPTED);
+          } catch (java.net.SocketTimeoutException e) {
+            errorCode = "IO ERROR";
+            errorDesc = e.getMessage();
+            handleIOException(e);
+          } catch (InterruptedIOException e) {
+            t.interrupt();
+            throw new ManifoldCFException("Interrupted: " + e.getMessage(), e, ManifoldCFException.INTERRUPTED);
+          } catch (IOException e) {
+            errorCode = "IO ERROR";
+            errorDesc = e.getMessage();
+            handleIOException(e);
+          }
+        } finally {
+          activities.recordActivity(new Long(startTime),ACTIVITY_READ,new Long(fileSize),documentIdentifier,errorCode,errorDesc,null);
+        }
+      }
+    }
+  }
+
+  // UI support methods.
+  //
+  // These support methods come in two varieties.  The first bunch is involved in setting up connection configuration information.  The second bunch
+  // is involved in presenting and editing document specification information for a job.  The two kinds of methods are accordingly treated differently,
+  // in that the first bunch cannot assume that the current connector object is connected, while the second bunch can.  That is why the first bunch
+  // receives a thread context argument for all UI methods, while the second bunch does not need one (since it has already been applied via the connect()
+  // method, above).
+    
+  /** Output the configuration header section.
+   * This method is called in the head section of the connector's configuration page.  Its purpose is to add the required tabs to the list, and to output any
+   * javascript methods that might be needed by the configuration editing HTML.
+   *@param threadContext is the local thread context.
+   *@param out is the output to which any HTML should be sent.
+   *@param parameters are the configuration parameters, as they currently exist, for this connection being configured.
+   *@param tabsArray is an array of tab names.  Add to this array any tab names that are specific to the connector.
+   */
+  @Override
+  public void outputConfigurationHeader(IThreadContext threadContext, IHTTPOutput out, Locale locale, ConfigParams parameters, List<String> tabsArray) throws ManifoldCFException, IOException
+  {
+    tabsArray.add(Messages.getString(locale,"HDFSRepositoryConnector.ServerTabName"));
+    
+    out.print(
+"<script type=\"text/javascript\">\n"+
+"<!--\n"+
+"function checkConfigForSave()\n"+
+"{\n"+
+"  if (editconnection.namenodehost.value == \"\")\n"+
+"  {\n"+
+"    alert(\""+Messages.getBodyJavascriptString(locale,"HDFSRepositoryConnector.NameNodeHostCannotBeNull")+"\");\n"+
+"    SelectTab(\""+Messages.getBodyJavascriptString(locale,"HDFSRepositoryConnector.ServerTabName")+"\");\n"+
+"    editconnection.namenodehost.focus();\n"+
+"    return false;\n"+
+"  }\n"+
+"  if (editconnection.namenodeport.value == \"\")\n"+
+"  {\n"+
+"    alert(\""+Messages.getBodyJavascriptString(locale,"HDFSRepositoryConnector.NameNodePortCannotBeNull")+"\");\n"+
+"    SelectTab(\""+Messages.getBodyJavascriptString(locale,"HDFSRepositoryConnector.ServerTabName")+"\");\n"+
+"    editconnection.namenodeport.focus();\n"+
+"    return false;\n"+
+"  }\n"+
+"  if (!isInteger(editconnection.namenodeport.value))\n"+
+"  {\n"+
+"    alert(\""+Messages.getBodyJavascriptString(locale,"HDFSRepositoryConnector.NameNodePortMustBeAnInteger")+"\");\n"+
+"    SelectTab(\""+Messages.getBodyJavascriptString(locale,"HDFSRepositoryConnector.ServerTabName")+"\");\n"+
+"    editconnection.namenodeport.focus();\n"+
+"    return false;\n"+
+"  }\n"+
+"  if (editconnection.user.value == \"\")\n"+
+"  {\n"+
+"    alert(\""+Messages.getBodyJavascriptString(locale,"HDFSRepositoryConnector.UserCannotBeNull")+"\");\n"+
+"    SelectTab(\""+Messages.getBodyJavascriptString(locale,"HDFSRepositoryConnector.ServerTabName")+"\");\n"+
+"    editconnection.user.focus();\n"+
+"    return false;\n"+
+"  }\n"+
+"  return true;\n"+
+"}\n"+
+"\n"+
+"//-->\n"+
+"</script>\n"
+    );
+  }
+  
+  /** Output the configuration body section.
+  * This method is called in the body section of the connector's configuration page.  Its purpose is to present the required form elements for editing.
+  * The coder can presume that the HTML that is output from this configuration will be within appropriate <html>, <body>, and <form> tags.  The name of the
+  * form is "editconnection".
+  *@param threadContext is the local thread context.
+  *@param out is the output to which any HTML should be sent.
+  *@param parameters are the configuration parameters, as they currently exist, for this connection being configured.
+  *@param tabName is the current tab name.
+  */
+  @Override
+  public void outputConfigurationBody(IThreadContext threadContext, IHTTPOutput out, Locale locale, ConfigParams parameters, String tabName)
+    throws ManifoldCFException, IOException
+  {
+    String nameNodeHost = parameters.getParameter("namenodehost");
+    if (nameNodeHost == null) {
+      nameNodeHost = "localhost";
+    }
+    
+    String nameNodePort = parameters.getParameter("namenodeport");
+    if (nameNodePort == null) {
+      nameNodePort = "9000";
+    }
+
+    String user = parameters.getParameter("user");
+    if (user == null) {
+      user = "";
+    }
+    
+    if (tabName.equals(Messages.getString(locale,"HDFSRepositoryConnector.ServerTabName")))
+    {
+      out.print(
+"<table class=\"displaytable\">\n"+
+"  <tr>\n"+
+"    <td class=\"description\"><nobr>" + Messages.getBodyString(locale,"HDFSRepositoryConnector.NameNodeHost") + "</nobr></td>\n"+
+"    <td class=\"value\">\n"+
+"      <input name=\"namenodehost\" type=\"text\" size=\"32\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(nameNodeHost)+"\"/>\n"+
+"    </td>\n"+
+"  </tr>\n"+
+"  <tr>\n"+
+"    <td class=\"description\"><nobr>" + Messages.getBodyString(locale,"HDFSRepositoryConnector.NameNodePort") + "</nobr></td>\n"+
+"    <td class=\"value\">\n"+
+"      <input name=\"namenodeport\" type=\"text\" size=\"5\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(nameNodePort)+"\"/>\n"+
+"    </td>\n"+
+"  </tr>\n"+
+"  <tr>\n"+
+"    <td class=\"description\"><nobr>" + Messages.getBodyString(locale,"HDFSRepositoryConnector.User") + "</nobr></td>\n"+
+"    <td class=\"value\">\n"+
+"      <input name=\"user\" type=\"text\" size=\"32\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(user)+"\"/>\n"+
+"    </td>\n"+
+"  </tr>\n"+
+"</table>\n"
+      );
+    }
+    else
+    {
+      // Server tab hidden fields
+      out.print(
+"<input type=\"hidden\" name=\"namenodehost\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(nameNodeHost)+"\"/>\n"+
+"<input type=\"hidden\" name=\"namenodeport\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(nameNodePort)+"\"/>\n"+
+"<input type=\"hidden\" name=\"user\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(user)+"\"/>\n"
+      );
+    }
+  }
+  
+  /** Process a configuration post.
+   * This method is called at the start of the connector's configuration page, whenever there is a possibility that form data for a connection has been
+   * posted.  Its purpose is to gather form information and modify the configuration parameters accordingly.
+   * The name of the posted form is "editconnection".
+   *@param threadContext is the local thread context.
+   *@param variableContext is the set of variables available from the post, including binary file post information.
+   *@param parameters are the configuration parameters, as they currently exist, for this connection being configured.
+   *@return null if all is well, or a string error message if there is an error that should prevent saving of the connection (and cause a redirection to an error page).
+   */
+  @Override
+  public String processConfigurationPost(IThreadContext threadContext, IPostParameters variableContext, ConfigParams parameters)
+    throws ManifoldCFException
+  {
+    String nameNodeHost = variableContext.getParameter("namenodehost");
+    if (nameNodeHost != null) {
+      parameters.setParameter("namenodehost", nameNodeHost);
+    }
+
+    String nameNodePort = variableContext.getParameter("namenodeport");
+    if (nameNodePort != null) {
+      parameters.setParameter("namenodeport", nameNodePort);
+    }
+
+    String user = variableContext.getParameter("user");
+    if (user != null) {
+      parameters.setParameter("user", user);
+    }
+
+    return null;
+  }
+  
+  /** View configuration.
+   * This method is called in the body section of the connector's view configuration page.  Its purpose is to present the connection information to the user.
+   * The coder can presume that the HTML that is output from this configuration will be within appropriate <html> and <body> tags.
+   *@param threadContext is the local thread context.
+   *@param out is the output to which any HTML should be sent.
+   *@param parameters are the configuration parameters, as they currently exist, for this connection being configured.
+   */
+  @Override
+  public void viewConfiguration(IThreadContext threadContext, IHTTPOutput out, Locale locale, ConfigParams parameters)
+    throws ManifoldCFException, IOException
+  {
+    String nameNodeHost = parameters.getParameter("namenodehost");
+    String nameNodePort = parameters.getParameter("namenodeport");
+    String user = parameters.getParameter("user");
+    
+    out.print(
+"<table class=\"displaytable\">\n"+
+"  <tr>\n"+
+"    <td class=\"description\"><nobr>" + Messages.getBodyString(locale,"HDFSRepositoryConnector.NameNodeHost") + "</nobr></td>\n"+
+"    <td class=\"value\">\n"+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(nameNodeHost)+"</td>\n"+
+"  </tr>\n"+
+"  <tr>\n"+
+"    <td class=\"description\"><nobr>" + Messages.getBodyString(locale,"HDFSRepositoryConnector.NameNodePort") + "</nobr></td>\n"+
+"    <td class=\"value\">\n"+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(nameNodePort)+"</td>\n"+
+"  </tr>\n"+
+"  <tr>\n"+
+"    <td class=\"description\"><nobr>" + Messages.getBodyString(locale,"HDFSRepositoryConnector.User") + "</nobr></td>\n"+
+"    <td class=\"value\">\n"+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(user)+"</td>\n"+
+"  </tr>\n"+
+"</table>\n"
+    );
+  }
+  
+  /** Output the specification header section.
+   * This method is called in the head section of a job page which has selected a repository connection of the current type.  Its purpose is to add the required tabs
+   * to the list, and to output any javascript methods that might be needed by the job editing HTML.
+   *@param out is the output to which any HTML should be sent.
+   *@param ds is the current document specification for this job.
+   *@param tabsArray is an array of tab names.  Add to this array any tab names that are specific to the connector.
+   */
+  @Override
+  public void outputSpecificationHeader(IHTTPOutput out, Locale locale, DocumentSpecification ds, List<String> tabsArray)
+    throws ManifoldCFException, IOException
+  {
+    tabsArray.add(Messages.getString(locale,"HDFSRepositoryConnector.Paths"));
+
+    out.print(
+"<script type=\"text/javascript\">\n"+
+"<!--\n"+
+"function checkSpecification()\n"+
+"{\n"+
+"  // Does nothing right now.\n"+
+"  return true;\n"+
+"}\n"+
+"\n"+
+"function SpecOp(n, opValue, anchorvalue)\n"+
+"{\n"+
+"  eval(\"editjob.\"+n+\".value = \\\"\"+opValue+\"\\\"\");\n"+
+"  postFormSetAnchor(anchorvalue);\n"+
+"}\n"+
+"//-->\n"+
+"</script>\n"
+    );
+  }
+  
+  /** Output the specification body section.
+   * This method is called in the body section of a job page which has selected a repository connection of the current type.  Its purpose is to present the required form elements for editing.
+   * The coder can presume that the HTML that is output from this configuration will be within appropriate <html>, <body>, and <form> tags.  The name of the
+   * form is "editjob".
+   *@param out is the output to which any HTML should be sent.
+   *@param ds is the current document specification for this job.
+   *@param tabName is the current tab name.
+   */
+  @Override
+  public void outputSpecificationBody(IHTTPOutput out, Locale locale, DocumentSpecification ds, String tabName)
+    throws ManifoldCFException, IOException
+  {
+    int i;
+    int k;
+
+    // Paths tab
+    if (tabName.equals(Messages.getString(locale,"HDFSRepositoryConnector.Paths")))
+    {
+      out.print(
+"<table class=\"displaytable\">\n"+
+"  <tr><td class=\"separator\" colspan=\"3\"><hr/></td></tr>\n"+
+"  <tr>\n"+
+"    <td class=\"description\"><nobr>" + Messages.getBodyString(locale,"HDFSRepositoryConnector.Paths2") + "</nobr></td>\n"+
+"    <td class=\"boxcell\">\n"+
+"      <table class=\"formtable\">\n"+
+"        <tr class=\"formheaderrow\">\n"+
+"          <td class=\"formcolumnheader\"></td>\n"+
+"          <td class=\"formcolumnheader\"><nobr>" + Messages.getBodyString(locale,"HDFSRepositoryConnector.RootPath") + "</nobr></td>\n"+
+"          <td class=\"formcolumnheader\"><nobr>" + Messages.getBodyString(locale,"HDFSRepositoryConnector.ConvertToURI") + "<br/>" + Messages.getBodyString(locale,"HDFSRepositoryConnector.ConvertToURIExample")+ "</nobr></td>\n"+
+"          <td class=\"formcolumnheader\"><nobr>" + Messages.getBodyString(locale,"HDFSRepositoryConnector.Rules") + "</nobr></td>\n"+
+"        </tr>\n"
+      );
+      i = 0;
+      k = 0;
+      while (i < ds.getChildCount())
+      {
+        SpecificationNode sn = ds.getChild(i++);
+        if (sn.getType().equals("startpoint"))
+        {
+          String pathDescription = "_"+Integer.toString(k);
+          String pathOpName = "specop"+pathDescription;
+          
+          String path = sn.getAttributeValue("path");
+          String convertToURIString = sn.getAttributeValue("converttouri");
+
+          boolean convertToURI = false;
+          if (convertToURIString != null && convertToURIString.equals("true"))
+            convertToURI = true;
+
+          out.print(
+"        <tr class=\""+(((k % 2)==0)?"evenformrow":"oddformrow")+"\">\n"+
+"          <td class=\"formcolumncell\">\n"+
+"            <input type=\"hidden\" name=\""+pathOpName+"\" value=\"\"/>\n"+
+"            <input type=\"hidden\" name=\""+"specpath"+pathDescription+"\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(sn.getAttributeValue("path"))+"\"/>\n"+
+"            <a name=\""+"path_"+Integer.toString(k)+"\">\n"+
+"              <input type=\"button\" value=\"" + Messages.getAttributeString(locale,"HDFSRepositoryConnector.Delete") + "\" onClick='Javascript:SpecOp(\""+pathOpName+"\",\"Delete\",\"path_"+Integer.toString(k)+"\")' alt=\""+Messages.getAttributeString(locale,"HDFSRepositoryConnector.DeletePath")+Integer.toString(k)+"\"/>\n"+
+"            </a>\n"+
+"          </td>\n"+
+"          <td class=\"formcolumncell\">\n"+
+"            <nobr>\n"+
+"              "+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(path)+" \n"+
+"            </nobr>\n"+
+"          </td>\n"+
+"          <td class=\"formcolumncell\">\n"+
+"            <input type=\"hidden\" name=\"converttouri"+pathDescription+"\" value=\""+(convertToURI?"true":"false")+"\">\n"+
+"            <nobr>\n"+
+"              "+(convertToURI?Messages.getBodyString(locale,"HDFSRepositoryConnector.Yes"):Messages.getBodyString(locale,"HDFSRepositoryConnector.No"))+" \n"+
+"            </nobr>\n"+
+"          </td>\n"+
+"          <td class=\"boxcell\">\n"+
+"            <input type=\"hidden\" name=\""+"specchildcount"+pathDescription+"\" value=\""+Integer.toString(sn.getChildCount())+"\"/>\n"+
+"            <table class=\"formtable\">\n"+
+"              <tr class=\"formheaderrow\">\n"+
+"                <td class=\"formcolumnheader\"></td>\n"+
+"                <td class=\"formcolumnheader\"><nobr>" + Messages.getBodyString(locale,"HDFSRepositoryConnector.IncludeExclude") + "</nobr></td>\n"+
+"                <td class=\"formcolumnheader\"><nobr>" + Messages.getBodyString(locale,"HDFSRepositoryConnector.FileDirectory") + "</nobr></td>\n"+
+"                <td class=\"formcolumnheader\"><nobr>" + Messages.getBodyString(locale,"HDFSRepositoryConnector.Match") + "</nobr></td>\n"+
+"              </tr>\n"
+          );
+          int j = 0;
+          while (j < sn.getChildCount())
+          {
+            SpecificationNode excludeNode = sn.getChild(j);
+            String instanceDescription = "_"+Integer.toString(k)+"_"+Integer.toString(j);
+            String instanceOpName = "specop" + instanceDescription;
+
+            String nodeFlavor = excludeNode.getType();
+            String nodeType = excludeNode.getAttributeValue("type");
+            String nodeMatch = excludeNode.getAttributeValue("match");
+            out.print(
+"              <tr class=\"evenformrow\">\n"+
+"                <td class=\"formcolumncell\">\n"+
+"                  <nobr>\n"+
+"                    <input type=\"button\" value=\"" + Messages.getAttributeString(locale,"HDFSRepositoryConnector.InsertHere") + "\" onClick='Javascript:SpecOp(\"specop"+instanceDescription+"\",\"Insert Here\",\"match_"+Integer.toString(k)+"_"+Integer.toString(j+1)+"\")' alt=\""+Messages.getAttributeString(locale,"HDFSRepositoryConnector.InsertNewMatchForPath")+Integer.toString(k)+" before position #"+Integer.toString(j)+"\"/>\n"+
+"                  </nobr>\n"+
+"                </td>\n"+
+"                <td class=\"formcolumncell\">\n"+
+"                  <nobr>\n"+
+"                    <select name=\""+"specflavor"+instanceDescription+"\">\n"+
+"                      <option value=\"include\">" + Messages.getBodyString(locale,"HDFSRepositoryConnector.include") + "</option>\n"+
+"                      <option value=\"exclude\">" + Messages.getBodyString(locale,"HDFSRepositoryConnector.exclude") + "</option>\n"+
+"                    </select>\n"+
+"                  </nobr>\n"+
+"                </td>\n"+
+"                <td class=\"formcolumncell\">\n"+
+"                  <nobr>\n"+
+"                    <select name=\""+"spectype"+instanceDescription+"\">\n"+
+"                      <option value=\"file\">" + Messages.getBodyString(locale,"HDFSRepositoryConnector.File") + "</option>\n"+
+"                      <option value=\"directory\">" + Messages.getBodyString(locale,"HDFSRepositoryConnector.Directory") + "</option>\n"+
+"                    </select>\n"+
+"                  </nobr>\n"+
+"                </td>\n"+
+"                <td class=\"formcolumncell\">\n"+
+"                  <nobr>\n"+
+"                    <input type=\"text\" size=\"10\" name=\""+"specmatch"+instanceDescription+"\" value=\"\"/>\n"+
+"                  </nobr>\n"+
+"                </td>\n"+
+"              </tr>\n"+
+"              <tr class=\"oddformrow\">\n"+
+"                <td class=\"formcolumncell\">\n"+
+"                  <nobr>\n"+
+"                    <input type=\"hidden\" name=\""+"specop"+instanceDescription+"\" value=\"\"/>\n"+
+"                    <input type=\"hidden\" name=\""+"specfl"+instanceDescription+"\" value=\""+nodeFlavor+"\"/>\n"+
+"                    <input type=\"hidden\" name=\""+"specty"+instanceDescription+"\" value=\""+nodeType+"\"/>\n"+
+"                    <input type=\"hidden\" name=\""+"specma"+instanceDescription+"\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(nodeMatch)+"\"/>\n"+
+"                    <a name=\""+"match_"+Integer.toString(k)+"_"+Integer.toString(j)+"\">\n"+
+"                      <input type=\"button\" value=\"" + Messages.getAttributeString(locale,"HDFSRepositoryConnector.Delete") + "\" onClick='Javascript:SpecOp(\"specop"+instanceDescription+"\",\"Delete\",\"match_"+Integer.toString(k)+"_"+Integer.toString(j)+"\")' alt=\""+Messages.getAttributeString(locale,"HDFSRepositoryConnector.DeletePath")+Integer.toString(k)+", match spec #"+Integer.toString(j)+"\"/>\n"+
+"                    </a>\n"+
+"                  </nobr>\n"+
+"                </td>\n"+
+"                <td class=\"formcolumncell\">\n"+
+"                  <nobr>\n"+
+"                    "+nodeFlavor+"\n"+
+"                  </nobr>\n"+
+"                </td>\n"+
+"                <td class=\"formcolumncell\">\n"+
+"                  <nobr>\n"+
+"                    "+nodeType+"\n"+
+"                  </nobr>\n"+
+"                </td>\n"+
+"                <td class=\"formcolumncell\">\n"+
+"                  <nobr>\n"+
+"                    "+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(nodeMatch)+"\n"+
+"                  </nobr>\n"+
+"                </td>\n"+
+"              </tr>\n"
+            );
+            j++;
+          }
+          if (j == 0)
+          {
+            out.print(
+"              <tr class=\"formrow\"><td class=\"formcolumnmessage\" colspan=\"4\">" + Messages.getBodyString(locale,"HDFSRepositoryConnector.NoRulesDefined") + "</td></tr>\n"
+            );
+          }
+          out.print(
+"              <tr class=\"formrow\"><td class=\"lightseparator\" colspan=\"4\"><hr/></td></tr>\n"+
+"              <tr class=\"formrow\">\n"+
+"                <td class=\"formcolumncell\">\n"+
+"                  <a name=\""+"match_"+Integer.toString(k)+"_"+Integer.toString(j)+"\">\n"+
+"                    <input type=\"button\" value=\"" + Messages.getAttributeString(locale,"HDFSRepositoryConnector.Add") + "\" onClick='Javascript:SpecOp(\""+pathOpName+"\",\"Add\",\"match_"+Integer.toString(k)+"_"+Integer.toString(j+1)+"\")' alt=\""+Messages.getAttributeString(locale,"HDFSRepositoryConnector.AddNewMatchForPath")+Integer.toString(k)+"\"/>\n"+
+"                  </a>\n"+
+"                </td>\n"+
+"                <td class=\"formcolumncell\">\n"+
+"                  <nobr>\n"+
+"                    <select name=\""+"specflavor"+pathDescription+"\">\n"+
+"                      <option value=\"include\">" + Messages.getBodyString(locale,"HDFSRepositoryConnector.include") + "</option>\n"+
+"                      <option value=\"exclude\">" + Messages.getBodyString(locale,"HDFSRepositoryConnector.exclude") + "</option>\n"+
+"                    </select>\n"+
+"                  </nobr>\n"+
+"                </td>\n"+
+"                <td class=\"formcolumncell\">\n"+
+"                  <nobr>\n"+
+"                    <select name=\""+"spectype"+pathDescription+"\">\n"+
+"                      <option value=\"file\">" + Messages.getBodyString(locale,"HDFSRepositoryConnector.File") + "</option>\n"+
+"                      <option value=\"directory\">" + Messages.getBodyString(locale,"HDFSRepositoryConnector.Directory") + "</option>\n"+
+"                    </select>\n"+
+"                  </nobr>\n"+
+"                </td>\n"+
+"                <td class=\"formcolumncell\">\n"+
+"                  <nobr>\n"+
+"                    <input type=\"text\" size=\"10\" name=\""+"specmatch"+pathDescription+"\" value=\"\"/>\n"+
+"                  </nobr>\n"+
+"                </td>\n"+
+"              </tr>\n"+
+"            </table>\n"+
+"          </td>\n"+
+"        </tr>\n"
+          );
+          k++;
+        }
+      }
+      if (k == 0)
+      {
+        out.print(
+"        <tr class=\"formrow\"><td class=\"formcolumnmessage\" colspan=\"4\">" + Messages.getBodyString(locale,"HDFSRepositoryConnector.NoDocumentsSpecified") + "</td></tr>\n"
+        );
+      }
+      out.print(
+"        <tr class=\"formrow\"><td class=\"lightseparator\" colspan=\"4\"><hr/></td></tr>\n"+
+"        <tr class=\"formrow\">\n"+
+"          <td class=\"formcolumncell\">\n"+
+"            <nobr>\n"+
+"              <a name=\""+"path_"+Integer.toString(k)+"\">\n"+
+"                <input type=\"button\" value=\"" + Messages.getAttributeString(locale,"HDFSRepositoryConnector.Add") + "\" onClick='Javascript:SpecOp(\"specop\",\"Add\",\"path_"+Integer.toString(i+1)+"\")' alt=\"" + Messages.getAttributeString(locale,"HDFSRepositoryConnector.AddNewPath") + "\"/>\n"+
+"                <input type=\"hidden\" name=\"pathcount\" value=\""+Integer.toString(k)+"\"/>\n"+
+"                <input type=\"hidden\" name=\"specop\" value=\"\"/>\n"+
+"              </a>\n"+
+"            </nobr>\n"+
+"          </td>\n"+
+"          <td class=\"formcolumncell\">\n"+
+"            <nobr>\n"+
+"              <input type=\"text\" size=\"30\" name=\"specpath\" value=\"\"/>\n"+
+"            </nobr>\n"+
+"          </td>\n"+
+"          <td class=\"formcolumncell\">\n"+
+"            <nobr>\n"+
+"              <input name=\"converttouri\" type=\"checkbox\" value=\"true\"/>\n"+
+"            </nobr>\n"+
+"          </td>\n"+
+"          <td class=\"formcolumncell\">\n"+
+"          </td>\n"+
+"        </tr>\n"+
+"      </table>\n"+
+"    </td>\n"+
+"  </tr>\n"+
+"</table>\n"
+      );
+    }
+    else
+    {
+      i = 0;
+      k = 0;
+      while (i < ds.getChildCount())
+      {
+        SpecificationNode sn = ds.getChild(i++);
+        if (sn.getType().equals("startpoint"))
+        {
+          String pathDescription = "_"+Integer.toString(k);
+          
+          String path = sn.getAttributeValue("path");
+          String convertToURIString = sn.getAttributeValue("converttouri");
+
+          boolean convertToURI = false;
+          if (convertToURIString != null && convertToURIString.equals("true"))
+            convertToURI = true;
+
+          out.print(
+"<input type=\"hidden\" name=\"specpath"+pathDescription+"\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(path)+"\"/>\n"+
+"<input type=\"hidden\" name=\"converttouri"+pathDescription+"\" value=\""+(convertToURI?"true":"false")+"\">\n"+
+"<input type=\"hidden\" name=\"specchildcount"+pathDescription+"\" value=\""+Integer.toString(sn.getChildCount())+"\"/>\n"
+          );
+
+          int j = 0;
+          while (j < sn.getChildCount())
+          {
+            SpecificationNode excludeNode = sn.getChild(j);
+            String instanceDescription = "_"+Integer.toString(k)+"_"+Integer.toString(j);
+
+            String nodeFlavor = excludeNode.getType();
+            String nodeType = excludeNode.getAttributeValue("type");
+            String nodeMatch = excludeNode.getAttributeValue("match");
+            out.print(
+"<input type=\"hidden\" name=\"specfl"+instanceDescription+"\" value=\""+nodeFlavor+"\"/>\n"+
+"<input type=\"hidden\" name=\"specty"+instanceDescription+"\" value=\""+nodeType+"\"/>\n"+
+"<input type=\"hidden\" name=\"specma"+instanceDescription+"\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(nodeMatch)+"\"/>\n"
+            );
+            j++;
+          }
+          k++;
+        }
+      }
+      out.print(
+"<input type=\"hidden\" name=\"pathcount\" value=\""+Integer.toString(k)+"\"/>\n"
+      );
+    }
+  }
+  
+  /** Process a specification post.
+   * This method is called at the start of the job's edit or view page, whenever there is a possibility that form data for a connection has been
+   * posted.  Its purpose is to gather form information and modify the document specification accordingly.
+   * The name of the posted form is "editjob".
+   *@param variableContext contains the post data, including binary file-upload information.
+   *@param ds is the current document specification for this job.
+   *@return null if all is well, or a string error message if there is an error that should prevent saving of the job (and cause a redirection to an error page).
+   */
+  @Override
+  public String processSpecificationPost(IPostParameters variableContext, Locale locale, DocumentSpecification ds)
+    throws ManifoldCFException
+  {
+    String x = variableContext.getParameter("pathcount");
+    if (x != null)
+    {
+      ds.clearChildren();
+      // Find out how many children were sent
+      int pathCount = Integer.parseInt(x);
+      // Gather up these
+      int i = 0;
+      int k = 0;
+      while (i < pathCount)
+      {
+        String pathDescription = "_"+Integer.toString(i);
+        String pathOpName = "specop"+pathDescription;
+        x = variableContext.getParameter(pathOpName);
+        if (x != null && x.equals("Delete"))
+        {
+          // Skip to the next
+          i++;
+          continue;
+        }
+        // Path inserts won't happen until the very end
+        String path = variableContext.getParameter("specpath"+pathDescription);
+        String convertToURI = variableContext.getParameter("converttouri"+pathDescription);
+
+        SpecificationNode node = new SpecificationNode("startpoint");
+        node.setAttribute("path",path);
+        if (convertToURI != null)
+          node.setAttribute("converttouri",convertToURI);
+
+        // Now, get the number of children
+        String y = variableContext.getParameter("specchildcount"+pathDescription);
+        int childCount = Integer.parseInt(y);
+        int j = 0;
+        int w = 0;
+        while (j < childCount)
+        {
+          String instanceDescription = "_"+Integer.toString(i)+"_"+Integer.toString(j);
+          // Look for an insert or a delete at this point
+          String instanceOp = "specop"+instanceDescription;
+          String z = variableContext.getParameter(instanceOp);
+          String flavor;
+          String type;
+          String match;
+          SpecificationNode sn;
+          if (z != null && z.equals("Delete"))
+          {
+            // Process the deletion as we gather
+            j++;
+            continue;
+          }
+          if (z != null && z.equals("Insert Here"))
+          {
+            // Process the insertion as we gather.
+            flavor = variableContext.getParameter("specflavor"+instanceDescription);
+            type = variableContext.getParameter("spectype"+instanceDescription);
+            match = variableContext.getParameter("specmatch"+instanceDescription);
+            sn = new SpecificationNode(flavor);
+            sn.setAttribute("type",type);
+            sn.setAttribute("match",match);
+            node.addChild(w++,sn);
+          }
+          flavor = variableContext.getParameter("specfl"+instanceDescription);
+          type = variableContext.getParameter("specty"+instanceDescription);
+          match = variableContext.getParameter("specma"+instanceDescription);
+          sn = new SpecificationNode(flavor);
+          sn.setAttribute("type",type);
+          sn.setAttribute("match",match);
+          node.addChild(w++,sn);
+          j++;
+        }
+        if (x != null && x.equals("Add"))
+        {
+          // Process adds to the end of the rules in-line
+          String match = variableContext.getParameter("specmatch"+pathDescription);
+          String type = variableContext.getParameter("spectype"+pathDescription);
+          String flavor = variableContext.getParameter("specflavor"+pathDescription);
+          SpecificationNode sn = new SpecificationNode(flavor);
+          sn.setAttribute("type",type);
+          sn.setAttribute("match",match);
+          node.addChild(w,sn);
+        }
+        ds.addChild(k++,node);
+        i++;
+      }
+
+      // See if there's a global add operation
+      String op = variableContext.getParameter("specop");
+      if (op != null && op.equals("Add"))
+      {
+        String path = variableContext.getParameter("specpath");
+        String convertToURI = variableContext.getParameter("converttouri");
+
+        SpecificationNode node = new SpecificationNode("startpoint");
+        node.setAttribute("path",path);
+        if (convertToURI != null)
+          node.setAttribute("converttouri",convertToURI);
+        
+        // Now add in the defaults; these will be "include all directories" and "include all files".
+        SpecificationNode sn = new SpecificationNode("include");
+        sn.setAttribute("type","file");
+        sn.setAttribute("match","*");
+        node.addChild(node.getChildCount(),sn);
+        sn = new SpecificationNode("include");
+        sn.setAttribute("type","directory");
+        sn.setAttribute("match","*");
+        node.addChild(node.getChildCount(),sn);
+
+        ds.addChild(k,node);
+      }
+    }
+    
+    /*
+     * "filepathtouri"
+     */
+    String filepathtouri = variableContext.getParameter("filepathtouri");
+    if (filepathtouri != null) {
+      SpecificationNode sn;
+      int i = 0;
+      while (i < ds.getChildCount()) {
+        if (ds.getChild(i).getType().equals("filepathtouri")) {
+          ds.removeChild(i);
+        } else {
+          i++;
+        }
+      }
+      sn = new SpecificationNode("filepathtouri");
+      sn.setValue(filepathtouri);
+      ds.addChild(ds.getChildCount(),sn);
+    }
+    
+    return null;
+  }
+  
+  /** View specification.
+   * This method is called in the body section of a job's view page.  Its purpose is to present the document specification information to the user.
+   * The coder can presume that the HTML that is output from this configuration will be within appropriate <html> and <body> tags.
+   *@param out is the output to which any HTML should be sent.
+   *@param ds is the current document specification for this job.
+   */
+  @Override
+  public void viewSpecification(IHTTPOutput out, Locale locale, DocumentSpecification ds)
+    throws ManifoldCFException, IOException
+  {
+    out.print(
+"<table class=\"displaytable\">\n"+
+"  <tr>\n"+
+"    <td class=\"description\">" + Messages.getAttributeString(locale,"HDFSRepositoryConnector.Paths2") + "</td>\n"+    
+"    <td class=\"boxcell\">\n"+
+"      <table class=\"formtable\">\n"+
+"        <tr class=\"formheaderrow\">\n"+
+"          <td class=\"formcolumnheader\"><nobr>" + Messages.getBodyString(locale,"HDFSRepositoryConnector.RootPath") + "</nobr></td>\n"+
+"          <td class=\"formcolumnheader\"><nobr>" + Messages.getBodyString(locale,"HDFSRepositoryConnector.ConvertToURI") + "<br/>" + Messages.getBodyString(locale,"HDFSRepositoryConnector.ConvertToURIExample")+ "</nobr></td>\n"+
+"          <td class=\"formcolumnheader\"><nobr>" + Messages.getBodyString(locale,"HDFSRepositoryConnector.Rules") + "</nobr></td>\n"+
+"        </tr>\n"
+    );
+    
+    int k = 0;
+    for (int i = 0; i < ds.getChildCount(); i++)
+    {
+      SpecificationNode sn = ds.getChild(i);
+      if (sn.getType().equals("startpoint"))
+      {
+        String path = sn.getAttributeValue("path");
+        String convertToURIString = sn.getAttributeValue("converttouri");
+        boolean convertToURI = false;
+        if (convertToURIString != null && convertToURIString.equals("true"))
+          convertToURI = true;
+        
+        out.print(
+"        <tr class=\""+(((k % 2)==0)?"evenformrow":"oddformrow")+"\">\n"+
+"          <td class=\"formcolumncell\">\n"+
+"            <nobr>\n"+
+"              "+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(path)+" \n"+
+"            </nobr>\n"+
+"          </td>\n"+
+"          <td class=\"formcolumncell\">\n"+
+"            <nobr>\n"+
+"              "+(convertToURI?Messages.getBodyString(locale,"HDFSRepositoryConnector.Yes"):Messages.getBodyString(locale,"HDFSRepositoryConnector.No"))+" \n"+
+"            </nobr>\n"+
+"          </td>\n"+
+"          <td class=\"boxcell\">\n"+
+"            <table class=\"formtable\">\n"+
+"              <tr class=\"formheaderrow\">\n"+
+"                <td class=\"formcolumnheader\"><nobr>" + Messages.getBodyString(locale,"HDFSRepositoryConnector.IncludeExclude") + "</nobr></td>\n"+
+"                <td class=\"formcolumnheader\"><nobr>" + Messages.getBodyString(locale,"HDFSRepositoryConnector.FileDirectory") + "</nobr></td>\n"+
+"                <td class=\"formcolumnheader\"><nobr>" + Messages.getBodyString(locale,"HDFSRepositoryConnector.Match") + "</nobr></td>\n"+
+"              </tr>\n"
+        );
+        
+        int l = 0;
+        for (int j = 0; j < sn.getChildCount(); j++)
+        {
+          SpecificationNode excludeNode = sn.getChild(j);
+
+          String nodeFlavor = excludeNode.getType();
+          String nodeType = excludeNode.getAttributeValue("type");
+          String nodeMatch = excludeNode.getAttributeValue("match");
+          out.print(
+"              <tr class=\""+(((l % 2)==0)?"evenformrow":"oddformrow")+"\">\n"+
+"                <td class=\"formcolumncell\">\n"+
+"                  <nobr>\n"+
+"                    "+nodeFlavor+"\n"+
+"                  </nobr>\n"+
+"                </td>\n"+
+"                <td class=\"formcolumncell\">\n"+
+"                  <nobr>\n"+
+"                    "+nodeType+"\n"+
+"                  </nobr>\n"+
+"                </td>\n"+
+"                <td class=\"formcolumncell\">\n"+
+"                  <nobr>\n"+
+"                    "+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(nodeMatch)+"\n"+
+"                  </nobr>\n"+
+"                </td>\n"+
+"              </tr>\n"
+          );
+          l++;
+        }
+
+        if (l == 0)
+        {
+          out.print(
+"              <tr><td class=\"formcolumnmessage\" colspan=\"3\">" + Messages.getBodyString(locale,"HDFSRepositoryConnector.NoRulesDefined") + "</td></tr>\n"
+          );
+        }
+
+        out.print(
+"            </table>\n"+
+"           </td>\n"
+        );
+
+        out.print(
+"        </tr>\n"
+        );
+
+        k++;
+      }
+      
+    }
+
+    if (k == 0)
+    {
+      out.print(
+"        <tr><td class=\"formcolumnmessage\" colspan=\"3\">" + Messages.getBodyString(locale,"HDFSRepositoryConnector.NoDocumentsSpecified") + "</td></tr>\n"
+      );
+    }
+    
+    out.print(
+"      </table>\n"+
+"    </td>\n"+
+"  </tr>\n"
+    );
+
+    out.print(
+"</table>\n"
+    );
+    
+  }
+
+  // Protected static methods
+
+  /** Convert a path to an HDFS wget URI.  The URI will be the unique key in
+  * the search index, and will be presented to the user as part of the search results.
+  *@param path is the portion of the document path to be converted.
+  *@return the document URI.
+  */
+  protected static String convertToWGETURI(String path)
+    throws ManifoldCFException
+  {
+    //
+    // Note well:  This MUST be a legal URI!!!
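+    // The convert path's first segment becomes the URI scheme, the second segment the host, and the remainder the path.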
+    try
+    {
+      String[] tmp = path.split("/", 3);
+      String scheme = "";
+      String host = "";
+      String other = "";
+      if (tmp.length >= 1)
+        scheme = tmp[0];
+      else
+        scheme = "hdfs";
+      if (tmp.length >= 2)
+        host = tmp[1];
+      else
+        host = "localhost:9000";
+      if (tmp.length >= 3)
+        other = "/" + tmp[2];
+      else
+        other = "/";
+      return new URI(scheme + "://" + host + other).toURL().toString();
+    }
+    catch (java.net.MalformedURLException e)
+    {
+      throw new ManifoldCFException("Bad url: "+e.getMessage(),e);
+    }
+    catch (URISyntaxException e)
+    {
+      throw new ManifoldCFException("Bad url: "+e.getMessage(),e);
+    }
+  }
+
+  /** This method finds the part of the path that should be converted to a URI.
+  * Returns null if the path should not be converted.
+  *@param nameNode is the hdfs://host:port prefix for the name node.
+  *@param spec is the document specification.
+  *@param theFile is the file whose path is to be examined.
+  *@return the part of the path to be converted, or null.
+  */
+  protected static String findConvertPath(String nameNode, DocumentSpecification spec, Path theFile)
+  {
+    String fullpath = theFile.toString();
+    for (int j = 0; j < spec.getChildCount(); j++)
+    {
+      SpecificationNode sn = spec.getChild(j);
+      if (sn.getType().equals("startpoint"))
+      {
+        String path = sn.getAttributeValue("path");
+        String convertToURI = sn.getAttributeValue("converttouri");
+        if (path.length() > 0 && convertToURI != null && convertToURI.equals("true"))
+        {
+          path = nameNode + path;
+          if (!path.endsWith("/"))
+            path += "/";
+          if (fullpath.startsWith(path))
+            return fullpath.substring(path.length());
+        }
+      }
+    }
+    return null;
+  }
+
+  /** Map an extension to a mime type */
+  protected static String mapExtensionToMimeType(String fileName)
+  {
+    int slashIndex = fileName.lastIndexOf("/");
+    if (slashIndex != -1)
+      fileName = fileName.substring(slashIndex+1);
+    int dotIndex = fileName.lastIndexOf(".");
+    if (dotIndex == -1)
+      return null;
+    return ExtensionMimeMap.mapToMimeType(fileName.substring(dotIndex+1).toLowerCase(java.util.Locale.ROOT));
+  }
+
+  /** Check if a file or directory should be included, given a document specification.
+   *@param nameNode is the hdfs://host:port prefix for the name node.
+   *@param fileStatus is the file or directory status.
+   *@param fileName is the canonical file name.
+   *@param documentSpecification is the specification.
+   *@return true if it should be included.
+   */
+  protected static boolean checkInclude(String nameNode, FileStatus fileStatus, String fileName, DocumentSpecification documentSpecification)
+    throws ManifoldCFException
+  {
+    if (Logging.connectors.isDebugEnabled())
+    {
+      Logging.connectors.debug("Checking whether to include file '"+fileName+"'");
+    }
+
+    String pathPart;
+    String filePart;
+    if (fileStatus.isDir())
+    {
+      pathPart = fileName;
+      filePart = null;
+    }
+    else
+    {
+      pathPart = fileStatus.getPath().getParent().toString();
+      filePart = fileStatus.getPath().getName();
+    }
+
+    // Scan until we match a startpoint
+    int i = 0;
+    while (i < documentSpecification.getChildCount())
+    {
+      SpecificationNode sn = documentSpecification.getChild(i++);
+      if (sn.getType().equals("startpoint"))
+      {
+        String path = null;
+        try {
+          path = new URI(nameNode).resolve(sn.getAttributeValue("path")).toString();
+        } catch (URISyntaxException e) {
+          // A bad path would otherwise surface as a NullPointerException below; report it as a configuration error instead.
+          throw new ManifoldCFException("Bad name node or path URI: "+e.getMessage(),e);
+        }
+        if (Logging.connectors.isDebugEnabled())
+        {
+          Logging.connectors.debug("Checking path '"+path+"' against canonical '"+pathPart+"'");
+        }
+        // Compare with filename
+        int matchEnd = matchSubPath(path,pathPart);
+        if (matchEnd == -1)
+        {
+          if (Logging.connectors.isDebugEnabled())
+          {
+            Logging.connectors.debug("Match check '"+path+"' against canonical '"+pathPart+"' failed");
+          }
+
+          continue;
+        }
+        // matchEnd is the start of the rest of the path (after the match) in fileName.
+        // We need to walk through the rules and see whether it's in or out.
+        int j = 0;
+        while (j < sn.getChildCount())
+        {
+          SpecificationNode node = sn.getChild(j++);
+          String flavor = node.getType();
+          String match = node.getAttributeValue("match");
+          String type = node.getAttributeValue("type");
+          // If type is "file", then our match string is against the filePart.
+          // If filePart is null, then this rule is simply skipped.
+          String sourceMatch;
+          int sourceIndex;
+          if (type.equals("file"))
+          {
+            if (filePart == null)
+            {
+              continue;
+            }
+            sourceMatch = filePart;
+            sourceIndex = 0;
+          }
+          else
+          {
+            if (filePart != null)
+            {
+              continue;
+            }
+            sourceMatch = pathPart;
+            sourceIndex = matchEnd;
+          }
+
+          if (flavor.equals("include"))
+          {
+            if (checkMatch(sourceMatch,sourceIndex,match))
+            {
+              return true;
+            }
+          }
+          else if (flavor.equals("exclude"))
+          {
+            if (checkMatch(sourceMatch,sourceIndex,match))
+            {
+              return false;
+            }
+          }
+        }
+      }
+    }
+    if (Logging.connectors.isDebugEnabled())
+    {
+      Logging.connectors.debug("Not including '"+fileName+"' because no matching rules");
+    }
+
+    return false;
+  }
+
+  /** Check if a file should be ingested, given a document specification.  It is presumed that
+   * documents that pass checkInclude() will be checked with this method.
+   *@param nameNode is the hdfs://host:port prefix for the name node.
+   *@param fileStatus is the file status object.
+   *@param documentSpecification is the specification.
+   */
+  protected static boolean checkIngest(String nameNode, FileStatus fileStatus, DocumentSpecification documentSpecification)
+    throws ManifoldCFException
+  {
+    // Since the only exclusions at this point are not based on file contents, this is a no-op.
+    // MHL
+    return true;
+  }
+
+  /** Match a sub-path.  The sub-path must match the complete starting part of the full path, in a path
+   * sense.  The returned value should point into the file name beyond the end of the matched path, or
+   * be -1 if there is no match.
+   *@param subPath is the sub path.
+   *@param fullPath is the full path.
+   *@return the index of the start of the remaining part of the full path, or -1.
+   */
+  protected static int matchSubPath(String subPath, String fullPath)
+  {
+    if (subPath.length() > fullPath.length())
+      return -1;
+    if (fullPath.startsWith(subPath) == false)
+      return -1;
+    int rval = subPath.length();
+    if (fullPath.length() == rval)
+      return rval;
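+    // If the character after the matched prefix is the path separator, skip it so that the
+    // returned index points at the start of the remaining path component.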
+    char x = fullPath.charAt(rval);
+    if (x == Path.SEPARATOR_CHAR)
+      rval++;
+    return rval;
+  }
+
+  /** Check a match between two strings with wildcards.
+   *@param sourceMatch is the expanded string (no wildcards)
+   *@param sourceIndex is the starting point in the expanded string.
+   *@param match is the wildcard-based string.
+   *@return true if there is a match.
+   */
+  protected static boolean checkMatch(String sourceMatch, int sourceIndex, String match)
+  {
+    // Note: The java regex stuff looks pretty heavyweight for this purpose.
+    // I've opted to try and do a simple recursive version myself, which is not compiled.
+    // Basically, the match proceeds by recursive descent through the string, so that all *'s cause
+    // recursion.
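+    // For example, matching source "abc.txt" against "*.txt": the '*' consumes "abc" through
+    // recursion, and the remaining ".txt" then matches literally.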
+    boolean caseSensitive = true;
+
+    return processCheck(caseSensitive, sourceMatch, sourceIndex, match, 0);
+  }
+
+  /** Recursive worker method for checkMatch.  Returns 'true' if there is a path that consumes both
+   * strings in their entirety in a matched way.
+   *@param caseSensitive is true if file names are case sensitive.
+   *@param sourceMatch is the source string (w/o wildcards)
+   *@param sourceIndex is the current point in the source string.
+   *@param match is the match string (w/wildcards)
+   *@param matchIndex is the current point in the match string.
+   *@return true if there is a match.
+   */
+  protected static boolean processCheck(boolean caseSensitive, String sourceMatch, int sourceIndex,
+    String match, int matchIndex)
+  {
+    // Logging.connectors.debug("Matching '"+sourceMatch+"' position "+Integer.toString(sourceIndex)+
+    //      " against '"+match+"' position "+Integer.toString(matchIndex));
+
+    // Match up through the next * we encounter
+    while (true)
+    {
+      // If we've reached the end, it's a match.
+      if (sourceMatch.length() == sourceIndex && match.length() == matchIndex)
+        return true;
+      // If one has reached the end but the other hasn't, no match
+      if (match.length() == matchIndex)
+        return false;
+      if (sourceMatch.length() == sourceIndex)
+      {
+        if (match.charAt(matchIndex) != '*')
+          return false;
+        matchIndex++;
+        continue;
+      }
+      char x = sourceMatch.charAt(sourceIndex);
+      char y = match.charAt(matchIndex);
+      if (!caseSensitive)
+      {
+        if (x >= 'A' && x <= 'Z')
+          x -= 'A'-'a';
+        if (y >= 'A' && y <= 'Z')
+          y -= 'A'-'a';
+      }
+      if (y == '*')
+      {
+        // Wildcard!
+        // We will recurse at this point.
+        // Basically, we want to combine the results for leaving the "*" in the match string
+        // at this point and advancing the source index, with skipping the "*" and leaving the source
+        // string alone.
+        return processCheck(caseSensitive,sourceMatch,sourceIndex+1,match,matchIndex) ||
+          processCheck(caseSensitive,sourceMatch,sourceIndex,match,matchIndex+1);
+      }
+      if (y == '?' || x == y)
+      {
+        sourceIndex++;
+        matchIndex++;
+      }
+      else
+        return false;
+    }
+  }
+
+  /**
+   * @param e
+   * @throws ManifoldCFException
+   * @throws ServiceInterruption
+   */
+  private static void handleIOException(IOException e) throws ManifoldCFException, ServiceInterruption {
+    if (!(e instanceof java.net.SocketTimeoutException) && (e instanceof InterruptedIOException)) {
+      throw new ManifoldCFException("Interrupted: " + e.getMessage(), e, ManifoldCFException.INTERRUPTED);
+    }
+    Logging.connectors.warn("HDFS: IO exception: "+e.getMessage(),e);
+    long currentTime = System.currentTimeMillis();
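+    // Retry in five minutes; allow up to three hours of failures before giving up on the document
+    // (-1 = no retry-count limit, false = do not abort the job on final failure).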
+    throw new ServiceInterruption("IO exception: "+e.getMessage(), e, currentTime + 300000L, currentTime + 3 * 60 * 60000L,-1,false);
+  }
+  
+  /**
+   * @param e
+   * @throws ManifoldCFException
+   * @throws ServiceInterruption
+   */
+  private static void handleURISyntaxException(URISyntaxException e) throws ManifoldCFException, ServiceInterruption {
+    // Permanent problem
+    Logging.connectors.error("HDFS: Bad namenode specification: "+e.getMessage(), e);
+    throw new ManifoldCFException("Bad namenode specification: "+e.getMessage(), e);
+  }
+
+  protected static class CheckConnectionThread extends Thread {
+    protected final HDFSSession session;
+    protected Throwable exception = null;
+
+    public CheckConnectionThread(HDFSSession session) {
+      super();
+      this.session = session;
+      setDaemon(true);
+    }
+
+    public void run() {
+      try {
+        session.getRepositoryInfo();
+      } catch (Throwable e) {
+        this.exception = e;
+      }
+    }
+
+    public void finishUp() throws InterruptedException, IOException {
+      join();
+      Throwable thr = exception;
+      if (thr != null) {
+        if (thr instanceof IOException) {
+          throw (IOException) thr;
+        } else if (thr instanceof RuntimeException) {
+          throw (RuntimeException) thr;
+        } else {
+          throw (Error) thr;
+        }
+      }
+    }
+  }
+
+  /**
+   * @throws ManifoldCFException
+   * @throws ServiceInterruption
+   */
+  protected void checkConnection() throws ManifoldCFException, ServiceInterruption {
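+    // Run the connectivity check on a separate daemon thread so that a hung namenode call can be
+    // abandoned if this thread is interrupted.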
+    CheckConnectionThread t = new CheckConnectionThread(getSession());
+    try {
+      t.start();
+      t.finishUp();
+      return;
+    } catch (InterruptedException e) {
+      t.interrupt();
+      throw new ManifoldCFException("Interrupted: " + e.getMessage(), e, ManifoldCFException.INTERRUPTED);
+    } catch (java.net.SocketTimeoutException e) {
+      handleIOException(e);
+    } catch (InterruptedIOException e) {
+      t.interrupt();
+      throw new ManifoldCFException("Interrupted: " + e.getMessage(), e, ManifoldCFException.INTERRUPTED);
+    } catch (IOException e) {
+      handleIOException(e);
+    }
+  }
+
+  protected static class GetSessionThread extends Thread {
+    protected final String nameNode;
+    protected final Configuration config;
+    protected final String user;
+    protected Throwable exception = null;
+    protected HDFSSession session;
+
+    public GetSessionThread(String nameNode, Configuration config, String user) {
+      super();
+      this.nameNode = nameNode;
+      this.config = config;
+      this.user = user;
+      setDaemon(true);
+    }
+
+    public void run() {
+      try {
+        // Create a session
+        session = new HDFSSession(nameNode, config, user);
+      } catch (Throwable e) {
+        this.exception = e;
+      }
+    }
+
+    public void finishUp()
+      throws InterruptedException, IOException, URISyntaxException {
+      join();
+      Throwable thr = exception;
+      if (thr != null) {
+        if (thr instanceof IOException) {
+          throw (IOException) thr;
+        } else if (thr instanceof URISyntaxException) {
+          throw (URISyntaxException) thr;
+        } else if (thr instanceof InterruptedException) {
+          // FileSystem.get() declares InterruptedException; rethrow it rather than letting it fall through to the Error cast.
+          throw (InterruptedException) thr;
+        } else if (thr instanceof RuntimeException) {
+          throw (RuntimeException) thr;
+        } else {
+          throw (Error) thr;
+        }
+      }
+    }
+    
+    public HDFSSession getResult() {
+      return session;
+    }
+  }
+
+  protected FileStatus[] getChildren(Path path)
+    throws ManifoldCFException, ServiceInterruption {
+    GetChildrenThread t = new GetChildrenThread(getSession(), path);
+    try {
+      t.start();
+      t.finishUp();
+      return t.getResult();
+    } catch (InterruptedException e) {
+      t.interrupt();
+      throw new ManifoldCFException("Interrupted: " + e.getMessage(), e, ManifoldCFException.INTERRUPTED);
+    } catch (java.net.SocketTimeoutException e) {
+      handleIOException(e);
+    } catch (InterruptedIOException e) {
+      t.interrupt();
+      handleIOException(e);
+    } catch (IOException e) {
+      handleIOException(e);
+    }
+    return null;
+  }
+
+  protected class GetChildrenThread extends Thread {
+    protected Throwable exception = null;
+    protected FileStatus[] result = null;
+    protected final HDFSSession session;
+    protected final Path path;
+
+    public GetChildrenThread(HDFSSession session, Path path) {
+      super();
+      this.session = session;
+      this.path = path;
+      setDaemon(true);
+    }
+
+    @Override
+    public void run() {
+      try {
+        result = session.listStatus(path);
+      } catch (Throwable e) {
+        this.exception = e;
+      }
+    }
+
+    public void finishUp() throws InterruptedException, IOException {
+      join();
+      Throwable thr = exception;
+      if (thr != null) {
+        if (thr instanceof RuntimeException) {
+          throw (RuntimeException) thr;
+        } else if (thr instanceof Error) {
+          throw (Error) thr;
+        } else if (thr instanceof IOException) {
+          throw (IOException) thr;
+        } else {
+          throw new RuntimeException("Unhandled exception of type: "+thr.getClass().getName(),thr);
+        }
+      }
+    }
+    
+    public FileStatus[] getResult() {
+      return result;
+    }
+  }
+
+  protected FileStatus getObject(Path path)
+    throws ManifoldCFException, ServiceInterruption {
+    GetObjectThread objt = new GetObjectThread(getSession(),path);
+    try {
+      objt.start();
+      objt.finishUp();
+      return objt.getResponse();
+    } catch (InterruptedException e) {
+      objt.interrupt();
+      throw new ManifoldCFException("Interrupted: " + e.getMessage(), e, ManifoldCFException.INTERRUPTED);
+    } catch (java.net.SocketTimeoutException e) {
+      handleIOException(e);
+    } catch (InterruptedIOException e) {
+      objt.interrupt();
+      handleIOException(e);
+    } catch (IOException e) {
+      handleIOException(e);
+    }
+    return null;
+  }
+  
+  protected static class GetObjectThread extends Thread {
+    protected final HDFSSession session;
+    protected final Path nodeId;
+    protected Throwable exception = null;
+    protected FileStatus response = null;
+
+    public GetObjectThread(HDFSSession session, Path nodeId) {
+      super();
+      setDaemon(true);
+      this.session = session;
+      this.nodeId = nodeId;
+    }
+
+    public void run() {
+      try {
+        response = session.getObject(nodeId);
+      } catch (Throwable e) {
+        this.exception = e;
+      }
+    }
+
+    public void finishUp() throws InterruptedException, IOException {
+      join();
+      Throwable thr = exception;
+      if (thr != null) {
+        if (thr instanceof RuntimeException) {
+          throw (RuntimeException) thr;
+        } else if (thr instanceof Error) {
+          throw (Error) thr;
+        } else if (thr instanceof IOException) {
+          throw (IOException) thr;
+        } else {
+          throw new RuntimeException("Unhandled exception of type: "+thr.getClass().getName(),thr);
+        }
+      }
+    }
+
+    public FileStatus getResponse() {
+      return response;
+    }
+
+  }
+
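+  /** Background thread which opens the HDFS input stream and pumps its contents into an
+   * XThreadInputStream, so that the connector thread can read the document data while the
+   * HDFS I/O proceeds on this daemon thread.
+   */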
+  protected static class BackgroundStreamThread extends Thread
+  {
+    protected final HDFSSession session;
+    protected final Path nodeId;
+    
+    protected boolean abortThread = false;
+    protected Throwable responseException = null;
+    protected InputStream sourceStream = null;
+    protected XThreadInputStream threadStream = null;
+    
+    public BackgroundStreamThread(HDFSSession session, Path nodeId)
+    {
+      super();
+      setDaemon(true);
+      this.session = session;
+      this.nodeId = nodeId;
+    }
+
+    public void run()
+    {
+      try {
+        try {
+          synchronized (this) {
+            if (!abortThread) {
+              sourceStream = session.getFSDataInputStream(nodeId);
+              threadStream = new XThreadInputStream(sourceStream);
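+              // Wake any caller blocked in getSafeInputStream() waiting for the stream to appear.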
+              this.notifyAll();
+            }
+          }
+          
+          if (threadStream != null)
+          {
+            // Stuff the content until we are done
+            threadStream.stuffQueue();
+          }
+        } finally {
+          if (sourceStream != null) {
+            sourceStream.close();
+          }
+        }
+      } catch (Throwable e) {
+        responseException = e;
+      }
+    }
+
+    public InputStream getSafeInputStream() throws InterruptedException, IOException
+    {
+      // Must wait until stream is created, or until we note an exception was thrown.
+      while (true)
+      {
+        synchronized (this)
+        {
+          // If the background thread already failed, surface its exception instead of handing back a stream.
+          checkException(responseException);
+          if (threadStream != null) {
+            return threadStream;
+          }
+          wait();
+        }
+      }
+    }
+    
+    public void finishUp() throws InterruptedException, IOException
+    {
+      // This will be called during the finally
+      // block in the case where all is well (and
+      // the stream completed) and in the case where
+      // there were exceptions.
+      synchronized (this) {
+        if (threadStream != null) {
+          threadStream.abort();
+        }
+        abortThread = true;
+      }
+
+      join();
+
+      checkException(responseException);
+    }
+    
+    protected synchronized void checkException(Throwable exception) throws IOException
+    {
+      if (exception != null)
+      {
+        Throwable e = exception;
+        if (e instanceof IOException) {
+          throw (IOException)e;
+        } else if (e instanceof RuntimeException) {
+          throw (RuntimeException)e;
+        } else if (e instanceof Error) {
+          throw (Error)e;
+        } else {
+          throw new RuntimeException("Unhandled exception of type: "+e.getClass().getName(),e);
+        }
+      }
+    }
+  }
+}
diff --git a/connectors/hdfs/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/hdfs/HDFSSession.java b/connectors/hdfs/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/hdfs/HDFSSession.java
new file mode 100644
index 0000000..ef1d754
--- /dev/null
+++ b/connectors/hdfs/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/hdfs/HDFSSession.java
@@ -0,0 +1,105 @@
+/* $Id: DropboxSession.java 1490621 2013-06-07 12:55:04Z kwright $ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+
+/*
+ * To change this template, choose Tools | Templates
+ * and open the template in the editor.
+ */
+package org.apache.manifoldcf.crawler.connectors.hdfs;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FSDataInputStream;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.manifoldcf.core.common.*;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.net.URI;
+import java.net.URISyntaxException;
+import java.util.Map;
+import java.util.HashMap;
+
+/**
+ *
+ * @author andrew
+ */
+public class HDFSSession {
+
+  private FileSystem fileSystem;
+  private String nameNode;
+  private Configuration config;
+  private String user;
+  
+  public HDFSSession(String nameNode, Configuration config, String user) throws URISyntaxException, IOException, InterruptedException {
+    this.nameNode = nameNode;
+    this.config = config;
+    this.user = user;
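+    // Connect to HDFS as the specified user; FileSystem.get() may block while the namenode connection is established.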
+    fileSystem = FileSystem.get(new URI(nameNode), config, user);
+  }
+
+  public Map<String, String> getRepositoryInfo() {
+    Map<String, String> info = new HashMap<String, String>();
+
+    info.put("Name Node", nameNode);
+    info.put("Config", config.toString());
+    info.put("User", user);
+    // Commented much of this out because each timeout is too long if there's no connection
+    info.put("Canonical Service Name", fileSystem.getCanonicalServiceName());
+    //info.put("Default Block Size", Long.toString(fileSystem.getDefaultBlockSize()));
+    //info.put("Default Replication", Short.toString(fileSystem.getDefaultReplication()));
+    //info.put("Home Directory", fileSystem.getHomeDirectory().toUri().toString());
+    //info.put("Working Directory", fileSystem.getWorkingDirectory().toUri().toString());
+    return info;
+  }
+
+  public FileStatus[] listStatus(Path path)
+    throws IOException {
+    try {
+      return fileSystem.listStatus(path);
+    } catch (FileNotFoundException e) {
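+      // A path that no longer exists is reported as having no children.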
+      return null;
+    }
+  }
+  
+  public URI getUri() {
+    return fileSystem.getUri();
+  }
+
+  public FileStatus getObject(Path path) throws IOException {
+    try {
+      return fileSystem.getFileStatus(path);
+    } catch(FileNotFoundException e) {
+      return null;
+    }
+  }
+
+  public FSDataInputStream getFSDataInputStream(Path path) throws IOException {
+    try {
+      return fileSystem.open(path);
+    } catch (FileNotFoundException e) {
+      return null;
+    }
+  }
+  
+  public void close() throws IOException {
+    fileSystem.close();
+  }
+}
diff --git a/connectors/hdfs/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/hdfs/Messages.java b/connectors/hdfs/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/hdfs/Messages.java
new file mode 100644
index 0000000..d16a2ad
--- /dev/null
+++ b/connectors/hdfs/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/hdfs/Messages.java
@@ -0,0 +1,141 @@
+/* $Id: Messages.java 1295926 2012-03-01 21:56:27Z kwright $ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.crawler.connectors.hdfs;
+
+import java.util.Locale;
+import java.util.Map;
+import org.apache.manifoldcf.core.interfaces.ManifoldCFException;
+import org.apache.manifoldcf.core.interfaces.IHTTPOutput;
+
+public class Messages extends org.apache.manifoldcf.ui.i18n.Messages
+{
+  public static final String DEFAULT_BUNDLE_NAME="org.apache.manifoldcf.crawler.connectors.hdfs.common";
+  public static final String DEFAULT_PATH_NAME="org.apache.manifoldcf.crawler.connectors.hdfs";
+  
+  /** Constructor - do not instantiate
+  */
+  protected Messages()
+  {
+  }
+  
+  public static String getString(Locale locale, String messageKey)
+  {
+    return getString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getAttributeString(Locale locale, String messageKey)
+  {
+    return getAttributeString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getBodyString(Locale locale, String messageKey)
+  {
+    return getBodyString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getAttributeJavascriptString(Locale locale, String messageKey)
+  {
+    return getAttributeJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getBodyJavascriptString(Locale locale, String messageKey)
+  {
+    return getBodyJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getString(Locale locale, String messageKey, Object[] args)
+  {
+    return getString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+
+  public static String getAttributeString(Locale locale, String messageKey, Object[] args)
+  {
+    return getAttributeString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+  
+  public static String getBodyString(Locale locale, String messageKey, Object[] args)
+  {
+    return getBodyString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+
+  public static String getAttributeJavascriptString(Locale locale, String messageKey, Object[] args)
+  {
+    return getAttributeJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+
+  public static String getBodyJavascriptString(Locale locale, String messageKey, Object[] args)
+  {
+    return getBodyJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+
+  // More general methods which allow bundlenames and class loaders to be specified.
+  
+  public static String getString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getString(Messages.class, bundleName, locale, messageKey, args);
+  }
+
+  public static String getAttributeString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getAttributeString(Messages.class, bundleName, locale, messageKey, args);
+  }
+
+  public static String getBodyString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getBodyString(Messages.class, bundleName, locale, messageKey, args);
+  }
+  
+  public static String getAttributeJavascriptString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getAttributeJavascriptString(Messages.class, bundleName, locale, messageKey, args);
+  }
+
+  public static String getBodyJavascriptString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getBodyJavascriptString(Messages.class, bundleName, locale, messageKey, args);
+  }
+
+  // Resource output
+  
+  public static void outputResource(IHTTPOutput output, Locale locale, String resourceKey,
+    Map<String,String> substitutionParameters, boolean mapToUpperCase)
+    throws ManifoldCFException
+  {
+    outputResource(output,Messages.class,DEFAULT_PATH_NAME,locale,resourceKey,
+      substitutionParameters,mapToUpperCase);
+  }
+  
+  public static void outputResourceWithVelocity(IHTTPOutput output, Locale locale, String resourceKey,
+    Map<String,String> substitutionParameters, boolean mapToUpperCase)
+    throws ManifoldCFException
+  {
+    outputResourceWithVelocity(output,Messages.class,DEFAULT_BUNDLE_NAME,DEFAULT_PATH_NAME,locale,resourceKey,
+      substitutionParameters,mapToUpperCase);
+  }
+
+  public static void outputResourceWithVelocity(IHTTPOutput output, Locale locale, String resourceKey,
+    Map<String,Object> contextObjects)
+    throws ManifoldCFException
+  {
+    outputResourceWithVelocity(output,Messages.class,DEFAULT_BUNDLE_NAME,DEFAULT_PATH_NAME,locale,resourceKey,
+      contextObjects);
+  }
+  
+}
+
diff --git a/connectors/hdfs/connector/src/main/native2ascii/org/apache/manifoldcf/agents/output/hdfs/common_en_US.properties b/connectors/hdfs/connector/src/main/native2ascii/org/apache/manifoldcf/agents/output/hdfs/common_en_US.properties
new file mode 100644
index 0000000..54dace2
--- /dev/null
+++ b/connectors/hdfs/connector/src/main/native2ascii/org/apache/manifoldcf/agents/output/hdfs/common_en_US.properties
@@ -0,0 +1,29 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+HDFSOutputConnector.ServerTabName=Server
+HDFSOutputConnector.NameNodeHost=Name node host:
+HDFSOutputConnector.NameNodePort=Name node port:
+HDFSOutputConnector.User=User:
+HDFSOutputConnector.NameNodeHostCannotBeNull=Name node host cannot be null
+HDFSOutputConnector.NameNodePortCannotBeNull=Name node port cannot be null
+HDFSOutputConnector.NameNodePortMustBeAnInteger=Name node port must be an integer
+HDFSOutputConnector.UserCannotBeNull=User cannot be null
+
+HDFSOutputConnector.PathTabName=Output Path
+HDFSOutputConnector.Path=Output Path:
+HDFSOutputConnector.RootPath=Root path:
+HDFSOutputConnector.RootPathCannotBeNull=Root path cannot be null
+
diff --git a/connectors/hdfs/connector/src/main/native2ascii/org/apache/manifoldcf/agents/output/hdfs/common_ja_JP.properties b/connectors/hdfs/connector/src/main/native2ascii/org/apache/manifoldcf/agents/output/hdfs/common_ja_JP.properties
new file mode 100644
index 0000000..d711263
--- /dev/null
+++ b/connectors/hdfs/connector/src/main/native2ascii/org/apache/manifoldcf/agents/output/hdfs/common_ja_JP.properties
@@ -0,0 +1,29 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+HDFSOutputConnector.ServerTabName=サーバー
+HDFSOutputConnector.NameNodeHost=Name node host:
+HDFSOutputConnector.NameNodePort=Name node port:
+HDFSOutputConnector.User=ユーザー:
+HDFSOutputConnector.NameNodeHostCannotBeNull=Name node host cannot be null
+HDFSOutputConnector.NameNodePortCannotBeNull=Name node port cannot be null
+HDFSOutputConnector.NameNodePortMustBeAnInteger=Name node port must be an integer
+HDFSOutputConnector.UserCannotBeNull=User cannot be null
+
+HDFSOutputConnector.PathTabName=出力パス
+HDFSOutputConnector.Path=出力パス:
+HDFSOutputConnector.RootPath=ルートパス:
+HDFSOutputConnector.RootPathCannotBeNull=Root path cannot be null
+
diff --git a/connectors/hdfs/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/hdfs/common_en_US.properties b/connectors/hdfs/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/hdfs/common_en_US.properties
new file mode 100644
index 0000000..86ff1d8
--- /dev/null
+++ b/connectors/hdfs/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/hdfs/common_en_US.properties
@@ -0,0 +1,50 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+HDFSRepositoryConnector.ServerTabName=Server
+HDFSRepositoryConnector.NameNodeHost=Name node host:
+HDFSRepositoryConnector.NameNodePort=Name node port:
+HDFSRepositoryConnector.User=User:
+HDFSRepositoryConnector.NameNodeHostCannotBeNull=Name node host cannot be null
+HDFSRepositoryConnector.NameNodePortCannotBeNull=Name node port cannot be null
+HDFSRepositoryConnector.NameNodePortMustBeAnInteger=Name node port must be an integer
+HDFSRepositoryConnector.UserCannotBeNull=User cannot be null
+
+HDFSRepositoryConnector.Paths=Repository Paths
+HDFSRepositoryConnector.Paths2=Repository Paths:
+HDFSRepositoryConnector.RootPath=Root path
+HDFSRepositoryConnector.ConvertToURI=Convert path to URI?
+HDFSRepositoryConnector.ConvertToURIExample= (e.g. http/xyz/index.html => http://xyz/index.html)
+HDFSRepositoryConnector.Yes=Yes
+HDFSRepositoryConnector.No=No
+HDFSRepositoryConnector.Rules=Rules
+HDFSRepositoryConnector.Delete=Delete
+HDFSRepositoryConnector.DeletePath=Delete path #
+HDFSRepositoryConnector.IncludeExclude=Include/exclude
+HDFSRepositoryConnector.FileDirectory=File/directory
+HDFSRepositoryConnector.Match=Match
+HDFSRepositoryConnector.include=include
+HDFSRepositoryConnector.exclude=exclude
+HDFSRepositoryConnector.File=File
+HDFSRepositoryConnector.Directory=Directory
+HDFSRepositoryConnector.NoDocumentsSpecified=No documents specified
+HDFSRepositoryConnector.Add=Add
+HDFSRepositoryConnector.InsertHere=Insert Here
+HDFSRepositoryConnector.NoRulesDefined=No rules defined
+HDFSRepositoryConnector.InsertNewMatchForPath=Insert new match for path #
+HDFSRepositoryConnector.AddNewMatchForPath=Add new match for path #
+HDFSRepositoryConnector.AddNewPath=Add new path
diff --git a/connectors/hdfs/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/hdfs/common_ja_JP.properties b/connectors/hdfs/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/hdfs/common_ja_JP.properties
new file mode 100644
index 0000000..c4c04f4
--- /dev/null
+++ b/connectors/hdfs/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/hdfs/common_ja_JP.properties
@@ -0,0 +1,50 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+HDFSRepositoryConnector.ServerTabName=サーバー
+HDFSRepositoryConnector.NameNodeHost=Name node host:
+HDFSRepositoryConnector.NameNodePort=Name node port:
+HDFSRepositoryConnector.User=ユーザー:
+HDFSRepositoryConnector.NameNodeHostCannotBeNull=Name node host cannot be null
+HDFSRepositoryConnector.NameNodePortCannotBeNull=Name node port cannot be null
+HDFSRepositoryConnector.NameNodePortMustBeAnInteger=Name node port must be an integer
+HDFSRepositoryConnector.UserCannotBeNull=User cannot be null
+
+HDFSRepositoryConnector.Paths=リポジトリパス
+HDFSRepositoryConnector.Paths2=リポジトリパス:
+HDFSRepositoryConnector.RootPath=ルートパス
+HDFSRepositoryConnector.ConvertToURI=Convert path to URI?
+HDFSRepositoryConnector.ConvertToURIExample= (e.g. http/xyz/index.html => http://xyz/index.html)
+HDFSRepositoryConnector.Yes=Yes
+HDFSRepositoryConnector.No=No
+HDFSRepositoryConnector.Rules=ルール
+HDFSRepositoryConnector.Delete=削除
+HDFSRepositoryConnector.DeletePath=パスを削除 #
+HDFSRepositoryConnector.IncludeExclude=含む/除外
+HDFSRepositoryConnector.FileDirectory=ファイル/ディレクトリ
+HDFSRepositoryConnector.Match=一致
+HDFSRepositoryConnector.include=含む
+HDFSRepositoryConnector.exclude=除外
+HDFSRepositoryConnector.File=ファイル
+HDFSRepositoryConnector.Directory=ディレクトリ
+HDFSRepositoryConnector.NoDocumentsSpecified=コンテンツは指定されていません
+HDFSRepositoryConnector.Add=追加
+HDFSRepositoryConnector.InsertHere=挿入
+HDFSRepositoryConnector.NoRulesDefined=ルールが未定義です
+HDFSRepositoryConnector.InsertNewMatchForPath=パス用に新しいパターンを挿入: #
+HDFSRepositoryConnector.DeletePath=パスを削除: #
+HDFSRepositoryConnector.AddNewMatchForPath=パス用に新しいパターンを追加: #
+HDFSRepositoryConnector.AddNewPath=新しいパスを追加
diff --git a/connectors/hdfs/connector/src/main/resources/org/apache/manifoldcf/agents/output/hdfs/editConfiguration.html b/connectors/hdfs/connector/src/main/resources/org/apache/manifoldcf/agents/output/hdfs/editConfiguration.html
new file mode 100644
index 0000000..0fc4252
--- /dev/null
+++ b/connectors/hdfs/connector/src/main/resources/org/apache/manifoldcf/agents/output/hdfs/editConfiguration.html
@@ -0,0 +1,42 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
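+## When the Server tab is the selected tab, render the editable configuration fields; otherwise
+## emit hidden inputs so that the posted values survive switching between tabs.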
+#if($TABNAME == $ResourceBundle.getString('HDFSOutputConnector.ServerTabName'))
+
+<table class="displaytable">
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('HDFSOutputConnector.NameNodeHost'))</nobr></td>
+    <td class="value"><input name="namenodehost" type="text" value="$Encoder.attributeEscape($NAMENODEHOST)" size="32" /></td>
+  </tr>
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('HDFSOutputConnector.NameNodePort'))</nobr></td>
+    <td class="value"><input name="namenodeport" type="text" value="$Encoder.attributeEscape($NAMENODEPORT)" size="5" /></td>
+  </tr>
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('HDFSOutputConnector.User'))</nobr></td>
+    <td class="value"><input name="user" type="text" value="$Encoder.attributeEscape($USER)" size="32" /></td>
+  </tr>
+</table>
+
+#else
+
+<input type="hidden" name="namenodehost" value="$Encoder.attributeEscape($NAMENODEHOST)" />
+<input type="hidden" name="namenodeport" value="$Encoder.attributeEscape($NAMENODEPORT)" />
+<input type="hidden" name="user" value="$Encoder.attributeEscape($USER)" />
+
+#end
diff --git a/connectors/hdfs/connector/src/main/resources/org/apache/manifoldcf/agents/output/hdfs/editConfiguration.js b/connectors/hdfs/connector/src/main/resources/org/apache/manifoldcf/agents/output/hdfs/editConfiguration.js
new file mode 100644
index 0000000..b84915d
--- /dev/null
+++ b/connectors/hdfs/connector/src/main/resources/org/apache/manifoldcf/agents/output/hdfs/editConfiguration.js
@@ -0,0 +1,53 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<script type="text/javascript">
+<!--
+function checkConfigForSave()
+{
+  if (editconnection.namenodehost.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('HDFSOutputConnector.NameNodeHostCannotBeNull'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('HDFSOutputConnector.ServerTabName'))");
+    editconnection.namenodehost.focus();
+    return false;
+  }
+  if (editconnection.namenodeport.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('HDFSOutputConnector.NameNodePortCannotBeNull'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('HDFSOutputConnector.ServerTabName'))");
+    editconnection.namenodeport.focus();
+    return false;
+  }
+  if (!isInteger(editconnection.namenodeport.value))
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('HDFSOutputConnector.NameNodePortMustBeAnInteger'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('HDFSOutputConnector.ServerTabName'))");
+    editconnection.namenodeport.focus();
+    return false;
+  }
+  if (editconnection.user.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('HDFSOutputConnector.UserCannotBeNull'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('HDFSOutputConnector.ServerTabName'))");
+    editconnection.user.focus();
+    return false;
+  }
+  return true;
+}
+//-->
+</script>
\ No newline at end of file
diff --git a/connectors/hdfs/connector/src/main/resources/org/apache/manifoldcf/agents/output/hdfs/editSpecification.html b/connectors/hdfs/connector/src/main/resources/org/apache/manifoldcf/agents/output/hdfs/editSpecification.html
new file mode 100644
index 0000000..ab9b965
--- /dev/null
+++ b/connectors/hdfs/connector/src/main/resources/org/apache/manifoldcf/agents/output/hdfs/editSpecification.html
@@ -0,0 +1,32 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+#if($TABNAME == $ResourceBundle.getString('HDFSOutputConnector.PathTabName'))
+
+<table class="displaytable">
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('HDFSOutputConnector.RootPath'))</nobr></td>
+    <td class="value"><input type="text" name="rootpath" size="64" value="$Encoder.attributeEscape($ROOTPATH)" /></td>
+  </tr>
+</table>
+
+#else
+
+<input type="hidden" name="rootpath" value="$Encoder.attributeEscape($ROOTPATH)" />
+
+#end
diff --git a/connectors/hdfs/connector/src/main/resources/org/apache/manifoldcf/agents/output/hdfs/editSpecification.js b/connectors/hdfs/connector/src/main/resources/org/apache/manifoldcf/agents/output/hdfs/editSpecification.js
new file mode 100644
index 0000000..b5d31ea
--- /dev/null
+++ b/connectors/hdfs/connector/src/main/resources/org/apache/manifoldcf/agents/output/hdfs/editSpecification.js
@@ -0,0 +1,32 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<script type="text/javascript">
+<!--
+function checkOutputSpecificationForSave()
+{
+  if (editjob.rootpath.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('HDFSOutputConnector.RootPathCannotBeNull'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('HDFSOutputConnector.PathTabName'))");
+    editjob.rootpath.focus();
+    return false;
+  }
+  return true;
+}
+//-->
+</script>
\ No newline at end of file
diff --git a/connectors/hdfs/connector/src/main/resources/org/apache/manifoldcf/agents/output/hdfs/viewConfiguration.html b/connectors/hdfs/connector/src/main/resources/org/apache/manifoldcf/agents/output/hdfs/viewConfiguration.html
new file mode 100644
index 0000000..43ff3a6
--- /dev/null
+++ b/connectors/hdfs/connector/src/main/resources/org/apache/manifoldcf/agents/output/hdfs/viewConfiguration.html
@@ -0,0 +1,31 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<table class="displaytable">
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('HDFSOutputConnector.NameNodeHost'))</nobr></td>
+    <td class="value">$Encoder.bodyEscape($NAMENODEHOST)</td>
+  </tr>
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('HDFSOutputConnector.NameNodePort'))</nobr></td>
+    <td class="value">$Encoder.bodyEscape($NAMENODEPORT)</td>
+  </tr>
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('HDFSOutputConnector.User'))</nobr></td>
+    <td class="value">$Encoder.bodyEscape($USER)</td>
+  </tr>
+</table>
\ No newline at end of file
diff --git a/connectors/hdfs/connector/src/main/resources/org/apache/manifoldcf/agents/output/hdfs/viewSpecification.html b/connectors/hdfs/connector/src/main/resources/org/apache/manifoldcf/agents/output/hdfs/viewSpecification.html
new file mode 100644
index 0000000..7d256b7
--- /dev/null
+++ b/connectors/hdfs/connector/src/main/resources/org/apache/manifoldcf/agents/output/hdfs/viewSpecification.html
@@ -0,0 +1,23 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<table class="displaytable">
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('HDFSOutputConnector.RootPath'))</nobr></td>
+    <td class="value">$Encoder.bodyEscape($ROOTPATH)</td>
+  </tr>
+</table>
\ No newline at end of file
diff --git a/connectors/hdfs/connector/src/test/java/org/apache/manifoldcf/agents/output/hdfs/BaseDerby.java b/connectors/hdfs/connector/src/test/java/org/apache/manifoldcf/agents/output/hdfs/BaseDerby.java
new file mode 100644
index 0000000..e595243
--- /dev/null
+++ b/connectors/hdfs/connector/src/test/java/org/apache/manifoldcf/agents/output/hdfs/BaseDerby.java
@@ -0,0 +1,44 @@
+/* $Id: TestBase.java 988245 2010-08-23 18:39:35Z kwright $ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.agents.output.hdfs;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+import org.apache.manifoldcf.crawler.interfaces.*;
+import org.apache.manifoldcf.crawler.system.ManifoldCF;
+
+import java.io.*;
+import java.util.*;
+import org.junit.*;
+
+/** This is a testing base class that is responsible for setting up/tearing down the agents framework. */
+public class BaseDerby extends org.apache.manifoldcf.crawler.tests.ConnectorBaseDerby
+{
+  
+  protected String[] getConnectorNames()
+  {
+    return new String[]{"HDFS Repository Connector"};
+  }
+  
+  protected String[] getConnectorClasses()
+  {
+    return new String[]{"org.apache.manifoldcf.crawler.connectors.hdfs.HDFSRepositoryConnector"};
+  }
+
+}
diff --git a/connectors/hdfs/connector/src/test/java/org/apache/manifoldcf/agents/output/hdfs/BaseHSQLDB.java b/connectors/hdfs/connector/src/test/java/org/apache/manifoldcf/agents/output/hdfs/BaseHSQLDB.java
new file mode 100644
index 0000000..ad92daf
--- /dev/null
+++ b/connectors/hdfs/connector/src/test/java/org/apache/manifoldcf/agents/output/hdfs/BaseHSQLDB.java
@@ -0,0 +1,44 @@
+/* $Id: BaseHSQLDB.java 1147086 2011-07-15 10:58:30Z kwright $ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.agents.output.hdfs;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+import org.apache.manifoldcf.crawler.interfaces.*;
+import org.apache.manifoldcf.crawler.system.ManifoldCF;
+
+import java.io.*;
+import java.util.*;
+import org.junit.*;
+
+/** This is a testing base class that is responsible for setting up/tearing down the agents framework. */
+public class BaseHSQLDB extends org.apache.manifoldcf.crawler.tests.ConnectorBaseHSQLDB
+{
+  
+  protected String[] getConnectorNames()
+  {
+    return new String[]{"HDFS Repository Connector"};
+  }
+  
+  protected String[] getConnectorClasses()
+  {
+    return new String[]{"org.apache.manifoldcf.crawler.connectors.hdfs.HDFSRepositoryConnector"};
+  }
+
+}
diff --git a/connectors/hdfs/connector/src/test/java/org/apache/manifoldcf/agents/output/hdfs/BaseMySQL.java b/connectors/hdfs/connector/src/test/java/org/apache/manifoldcf/agents/output/hdfs/BaseMySQL.java
new file mode 100644
index 0000000..2cd2289
--- /dev/null
+++ b/connectors/hdfs/connector/src/test/java/org/apache/manifoldcf/agents/output/hdfs/BaseMySQL.java
@@ -0,0 +1,44 @@
+/* $Id: BaseMySQL.java 1221585 2011-12-21 03:10:03Z kwright $ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.agents.output.hdfs;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+import org.apache.manifoldcf.crawler.interfaces.*;
+import org.apache.manifoldcf.crawler.system.ManifoldCF;
+
+import java.io.*;
+import java.util.*;
+import org.junit.*;
+
+/** This is a testing base class that is responsible for setting up/tearing down the agents framework. */
+public class BaseMySQL extends org.apache.manifoldcf.crawler.tests.ConnectorBaseMySQL
+{
+  
+  protected String[] getConnectorNames()
+  {
+    return new String[]{"HDFS Repository Connector"};
+  }
+  
+  protected String[] getConnectorClasses()
+  {
+    return new String[]{"org.apache.manifoldcf.crawler.connectors.hdfs.HDFSRepositoryConnector"};
+  }
+
+}
diff --git a/connectors/hdfs/connector/src/test/java/org/apache/manifoldcf/agents/output/hdfs/BasePostgresql.java b/connectors/hdfs/connector/src/test/java/org/apache/manifoldcf/agents/output/hdfs/BasePostgresql.java
new file mode 100644
index 0000000..a004fa6
--- /dev/null
+++ b/connectors/hdfs/connector/src/test/java/org/apache/manifoldcf/agents/output/hdfs/BasePostgresql.java
@@ -0,0 +1,44 @@
+/* $Id: TestBase.java 988245 2010-08-23 18:39:35Z kwright $ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.agents.output.hdfs;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+import org.apache.manifoldcf.crawler.interfaces.*;
+import org.apache.manifoldcf.crawler.system.ManifoldCF;
+
+import java.io.*;
+import java.util.*;
+import org.junit.*;
+
+/** This is a testing base class that is responsible for setting up/tearing down the agents framework. */
+public class BasePostgresql extends org.apache.manifoldcf.crawler.tests.ConnectorBasePostgresql
+{
+  
+  protected String[] getConnectorNames()
+  {
+    return new String[]{"HDFS Repository Connector"};
+  }
+  
+  protected String[] getConnectorClasses()
+  {
+    return new String[]{"org.apache.manifoldcf.crawler.connectors.hdfs.HDFSRepositoryConnector"};
+  }
+
+}
diff --git a/connectors/hdfs/connector/src/test/java/org/apache/manifoldcf/agents/output/hdfs/SanityDerbyTest.java b/connectors/hdfs/connector/src/test/java/org/apache/manifoldcf/agents/output/hdfs/SanityDerbyTest.java
new file mode 100644
index 0000000..7f57208
--- /dev/null
+++ b/connectors/hdfs/connector/src/test/java/org/apache/manifoldcf/agents/output/hdfs/SanityDerbyTest.java
@@ -0,0 +1,42 @@
+/* $Id: Sanity.java 988245 2010-08-23 18:39:35Z kwright $ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.agents.output.hdfs;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+import org.apache.manifoldcf.crawler.interfaces.*;
+import org.apache.manifoldcf.crawler.system.ManifoldCF;
+
+import java.io.*;
+import java.util.*;
+import org.junit.*;
+
+/** This is a very basic sanity check */
+public class SanityDerbyTest extends BaseDerby
+{
+  
+  @Test
+  public void sanityCheck()
+    throws Exception
+  {
+    // If we get this far, it must mean that the setup was successful, which is all that I'm shooting for in this test.
+  }
+  
+
+}
diff --git a/connectors/hdfs/connector/src/test/java/org/apache/manifoldcf/agents/output/hdfs/SanityHSQLDBTest.java b/connectors/hdfs/connector/src/test/java/org/apache/manifoldcf/agents/output/hdfs/SanityHSQLDBTest.java
new file mode 100644
index 0000000..f53e0f6
--- /dev/null
+++ b/connectors/hdfs/connector/src/test/java/org/apache/manifoldcf/agents/output/hdfs/SanityHSQLDBTest.java
@@ -0,0 +1,42 @@
+/* $Id: SanityHSQLDBTest.java 1147086 2011-07-15 10:58:30Z kwright $ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.agents.output.hdfs;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+import org.apache.manifoldcf.crawler.interfaces.*;
+import org.apache.manifoldcf.crawler.system.ManifoldCF;
+
+import java.io.*;
+import java.util.*;
+import org.junit.*;
+
+/** This is a very basic sanity check */
+public class SanityHSQLDBTest extends BaseHSQLDB
+{
+  
+  @Test
+  public void sanityCheck()
+    throws Exception
+  {
+    // If we get this far, it must mean that the setup was successful, which is all that I'm shooting for in this test.
+  }
+  
+
+}
diff --git a/connectors/hdfs/connector/src/test/java/org/apache/manifoldcf/agents/output/hdfs/SanityMySQLTest.java b/connectors/hdfs/connector/src/test/java/org/apache/manifoldcf/agents/output/hdfs/SanityMySQLTest.java
new file mode 100644
index 0000000..e931062
--- /dev/null
+++ b/connectors/hdfs/connector/src/test/java/org/apache/manifoldcf/agents/output/hdfs/SanityMySQLTest.java
@@ -0,0 +1,42 @@
+/* $Id: SanityMySQLTest.java 1221585 2011-12-21 03:10:03Z kwright $ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.agents.output.hdfs;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+import org.apache.manifoldcf.crawler.interfaces.*;
+import org.apache.manifoldcf.crawler.system.ManifoldCF;
+
+import java.io.*;
+import java.util.*;
+import org.junit.*;
+
+/** This is a very basic sanity check */
+public class SanityMySQLTest extends BaseMySQL
+{
+  
+  @Test
+  public void sanityCheck()
+    throws Exception
+  {
+    // If we get this far, it must mean that the setup was successful, which is all that I'm shooting for in this test.
+  }
+  
+
+}
diff --git a/connectors/hdfs/connector/src/test/java/org/apache/manifoldcf/agents/output/hdfs/SanityPostgresqlTest.java b/connectors/hdfs/connector/src/test/java/org/apache/manifoldcf/agents/output/hdfs/SanityPostgresqlTest.java
new file mode 100644
index 0000000..a4ead9c
--- /dev/null
+++ b/connectors/hdfs/connector/src/test/java/org/apache/manifoldcf/agents/output/hdfs/SanityPostgresqlTest.java
@@ -0,0 +1,42 @@
+/* $Id: Sanity.java 988245 2010-08-23 18:39:35Z kwright $ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.agents.output.hdfs;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+import org.apache.manifoldcf.crawler.interfaces.*;
+import org.apache.manifoldcf.crawler.system.ManifoldCF;
+
+import java.io.*;
+import java.util.*;
+import org.junit.*;
+
+/** This is a very basic sanity check */
+public class SanityPostgresqlTest extends BasePostgresql
+{
+  
+  @Test
+  public void sanityCheck()
+    throws Exception
+  {
+    // If we get this far, it must mean that the setup was successful, which is all that I'm shooting for in this test.
+  }
+  
+
+}
diff --git a/connectors/hdfs/connector/src/test/java/org/apache/manifoldcf/crawler/connectors/hdfs/tests/BaseDerby.java b/connectors/hdfs/connector/src/test/java/org/apache/manifoldcf/crawler/connectors/hdfs/tests/BaseDerby.java
new file mode 100644
index 0000000..861573b
--- /dev/null
+++ b/connectors/hdfs/connector/src/test/java/org/apache/manifoldcf/crawler/connectors/hdfs/tests/BaseDerby.java
@@ -0,0 +1,44 @@
+/* $Id: TestBase.java 988245 2010-08-23 18:39:35Z kwright $ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.crawler.connectors.hdfs.tests;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+import org.apache.manifoldcf.crawler.interfaces.*;
+import org.apache.manifoldcf.crawler.system.ManifoldCF;
+
+import java.io.*;
+import java.util.*;
+import org.junit.*;
+
+/** This is a testing base class that is responsible for setting up/tearing down the agents framework. */
+public class BaseDerby extends org.apache.manifoldcf.crawler.tests.ConnectorBaseDerby
+{
+  
+  protected String[] getConnectorNames()
+  {
+    return new String[]{"HDFS Repository Connector"};
+  }
+  
+  protected String[] getConnectorClasses()
+  {
+    return new String[]{"org.apache.manifoldcf.crawler.connectors.hdfs.HDFSRepositoryConnector"};
+  }
+
+}
diff --git a/connectors/hdfs/connector/src/test/java/org/apache/manifoldcf/crawler/connectors/hdfs/tests/BaseHSQLDB.java b/connectors/hdfs/connector/src/test/java/org/apache/manifoldcf/crawler/connectors/hdfs/tests/BaseHSQLDB.java
new file mode 100644
index 0000000..8fde1ff
--- /dev/null
+++ b/connectors/hdfs/connector/src/test/java/org/apache/manifoldcf/crawler/connectors/hdfs/tests/BaseHSQLDB.java
@@ -0,0 +1,44 @@
+/* $Id: BaseHSQLDB.java 1147086 2011-07-15 10:58:30Z kwright $ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.crawler.connectors.hdfs.tests;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+import org.apache.manifoldcf.crawler.interfaces.*;
+import org.apache.manifoldcf.crawler.system.ManifoldCF;
+
+import java.io.*;
+import java.util.*;
+import org.junit.*;
+
+/** This is a testing base class that is responsible for setting up/tearing down the agents framework. */
+public class BaseHSQLDB extends org.apache.manifoldcf.crawler.tests.ConnectorBaseHSQLDB
+{
+  
+  protected String[] getConnectorNames()
+  {
+    return new String[]{"HDFS Repository Connector"};
+  }
+  
+  protected String[] getConnectorClasses()
+  {
+    return new String[]{"org.apache.manifoldcf.crawler.connectors.hdfs.HDFSRepositoryConnector"};
+  }
+
+}
diff --git a/connectors/hdfs/connector/src/test/java/org/apache/manifoldcf/crawler/connectors/hdfs/tests/BaseMySQL.java b/connectors/hdfs/connector/src/test/java/org/apache/manifoldcf/crawler/connectors/hdfs/tests/BaseMySQL.java
new file mode 100644
index 0000000..e752ec6
--- /dev/null
+++ b/connectors/hdfs/connector/src/test/java/org/apache/manifoldcf/crawler/connectors/hdfs/tests/BaseMySQL.java
@@ -0,0 +1,44 @@
+/* $Id: BaseMySQL.java 1221585 2011-12-21 03:10:03Z kwright $ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.crawler.connectors.hdfs.tests;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+import org.apache.manifoldcf.crawler.interfaces.*;
+import org.apache.manifoldcf.crawler.system.ManifoldCF;
+
+import java.io.*;
+import java.util.*;
+import org.junit.*;
+
+/** This is a testing base class that is responsible for setting up/tearing down the agents framework. */
+public class BaseMySQL extends org.apache.manifoldcf.crawler.tests.ConnectorBaseMySQL
+{
+  
+  protected String[] getConnectorNames()
+  {
+    return new String[]{"HDFS Repository Connector"};
+  }
+  
+  protected String[] getConnectorClasses()
+  {
+    return new String[]{"org.apache.manifoldcf.crawler.connectors.hdfs.HDFSRepositoryConnector"};
+  }
+
+}
diff --git a/connectors/hdfs/connector/src/test/java/org/apache/manifoldcf/crawler/connectors/hdfs/tests/BasePostgresql.java b/connectors/hdfs/connector/src/test/java/org/apache/manifoldcf/crawler/connectors/hdfs/tests/BasePostgresql.java
new file mode 100644
index 0000000..a890a5d
--- /dev/null
+++ b/connectors/hdfs/connector/src/test/java/org/apache/manifoldcf/crawler/connectors/hdfs/tests/BasePostgresql.java
@@ -0,0 +1,44 @@
+/* $Id: TestBase.java 988245 2010-08-23 18:39:35Z kwright $ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.crawler.connectors.hdfs.tests;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+import org.apache.manifoldcf.crawler.interfaces.*;
+import org.apache.manifoldcf.crawler.system.ManifoldCF;
+
+import java.io.*;
+import java.util.*;
+import org.junit.*;
+
+/** This is a testing base class that is responsible for setting up/tearing down the agents framework. */
+public class BasePostgresql extends org.apache.manifoldcf.crawler.tests.ConnectorBasePostgresql
+{
+  
+  protected String[] getConnectorNames()
+  {
+    return new String[]{"HDFS Repository Connector"};
+  }
+  
+  protected String[] getConnectorClasses()
+  {
+    return new String[]{"org.apache.manifoldcf.crawler.connectors.hdfs.HDFSRepositoryConnector"};
+  }
+
+}
diff --git a/connectors/hdfs/connector/src/test/java/org/apache/manifoldcf/crawler/connectors/hdfs/tests/SanityDerbyTest.java b/connectors/hdfs/connector/src/test/java/org/apache/manifoldcf/crawler/connectors/hdfs/tests/SanityDerbyTest.java
new file mode 100644
index 0000000..0ee767d
--- /dev/null
+++ b/connectors/hdfs/connector/src/test/java/org/apache/manifoldcf/crawler/connectors/hdfs/tests/SanityDerbyTest.java
@@ -0,0 +1,42 @@
+/* $Id: Sanity.java 988245 2010-08-23 18:39:35Z kwright $ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.crawler.connectors.hdfs.tests;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+import org.apache.manifoldcf.crawler.interfaces.*;
+import org.apache.manifoldcf.crawler.system.ManifoldCF;
+
+import java.io.*;
+import java.util.*;
+import org.junit.*;
+
+/** This is a very basic sanity check */
+public class SanityDerbyTest extends BaseDerby
+{
+  
+  @Test
+  public void sanityCheck()
+    throws Exception
+  {
+    // If we get this far, it must mean that the setup was successful, which is all that I'm shooting for in this test.
+  }
+  
+
+}
diff --git a/connectors/hdfs/connector/src/test/java/org/apache/manifoldcf/crawler/connectors/hdfs/tests/SanityHSQLDBTest.java b/connectors/hdfs/connector/src/test/java/org/apache/manifoldcf/crawler/connectors/hdfs/tests/SanityHSQLDBTest.java
new file mode 100644
index 0000000..ac0d321
--- /dev/null
+++ b/connectors/hdfs/connector/src/test/java/org/apache/manifoldcf/crawler/connectors/hdfs/tests/SanityHSQLDBTest.java
@@ -0,0 +1,42 @@
+/* $Id: SanityHSQLDBTest.java 1147086 2011-07-15 10:58:30Z kwright $ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.crawler.connectors.hdfs.tests;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+import org.apache.manifoldcf.crawler.interfaces.*;
+import org.apache.manifoldcf.crawler.system.ManifoldCF;
+
+import java.io.*;
+import java.util.*;
+import org.junit.*;
+
+/** This is a very basic sanity check */
+public class SanityHSQLDBTest extends BaseHSQLDB
+{
+  
+  @Test
+  public void sanityCheck()
+    throws Exception
+  {
+    // If we get this far, it must mean that the setup was successful, which is all that I'm shooting for in this test.
+  }
+  
+
+}
diff --git a/connectors/hdfs/connector/src/test/java/org/apache/manifoldcf/crawler/connectors/hdfs/tests/SanityMySQLTest.java b/connectors/hdfs/connector/src/test/java/org/apache/manifoldcf/crawler/connectors/hdfs/tests/SanityMySQLTest.java
new file mode 100644
index 0000000..3207ce1
--- /dev/null
+++ b/connectors/hdfs/connector/src/test/java/org/apache/manifoldcf/crawler/connectors/hdfs/tests/SanityMySQLTest.java
@@ -0,0 +1,42 @@
+/* $Id: SanityMySQLTest.java 1221585 2011-12-21 03:10:03Z kwright $ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.crawler.connectors.hdfs.tests;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+import org.apache.manifoldcf.crawler.interfaces.*;
+import org.apache.manifoldcf.crawler.system.ManifoldCF;
+
+import java.io.*;
+import java.util.*;
+import org.junit.*;
+
+/** This is a very basic sanity check */
+public class SanityMySQLTest extends BaseMySQL
+{
+  
+  @Test
+  public void sanityCheck()
+    throws Exception
+  {
+    // If we get this far, it must mean that the setup was successful, which is all that I'm shooting for in this test.
+  }
+  
+
+}
diff --git a/connectors/hdfs/connector/src/test/java/org/apache/manifoldcf/crawler/connectors/hdfs/tests/SanityPostgresqlTest.java b/connectors/hdfs/connector/src/test/java/org/apache/manifoldcf/crawler/connectors/hdfs/tests/SanityPostgresqlTest.java
new file mode 100644
index 0000000..8246bdb
--- /dev/null
+++ b/connectors/hdfs/connector/src/test/java/org/apache/manifoldcf/crawler/connectors/hdfs/tests/SanityPostgresqlTest.java
@@ -0,0 +1,42 @@
+/* $Id: Sanity.java 988245 2010-08-23 18:39:35Z kwright $ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.crawler.connectors.hdfs.tests;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+import org.apache.manifoldcf.crawler.interfaces.*;
+import org.apache.manifoldcf.crawler.system.ManifoldCF;
+
+import java.io.*;
+import java.util.*;
+import org.junit.*;
+
+/** This is a very basic sanity check */
+public class SanityPostgresqlTest extends BasePostgresql
+{
+  
+  @Test
+  public void sanityCheck()
+    throws Exception
+  {
+    // If we get this far, it must mean that the setup was successful, which is all that I'm shooting for in this test.
+  }
+  
+
+}
diff --git a/connectors/hdfs/pom.xml b/connectors/hdfs/pom.xml
new file mode 100644
index 0000000..40c4c34
--- /dev/null
+++ b/connectors/hdfs/pom.xml
@@ -0,0 +1,165 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+  <parent>
+    <groupId>org.apache.manifoldcf</groupId>
+    <artifactId>mcf-connectors</artifactId>
+    <version>1.5-SNAPSHOT</version>
+  </parent>
+  <modelVersion>4.0.0</modelVersion>
+
+  <artifactId>mcf-hdfs-connector</artifactId>
+  <name>ManifoldCF - Connectors - HDFS</name>
+
+  <build>
+    <sourceDirectory>${basedir}/connector/src/main/java</sourceDirectory>
+    <testSourceDirectory>${basedir}/connector/src/test/java</testSourceDirectory>
+    <resources>
+      <resource>
+        <directory>${basedir}/connector/src/main/resources</directory>
+        <includes>
+          <include>**/*.html</include>
+          <include>**/*.js</include>
+        </includes>
+      </resource>
+      <resource>
+        <directory>${basedir}/connector/src/main/native2ascii</directory>
+        <includes>
+          <include>**/*.properties</include>
+        </includes>
+      </resource>
+    </resources> 
+
+    <plugins>
+      <plugin>
+        <groupId>org.codehaus.mojo</groupId>
+        <artifactId>native2ascii-maven-plugin</artifactId>
+        <version>1.0-beta-1</version>
+        <configuration>
+            <workDir>target/classes</workDir>
+        </configuration>
+        <executions>
+            <execution>
+                <id>native2ascii-utf8</id>
+                <goals>
+                    <goal>native2ascii</goal>
+                </goals>
+                <configuration>
+                    <encoding>UTF8</encoding>
+                    <includes>
+                      <include>**/*.properties</include>
+                    </includes>
+                </configuration>
+            </execution>
+        </executions>
+      </plugin>
+      <plugin>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-surefire-plugin</artifactId>
+        <configuration>
+          <excludes>
+            <exclude>**/*Postgresql*.java</exclude>
+            <exclude>**/*MySQL*.java</exclude>
+          </excludes>
+          <forkMode>always</forkMode>
+          <workingDirectory>target/test-output</workingDirectory>
+        </configuration>
+      </plugin>
+    </plugins>
+  </build>
+  
+  <dependencies>
+    <dependency>
+      <groupId>${project.groupId}</groupId>
+      <artifactId>mcf-core</artifactId>
+      <version>${project.version}</version>
+    </dependency>
+    <dependency>
+      <groupId>${project.groupId}</groupId>
+      <artifactId>mcf-agents</artifactId>
+      <version>${project.version}</version>
+    </dependency>
+    <dependency>
+      <groupId>${project.groupId}</groupId>
+      <artifactId>mcf-pull-agent</artifactId>
+      <version>${project.version}</version>
+    </dependency>
+    <dependency>
+      <groupId>${project.groupId}</groupId>
+      <artifactId>mcf-ui-core</artifactId>
+      <version>${project.version}</version>
+    </dependency>
+    <dependency>
+      <groupId>junit</groupId>
+      <artifactId>junit</artifactId>
+      <version>${junit.version}</version>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>${project.groupId}</groupId>
+      <artifactId>mcf-core</artifactId>
+      <version>${project.version}</version>
+      <type>test-jar</type>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>${project.groupId}</groupId>
+      <artifactId>mcf-agents</artifactId>
+      <version>${project.version}</version>
+      <type>test-jar</type>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>${project.groupId}</groupId>
+      <artifactId>mcf-pull-agent</artifactId>
+      <version>${project.version}</version>
+      <type>test-jar</type>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>postgresql</groupId>
+      <artifactId>postgresql</artifactId>
+      <version>${postgresql.version}</version>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.hsqldb</groupId>
+      <artifactId>hsqldb</artifactId>
+      <version>${hsqldb.version}</version>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.derby</groupId>
+      <artifactId>derby</artifactId>
+      <version>${derby.version}</version>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>mysql</groupId>
+      <artifactId>mysql-connector-java</artifactId>
+      <version>${mysql.version}</version>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.hadoop</groupId>
+      <artifactId>hadoop-common</artifactId>
+      <version>2.2.0</version>
+    </dependency>
+  </dependencies>
+</project>
diff --git a/connectors/jcifs/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/sharedrive/SharedDriveConnector.java b/connectors/jcifs/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/sharedrive/SharedDriveConnector.java
index a9dddcc..c160fa8 100644
--- a/connectors/jcifs/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/sharedrive/SharedDriveConnector.java
+++ b/connectors/jcifs/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/sharedrive/SharedDriveConnector.java
@@ -46,6 +46,7 @@
 import org.apache.manifoldcf.agents.interfaces.ServiceInterruption;
 import org.apache.manifoldcf.core.interfaces.IThreadContext;
 import org.apache.manifoldcf.core.interfaces.IHTTPOutput;
+import org.apache.manifoldcf.core.interfaces.IPasswordMapperActivity;
 import org.apache.manifoldcf.core.interfaces.IPostParameters;
 import org.apache.manifoldcf.core.interfaces.ConfigParams;
 import org.apache.manifoldcf.core.interfaces.ManifoldCFException;
@@ -53,6 +54,7 @@
 import org.apache.manifoldcf.core.interfaces.KeystoreManagerFactory;
 import org.apache.manifoldcf.core.interfaces.Configuration;
 import org.apache.manifoldcf.core.interfaces.ConfigurationNode;
+import org.apache.manifoldcf.core.interfaces.LockManagerFactory;
 import org.apache.manifoldcf.crawler.interfaces.DocumentSpecification;
 import org.apache.manifoldcf.crawler.interfaces.IDocumentIdentifierStream;
 import org.apache.manifoldcf.crawler.interfaces.IProcessActivity;
@@ -108,7 +110,7 @@
     System.setProperty("jcifs.smb.client.responseTimeout","120000");
     System.setProperty("jcifs.resolveOrder","LMHOSTS,DNS,WINS");
     System.setProperty("jcifs.smb.client.listCount","20");
-    System.setProperty("jcifs.sm.client.dfs.strictView","true");
+    System.setProperty("jcifs.smb.client.dfs.strictView","true");
   }
   
   private String smbconnectionPath = null;
@@ -119,17 +121,27 @@
   private boolean useSIDs = true;
 
   private NtlmPasswordAuthentication pa;
-
+  
   /** Deny access token for default authority */
-  private final static String defaultAuthorityDenyToken = "DEAD_AUTHORITY";
+  private final static String defaultAuthorityDenyToken = GLOBAL_DENY_TOKEN;
 
   /** Constructor.
   */
   public SharedDriveConnector()
   {
-    // We need to know whether to operate in NTLMv2 mode, or in NTLM mode.
-    String value = ManifoldCF.getProperty(PROPERTY_JCIFS_USE_NTLM_V1);
-    if (value == null || value.toLowerCase().equals("false"))
+  }
+
+  /** Set thread context.
+  * Use the opportunity to set the system properties we'll need.
+  */
+  @Override
+  public void setThreadContext(IThreadContext threadContext)
+    throws ManifoldCFException
+  {
+    super.setThreadContext(threadContext);
+    // We need to know whether to operate in NTLMv2 mode, or in NTLM mode.  We do this before jcifs is called for the first time.
+    boolean useV1 = LockManagerFactory.getBooleanProperty(threadContext, PROPERTY_JCIFS_USE_NTLM_V1, false);
+    if (!useV1)
     {
       System.setProperty("jcifs.smb.lmCompatibility","3");
       System.setProperty("jcifs.smb.client.useExtendedSecurity","true");
@@ -140,13 +152,14 @@
       System.setProperty("jcifs.smb.client.useExtendedSecurity","false");
     }
   }
-
+  
   /** Establish a "session".  In the case of the jcifs connector, this just builds the appropriate smbconnectionPath string, and does the necessary checks. */
   protected void getSession()
     throws ManifoldCFException
   {
     if (smbconnectionPath == null)
     {
+      
       // Get the server
       if (server == null || server.length() == 0)
         throw new ManifoldCFException("Missing parameter '"+SharedDriveParameters.server+"'");
@@ -726,115 +739,126 @@
               String fileName = getFileCanonicalPath(file);
               if (fileName != null && !file.isHidden())
               {
-                // manipulate path to include the DFS alias, not the literal path
-                // String newPath = matchPrefix + fileName.substring(matchReplace.length());
-                String newPath = fileName;
-                if (checkNeedFileData(newPath, spec))
-                {
-                  if (Logging.connectors.isDebugEnabled())
-                    Logging.connectors.debug("JCIFS: Local file data needed for '"+documentIdentifier+"'");
+                // Initialize repository document with common stuff, and find the URI
+                RepositoryDocument rd = new RepositoryDocument();
+                String uri = prepareForIndexing(rd,file,version);
 
-                  // Create a temporary file, and use that for the check and then the ingest
-                  File tempFile = File.createTempFile("_sdc_",null);
-                  try
+                if (activities.checkURLIndexable(uri))
+                {
+
+                  // manipulate path to include the DFS alias, not the literal path
+                  // String newPath = matchPrefix + fileName.substring(matchReplace.length());
+                  String newPath = fileName;
+                  if (checkNeedFileData(newPath, spec))
                   {
-                    FileOutputStream os = new FileOutputStream(tempFile);
+                    if (Logging.connectors.isDebugEnabled())
+                      Logging.connectors.debug("JCIFS: Local file data needed for '"+documentIdentifier+"'");
+
+                    // Create a temporary file, and use that for the check and then the ingest
+                    File tempFile = File.createTempFile("_sdc_",null);
                     try
                     {
-
-                      // Now, make a local copy so we can fingerprint
-                      InputStream inputStream = getFileInputStream(file);
+                      FileOutputStream os = new FileOutputStream(tempFile);
                       try
                       {
-                        // Copy!
-                        if (transferBuffer == null)
-                          transferBuffer = new byte[65536];
-                        while (true)
+
+                        // Now, make a local copy so we can fingerprint
+                        InputStream inputStream = getFileInputStream(file);
+                        try
                         {
-                          int amt = inputStream.read(transferBuffer,0,transferBuffer.length);
-                          if (amt == -1)
-                            break;
-                          os.write(transferBuffer,0,amt);
+                          // Copy!
+                          if (transferBuffer == null)
+                            transferBuffer = new byte[65536];
+                          while (true)
+                          {
+                            int amt = inputStream.read(transferBuffer,0,transferBuffer.length);
+                            if (amt == -1)
+                              break;
+                            os.write(transferBuffer,0,amt);
+                          }
+                        }
+                        finally
+                        {
+                          inputStream.close();
                         }
                       }
                       finally
                       {
-                        inputStream.close();
+                        os.close();
+                      }
+
+                      if (checkIngest(tempFile, newPath, spec, activities))
+                      {
+                        if (Logging.connectors.isDebugEnabled())
+                          Logging.connectors.debug("JCIFS: Decided to ingest '"+documentIdentifier+"'");
+                        // OK, do ingestion itself!
+                        InputStream inputStream = new FileInputStream(tempFile);
+                        try
+                        {
+                          rd.setBinary(inputStream, tempFile.length());
+                          
+                          activities.ingestDocument(documentIdentifier, version, uri, rd);
+                        }
+                        finally
+                        {
+                          inputStream.close();
+                        }
+
+                        // I put this record here deliberately for two reasons:
+                        // (1) the other path includes ingestion time, and
+                        // (2) if anything fails up to and during ingestion, I want THAT failure record to be written, not this one.
+                        // So, really, ACTIVITY_ACCESS is a bit more than just fetch for JCIFS...
+                        activities.recordActivity(new Long(startFetchTime),ACTIVITY_ACCESS,
+                          new Long(tempFile.length()),documentIdentifier,"Success",null,null);
+
+                      }
+                      else
+                      {
+                        // We must actively remove the document here, because the getDocumentVersions()
+                        // method has no way of signalling this, since it does not do the fingerprinting.
+                        if (Logging.connectors.isDebugEnabled())
+                          Logging.connectors.debug("JCIFS: Decided to remove '"+documentIdentifier+"'");
+                        activities.deleteDocument(documentIdentifier, version);
+                        // We should record the access here as well, since this is a non-exception way through the code path.
+                        // (I noticed that this was not being recorded in the history while fixing 25477.)
+                        activities.recordActivity(new Long(startFetchTime),ACTIVITY_ACCESS,
+                          new Long(tempFile.length()),documentIdentifier,"Success",null,null);
                       }
                     }
                     finally
                     {
-                      os.close();
-                    }
-
-
-                    if (checkIngest(tempFile, newPath, spec, activities))
-                    {
-                      if (Logging.connectors.isDebugEnabled())
-                        Logging.connectors.debug("JCIFS: Decided to ingest '"+documentIdentifier+"'");
-                      // OK, do ingestion itself!
-                      InputStream inputStream = new FileInputStream(tempFile);
-                      try
-                      {
-                        RepositoryDocument rd = new RepositoryDocument();
-                        rd.setBinary(inputStream, tempFile.length());
-                        
-                        indexDocument(activities,rd,file,documentIdentifier,version);
-                      }
-                      finally
-                      {
-                        inputStream.close();
-                      }
-
-                      // I put this record here deliberately for two reasons:
-                      // (1) the other path includes ingestion time, and
-                      // (2) if anything fails up to and during ingestion, I want THAT failure record to be written, not this one.
-                      // So, really, ACTIVITY_ACCESS is a bit more than just fetch for JCIFS...
-                      activities.recordActivity(new Long(startFetchTime),ACTIVITY_ACCESS,
-                        new Long(tempFile.length()),documentIdentifier,"Success",null,null);
-
-                    }
-                    else
-                    {
-                      // We must actively remove the document here, because the getDocumentVersions()
-                      // method has no way of signalling this, since it does not do the fingerprinting.
-                      if (Logging.connectors.isDebugEnabled())
-                        Logging.connectors.debug("JCIFS: Decided to remove '"+documentIdentifier+"'");
-                      activities.deleteDocument(documentIdentifier, version);
-                      // We should record the access here as well, since this is a non-exception way through the code path.
-                      // (I noticed that this was not being recorded in the history while fixing 25477.)
-                      activities.recordActivity(new Long(startFetchTime),ACTIVITY_ACCESS,
-                        new Long(tempFile.length()),documentIdentifier,"Success",null,null);
+                      tempFile.delete();
                     }
                   }
-                  finally
+                  else
                   {
-                    tempFile.delete();
+                    if (Logging.connectors.isDebugEnabled())
+                      Logging.connectors.debug("JCIFS: Local file data not needed for '"+documentIdentifier+"'");
+
+                    // Presume that since the file was queued, it fulfilled the needed criteria.
+                    // Go off and ingest the fast way.
+
+                    // Ingest the document.
+                    InputStream inputStream = getFileInputStream(file);
+                    try
+                    {
+                      rd.setBinary(inputStream, fileLength(file));
+                      
+                      activities.ingestDocument(documentIdentifier, version, uri, rd);
+                    }
+                    finally
+                    {
+                      inputStream.close();
+                    }
+                    activities.recordActivity(new Long(startFetchTime),ACTIVITY_ACCESS,
+                      new Long(fileLength(file)),documentIdentifier,"Success",null,null);
                   }
                 }
                 else
                 {
-                  if (Logging.connectors.isDebugEnabled())
-                    Logging.connectors.debug("JCIFS: Local file data not needed for '"+documentIdentifier+"'");
-
-                  // Presume that since the file was queued that it fulfilled the needed criteria.
-                  // Go off and ingest the fast way.
-
-                  // Ingest the document.
-                  InputStream inputStream = getFileInputStream(file);
-                  try
-                  {
-                    RepositoryDocument rd = new RepositoryDocument();
-                    rd.setBinary(inputStream, fileLength(file));
-                    
-                    indexDocument(activities,rd,file,documentIdentifier,version);
-                  }
-                  finally
-                  {
-                    inputStream.close();
-                  }
-                  activities.recordActivity(new Long(startFetchTime),ACTIVITY_ACCESS,
-                    new Long(fileLength(file)),documentIdentifier,"Success",null,null);
+                  Logging.connectors.debug("JCIFS: Skipping file because output connector cannot accept it");
+                  activities.recordActivity(null,ACTIVITY_ACCESS,
+                    null,documentIdentifier,"Skip","Output connector refused",null);
                 }
               }
               else
@@ -965,11 +989,18 @@
 
   }
 
-  protected static void indexDocument(IProcessActivity activities, RepositoryDocument rd, SmbFile file, String documentIdentifier, String version)
-    throws ManifoldCFException, ServiceInterruption, SmbException
+  protected static String prepareForIndexing(RepositoryDocument rd, SmbFile file, String version)
+    throws ManifoldCFException, SmbException
   {
     String fileNameString = file.getName();
     Date lastModifiedDate = new Date(file.lastModified());
+    Date creationDate = new Date(file.createTime());
+    //If using the lastAccess patched/Google version of jcifs then this can be uncommented
+    //Date lastAccessDate = new Date(file.lastAccess());
+    Integer attributes = file.getAttributes();
+    String shareName = file.getShare();
+
+    
     String contentType = mapExtensionToMimeType(fileNameString);
 
     rd.setFileName(fileNameString);
@@ -977,15 +1008,24 @@
       rd.setMimeType(contentType);
     rd.addField("lastModified", lastModifiedDate.toString());
     rd.setModifiedDate(lastModifiedDate);
+    
+    // Add extra obtainable fields to the field map
+    rd.addField("createdOn", creationDate.toString());
+    rd.setCreatedDate(creationDate);
+
+    //rd.addField("lastAccess", lastModifiedDate.toString());
+    rd.addField("attributes", Integer.toString(attributes));
+    rd.addField("shareName", shareName);
+
 
     int index = 0;
     index = setDocumentSecurity(rd,version,index);
     index = setPathMetadata(rd,version,index);
     StringBuilder ingestURI = new StringBuilder();
     index = unpack(ingestURI,version,index,'+');
-    activities.ingestDocument(documentIdentifier, version, ingestURI.toString(), rd);
+    return ingestURI.toString();
   }
-
+  
   /** Map an extension to a mime type */
   protected static String mapExtensionToMimeType(String fileName)
   {
@@ -1433,7 +1473,8 @@
       if (!isDirectory)
       {
         long fileLength = fileLength(file);
-        if (!activities.checkLengthIndexable(fileLength))
+        if (!activities.checkLengthIndexable(fileLength) ||
+          !activities.checkMimeTypeIndexable(mapExtensionToMimeType(fileName)))
           return false;
         long maxFileLength = Long.MAX_VALUE;
         i = 0;
@@ -1691,6 +1732,7 @@
                     isIndexable = false;
                   else
                   {
+                    // Evaluate the parts of the indexability check that are based on the filename, mime type, and URL
                     isIndexable = pretendIndexable;
                   }
 
@@ -2621,15 +2663,18 @@
     Locale locale, ConfigParams parameters, String tabName)
     throws ManifoldCFException, IOException
   {
-    String server   = parameters.getParameter(org.apache.manifoldcf.crawler.connectors.sharedrive.SharedDriveParameters.server);
+    String server   = parameters.getParameter(SharedDriveParameters.server);
     if (server==null) server = "";
-    String domain = parameters.getParameter(org.apache.manifoldcf.crawler.connectors.sharedrive.SharedDriveParameters.domain);
+    String domain = parameters.getParameter(SharedDriveParameters.domain);
     if (domain==null) domain = "";
-    String username = parameters.getParameter(org.apache.manifoldcf.crawler.connectors.sharedrive.SharedDriveParameters.username);
+    String username = parameters.getParameter(SharedDriveParameters.username);
     if (username==null) username = "";
-    String password = parameters.getObfuscatedParameter(org.apache.manifoldcf.crawler.connectors.sharedrive.SharedDriveParameters.password);
-    if (password==null) password = "";
-    String resolvesids = parameters.getParameter(org.apache.manifoldcf.crawler.connectors.sharedrive.SharedDriveParameters.useSIDs);
+    String password = parameters.getObfuscatedParameter(SharedDriveParameters.password);
+    if (password==null)
+      password = "";
+    else
+      password = out.mapPasswordToKey(password);
+    String resolvesids = parameters.getParameter(SharedDriveParameters.useSIDs);
     if (resolvesids==null) resolvesids = "true";
 
     // "Server" tab
@@ -2691,19 +2736,19 @@
   {
     String server = variableContext.getParameter("server");
     if (server != null)
-      parameters.setParameter(org.apache.manifoldcf.crawler.connectors.sharedrive.SharedDriveParameters.server,server);
+      parameters.setParameter(SharedDriveParameters.server,server);
 	
     String domain = variableContext.getParameter("domain");
     if (domain != null)
-      parameters.setParameter(org.apache.manifoldcf.crawler.connectors.sharedrive.SharedDriveParameters.domain,domain);
+      parameters.setParameter(SharedDriveParameters.domain,domain);
 	
     String username = variableContext.getParameter("username");
     if (username != null)
-      parameters.setParameter(org.apache.manifoldcf.crawler.connectors.sharedrive.SharedDriveParameters.username,username);
+      parameters.setParameter(SharedDriveParameters.username,username);
 		
     String password = variableContext.getParameter("password");
     if (password != null)
-      parameters.setObfuscatedParameter(org.apache.manifoldcf.crawler.connectors.sharedrive.SharedDriveParameters.password,password);
+      parameters.setObfuscatedParameter(SharedDriveParameters.password,variableContext.mapKeyToPassword(password));
     
     String resolvesidspresent = variableContext.getParameter("resolvesidspresent");
     if (resolvesidspresent != null)
diff --git a/connectors/jcifs/pom.xml b/connectors/jcifs/pom.xml
index 673f378..4e27b9a 100644
--- a/connectors/jcifs/pom.xml
+++ b/connectors/jcifs/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.manifoldcf</groupId>
     <artifactId>mcf-connectors</artifactId>
-    <version>1.2-SNAPSHOT</version>
+    <version>1.5-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
 
@@ -30,14 +30,22 @@
   <build>
     <sourceDirectory>${basedir}/connector/src/main/java</sourceDirectory>
     <testSourceDirectory>${basedir}/connector/src/test/java</testSourceDirectory>
+    <resources>
+      <resource>
+        <directory>${basedir}/connector/src/main/native2ascii</directory>
+        <includes>
+          <include>**/*.properties</include>
+        </includes>
+      </resource>
+    </resources> 
+
     <plugins>
       <plugin>
         <groupId>org.codehaus.mojo</groupId>
         <artifactId>native2ascii-maven-plugin</artifactId>
-        <version>1.0-alpha-1</version>
+        <version>1.0-beta-1</version>
         <configuration>
-            <dest>target/classes</dest>
-            <src>connector/src/main/native2ascii</src>
+            <workDir>target/classes</workDir>
         </configuration>
         <executions>
             <execution>
@@ -47,7 +55,9 @@
                 </goals>
                 <configuration>
                     <encoding>UTF8</encoding>
-                    <includes>**/*.properties</includes>
+                    <includes>
+                      <include>**/*.properties</include>
+                    </includes>
                 </configuration>
             </execution>
         </executions>
diff --git a/connectors/jdbc/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/jdbc/JDBCAuthority.java b/connectors/jdbc/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/jdbc/JDBCAuthority.java
index c1ff97b..213d109 100644
--- a/connectors/jdbc/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/jdbc/JDBCAuthority.java
+++ b/connectors/jdbc/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/jdbc/JDBCAuthority.java
@@ -45,11 +45,12 @@
 import org.apache.manifoldcf.core.interfaces.ManifoldCFException;

 import org.apache.manifoldcf.core.interfaces.StringSet;

 import org.apache.manifoldcf.core.interfaces.TimeMarker;

-import org.apache.manifoldcf.core.jdbcpool.WrappedConnection;

-import org.apache.manifoldcf.crawler.connectors.jdbc.JDBCConnectionFactory;

-import org.apache.manifoldcf.crawler.connectors.jdbc.JDBCConstants;

-import org.apache.manifoldcf.crawler.connectors.jdbc.Messages;

-import org.apache.manifoldcf.crawler.system.Logging;

+import org.apache.manifoldcf.core.interfaces.IResultRow;

+import org.apache.manifoldcf.jdbc.JDBCConnection;

+import org.apache.manifoldcf.jdbc.JDBCConstants;

+import org.apache.manifoldcf.jdbc.IDynamicResultSet;

+import org.apache.manifoldcf.jdbc.IDynamicResultRow;

+import org.apache.manifoldcf.authorities.system.Logging;

 

 /**

  *

@@ -58,19 +59,19 @@
 public class JDBCAuthority extends BaseAuthorityConnector {

 

   public static final String _rcsid = "@(#)$Id: JDBCAuthority.java $";

-  private static final String globalDenyToken = "DEAD_AUTHORITY";

-  private static final AuthorizationResponse unreachableResponse = new AuthorizationResponse(new String[]{globalDenyToken},

-    AuthorizationResponse.RESPONSE_UNREACHABLE);

-  private static final AuthorizationResponse userNotFoundResponse = new AuthorizationResponse(new String[]{globalDenyToken},

-    AuthorizationResponse.RESPONSE_USERNOTFOUND);

-  protected WrappedConnection connection = null;

+

+  protected JDBCConnection connection = null;

   protected String jdbcProvider = null;

+  protected String accessMethod = null;

   protected String host = null;

   protected String databaseName = null;

+  protected String rawDriverString = null;

   protected String userName = null;

   protected String password = null;

+

   protected String idQuery = null;

   protected String tokenQuery = null;

+

   private long responseLifetime = 60000L; //60sec

   private int LRUsize = 1000;

   /**

@@ -98,10 +99,13 @@
     super.connect(configParams);

 

     jdbcProvider = configParams.getParameter(JDBCConstants.providerParameter);

+    accessMethod = configParams.getParameter(JDBCConstants.methodParameter);

     host = configParams.getParameter(JDBCConstants.hostParameter);

     databaseName = configParams.getParameter(JDBCConstants.databaseNameParameter);

+    rawDriverString = configParams.getParameter(JDBCConstants.driverStringParameter);

     userName = configParams.getParameter(JDBCConstants.databaseUserName);

     password = configParams.getObfuscatedParameter(JDBCConstants.databasePassword);

+

     idQuery = configParams.getParameter(JDBCConstants.databaseUserIdQuery);

     tokenQuery = configParams.getParameter(JDBCConstants.databaseTokensQuery);

   }

@@ -113,12 +117,13 @@
   public String check()

     throws ManifoldCFException {

     try {

-      WrappedConnection tempConnection = JDBCConnectionFactory.getConnection(jdbcProvider, host, databaseName, userName, password);

-      JDBCConnectionFactory.releaseConnection(tempConnection);

+      getSession();

+      // Attempt to fetch a connection; if this succeeds we pass

+      connection.testConnection();

       return super.check();

-    } catch (Throwable e) {

-      if (Logging.connectors.isDebugEnabled()) {

-        Logging.connectors.debug("Service interruption in check(): " + e.getMessage(), e);

+    } catch (ServiceInterruption e) {

+      if (Logging.authorityConnectors.isDebugEnabled()) {

+        Logging.authorityConnectors.debug("Service interruption in check(): " + e.getMessage(), e);

       }

       return "Transient error: " + e.getMessage();

     }

@@ -130,13 +135,12 @@
   @Override

   public void disconnect()

     throws ManifoldCFException {

-    if (connection != null) {

-      JDBCConnectionFactory.releaseConnection(connection);

-      connection = null;

-    }

+    connection = null;

     host = null;

     jdbcProvider = null;

+    accessMethod = null;

     databaseName = null;

+    rawDriverString = null;

     userName = null;

     password = null;

 

@@ -152,19 +156,19 @@
       if (jdbcProvider == null || jdbcProvider.length() == 0) {

         throw new ManifoldCFException("Missing parameter '" + JDBCConstants.providerParameter + "'");

       }

-      if (host == null || host.length() == 0) {

-        throw new ManifoldCFException("Missing parameter '" + JDBCConstants.hostParameter + "'");

-      }

+      if ((host == null || host.length() == 0) && (rawDriverString == null || rawDriverString.length() == 0))

+        throw new ManifoldCFException("Missing parameter '"+JDBCConstants.hostParameter+"' or '"+JDBCConstants.driverStringParameter+"'");

 

-      connection = JDBCConnectionFactory.getConnection(jdbcProvider, host, databaseName, userName, password);

+      connection = new JDBCConnection(jdbcProvider,(accessMethod==null || accessMethod.equals("name")),host,databaseName,rawDriverString,userName,password);

     }

   }

 

   private String createCacheConnectionString() {

     StringBuilder sb = new StringBuilder();

     sb.append(jdbcProvider).append("|")

-      .append(host).append("|")

-      .append(databaseName).append("|")

+      .append((host==null)?"":host).append("|")

+      .append((databaseName==null)?"":databaseName).append("|")

+      .append((rawDriverString==null)?"":rawDriverString).append("|")

       .append(userName);

     return sb.toString();

   }

@@ -209,10 +213,12 @@
 

   public AuthorizationResponse getAuthorizationResponseUncached(String userName)

     throws ManifoldCFException {

-    try {

+    try

+    {

       getSession();

 

       VariableMap vm = new VariableMap();

+      addConstant(vm, JDBCConstants.idReturnVariable, JDBCConstants.idReturnColumnName);

       addVariable(vm, JDBCConstants.userNameVariable, userName);

 

       // Find user id

@@ -220,55 +226,96 @@
       StringBuilder sb = new StringBuilder();

       substituteQuery(idQuery, vm, sb, paramList);

 

-      PreparedStatement ps = connection.getConnection().prepareStatement(sb.toString());

-      loadPS(ps, paramList);

-      ResultSet rs = ps.executeQuery();

-      if (rs == null) {

-        return unreachableResponse;

+      IDynamicResultSet idSet;

+      try {

+        idSet = connection.executeUncachedQuery(sb.toString(),paramList,-1);

       }

+      catch (ServiceInterruption e)

+      {

+        return RESPONSE_UNREACHABLE;

+      }

+      catch (ManifoldCFException e)

+      {

+        throw e;

+      }

+

       String uid;

-      if (rs.next()) {

-        uid = rs.getString(1);

-      } else {

-        return userNotFoundResponse;

+      try {

+        IDynamicResultRow row = idSet.getNextRow();

+        if (row == null)

+          return RESPONSE_USERNOTFOUND;

+        try

+        {

+          Object oUid = row.getValue(JDBCConstants.idReturnColumnName);

+          if (oUid == null)

+            throw new ManifoldCFException("Bad id query; doesn't return $(IDCOLUMN) column.  Try using quotes around $(IDCOLUMN) variable, e.g. \"$(IDCOLUMN)\".");

+          uid = JDBCConnection.readAsString(oUid);

+        }

+        finally

+        {

+          row.close();

+        }

+      } finally {

+        idSet.close();

       }

-      if (uid == null || uid.isEmpty()) {

-        return unreachableResponse;

+

+      if (uid.isEmpty()) {

+        return RESPONSE_USERNOTFOUND;

       }

 

       // now check tokens

       vm = new VariableMap();

+      addConstant(vm, JDBCConstants.tokenReturnVariable, JDBCConstants.tokenReturnColumnName);

       addVariable(vm, JDBCConstants.userNameVariable, userName);

       addVariable(vm, JDBCConstants.userIDVariable, uid);

       sb = new StringBuilder();

       paramList = new ArrayList();

       substituteQuery(tokenQuery, vm, sb, paramList);

-      ps = connection.getConnection().prepareStatement(sb.toString());

-      loadPS(ps, paramList);

-      rs = ps.executeQuery();

-      if (rs == null) {

-        return unreachableResponse;

+      

+      try {

+        idSet = connection.executeUncachedQuery(sb.toString(),paramList,-1);

       }

+      catch (ServiceInterruption e)

+      {

+        return RESPONSE_UNREACHABLE;

+      }

+      catch (ManifoldCFException e)

+      {

+        throw e;

+      }

+

       ArrayList<String> tokenArray = new ArrayList<String>();

-      while (rs.next()) {

-        String token = rs.getString(1);

-        if (token != null && !token.isEmpty()) {

-          tokenArray.add(token);

+      try {

+        while (true)

+        {

+          IDynamicResultRow row = idSet.getNextRow();

+          if (row == null)

+            break;

+          try

+          {

+            Object oToken = row.getValue(JDBCConstants.tokenReturnColumnName);

+            if (oToken == null)

+              throw new ManifoldCFException("Bad token query; doesn't return $(TOKENCOLUMN) column.  Try using quotes around $(TOKENCOLUMN) variable, e.g. \"$(TOKENCOLUMN)\".");

+            String token = JDBCConnection.readAsString(oToken);

+

+            if (!token.isEmpty()) {

+              tokenArray.add(token);

+            }

+          }

+          finally

+          {

+            row.close();

+          }

         }

+      } finally {

+        idSet.close();

       }

-

-      String[] tokens = new String[tokenArray.size()];

-      int k = 0;

-      while (k < tokens.length) {

-        tokens[k] = tokenArray.get(k);

-        k++;

-      }

-

-      return new AuthorizationResponse(tokens, AuthorizationResponse.RESPONSE_OK);

-

-    } catch (Exception e) {

-      // Unreachable

-      return unreachableResponse;

+      return new AuthorizationResponse(tokenArray.toArray(new String[0]), AuthorizationResponse.RESPONSE_OK);

+    }

+    catch (ServiceInterruption e)

+    {

+      Logging.authorityConnectors.warn("JDBCAuthority: Service interruption: "+e.getMessage(),e);

+      return RESPONSE_UNREACHABLE;

     }

   }
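
The rewritten getAuthorizationResponseUncached above no longer touches JDBC ResultSet objects directly; it goes through IDynamicResultSet/IDynamicResultRow and must close both each row and the result set explicitly. A minimal sketch of that iteration contract follows, assuming the ManifoldCF connector classes touched by this patch are on the classpath (package imports for them are omitted because their new location is not shown in this excerpt; the helper itself is illustrative, not part of the patch):

    import java.util.ArrayList;
    import java.util.List;

    // Illustrative helper: read one named column from every row of an uncached query.
    // Only the types and calls used here (executeUncachedQuery, getNextRow, getValue,
    // readAsString, close) appear in this patch; everything else is a sketch.
    static List<String> readColumn(JDBCConnection connection, String query,
      ArrayList params, String columnName)
      throws ManifoldCFException, ServiceInterruption
    {
      List<String> values = new ArrayList<String>();
      IDynamicResultSet resultSet = connection.executeUncachedQuery(query, params, -1);
      try
      {
        while (true)
        {
          IDynamicResultRow row = resultSet.getNextRow();
          if (row == null)
            break;
          try
          {
            Object value = row.getValue(columnName);
            if (value != null)
              values.add(JDBCConnection.readAsString(value));
          }
          finally
          {
            row.close();   // rows own resources, e.g. temp files backing BLOB/CLOB values
          }
        }
      }
      finally
      {
        resultSet.close(); // close() is mandatory; there is no "fetch everything at once" method
      }
      return values;
    }
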

 

@@ -302,36 +349,37 @@
     tabsArray.add(Messages.getString(locale, "JDBCAuthority.Queries"));

 

     out.print(

-      "<script type=\"text/javascript\">\n"

-      + "<!--\n"

-      + "function checkConfigForSave()\n"

-      + "{\n"

-      + "  if (editconnection.databasehost.value == \"\")\n"

-      + "  {\n"

-      + "    alert(\"" + Messages.getBodyJavascriptString(locale, "JDBCAuthority.PleaseFillInADatabaseServerName") + "\");\n"

-      + "    SelectTab(\"" + Messages.getBodyJavascriptString(locale, "JDBCAuthority.Server") + "\");\n"

-      + "    editconnection.databasehost.focus();\n"

-      + "    return false;\n"

-      + "  }\n"

-      + "  if (editconnection.databasename.value == \"\")\n"

-      + "  {\n"

-      + "    alert(\"" + Messages.getBodyJavascriptString(locale, "JDBCAuthority.PleaseFillInTheNameOfTheDatabase") + "\");\n"

-      + "    SelectTab(\"" + Messages.getBodyJavascriptString(locale, "JDBCAuthority.Server") + "\");\n"

-      + "    editconnection.databasename.focus();\n"

-      + "    return false;\n"

-      + "  }\n"

-      + "  if (editconnection.username.value == \"\")\n"

-      + "  {\n"

-      + "    alert(\"" + Messages.getBodyJavascriptString(locale, "JDBCAuthority.PleaseSupplyTheDatabaseUsernameForThisConnection") + "\");\n"

-      + "    SelectTab(\"" + Messages.getBodyJavascriptString(locale, "JDBCAuthority.Credentials") + "\");\n"

-      + "    editconnection.username.focus();\n"

-      + "    return false;\n"

-      + "  }\n"

-      + "  return true;\n"

-      + "}\n"

-      + "\n"

-      + "//-->\n"

-      + "</script>\n");

+"<script type=\"text/javascript\">\n"+

+"<!--\n"+

+"function checkConfigForSave()\n"+

+"{\n"+

+"  if (editconnection.databasehost.value == \"\" && editconnection.rawjdbcstring.value == \"\")\n"+

+"  {\n"+

+"    alert(\"" + Messages.getBodyJavascriptString(locale, "JDBCAuthority.PleaseFillInADatabaseServerName") + "\");\n"+

+"    SelectTab(\"" + Messages.getBodyJavascriptString(locale, "JDBCAuthority.Server") + "\");\n"+

+"    editconnection.databasehost.focus();\n"+

+"    return false;\n"+

+"  }\n"+

+"  if (editconnection.databasename.value == \"\" && editconnection.rawjdbcstring.value == \"\")\n"+

+"  {\n"+

+"    alert(\"" + Messages.getBodyJavascriptString(locale, "JDBCAuthority.PleaseFillInTheNameOfTheDatabase") + "\");\n"+

+"    SelectTab(\"" + Messages.getBodyJavascriptString(locale, "JDBCAuthority.Server") + "\");\n"+

+"    editconnection.databasename.focus();\n"+

+"    return false;\n"+

+"  }\n"+

+"  if (editconnection.username.value == \"\")\n"+

+"  {\n"+

+"    alert(\"" + Messages.getBodyJavascriptString(locale, "JDBCAuthority.PleaseSupplyTheDatabaseUsernameForThisConnection") + "\");\n"+

+"    SelectTab(\"" + Messages.getBodyJavascriptString(locale, "JDBCAuthority.Credentials") + "\");\n"+

+"    editconnection.username.focus();\n"+

+"    return false;\n"+

+"  }\n"+

+"  return true;\n"+

+"}\n"+

+"\n"+

+"//-->\n"+

+"</script>\n"

+    );

   }

 

   /**

@@ -351,73 +399,98 @@
   public void outputConfigurationBody(IThreadContext threadContext, IHTTPOutput out,

     Locale locale, ConfigParams parameters, String tabName)

     throws ManifoldCFException, IOException {

-    String lJdbcProvider = parameters.getParameter(org.apache.manifoldcf.crawler.connectors.jdbc.JDBCConstants.providerParameter);

+    String lJdbcProvider = parameters.getParameter(JDBCConstants.providerParameter);

     if (lJdbcProvider == null) {

       lJdbcProvider = "oracle:thin:@";

     }

-    String lHost = parameters.getParameter(org.apache.manifoldcf.crawler.connectors.jdbc.JDBCConstants.hostParameter);

+    String lAccessMethod = parameters.getParameter(JDBCConstants.methodParameter);

+    if (lAccessMethod == null)

+      lAccessMethod = "name";

+    String lHost = parameters.getParameter(JDBCConstants.hostParameter);

     if (lHost == null) {

       lHost = "localhost";

     }

-    String lDatabaseName = parameters.getParameter(org.apache.manifoldcf.crawler.connectors.jdbc.JDBCConstants.databaseNameParameter);

+    String lDatabaseName = parameters.getParameter(JDBCConstants.databaseNameParameter);

     if (lDatabaseName == null) {

       lDatabaseName = "database";

     }

-    String databaseUser = parameters.getParameter(org.apache.manifoldcf.crawler.connectors.jdbc.JDBCConstants.databaseUserName);

+    String rawJDBCString = parameters.getParameter(JDBCConstants.driverStringParameter);

+    if (rawJDBCString == null)

+      rawJDBCString = "";

+    String databaseUser = parameters.getParameter(JDBCConstants.databaseUserName);

     if (databaseUser == null) {

       databaseUser = "";

     }

-    String databasePassword = parameters.getObfuscatedParameter(org.apache.manifoldcf.crawler.connectors.jdbc.JDBCConstants.databasePassword);

+    String databasePassword = parameters.getObfuscatedParameter(JDBCConstants.databasePassword);

     if (databasePassword == null) {

       databasePassword = "";

+    } else {

+      databasePassword = out.mapPasswordToKey(databasePassword);

     }

-    String lIdQuery = parameters.getParameter(org.apache.manifoldcf.crawler.connectors.jdbc.JDBCConstants.databaseUserIdQuery);

+    String lIdQuery = parameters.getParameter(JDBCConstants.databaseUserIdQuery);

     if (lIdQuery == null) {

-      lIdQuery = "SELECT idfield FROM usertable WHERE login = $(USERNAME)";

+      lIdQuery = "SELECT idfield AS $(IDCOLUMN) FROM usertable WHERE login = $(USERNAME)";

     }

-    String lTokenQuery = parameters.getParameter(org.apache.manifoldcf.crawler.connectors.jdbc.JDBCConstants.databaseTokensQuery);

+    String lTokenQuery = parameters.getParameter(JDBCConstants.databaseTokensQuery);

     if (lTokenQuery == null) {

-      lTokenQuery = "SELECT groupnamefield FROM grouptable WHERE user_id = $(UID) or login = $(USERNAME)";

+      lTokenQuery = "SELECT groupnamefield AS $(TOKENCOLUMN) FROM grouptable WHERE user_id = $(UID) OR login = $(USERNAME)";

     }
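
The new defaults above encode the convention that the id query must alias its return column to $(IDCOLUMN) and the token query must alias its return column to $(TOKENCOLUMN); getAuthorizationResponseUncached looks the columns up by those names and raises the "Bad id query"/"Bad token query" errors otherwise. A hedged illustration of a custom configuration (table and column names are hypothetical; only the $(...) placeholders come from this patch):

    // Hypothetical administrator-supplied queries; only the $(USERNAME), $(UID),
    // $(IDCOLUMN), and $(TOKENCOLUMN) placeholders are defined by this patch.
    String idQuery =
      "SELECT u.id AS $(IDCOLUMN) FROM app_users u WHERE u.login = $(USERNAME)";
    String tokenQuery =
      "SELECT g.group_name AS $(TOKENCOLUMN) FROM app_groups g WHERE g.user_id = $(UID)";
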

 

     // "Database Type" tab

     if (tabName.equals(Messages.getString(locale, "JDBCAuthority.DatabaseType"))) {

       out.print(

-        "<table class=\"displaytable\">\n"

-        + "  <tr><td class=\"separator\" colspan=\"2\"><hr/></td></tr>\n"

-        + "  <tr>\n"

-        + "    <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "JDBCAuthority.DatabaseType2") + "</nobr></td><td class=\"value\">\n"

-        + "      <select multiple=\"false\" name=\"databasetype\" size=\"2\">\n"

-        + "        <option value=\"oracle:thin:@\" " + (lJdbcProvider.equals("oracle:thin:@") ? "selected=\"selected\"" : "") + ">Oracle</option>\n"

-        + "        <option value=\"postgresql:\" " + (lJdbcProvider.equals("postgresql:") ? "selected=\"selected\"" : "") + ">Postgres SQL</option>\n"

-        + "        <option value=\"jtds:sqlserver:\" " + (lJdbcProvider.equals("jtds:sqlserver:") ? "selected=\"selected\"" : "") + ">MS SQL Server (&gt; V6.5)</option>\n"

-        + "        <option value=\"jtds:sybase:\" " + (lJdbcProvider.equals("jtds:sybase:") ? "selected=\"selected\"" : "") + ">Sybase (&gt;= V10)</option>\n"

-        + "        <option value=\"mysql:\" " + (lJdbcProvider.equals("mysql:") ? "selected=\"selected\"" : "") + ">MySQL (&gt;= V5)</option>\n"

-        + "      </select>\n"

-        + "    </td>\n"

-        + "  </tr>\n"

-        + "</table>\n");

+"<table class=\"displaytable\">\n"+

+"  <tr><td class=\"separator\" colspan=\"2\"><hr/></td></tr>\n"+

+"  <tr>\n"+

+"    <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "JDBCAuthority.DatabaseType2") + "</nobr></td><td class=\"value\">\n"+

+"      <select multiple=\"false\" name=\"databasetype\" size=\"2\">\n"+

+"        <option value=\"oracle:thin:@\" " + (lJdbcProvider.equals("oracle:thin:@") ? "selected=\"selected\"" : "") + ">Oracle</option>\n"+

+"        <option value=\"postgresql:\" " + (lJdbcProvider.equals("postgresql:") ? "selected=\"selected\"" : "") + ">Postgres SQL</option>\n"+

+"        <option value=\"jtds:sqlserver:\" " + (lJdbcProvider.equals("jtds:sqlserver:") ? "selected=\"selected\"" : "") + ">MS SQL Server (&gt; V6.5)</option>\n"+

+"        <option value=\"jtds:sybase:\" " + (lJdbcProvider.equals("jtds:sybase:") ? "selected=\"selected\"" : "") + ">Sybase (&gt;= V10)</option>\n"+

+"        <option value=\"mysql:\" " + (lJdbcProvider.equals("mysql:") ? "selected=\"selected\"" : "") + ">MySQL (&gt;= V5)</option>\n"+

+"      </select>\n"+

+"    </td>\n"+

+"  </tr>\n"+

+"  <tr><td class=\"separator\" colspan=\"2\"><hr/></td></tr>\n"+

+"  <tr>\n"+

+"    <td class=\"description\"><nobr>" + Messages.getBodyString(locale,"JDBCAuthority.AccessMethod") + "</nobr></td><td class=\"value\">\n"+

+"      <select multiple=\"false\" name=\"accessmethod\" size=\"2\">\n"+

+"        <option value=\"name\" "+(lAccessMethod.equals("name")?"selected=\"selected\"":"")+">"+Messages.getBodyString(locale,"JDBCAuthority.ByName")+"</option>\n"+

+"        <option value=\"label\" "+(lAccessMethod.equals("label")?"selected=\"selected\"":"")+">"+Messages.getBodyString(locale,"JDBCAuthority.ByLabel")+"</option>\n"+

+"      </select>\n"+

+"    </td>\n"+

+"  </tr>\n"+

+"</table>\n");

     } else {

       out.print(

-        "<input type=\"hidden\" name=\"databasetype\" value=\"" + lJdbcProvider + "\"/>\n");

+"<input type=\"hidden\" name=\"databasetype\" value=\"" + lJdbcProvider + "\"/>\n"+

+"<input type=\"hidden\" name=\"accessmethod\" value=\""+lAccessMethod+"\"/>\n"

+      );

     }

 

     // "Server" tab

     if (tabName.equals(Messages.getString(locale, "JDBCAuthority.Server"))) {

       out.print(

-        "<table class=\"displaytable\">\n"

-        + "  <tr><td class=\"separator\" colspan=\"2\"><hr/></td></tr>\n"

-        + "  <tr>\n"

-        + "    <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "JDBCAuthority.DatabaseHostAndPort") + "</nobr></td><td class=\"value\"><input type=\"text\" size=\"64\" name=\"databasehost\" value=\"" + org.apache.manifoldcf.ui.util.Encoder.attributeEscape(lHost) + "\"/></td>\n"

-        + "  </tr>\n"

-        + "  <tr>\n"

-        + "    <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "JDBCAuthority.DatabaseServiceNameOrInstanceDatabase") + "</nobr></td><td class=\"value\"><input type=\"text\" size=\"32\" name=\"databasename\" value=\"" + org.apache.manifoldcf.ui.util.Encoder.attributeEscape(lDatabaseName) + "\"/></td>\n"

-        + "  </tr>\n"

-        + "</table>\n");

+"<table class=\"displaytable\">\n"+

+"  <tr><td class=\"separator\" colspan=\"2\"><hr/></td></tr>\n"+

+"  <tr>\n"+

+"    <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "JDBCAuthority.DatabaseHostAndPort") + "</nobr></td><td class=\"value\"><input type=\"text\" size=\"64\" name=\"databasehost\" value=\"" + org.apache.manifoldcf.ui.util.Encoder.attributeEscape(lHost) + "\"/></td>\n"+

+"  </tr>\n"+

+"  <tr>\n"+

+"    <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "JDBCAuthority.DatabaseServiceNameOrInstanceDatabase") + "</nobr></td><td class=\"value\"><input type=\"text\" size=\"32\" name=\"databasename\" value=\"" + org.apache.manifoldcf.ui.util.Encoder.attributeEscape(lDatabaseName) + "\"/></td>\n"+

+"  </tr>\n"+

+"  <tr>\n"+

+"    <td class=\"description\"><nobr>" + Messages.getBodyString(locale,"JDBCAuthority.RawDatabaseConnectString") + "</nobr></td><td class=\"value\"><input type=\"text\" size=\"80\" name=\"rawjdbcstring\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(rawJDBCString)+"\"/></td>\n"+

+"  </tr>\n"+

+"</table>\n"

+      );

     } else {

       out.print(

-        "<input type=\"hidden\" name=\"databasehost\" value=\"" + org.apache.manifoldcf.ui.util.Encoder.attributeEscape(lHost) + "\"/>\n"

-        + "<input type=\"hidden\" name=\"databasename\" value=\"" + org.apache.manifoldcf.ui.util.Encoder.attributeEscape(lDatabaseName) + "\"/>\n");

+"<input type=\"hidden\" name=\"databasehost\" value=\"" + org.apache.manifoldcf.ui.util.Encoder.attributeEscape(lHost) + "\"/>\n"+

+"<input type=\"hidden\" name=\"databasename\" value=\"" + org.apache.manifoldcf.ui.util.Encoder.attributeEscape(lDatabaseName) + "\"/>\n"+

+"<input type=\"hidden\" name=\"rawjdbcstring\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(rawJDBCString)+"\"/>\n"

+      );

     }

 

     // "Credentials" tab

@@ -481,37 +554,45 @@
     throws ManifoldCFException {

     String type = variableContext.getParameter("databasetype");

     if (type != null) {

-      parameters.setParameter(org.apache.manifoldcf.crawler.connectors.jdbc.JDBCConstants.providerParameter, type);

+      parameters.setParameter(JDBCConstants.providerParameter, type);

     }

 

+    String accessMethod = variableContext.getParameter("accessmethod");

+    if (accessMethod != null)

+      parameters.setParameter(JDBCConstants.methodParameter,accessMethod);

+

     String lHost = variableContext.getParameter("databasehost");

     if (lHost != null) {

-      parameters.setParameter(org.apache.manifoldcf.crawler.connectors.jdbc.JDBCConstants.hostParameter, lHost);

+      parameters.setParameter(JDBCConstants.hostParameter, lHost);

     }

 

     String lDatabaseName = variableContext.getParameter("databasename");

     if (lDatabaseName != null) {

-      parameters.setParameter(org.apache.manifoldcf.crawler.connectors.jdbc.JDBCConstants.databaseNameParameter, lDatabaseName);

+      parameters.setParameter(JDBCConstants.databaseNameParameter, lDatabaseName);

     }

 

+    String rawJDBCString = variableContext.getParameter("rawjdbcstring");

+    if (rawJDBCString != null)

+      parameters.setParameter(JDBCConstants.driverStringParameter,rawJDBCString);

+

     String lUserName = variableContext.getParameter("username");

     if (lUserName != null) {

-      parameters.setParameter(org.apache.manifoldcf.crawler.connectors.jdbc.JDBCConstants.databaseUserName, lUserName);

+      parameters.setParameter(JDBCConstants.databaseUserName, lUserName);

     }

 

     String lPassword = variableContext.getParameter("password");

     if (lPassword != null) {

-      parameters.setObfuscatedParameter(org.apache.manifoldcf.crawler.connectors.jdbc.JDBCConstants.databasePassword, lPassword);

+      parameters.setObfuscatedParameter(JDBCConstants.databasePassword, variableContext.mapKeyToPassword(lPassword));

     }

 

     String lIdQuery = variableContext.getParameter("idquery");

     if (lIdQuery != null) {

-      parameters.setParameter(org.apache.manifoldcf.crawler.connectors.jdbc.JDBCConstants.databaseUserIdQuery, lIdQuery);

+      parameters.setParameter(JDBCConstants.databaseUserIdQuery, lIdQuery);

     }

 

     String lTokenQuery = variableContext.getParameter("tokenquery");

     if (lTokenQuery != null) {

-      parameters.setParameter(org.apache.manifoldcf.crawler.connectors.jdbc.JDBCConstants.databaseTokensQuery, lTokenQuery);

+      parameters.setParameter(JDBCConstants.databaseTokensQuery, lTokenQuery);

     }

 

     return null;
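
The accessmethod value posted above ends up as the second (useName) constructor argument of JDBCConnection: "name", or no value at all, maps to true, while "label" maps to false. Based on the readColumnNames logic visible in the deleted copy of JDBCConnection.java further down in this patch, that flag decides how result-set columns are keyed; presumably the "label" option exists because some JDBC drivers only report AS aliases through getColumnLabel. A minimal sketch of the distinction, with the method name itself purely illustrative:

    // Sketch of the "by name" vs. "by label" distinction controlled by accessmethod.
    static String columnKey(java.sql.ResultSetMetaData rsmd, int oneBasedIndex, boolean useName)
      throws java.sql.SQLException
    {
      // useName == true  -> key rows by the physical column name
      // useName == false -> key rows by the SQL alias/label (SELECT ... AS alias)
      return useName ? rsmd.getColumnName(oneBasedIndex) : rsmd.getColumnLabel(oneBasedIndex);
    }
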

diff --git a/connectors/jdbc/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/jdbc/Messages.java b/connectors/jdbc/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/jdbc/Messages.java
new file mode 100644
index 0000000..ec69a51
--- /dev/null
+++ b/connectors/jdbc/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/jdbc/Messages.java
@@ -0,0 +1,141 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.authorities.authorities.jdbc;
+
+import java.util.Locale;
+import java.util.Map;
+import org.apache.manifoldcf.core.interfaces.ManifoldCFException;
+import org.apache.manifoldcf.core.interfaces.IHTTPOutput;
+
+public class Messages extends org.apache.manifoldcf.ui.i18n.Messages
+{
+  public static final String DEFAULT_BUNDLE_NAME="org.apache.manifoldcf.authorities.authorities.jdbc.common";
+  public static final String DEFAULT_PATH_NAME="org.apache.manifoldcf.authorities.authorities.jdbc";
+  
+  /** Constructor - do not instantiate
+  */
+  protected Messages()
+  {
+  }
+  
+  public static String getString(Locale locale, String messageKey)
+  {
+    return getString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getAttributeString(Locale locale, String messageKey)
+  {
+    return getAttributeString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getBodyString(Locale locale, String messageKey)
+  {
+    return getBodyString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getAttributeJavascriptString(Locale locale, String messageKey)
+  {
+    return getAttributeJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getBodyJavascriptString(Locale locale, String messageKey)
+  {
+    return getBodyJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getString(Locale locale, String messageKey, Object[] args)
+  {
+    return getString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+
+  public static String getAttributeString(Locale locale, String messageKey, Object[] args)
+  {
+    return getAttributeString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+  
+  public static String getBodyString(Locale locale, String messageKey, Object[] args)
+  {
+    return getBodyString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+
+  public static String getAttributeJavascriptString(Locale locale, String messageKey, Object[] args)
+  {
+    return getAttributeJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+
+  public static String getBodyJavascriptString(Locale locale, String messageKey, Object[] args)
+  {
+    return getBodyJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+
+  // More general methods which allow bundle names and class loaders to be specified.
+  
+  public static String getString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getString(Messages.class, bundleName, locale, messageKey, args);
+  }
+
+  public static String getAttributeString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getAttributeString(Messages.class, bundleName, locale, messageKey, args);
+  }
+
+  public static String getBodyString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getBodyString(Messages.class, bundleName, locale, messageKey, args);
+  }
+  
+  public static String getAttributeJavascriptString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getAttributeJavascriptString(Messages.class, bundleName, locale, messageKey, args);
+  }
+
+  public static String getBodyJavascriptString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getBodyJavascriptString(Messages.class, bundleName, locale, messageKey, args);
+  }
+
+  // Resource output
+  
+  public static void outputResource(IHTTPOutput output, Locale locale, String resourceKey,
+    Map<String,String> substitutionParameters, boolean mapToUpperCase)
+    throws ManifoldCFException
+  {
+    outputResource(output,Messages.class,DEFAULT_PATH_NAME,locale,resourceKey,
+      substitutionParameters,mapToUpperCase);
+  }
+  
+  public static void outputResourceWithVelocity(IHTTPOutput output, Locale locale, String resourceKey,
+    Map<String,String> substitutionParameters, boolean mapToUpperCase)
+    throws ManifoldCFException
+  {
+    outputResourceWithVelocity(output,Messages.class,DEFAULT_BUNDLE_NAME,DEFAULT_PATH_NAME,locale,resourceKey,
+      substitutionParameters,mapToUpperCase);
+  }
+
+  public static void outputResourceWithVelocity(IHTTPOutput output, Locale locale, String resourceKey,
+    Map<String,Object> contextObjects)
+    throws ManifoldCFException
+  {
+    outputResourceWithVelocity(output,Messages.class,DEFAULT_BUNDLE_NAME,DEFAULT_PATH_NAME,locale,resourceKey,
+      contextObjects);
+  }
+  
+}
+
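
The Messages class added above is a thin adapter over org.apache.manifoldcf.ui.i18n.Messages that pins the bundle and path names for this connector. The authority code elsewhere in this patch calls it roughly as in the sketch below; the message keys all appear in this diff, while the Locale value is illustrative:

    // Illustrative use of the connector-local Messages adapter (keys taken from this patch).
    Locale locale = Locale.US;
    String tabLabel  = Messages.getString(locale, "JDBCAuthority.Server");
    String bodyText  = Messages.getBodyString(locale, "JDBCAuthority.DatabaseType2");
    // JavaScript-escaped variant, for strings embedded in generated <script> blocks:
    String alertText = Messages.getBodyJavascriptString(locale,
      "JDBCAuthority.PleaseFillInADatabaseServerName");
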
diff --git a/connectors/jdbc/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/jdbc/IDynamicResultSet.java b/connectors/jdbc/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/jdbc/IDynamicResultSet.java
deleted file mode 100644
index 8884622..0000000
--- a/connectors/jdbc/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/jdbc/IDynamicResultSet.java
+++ /dev/null
@@ -1,43 +0,0 @@
-/* $Id: IDynamicResultSet.java 988245 2010-08-23 18:39:35Z kwright $ */
-
-/**
-* Licensed to the Apache Software Foundation (ASF) under one or more
-* contributor license agreements. See the NOTICE file distributed with
-* this work for additional information regarding copyright ownership.
-* The ASF licenses this file to You under the Apache License, Version 2.0
-* (the "License"); you may not use this file except in compliance with
-* the License. You may obtain a copy of the License at
-*
-* http://www.apache.org/licenses/LICENSE-2.0
-*
-* Unless required by applicable law or agreed to in writing, software
-* distributed under the License is distributed on an "AS IS" BASIS,
-* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-* See the License for the specific language governing permissions and
-* limitations under the License.
-*/
-package org.apache.manifoldcf.crawler.connectors.jdbc;
-
-import org.apache.manifoldcf.core.interfaces.*;
-import org.apache.manifoldcf.agents.interfaces.*;
-
-/** This object describes an (open) jdbc resultset.  Semantics are identical to
-* org.apache.manifoldcf.core.interfaces.IResultSet, EXCEPT that a close() method is
-* provided and must be called, and there is no method to get the entire resultset
-* at once.
-*/
-public interface IDynamicResultSet
-{
-  public static final String _rcsid = "@(#)$Id: IDynamicResultSet.java 988245 2010-08-23 18:39:35Z kwright $";
-
-  /** Get the next row from the resultset.
-  *@return the immutable row description, or null if there is no such row.
-  */
-  public IResultRow getNextRow()
-    throws ManifoldCFException, ServiceInterruption;
-
-  /** Close this resultset.
-  */
-  public void close()
-    throws ManifoldCFException, ServiceInterruption;
-}
diff --git a/connectors/jdbc/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/jdbc/JDBCConnection.java b/connectors/jdbc/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/jdbc/JDBCConnection.java
deleted file mode 100644
index ee5af81..0000000
--- a/connectors/jdbc/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/jdbc/JDBCConnection.java
+++ /dev/null
@@ -1,1355 +0,0 @@
-/* $Id: JDBCConnection.java 988245 2010-08-23 18:39:35Z kwright $ */
-
-/**
-* Licensed to the Apache Software Foundation (ASF) under one or more
-* contributor license agreements. See the NOTICE file distributed with
-* this work for additional information regarding copyright ownership.
-* The ASF licenses this file to You under the Apache License, Version 2.0
-* (the "License"); you may not use this file except in compliance with
-* the License. You may obtain a copy of the License at
-*
-* http://www.apache.org/licenses/LICENSE-2.0
-*
-* Unless required by applicable law or agreed to in writing, software
-* distributed under the License is distributed on an "AS IS" BASIS,
-* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-* See the License for the specific language governing permissions and
-* limitations under the License.
-*/
-package org.apache.manifoldcf.crawler.connectors.jdbc;
-
-import org.apache.manifoldcf.core.interfaces.*;
-import org.apache.manifoldcf.core.database.*;
-import org.apache.manifoldcf.core.jdbcpool.*;
-import org.apache.manifoldcf.agents.interfaces.*;
-
-import java.sql.*;
-import javax.naming.*;
-import javax.sql.*;
-
-import java.io.*;
-import java.util.*;
-
-/** This object describes a connection to a particular JDBC instance.
-*/
-public class JDBCConnection
-{
-  public static final String _rcsid = "@(#)$Id: JDBCConnection.java 988245 2010-08-23 18:39:35Z kwright $";
-
-  protected String jdbcProvider = null;
-  protected boolean useName;
-  protected String host = null;
-  protected String databaseName = null;
-  protected String userName = null;
-  protected String password = null;
-
-  /** Constructor.
-  */
-  public JDBCConnection(String jdbcProvider, boolean useName, String host, String databaseName, String userName, String password)
-  {
-    this.jdbcProvider = jdbcProvider;
-    this.useName = useName;
-    this.host = host;
-    this.databaseName = databaseName;
-    this.userName = userName;
-    this.password = password;
-  }
-
-  protected static IResultRow readNextResultRowViaThread(ResultSet rs, ResultSetMetaData rsmd, String[] resultCols)
-    throws ManifoldCFException, ServiceInterruption
-  {
-    NextResultRowThread t = new NextResultRowThread(rs,rsmd,resultCols);
-    try
-    {
-      t.start();
-      t.join();
-      Throwable thr = t.getException();
-      if (thr != null)
-      {
-        if (thr instanceof java.sql.SQLException)
-          throw new ManifoldCFException("Error fetching next JDBC result row: "+thr.getMessage(),thr);
-        else if (thr instanceof ManifoldCFException)
-          throw (ManifoldCFException)thr;
-        else if (thr instanceof ServiceInterruption)
-          throw (ServiceInterruption)thr;
-        else if (thr instanceof RuntimeException)
-          throw (RuntimeException)thr;
-        else
-          throw (Error)thr;
-      }
-      return t.getResponse();
-    }
-    catch (InterruptedException e)
-    {
-      t.interrupt();
-      throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
-    }
-  }
-
-  protected static class NextResultRowThread extends Thread
-  {
-    protected ResultSet rs;
-    protected ResultSetMetaData rsmd;
-    protected String[] resultCols;
-
-    protected Throwable exception = null;
-    protected IResultRow response = null;
-
-    public NextResultRowThread(ResultSet rs, ResultSetMetaData rsmd, String[] resultCols)
-    {
-      super();
-      setDaemon(true);
-      this.rs = rs;
-      this.rsmd = rsmd;
-      this.resultCols = resultCols;
-    }
-
-    public void run()
-    {
-      try
-      {
-        response = readNextResultRow(rs,rsmd,resultCols);
-      }
-      catch (Throwable e)
-      {
-        this.exception = e;
-      }
-    }
-
-    public Throwable getException()
-    {
-      return exception;
-    }
-
-    public IResultRow getResponse()
-    {
-      return response;
-    }
-  }
-
-  protected static IResultRow readNextResultRow(ResultSet rs, ResultSetMetaData rsmd, String[] resultCols)
-    throws ManifoldCFException, ServiceInterruption
-  {
-    try
-    {
-      if (rs.next())
-      {
-        return readResultRow(rs,rsmd,resultCols);
-      }
-      return null;
-    }
-    catch (java.sql.SQLException e)
-    {
-      throw new ManifoldCFException("Result set error: "+e.getMessage(),e);
-    }
-  }
-
-  protected static void closeResultset(ResultSet rs)
-    throws ManifoldCFException, ServiceInterruption
-  {
-    try
-    {
-      rs.close();
-    }
-    catch (java.sql.SQLException e)
-    {
-      throw new ManifoldCFException("Exception closing resultset: "+e.getMessage(),e);
-    }
-  }
-
-  protected static void closeStmt(Statement stmt)
-    throws ManifoldCFException, ServiceInterruption
-  {
-    try
-    {
-      stmt.close();
-    }
-    catch (java.sql.SQLException e)
-    {
-      throw new ManifoldCFException("Exception closing statement: "+e.getMessage(),e);
-    }
-  }
-
-  protected static void closePS(PreparedStatement ps)
-    throws ManifoldCFException, ServiceInterruption
-  {
-    try
-    {
-      ps.close();
-    }
-    catch (java.sql.SQLException e)
-    {
-      throw new ManifoldCFException("Exception closing statement: "+e.getMessage(),e);
-    }
-  }
-
-
-  /** Test connection.
-  */
-  public void testConnection()
-    throws ManifoldCFException, ServiceInterruption
-  {
-    TestConnectionThread t = new TestConnectionThread();
-    try
-    {
-      t.start();
-      t.join();
-      Throwable thr = t.getException();
-      if (thr != null)
-      {
-        if (thr instanceof java.sql.SQLException)
-          throw new ManifoldCFException("Error doing JDBC connection test: "+thr.getMessage(),thr);
-        else if (thr instanceof ManifoldCFException)
-          throw (ManifoldCFException)thr;
-        else if (thr instanceof ServiceInterruption)
-          throw (ServiceInterruption)thr;
-        else if (thr instanceof RuntimeException)
-          throw (RuntimeException)thr;
-        else
-          throw (Error)thr;
-      }
-    }
-    catch (InterruptedException e)
-    {
-      t.interrupt();
-      throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
-    }
-  }
-
-  protected class TestConnectionThread extends Thread
-  {
-    protected Throwable exception = null;
-
-    public TestConnectionThread()
-    {
-      super();
-      setDaemon(true);
-    }
-
-    public void run()
-    {
-      try
-      {
-        WrappedConnection tempConnection = JDBCConnectionFactory.getConnection(jdbcProvider,host,databaseName,userName,password);
-        JDBCConnectionFactory.releaseConnection(tempConnection);
-      }
-      catch (Throwable e)
-      {
-        this.exception = e;
-      }
-    }
-
-    public Throwable getException()
-    {
-      return exception;
-    }
-  }
-
-  /** Execute query.
-  */
-  public IDynamicResultSet executeUncachedQuery(String query, ArrayList params, int maxResults)
-    throws ManifoldCFException, ServiceInterruption
-  {
-    if (params == null)
-      return new JDBCResultSet(query,maxResults);
-    else
-      return new JDBCPSResultSet(query,params,maxResults);
-  }
-
-  /** Execute operation.
-  */
-  public void executeOperation(String query, ArrayList params)
-    throws ManifoldCFException, ServiceInterruption
-  {
-    ExecuteOperationThread t = new ExecuteOperationThread(query,params);
-    try
-    {
-      t.start();
-      t.join();
-      Throwable thr = t.getException();
-      if (thr != null)
-      {
-        if (thr instanceof java.sql.SQLException)
-          throw new ManifoldCFException("Exception doing connector query '"+query+"': "+thr.getMessage(),thr);
-        else if (thr instanceof ManifoldCFException)
-          throw (ManifoldCFException)thr;
-        else if (thr instanceof ServiceInterruption)
-          throw (ServiceInterruption)thr;
-        else if (thr instanceof RuntimeException)
-          throw (RuntimeException)thr;
-        else
-          throw (Error)thr;
-      }
-    }
-    catch (InterruptedException e)
-    {
-      t.interrupt();
-      throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
-    }
-  }
-
-  protected class ExecuteOperationThread extends Thread
-  {
-    protected String query;
-    protected ArrayList params;
-
-    protected Throwable exception = null;
-
-    public ExecuteOperationThread(String query, ArrayList params)
-    {
-      super();
-      setDaemon(true);
-      this.query = query;
-      this.params = params;
-    }
-
-    public void run()
-    {
-      try
-      {
-        WrappedConnection tempConnection = JDBCConnectionFactory.getConnection(jdbcProvider,host,databaseName,userName,password);
-        try
-        {
-          execute(tempConnection.getConnection(),query,params,false,0,useName);
-        }
-        finally
-        {
-          JDBCConnectionFactory.releaseConnection(tempConnection);
-        }
-      }
-      catch (Throwable e)
-      {
-        this.exception = e;
-      }
-    }
-
-    public Throwable getException()
-    {
-      return exception;
-    }
-  }
-
-  /** Run a query.  No caching is involved at all at this level.
-  * @param query String the query string
-  * @param maxResults is the maximum number of results to load: -1 if all
-  * @param params ArrayList if params !=null, use preparedStatement
-  */
-  protected static IResultSet execute(Connection connection, String query, ArrayList params, boolean bResults, int maxResults, boolean useName)
-    throws ManifoldCFException, ServiceInterruption
-  {
-
-    ResultSet rs;
-
-    try
-    {
-
-      if (params==null)
-      {
-        // lightest statement type
-        Statement stmt = connection.createStatement();
-        try
-        {
-          stmt.execute(query);
-          rs = stmt.getResultSet();
-          try
-          {
-            // Suck data from resultset
-            if (bResults)
-              return getData(rs,maxResults,useName);
-            return null;
-          }
-          finally
-          {
-            if (rs != null)
-              rs.close();
-          }
-        }
-        finally
-        {
-          stmt.close();
-        }
-      }
-      else
-      {
-        PreparedStatement ps = connection.prepareStatement(query);
-        try
-        {
-          loadPS(ps, params);
-
-          if (bResults)
-          {
-            rs = ps.executeQuery();
-            try
-            {
-              // Suck data from resultset
-              return getData(rs,maxResults,useName);
-            }
-            finally
-            {
-              if (rs != null)
-                rs.close();
-            }
-          }
-          else
-          {
-            ps.executeUpdate();
-            return null;
-          }
-
-        }
-        finally
-        {
-          ps.close();
-          cleanupParameters(params);
-        }
-      }
-
-    }
-    catch (ManifoldCFException e)
-    {
-      throw e;
-    }
-    catch (java.sql.SQLException e)
-    {
-      throw new ManifoldCFException("Exception doing connector query '"+query+"': "+e.getMessage(),e);
-    }
-  }
-
-  /** Read the current row from the resultset */
-  protected static IResultRow readResultRow(ResultSet rs, ResultSetMetaData rsmd, String[] resultCols)
-    throws ManifoldCFException, ServiceInterruption
-  {
-    try
-    {
-      Object value = null;
-      RRow m = new RRow();
-
-      // We have 'colcount' cols to look thru
-      for (int i = 0; i < resultCols.length; i++)
-      {
-        String key = resultCols[i];
-        // System.out.println("Key = "+key);
-        int colnum = findColumn(rs,key);
-        if (colnum > -1)
-        {
-          if (isBinaryData(rsmd,colnum))
-          {
-            InputStream bis = rs.getBinaryStream(colnum);
-            if (bis != null)
-              value = new TempFileInput(bis);
-          }
-          else if (isBLOB(rsmd,colnum))
-          {
-            // System.out.println("It's a blob!");
-            Blob blob = getBLOB(rs,colnum);
-            // Create a tempfileinput object!
-            // Cleanup should happen by the user of the resultset.
-            // System.out.println(" Blob length = "+Long.toString(blob.length()));
-            if (blob != null)
-              value = new TempFileInput(blob.getBinaryStream(),blob.length());
-          }
-          else if (isCLOB(rsmd,colnum))
-          {
-            Clob clob = getCLOB(rs,colnum);
-            // Note well: we have not figured out how to handle characters outside of ASCII!
-            if (clob != null)
-              value = new TempFileInput(clob.getAsciiStream(),clob.length());
-          }
-          else
-          {
-            // System.out.println("It's not a blob");
-            value = getObject(rs,rsmd,colnum);
-          }
-        }
-        if (value != null)
-          m.put(key, value);
-      }
-      return m;
-
-    }
-    catch (java.sql.SQLException e)
-    {
-      throw new ManifoldCFException("Resultset error: "+e.getMessage(),e);
-    }
-  }
-
-  protected static String[] readColumnNames(ResultSetMetaData rsmd, boolean useName)
-    throws ManifoldCFException, ServiceInterruption
-  {
-    try
-    {
-      String[] resultCols;
-      if (rsmd != null)
-      {
-        int colcount = rsmd.getColumnCount();
-        resultCols = new String[colcount];
-        for (int i = 0; i < colcount; i++)
-        {
-          String name;
-          if (useName)
-            name = rsmd.getColumnName(i+1);
-          else
-            name = rsmd.getColumnLabel(i+1);
-          resultCols[i] = name;
-        }
-      }
-      else
-        resultCols = new String[0];
-      return resultCols;
-    }
-    catch (java.sql.SQLException e)
-    {
-      throw new ManifoldCFException("Sql exception reading column names: "+e.getMessage(),e);
-    }
-  }
-
-  // Read data from a resultset
-  protected static IResultSet getData(ResultSet rs, int maxResults, boolean useName)
-    throws ManifoldCFException, ServiceInterruption
-  {
-    try
-    {
-      RSet results = new RSet();  // might be empty but not an error
-
-      if (rs != null)
-      {
-        // Optionally we're going to suck the data
-        // out of the db and return it in a
-        // readonly structure
-        ResultSetMetaData rsmd = rs.getMetaData();
-        String[] resultCols = readColumnNames(rsmd, useName);
-        if (resultCols.length == 0)
-        {
-          // This is an error situation; if a result with no columns is
-          // necessary, bResults must be false!!!
-          throw new ManifoldCFException("Empty query, no columns returned",ManifoldCFException.GENERAL_ERROR);
-        }
-
-        while (rs.next() && (maxResults == -1 || maxResults > 0))
-        {
-          IResultRow m = readResultRow(rs,rsmd,resultCols);
-          if (maxResults != -1)
-            maxResults--;
-          results.addRow(m);
-        }
-      }
-      return results;
-    }
-    catch (java.sql.SQLException e)
-    {
-      throw new ManifoldCFException("Resultset error: "+e.getMessage(),e);
-    }
-  }
-
-  // pass params to preparedStatement
-  protected static void loadPS(PreparedStatement ps, ArrayList data)
-    throws java.sql.SQLException, ManifoldCFException
-  {
-    if (data!=null)
-    {
-      for (int i = 0; i < data.size(); i++)
-      {
-        // If the input type is a string, then set it as such.
-        // Otherwise, if it's an input stream, we make a blob out of it.
-        Object x = data.get(i);
-        if (x instanceof String)
-        {
-          String value = (String)x;
-          // letting database do lame conversion!
-          ps.setString(i+1, value);
-        }
-        if (x instanceof BinaryInput)
-        {
-          BinaryInput value = (BinaryInput)x;
-          // System.out.println("Blob length on write = "+Long.toString(value.getLength()));
-          // The oracle driver does a binary conversion to base 64 when writing data
-          // into a clob column using a binary stream operator.  Since at this
-          // point there is no way to distinguish the two, and since our tests use CLOB,
-          // this code doesn't work for them.
-          // So, for now, use the ascii stream method.
-          //ps.setBinaryStream(i+1,value.getStream(),(int)value.getLength());
-          ps.setAsciiStream(i+1,value.getStream(),(int)value.getLength());
-        }
-        if (x instanceof java.util.Date)
-        {
-          ps.setDate(i+1,new java.sql.Date(((java.util.Date)x).getTime()));
-        }
-        if (x instanceof Long)
-        {
-          ps.setLong(i+1,((Long)x).longValue());
-        }
-        if (x instanceof TimeMarker)
-        {
-          ps.setTimestamp(i+1,new java.sql.Timestamp(((Long)x).longValue()));
-        }
-        if (x instanceof Double)
-        {
-          ps.setDouble(i+1,((Double)x).doubleValue());
-        }
-        if (x instanceof Integer)
-        {
-          ps.setInt(i+1,((Integer)x).intValue());
-        }
-        if (x instanceof Float)
-        {
-          ps.setFloat(i+1,((Float)x).floatValue());
-        }
-      }
-    }
-  }
-
-  /** Clean up parameters after query has been triggered.
-  */
-  protected static void cleanupParameters(ArrayList data)
-    throws ManifoldCFException
-  {
-    if (data != null)
-    {
-      for (int i = 0; i < data.size(); i++)
-      {
-        // If the input type is a string, then set it as such.
-        // Otherwise, if it's an input stream, we make a blob out of it.
-        Object x = data.get(i);
-        if (x instanceof BinaryInput)
-        {
-          BinaryInput value = (BinaryInput)x;
-          value.doneWithStream();
-        }
-      }
-    }
-  }
-
-  protected static int findColumn(ResultSet rs, String name)
-    throws ManifoldCFException, ServiceInterruption
-  {
-    try
-    {
-      return rs.findColumn(name);
-    }
-    catch (java.sql.SQLException e)
-    {
-      return -1;
-    }
-  }
-
-  protected static Blob getBLOB(ResultSet rs, int col)
-    throws ManifoldCFException, ServiceInterruption
-  {
-    try
-    {
-      return rs.getBlob(col);
-    }
-    catch (java.sql.SQLException sqle)
-    {
-      throw new ManifoldCFException("Error in getBlob("+col+"): "+sqle.getMessage(),sqle,ManifoldCFException.DATABASE_ERROR);
-    }
-  }
-
-  protected static Clob getCLOB(ResultSet rs, int col)
-    throws ManifoldCFException, ServiceInterruption
-  {
-    try
-    {
-      return rs.getClob(col);
-    }
-    catch (java.sql.SQLException sqle)
-    {
-      throw new ManifoldCFException("Error in getClob("+col+"): "+sqle.getMessage(),sqle,ManifoldCFException.DATABASE_ERROR);
-    }
-  }
-
-  protected static boolean isBLOB(ResultSetMetaData rsmd, int col)
-    throws ManifoldCFException, ServiceInterruption
-  {
-    try
-    {
-      int type = rsmd.getColumnType(col);
-      return (type == java.sql.Types.BLOB);
-    }
-    catch (java.sql.SQLException sqle)
-    {
-      throw new ManifoldCFException("Error in isBlob("+col+"): "+sqle.getMessage(),sqle,ManifoldCFException.DATABASE_ERROR);
-    }
-  }
-
-  protected static boolean isBinaryData(ResultSetMetaData rsmd, int col)
-    throws ManifoldCFException, ServiceInterruption
-  {
-    try
-    {
-      int type = rsmd.getColumnType(col);
-      return (type == java.sql.Types.VARBINARY ||
-        type == java.sql.Types.BINARY || type == java.sql.Types.LONGVARBINARY);
-    }
-    catch (java.sql.SQLException sqle)
-    {
-      throw new ManifoldCFException("Error in isBinaryData("+col+"): "+sqle.getMessage(),sqle,ManifoldCFException.DATABASE_ERROR);
-    }
-  }
-
-  protected static boolean isCLOB(ResultSetMetaData rsmd, int col)
-    throws ManifoldCFException, ServiceInterruption
-  {
-    try
-    {
-      int type = rsmd.getColumnType(col);
-      return (type == java.sql.Types.CLOB || type == java.sql.Types.LONGVARCHAR);
-    }
-    catch (java.sql.SQLException sqle)
-    {
-      throw new ManifoldCFException("Error in isClob("+col+"): "+sqle.getMessage(),sqle,ManifoldCFException.DATABASE_ERROR);
-    }
-  }
-
-  protected static Object getObject(ResultSet rs, ResultSetMetaData rsmd, int col)
-    throws ManifoldCFException, ServiceInterruption
-  {
-    Object result = null;
-
-    try
-    {
-      Timestamp timestamp;
-      java.sql.Date date;
-      Clob clob;
-      String resultString;
-
-      switch (rsmd.getColumnType(col))
-      {
-      case java.sql.Types.CHAR :
-        if ((resultString = rs.getString(col)) != null)
-        {
-          if (rsmd.getColumnDisplaySize(col) < resultString.length())
-          {
-            result = resultString.substring(0,rsmd.getColumnDisplaySize(col));
-          }
-          else
-            result = resultString;
-        }
-        break;
-      case java.sql.Types.CLOB :
-        if ((clob = rs.getClob(col)) != null)
-        {
-          result = clob.getSubString(1, (int) clob.length());
-        }
-        break;
-
-      case java.sql.Types.BIGINT :
-        long l = rs.getLong(col);
-        if (!rs.wasNull())
-          result = new Long(l);
-        break;
-
-      case java.sql.Types.INTEGER :
-        int i = rs.getInt(col);
-        if (!rs.wasNull())
-          result = new Integer(i);
-        break;
-
-      case java.sql.Types.REAL :
-      case java.sql.Types.FLOAT :
-        float f = rs.getFloat(col);
-        if (!rs.wasNull())
-          result = new Float(f);
-        break;
-
-      case java.sql.Types.DOUBLE :
-        double d = rs.getDouble(col);
-        if (!rs.wasNull())
-          result = new Double(d);
-        break;
-
-      case java.sql.Types.DATE :
-        if ((date = rs.getDate(col)) != null)
-        {
-          result = new java.util.Date(date.getTime());
-        }
-        break;
-
-      case java.sql.Types.TIMESTAMP :
-        if ((timestamp = rs.getTimestamp(col)) != null)
-        {
-          result = new TimeMarker(timestamp.getTime());
-        }
-        break;
-
-      case java.sql.Types.BLOB:
-      case java.sql.Types.VARBINARY:
-      case java.sql.Types.BINARY:
-      case java.sql.Types.LONGVARBINARY:
-        throw new ManifoldCFException("Binary type is not a string, column = " + col,ManifoldCFException.GENERAL_ERROR);
-        //break
-
-      default :
-        result = rs.getString(col);
-        break;
-      }
-      if (rs.wasNull())
-      {
-        result = null;
-      }
-    }
-    catch (java.sql.SQLException e)
-    {
-      throw new ManifoldCFException("Exception in getString(): "+e.getMessage(),e,ManifoldCFException.DATABASE_ERROR);
-    }
-    return result;
-  }
-
-  protected class JDBCResultSet implements IDynamicResultSet
-  {
-    protected WrappedConnection connection;
-    protected Statement stmt;
-    protected ResultSet rs;
-    protected ResultSetMetaData rsmd;
-    protected String[] resultCols;
-    protected int maxResults;
-
-    /** Constructor */
-    public JDBCResultSet(String query, int maxResults)
-      throws ManifoldCFException, ServiceInterruption
-    {
-      this.maxResults = maxResults;
-      StatementQueryThread t = new StatementQueryThread(query);
-      try
-      {
-        t.start();
-        t.join();
-        Throwable thr = t.getException();
-        if (thr != null)
-        {
-          if (thr instanceof java.sql.SQLException)
-            throw new ManifoldCFException("Exception doing connector query '"+query+"': "+thr.getMessage(),thr);
-          else if (thr instanceof ManifoldCFException)
-            throw (ManifoldCFException)thr;
-          else if (thr instanceof ServiceInterruption)
-            throw (ServiceInterruption)thr;
-          else if (thr instanceof RuntimeException)
-            throw (RuntimeException)thr;
-          else
-            throw (Error)thr;
-        }
-        connection = t.getConnection();
-        stmt = t.getStatement();
-        rs = t.getResultSet();
-        rsmd = t.getResultSetMetaData();
-        resultCols = t.getColumnNames();
-      }
-      catch (InterruptedException e)
-      {
-        t.interrupt();
-        throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
-      }
-    }
-
-    /** Get the next row from the resultset.
-    *@return the immutable row description, or null if there is no such row.
-    */
-    public IResultRow getNextRow()
-      throws ManifoldCFException, ServiceInterruption
-    {
-      if (maxResults == -1 || maxResults > 0)
-      {
-        IResultRow row = readNextResultRowViaThread(rs,rsmd,resultCols);
-        if (row != null && maxResults != -1)
-          maxResults--;
-        return row;
-      }
-      return null;
-    }
-
-    /** Close this resultset.
-    */
-    public void close()
-      throws ManifoldCFException, ServiceInterruption
-    {
-      ManifoldCFException rval = null;
-      Error error = null;
-      RuntimeException rtException = null;
-      if (rs != null)
-      {
-        try
-        {
-          closeResultset(rs);
-        }
-        catch (ManifoldCFException e)
-        {
-          if (rval == null || e.getErrorCode() == ManifoldCFException.INTERRUPTED)
-            rval = e;
-        }
-        catch (Error e)
-        {
-          error = e;
-        }
-        catch (RuntimeException e)
-        {
-          rtException = e;
-        }
-        finally
-        {
-          rs = null;
-        }
-      }
-      if (stmt != null)
-      {
-        try
-        {
-          closeStmt(stmt);
-        }
-        catch (ManifoldCFException e)
-        {
-          if (rval == null || e.getErrorCode() == ManifoldCFException.INTERRUPTED)
-            rval = e;
-        }
-        catch (Error e)
-        {
-          error = e;
-        }
-        catch (RuntimeException e)
-        {
-          rtException = e;
-        }
-        finally
-        {
-          stmt = null;
-        }
-      }
-      if (connection != null)
-      {
-        try
-        {
-          JDBCConnectionFactory.releaseConnection(connection);
-        }
-        catch (Error e)
-        {
-          error = e;
-        }
-        catch (RuntimeException e)
-        {
-          rtException = e;
-        }
-        finally
-        {
-          connection = null;
-        }
-      }
-      if (error != null)
-        throw error;
-      if (rtException != null)
-        throw rtException;
-      if (rval != null)
-        throw rval;
-    }
-
-  }
-
-  protected class StatementQueryThread extends Thread
-  {
-    protected String query;
-
-    protected Throwable exception = null;
-    protected WrappedConnection connection = null;
-    protected Statement stmt = null;
-    protected ResultSet rs = null;
-    protected ResultSetMetaData rsmd = null;
-    protected String[] resultCols = null;
-
-    public StatementQueryThread(String query)
-    {
-      super();
-      setDaemon(true);
-      this.query = query;
-    }
-
-    public void run()
-    {
-      try
-      {
-        connection = JDBCConnectionFactory.getConnection(jdbcProvider,host,databaseName,userName,password);
-        // lightest statement type
-        stmt = connection.getConnection().createStatement();
-        stmt.execute(query);
-        rs = stmt.getResultSet();
-        rsmd = rs.getMetaData();
-        resultCols = readColumnNames(rsmd,useName);
-      }
-      catch (Throwable e)
-      {
-        this.exception = e;
-        if (rs != null)
-        {
-          try
-          {
-            closeResultset(rs);
-          }
-          catch (ManifoldCFException e2)
-          {
-            if (e2.getErrorCode() == ManifoldCFException.INTERRUPTED)
-              this.exception = e2;
-            // Ignore
-          }
-          catch (Throwable e2)
-          {
-            // We already have an exception to report.
-            // Eat any other exceptions from closing
-          }
-          finally
-          {
-            rs = null;
-          }
-        }
-        if (stmt != null)
-        {
-          try
-          {
-            closeStmt(stmt);
-          }
-          catch (ManifoldCFException e2)
-          {
-            if (e2.getErrorCode() == ManifoldCFException.INTERRUPTED)
-              this.exception = e2;
-            // Ignore
-          }
-          catch (Throwable e2)
-          {
-            // We already have an exception to report.
-            // Eat any other exceptions from closing statements
-          }
-          finally
-          {
-            stmt = null;
-          }
-        }
-        if (connection != null)
-        {
-          JDBCConnectionFactory.releaseConnection(connection);
-          connection = null;
-        }
-      }
-    }
-
-    public Throwable getException()
-    {
-      return exception;
-    }
-
-    public WrappedConnection getConnection()
-    {
-      return connection;
-    }
-
-    public Statement getStatement()
-    {
-      return stmt;
-    }
-
-    public ResultSet getResultSet()
-    {
-      return rs;
-    }
-
-    public ResultSetMetaData getResultSetMetaData()
-    {
-      return rsmd;
-    }
-
-    public String[] getColumnNames()
-    {
-      return resultCols;
-    }
-  }
-
-  protected class JDBCPSResultSet implements IDynamicResultSet
-  {
-    protected WrappedConnection connection;
-    protected PreparedStatement ps;
-    protected ResultSet rs;
-    protected ResultSetMetaData rsmd;
-    protected String[] resultCols;
-    protected int maxResults;
-    protected ArrayList params;
-
-    /** Constructor */
-    public JDBCPSResultSet(String query, ArrayList params, int maxResults)
-      throws ManifoldCFException, ServiceInterruption
-    {
-      this.maxResults = maxResults;
-      this.params = params;
-      PreparedStatementQueryThread t = new PreparedStatementQueryThread(query,params);
-      try
-      {
-        t.start();
-        t.join();
-        Throwable thr = t.getException();
-        if (thr != null)
-        {
-          // Cleanup of parameters happens even if exception doing query
-          cleanupParameters(params);
-          if (thr instanceof java.sql.SQLException)
-            throw new ManifoldCFException("Exception doing connector query '"+query+"': "+thr.getMessage(),thr);
-          else if (thr instanceof ManifoldCFException)
-            throw (ManifoldCFException)thr;
-          else if (thr instanceof ServiceInterruption)
-            throw (ServiceInterruption)thr;
-          else if (thr instanceof RuntimeException)
-            throw (RuntimeException)thr;
-          else
-            throw (Error)thr;
-        }
-        connection = t.getConnection();
-        ps = t.getPreparedStatement();
-        rs = t.getResultSet();
-        rsmd = t.getResultSetMetaData();
-        resultCols = t.getColumnNames();
-      }
-      catch (InterruptedException e)
-      {
-        cleanupParameters(params);
-        t.interrupt();
-        throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
-      }
-    }
-
-    /** Get the next row from the resultset.
-    *@return the immutable row description, or null if there is no such row.
-    */
-    public IResultRow getNextRow()
-      throws ManifoldCFException, ServiceInterruption
-    {
-      if (maxResults == -1 || maxResults > 0)
-      {
-        IResultRow row = readNextResultRowViaThread(rs,rsmd,resultCols);
-        if (row != null && maxResults != -1)
-          maxResults--;
-        return row;
-      }
-      return null;
-    }
-
-    /** Close this resultset.
-    */
-    public void close()
-      throws ManifoldCFException, ServiceInterruption
-    {
-      ManifoldCFException rval = null;
-      Error error = null;
-      RuntimeException rtException = null;
-      if (rs != null)
-      {
-        try
-        {
-          closeResultset(rs);
-        }
-        catch (ServiceInterruption e)
-        {
-        }
-        catch (ManifoldCFException e)
-        {
-          if (rval == null || e.getErrorCode() == ManifoldCFException.INTERRUPTED)
-            rval = e;
-        }
-        catch (Error e)
-        {
-          error = e;
-        }
-        catch (RuntimeException e)
-        {
-          rtException = e;
-        }
-        finally
-        {
-          rs = null;
-        }
-      }
-      if (ps != null)
-      {
-        try
-        {
-          closePS(ps);
-        }
-        catch (ServiceInterruption e)
-        {
-        }
-        catch (ManifoldCFException e)
-        {
-          if (rval == null || e.getErrorCode() == ManifoldCFException.INTERRUPTED)
-            rval = e;
-        }
-        catch (Error e)
-        {
-          error = e;
-        }
-        catch (RuntimeException e)
-        {
-          rtException = e;
-        }
-        finally
-        {
-          ps = null;
-        }
-      }
-      if (connection != null)
-      {
-        try
-        {
-          JDBCConnectionFactory.releaseConnection(connection);
-        }
-        catch (Error e)
-        {
-          error = e;
-        }
-        catch (RuntimeException e)
-        {
-          rtException = e;
-        }
-        finally
-        {
-          connection = null;
-        }
-      }
-      if (params != null)
-      {
-        try
-        {
-          cleanupParameters(params);
-        }
-        catch (ManifoldCFException e)
-        {
-          if (rval == null || e.getErrorCode() == ManifoldCFException.INTERRUPTED)
-            rval = e;
-        }
-        catch (Error e)
-        {
-          error = e;
-        }
-        catch (RuntimeException e)
-        {
-          rtException = e;
-        }
-        finally
-        {
-          params = null;
-        }
-      }
-      if (error != null)
-        throw error;
-      if (rtException != null)
-        throw rtException;
-      if (rval != null)
-        throw rval;
-
-    }
-
-  }
-
-  protected class PreparedStatementQueryThread extends Thread
-  {
-    protected ArrayList params;
-    protected String query;
-
-    protected WrappedConnection connection = null;
-    protected Throwable exception = null;
-    protected PreparedStatement ps = null;
-    protected ResultSet rs = null;
-    protected ResultSetMetaData rsmd = null;
-    protected String[] resultCols = null;
-
-    public PreparedStatementQueryThread(String query, ArrayList params)
-    {
-      super();
-      setDaemon(true);
-      this.query = query;
-      this.params = params;
-    }
-
-    public void run()
-    {
-      try
-      {
-        connection = JDBCConnectionFactory.getConnection(jdbcProvider,host,databaseName,userName,password);
-        ps = connection.getConnection().prepareStatement(query);
-        loadPS(ps, params);
-        rs = ps.executeQuery();
-        rsmd = rs.getMetaData();
-        resultCols = readColumnNames(rsmd,useName);
-      }
-      catch (Throwable e)
-      {
-        this.exception = e;
-        if (rs != null)
-        {
-          try
-          {
-            closeResultset(rs);
-          }
-          catch (ManifoldCFException e2)
-          {
-            if (e2.getErrorCode() == ManifoldCFException.INTERRUPTED)
-              this.exception = e2;
-          }
-          catch (Throwable e2)
-          {
-          }
-          finally
-          {
-            rs = null;
-          }
-        }
-        if (ps != null)
-        {
-          try
-          {
-            closePS(ps);
-          }
-          catch (ManifoldCFException e2)
-          {
-            if (e2.getErrorCode() == ManifoldCFException.INTERRUPTED)
-              this.exception = e2;
-          }
-          catch (Throwable e2)
-          {
-          }
-          finally
-          {
-            ps = null;
-          }
-        }
-        if (connection != null)
-        {
-          JDBCConnectionFactory.releaseConnection(connection);
-          connection = null;
-        }
-      }
-    }
-
-    public Throwable getException()
-    {
-      return exception;
-    }
-
-    public WrappedConnection getConnection()
-    {
-      return connection;
-    }
-
-    public PreparedStatement getPreparedStatement()
-    {
-      return ps;
-    }
-
-    public ResultSet getResultSet()
-    {
-      return rs;
-    }
-
-    public ResultSetMetaData getResultSetMetaData()
-    {
-      return rsmd;
-    }
-
-    public String[] getColumnNames()
-    {
-      return resultCols;
-    }
-  }
-
-}
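The inner classes removed above all follow one pattern: each JDBC call runs on a daemon worker thread which the caller join()s, so an interrupt of the agent thread can abandon a driver call that never returns. A minimal, self-contained sketch of that threading idiom follows; DaemonQueryExample, QueryThread, and runQuery are invented names, not ManifoldCF classes, and statement/resultset lifetime management is deliberately omitted.

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class DaemonQueryExample
{
  static class QueryThread extends Thread
  {
    private final Connection connection;
    private final String query;
    volatile ResultSet rs;
    volatile Throwable exception;

    QueryThread(Connection connection, String query)
    {
      setDaemon(true);          // the JVM may exit even if the driver call never comes back
      this.connection = connection;
      this.query = query;
    }

    @Override
    public void run()
    {
      try
      {
        Statement stmt = connection.createStatement();
        rs = stmt.executeQuery(query);
      }
      catch (Throwable e)
      {
        exception = e;          // handed back to whoever called join()
      }
    }
  }

  /** Run a query on a daemon thread so the caller stays interruptible while the driver blocks. */
  public static ResultSet runQuery(Connection c, String q) throws Exception
  {
    QueryThread t = new QueryThread(c, q);
    t.start();
    try
    {
      t.join();                 // interruptible; a stuck daemon thread is simply abandoned
    }
    catch (InterruptedException e)
    {
      t.interrupt();
      throw e;
    }
    if (t.exception != null)
      throw new Exception("Query failed: " + t.exception.getMessage(), t.exception);
    return t.rs;
  }
}

The daemon flag is what makes abandonment safe: a thread stuck inside the driver no longer blocks JVM shutdown, and the interrupted caller simply stops waiting for it.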
diff --git a/connectors/jdbc/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/jdbc/JDBCConnectionFactory.java b/connectors/jdbc/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/jdbc/JDBCConnectionFactory.java
deleted file mode 100644
index c8154a6..0000000
--- a/connectors/jdbc/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/jdbc/JDBCConnectionFactory.java
+++ /dev/null
@@ -1,181 +0,0 @@
-/* $Id: JDBCConnectionFactory.java 988245 2010-08-23 18:39:35Z kwright $ */
-
-/**
-* Licensed to the Apache Software Foundation (ASF) under one or more
-* contributor license agreements. See the NOTICE file distributed with
-* this work for additional information regarding copyright ownership.
-* The ASF licenses this file to You under the Apache License, Version 2.0
-* (the "License"); you may not use this file except in compliance with
-* the License. You may obtain a copy of the License at
-*
-* http://www.apache.org/licenses/LICENSE-2.0
-*
-* Unless required by applicable law or agreed to in writing, software
-* distributed under the License is distributed on an "AS IS" BASIS,
-* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-* See the License for the specific language governing permissions and
-* limitations under the License.
-*/
-package org.apache.manifoldcf.crawler.connectors.jdbc;
-
-import org.apache.manifoldcf.core.interfaces.*;
-import org.apache.manifoldcf.core.jdbcpool.*;
-import org.apache.manifoldcf.agents.interfaces.*;
-import org.apache.manifoldcf.crawler.system.Logging;
-import org.apache.manifoldcf.crawler.system.ManifoldCF;
-
-import java.util.*;
-import java.sql.*;
-import javax.naming.*;
-import javax.sql.*;
-import java.util.*;
-
-/** This class creates a connection
-*/
-public class JDBCConnectionFactory
-{
-  public static final String _rcsid = "@(#)$Id: JDBCConnectionFactory.java 988245 2010-08-23 18:39:35Z kwright $";
-
-  private static Map driverMap;
-
-  private static ConnectionPoolManager _pool = null;
-
-  static
-  {
-    driverMap = new HashMap();
-    driverMap.put("oracle:thin:@","oracle.jdbc.OracleDriver");
-    driverMap.put("postgresql:","org.postgresql.Driver");
-    driverMap.put("jtds:sqlserver:","net.sourceforge.jtds.jdbc.Driver");
-    driverMap.put("jtds:sybase:","net.sourceforge.jtds.jdbc.Driver");
-    driverMap.put("mysql:","com.mysql.jdbc.Driver");
-    try
-    {
-      _pool = new ConnectionPoolManager(120);
-    }
-    catch (Exception e)
-    {
-      System.err.println("Can't set up pool");
-      e.printStackTrace(System.err);
-    }
-  }
-
-  private JDBCConnectionFactory()
-  {
-  }
-
-
-  public static WrappedConnection getConnection(String providerName, String host, String database, String userName, String password)
-    throws ManifoldCFException, ServiceInterruption
-  {
-    if (database.length() == 0)
-      database = "_root_";
-
-    String driverClassName = (String)driverMap.get(providerName);
-    if (driverClassName == null)
-      throw new ManifoldCFException("Unrecognized jdbc provider: '"+providerName+"'");
-
-    String instanceName = null;
-    // Special for MSSQL: Allow database spec to contain an instance name too, in form:
-    // <instance>/<database>
-    if (providerName.startsWith("jtds:"))
-    {
-      int slashIndex = database.indexOf("/");
-      if (slashIndex != -1)
-      {
-        instanceName = database.substring(0,slashIndex);
-        database = database.substring(slashIndex+1);
-      }
-    }
-
-    String dburl = "jdbc:" + providerName + "//" + host + "/" + database + ((instanceName==null)?"":";instance="+instanceName);
-    if (Logging.connectors != null && Logging.connectors.isDebugEnabled())
-      Logging.connectors.debug("JDBC: The connect string is '"+dburl+"'");
-    try
-    {
-      // Hope for a connection now
-      if (_pool != null)
-      {
-        // Build a unique string to identify the pool.  This has to include
-        // the database and host at a minimum.
-
-        // Provider is part of the pool key, so that the pools can distinguish between different databases
-        String poolKey = providerName + "/";
-
-        // Distinguish between instance names and databases too
-        if (instanceName == null)
-          poolKey += host + "/" + database;
-        else
-          poolKey += host + "/" + instanceName + "/" + database;
-
-        // Better include the credentials on the pool key, or we won't be able to change those and have it build new connections
-        // The password value is SHA-1 hashed, because the pool driver reports the password in many exceptions and we don't want it
-        // to be displayed.
-        poolKey += "/" + userName + "/" + ManifoldCF.hash(password);
-
-        ConnectionPool cp;
-        synchronized (_pool)
-        {
-          cp = _pool.getPool(poolKey);
-          if (cp == null)
-          {
-            // Register the driver here
-            Class.forName(driverClassName);
-            //System.out.println("Class name '"+driverClassName+"'; URL = '"+dburl+"'");
-            cp =_pool.addAlias(poolKey, driverClassName, dburl,
-              userName, password, 30, 300000L);
-          }
-        }
-        return cp.getConnection();
-      }
-      else
-        throw new ManifoldCFException("Can't get connection since pool driver did not initialize properly");
-    }
-    catch (InterruptedException e)
-    {
-      throw new ManifoldCFException(e.getMessage(),ManifoldCFException.INTERRUPTED);
-    }
-    catch (java.sql.SQLException e)
-    {
-      e.printStackTrace();
-      // Unfortunately, the connection pool manager manages to eat all actual connection setup errors.  This makes it very hard to figure anything out
-      // when something goes wrong.  So, we try again, going directly this time as a means of getting decent error feedback.
-      try
-      {
-        if (userName != null && userName.length() > 0)
-        {
-          DriverManager.getConnection(dburl, userName, password).close();
-        }
-        else
-        {
-          DriverManager.getConnection(dburl).close();
-        }
-      }
-      catch (java.sql.SQLException e2)
-      {
-        throw new ManifoldCFException("Error getting connection: "+e2.getMessage(),e2,ManifoldCFException.SETUP_ERROR);
-      }
-      // By definition, this must be a service interruption, because the direct route in setting up the connection succeeded.
-      long currentTime = System.currentTimeMillis();
-      throw new ServiceInterruption("Error getting connection: "+e.getMessage(),e,currentTime + 300000L,currentTime + 6 * 60 * 60000L,-1,true);
-    }
-    catch (java.lang.ClassNotFoundException e)
-    {
-      throw new ManifoldCFException("Driver class not found: "+e.getMessage(),e,ManifoldCFException.SETUP_ERROR);
-    }
-    catch (java.lang.InstantiationException e)
-    {
-      throw new ManifoldCFException("Driver class not instantiable: "+e.getMessage(),e,ManifoldCFException.SETUP_ERROR);
-    }
-    catch (java.lang.IllegalAccessException e)
-    {
-      throw new ManifoldCFException("Driver class not accessible: "+e.getMessage(),e,ManifoldCFException.SETUP_ERROR);
-    }
-  }
-
-  public static void releaseConnection(WrappedConnection c)
-  {
-    c.release();
-  }
-
-}
-
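The factory deleted above keyed each connection pool on provider, host, optional instance, database, user name, and a hash of the password, so that changing credentials builds a new pool while pool-driver exceptions never echo the real password. A sketch of that keying scheme, using java.security.MessageDigest in place of ManifoldCF.hash() purely for illustration:

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class PoolKeyExample
{
  public static String buildPoolKey(String provider, String host, String instance,
    String database, String userName, String password) throws Exception
  {
    StringBuilder key = new StringBuilder(provider).append("/");
    if (instance == null)
      key.append(host).append("/").append(database);
    else
      key.append(host).append("/").append(instance).append("/").append(database);
    // Credentials are part of the key so changing them forces new connections,
    // but the password is hashed so it never appears in pool-driver exceptions.
    key.append("/").append(userName).append("/").append(sha1(password));
    return key.toString();
  }

  private static String sha1(String value) throws Exception
  {
    MessageDigest md = MessageDigest.getInstance("SHA-1");
    byte[] digest = md.digest(value.getBytes(StandardCharsets.UTF_8));
    StringBuilder sb = new StringBuilder();
    for (byte b : digest)
      sb.append(String.format("%02x", b));
    return sb.toString();
  }
}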
diff --git a/connectors/jdbc/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/jdbc/JDBCConnector.java b/connectors/jdbc/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/jdbc/JDBCConnector.java
index 40edce5..33290ed 100644
--- a/connectors/jdbc/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/jdbc/JDBCConnector.java
+++ b/connectors/jdbc/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/jdbc/JDBCConnector.java
@@ -23,6 +23,10 @@
 import org.apache.manifoldcf.crawler.interfaces.*;
 import org.apache.manifoldcf.crawler.system.Logging;
 import org.apache.manifoldcf.core.database.*;
+import org.apache.manifoldcf.jdbc.JDBCConnection;
+import org.apache.manifoldcf.jdbc.JDBCConstants;
+import org.apache.manifoldcf.jdbc.IDynamicResultSet;
+import org.apache.manifoldcf.jdbc.IDynamicResultRow;
 
 import java.sql.*;
 import javax.naming.*;
@@ -80,6 +84,7 @@
   protected String accessMethod = null;
   protected String host = null;
   protected String databaseName = null;
+  protected String rawDriverString = null;
   protected String userName = null;
   protected String password = null;
 
@@ -97,10 +102,10 @@
     {
       if (jdbcProvider == null || jdbcProvider.length() == 0)
         throw new ManifoldCFException("Missing parameter '"+JDBCConstants.providerParameter+"'");
-      if (host == null || host.length() == 0)
-        throw new ManifoldCFException("Missing parameter '"+JDBCConstants.hostParameter+"'");
+      if ((host == null || host.length() == 0) && (rawDriverString == null || rawDriverString.length() == 0))
+        throw new ManifoldCFException("Missing parameter '"+JDBCConstants.hostParameter+"' or '"+JDBCConstants.driverStringParameter+"'");
 
-      connection = new JDBCConnection(jdbcProvider,(accessMethod==null || accessMethod.equals("name")),host,databaseName,userName,password);
+      connection = new JDBCConnection(jdbcProvider,(accessMethod==null || accessMethod.equals("name")),host,databaseName,rawDriverString,userName,password);
     }
   }
 
@@ -135,6 +140,7 @@
     accessMethod = configParams.getParameter(JDBCConstants.methodParameter);
     host = configParams.getParameter(JDBCConstants.hostParameter);
     databaseName = configParams.getParameter(JDBCConstants.databaseNameParameter);
+    rawDriverString = configParams.getParameter(JDBCConstants.driverStringParameter);
     userName= configParams.getParameter(JDBCConstants.databaseUserName);
     password = configParams.getObfuscatedParameter(JDBCConstants.databasePassword);
   }
@@ -169,7 +175,9 @@
     connection = null;
     host = null;
     jdbcProvider = null;
+    accessMethod = null;
     databaseName = null;
+    rawDriverString = null;
     userName = null;
     password = null;
 
@@ -188,7 +196,7 @@
   @Override
   public String[] getBinNames(String documentIdentifier)
   {
-    return new String[]{host};
+    return new String[]{(rawDriverString==null||rawDriverString.length()==0)?host:rawDriverString};
   }
 
   /** Queue "seed" documents.  Seed documents are the starting places for crawling activity.  Documents
@@ -270,14 +278,21 @@
 
       while (true)
       {
-        IResultRow row = idSet.getNextRow();
+        IDynamicResultRow row = idSet.getNextRow();
         if (row == null)
           break;
-        Object o = row.getValue(JDBCConstants.idReturnColumnName);
-        if (o == null)
-          throw new ManifoldCFException("Bad seed query; doesn't return $(IDCOLUMN) column.  Try using quotes around $(IDCOLUMN) variable, e.g. \"$(IDCOLUMN)\".");
-        String idValue = o.toString();
-        activities.addSeedDocument(idValue);
+        try
+        {
+          Object o = row.getValue(JDBCConstants.idReturnColumnName);
+          if (o == null)
+            throw new ManifoldCFException("Bad seed query; doesn't return $(IDCOLUMN) column.  Try using quotes around $(IDCOLUMN) variable, e.g. \"$(IDCOLUMN)\".");
+          String idValue = JDBCConnection.readAsString(o);
+          activities.addSeedDocument(idValue);
+        }
+        finally
+        {
+          row.close();
+        }
       }
     }
     finally
@@ -380,36 +395,43 @@
       // Now, go through resultset
       while (true)
       {
-        IResultRow row = result.getNextRow();
+        IDynamicResultRow row = result.getNextRow();
         if (row == null)
           break;
-        Object o = row.getValue(JDBCConstants.idReturnColumnName);
-        if (o == null)
-          throw new ManifoldCFException("Bad version query; doesn't return $(IDCOLUMN) column.  Try using quotes around $(IDCOLUMN) variable, e.g. \"$(IDCOLUMN)\".");
-        String idValue = o.toString();
-        o = row.getValue(JDBCConstants.versionReturnColumnName);
-        String versionValue;
-        // Null version is OK; make it a ""
-        if (o == null)
-          versionValue = "";
-        else
+        try
         {
-          // A real version string!  Any acls must be added to the front, if they are present...
-          sb = new StringBuilder();
-          packList(sb,acls,'+');
-          if (acls.length > 0)
-          {
-            sb.append('+');
-            pack(sb,defaultAuthorityDenyToken,'+');
-          }
+          Object o = row.getValue(JDBCConstants.idReturnColumnName);
+          if (o == null)
+            throw new ManifoldCFException("Bad version query; doesn't return $(IDCOLUMN) column.  Try using quotes around $(IDCOLUMN) variable, e.g. \"$(IDCOLUMN)\".");
+          String idValue = JDBCConnection.readAsString(o);
+          o = row.getValue(JDBCConstants.versionReturnColumnName);
+          String versionValue;
+          // Null version is OK; make it a ""
+          if (o == null)
+            versionValue = "";
           else
-            sb.append('-');
+          {
+            // A real version string!  Any acls must be added to the front, if they are present...
+            sb = new StringBuilder();
+            packList(sb,acls,'+');
+            if (acls.length > 0)
+            {
+              sb.append('+');
+              pack(sb,defaultAuthorityDenyToken,'+');
+            }
+            else
+              sb.append('-');
 
-          sb.append(o.toString()).append("=").append(ts.dataQuery);
-          versionValue = sb.toString();
+            sb.append(JDBCConnection.readAsString(o)).append("=").append(ts.dataQuery);
+            versionValue = sb.toString();
+          }
+          // Versions that are "", when processed, will have their acls fetched at that time...
+          versionsReturned[((Integer)map.get(idValue)).intValue()] = versionValue;
         }
-        // Versions that are "", when processed, will have their acls fetched at that time...
-        versionsReturned[((Integer)map.get(idValue)).intValue()] = versionValue;
+        finally
+        {
+          row.close();
+        }
       }
     }
     finally
@@ -496,146 +518,196 @@
 
       while (true)
       {
-        IResultRow row = result.getNextRow();
+        IDynamicResultRow row = result.getNextRow();
         if (row == null)
           break;
-        Object o = row.getValue(JDBCConstants.idReturnColumnName);
-        if (o == null)
-          throw new ManifoldCFException("Bad document query; doesn't return $(IDCOLUMN) column.  Try using quotes around $(IDCOLUMN) variable, e.g. \"$(IDCOLUMN)\".");
-        String id = readAsString(o);
-        String version = (String)map.get(id);
-        if (version != null)
+        try
         {
-          // This document was marked as "not scan only", so we expect to find it.
-          if (Logging.connectors.isDebugEnabled())
-            Logging.connectors.debug("JDBC: Document data result found for '"+id+"'");
-          o = row.getValue(JDBCConstants.urlReturnColumnName);
-          if (o != null)
+          Object o = row.getValue(JDBCConstants.idReturnColumnName);
+          if (o == null)
+            throw new ManifoldCFException("Bad document query; doesn't return $(IDCOLUMN) column.  Try using quotes around $(IDCOLUMN) variable, e.g. \"$(IDCOLUMN)\".");
+          String id = JDBCConnection.readAsString(o);
+          String version = (String)map.get(id);
+          if (version != null)
           {
-            // This is not right - url can apparently be a BinaryInput
-            String url = readAsString(o);
-            boolean validURL;
-            try
+            // This document was marked as "not scan only", so we expect to find it.
+            if (Logging.connectors.isDebugEnabled())
+              Logging.connectors.debug("JDBC: Document data result found for '"+id+"'");
+            o = row.getValue(JDBCConstants.urlReturnColumnName);
+            if (o != null)
             {
-              // Check to be sure url is valid
-              new java.net.URI(url);
-              validURL = true;
-            }
-            catch (java.net.URISyntaxException e)
-            {
-              validURL = false;
-            }
-
-            if (validURL)
-            {
-              // Process the document itself
-              Object contents = row.getValue(JDBCConstants.dataReturnColumnName);
-              // Null data is allowed; we just ignore these
-              if (contents != null)
+              // This is not right - url can apparently be a BinaryInput
+              String url = JDBCConnection.readAsString(o);
+              boolean validURL;
+              try
               {
-                // We will ingest something, so remove this id from the map in order that we know what we still
-                // need to delete when all done.
-                map.remove(id);
-                String contentType;
-                o = row.getValue(JDBCConstants.contentTypeReturnColumnName);
-                if (o != null)
-                  contentType = readAsString(o);
-                else
-                  contentType = null;
-                
-                if (contents instanceof BinaryInput)
-                {
-                  // An ingestion will take place for this document.
-                  RepositoryDocument rd = new RepositoryDocument();
+                // Check to be sure url is valid
+                new java.net.URI(url);
+                validURL = true;
+              }
+              catch (java.net.URISyntaxException e)
+              {
+                validURL = false;
+              }
 
-                  // Default content type is application/octet-stream for binary data
-                  if (contentType == null)
-                    rd.setMimeType("application/octet-stream");
+              if (validURL)
+              {
+                // Process the document itself
+                Object contents = row.getValue(JDBCConstants.dataReturnColumnName);
+                // Null data is allowed; we just ignore these
+                if (contents != null)
+                {
+                  // We will ingest something, so remove this id from the map in order that we know what we still
+                  // need to delete when all done.
+                  map.remove(id);
+                  String contentType;
+                  o = row.getValue(JDBCConstants.contentTypeReturnColumnName);
+                  if (o != null)
+                    contentType = JDBCConnection.readAsString(o);
                   else
-                    rd.setMimeType(contentType);
+                    contentType = null;
                   
-                  applyAccessTokens(rd,version,spec);
-                  applyMetadata(rd,row);
+                  if (contentType == null || activities.checkMimeTypeIndexable(contentType))
+                  {
+                    if (contents instanceof BinaryInput)
+                    {
+                      // An ingestion will take place for this document.
+                      RepositoryDocument rd = new RepositoryDocument();
 
-                  BinaryInput bi = (BinaryInput)contents;
-                  try
-                  {
-                    // Read the stream
-                    InputStream is = bi.getStream();
-                    try
-                    {
-                      rd.setBinary(is,bi.getLength());
-                      activities.ingestDocument(id, version, url, rd);
+                      // Default content type is application/octet-stream for binary data
+                      if (contentType == null)
+                        rd.setMimeType("application/octet-stream");
+                      else
+                        rd.setMimeType(contentType);
+                      
+                      applyAccessTokens(rd,version,spec);
+                      applyMetadata(rd,row);
+
+                      BinaryInput bi = (BinaryInput)contents;
+                      try
+                      {
+                        // Read the stream
+                        InputStream is = bi.getStream();
+                        try
+                        {
+                          rd.setBinary(is,bi.getLength());
+                          activities.ingestDocument(id, version, url, rd);
+                        }
+                        finally
+                        {
+                          is.close();
+                        }
+                      }
+                      catch (java.net.SocketTimeoutException e)
+                      {
+                        throw new ManifoldCFException("Socket timeout reading database data: "+e.getMessage(),e);
+                      }
+                      catch (InterruptedIOException e)
+                      {
+                        throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+                      }
+                      catch (IOException e)
+                      {
+                        throw new ManifoldCFException("Error reading database data: "+e.getMessage(),e);
+                      }
                     }
-                    finally
+                    else if (contents instanceof CharacterInput)
                     {
-                      is.close();
+                      // An ingestion will take place for this document.
+                      RepositoryDocument rd = new RepositoryDocument();
+
+                      // Default content type is application/octet-stream for binary data
+                      if (contentType == null)
+                        rd.setMimeType("text/plain; charset=utf-8");
+                      else
+                        rd.setMimeType(contentType);
+                      
+                      applyAccessTokens(rd,version,spec);
+                      applyMetadata(rd,row);
+
+                      CharacterInput ci = (CharacterInput)contents;
+                      try
+                      {
+                        // Read the stream
+                        InputStream is = ci.getUtf8Stream();
+                        try
+                        {
+                          rd.setBinary(is,ci.getUtf8StreamLength());
+                          activities.ingestDocument(id, version, url, rd);
+                        }
+                        finally
+                        {
+                          is.close();
+                        }
+                      }
+                      catch (java.net.SocketTimeoutException e)
+                      {
+                        throw new ManifoldCFException("Socket timeout reading database data: "+e.getMessage(),e);
+                      }
+                      catch (InterruptedIOException e)
+                      {
+                        throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+                      }
+                      catch (IOException e)
+                      {
+                        throw new ManifoldCFException("Error reading database data: "+e.getMessage(),e);
+                      }
+                    }
+                    else
+                    {
+                      // Turn it into a string, and then into a stream
+                      String value = contents.toString();
+                      try
+                      {
+                        byte[] bytes = value.getBytes("utf-8");
+                        RepositoryDocument rd = new RepositoryDocument();
+
+                        // Default content type is text/plain for character data
+                        if (contentType == null)
+                          rd.setMimeType("text/plain");
+                        else
+                          rd.setMimeType(contentType);
+                        
+                        applyAccessTokens(rd,version,spec);
+                        applyMetadata(rd,row);
+
+                        InputStream is = new ByteArrayInputStream(bytes);
+                        try
+                        {
+                          rd.setBinary(is,bytes.length);
+                          activities.ingestDocument(id, version, url, rd);
+                        }
+                        finally
+                        {
+                          is.close();
+                        }
+                      }
+                      catch (InterruptedIOException e)
+                      {
+                        throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+                      }
+                      catch (IOException e)
+                      {
+                        throw new ManifoldCFException("Error reading database data: "+e.getMessage(),e);
+                      }
                     }
                   }
-                  catch (java.net.SocketTimeoutException e)
-                  {
-                    throw new ManifoldCFException("Socket timeout reading database data: "+e.getMessage(),e);
-                  }
-                  catch (InterruptedIOException e)
-                  {
-                    throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
-                  }
-                  catch (IOException e)
-                  {
-                    throw new ManifoldCFException("Error reading database data: "+e.getMessage(),e);
-                  }
-                  finally
-                  {
-                    bi.discard();
-                  }
+                  else
+                    Logging.connectors.warn("JDBC: Document '"+id+"' excluded because of mime type - skipping");
                 }
                 else
-                {
-                  // Turn it into a string, and then into a stream
-                  String value = contents.toString();
-                  try
-                  {
-                    byte[] bytes = value.getBytes("utf-8");
-                    RepositoryDocument rd = new RepositoryDocument();
-
-                    // Default content type is text/plain for character data
-                    if (contentType == null)
-                      rd.setMimeType("text/plain");
-                    else
-                      rd.setMimeType(contentType);
-                    
-                    applyAccessTokens(rd,version,spec);
-                    applyMetadata(rd,row);
-
-                    InputStream is = new ByteArrayInputStream(bytes);
-                    try
-                    {
-                      rd.setBinary(is,bytes.length);
-                      activities.ingestDocument(id, version, url, rd);
-                    }
-                    finally
-                    {
-                      is.close();
-                    }
-                  }
-                  catch (InterruptedIOException e)
-                  {
-                    throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
-                  }
-                  catch (IOException e)
-                  {
-                    throw new ManifoldCFException("Error reading database data: "+e.getMessage(),e);
-                  }
-                }
+                  Logging.connectors.warn("JDBC: Document '"+id+"' seems to have null data - skipping");
               }
               else
-                Logging.connectors.warn("JDBC: Document '"+id+"' seems to have null data - skipping");
+                Logging.connectors.warn("JDBC: Document '"+id+"' has an illegal url: '"+url+"' - skipping");
             }
             else
-              Logging.connectors.warn("JDBC: Document '"+id+"' has an illegal url: '"+url+"' - skipping");
+              Logging.connectors.warn("JDBC: Document '"+id+"' has a null url - skipping");
           }
-          else
-            Logging.connectors.warn("JDBC: Document '"+id+"' has a null url - skipping");
+        }
+        finally
+        {
+          row.close();
         }
       }
       // Now, go through the original id's, and see which ones are still in the map.  These
@@ -692,14 +764,14 @@
 "<!--\n"+
 "function checkConfigForSave()\n"+
 "{\n"+
-"  if (editconnection.databasehost.value == \"\")\n"+
+"  if (editconnection.databasehost.value == \"\" && editconnection.rawjdbcstring.value == \"\")\n"+
 "  {\n"+
 "    alert(\"" + Messages.getBodyJavascriptString(locale,"JDBCConnector.PleaseFillInADatabaseServerName") + "\");\n"+
 "    SelectTab(\"" + Messages.getBodyJavascriptString(locale,"JDBCConnector.Server") + "\");\n"+
 "    editconnection.databasehost.focus();\n"+
 "    return false;\n"+
 "  }\n"+
-"  if (editconnection.databasename.value == \"\")\n"+
+"  if (editconnection.databasename.value == \"\" && editconnection.rawjdbcstring.value == \"\")\n"+
 "  {\n"+
 "    alert(\"" + Messages.getBodyJavascriptString(locale,"JDBCConnector.PleaseFillInTheNameOfTheDatabase") + "\");\n"+
 "    SelectTab(\"" + Messages.getBodyJavascriptString(locale,"JDBCConnector.Server") + "\");\n"+
@@ -735,24 +807,29 @@
     Locale locale, ConfigParams parameters, String tabName)
     throws ManifoldCFException, IOException
   {
-    String jdbcProvider = parameters.getParameter(org.apache.manifoldcf.crawler.connectors.jdbc.JDBCConstants.providerParameter);
+    String jdbcProvider = parameters.getParameter(JDBCConstants.providerParameter);
     if (jdbcProvider == null)
       jdbcProvider = "oracle:thin:@";
-    String accessMethod = parameters.getParameter(org.apache.manifoldcf.crawler.connectors.jdbc.JDBCConstants.methodParameter);
+    String accessMethod = parameters.getParameter(JDBCConstants.methodParameter);
     if (accessMethod == null)
       accessMethod = "name";
-    String host = parameters.getParameter(org.apache.manifoldcf.crawler.connectors.jdbc.JDBCConstants.hostParameter);
+    String host = parameters.getParameter(JDBCConstants.hostParameter);
     if (host == null)
       host = "localhost";
-    String databaseName = parameters.getParameter(org.apache.manifoldcf.crawler.connectors.jdbc.JDBCConstants.databaseNameParameter);
+    String databaseName = parameters.getParameter(JDBCConstants.databaseNameParameter);
     if (databaseName == null)
       databaseName = "database";
-    String databaseUser = parameters.getParameter(org.apache.manifoldcf.crawler.connectors.jdbc.JDBCConstants.databaseUserName);
+    String rawJDBCString = parameters.getParameter(JDBCConstants.driverStringParameter);
+    if (rawJDBCString == null)
+      rawJDBCString = "";
+    String databaseUser = parameters.getParameter(JDBCConstants.databaseUserName);
     if (databaseUser == null)
       databaseUser = "";
-    String databasePassword = parameters.getObfuscatedParameter(org.apache.manifoldcf.crawler.connectors.jdbc.JDBCConstants.databasePassword);
+    String databasePassword = parameters.getObfuscatedParameter(JDBCConstants.databasePassword);
     if (databasePassword == null)
       databasePassword = "";
+    else
+      databasePassword = out.mapPasswordToKey(databasePassword);
 
     // "Database Type" tab
     if (tabName.equals(Messages.getString(locale,"JDBCConnector.DatabaseType")))
@@ -803,6 +880,9 @@
 "  <tr>\n"+
 "    <td class=\"description\"><nobr>" + Messages.getBodyString(locale,"JDBCConnector.DatabaseServiceNameOrInstanceDatabase") + "</nobr></td><td class=\"value\"><input type=\"text\" size=\"32\" name=\"databasename\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(databaseName)+"\"/></td>\n"+
 "  </tr>\n"+
+"  <tr>\n"+
+"    <td class=\"description\"><nobr>" + Messages.getBodyString(locale,"JDBCConnector.RawDatabaseConnectString") + "</nobr></td><td class=\"value\"><input type=\"text\" size=\"80\" name=\"rawjdbcstring\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(rawJDBCString)+"\"/></td>\n"+
+"  </tr>\n"+
 "</table>\n"
       );
     }
@@ -810,7 +890,8 @@
     {
       out.print(
 "<input type=\"hidden\" name=\"databasehost\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(host)+"\"/>\n"+
-"<input type=\"hidden\" name=\"databasename\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(databaseName)+"\"/>\n"
+"<input type=\"hidden\" name=\"databasename\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(databaseName)+"\"/>\n"+
+"<input type=\"hidden\" name=\"rawjdbcstring\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(rawJDBCString)+"\"/>\n"
       );
     }
 
@@ -854,27 +935,31 @@
   {
     String type = variableContext.getParameter("databasetype");
     if (type != null)
-      parameters.setParameter(org.apache.manifoldcf.crawler.connectors.jdbc.JDBCConstants.providerParameter,type);
+      parameters.setParameter(JDBCConstants.providerParameter,type);
 
     String accessMethod = variableContext.getParameter("accessmethod");
     if (accessMethod != null)
-      parameters.setParameter(org.apache.manifoldcf.crawler.connectors.jdbc.JDBCConstants.methodParameter,accessMethod);
+      parameters.setParameter(JDBCConstants.methodParameter,accessMethod);
 
     String host = variableContext.getParameter("databasehost");
     if (host != null)
-      parameters.setParameter(org.apache.manifoldcf.crawler.connectors.jdbc.JDBCConstants.hostParameter,host);
+      parameters.setParameter(JDBCConstants.hostParameter,host);
 
     String databaseName = variableContext.getParameter("databasename");
     if (databaseName != null)
-      parameters.setParameter(org.apache.manifoldcf.crawler.connectors.jdbc.JDBCConstants.databaseNameParameter,databaseName);
+      parameters.setParameter(JDBCConstants.databaseNameParameter,databaseName);
+
+    String rawJDBCString = variableContext.getParameter("rawjdbcstring");
+    if (rawJDBCString != null)
+      parameters.setParameter(JDBCConstants.driverStringParameter,rawJDBCString);
 
     String userName = variableContext.getParameter("username");
     if (userName != null)
-      parameters.setParameter(org.apache.manifoldcf.crawler.connectors.jdbc.JDBCConstants.databaseUserName,userName);
+      parameters.setParameter(JDBCConstants.databaseUserName,userName);
 
     String password = variableContext.getParameter("password");
     if (password != null)
-      parameters.setObfuscatedParameter(org.apache.manifoldcf.crawler.connectors.jdbc.JDBCConstants.databasePassword,password);
+      parameters.setObfuscatedParameter(JDBCConstants.databasePassword,variableContext.mapKeyToPassword(password));
     
     return null;
   }
@@ -1058,19 +1143,19 @@
     while (i < ds.getChildCount())
     {
       SpecificationNode sn = ds.getChild(i++);
-      if (sn.getType().equals(org.apache.manifoldcf.crawler.connectors.jdbc.JDBCConstants.idQueryNode))
+      if (sn.getType().equals(JDBCConstants.idQueryNode))
       {
         idQuery = sn.getValue();
         if (idQuery == null)
           idQuery = "";
       }
-      else if (sn.getType().equals(org.apache.manifoldcf.crawler.connectors.jdbc.JDBCConstants.versionQueryNode))
+      else if (sn.getType().equals(JDBCConstants.versionQueryNode))
       {
         versionQuery = sn.getValue();
         if (versionQuery == null)
           versionQuery = "";
       }
-      else if (sn.getType().equals(org.apache.manifoldcf.crawler.connectors.jdbc.JDBCConstants.dataQueryNode))
+      else if (sn.getType().equals(JDBCConstants.dataQueryNode))
       {
         dataQuery = sn.getValue();
         if (dataQuery == null)
@@ -1218,12 +1303,12 @@
       int i = 0;
       while (i < ds.getChildCount())
       {
-        if (ds.getChild(i).getType().equals(org.apache.manifoldcf.crawler.connectors.jdbc.JDBCConstants.idQueryNode))
+        if (ds.getChild(i).getType().equals(JDBCConstants.idQueryNode))
           ds.removeChild(i);
         else
           i++;
       }
-      sn = new SpecificationNode(org.apache.manifoldcf.crawler.connectors.jdbc.JDBCConstants.idQueryNode);
+      sn = new SpecificationNode(JDBCConstants.idQueryNode);
       sn.setValue(idQuery);
       ds.addChild(ds.getChildCount(),sn);
     }
@@ -1232,12 +1317,12 @@
       int i = 0;
       while (i < ds.getChildCount())
       {
-        if (ds.getChild(i).getType().equals(org.apache.manifoldcf.crawler.connectors.jdbc.JDBCConstants.versionQueryNode))
+        if (ds.getChild(i).getType().equals(JDBCConstants.versionQueryNode))
           ds.removeChild(i);
         else
           i++;
       }
-      sn = new SpecificationNode(org.apache.manifoldcf.crawler.connectors.jdbc.JDBCConstants.versionQueryNode);
+      sn = new SpecificationNode(JDBCConstants.versionQueryNode);
       sn.setValue(versionQuery);
       ds.addChild(ds.getChildCount(),sn);
     }
@@ -1246,12 +1331,12 @@
       int i = 0;
       while (i < ds.getChildCount())
       {
-        if (ds.getChild(i).getType().equals(org.apache.manifoldcf.crawler.connectors.jdbc.JDBCConstants.dataQueryNode))
+        if (ds.getChild(i).getType().equals(JDBCConstants.dataQueryNode))
           ds.removeChild(i);
         else
           i++;
       }
-      sn = new SpecificationNode(org.apache.manifoldcf.crawler.connectors.jdbc.JDBCConstants.dataQueryNode);
+      sn = new SpecificationNode(JDBCConstants.dataQueryNode);
       sn.setValue(dataQuery);
       ds.addChild(ds.getChildCount(),sn);
     }
@@ -1321,19 +1406,19 @@
     while (i < ds.getChildCount())
     {
       SpecificationNode sn = ds.getChild(i++);
-      if (sn.getType().equals(org.apache.manifoldcf.crawler.connectors.jdbc.JDBCConstants.idQueryNode))
+      if (sn.getType().equals(JDBCConstants.idQueryNode))
       {
         idQuery = sn.getValue();
         if (idQuery == null)
           idQuery = "";
       }
-      else if (sn.getType().equals(org.apache.manifoldcf.crawler.connectors.jdbc.JDBCConstants.versionQueryNode))
+      else if (sn.getType().equals(JDBCConstants.versionQueryNode))
       {
         versionQuery = sn.getValue();
         if (versionQuery == null)
           versionQuery = "";
       }
-      else if (sn.getType().equals(org.apache.manifoldcf.crawler.connectors.jdbc.JDBCConstants.dataQueryNode))
+      else if (sn.getType().equals(JDBCConstants.dataQueryNode))
       {
         dataQuery = sn.getValue();
         if (dataQuery == null)
@@ -1665,49 +1750,6 @@
     return sb.toString();
   }
 
-  /** Make sure we read this field as a string */
-  protected static String readAsString(Object o)
-    throws ManifoldCFException
-  {
-    if (o instanceof BinaryInput)
-    {
-      // Convert this input to a string, since mssql can mess us up with the wrong column types here.
-      BinaryInput bi = (BinaryInput)o;
-      try
-      {
-        InputStream is = bi.getStream();
-        try
-        {
-          InputStreamReader reader = new InputStreamReader(is,"utf-8");
-          StringBuilder sb = new StringBuilder();
-          while (true)
-          {
-            int x = reader.read();
-            if (x == -1)
-              break;
-            sb.append((char)x);
-          }
-          return sb.toString();
-        }
-        finally
-        {
-          is.close();
-        }
-      }
-      catch (IOException e)
-      {
-        throw new ManifoldCFException(e.getMessage(),e);
-      }
-      finally
-      {
-        bi.doneWithStream();
-      }
-    }
-    else
-    {
-      return o.toString();
-    }
-  }
 
   /** Variable map entry.
   */
@@ -1779,19 +1821,19 @@
       while (i < ds.getChildCount())
       {
         SpecificationNode sn = ds.getChild(i++);
-        if (sn.getType().equals(org.apache.manifoldcf.crawler.connectors.jdbc.JDBCConstants.idQueryNode))
+        if (sn.getType().equals(JDBCConstants.idQueryNode))
         {
           idQuery = sn.getValue();
           if (idQuery == null)
             idQuery = "";
         }
-        else if (sn.getType().equals(org.apache.manifoldcf.crawler.connectors.jdbc.JDBCConstants.versionQueryNode))
+        else if (sn.getType().equals(JDBCConstants.versionQueryNode))
         {
           versionQuery = sn.getValue();
           if (versionQuery == null)
             versionQuery = "";
         }
-        else if (sn.getType().equals(org.apache.manifoldcf.crawler.connectors.jdbc.JDBCConstants.dataQueryNode))
+        else if (sn.getType().equals(JDBCConstants.dataQueryNode))
         {
           dataQuery = sn.getValue();
           if (dataQuery == null)
diff --git a/connectors/jdbc/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/jdbc/JDBCConstants.java b/connectors/jdbc/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/jdbc/JDBCConstants.java
deleted file mode 100644
index 63e72f6..0000000
--- a/connectors/jdbc/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/jdbc/JDBCConstants.java
+++ /dev/null
@@ -1,86 +0,0 @@
-/* $Id: JDBCConstants.java 988245 2010-08-23 18:39:35Z kwright $ */
-
-/**
-* Licensed to the Apache Software Foundation (ASF) under one or more
-* contributor license agreements. See the NOTICE file distributed with
-* this work for additional information regarding copyright ownership.
-* The ASF licenses this file to You under the Apache License, Version 2.0
-* (the "License"); you may not use this file except in compliance with
-* the License. You may obtain a copy of the License at
-*
-* http://www.apache.org/licenses/LICENSE-2.0
-*
-* Unless required by applicable law or agreed to in writing, software
-* distributed under the License is distributed on an "AS IS" BASIS,
-* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-* See the License for the specific language governing permissions and
-* limitations under the License.
-*/
-package org.apache.manifoldcf.crawler.connectors.jdbc;
-
-/** These are the constant strings needed by the Oracle connector.
-*/
-public class JDBCConstants
-{
-  public static final String _rcsid = "@(#)$Id: JDBCConstants.java 988245 2010-08-23 18:39:35Z kwright $";
-
-  /** The jdbc provider parameter */
-  public static String providerParameter = "JDBC Provider";
-  /** The column interrogation method name parameter */
-  public static String methodParameter = "JDBC column access method";
-  /** The host machine config parameter */
-  public static String hostParameter = "Host";
-  /** The database name config parameter */
-  public static String databaseNameParameter = "Database name";
-  /** The user name config parameter */
-  public static String databaseUserName = "User name";
-  /** The password config parameter */
-  public static String databasePassword = "Password";
-
-  /** The node containing the identifier query */
-  public static String idQueryNode = "idquery";
-  /** The node containing the version query */
-  public static String versionQueryNode = "versionquery";
-  /** The node containing the process query */
-  public static String dataQueryNode = "dataquery";
-
-  /** The name of the id return column */
-  public static String idReturnColumnName = "lcf__id";
-  /** The name of the version return column */
-  public static String versionReturnColumnName = "lcf__version";
-  /** The name of the url return column */
-  public static String urlReturnColumnName = "lcf__url";
-  /** The name of the data return column */
-  public static String dataReturnColumnName = "lcf__data";
-  /** The name of the content type return column */
-  public static String contentTypeReturnColumnName = "lcf__contenttype";
-  
-  /** The name of the id return variable */
-  public static String idReturnVariable = "IDCOLUMN";
-  /** The name of the version return variable */
-  public static String versionReturnVariable = "VERSIONCOLUMN";
-  /** The name of the url return variable */
-  public static String urlReturnVariable = "URLCOLUMN";
-  /** The name of the data return variable */
-  public static String dataReturnVariable = "DATACOLUMN";
-  /** The name of the content type return variable */
-  public static String contentTypeReturnVariable = "CONTENTTYPE";
-  /** The name of the start time variable */
-  public static String startTimeVariable = "STARTTIME";
-  /** The name of the end time variable */
-  public static String endTimeVariable = "ENDTIME";
-  /** The name of the id list */
-  public static String idListVariable = "IDLIST";
-
-  /** JDBCAuthority */
-  /** Query returning user Id parameter name */
-  public static String databaseUserIdQuery = "User Id Query";
-  /** Query returning user tokens parameter name */
-  public static String databaseTokensQuery = "User Tokens Query";
-  /** The name of the user name variable */
-  public static String userNameVariable = "USERNAME";
-  /** The name of the user id variable */
-  public static String userIDVariable = "UID";
-}
-
-
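The constants deleted here move to the new org.apache.manifoldcf.jdbc package (see the changed imports in JDBCConnector.java above); they name the substitution variables that job queries are written against, such as $(IDCOLUMN), $(STARTTIME), $(ENDTIME), and $(IDLIST). Purely as an illustration of how those variables appear in a query, a seeding query typically aliases its key column to the id variable; the table and column names below are invented, and the exact substitution performed for the time variables is not shown in this patch.

// Hypothetical seeding query written against the substitution variables listed above.
public class SeedingQueryExample
{
  public static final String SEEDING_QUERY =
    "SELECT doc_id AS $(IDCOLUMN) FROM documents" +
    " WHERE modified_date >= $(STARTTIME) AND modified_date < $(ENDTIME)";
}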
diff --git a/connectors/jdbc/connector/src/main/java/org/apache/manifoldcf/jdbc/IDynamicResultRow.java b/connectors/jdbc/connector/src/main/java/org/apache/manifoldcf/jdbc/IDynamicResultRow.java
new file mode 100644
index 0000000..066a34c
--- /dev/null
+++ b/connectors/jdbc/connector/src/main/java/org/apache/manifoldcf/jdbc/IDynamicResultRow.java
@@ -0,0 +1,37 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.jdbc;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+
+/** This object describes an (open) jdbc resultrow.  Semantics are identical to
+* org.apache.manifoldcf.core.interfaces.IResultRow, EXCEPT that a close() method is
+* provided and must be called, in order to clean up any blobs or clobs in the row that
+* were unused.
+*/
+public interface IDynamicResultRow extends IResultRow
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  /** Close this resultrow.
+  */
+  public void close()
+    throws ManifoldCFException;
+}
diff --git a/connectors/jdbc/connector/src/main/java/org/apache/manifoldcf/jdbc/IDynamicResultSet.java b/connectors/jdbc/connector/src/main/java/org/apache/manifoldcf/jdbc/IDynamicResultSet.java
new file mode 100644
index 0000000..3dbc85d
--- /dev/null
+++ b/connectors/jdbc/connector/src/main/java/org/apache/manifoldcf/jdbc/IDynamicResultSet.java
@@ -0,0 +1,43 @@
+/* $Id: IDynamicResultSet.java 988245 2010-08-23 18:39:35Z kwright $ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.jdbc;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+
+/** This object describes an (open) jdbc resultset.  Semantics are identical to
+* org.apache.manifoldcf.core.interfaces.IResultSet, EXCEPT that a close() method is
+* provided and must be called, and there is no method to get the entire resultset
+* at once.
+*/
+public interface IDynamicResultSet
+{
+  public static final String _rcsid = "@(#)$Id: IDynamicResultSet.java 988245 2010-08-23 18:39:35Z kwright $";
+
+  /** Get the next row from the resultset.
+  *@return the next row description (which the caller must close), or null if there is no such row.
+  */
+  public IDynamicResultRow getNextRow()
+    throws ManifoldCFException, ServiceInterruption;
+
+  /** Close this resultset.
+  */
+  public void close()
+    throws ManifoldCFException, ServiceInterruption;
+}
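Taken together, the two new interfaces push all cleanup onto the caller: every row returned by getNextRow() must be closed so any unread blob or clob temporaries are released, and the set itself must be closed whether or not iteration finished. A minimal consumption sketch matching the try/finally pattern the JDBCConnector hunks above adopt; how the result set is obtained is left abstract.

import org.apache.manifoldcf.agents.interfaces.ServiceInterruption;
import org.apache.manifoldcf.core.interfaces.ManifoldCFException;
import org.apache.manifoldcf.jdbc.IDynamicResultRow;
import org.apache.manifoldcf.jdbc.IDynamicResultSet;

public class DynamicResultSetUsage
{
  public static void consume(IDynamicResultSet resultSet)
    throws ManifoldCFException, ServiceInterruption
  {
    try
    {
      while (true)
      {
        IDynamicResultRow row = resultSet.getNextRow();
        if (row == null)
          break;
        try
        {
          // ... read row.getValue(...) for the columns of interest ...
        }
        finally
        {
          row.close();          // releases any unread blob/clob resources for this row
        }
      }
    }
    finally
    {
      resultSet.close();        // always close the set, even if a row threw
    }
  }
}

Closing each row in its own inner finally, before the set-level finally, mirrors how the seeding, versioning, and processing loops above are guarded.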
diff --git a/connectors/jdbc/connector/src/main/java/org/apache/manifoldcf/jdbc/JDBCConnection.java b/connectors/jdbc/connector/src/main/java/org/apache/manifoldcf/jdbc/JDBCConnection.java
new file mode 100644
index 0000000..6ebcadf
--- /dev/null
+++ b/connectors/jdbc/connector/src/main/java/org/apache/manifoldcf/jdbc/JDBCConnection.java
@@ -0,0 +1,1482 @@
+/* $Id: JDBCConnection.java 988245 2010-08-23 18:39:35Z kwright $ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.jdbc;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.core.database.*;
+import org.apache.manifoldcf.core.jdbcpool.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+
+import java.sql.*;
+import javax.naming.*;
+import javax.sql.*;
+
+import java.io.*;
+import java.util.*;
+
+/** This object describes a connection to a particular JDBC instance.
+*/
+public class JDBCConnection
+{
+  public static final String _rcsid = "@(#)$Id: JDBCConnection.java 988245 2010-08-23 18:39:35Z kwright $";
+
+  protected String jdbcProvider = null;
+  protected boolean useName;
+  protected String driverString = null;
+  protected String userName = null;
+  protected String password = null;
+
+  /** Constructor.
+  */
+  public JDBCConnection(String jdbcProvider, boolean useName, String host, String databaseName, String rawDriverString,
+    String userName, String password)
+    throws ManifoldCFException
+  {
+    this.jdbcProvider = jdbcProvider;
+    this.useName = useName;
+    this.driverString = JDBCConnectionFactory.getJDBCDriverString(jdbcProvider, host, databaseName, rawDriverString);
+    this.userName = userName;
+    this.password = password;
+  }
+
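+  /** Read the next result row on a separate worker thread.  Running the blocking JDBC fetch on
+  * a daemon thread lets the calling thread respond to interruption promptly; an interrupt is
+  * converted into a ManifoldCFException with the INTERRUPTED error code.
+  */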
+  protected static IDynamicResultRow readNextResultRowViaThread(ResultSet rs, ResultSetMetaData rsmd, String[] resultCols)
+    throws ManifoldCFException, ServiceInterruption
+  {
+    NextResultRowThread t = new NextResultRowThread(rs,rsmd,resultCols);
+    try
+    {
+      t.start();
+      return t.finishUp();
+    }
+    catch (InterruptedException e)
+    {
+      t.interrupt();
+      throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+    }
+  }
+
+  protected static class NextResultRowThread extends Thread
+  {
+    protected ResultSet rs;
+    protected ResultSetMetaData rsmd;
+    protected String[] resultCols;
+
+    protected Throwable exception = null;
+    protected IDynamicResultRow response = null;
+
+    public NextResultRowThread(ResultSet rs, ResultSetMetaData rsmd, String[] resultCols)
+    {
+      super();
+      setDaemon(true);
+      this.rs = rs;
+      this.rsmd = rsmd;
+      this.resultCols = resultCols;
+    }
+
+    public void run()
+    {
+      try
+      {
+        response = readNextResultRow(rs,rsmd,resultCols);
+      }
+      catch (Throwable e)
+      {
+        this.exception = e;
+      }
+    }
+
+    public IDynamicResultRow finishUp()
+      throws ManifoldCFException, ServiceInterruption, InterruptedException
+    {
+      join();
+      Throwable thr = exception;
+      if (thr != null)
+      {
+        if (thr instanceof java.sql.SQLException)
+          throw new ManifoldCFException("Error fetching next JDBC result row: "+thr.getMessage(),thr);
+        else if (thr instanceof ManifoldCFException)
+          throw (ManifoldCFException)thr;
+        else if (thr instanceof ServiceInterruption)
+          throw (ServiceInterruption)thr;
+        else if (thr instanceof RuntimeException)
+          throw (RuntimeException)thr;
+        else
+          throw (Error)thr;
+      }
+      return response;
+    }
+    
+  }
+
+  protected static IDynamicResultRow readNextResultRow(ResultSet rs, ResultSetMetaData rsmd, String[] resultCols)
+    throws ManifoldCFException, ServiceInterruption
+  {
+    try
+    {
+      if (rs.next())
+      {
+        return readResultRow(rs,rsmd,resultCols);
+      }
+      return null;
+    }
+    catch (java.sql.SQLException e)
+    {
+      throw new ManifoldCFException("Result set error: "+e.getMessage(),e);
+    }
+  }
+
+  protected static void closeResultset(ResultSet rs)
+    throws ManifoldCFException, ServiceInterruption
+  {
+    try
+    {
+      rs.close();
+    }
+    catch (java.sql.SQLException e)
+    {
+      throw new ManifoldCFException("Exception closing resultset: "+e.getMessage(),e);
+    }
+  }
+
+  protected static void closeStmt(Statement stmt)
+    throws ManifoldCFException, ServiceInterruption
+  {
+    try
+    {
+      stmt.close();
+    }
+    catch (java.sql.SQLException e)
+    {
+      throw new ManifoldCFException("Exception closing statement: "+e.getMessage(),e);
+    }
+  }
+
+  protected static void closePS(PreparedStatement ps)
+    throws ManifoldCFException, ServiceInterruption
+  {
+    try
+    {
+      ps.close();
+    }
+    catch (java.sql.SQLException e)
+    {
+      throw new ManifoldCFException("Exception closing statement: "+e.getMessage(),e);
+    }
+  }
+
+
+  /** Test connection.
+  */
+  public void testConnection()
+    throws ManifoldCFException, ServiceInterruption
+  {
+    TestConnectionThread t = new TestConnectionThread();
+    try
+    {
+      t.start();
+      t.finishUp();
+    }
+    catch (InterruptedException e)
+    {
+      t.interrupt();
+      throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+    }
+  }
+
+  protected class TestConnectionThread extends Thread
+  {
+    protected Throwable exception = null;
+
+    public TestConnectionThread()
+    {
+      super();
+      setDaemon(true);
+    }
+
+    public void run()
+    {
+      try
+      {
+        WrappedConnection tempConnection = JDBCConnectionFactory.getConnection(jdbcProvider,driverString,userName,password);
+        JDBCConnectionFactory.releaseConnection(tempConnection);
+      }
+      catch (Throwable e)
+      {
+        this.exception = e;
+      }
+    }
+
+    public void finishUp()
+      throws ManifoldCFException, ServiceInterruption, InterruptedException
+    {
+      join();
+      Throwable thr = exception;
+      if (thr != null)
+      {
+        if (thr instanceof java.sql.SQLException)
+          throw new ManifoldCFException("Error doing JDBC connection test: "+thr.getMessage(),thr);
+        else if (thr instanceof ManifoldCFException)
+          throw (ManifoldCFException)thr;
+        else if (thr instanceof ServiceInterruption)
+          throw (ServiceInterruption)thr;
+        else if (thr instanceof RuntimeException)
+          throw (RuntimeException)thr;
+        else
+          throw (Error)thr;
+      }
+    }
+  }
+
+  /** Execute query.
+  */
+  public IDynamicResultSet executeUncachedQuery(String query, ArrayList params, int maxResults)
+    throws ManifoldCFException, ServiceInterruption
+  {
+    if (params == null)
+      return new JDBCResultSet(query,maxResults);
+    else
+      return new JDBCPSResultSet(query,params,maxResults);
+  }
+
+  /** Execute operation.
+  */
+  public void executeOperation(String query, ArrayList params)
+    throws ManifoldCFException, ServiceInterruption
+  {
+    ExecuteOperationThread t = new ExecuteOperationThread(query,params);
+    try
+    {
+      t.start();
+      t.finishUp();
+    }
+    catch (InterruptedException e)
+    {
+      t.interrupt();
+      throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+    }
+  }
+
+  /** Read a result object as a String, whether it arrived as binary data, character data, or a plain object. */
+  public static String readAsString(Object o)
+    throws ManifoldCFException
+  {
+    if (o instanceof BinaryInput)
+    {
+      // Convert this input to a string, since mssql can mess us up with the wrong column types here.
+      BinaryInput bi = (BinaryInput)o;
+      try
+      {
+        InputStream is = bi.getStream();
+        try
+        {
+          InputStreamReader reader = new InputStreamReader(is,"utf-8");
+          StringBuilder sb = new StringBuilder();
+          while (true)
+          {
+            int x = reader.read();
+            if (x == -1)
+              break;
+            sb.append((char)x);
+          }
+          return sb.toString();
+        }
+        finally
+        {
+          is.close();
+        }
+      }
+      catch (IOException e)
+      {
+        throw new ManifoldCFException(e.getMessage(),e);
+      }
+      finally
+      {
+        bi.doneWithStream();
+      }
+    }
+    else if (o instanceof CharacterInput)
+    {
+      CharacterInput ci = (CharacterInput)o;
+      try
+      {
+        Reader reader = ci.getStream();
+        try
+        {
+          StringBuilder sb = new StringBuilder();
+          while (true)
+          {
+            int x = reader.read();
+            if (x == -1)
+              break;
+            sb.append((char)x);
+          }
+          return sb.toString();
+        }
+        finally
+        {
+          reader.close();
+        }
+      }
+      catch (IOException e)
+      {
+        throw new ManifoldCFException(e.getMessage(),e);
+      }
+      finally
+      {
+        ci.doneWithStream();
+      }
+    }
+    else
+    {
+      return o.toString();
+    }
+  }
+
+  protected class ExecuteOperationThread extends Thread
+  {
+    protected String query;
+    protected ArrayList params;
+
+    protected Throwable exception = null;
+
+    public ExecuteOperationThread(String query, ArrayList params)
+    {
+      super();
+      setDaemon(true);
+      this.query = query;
+      this.params = params;
+    }
+
+    public void run()
+    {
+      try
+      {
+        WrappedConnection tempConnection = JDBCConnectionFactory.getConnection(jdbcProvider,driverString,userName,password);
+        try
+        {
+          execute(tempConnection.getConnection(),query,params,false,0,useName);
+        }
+        finally
+        {
+          JDBCConnectionFactory.releaseConnection(tempConnection);
+        }
+      }
+      catch (Throwable e)
+      {
+        this.exception = e;
+      }
+    }
+
+    public void finishUp()
+      throws ManifoldCFException, ServiceInterruption, InterruptedException
+    {
+      join();
+      Throwable thr = exception;
+      if (thr != null)
+      {
+        if (thr instanceof java.sql.SQLException)
+          throw new ManifoldCFException("Exception doing connector query '"+query+"': "+thr.getMessage(),thr);
+        else if (thr instanceof ManifoldCFException)
+          throw (ManifoldCFException)thr;
+        else if (thr instanceof ServiceInterruption)
+          throw (ServiceInterruption)thr;
+        else if (thr instanceof RuntimeException)
+          throw (RuntimeException)thr;
+        else
+          throw (Error)thr;
+      }
+    }
+  }
+
+  /** Run a query against a JDBC connection.  No caching is involved at all at this level.
+  *@param connection is the JDBC connection to use.
+  *@param query is the query string.
+  *@param params is the parameter list; if non-null, a PreparedStatement is used.
+  *@param bResults is true if a result set is expected back from the query.
+  *@param maxResults is the maximum number of results to load, or -1 for all.
+  *@param useName is true if column names (rather than labels) should key the result columns.
+  */
+  protected static IResultSet execute(Connection connection, String query, ArrayList params, boolean bResults, int maxResults, boolean useName)
+    throws ManifoldCFException, ServiceInterruption
+  {
+
+    ResultSet rs;
+
+    try
+    {
+
+      if (params==null)
+      {
+        // lightest statement type
+        Statement stmt = connection.createStatement();
+        try
+        {
+          stmt.execute(query);
+          rs = stmt.getResultSet();
+          try
+          {
+            // Suck data from resultset
+            if (bResults)
+              return getData(rs,maxResults,useName);
+            return null;
+          }
+          finally
+          {
+            if (rs != null)
+              rs.close();
+          }
+        }
+        finally
+        {
+          stmt.close();
+        }
+      }
+      else
+      {
+        PreparedStatement ps = connection.prepareStatement(query);
+        try
+        {
+          loadPS(ps, params);
+
+          if (bResults)
+          {
+            rs = ps.executeQuery();
+            try
+            {
+              // Suck data from resultset
+              return getData(rs,maxResults,useName);
+            }
+            finally
+            {
+              if (rs != null)
+                rs.close();
+            }
+          }
+          else
+          {
+            ps.executeUpdate();
+            return null;
+          }
+
+        }
+        finally
+        {
+          ps.close();
+          cleanupParameters(params);
+        }
+      }
+
+    }
+    catch (ManifoldCFException e)
+    {
+      throw e;
+    }
+    catch (java.sql.SQLException e)
+    {
+      throw new ManifoldCFException("Exception doing connector query '"+query+"': "+e.getMessage(),e);
+    }
+  }
+
+  /** Read the current row from the resultset */
+  protected static IDynamicResultRow readResultRow(ResultSet rs, ResultSetMetaData rsmd, String[] resultCols)
+    throws ManifoldCFException, ServiceInterruption
+  {
+    try
+    {
+      Object value = null;
+      RDynamicRow m = new RDynamicRow();
+
+      // Walk through all requested result columns
+      for (int i = 0; i < resultCols.length; i++)
+      {
+        String key = resultCols[i];
+        // System.out.println("Key = "+key);
+        int colnum = findColumn(rs,key);
+        if (colnum > -1)
+        {
+          if (isBinaryData(rsmd,colnum))
+          {
+            InputStream bis = rs.getBinaryStream(colnum);
+            if (bis != null)
+              value = new TempFileInput(bis);
+          }
+          else if (isBLOB(rsmd,colnum))
+          {
+            // System.out.println("It's a blob!");
+            Blob blob = getBLOB(rs,colnum);
+            // Create a tempfileinput object!
+            // Cleanup should happen by the user of the resultset.
+            // System.out.println(" Blob length = "+Long.toString(blob.length()));
+            if (blob != null)
+              value = new TempFileInput(blob.getBinaryStream(),blob.length());
+          }
+          else if (isCLOB(rsmd,colnum))
+          {
+            Clob clob = getCLOB(rs,colnum);
+            if (clob != null)
+              value = new TempFileCharacterInput(clob.getCharacterStream(),clob.length());
+          }
+          else
+          {
+            // System.out.println("It's not a blob");
+            value = getObject(rs,rsmd,colnum);
+          }
+        }
+        if (value != null)
+          m.put(key, value);
+      }
+      return m;
+
+    }
+    catch (java.sql.SQLException e)
+    {
+      throw new ManifoldCFException("Resultset error: "+e.getMessage(),e);
+    }
+  }
+
+  protected static String[] readColumnNames(ResultSetMetaData rsmd, boolean useName)
+    throws ManifoldCFException, ServiceInterruption
+  {
+    try
+    {
+      String[] resultCols;
+      if (rsmd != null)
+      {
+        int colcount = rsmd.getColumnCount();
+        resultCols = new String[colcount];
+        for (int i = 0; i < colcount; i++)
+        {
+          String name;
+          if (useName)
+            name = rsmd.getColumnName(i+1);
+          else
+            name = rsmd.getColumnLabel(i+1);
+          resultCols[i] = name;
+        }
+      }
+      else
+        resultCols = new String[0];
+      return resultCols;
+    }
+    catch (java.sql.SQLException e)
+    {
+      throw new ManifoldCFException("Sql exception reading column names: "+e.getMessage(),e);
+    }
+  }
+
+  // Read data from a resultset
+  protected static IResultSet getData(ResultSet rs, int maxResults, boolean useName)
+    throws ManifoldCFException, ServiceInterruption
+  {
+    try
+    {
+      RSet results = new RSet();  // might be empty but not an error
+
+      if (rs != null)
+      {
+        // Optionally we're going to suck the data
+        // out of the db and return it in a
+        // readonly structure
+        ResultSetMetaData rsmd = rs.getMetaData();
+        String[] resultCols = readColumnNames(rsmd, useName);
+        if (resultCols.length == 0)
+        {
+          // This is an error situation; if a result with no columns is
+          // necessary, bResults must be false!!!
+          throw new ManifoldCFException("Empty query, no columns returned",ManifoldCFException.GENERAL_ERROR);
+        }
+
+        while (rs.next() && (maxResults == -1 || maxResults > 0))
+        {
+          IResultRow m = readResultRow(rs,rsmd,resultCols);
+          if (maxResults != -1)
+            maxResults--;
+          results.addRow(m);
+        }
+      }
+      return results;
+    }
+    catch (java.sql.SQLException e)
+    {
+      throw new ManifoldCFException("Resultset error: "+e.getMessage(),e);
+    }
+  }
+
+  // Bind the parameters to the PreparedStatement, converting from the supported object types
+  protected static void loadPS(PreparedStatement ps, ArrayList data)
+    throws java.sql.SQLException, ManifoldCFException
+  {
+    if (data!=null)
+    {
+      for (int i = 0; i < data.size(); i++)
+      {
+        // If the input type is a string, then set it as such.
+        // Otherwise, if it's an input stream, we make a blob out of it.
+        Object x = data.get(i);
+        if (x instanceof String)
+        {
+          String value = (String)x;
+          // letting database do lame conversion!
+          ps.setString(i+1, value);
+        }
+        else if (x instanceof BinaryInput)
+        {
+          BinaryInput value = (BinaryInput)x;
+          ps.setBinaryStream(i+1,value.getStream(),value.getLength());
+          // Hopefully with the introduction of CharacterInput below, this hackery is no longer needed.
+          // System.out.println("Blob length on write = "+Long.toString(value.getLength()));
+          // The oracle driver does a binary conversion to base 64 when writing data
+          // into a clob column using a binary stream operator.  Since at this
+          // point there is no way to distinguish the two, and since our tests use CLOB,
+          // this code doesn't work for them.
+          // So, for now, use the ascii stream method.
+          //ps.setAsciiStream(i+1,value.getStream(),value.getLength());
+        }
+        else if (x instanceof CharacterInput)
+        {
+          CharacterInput value = (CharacterInput)x;
+          ps.setCharacterStream(i+1,value.getStream(),value.getCharacterLength());
+        }
+        else if (x instanceof java.util.Date)
+        {
+          ps.setDate(i+1,new java.sql.Date(((java.util.Date)x).getTime()));
+        }
+        else if (x instanceof Long)
+        {
+          ps.setLong(i+1,((Long)x).longValue());
+        }
+        else if (x instanceof TimeMarker)
+        {
+          ps.setTimestamp(i+1,new java.sql.Timestamp(((Long)x).longValue()));
+        }
+        else if (x instanceof Double)
+        {
+          ps.setDouble(i+1,((Double)x).doubleValue());
+        }
+        else if (x instanceof Integer)
+        {
+          ps.setInt(i+1,((Integer)x).intValue());
+        }
+        else if (x instanceof Float)
+        {
+          ps.setFloat(i+1,((Float)x).floatValue());
+        }
+        else
+          throw new ManifoldCFException("Unknown data type: "+x.getClass().getName());
+      }
+    }
+  }
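+
+  // Illustrative sketch, not part of the original source: building a parameter list that
+  // loadPS() above can bind.  The query, column names, and values are hypothetical; each
+  // entry corresponds to one of the type branches handled above.
+  //
+  //   ArrayList params = new ArrayList();
+  //   params.add("doc-42");                          // bound via setString()
+  //   params.add(new Long(1048576L));                // bound via setLong()
+  //   params.add(new java.util.Date());              // bound via setDate()
+  //   IResultSet result = execute(connection,
+  //     "SELECT * FROM docs WHERE id=? AND size<? AND modified<?",
+  //     params, true, -1, useName);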
+
+  /** Permanently discard database object.
+  */
+  protected static void discardDatabaseObject(Object x)
+    throws ManifoldCFException
+  {
+    if (x instanceof PersistentDatabaseObject)
+    {
+      PersistentDatabaseObject value = (PersistentDatabaseObject)x;
+      value.discard();
+    }
+  }
+  
+  /** Call this method on every parameter or result object when we're done with it, in case the
+  * object is a BLOB or CLOB whose underlying stream needs to be released.
+  */
+  protected static void cleanupDatabaseObject(Object x)
+    throws ManifoldCFException
+  {
+    if (x instanceof PersistentDatabaseObject)
+    {
+      PersistentDatabaseObject value = (PersistentDatabaseObject)x;
+      value.doneWithStream();
+    }
+  }
+  
+  /** Clean up parameters after query has been triggered.
+  */
+  protected static void cleanupParameters(ArrayList data)
+    throws ManifoldCFException
+  {
+    if (data != null)
+    {
+      for (Object x : data)
+      {
+        cleanupDatabaseObject(x);
+      }
+    }
+  }
+
+  protected static int findColumn(ResultSet rs, String name)
+    throws ManifoldCFException, ServiceInterruption
+  {
+    try
+    {
+      return rs.findColumn(name);
+    }
+    catch (java.sql.SQLException e)
+    {
+      return -1;
+    }
+  }
+
+  protected static Blob getBLOB(ResultSet rs, int col)
+    throws ManifoldCFException, ServiceInterruption
+  {
+    try
+    {
+      return rs.getBlob(col);
+    }
+    catch (java.sql.SQLException sqle)
+    {
+      throw new ManifoldCFException("Error in getBlob("+col+"): "+sqle.getMessage(),sqle,ManifoldCFException.DATABASE_ERROR);
+    }
+  }
+
+  protected static Clob getCLOB(ResultSet rs, int col)
+    throws ManifoldCFException, ServiceInterruption
+  {
+    try
+    {
+      return rs.getClob(col);
+    }
+    catch (java.sql.SQLException sqle)
+    {
+      throw new ManifoldCFException("Error in getClob("+col+"): "+sqle.getMessage(),sqle,ManifoldCFException.DATABASE_ERROR);
+    }
+  }
+
+  protected static boolean isBLOB(ResultSetMetaData rsmd, int col)
+    throws ManifoldCFException, ServiceInterruption
+  {
+    try
+    {
+      int type = rsmd.getColumnType(col);
+      return (type == java.sql.Types.BLOB);
+    }
+    catch (java.sql.SQLException sqle)
+    {
+      throw new ManifoldCFException("Error in isBlob("+col+"): "+sqle.getMessage(),sqle,ManifoldCFException.DATABASE_ERROR);
+    }
+  }
+
+  protected static boolean isBinaryData(ResultSetMetaData rsmd, int col)
+    throws ManifoldCFException, ServiceInterruption
+  {
+    try
+    {
+      int type = rsmd.getColumnType(col);
+      return (type == java.sql.Types.VARBINARY ||
+        type == java.sql.Types.BINARY || type == java.sql.Types.LONGVARBINARY);
+    }
+    catch (java.sql.SQLException sqle)
+    {
+      throw new ManifoldCFException("Error in isBinaryData("+col+"): "+sqle.getMessage(),sqle,ManifoldCFException.DATABASE_ERROR);
+    }
+  }
+
+  protected static boolean isCLOB(ResultSetMetaData rsmd, int col)
+    throws ManifoldCFException, ServiceInterruption
+  {
+    try
+    {
+      int type = rsmd.getColumnType(col);
+      return (type == java.sql.Types.CLOB || type == java.sql.Types.LONGVARCHAR);
+    }
+    catch (java.sql.SQLException sqle)
+    {
+      throw new ManifoldCFException("Error in isClob("+col+"): "+sqle.getMessage(),sqle,ManifoldCFException.DATABASE_ERROR);
+    }
+  }
+
+  protected static Object getObject(ResultSet rs, ResultSetMetaData rsmd, int col)
+    throws ManifoldCFException, ServiceInterruption
+  {
+    Object result = null;
+
+    try
+    {
+      Timestamp timestamp;
+      java.sql.Date date;
+      Clob clob;
+      String resultString;
+
+      switch (rsmd.getColumnType(col))
+      {
+      case java.sql.Types.CHAR :
+        if ((resultString = rs.getString(col)) != null)
+        {
+          if (rsmd.getColumnDisplaySize(col) < resultString.length())
+          {
+            result = resultString.substring(0,rsmd.getColumnDisplaySize(col));
+          }
+          else
+            result = resultString;
+        }
+        break;
+      case java.sql.Types.CLOB :
+        if ((clob = rs.getClob(col)) != null)
+        {
+          result = clob.getSubString(1, (int) clob.length());
+        }
+        break;
+
+      case java.sql.Types.BIGINT :
+        long l = rs.getLong(col);
+        if (!rs.wasNull())
+          result = new Long(l);
+        break;
+
+      case java.sql.Types.INTEGER :
+        int i = rs.getInt(col);
+        if (!rs.wasNull())
+          result = new Integer(i);
+        break;
+
+      case java.sql.Types.REAL :
+      case java.sql.Types.FLOAT :
+        float f = rs.getFloat(col);
+        if (!rs.wasNull())
+          result = new Float(f);
+        break;
+
+      case java.sql.Types.DOUBLE :
+        double d = rs.getDouble(col);
+        if (!rs.wasNull())
+          result = new Double(d);
+        break;
+
+      case java.sql.Types.DATE :
+        if ((date = rs.getDate(col)) != null)
+        {
+          result = new java.util.Date(date.getTime());
+        }
+        break;
+
+      case java.sql.Types.TIMESTAMP :
+        if ((timestamp = rs.getTimestamp(col)) != null)
+        {
+          result = new TimeMarker(timestamp.getTime());
+        }
+        break;
+
+      case java.sql.Types.BLOB:
+      case java.sql.Types.VARBINARY:
+      case java.sql.Types.BINARY:
+      case java.sql.Types.LONGVARBINARY:
+        throw new ManifoldCFException("Binary type is not a string, column = " + col,ManifoldCFException.GENERAL_ERROR);
+        //break
+
+      default :
+        result = rs.getString(col);
+        break;
+      }
+      if (rs.wasNull())
+      {
+        result = null;
+      }
+    }
+    catch (java.sql.SQLException e)
+    {
+      throw new ManifoldCFException("Exception in getString(): "+e.getMessage(),e,ManifoldCFException.DATABASE_ERROR);
+    }
+    return result;
+  }
+
+  protected class JDBCResultSet implements IDynamicResultSet
+  {
+    protected WrappedConnection connection;
+    protected Statement stmt;
+    protected ResultSet rs;
+    protected ResultSetMetaData rsmd;
+    protected String[] resultCols;
+    protected int maxResults;
+
+    /** Constructor */
+    public JDBCResultSet(String query, int maxResults)
+      throws ManifoldCFException, ServiceInterruption
+    {
+      this.maxResults = maxResults;
+      StatementQueryThread t = new StatementQueryThread(query);
+      try
+      {
+        t.start();
+        t.finishUp();
+        connection = t.getConnection();
+        stmt = t.getStatement();
+        rs = t.getResultSet();
+        rsmd = t.getResultSetMetaData();
+        resultCols = t.getColumnNames();
+      }
+      catch (InterruptedException e)
+      {
+        t.interrupt();
+        throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+      }
+    }
+
+    /** Get the next row from the resultset.
+    *@return the immutable row description, or null if there is no such row.
+    */
+    public IDynamicResultRow getNextRow()
+      throws ManifoldCFException, ServiceInterruption
+    {
+      if (maxResults == -1 || maxResults > 0)
+      {
+        IDynamicResultRow row = readNextResultRowViaThread(rs,rsmd,resultCols);
+        if (row != null && maxResults != -1)
+          maxResults--;
+        return row;
+      }
+      return null;
+    }
+
+    /** Close this resultset.
+    */
+    public void close()
+      throws ManifoldCFException, ServiceInterruption
+    {
+      ManifoldCFException rval = null;
+      Error error = null;
+      RuntimeException rtException = null;
+      if (rs != null)
+      {
+        try
+        {
+          closeResultset(rs);
+        }
+        catch (ManifoldCFException e)
+        {
+          if (rval == null || e.getErrorCode() == ManifoldCFException.INTERRUPTED)
+            rval = e;
+        }
+        catch (Error e)
+        {
+          error = e;
+        }
+        catch (RuntimeException e)
+        {
+          rtException = e;
+        }
+        finally
+        {
+          rs = null;
+        }
+      }
+      if (stmt != null)
+      {
+        try
+        {
+          closeStmt(stmt);
+        }
+        catch (ManifoldCFException e)
+        {
+          if (rval == null || e.getErrorCode() == ManifoldCFException.INTERRUPTED)
+            rval = e;
+        }
+        catch (Error e)
+        {
+          error = e;
+        }
+        catch (RuntimeException e)
+        {
+          rtException = e;
+        }
+        finally
+        {
+          stmt = null;
+        }
+      }
+      if (connection != null)
+      {
+        try
+        {
+          JDBCConnectionFactory.releaseConnection(connection);
+        }
+        catch (Error e)
+        {
+          error = e;
+        }
+        catch (RuntimeException e)
+        {
+          rtException = e;
+        }
+        finally
+        {
+          connection = null;
+        }
+      }
+      if (error != null)
+        throw error;
+      if (rtException != null)
+        throw rtException;
+      if (rval != null)
+        throw rval;
+    }
+
+  }
+
+  protected class StatementQueryThread extends Thread
+  {
+    protected String query;
+
+    protected Throwable exception = null;
+    protected WrappedConnection connection = null;
+    protected Statement stmt = null;
+    protected ResultSet rs = null;
+    protected ResultSetMetaData rsmd = null;
+    protected String[] resultCols = null;
+
+    public StatementQueryThread(String query)
+    {
+      super();
+      setDaemon(true);
+      this.query = query;
+    }
+
+    public void run()
+    {
+      try
+      {
+        connection = JDBCConnectionFactory.getConnection(jdbcProvider,driverString,userName,password);
+        // lightest statement type
+        stmt = connection.getConnection().createStatement();
+        stmt.execute(query);
+        rs = stmt.getResultSet();
+        rsmd = rs.getMetaData();
+        resultCols = readColumnNames(rsmd,useName);
+      }
+      catch (Throwable e)
+      {
+        this.exception = e;
+        if (rs != null)
+        {
+          try
+          {
+            closeResultset(rs);
+          }
+          catch (ManifoldCFException e2)
+          {
+            if (e2.getErrorCode() == ManifoldCFException.INTERRUPTED)
+              this.exception = e2;
+            // Ignore
+          }
+          catch (Throwable e2)
+          {
+            // We already have an exception to report.
+            // Eat any other exceptions from closing
+          }
+          finally
+          {
+            rs = null;
+          }
+        }
+        if (stmt != null)
+        {
+          try
+          {
+            closeStmt(stmt);
+          }
+          catch (ManifoldCFException e2)
+          {
+            if (e2.getErrorCode() == ManifoldCFException.INTERRUPTED)
+              this.exception = e2;
+            // Ignore
+          }
+          catch (Throwable e2)
+          {
+            // We already have an exception to report.
+            // Eat any other exceptions from closing statements
+          }
+          finally
+          {
+            stmt = null;
+          }
+        }
+        if (connection != null)
+        {
+          JDBCConnectionFactory.releaseConnection(connection);
+          connection = null;
+        }
+      }
+    }
+
+    public void finishUp()
+      throws ManifoldCFException, ServiceInterruption, InterruptedException
+    {
+      join();
+      Throwable thr = exception;
+      if (thr != null)
+      {
+        if (thr instanceof java.sql.SQLException)
+          throw new ManifoldCFException("Exception doing connector query '"+query+"': "+thr.getMessage(),thr);
+        else if (thr instanceof ManifoldCFException)
+          throw (ManifoldCFException)thr;
+        else if (thr instanceof ServiceInterruption)
+          throw (ServiceInterruption)thr;
+        else if (thr instanceof RuntimeException)
+          throw (RuntimeException)thr;
+        else
+          throw (Error)thr;
+      }
+    }
+
+    public WrappedConnection getConnection()
+    {
+      return connection;
+    }
+
+    public Statement getStatement()
+    {
+      return stmt;
+    }
+
+    public ResultSet getResultSet()
+    {
+      return rs;
+    }
+
+    public ResultSetMetaData getResultSetMetaData()
+    {
+      return rsmd;
+    }
+
+    public String[] getColumnNames()
+    {
+      return resultCols;
+    }
+  }
+
+  protected class JDBCPSResultSet implements IDynamicResultSet
+  {
+    protected WrappedConnection connection;
+    protected PreparedStatement ps;
+    protected ResultSet rs;
+    protected ResultSetMetaData rsmd;
+    protected String[] resultCols;
+    protected int maxResults;
+    protected ArrayList params;
+
+    /** Constructor */
+    public JDBCPSResultSet(String query, ArrayList params, int maxResults)
+      throws ManifoldCFException, ServiceInterruption
+    {
+      this.maxResults = maxResults;
+      this.params = params;
+      PreparedStatementQueryThread t = new PreparedStatementQueryThread(query,params);
+      try
+      {
+        t.start();
+        t.finishUp();
+        connection = t.getConnection();
+        ps = t.getPreparedStatement();
+        rs = t.getResultSet();
+        rsmd = t.getResultSetMetaData();
+        resultCols = t.getColumnNames();
+      }
+      catch (InterruptedException e)
+      {
+        cleanupParameters(params);
+        t.interrupt();
+        throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+      }
+    }
+
+    /** Get the next row from the resultset.
+    *@return the immutable row description, or null if there is no such row.
+    */
+    public IDynamicResultRow getNextRow()
+      throws ManifoldCFException, ServiceInterruption
+    {
+      if (maxResults == -1 || maxResults > 0)
+      {
+        IDynamicResultRow row = readNextResultRowViaThread(rs,rsmd,resultCols);
+        if (row != null && maxResults != -1)
+          maxResults--;
+        return row;
+      }
+      return null;
+    }
+
+    /** Close this resultset.
+    */
+    public void close()
+      throws ManifoldCFException, ServiceInterruption
+    {
+      ManifoldCFException rval = null;
+      Error error = null;
+      RuntimeException rtException = null;
+      if (rs != null)
+      {
+        try
+        {
+          closeResultset(rs);
+        }
+        catch (ServiceInterruption e)
+        {
+        }
+        catch (ManifoldCFException e)
+        {
+          if (rval == null || e.getErrorCode() == ManifoldCFException.INTERRUPTED)
+            rval = e;
+        }
+        catch (Error e)
+        {
+          error = e;
+        }
+        catch (RuntimeException e)
+        {
+          rtException = e;
+        }
+        finally
+        {
+          rs = null;
+        }
+      }
+      if (ps != null)
+      {
+        try
+        {
+          closePS(ps);
+        }
+        catch (ServiceInterruption e)
+        {
+        }
+        catch (ManifoldCFException e)
+        {
+          if (rval == null || e.getErrorCode() == ManifoldCFException.INTERRUPTED)
+            rval = e;
+        }
+        catch (Error e)
+        {
+          error = e;
+        }
+        catch (RuntimeException e)
+        {
+          rtException = e;
+        }
+        finally
+        {
+          ps = null;
+        }
+      }
+      if (connection != null)
+      {
+        try
+        {
+          JDBCConnectionFactory.releaseConnection(connection);
+        }
+        catch (Error e)
+        {
+          error = e;
+        }
+        catch (RuntimeException e)
+        {
+          rtException = e;
+        }
+        finally
+        {
+          connection = null;
+        }
+      }
+      if (params != null)
+      {
+        try
+        {
+          cleanupParameters(params);
+        }
+        catch (ManifoldCFException e)
+        {
+          if (rval == null || e.getErrorCode() == ManifoldCFException.INTERRUPTED)
+            rval = e;
+        }
+        catch (Error e)
+        {
+          error = e;
+        }
+        catch (RuntimeException e)
+        {
+          rtException = e;
+        }
+        finally
+        {
+          params = null;
+        }
+      }
+      if (error != null)
+        throw error;
+      if (rtException != null)
+        throw rtException;
+      if (rval != null)
+        throw rval;
+
+    }
+
+  }
+
+  protected class PreparedStatementQueryThread extends Thread
+  {
+    protected ArrayList params;
+    protected String query;
+
+    protected WrappedConnection connection = null;
+    protected Throwable exception = null;
+    protected PreparedStatement ps = null;
+    protected ResultSet rs = null;
+    protected ResultSetMetaData rsmd = null;
+    protected String[] resultCols = null;
+
+    public PreparedStatementQueryThread(String query, ArrayList params)
+    {
+      super();
+      setDaemon(true);
+      this.query = query;
+      this.params = params;
+    }
+
+    public void run()
+    {
+      try
+      {
+        connection = JDBCConnectionFactory.getConnection(jdbcProvider,driverString,userName,password);
+        ps = connection.getConnection().prepareStatement(query);
+        loadPS(ps, params);
+        rs = ps.executeQuery();
+        rsmd = rs.getMetaData();
+        resultCols = readColumnNames(rsmd,useName);
+      }
+      catch (Throwable e)
+      {
+        this.exception = e;
+        if (rs != null)
+        {
+          try
+          {
+            closeResultset(rs);
+          }
+          catch (ManifoldCFException e2)
+          {
+            if (e2.getErrorCode() == ManifoldCFException.INTERRUPTED)
+              this.exception = e2;
+          }
+          catch (Throwable e2)
+          {
+          }
+          finally
+          {
+            rs = null;
+          }
+        }
+        if (ps != null)
+        {
+          try
+          {
+            closePS(ps);
+          }
+          catch (ManifoldCFException e2)
+          {
+            if (e2.getErrorCode() == ManifoldCFException.INTERRUPTED)
+              this.exception = e2;
+          }
+          catch (Throwable e2)
+          {
+          }
+          finally
+          {
+            ps = null;
+          }
+        }
+        if (connection != null)
+        {
+          JDBCConnectionFactory.releaseConnection(connection);
+          connection = null;
+        }
+      }
+    }
+
+    public void finishUp()
+      throws ManifoldCFException, ServiceInterruption, InterruptedException
+    {
+      join();
+      Throwable thr = exception;
+      if (thr != null)
+      {
+        // Cleanup of parameters happens even if exception doing query
+        cleanupParameters(params);
+        if (thr instanceof java.sql.SQLException)
+          throw new ManifoldCFException("Exception doing connector query '"+query+"': "+thr.getMessage(),thr);
+        else if (thr instanceof ManifoldCFException)
+          throw (ManifoldCFException)thr;
+        else if (thr instanceof ServiceInterruption)
+          throw (ServiceInterruption)thr;
+        else if (thr instanceof RuntimeException)
+          throw (RuntimeException)thr;
+        else
+          throw (Error)thr;
+      }
+    }
+
+    public WrappedConnection getConnection()
+    {
+      return connection;
+    }
+
+    public PreparedStatement getPreparedStatement()
+    {
+      return ps;
+    }
+
+    public ResultSet getResultSet()
+    {
+      return rs;
+    }
+
+    public ResultSetMetaData getResultSetMetaData()
+    {
+      return rsmd;
+    }
+
+    public String[] getColumnNames()
+    {
+      return resultCols;
+    }
+  }
+
+  /** Dynamic result row implementation */
+  protected static class RDynamicRow extends RRow implements IDynamicResultRow
+  {
+    public RDynamicRow()
+    {
+      super();
+    }
+    
+    /** Close this resultrow.
+    */
+    public void close()
+      throws ManifoldCFException
+    {
+      // Discard everything permanently from the row
+      Iterator<String> columns = getColumns();
+      while (columns.hasNext())
+      {
+        String column = columns.next();
+        Object o = getValue(column);
+        discardDatabaseObject(o);
+      }
+    }
+
+  }
+  
+}
diff --git a/connectors/jdbc/connector/src/main/java/org/apache/manifoldcf/jdbc/JDBCConnectionFactory.java b/connectors/jdbc/connector/src/main/java/org/apache/manifoldcf/jdbc/JDBCConnectionFactory.java
new file mode 100644
index 0000000..7dc75db
--- /dev/null
+++ b/connectors/jdbc/connector/src/main/java/org/apache/manifoldcf/jdbc/JDBCConnectionFactory.java
@@ -0,0 +1,185 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.jdbc;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.core.jdbcpool.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+import org.apache.manifoldcf.crawler.system.Logging;
+import org.apache.manifoldcf.crawler.system.ManifoldCF;
+
+import java.util.*;
+import java.sql.*;
+import javax.naming.*;
+import javax.sql.*;
+
+/** This class creates and pools JDBC connections for the JDBC connector and authority.
+*/
+public class JDBCConnectionFactory
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  private static Map<String,String> driverMap;
+
+  private static ConnectionPoolManager _pool = null;
+
+  static
+  {
+    driverMap = new HashMap<String,String>();
+    driverMap.put("oracle:thin:@","oracle.jdbc.OracleDriver");
+    driverMap.put("postgresql:","org.postgresql.Driver");
+    driverMap.put("jtds:sqlserver:","net.sourceforge.jtds.jdbc.Driver");
+    driverMap.put("jtds:sybase:","net.sourceforge.jtds.jdbc.Driver");
+    driverMap.put("mysql:","com.mysql.jdbc.Driver");
+    try
+    {
+      _pool = new ConnectionPoolManager(120,false);
+    }
+    catch (Exception e)
+    {
+      System.err.println("Can't set up pool");
+      e.printStackTrace(System.err);
+    }
+  }
+
+  private JDBCConnectionFactory()
+  {
+  }
+
+  /** Convert various connection parameters to a JDBC connection string, used in conjunction with the
+  * provider name.
+  */
+  public static String getJDBCDriverString(String providerName, String host, String database, String rawDriverString)
+  {
+    if (rawDriverString != null && rawDriverString.length() > 0)
+      return rawDriverString;
+
+    if (database.length() == 0)
+      database = "_root_";
+
+    String instanceName = null;
+    // Special for MSSQL: Allow database spec to contain an instance name too, in form:
+    // <instance>/<database>
+    if (providerName.startsWith("jtds:"))
+    {
+      int slashIndex = database.indexOf("/");
+      if (slashIndex != -1)
+      {
+        instanceName = database.substring(0,slashIndex);
+        database = database.substring(slashIndex+1);
+      }
+    }
+
+    return host + "/" + database + ((instanceName==null)?"":";instance="+instanceName);
+  }
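+
+  // Illustrative examples, not part of the original source, of the strings the method above
+  // produces (host, database, and instance names are hypothetical):
+  //
+  //   getJDBCDriverString("postgresql:", "dbhost:5432", "mydb", null)
+  //     -> "dbhost:5432/mydb"
+  //   getJDBCDriverString("jtds:sqlserver:", "dbhost:1433", "MYINSTANCE/mydb", null)
+  //     -> "dbhost:1433/mydb;instance=MYINSTANCE"
+  //   getJDBCDriverString("oracle:thin:@", "dbhost:1521", "orcl", "custom-connect-string")
+  //     -> "custom-connect-string"
+  //
+  // getConnection() below then prefixes "jdbc:" plus the provider name and "//", so the first
+  // example yields the URL "jdbc:postgresql://dbhost:5432/mydb".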
+  
+  public static WrappedConnection getConnection(String providerName, String jdbcDriverString, String userName, String password)
+    throws ManifoldCFException, ServiceInterruption
+  {
+    String driverClassName = driverMap.get(providerName);
+    if (driverClassName == null)
+      throw new ManifoldCFException("Unrecognized jdbc provider: '"+providerName+"'");
+
+    String dburl = "jdbc:" + providerName + "//" + jdbcDriverString;
+    if (Logging.connectors != null && Logging.connectors.isDebugEnabled())
+      Logging.connectors.debug("JDBC: The connect string is '"+dburl+"'");
+    try
+    {
+      // Hope for a connection now
+      if (_pool != null)
+      {
+        // Build a unique string to identify the pool.  This has to include
+        // the database and host at a minimum.
+
+        // Provider is part of the pool key, so that the pools can distinguish between different databases
+        String poolKey = providerName + "/" + jdbcDriverString;
+
+        // Better include the credentials on the pool key, or we won't be able to change those and have it build new connections
+        // The password value is SHA-1 hashed, because the pool driver reports the password in many exceptions and we don't want it
+        // to be displayed.
+        poolKey += "/" + userName + "/" + ManifoldCF.hash(password);
+
+        ConnectionPool cp;
+        synchronized (_pool)
+        {
+          cp = _pool.getPool(poolKey);
+          if (cp == null)
+          {
+            // Register the driver here
+            Class.forName(driverClassName);
+            //System.out.println("Class name '"+driverClassName+"'; URL = '"+dburl+"'");
+            cp =_pool.addAlias(poolKey, driverClassName, dburl,
+              userName, password, 30, 300000L);
+          }
+        }
+        return cp.getConnection();
+      }
+      else
+        throw new ManifoldCFException("Can't get connection since pool driver did not initialize properly");
+    }
+    catch (InterruptedException e)
+    {
+      throw new ManifoldCFException(e.getMessage(),ManifoldCFException.INTERRUPTED);
+    }
+    catch (java.sql.SQLException e)
+    {
+      e.printStackTrace();
+      // Unfortunately, the connection pool manager manages to eat all actual connection setup errors.  This makes it very hard to figure anything out
+      // when something goes wrong.  So, we try again, going directly this time as a means of getting decent error feedback.
+      try
+      {
+        if (userName != null && userName.length() > 0)
+        {
+          DriverManager.getConnection(dburl, userName, password).close();
+        }
+        else
+        {
+          DriverManager.getConnection(dburl).close();
+        }
+      }
+      catch (java.sql.SQLException e2)
+      {
+        throw new ManifoldCFException("Error getting connection: "+e2.getMessage(),e2,ManifoldCFException.SETUP_ERROR);
+      }
+      // By definition, this must be a service interruption, because the direct route in setting up the connection succeeded.
+      long currentTime = System.currentTimeMillis();
+      throw new ServiceInterruption("Error getting connection: "+e.getMessage(),e,currentTime + 300000L,currentTime + 6 * 60 * 60000L,-1,true);
+    }
+    catch (java.lang.ClassNotFoundException e)
+    {
+      throw new ManifoldCFException("Driver class not found: "+e.getMessage(),e,ManifoldCFException.SETUP_ERROR);
+    }
+    catch (java.lang.InstantiationException e)
+    {
+      throw new ManifoldCFException("Driver class not instantiable: "+e.getMessage(),e,ManifoldCFException.SETUP_ERROR);
+    }
+    catch (java.lang.IllegalAccessException e)
+    {
+      throw new ManifoldCFException("Driver class not accessible: "+e.getMessage(),e,ManifoldCFException.SETUP_ERROR);
+    }
+  }
+
+  public static void releaseConnection(WrappedConnection c)
+  {
+    c.release();
+  }
+
+}
+
diff --git a/connectors/jdbc/connector/src/main/java/org/apache/manifoldcf/jdbc/JDBCConstants.java b/connectors/jdbc/connector/src/main/java/org/apache/manifoldcf/jdbc/JDBCConstants.java
new file mode 100644
index 0000000..f8c3210
--- /dev/null
+++ b/connectors/jdbc/connector/src/main/java/org/apache/manifoldcf/jdbc/JDBCConstants.java
@@ -0,0 +1,92 @@
+/* $Id: JDBCConstants.java 988245 2010-08-23 18:39:35Z kwright $ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.jdbc;
+
+/** These are the constant strings needed by the JDBC connector and authority.
+*/
+public class JDBCConstants
+{
+  public static final String _rcsid = "@(#)$Id: JDBCConstants.java 988245 2010-08-23 18:39:35Z kwright $";
+
+  /** The jdbc provider parameter */
+  public static String providerParameter = "JDBC Provider";
+  /** The column interrogation method name parameter */
+  public static String methodParameter = "JDBC column access method";
+  /** The host machine config parameter */
+  public static String hostParameter = "Host";
+  /** The database name config parameter */
+  public static String databaseNameParameter = "Database name";
+  /** The raw configuration string */
+  public static String driverStringParameter = "Raw driver string";
+  /** The user name config parameter */
+  public static String databaseUserName = "User name";
+  /** The password config parameter */
+  public static String databasePassword = "Password";
+
+  /** The node containing the identifier query */
+  public static String idQueryNode = "idquery";
+  /** The node containing the version query */
+  public static String versionQueryNode = "versionquery";
+  /** The node containing the process query */
+  public static String dataQueryNode = "dataquery";
+
+  /** The name of the id return column */
+  public static String idReturnColumnName = "lcf__id";
+  /** The name of the version return column */
+  public static String versionReturnColumnName = "lcf__version";
+  /** The name of the url return column */
+  public static String urlReturnColumnName = "lcf__url";
+  /** The name of the data return column */
+  public static String dataReturnColumnName = "lcf__data";
+  /** The name of the content type return column */
+  public static String contentTypeReturnColumnName = "lcf__contenttype";
+  /** The name of the token return column */
+  public static String tokenReturnColumnName = "lcf__token";
+  
+  /** The name of the id return variable */
+  public static String idReturnVariable = "IDCOLUMN";
+  /** The name of the version return variable */
+  public static String versionReturnVariable = "VERSIONCOLUMN";
+  /** The name of the url return variable */
+  public static String urlReturnVariable = "URLCOLUMN";
+  /** The name of the data return variable */
+  public static String dataReturnVariable = "DATACOLUMN";
+  /** The name of the content type return variable */
+  public static String contentTypeReturnVariable = "CONTENTTYPE";
+  /** The name of the start time variable */
+  public static String startTimeVariable = "STARTTIME";
+  /** The name of the end time variable */
+  public static String endTimeVariable = "ENDTIME";
+  /** The name of the id list */
+  public static String idListVariable = "IDLIST";
+  /** The name of token return variable */
+  public static String tokenReturnVariable = "TOKENCOLUMN";
+
+  /** JDBCAuthority */
+  /** Query returning user Id parameter name */
+  public static String databaseUserIdQuery = "User Id Query";
+  /** Query returning user tokens parameter name */
+  public static String databaseTokensQuery = "User Tokens Query";
+  /** The name of the user name variable */
+  public static String userNameVariable = "USERNAME";
+  /** The name of the user id variable */
+  public static String userIDVariable = "UID";
+}
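+
+// Illustrative note, not part of the original source: these variable names are intended for
+// substitution into the user-supplied queries (the connector documentation describes a $(NAME)
+// syntax).  A hypothetical seeding query, with made-up table and column names, might look like:
+//
+//   SELECT idfield AS $(IDCOLUMN) FROM documenttable
+//     WHERE modifydate > $(STARTTIME) AND modifydate <= $(ENDTIME)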
+
+
diff --git a/connectors/jdbc/connector/src/main/native2ascii/org/apache/manifoldcf/authorities/authorities/jdbc/common_en_US.properties b/connectors/jdbc/connector/src/main/native2ascii/org/apache/manifoldcf/authorities/authorities/jdbc/common_en_US.properties
new file mode 100644
index 0000000..241079b
--- /dev/null
+++ b/connectors/jdbc/connector/src/main/native2ascii/org/apache/manifoldcf/authorities/authorities/jdbc/common_en_US.properties
@@ -0,0 +1,38 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+JDBCAuthority.DatabaseType=Database Type
+JDBCAuthority.AccessMethod=Access Method
+JDBCAuthority.ByName=by name
+JDBCAuthority.ByLabel=by label
+JDBCAuthority.Server=Server
+JDBCAuthority.Credentials=Credentials
+JDBCAuthority.DatabaseType2=Database type:
+JDBCAuthority.DatabaseHostAndPort=Database host and port:
+JDBCAuthority.DatabaseServiceNameOrInstanceDatabase=Database service name or instance/database:
+JDBCAuthority.RawDatabaseConnectString=Raw database connect string:
+JDBCAuthority.UserName=User name:
+JDBCAuthority.Password=Password:
+JDBCAuthority.Parameters=Parameters:
+JDBCAuthority.Queries=Queries
+JDBCAuthority.UserIdQuery=User ID query:
+JDBCAuthority.TokenQuery=Authorization tokens query:
+JDBCAuthority.returnUserIdOrEmptyResultset=(return user id if user exists or empty resultset)
+JDBCAuthority.returnTokensForUser=(return authorization tokens for user)
+JDBCAuthority.NoAccessTokensPresent=No access tokens present
+JDBCAuthority.NoAccessTokensSpecified=No access tokens specified
+JDBCAuthority.PleaseFillInADatabaseServerName=Please fill in a database server name
+JDBCAuthority.PleaseFillInTheNameOfTheDatabase=Please fill in the name of the database
+JDBCAuthority.PleaseSupplyTheDatabaseUsernameForThisConnection=Please supply the database username for this connection
diff --git a/connectors/jdbc/connector/src/main/native2ascii/org/apache/manifoldcf/authorities/authorities/jdbc/common_ja_JP.properties b/connectors/jdbc/connector/src/main/native2ascii/org/apache/manifoldcf/authorities/authorities/jdbc/common_ja_JP.properties
new file mode 100644
index 0000000..3dd4986
--- /dev/null
+++ b/connectors/jdbc/connector/src/main/native2ascii/org/apache/manifoldcf/authorities/authorities/jdbc/common_ja_JP.properties
@@ -0,0 +1,38 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+JDBCAuthority.DatabaseType=データベースタイプ
+JDBCAuthority.AccessMethod=アクセス方式
+JDBCAuthority.ByName=by name
+JDBCAuthority.ByLabel=by label
+JDBCAuthority.Server=サーバ
+JDBCAuthority.Credentials=証明書
+JDBCAuthority.DatabaseType2=データベースタイプ:
+JDBCAuthority.DatabaseHostAndPort=データベースホスト/ポート:
+JDBCAuthority.DatabaseServiceNameOrInstanceDatabase=データベースサービス名又はインスタンス/データベース:
+JDBCAuthority.RawDatabaseConnectString=Raw database connect string:
+JDBCAuthority.UserName=ユーザ名:
+JDBCAuthority.Password=パスワード:
+JDBCAuthority.Parameters=引数:
+JDBCAuthority.Queries=クエリ
+JDBCAuthority.UserIdQuery=ユーザIDクエリ:
+JDBCAuthority.TokenQuery=認証トークンクエリ:
+JDBCAuthority.returnUserIdOrEmptyResultset=(return user id if user exists or empty resultset)
+JDBCAuthority.returnTokensForUser=(return authorization tokens for user)
+JDBCAuthority.NoAccessTokensPresent=No access tokens present
+JDBCAuthority.NoAccessTokensSpecified=No access tokens specified
+JDBCAuthority.PleaseFillInADatabaseServerName=データベースサーバ名を入力してください
+JDBCAuthority.PleaseFillInTheNameOfTheDatabase=データベース名を入力してください
+JDBCAuthority.PleaseSupplyTheDatabaseUsernameForThisConnection=このコネクションに対するデータベースユーザ名を提供してください
diff --git a/connectors/jdbc/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/jdbc/common_en_US.properties b/connectors/jdbc/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/jdbc/common_en_US.properties
index ebf7a30..4d8ccac 100644
--- a/connectors/jdbc/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/jdbc/common_en_US.properties
+++ b/connectors/jdbc/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/jdbc/common_en_US.properties
@@ -22,6 +22,7 @@
 JDBCConnector.DatabaseType2=Database type:
 JDBCConnector.DatabaseHostAndPort=Database host and port:
 JDBCConnector.DatabaseServiceNameOrInstanceDatabase=Database service name or instance/database:
+JDBCConnector.RawDatabaseConnectString=Raw database connect string:
 JDBCConnector.UserName=User name:
 JDBCConnector.Password=Password:
 JDBCConnector.Parameters=Parameters:
@@ -54,23 +55,3 @@
 JDBCConnector.VersionCheckQuery=Version check query:
 JDBCConnector.DataQuery=Data query:
 JDBCConnector.AccessTokens=Access tokens:
-
-JDBCAuthority.DatabaseType=Database Type
-JDBCAuthority.Server=Server
-JDBCAuthority.Credentials=Credentials
-JDBCAuthority.DatabaseType2=Database type:
-JDBCAuthority.DatabaseHostAndPort=Database host and port:
-JDBCAuthority.DatabaseServiceNameOrInstanceDatabase=Database service name or instance/database:
-JDBCAuthority.UserName=User name:
-JDBCAuthority.Password=Password:
-JDBCAuthority.Parameters=Parameters:
-JDBCAuthority.Queries=Queries
-JDBCAuthority.UserIdQuery=User ID query:
-JDBCAuthority.TokenQuery=Authorization tokens query:
-JDBCAuthority.returnUserIdOrEmptyResultset=(return user id if user exists or empty resultset)
-JDBCAuthority.returnTokensForUser=(return authorization tokens for user)
-JDBCAuthority.NoAccessTokensPresent=No access tokens present
-JDBCAuthority.NoAccessTokensSpecified=No access tokens specified
-JDBCAuthority.PleaseFillInADatabaseServerName=Please fill in a database server name
-JDBCAuthority.PleaseFillInTheNameOfTheDatabase=Please fill in the name of the database
-JDBCAuthority.PleaseSupplyTheDatabaseUsernameForThisConnection=Please supply the database username for this connection
diff --git a/connectors/jdbc/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/jdbc/common_ja_JP.properties b/connectors/jdbc/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/jdbc/common_ja_JP.properties
index 04ffa2c..78cad4e 100644
--- a/connectors/jdbc/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/jdbc/common_ja_JP.properties
+++ b/connectors/jdbc/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/jdbc/common_ja_JP.properties
@@ -22,6 +22,7 @@
 JDBCConnector.DatabaseType2=データベースタイプ:
 JDBCConnector.DatabaseHostAndPort=データベースホスト/ポート:
 JDBCConnector.DatabaseServiceNameOrInstanceDatabase=データベースサービス名又はインスタンス/データベース:
+JDBCConnector.RawDatabaseConnectString=Raw database connect string:
 JDBCConnector.UserName=ユーザ名:
 JDBCConnector.Password=パスワード:
 JDBCConnector.Parameters=引数:
@@ -54,23 +55,3 @@
 JDBCConnector.VersionCheckQuery=バージョンチェッククエリー:
 JDBCConnector.DataQuery=データクエリー:
 JDBCConnector.AccessTokens=アクセストークン:
-
-JDBCAuthority.DatabaseType=データベースタイプ
-JDBCAuthority.Server=サーバ
-JDBCAuthority.Credentials=証明書
-JDBCAuthority.DatabaseType2=データベースタイプ:
-JDBCAuthority.DatabaseHostAndPort=データベースホスト/ポート:
-JDBCAuthority.DatabaseServiceNameOrInstanceDatabase=データベースサービス名又はインスタンス/データベース:
-JDBCAuthority.UserName=ユーザ名:
-JDBCAuthority.Password=パスワード:
-JDBCAuthority.Parameters=引数:
-JDBCAuthority.Queries=クエリ
-JDBCAuthority.UserIdQuery=ユーザIDクエリ:
-JDBCAuthority.TokenQuery=認証トークンクエリ:
-JDBCAuthority.returnUserIdOrEmptyResultset=(return user id if user exists or empty resultset)
-JDBCAuthority.returnTokensForUser=(return authorization tokens for user)
-JDBCAuthority.NoAccessTokensPresent=No access tokens present
-JDBCAuthority.NoAccessTokensSpecified=No access tokens specified
-JDBCAuthority.PleaseFillInADatabaseServerName=データベースサーバ名を入力してください
-JDBCAuthority.PleaseFillInTheNameOfTheDatabase=データベース名を入力してください
-JDBCAuthority.PleaseSupplyTheDatabaseUsernameForThisConnection=このコネクションに対するデータベースユーザ名を提供してください
diff --git a/connectors/jdbc/pom.xml b/connectors/jdbc/pom.xml
index 0dd4e1a..4e76a65 100644
--- a/connectors/jdbc/pom.xml
+++ b/connectors/jdbc/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.manifoldcf</groupId>
     <artifactId>mcf-connectors</artifactId>
-    <version>1.2-SNAPSHOT</version>
+    <version>1.5-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
 
@@ -30,14 +30,22 @@
   <build>
     <sourceDirectory>${basedir}/connector/src/main/java</sourceDirectory>
     <testSourceDirectory>${basedir}/connector/src/test/java</testSourceDirectory>
+    <resources>
+      <resource>
+        <directory>${basedir}/connector/src/main/native2ascii</directory>
+        <includes>
+          <include>**/*.properties</include>
+        </includes>
+      </resource>
+    </resources>
+
     <plugins>
       <plugin>
         <groupId>org.codehaus.mojo</groupId>
         <artifactId>native2ascii-maven-plugin</artifactId>
-        <version>1.0-alpha-1</version>
+        <version>1.0-beta-1</version>
         <configuration>
-            <dest>target/classes</dest>
-            <src>connector/src/main/native2ascii</src>
+            <workDir>target/classes</workDir>
         </configuration>
         <executions>
             <execution>
@@ -47,7 +55,9 @@
                 </goals>
                 <configuration>
                     <encoding>UTF8</encoding>
-                    <includes>**/*.properties</includes>
+                    <includes>
+                      <include>**/*.properties</include>
+                    </includes>
                 </configuration>
             </execution>
         </executions>
diff --git a/connectors/jira/build.xml b/connectors/jira/build.xml
new file mode 100644
index 0000000..4d2f0f9
--- /dev/null
+++ b/connectors/jira/build.xml
@@ -0,0 +1,40 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<project name="jira" default="all">
+
+    <import file="../connector-build.xml"/>
+
+    <path id="connector-classpath">
+        <path refid="mcf-connector-build.connector-classpath"/>
+        <fileset dir="../../lib">
+            <include name="google-*.jar"/>
+            <include name="jackson-core.jar"/>
+        </fileset>
+    </path>
+
+    <target name="lib" depends="mcf-connector-build.lib,precompile-check" if="canBuild">
+        <mkdir dir="dist/lib"/>
+        <copy todir="dist/lib">
+            <fileset dir="../../lib">
+                <include name="google-*.jar"/>
+                <include name="jackson-core.jar"/>
+            </fileset>
+        </copy>
+    </target>
+
+</project>
diff --git a/connectors/jira/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/jira/JiraAuthorityConnector.java b/connectors/jira/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/jira/JiraAuthorityConnector.java
new file mode 100644
index 0000000..26b62d4
--- /dev/null
+++ b/connectors/jira/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/jira/JiraAuthorityConnector.java
@@ -0,0 +1,642 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+
+package org.apache.manifoldcf.authorities.authorities.jira;
+
+import org.apache.manifoldcf.core.common.*;
+
+import java.io.IOException;
+import java.io.InterruptedIOException;
+import java.util.HashMap;
+import java.util.HashSet;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.Locale;
+import java.util.Map;
+import java.util.Date;
+import java.util.Set;
+import java.util.Iterator;
+import org.apache.manifoldcf.authorities.system.Logging;
+import org.apache.manifoldcf.authorities.authorities.BaseAuthorityConnector;
+import org.apache.manifoldcf.authorities.interfaces.AuthorizationResponse;
+import org.apache.manifoldcf.core.interfaces.ConfigParams;
+import org.apache.manifoldcf.core.interfaces.ManifoldCFException;
+import org.apache.commons.lang.StringUtils;
+import org.apache.manifoldcf.core.interfaces.IHTTPOutput;
+import org.apache.manifoldcf.core.interfaces.IPasswordMapperActivity;
+import org.apache.manifoldcf.core.interfaces.IPostParameters;
+import org.apache.manifoldcf.core.interfaces.IThreadContext;
+import java.util.Map.Entry;
+
+/** Jira Authority Connector.  This connector verifies user existence against Jira.
+ */
+public class JiraAuthorityConnector extends BaseAuthorityConnector {
+
+  /** Deny access token for default authority */
+  private final static String defaultAuthorityDenyToken = "DEAD_AUTHORITY";
+
+  // Configuration tabs
+  private static final String JIRA_SERVER_TAB_PROPERTY = "JiraAuthorityConnector.Server";
+  private static final String JIRA_PROXY_TAB_PROPERTY = "JiraAuthorityConnector.Proxy";
+  
+  // Template names for configuration
+  /**
+   * Forward to the javascript to check the configuration parameters
+   */
+  private static final String EDIT_CONFIG_HEADER_FORWARD = "editConfiguration_jira.js";
+  /**
+   * Server tab template
+   */
+  private static final String EDIT_CONFIG_FORWARD_SERVER = "editConfiguration_jira_server.html";
+  /**
+   * Proxy tab template
+   */
+  private static final String EDIT_CONFIG_FORWARD_PROXY = "editConfiguration_jira_proxy.html";
+  
+  /**
+   * Forward to the HTML template to view the configuration parameters
+   */
+  private static final String VIEW_CONFIG_FORWARD = "viewConfiguration_jira.html";
+   
+  // Session data
+  protected JiraSession session = null;
+  protected long lastSessionFetch = -1L;
+  protected static final long timeToRelease = 300000L;
+  
+  // Parameter data
+  protected String jiraprotocol = null;
+  protected String jirahost = null;
+  protected String jiraport = null;
+  protected String jirapath = null;
+  protected String clientid = null;
+  protected String clientsecret = null;
+
+  protected String jiraproxyhost = null;
+  protected String jiraproxyport = null;
+  protected String jiraproxydomain = null;
+  protected String jiraproxyusername = null;
+  protected String jiraproxypassword = null;
+  
+  public JiraAuthorityConnector() {
+    super();
+  }
+
+  /**
+   * Close the connection. Call this before discarding the connection.
+   */
+  @Override
+  public void disconnect() throws ManifoldCFException {
+    if (session != null) {
+      session.close();
+      session = null;
+      lastSessionFetch = -1L;
+    }
+
+    jiraprotocol = null;
+    jirahost = null;
+    jiraport = null;
+    jirapath = null;
+    clientid = null;
+    clientsecret = null;
+    
+    jiraproxyhost = null;
+    jiraproxyport = null;
+    jiraproxydomain = null;
+    jiraproxyusername = null;
+    jiraproxypassword = null;
+  }
+
+  /**
+   * This method creates a new JIRA session for a JIRA
+   * repository. If the repositoryId is not provided in the configuration, the
+   * connector will retrieve all the repositories exposed for this endpoint
+   * and will start to use the first one.
+   *
+   * @param configParameters is the set of configuration parameters, which in
+   * this case describe the target appliance, basic auth configuration, etc.
+   * (This formerly came out of the ini file.)
+   */
+  @Override
+  public void connect(ConfigParams configParams) {
+    super.connect(configParams);
+
+    jiraprotocol = params.getParameter(JiraConfig.JIRA_PROTOCOL_PARAM);
+    jirahost = params.getParameter(JiraConfig.JIRA_HOST_PARAM);
+    jiraport = params.getParameter(JiraConfig.JIRA_PORT_PARAM);
+    jirapath = params.getParameter(JiraConfig.JIRA_PATH_PARAM);
+    clientid = params.getParameter(JiraConfig.CLIENT_ID_PARAM);
+    clientsecret = params.getObfuscatedParameter(JiraConfig.CLIENT_SECRET_PARAM);
+    
+    jiraproxyhost = params.getParameter(JiraConfig.JIRA_PROXYHOST_PARAM);
+    jiraproxyport = params.getParameter(JiraConfig.JIRA_PROXYPORT_PARAM);
+    jiraproxydomain = params.getParameter(JiraConfig.JIRA_PROXYDOMAIN_PARAM);
+    jiraproxyusername = params.getParameter(JiraConfig.JIRA_PROXYUSERNAME_PARAM);
+    jiraproxypassword = params.getObfuscatedParameter(JiraConfig.JIRA_PROXYPASSWORD_PARAM);
+    
+  }
+
+  /**
+   * Test the connection. Returns a string describing the connection
+   * integrity.
+   *
+   * @return the connection's status as a displayable string.
+   */
+  @Override
+  public String check() throws ManifoldCFException {
+    try {
+      checkConnection();
+      return super.check();
+    } catch (ManifoldCFException e) {
+      return "Connection failed: " + e.getMessage();
+    }
+  }
+
+
+  /**
+   * Set up a session
+   */
+  protected JiraSession getSession() throws ManifoldCFException {
+    if (session == null) {
+      // Check for parameter validity
+
+      if (StringUtils.isEmpty(jiraprotocol)) {
+        throw new ManifoldCFException("Parameter " + JiraConfig.JIRA_PROTOCOL_PARAM
+            + " required but not set");
+      }
+
+      if (Logging.authorityConnectors.isDebugEnabled()) {
+        Logging.authorityConnectors.debug("JIRA: jiraprotocol = '" + jiraprotocol + "'");
+      }
+
+      if (StringUtils.isEmpty(jirahost)) {
+        throw new ManifoldCFException("Parameter " + JiraConfig.JIRA_HOST_PARAM
+            + " required but not set");
+      }
+
+      if (Logging.authorityConnectors.isDebugEnabled()) {
+        Logging.authorityConnectors.debug("JIRA: jirahost = '" + jirahost + "'");
+      }
+
+      if (Logging.authorityConnectors.isDebugEnabled()) {
+        Logging.authorityConnectors.debug("JIRA: jiraport = '" + jiraport + "'");
+      }
+
+      if (StringUtils.isEmpty(jirapath)) {
+        throw new ManifoldCFException("Parameter " + JiraConfig.JIRA_PATH_PARAM
+            + " required but not set");
+      }
+
+      if (Logging.authorityConnectors.isDebugEnabled()) {
+        Logging.authorityConnectors.debug("JIRA: jirapath = '" + jirapath + "'");
+      }
+
+      if (Logging.authorityConnectors.isDebugEnabled()) {
+        Logging.authorityConnectors.debug("JIRA: Clientid = '" + clientid + "'");
+      }
+
+      if (Logging.authorityConnectors.isDebugEnabled()) {
+        Logging.authorityConnectors.debug("JIRA: Clientsecret = '" + clientsecret + "'");
+      }
+
+      String jiraurl = jiraprotocol + "://" + jirahost + (StringUtils.isEmpty(jiraport)?"":":"+jiraport) + jirapath;
+      session = new JiraSession(clientid, clientsecret, jiraurl,
+        jiraproxyhost, jiraproxyport, jiraproxydomain, jiraproxyusername, jiraproxypassword);
+
+    }
+    lastSessionFetch = System.currentTimeMillis();
+    return session;
+  }
+
+  /** This method is called to assess whether this connector instance should
+  * actually be counted as being connected.
+  *@return true if the connector instance is actually connected.
+  */
+  @Override
+  public boolean isConnected()
+  {
+    return session != null;
+  }
+
+  @Override
+  public void poll() throws ManifoldCFException {
+    if (lastSessionFetch == -1L) {
+      return;
+    }
+
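+    // Release the session if it has sat idle for longer than timeToRelease.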
+    long currentTime = System.currentTimeMillis();
+    if (currentTime >= lastSessionFetch + timeToRelease) {
+      session.close();
+      session = null;
+      lastSessionFetch = -1L;
+    }
+  }
+
+  /**
+   * Fill in a Server tab configuration parameter map for calling a Velocity
+   * template.
+   *
+   * @param newMap is the map to fill in
+   * @param parameters is the current set of configuration parameters
+   */
+  private static void fillInServerConfigurationMap(Map<String, Object> newMap, IPasswordMapperActivity mapper, ConfigParams parameters) {
+    String jiraprotocol = parameters.getParameter(JiraConfig.JIRA_PROTOCOL_PARAM);
+    String jirahost = parameters.getParameter(JiraConfig.JIRA_HOST_PARAM);
+    String jiraport = parameters.getParameter(JiraConfig.JIRA_PORT_PARAM);
+    String jirapath = parameters.getParameter(JiraConfig.JIRA_PATH_PARAM);
+    String clientid = parameters.getParameter(JiraConfig.CLIENT_ID_PARAM);
+    String clientsecret = parameters.getObfuscatedParameter(JiraConfig.CLIENT_SECRET_PARAM);
+
+    if (jiraprotocol == null)
+      jiraprotocol = JiraConfig.JIRA_PROTOCOL_DEFAULT;
+    if (jirahost == null)
+      jirahost = JiraConfig.JIRA_HOST_DEFAULT;
+    if (jiraport == null)
+      jiraport = JiraConfig.JIRA_PORT_DEFAULT;
+    if (jirapath == null)
+      jirapath = JiraConfig.JIRA_PATH_DEFAULT;
+    
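+    // The client secret is never rendered directly; the password mapper substitutes an opaque key.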
+    if (clientid == null)
+      clientid = JiraConfig.CLIENT_ID_DEFAULT;
+    if (clientsecret == null)
+      clientsecret = JiraConfig.CLIENT_SECRET_DEFAULT;
+    else
+      clientsecret = mapper.mapPasswordToKey(clientsecret);
+
+    newMap.put("JIRAPROTOCOL", jiraprotocol);
+    newMap.put("JIRAHOST", jirahost);
+    newMap.put("JIRAPORT", jiraport);
+    newMap.put("JIRAPATH", jirapath);
+    newMap.put("CLIENTID", clientid);
+    newMap.put("CLIENTSECRET", clientsecret);
+  }
+
+  /**
+   * Fill in a Proxy tab configuration parameter map for calling a Velocity
+   * template.
+   *
+   * @param newMap is the map to fill in
+   * @param parameters is the current set of configuration parameters
+   */
+  private static void fillInProxyConfigurationMap(Map<String, Object> newMap, IPasswordMapperActivity mapper, ConfigParams parameters) {
+    String jiraproxyhost = parameters.getParameter(JiraConfig.JIRA_PROXYHOST_PARAM);
+    String jiraproxyport = parameters.getParameter(JiraConfig.JIRA_PROXYPORT_PARAM);
+    String jiraproxydomain = parameters.getParameter(JiraConfig.JIRA_PROXYDOMAIN_PARAM);
+    String jiraproxyusername = parameters.getParameter(JiraConfig.JIRA_PROXYUSERNAME_PARAM);
+    String jiraproxypassword = parameters.getObfuscatedParameter(JiraConfig.JIRA_PROXYPASSWORD_PARAM);
+
+    if (jiraproxyhost == null)
+      jiraproxyhost = JiraConfig.JIRA_PROXYHOST_DEFAULT;
+    if (jiraproxyport == null)
+      jiraproxyport = JiraConfig.JIRA_PROXYPORT_DEFAULT;
+
+    if (jiraproxydomain == null)
+      jiraproxydomain = JiraConfig.JIRA_PROXYDOMAIN_DEFAULT;
+    if (jiraproxyusername == null)
+      jiraproxyusername = JiraConfig.JIRA_PROXYUSERNAME_DEFAULT;
+    if (jiraproxypassword == null)
+      jiraproxypassword = JiraConfig.JIRA_PROXYPASSWORD_DEFAULT;
+    else
+      jiraproxypassword = mapper.mapPasswordToKey(jiraproxypassword);
+
+    newMap.put("JIRAPROXYHOST", jiraproxyhost);
+    newMap.put("JIRAPROXYPORT", jiraproxyport);
+    newMap.put("JIRAPROXYDOMAIN", jiraproxydomain);
+    newMap.put("JIRAPROXYUSERNAME", jiraproxyusername);
+    newMap.put("JIRAPROXYPASSWORD", jiraproxypassword);
+  }
+
+  /**
+   * View configuration. This method is called in the body section of the
+   * connector's view configuration page. Its purpose is to present the
+   * connection information to the user. The coder can presume that the HTML
+   * that is output from this configuration will be within appropriate <html>
+   * and <body> tags.
+   *
+   * @param threadContext is the local thread context.
+   * @param out is the output to which any HTML should be sent.
+   * @param parameters are the configuration parameters, as they currently
+   * exist, for this connection being configured.
+   */
+  @Override
+  public void viewConfiguration(IThreadContext threadContext, IHTTPOutput out,
+      Locale locale, ConfigParams parameters) throws ManifoldCFException, IOException {
+    Map<String, Object> paramMap = new HashMap<String, Object>();
+
+    // Fill in map from each tab
+    fillInServerConfigurationMap(paramMap, out, parameters);
+    fillInProxyConfigurationMap(paramMap, out, parameters);
+
+    Messages.outputResourceWithVelocity(out,locale,VIEW_CONFIG_FORWARD,paramMap);
+  }
+
+  /**
+   *
+   * Output the configuration header section. This method is called in the
+   * head section of the connector's configuration page. Its purpose is to add
+   * the required tabs to the list, and to output any javascript methods that
+   * might be needed by the configuration editing HTML.
+   *
+   * @param threadContext is the local thread context.
+   * @param out is the output to which any HTML should be sent.
+   * @param parameters are the configuration parameters, as they currently
+   * exist, for this connection being configured.
+   * @param tabsArray is an array of tab names. Add to this array any tab
+   * names that are specific to the connector.
+   */
+  @Override
+  public void outputConfigurationHeader(IThreadContext threadContext,
+      IHTTPOutput out, Locale locale, ConfigParams parameters, List<String> tabsArray)
+      throws ManifoldCFException, IOException {
+    // Add the Server tab
+    tabsArray.add(Messages.getString(locale, JIRA_SERVER_TAB_PROPERTY));
+    // Add the Proxy tab
+    tabsArray.add(Messages.getString(locale, JIRA_PROXY_TAB_PROPERTY));
+    // Map the parameters
+    Map<String, Object> paramMap = new HashMap<String, Object>();
+
+    // Fill in the parameters from each tab
+    fillInServerConfigurationMap(paramMap, out, parameters);
+    fillInProxyConfigurationMap(paramMap, out, parameters);
+        
+    // Output the Javascript - only one Velocity template for all tabs
+    Messages.outputResourceWithVelocity(out,locale,EDIT_CONFIG_HEADER_FORWARD,paramMap);
+  }
+
+  @Override
+  public void outputConfigurationBody(IThreadContext threadContext,
+      IHTTPOutput out, Locale locale, ConfigParams parameters, String tabName)
+      throws ManifoldCFException, IOException {
+
+
+    // Call the Velocity templates for each tab
+    Map<String, Object> paramMap = new HashMap<String, Object>();
+    // Set the tab name
+    paramMap.put("TabName", tabName);
+
+    // Fill in the parameters
+    fillInServerConfigurationMap(paramMap, out, parameters);
+    fillInProxyConfigurationMap(paramMap, out, parameters);
+        
+    // Server tab
+    Messages.outputResourceWithVelocity(out,locale,EDIT_CONFIG_FORWARD_SERVER,paramMap);
+    // Proxy tab
+    Messages.outputResourceWithVelocity(out,locale,EDIT_CONFIG_FORWARD_PROXY,paramMap);
+
+  }
+
+  /**
+   * Process a configuration post. This method is called at the start of the
+   * connector's configuration page, whenever there is a possibility that form
+   * data for a connection has been posted. Its purpose is to gather form
+   * information and modify the configuration parameters accordingly. The name
+   * of the posted form is "editconnection".
+   *
+   * @param threadContext is the local thread context.
+   * @param variableContext is the set of variables available from the post,
+   * including binary file post information.
+   * @param parameters are the configuration parameters, as they currently
+   * exist, for this connection being configured.
+   * @return null if all is well, or a string error message if there is an
+   * error that should prevent saving of the connection (and cause a
+   * redirection to an error page).
+   *
+   */
+  @Override
+  public String processConfigurationPost(IThreadContext threadContext,
+    IPostParameters variableContext, ConfigParams parameters)
+    throws ManifoldCFException {
+
+    // Server tab parameters
+
+    String jiraprotocol = variableContext.getParameter("jiraprotocol");
+    if (jiraprotocol != null)
+      parameters.setParameter(JiraConfig.JIRA_PROTOCOL_PARAM, jiraprotocol);
+
+    String jirahost = variableContext.getParameter("jirahost");
+    if (jirahost != null)
+      parameters.setParameter(JiraConfig.JIRA_HOST_PARAM, jirahost);
+
+    String jiraport = variableContext.getParameter("jiraport");
+    if (jiraport != null)
+      parameters.setParameter(JiraConfig.JIRA_PORT_PARAM, jiraport);
+
+    String jirapath = variableContext.getParameter("jirapath");
+    if (jirapath != null)
+      parameters.setParameter(JiraConfig.JIRA_PATH_PARAM, jirapath);
+
+    String clientid = variableContext.getParameter("clientid");
+    if (clientid != null)
+      parameters.setParameter(JiraConfig.CLIENT_ID_PARAM, clientid);
+
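+    // Posted secrets arrive as opaque keys; map them back to the real value before storing obfuscated.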
+    String clientsecret = variableContext.getParameter("clientsecret");
+    if (clientsecret != null)
+      parameters.setObfuscatedParameter(JiraConfig.CLIENT_SECRET_PARAM, variableContext.mapKeyToPassword(clientsecret));
+
+    // Proxy tab parameters
+    
+    String jiraproxyhost = variableContext.getParameter("jiraproxyhost");
+    if (jiraproxyhost != null)
+      parameters.setParameter(JiraConfig.JIRA_PROXYHOST_PARAM, jiraproxyhost);
+
+    String jiraproxyport = variableContext.getParameter("jiraproxyport");
+    if (jiraproxyport != null)
+      parameters.setParameter(JiraConfig.JIRA_PROXYPORT_PARAM, jiraproxyport);
+    
+    String jiraproxydomain = variableContext.getParameter("jiraproxydomain");
+    if (jiraproxydomain != null)
+      parameters.setParameter(JiraConfig.JIRA_PROXYDOMAIN_PARAM, jiraproxydomain);
+
+    String jiraproxyusername = variableContext.getParameter("jiraproxyusername");
+    if (jiraproxyusername != null)
+      parameters.setParameter(JiraConfig.JIRA_PROXYUSERNAME_PARAM, jiraproxyusername);
+
+    String jiraproxypassword = variableContext.getParameter("jiraproxypassword");
+    if (jiraproxypassword != null)
+      parameters.setObfuscatedParameter(JiraConfig.JIRA_PROXYPASSWORD_PARAM, variableContext.mapKeyToPassword(jiraproxypassword));
+
+    return null;
+  }
+
+  /** Obtain the access tokens for a given Jira user name.
+  *@param userName is the user name or identifier.
+  *@return the response tokens (according to the current authority).
+  * (Should throw an exception only when a condition cannot be properly described within the authorization response object.)
+  */
+  @Override
+  public AuthorizationResponse getAuthorizationResponse(String userName)
+    throws ManifoldCFException {
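+    // A user that exists in Jira receives a single access token equal to the user name.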
+    if (checkUserExists(userName))
+      return new AuthorizationResponse(new String[]{userName},AuthorizationResponse.RESPONSE_OK);
+    return RESPONSE_USERNOTFOUND;
+  }
+
+  /** Obtain the default access tokens for a given user name.
+  *@param userName is the user name or identifier.
+  *@return the default response tokens, presuming that the connect method fails.
+  */
+  @Override
+  public AuthorizationResponse getDefaultAuthorizationResponse(String userName) {
+    return RESPONSE_UNREACHABLE;
+  }
+
+  private static void handleIOException(IOException e)
+    throws ManifoldCFException {
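+    // A socket timeout is treated as an ordinary IO failure below; any other
+    // InterruptedIOException means the calling thread itself was interrupted.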
+    if (!(e instanceof java.net.SocketTimeoutException) && (e instanceof InterruptedIOException)) {
+      throw new ManifoldCFException("Interrupted: " + e.getMessage(), e,
+        ManifoldCFException.INTERRUPTED);
+    }
+    Logging.authorityConnectors.warn("JIRA: IO exception: "+e.getMessage(), e);
+    throw new ManifoldCFException("IO exception: "+e.getMessage(), e);
+  }
+
+  private static void handleResponseException(ResponseException e)
+    throws ManifoldCFException {
+    throw new ManifoldCFException("Response exception: "+e.getMessage(),e);
+  }
+  
+  // Background threads
+
+  protected static class CheckUserExistsThread extends Thread {
+    protected final JiraSession session;
+    protected final String userName;
+    protected Throwable exception = null;
+    protected boolean result = false;
+
+    public CheckUserExistsThread(JiraSession session, String userName) {
+      super();
+      this.session = session;
+      this.userName = userName;
+      setDaemon(true);
+    }
+
+    public void run() {
+      try {
+        result = session.checkUserExists(userName);
+      } catch (Throwable e) {
+        this.exception = e;
+      }
+    }
+
+    public void finishUp()
+      throws InterruptedException, IOException, ResponseException {
+      join();
+      Throwable thr = exception;
+      if (thr != null) {
+        if (thr instanceof IOException) {
+          throw (IOException) thr;
+        } else if (thr instanceof ResponseException) {
+          throw (ResponseException) thr;
+        } else if (thr instanceof RuntimeException) {
+          throw (RuntimeException) thr;
+        } else {
+          throw (Error) thr;
+        }
+      }
+    }
+    
+    public boolean getResult() {
+      return result;
+    }
+    
+  }
+  
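+  /** Check whether a user exists, running the Jira request in a daemon thread
+  * so that the calling thread can still be interrupted while the request is in flight.
+  */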
+  protected boolean checkUserExists(String userName) throws ManifoldCFException {
+    CheckUserExistsThread t = new CheckUserExistsThread(getSession(), userName);
+    try {
+      t.start();
+      t.finishUp();
+      return t.getResult();
+    } catch (InterruptedException e) {
+      t.interrupt();
+      throw new ManifoldCFException("Interrupted: " + e.getMessage(), e,
+        ManifoldCFException.INTERRUPTED);
+    } catch (java.net.SocketTimeoutException e) {
+      handleIOException(e);
+    } catch (InterruptedIOException e) {
+      t.interrupt();
+      handleIOException(e);
+    } catch (IOException e) {
+      handleIOException(e);
+    } catch (ResponseException e) {
+      handleResponseException(e);
+    }
+    return false;
+  }
+
+  protected static class CheckConnectionThread extends Thread {
+
+    protected final JiraSession session;
+    protected Throwable exception = null;
+
+    public CheckConnectionThread(JiraSession session) {
+      super();
+      this.session = session;
+      setDaemon(true);
+    }
+
+    public void run() {
+      try {
+        session.getRepositoryInfo();
+      } catch (Throwable e) {
+        this.exception = e;
+      }
+    }
+
+    public void finishUp()
+      throws InterruptedException, IOException, ResponseException {
+      join();
+      Throwable thr = exception;
+      if (thr != null) {
+        if (thr instanceof IOException) {
+          throw (IOException) thr;
+        } else if (thr instanceof ResponseException) {
+          throw (ResponseException) thr;
+        } else if (thr instanceof RuntimeException) {
+          throw (RuntimeException) thr;
+        } else {
+          throw (Error) thr;
+        }
+      }
+    }
+  }
+
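+  /** Verify connectivity by fetching repository info in a background thread,
+  * with the same interruption handling as the user-existence check.
+  */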
+  protected void checkConnection() throws ManifoldCFException {
+    CheckConnectionThread t = new CheckConnectionThread(getSession());
+    try {
+      t.start();
+      t.finishUp();
+      return;
+    } catch (InterruptedException e) {
+      t.interrupt();
+      throw new ManifoldCFException("Interrupted: " + e.getMessage(), e,
+        ManifoldCFException.INTERRUPTED);
+    } catch (java.net.SocketTimeoutException e) {
+      handleIOException(e);
+    } catch (InterruptedIOException e) {
+      t.interrupt();
+      handleIOException(e);
+    } catch (IOException e) {
+      handleIOException(e);
+    } catch (ResponseException e) {
+      handleResponseException(e);
+    }
+  }
+
+}
+
diff --git a/connectors/jira/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/jira/JiraConfig.java b/connectors/jira/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/jira/JiraConfig.java
new file mode 100644
index 0000000..82ea914
--- /dev/null
+++ b/connectors/jira/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/jira/JiraConfig.java
@@ -0,0 +1,52 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+
+package org.apache.manifoldcf.authorities.authorities.jira;
+
+/** Parameters for Jira Authority.
+ */
+public class JiraConfig {
+
+  public static final String CLIENT_ID_PARAM = "clientid";
+  public static final String CLIENT_SECRET_PARAM = "clientsecret";
+  public static final String JIRA_PROTOCOL_PARAM = "jiraprotocol";
+  public static final String JIRA_HOST_PARAM = "jirahost";
+  public static final String JIRA_PORT_PARAM = "jiraport";
+  public static final String JIRA_PATH_PARAM = "jirapath";
+  
+  public static final String JIRA_PROXYHOST_PARAM = "jiraproxyhost";
+  public static final String JIRA_PROXYPORT_PARAM = "jiraproxyport";
+  public static final String JIRA_PROXYDOMAIN_PARAM = "jiraproxydomain";
+  public static final String JIRA_PROXYUSERNAME_PARAM = "jiraproxyusername";
+  public static final String JIRA_PROXYPASSWORD_PARAM = "jiraproxypassword";
+  
+  public static final String CLIENT_ID_DEFAULT = "";
+  public static final String CLIENT_SECRET_DEFAULT = "";
+  public static final String JIRA_PROTOCOL_DEFAULT = "http";
+  public static final String JIRA_HOST_DEFAULT = "";
+  public static final String JIRA_PORT_DEFAULT = "";
+  public static final String JIRA_PATH_DEFAULT = "/rest/api/2/";
+    
+  public static final String JIRA_PROXYHOST_DEFAULT = "";
+  public static final String JIRA_PROXYPORT_DEFAULT = "";
+  public static final String JIRA_PROXYDOMAIN_DEFAULT = "";
+  public static final String JIRA_PROXYUSERNAME_DEFAULT = "";
+  public static final String JIRA_PROXYPASSWORD_DEFAULT = "";
+
+}
diff --git a/connectors/jira/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/jira/JiraJSONResponse.java b/connectors/jira/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/jira/JiraJSONResponse.java
new file mode 100644
index 0000000..d3d76f6
--- /dev/null
+++ b/connectors/jira/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/jira/JiraJSONResponse.java
@@ -0,0 +1,46 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+
+package org.apache.manifoldcf.authorities.authorities.jira;
+
+import org.json.simple.JSONObject;
+
+/** An instance of this class represents a Jira JSON object, and the parser hooks
+* needed to understand it.
+*
+* If we needed streaming anywhere, this would implement org.json.simple.parser.ContentHandler,
+* where we would extract the data from a JSON event stream.  But since we don't need that
+* functionality, instead we're just going to accept an already-parsed JSONObject.
+*
+* This class is meant to be overridden (selectively) by derived classes.
+*/
+public class JiraJSONResponse {
+
+  protected Object object = null;
+
+  public JiraJSONResponse() {
+  }
+  
+  /** Receive a parsed JSON object.
+  */
+  public void acceptJSONObject(Object object) {
+    this.object = object;
+  }
+  
+}
diff --git a/connectors/jira/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/jira/JiraSession.java b/connectors/jira/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/jira/JiraSession.java
new file mode 100644
index 0000000..616f516
--- /dev/null
+++ b/connectors/jira/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/jira/JiraSession.java
@@ -0,0 +1,297 @@
+/* $Id$ */
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with this
+ * work for additional information regarding copyright ownership. The ASF
+ * licenses this file to You under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ * 
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+ * License for the specific language governing permissions and limitations under
+ * the License.
+ */
+package org.apache.manifoldcf.authorities.authorities.jira;
+
+import org.apache.manifoldcf.core.common.*;
+import org.apache.manifoldcf.core.interfaces.KeystoreManagerFactory;
+import org.apache.manifoldcf.core.interfaces.ManifoldCFException;
+
+import java.io.Reader;
+import java.io.Writer;
+import java.io.ByteArrayInputStream;
+import java.io.StringWriter;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.InputStreamReader;
+import java.io.InterruptedIOException;
+import java.io.OutputStream;
+import java.net.URL;
+import java.net.URLEncoder;
+
+import java.util.Map;
+import java.util.HashMap;
+import java.util.List;
+import java.util.ArrayList;
+
+import org.apache.http.conn.ClientConnectionManager;
+import org.apache.http.client.HttpClient;
+import org.apache.http.impl.conn.PoolingClientConnectionManager;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpResponse;
+import org.apache.http.HttpHost;
+import org.apache.http.auth.AuthScope;
+import org.apache.http.auth.UsernamePasswordCredentials;
+import org.apache.http.auth.NTCredentials;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.client.methods.HttpRequestBase;
+import org.apache.http.impl.client.DefaultHttpClient;
+import org.apache.http.impl.client.DefaultRedirectStrategy;
+import org.apache.http.util.EntityUtils;
+import org.apache.http.params.BasicHttpParams;
+import org.apache.http.params.HttpParams;
+import org.apache.http.params.CoreConnectionPNames;
+import org.apache.http.conn.params.ConnRoutePNames;
+import org.apache.http.client.params.ClientPNames;
+import org.apache.http.client.HttpRequestRetryHandler;
+import org.apache.http.protocol.HttpContext;
+import org.apache.http.conn.scheme.Scheme;
+import org.apache.http.conn.ssl.SSLSocketFactory;
+import org.apache.http.conn.ssl.AllowAllHostnameVerifier;
+import org.apache.http.params.CoreProtocolPNames;
+
+import org.json.simple.JSONObject;
+import org.json.simple.JSONValue;
+import org.json.simple.JSONArray;
+
+/**
+ * Session for communicating with a Jira server over its REST API.
+ *
+ * @author andrew
+ */
+public class JiraSession {
+
+  private final String URLbase;
+  private final String clientId;
+  private final String clientSecret;
+  
+  private ClientConnectionManager connectionManager;
+  private HttpClient httpClient;
+  
+  // Current host name
+  private static String currentHost = null;
+  static
+  {
+    // Find the current host name
+    try
+    {
+      java.net.InetAddress addr = java.net.InetAddress.getLocalHost();
+
+      // Get hostname
+      currentHost = addr.getHostName();
+    }
+    catch (java.net.UnknownHostException e)
+    {
+    }
+  }
+
+  /**
+   * Constructor. Create a session.
+   */
+  public JiraSession(String clientId, String clientSecret, String URLbase,
+    String proxyHost, String proxyPort, String proxyDomain, String proxyUsername, String proxyPassword)
+    throws ManifoldCFException {
+    this.URLbase = URLbase;
+    this.clientId = clientId;
+    this.clientSecret = clientSecret;
+
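+    // HTTP setup: 15-minute socket timeout, 1-minute connection timeout, and an
+    // all-trusting SSL socket factory for https connections.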
+    int socketTimeout = 900000;
+    int connectionTimeout = 60000;
+
+    javax.net.ssl.SSLSocketFactory httpsSocketFactory = KeystoreManagerFactory.getTrustingSecureSocketFactory();
+    SSLSocketFactory myFactory = new SSLSocketFactory(new InterruptibleSocketFactory(httpsSocketFactory,connectionTimeout),
+      new AllowAllHostnameVerifier());
+    Scheme myHttpsProtocol = new Scheme("https", 443, myFactory);
+
+    PoolingClientConnectionManager localConnectionManager = new PoolingClientConnectionManager();
+    localConnectionManager.setMaxTotal(1);
+    connectionManager = localConnectionManager;
+    // Set up protocol registry
+    connectionManager.getSchemeRegistry().register(myHttpsProtocol);
+
+    BasicHttpParams params = new BasicHttpParams();
+    params.setBooleanParameter(CoreProtocolPNames.USE_EXPECT_CONTINUE,true);
+    params.setIntParameter(CoreProtocolPNames.WAIT_FOR_CONTINUE,socketTimeout);
+    params.setBooleanParameter(CoreConnectionPNames.TCP_NODELAY,true);
+    params.setBooleanParameter(CoreConnectionPNames.STALE_CONNECTION_CHECK,false);
+    params.setIntParameter(CoreConnectionPNames.CONNECTION_TIMEOUT,connectionTimeout);
+    params.setIntParameter(CoreConnectionPNames.SO_TIMEOUT,socketTimeout);
+    params.setBooleanParameter(ClientPNames.ALLOW_CIRCULAR_REDIRECTS,true);
+    DefaultHttpClient localHttpClient = new DefaultHttpClient(connectionManager,params);
+    // No retries
+    localHttpClient.setHttpRequestRetryHandler(new HttpRequestRetryHandler()
+      {
+        public boolean retryRequest(
+          IOException exception,
+          int executionCount,
+          HttpContext context)
+        {
+          return false;
+        }
+       
+      });
+    localHttpClient.setRedirectStrategy(new DefaultRedirectStrategy());
+    
+    // If authentication needed, set that
+    if (clientId != null)
+    {
+      localHttpClient.getCredentialsProvider().setCredentials(
+        AuthScope.ANY,
+        new UsernamePasswordCredentials(clientId,clientSecret));
+    }
+    
+    // If there's a proxy, set that too.
+    if (proxyHost != null && proxyHost.length() > 0)
+    {
+
+      int proxyPortInt;
+      if (proxyPort != null && proxyPort.length() > 0)
+      {
+        try
+        {
+          proxyPortInt = Integer.parseInt(proxyPort);
+        }
+        catch (NumberFormatException e)
+        {
+          throw new ManifoldCFException("Bad number: "+e.getMessage(),e);
+        }
+      }
+      else
+        proxyPortInt = 8080;
+
+      // Configure proxy authentication
+      if (proxyUsername != null && proxyUsername.length() > 0)
+      {
+        if (proxyPassword == null)
+          proxyPassword = "";
+        if (proxyDomain == null)
+          proxyDomain = "";
+
+        localHttpClient.getCredentialsProvider().setCredentials(
+          new AuthScope(proxyHost, proxyPortInt),
+          new NTCredentials(proxyUsername, proxyPassword, currentHost, proxyDomain));
+      }
+
+      HttpHost proxy = new HttpHost(proxyHost, proxyPortInt);
+
+      localHttpClient.getParams().setParameter(ConnRoutePNames.DEFAULT_PROXY, proxy);
+    }
+
+    httpClient = localHttpClient;
+  }
+
+  /**
+   * Close session.
+   */
+  public void close() {
+    httpClient = null;
+    if (connectionManager != null)
+      connectionManager.shutdown();
+    connectionManager = null;
+  }
+
+  private static Object convertToJSON(HttpResponse httpResponse)
+    throws IOException {
+    HttpEntity entity = httpResponse.getEntity();
+    if (entity != null) {
+      InputStream is = entity.getContent();
+      try {
+        String charSet = EntityUtils.getContentCharSet(entity);
+        if (charSet == null)
+          charSet = "utf-8";
+        Reader r = new InputStreamReader(is,charSet);
+        return JSONValue.parse(r);
+      } finally {
+        is.close();
+      }
+    }
+    return null;
+  }
+
+  private static String convertToString(HttpResponse httpResponse)
+    throws IOException {
+    HttpEntity entity = httpResponse.getEntity();
+    if (entity != null) {
+      InputStream is = entity.getContent();
+      try {
+        String charSet = EntityUtils.getContentCharSet(entity);
+        if (charSet == null)
+          charSet = "utf-8";
+        char[] buffer = new char[65536];
+        Reader r = new InputStreamReader(is,charSet);
+        Writer w = new StringWriter();
+        try {
+          while (true) {
+            int amt = r.read(buffer);
+            if (amt == -1)
+              break;
+            w.write(buffer,0,amt);
+          }
+        } finally {
+          w.flush();
+        }
+        return w.toString();
+      } finally {
+        is.close();
+      }
+    }
+    return "";
+  }
+
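+  /** Issue a GET against the Jira REST API and hand the parsed JSON to the supplied
+  * response object; any status code other than 200 is raised as a ResponseException.
+  */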
+  private void getRest(String rightside, JiraJSONResponse response)
+    throws IOException, ResponseException {
+
+    final HttpRequestBase method = new HttpGet(URLbase + rightside);
+    method.addHeader("Accept", "application/json");
+
+    try {
+      HttpResponse httpResponse = httpClient.execute(method);
+      int resultCode = httpResponse.getStatusLine().getStatusCode();
+      if (resultCode != 200)
+        throw new ResponseException("Unexpected result code "+resultCode+": "+convertToString(httpResponse));
+      Object jo = convertToJSON(httpResponse);
+      response.acceptJSONObject(jo);
+    } finally {
+      method.abort();
+    }
+  }
+
+  /**
+   * Obtain repository information.
+   */
+  public Map<String, String> getRepositoryInfo() throws IOException, ResponseException {
+    HashMap<String, String> statistics = new HashMap<String, String>();
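+    // A minimal user search (maxResults=1) is issued purely to confirm the URL and credentials work.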
+    JiraUserQueryResults qr = new JiraUserQueryResults();
+    getRest("user/search?username=&maxResults=1&startAt=0", qr);
+    return statistics;
+  }
+
+  /** Check if user exists.
+  */
+  public boolean checkUserExists(String userName) throws IOException, ResponseException {
+    JiraUserQueryResults qr = new JiraUserQueryResults();
+    getRest("user/search?username="+URLEncoder.encode(userName,"utf-8")+"&maxResults=1&startAt=0", qr);
+    List<String> values = new ArrayList<String>();
+    qr.getNames(values);
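+    // The search may return partial matches, so require an exact name match.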
+    if (values.size() == 0)
+      return false;
+    for (String value : values) {
+      if (userName.equals(value))
+        return true;
+    }
+    return false;
+  }
+}
diff --git a/connectors/jira/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/jira/JiraUserQueryResults.java b/connectors/jira/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/jira/JiraUserQueryResults.java
new file mode 100644
index 0000000..878362a
--- /dev/null
+++ b/connectors/jira/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/jira/JiraUserQueryResults.java
@@ -0,0 +1,53 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+
+package org.apache.manifoldcf.authorities.authorities.jira;
+
+import org.apache.manifoldcf.core.common.*;
+
+import java.io.IOException;
+import java.util.List;
+import java.util.ArrayList;
+
+import org.json.simple.JSONObject;
+import org.json.simple.JSONArray;
+
+/** An instance of this class represents the results of a Jira user query, and
+* the ability to parse the corresponding JSON response.
+*/
+public class JiraUserQueryResults extends JiraJSONResponse {
+
+  // Specific keys we care about
+  private final static String KEY_NAME = "name";
+
+  public JiraUserQueryResults() {
+    super();
+  }
+
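+  /** Collect the "name" field of every user object in the parsed JSON array. */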
+  public void getNames(List<String> nameBuffer) {
+    JSONArray users = (JSONArray)object;
+    for (Object user : users) {
+      if (user instanceof JSONObject) {
+        JSONObject jo = (JSONObject)user;
+        nameBuffer.add(jo.get(KEY_NAME).toString());
+      }
+    }
+  }
+  
+}
diff --git a/connectors/jira/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/jira/Messages.java b/connectors/jira/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/jira/Messages.java
new file mode 100644
index 0000000..e248ba3
--- /dev/null
+++ b/connectors/jira/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/jira/Messages.java
@@ -0,0 +1,141 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.authorities.authorities.jira;
+
+import java.util.Locale;
+import java.util.Map;
+import org.apache.manifoldcf.core.interfaces.ManifoldCFException;
+import org.apache.manifoldcf.core.interfaces.IHTTPOutput;
+
+public class Messages extends org.apache.manifoldcf.ui.i18n.Messages
+{
+  public static final String DEFAULT_BUNDLE_NAME="org.apache.manifoldcf.authorities.authorities.jira.common";
+  public static final String DEFAULT_PATH_NAME="org.apache.manifoldcf.authorities.authorities.jira";
+  
+  /** Constructor - do no instantiate
+  */
+  protected Messages()
+  {
+  }
+  
+  public static String getString(Locale locale, String messageKey)
+  {
+    return getString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getAttributeString(Locale locale, String messageKey)
+  {
+    return getAttributeString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getBodyString(Locale locale, String messageKey)
+  {
+    return getBodyString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getAttributeJavascriptString(Locale locale, String messageKey)
+  {
+    return getAttributeJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getBodyJavascriptString(Locale locale, String messageKey)
+  {
+    return getBodyJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getString(Locale locale, String messageKey, Object[] args)
+  {
+    return getString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+
+  public static String getAttributeString(Locale locale, String messageKey, Object[] args)
+  {
+    return getAttributeString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+  
+  public static String getBodyString(Locale locale, String messageKey, Object[] args)
+  {
+    return getBodyString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+
+  public static String getAttributeJavascriptString(Locale locale, String messageKey, Object[] args)
+  {
+    return getAttributeJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+
+  public static String getBodyJavascriptString(Locale locale, String messageKey, Object[] args)
+  {
+    return getBodyJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+
+  // More general methods which allow bundlenames and class loaders to be specified.
+  
+  public static String getString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getString(Messages.class, bundleName, locale, messageKey, args);
+  }
+
+  public static String getAttributeString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getAttributeString(Messages.class, bundleName, locale, messageKey, args);
+  }
+
+  public static String getBodyString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getBodyString(Messages.class, bundleName, locale, messageKey, args);
+  }
+  
+  public static String getAttributeJavascriptString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getAttributeJavascriptString(Messages.class, bundleName, locale, messageKey, args);
+  }
+
+  public static String getBodyJavascriptString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getBodyJavascriptString(Messages.class, bundleName, locale, messageKey, args);
+  }
+
+  // Resource output
+  
+  public static void outputResource(IHTTPOutput output, Locale locale, String resourceKey,
+    Map<String,String> substitutionParameters, boolean mapToUpperCase)
+    throws ManifoldCFException
+  {
+    outputResource(output,Messages.class,DEFAULT_PATH_NAME,locale,resourceKey,
+      substitutionParameters,mapToUpperCase);
+  }
+  
+  public static void outputResourceWithVelocity(IHTTPOutput output, Locale locale, String resourceKey,
+    Map<String,String> substitutionParameters, boolean mapToUpperCase)
+    throws ManifoldCFException
+  {
+    outputResourceWithVelocity(output,Messages.class,DEFAULT_BUNDLE_NAME,DEFAULT_PATH_NAME,locale,resourceKey,
+      substitutionParameters,mapToUpperCase);
+  }
+
+  public static void outputResourceWithVelocity(IHTTPOutput output, Locale locale, String resourceKey,
+    Map<String,Object> contextObjects)
+    throws ManifoldCFException
+  {
+    outputResourceWithVelocity(output,Messages.class,DEFAULT_BUNDLE_NAME,DEFAULT_PATH_NAME,locale,resourceKey,
+      contextObjects);
+  }
+  
+}
+
diff --git a/connectors/jira/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/jira/ResponseException.java b/connectors/jira/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/jira/ResponseException.java
new file mode 100644
index 0000000..4a110cf
--- /dev/null
+++ b/connectors/jira/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/jira/ResponseException.java
@@ -0,0 +1,34 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+
+package org.apache.manifoldcf.authorities.authorities.jira;
+
+/** This exception is thrown when the response from REST is not what
+* was expected.
+ */
+public class ResponseException extends Exception {
+
+  public ResponseException(String msg) {
+    super(msg);
+  }
+  
+  public ResponseException(String msg, Throwable cause) {
+    super(msg, cause);
+  }
+}
diff --git a/connectors/jira/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/jira/JiraConfig.java b/connectors/jira/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/jira/JiraConfig.java
new file mode 100644
index 0000000..d7c11cf
--- /dev/null
+++ b/connectors/jira/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/jira/JiraConfig.java
@@ -0,0 +1,59 @@
+/* $Id: JiraConfig.java 1488537 2013-06-01 15:30:15Z kwright $ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+
+package org.apache.manifoldcf.crawler.connectors.jira;
+
+/** Configuration parameter names and default values for the Jira
+ * repository connector.
+ * @author andrew
+ */
+public class JiraConfig {
+
+  public static final String CLIENT_ID_PARAM = "clientid";
+  public static final String CLIENT_SECRET_PARAM = "clientsecret";
+  public static final String JIRA_PROTOCOL_PARAM = "jiraprotocol";
+  public static final String JIRA_HOST_PARAM = "jirahost";
+  public static final String JIRA_PORT_PARAM = "jiraport";
+  public static final String JIRA_PATH_PARAM = "jirapath";
+
+  public static final String JIRA_PROXYHOST_PARAM = "jiraproxyhost";
+  public static final String JIRA_PROXYPORT_PARAM = "jiraproxyport";
+  public static final String JIRA_PROXYDOMAIN_PARAM = "jiraproxydomain";
+  public static final String JIRA_PROXYUSERNAME_PARAM = "jiraproxyusername";
+  public static final String JIRA_PROXYPASSWORD_PARAM = "jiraproxypassword";
+
+  public static final String JIRA_QUERY_PARAM = "jiraquery";
+
+  public static final String CLIENT_ID_DEFAULT = "";
+  public static final String CLIENT_SECRET_DEFAULT = "";
+  public static final String JIRA_PROTOCOL_DEFAULT = "http";
+  public static final String JIRA_HOST_DEFAULT = "";
+  public static final String JIRA_PORT_DEFAULT = "";
+  public static final String JIRA_PATH_DEFAULT = "/rest/api/2/";
+
+  public static final String JIRA_PROXYHOST_DEFAULT = "";
+  public static final String JIRA_PROXYPORT_DEFAULT = "";
+  public static final String JIRA_PROXYDOMAIN_DEFAULT = "";
+  public static final String JIRA_PROXYUSERNAME_DEFAULT = "";
+  public static final String JIRA_PROXYPASSWORD_DEFAULT = "";
+
+  public static final String JIRA_QUERY_DEFAULT = "ORDER BY createdDate Asc";
+
+
+}
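A small sketch of how these constants are typically consumed; the helper class below is illustrative (the connector itself does this inline in connect() and in the fill-in methods of JiraRepositoryConnector):

// Illustrative sketch only: read a parameter by its JiraConfig name and fall
// back to the JiraConfig default when the connection does not define it.
import org.apache.manifoldcf.core.interfaces.ConfigParams;

public class JiraConfigReader {
  public static String getProtocol(ConfigParams parameters) {
    String value = parameters.getParameter(JiraConfig.JIRA_PROTOCOL_PARAM);
    return (value != null) ? value : JiraConfig.JIRA_PROTOCOL_DEFAULT;
  }
}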

diff --git a/connectors/jira/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/jira/JiraIssue.java b/connectors/jira/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/jira/JiraIssue.java
new file mode 100644
index 0000000..b9f1438
--- /dev/null
+++ b/connectors/jira/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/jira/JiraIssue.java
@@ -0,0 +1,168 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+
+package org.apache.manifoldcf.crawler.connectors.jira;
+
+import org.apache.manifoldcf.core.common.*;
+
+import java.util.Date;
+import java.util.Map;
+import java.util.HashMap;
+import java.util.List;
+import java.util.ArrayList;
+
+import org.json.simple.JSONObject;
+import org.json.simple.JSONArray;
+
+/** An instance of this class represents a Jira issue, and the accessors
+* needed to extract its data from the parsed JSON object that describes it.
+*/
+public class JiraIssue extends JiraJSONResponse {
+
+  // Specific keys we care about
+  private final static String KEY_FIELDS = "fields";
+  private final static String KEY_KEY = "key";
+  private final static String KEY_SELF = "self";
+  private final static String KEY_CREATED = "created";
+  private final static String KEY_UPDATED = "updated";
+  private final static String KEY_DESCRIPTION = "description";
+  private final static String KEY_SUMMARY = "summary";
+
+  public JiraIssue() {
+    super();
+  }
+
+  public String getKey() {
+    Object key = ((JSONObject)object).get(KEY_KEY);
+    if (key == null)
+      return null;
+    return key.toString();
+  }
+  
+  public String getSelf() {
+    Object key = ((JSONObject)object).get(KEY_SELF);
+    if (key == null)
+      return null;
+    return key.toString();
+  }
+  
+  public Date getCreatedDate() {
+    JSONObject fields = (JSONObject)((JSONObject)object).get(KEY_FIELDS);
+    if (fields == null)
+      return null;
+    Object createdDate = fields.get(KEY_CREATED);
+    if (createdDate == null)
+      return null;
+    return DateParser.parseISO8601Date(createdDate.toString());
+  }
+  
+  public Date getUpdatedDate() {
+    JSONObject fields = (JSONObject)((JSONObject)object).get(KEY_FIELDS);
+    if (fields == null)
+      return null;
+    Object updatedDate = fields.get(KEY_UPDATED);
+    if (updatedDate == null)
+      return null;
+    return DateParser.parseISO8601Date(updatedDate.toString());
+  }
+  
+  public String getDescription() {
+    JSONObject fields = (JSONObject)((JSONObject)object).get(KEY_FIELDS);
+    if (fields == null)
+      return null;
+    Object description = fields.get(KEY_DESCRIPTION);
+    if (description == null)
+      return null;
+    return description.toString();
+  }
+  
+  public String getSummary() {
+    JSONObject fields = (JSONObject)((JSONObject)object).get(KEY_FIELDS);
+    if (fields == null)
+      return null;
+    Object summary = fields.get(KEY_SUMMARY);
+    if (summary == null)
+      return null;
+    return summary.toString();
+  }
+  
+  public Map<String,String[]> getMetadata() {
+    Map<String,List<String>> map = new HashMap<String,List<String>>();
+    JSONObject fields = (JSONObject)((JSONObject)object).get(KEY_FIELDS);
+    if (fields != null)
+      addMetadataToMap("", fields, map);
+    
+    // Now convert to a form more suited for RepositoryDocument
+    Map<String,String[]> rmap = new HashMap<String,String[]>();
+    for (String key : map.keySet()) {
+      List<String> values = map.get(key);
+      String[] valueArray = values.toArray(new String[0]);
+      rmap.put(key,valueArray);
+    }
+    return rmap;
+  }
+
+  protected static void addMetadataToMap(String parent, Object cval, Map<String,List<String>> currentMap) {
+
+    if (cval == null)
+      return;
+
+    // See if it is a basic type
+    if (cval instanceof String || cval instanceof Number || cval instanceof Boolean) {
+      List<String> current = currentMap.get(parent);
+      if (current == null) {
+        current = new ArrayList<String>();
+        currentMap.put(parent,current);
+      }
+      current.add(cval.toString());
+      return;
+    }
+
+    // See if it is an array
+    if (cval instanceof JSONArray) {
+      JSONArray ja = (JSONArray)cval;
+      for (Object subpiece : ja) {
+        addMetadataToMap(parent, subpiece, currentMap);
+      }
+      return;
+    }
+    
+    // See if it is a JSONObject
+    if (cval instanceof JSONObject) {
+      JSONObject jo = (JSONObject)cval;
+      String append="";
+      if (parent.length() > 0) {
+        append=parent+"_";
+      }
+      for (Object key : jo.keySet()) {
+        Object value = jo.get(key);
+        if (value == null) {
+          continue;
+        }
+        String newKey = append + key;
+        addMetadataToMap(newKey, value, currentMap);
+      }
+      return;
+    }
+    
+
+    throw new IllegalArgumentException("Unknown object to addMetadataToMap: "+cval.getClass().getName());
+  }
+
+}
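To make the flattening behavior of getMetadata() and addMetadataToMap() concrete, here is a self-contained sketch with a made-up issue payload; nested object keys are joined with '_' and array elements share their parent's key:

// Illustrative sketch only; the JSON payload is hypothetical.
import java.util.Arrays;
import java.util.Map;
import org.json.simple.parser.JSONParser;
import org.apache.manifoldcf.crawler.connectors.jira.JiraIssue;

public class MetadataFlatteningDemo {
  public static void main(String[] args) throws Exception {
    String json = "{\"key\":\"TEST-1\",\"fields\":{\"summary\":\"A bug\","
      + "\"status\":{\"name\":\"Open\"},\"labels\":[\"ui\",\"crash\"]}}";
    JiraIssue issue = new JiraIssue();
    issue.acceptJSONObject(new JSONParser().parse(json));
    Map<String,String[]> metadata = issue.getMetadata();
    // Prints summary=[A bug], status_name=[Open], labels=[ui, crash] (order may vary)
    for (Map.Entry<String,String[]> entry : metadata.entrySet()) {
      System.out.println(entry.getKey() + "=" + Arrays.toString(entry.getValue()));
    }
  }
}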
diff --git a/connectors/jira/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/jira/JiraJSONResponse.java b/connectors/jira/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/jira/JiraJSONResponse.java
new file mode 100644
index 0000000..2789663
--- /dev/null
+++ b/connectors/jira/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/jira/JiraJSONResponse.java
@@ -0,0 +1,46 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+
+package org.apache.manifoldcf.crawler.connectors.jira;
+
+import org.json.simple.JSONObject;
+
+/** An instance of this class represents a Jira JSON object, and the parser hooks
+* needed to understand it.
+*
+* If we needed streaming anywhere, this would implement org.json.simple.parser.ContentHandler,
+* where we would extract the data from a JSON event stream.  But since we don't need that
+* functionality, instead we're just going to accept an already-parsed JSONObject.
+*
+* This class is meant to be overridden (selectively) by derived classes.
+*/
+public class JiraJSONResponse {
+
+  protected Object object = null;
+
+  public JiraJSONResponse() {
+  }
+  
+  /** Receive a parsed JSON object.
+  */
+  public void acceptJSONObject(Object object) {
+    this.object = object;
+  }
+  
+}
diff --git a/connectors/jira/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/jira/JiraQueryResults.java b/connectors/jira/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/jira/JiraQueryResults.java
new file mode 100644
index 0000000..aecda7b
--- /dev/null
+++ b/connectors/jira/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/jira/JiraQueryResults.java
@@ -0,0 +1,58 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+
+package org.apache.manifoldcf.crawler.connectors.jira;
+
+import org.apache.manifoldcf.core.common.*;
+
+import java.io.IOException;
+
+import org.json.simple.JSONObject;
+import org.json.simple.JSONArray;
+
+/** An instance of this class represents the results of a Jira query, and
+* the ability to parse the corresponding JSON response.
+*/
+public class JiraQueryResults extends JiraJSONResponse {
+
+  // Specific keys we care about
+  private final static String KEY_TOTAL = "total";
+  private final static String KEY_ISSUES = "issues";
+  private final static String KEY_KEY = "key";
+
+  public JiraQueryResults() {
+    super();
+  }
+
+  public Long getTotal() {
+    return (Long)((JSONObject)object).get(KEY_TOTAL);
+  }
+  
+  public void pushIds(XThreadStringBuffer seedBuffer)
+    throws IOException, InterruptedException {
+    JSONArray issues = (JSONArray)((JSONObject)object).get(KEY_ISSUES);
+    for (Object issue : issues) {
+      if (issue instanceof JSONObject) {
+        JSONObject jo = (JSONObject)issue;
+        seedBuffer.add(jo.get(KEY_KEY).toString());
+      }
+    }
+  }
+
+}
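For reference, a sketch of the JSON shape this class expects from the Jira search endpoint; the total and issue keys below are made up:

// Illustrative sketch only.
import org.json.simple.parser.JSONParser;
import org.apache.manifoldcf.crawler.connectors.jira.JiraQueryResults;

public class QueryResultsDemo {
  public static void main(String[] args) throws Exception {
    String json = "{\"total\":2,\"issues\":[{\"key\":\"TEST-1\"},{\"key\":\"TEST-2\"}]}";
    JiraQueryResults results = new JiraQueryResults();
    results.acceptJSONObject(new JSONParser().parse(json));
    System.out.println("Total issues: " + results.getTotal());  // prints 2
    // pushIds() would hand "TEST-1" and "TEST-2" to the seeding thread's buffer.
  }
}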
diff --git a/connectors/jira/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/jira/JiraRepositoryConnector.java b/connectors/jira/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/jira/JiraRepositoryConnector.java
new file mode 100644
index 0000000..645b2e0
--- /dev/null
+++ b/connectors/jira/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/jira/JiraRepositoryConnector.java
@@ -0,0 +1,1377 @@
+/* $Id: JiraRepositoryConnector.java 1490585 2013-06-07 11:13:35Z kwright $ */

+

+/**

+* Licensed to the Apache Software Foundation (ASF) under one or more

+* contributor license agreements. See the NOTICE file distributed with

+* this work for additional information regarding copyright ownership.

+* The ASF licenses this file to You under the Apache License, Version 2.0

+* (the "License"); you may not use this file except in compliance with

+* the License. You may obtain a copy of the License at

+*

+* http://www.apache.org/licenses/LICENSE-2.0

+*

+* Unless required by applicable law or agreed to in writing, software

+* distributed under the License is distributed on an "AS IS" BASIS,

+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

+* See the License for the specific language governing permissions and

+* limitations under the License.

+*/

+

+package org.apache.manifoldcf.crawler.connectors.jira;

+

+import java.io.ByteArrayInputStream;

+import org.apache.manifoldcf.core.common.*;

+

+import java.io.IOException;

+import java.io.InputStream;

+import java.io.InterruptedIOException;

+import java.util.HashMap;

+import java.util.HashSet;

+import java.util.ArrayList;

+import java.util.List;

+import java.util.Locale;

+import java.util.Map;

+import java.util.Date;

+import java.util.Set;

+import java.util.Iterator;

+import org.apache.manifoldcf.crawler.system.Logging;

+import org.apache.manifoldcf.crawler.connectors.BaseRepositoryConnector;

+import org.apache.manifoldcf.agents.interfaces.ServiceInterruption;

+import org.apache.manifoldcf.core.interfaces.ConfigParams;

+import org.apache.manifoldcf.core.interfaces.ManifoldCFException;

+import org.apache.commons.lang.StringUtils;

+import org.apache.manifoldcf.agents.interfaces.RepositoryDocument;

+import org.apache.manifoldcf.core.interfaces.IHTTPOutput;

+import org.apache.manifoldcf.core.interfaces.IPasswordMapperActivity;

+import org.apache.manifoldcf.core.interfaces.IPostParameters;

+import org.apache.manifoldcf.core.interfaces.IThreadContext;

+import org.apache.manifoldcf.core.interfaces.SpecificationNode;

+import org.apache.manifoldcf.crawler.interfaces.DocumentSpecification;

+import org.apache.manifoldcf.crawler.interfaces.IProcessActivity;

+import org.apache.manifoldcf.crawler.interfaces.ISeedingActivity;

+import java.util.Map.Entry;

+

+/** Repository connector that crawls Jira issues through the Jira REST API,
+ * seeding each issue matched by a configurable JQL query and ingesting it.
+ * @author andrew
+ */

+public class JiraRepositoryConnector extends BaseRepositoryConnector {

+

+  protected final static String ACTIVITY_READ = "read document";

+  

+  /** Deny access token for default authority */

+  private final static String defaultAuthorityDenyToken = GLOBAL_DENY_TOKEN;

+

+  // Nodes

+  private static final String JOB_STARTPOINT_NODE_TYPE = "startpoint";

+  private static final String JOB_QUERY_ATTRIBUTE = "query";

+  private static final String JOB_SECURITY_NODE_TYPE = "security";

+  private static final String JOB_VALUE_ATTRIBUTE = "value";

+  private static final String JOB_ACCESS_NODE_TYPE = "access";

+  private static final String JOB_TOKEN_ATTRIBUTE = "token";

+

+  // Configuration tabs

+  private static final String JIRA_SERVER_TAB_PROPERTY = "JiraRepositoryConnector.Server";

+  private static final String JIRA_PROXY_TAB_PROPERTY = "JiraRepositoryConnector.Proxy";

+

+  // Specification tabs

+  private static final String JIRA_QUERY_TAB_PROPERTY = "JiraRepositoryConnector.JiraQuery";

+  private static final String JIRA_SECURITY_TAB_PROPERTY = "JiraRepositoryConnector.Security";

+  

+  // Template names for configuration

+  /**

+   * Forward to the javascript to check the configuration parameters

+   */

+  private static final String EDIT_CONFIG_HEADER_FORWARD = "editConfiguration_jira.js";

+  /**

+   * Server tab template

+   */

+  private static final String EDIT_CONFIG_FORWARD_SERVER = "editConfiguration_jira_server.html";

+  /**

+   * Proxy tab template

+   */

+  private static final String EDIT_CONFIG_FORWARD_PROXY = "editConfiguration_jira_proxy.html";

+

+  /**

+   * Forward to the HTML template to view the configuration parameters

+   */

+  private static final String VIEW_CONFIG_FORWARD = "viewConfiguration_jira.html";

+   

+  // Template names for specification

+  /**

+   * Forward to the javascript to check the specification parameters for the job

+   */

+  private static final String EDIT_SPEC_HEADER_FORWARD = "editSpecification_jira.js";

+  /**

+   * Forward to the template to edit the query for the job

+   */

+  private static final String EDIT_SPEC_FORWARD_JIRAQUERY = "editSpecification_jiraQuery.html";

+  /**

+   * Forward to the template to edit the security parameters for the job

+   */

+  private static final String EDIT_SPEC_FORWARD_SECURITY = "editSpecification_jiraSecurity.html";

+  

+  /**

+   * Forward to the template to view the specification parameters for the job

+   */

+  private static final String VIEW_SPEC_FORWARD = "viewSpecification_jira.html";

+  

+  // Session data

+  protected JiraSession session = null;

+  protected long lastSessionFetch = -1L;

+  protected static final long timeToRelease = 300000L;

+  

+  // Parameter data

+  protected String jiraprotocol = null;

+  protected String jirahost = null;

+  protected String jiraport = null;

+  protected String jirapath = null;

+  protected String clientid = null;

+  protected String clientsecret = null;

+

+  protected String jiraproxyhost = null;

+  protected String jiraproxyport = null;

+  protected String jiraproxydomain = null;

+  protected String jiraproxyusername = null;

+  protected String jiraproxypassword = null;

+

+  public JiraRepositoryConnector() {

+    super();

+  }

+

+  /**

+   * Return the list of activities that this connector supports (i.e. writes

+   * into the log).

+   *

+   * @return the list.

+   */

+  @Override

+  public String[] getActivitiesList() {

+    return new String[]{ACTIVITY_READ};

+  }

+

+  /**

+   * Get the bin name strings for a document identifier. The bin name

+   * describes the queue to which the document will be assigned for throttling

+   * purposes. Throttling controls the rate at which items in a given queue

+   * are fetched; it does not say anything about the overall fetch rate, which

+   * may operate on multiple queues or bins. For example, if you implement a

+   * web crawler, a good choice of bin name would be the server name, since

+   * that is likely to correspond to a real resource that will need real

+   * throttle protection.

+   *

+   * @param documentIdentifier is the document identifier.

+   * @return the set of bin names. If an empty array is returned, it is

+   * equivalent to there being no request rate throttling available for this

+   * identifier.

+   */

+  @Override

+  public String[] getBinNames(String documentIdentifier) {

+    return new String[]{jirahost};

+  }

+

+  /**

+   * Close the connection. Call this before discarding the connection.

+   */

+  @Override

+  public void disconnect() throws ManifoldCFException {

+    if (session != null) {

+      session.close();

+      session = null;

+      lastSessionFetch = -1L;

+    }

+

+    jiraprotocol = null;

+    jirahost = null;

+    jiraport = null;

+    jirapath = null;

+    clientid = null;

+    clientsecret = null;

+    

+    jiraproxyhost = null;

+    jiraproxyport = null;

+    jiraproxydomain = null;

+    jiraproxyusername = null;

+    jiraproxypassword = null;

+  }

+

+  /**
+   * Connect to the Jira repository.  This method does not open a session;
+   * it only records the configuration parameters (server coordinates,
+   * credentials, and optional proxy settings) so that getSession() can
+   * build a JiraSession lazily the first time one is needed.
+   *
+   * @param configParams is the set of configuration parameters, which in
+   * this case describe the target Jira instance, its credentials, and
+   * optional proxy settings. (This formerly came out of the ini file.)
+   */

+  @Override

+  public void connect(ConfigParams configParams) {

+    super.connect(configParams);

+

+    jiraprotocol = params.getParameter(JiraConfig.JIRA_PROTOCOL_PARAM);

+    jirahost = params.getParameter(JiraConfig.JIRA_HOST_PARAM);

+    jiraport = params.getParameter(JiraConfig.JIRA_PORT_PARAM);

+    jirapath = params.getParameter(JiraConfig.JIRA_PATH_PARAM);

+    clientid = params.getParameter(JiraConfig.CLIENT_ID_PARAM);

+    clientsecret = params.getObfuscatedParameter(JiraConfig.CLIENT_SECRET_PARAM);

+    

+    jiraproxyhost = params.getParameter(JiraConfig.JIRA_PROXYHOST_PARAM);

+    jiraproxyport = params.getParameter(JiraConfig.JIRA_PROXYPORT_PARAM);

+    jiraproxydomain = params.getParameter(JiraConfig.JIRA_PROXYDOMAIN_PARAM);

+    jiraproxyusername = params.getParameter(JiraConfig.JIRA_PROXYUSERNAME_PARAM);

+    jiraproxypassword = params.getObfuscatedParameter(JiraConfig.JIRA_PROXYPASSWORD_PARAM);

+

+  }

+

+  /**

+   * Test the connection. Returns a string describing the connection

+   * integrity.

+   *

+   * @return the connection's status as a displayable string.

+   */

+  @Override

+  public String check() throws ManifoldCFException {

+    try {

+      checkConnection();

+      return super.check();

+    } catch (ServiceInterruption e) {

+      return "Connection temporarily failed: " + e.getMessage();

+    } catch (ManifoldCFException e) {

+      return "Connection failed: " + e.getMessage();

+    }

+  }

+

+

+  /**

+   * Set up a session

+   */

+  protected JiraSession getSession() throws ManifoldCFException, ServiceInterruption {

+    if (session == null) {

+      // Check for parameter validity

+

+      if (StringUtils.isEmpty(jiraprotocol)) {

+        throw new ManifoldCFException("Parameter " + JiraConfig.JIRA_PROTOCOL_PARAM

+            + " required but not set");

+      }

+

+      if (Logging.connectors.isDebugEnabled()) {

+        Logging.connectors.debug("JIRA: jiraprotocol = '" + jiraprotocol + "'");

+      }

+

+      if (StringUtils.isEmpty(jirahost)) {

+        throw new ManifoldCFException("Parameter " + JiraConfig.JIRA_HOST_PARAM

+            + " required but not set");

+      }

+

+      if (Logging.connectors.isDebugEnabled()) {

+        Logging.connectors.debug("JIRA: jirahost = '" + jirahost + "'");

+      }

+

+      if (Logging.connectors.isDebugEnabled()) {

+        Logging.connectors.debug("JIRA: jiraport = '" + jiraport + "'");

+      }

+

+      if (StringUtils.isEmpty(jirapath)) {

+        throw new ManifoldCFException("Parameter " + JiraConfig.JIRA_PATH_PARAM

+            + " required but not set");

+      }

+

+      if (Logging.connectors.isDebugEnabled()) {

+        Logging.connectors.debug("JIRA: jirapath = '" + jirapath + "'");

+      }

+

+      if (Logging.connectors.isDebugEnabled()) {

+        Logging.connectors.debug("JIRA: Clientid = '" + clientid + "'");

+      }

+

+      if (Logging.connectors.isDebugEnabled()) {

+        Logging.connectors.debug("JIRA: Clientsecret is " + ((clientsecret != null && clientsecret.length() > 0) ? "set" : "not set"));

+      }

+

+      String jiraurl = jiraprotocol + "://" + jirahost + (StringUtils.isEmpty(jiraport)?"":":"+jiraport) + jirapath;

+      session = new JiraSession(clientid, clientsecret, jiraurl,

+        jiraproxyhost, jiraproxyport, jiraproxydomain, jiraproxyusername, jiraproxypassword);

+

+    }

+    lastSessionFetch = System.currentTimeMillis();

+    return session;

+  }
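A worked example of the base URL that getSession() assembles from the Server tab values; the protocol, host, port, and path below are made up:

// Illustrative sketch only; mirrors the jiraurl expression in getSession().
String jiraprotocol = "https";
String jirahost = "jira.example.com";
String jiraport = "8443";              // leave empty to omit the ":port" segment
String jirapath = "/rest/api/2/";
String jiraurl = jiraprotocol + "://" + jirahost
  + (jiraport.length() == 0 ? "" : ":" + jiraport) + jirapath;
// jiraurl is now "https://jira.example.com:8443/rest/api/2/"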

+

+  /** This method is called to assess whether this connector instance should
+  * actually be counted as being connected.
+  * @return true if the connector instance is actually connected.

+  */

+  @Override

+  public boolean isConnected()

+  {

+    return session != null;

+  }

+

+  @Override

+  public void poll() throws ManifoldCFException {

+    if (lastSessionFetch == -1L) {

+      return;

+    }

+

+    long currentTime = System.currentTimeMillis();

+    if (currentTime >= lastSessionFetch + timeToRelease) {

+      session.close();

+      session = null;

+      lastSessionFetch = -1L;

+    }

+  }

+

+  /**

+   * Get the maximum number of documents to amalgamate together into one

+   * batch, for this connector.

+   *

+   * @return the maximum number. 0 indicates "unlimited".

+   */

+  @Override

+  public int getMaxDocumentRequest() {

+    return 1;

+  }

+

+  /**

+   * Return the list of relationship types that this connector recognizes.

+   *

+   * @return the list.

+   */

+  @Override

+  public String[] getRelationshipTypes() {

+    return new String[]{};

+  }

+

+  /**

+   * Fill in a Server tab configuration parameter map for calling a Velocity

+   * template.

+   *

+   * @param newMap is the map to fill in

+   * @param parameters is the current set of configuration parameters

+   */

+  private static void fillInServerConfigurationMap(Map<String, Object> newMap, IPasswordMapperActivity mapper, ConfigParams parameters) {

+    String jiraprotocol = parameters.getParameter(JiraConfig.JIRA_PROTOCOL_PARAM);

+    String jirahost = parameters.getParameter(JiraConfig.JIRA_HOST_PARAM);

+    String jiraport = parameters.getParameter(JiraConfig.JIRA_PORT_PARAM);

+    String jirapath = parameters.getParameter(JiraConfig.JIRA_PATH_PARAM);

+    String clientid = parameters.getParameter(JiraConfig.CLIENT_ID_PARAM);

+    String clientsecret = parameters.getObfuscatedParameter(JiraConfig.CLIENT_SECRET_PARAM);

+

+    if (jiraprotocol == null)

+      jiraprotocol = JiraConfig.JIRA_PROTOCOL_DEFAULT;

+    if (jirahost == null)

+      jirahost = JiraConfig.JIRA_HOST_DEFAULT;

+    if (jiraport == null)

+      jiraport = JiraConfig.JIRA_PORT_DEFAULT;

+    if (jirapath == null)

+      jirapath = JiraConfig.JIRA_PATH_DEFAULT;

+    

+    if (clientid == null)

+      clientid = JiraConfig.CLIENT_ID_DEFAULT;

+    if (clientsecret == null)

+      clientsecret = JiraConfig.CLIENT_SECRET_DEFAULT;

+    else

+      clientsecret = mapper.mapPasswordToKey(clientsecret);

+

+    newMap.put("JIRAPROTOCOL", jiraprotocol);

+    newMap.put("JIRAHOST", jirahost);

+    newMap.put("JIRAPORT", jiraport);

+    newMap.put("JIRAPATH", jirapath);

+    newMap.put("CLIENTID", clientid);

+    newMap.put("CLIENTSECRET", clientsecret);

+  }

+

+  /**

+   * Fill in a Proxy tab configuration parameter map for calling a Velocity

+   * template.

+   *

+   * @param newMap is the map to fill in

+   * @param parameters is the current set of configuration parameters

+   */

+  private static void fillInProxyConfigurationMap(Map<String, Object> newMap, IPasswordMapperActivity mapper, ConfigParams parameters) {

+    String jiraproxyhost = parameters.getParameter(JiraConfig.JIRA_PROXYHOST_PARAM);

+    String jiraproxyport = parameters.getParameter(JiraConfig.JIRA_PROXYPORT_PARAM);

+    String jiraproxydomain = parameters.getParameter(JiraConfig.JIRA_PROXYDOMAIN_PARAM);

+    String jiraproxyusername = parameters.getParameter(JiraConfig.JIRA_PROXYUSERNAME_PARAM);

+    String jiraproxypassword = parameters.getObfuscatedParameter(JiraConfig.JIRA_PROXYPASSWORD_PARAM);

+

+    if (jiraproxyhost == null)

+      jiraproxyhost = JiraConfig.JIRA_PROXYHOST_DEFAULT;

+    if (jiraproxyport == null)

+      jiraproxyport = JiraConfig.JIRA_PROXYPORT_DEFAULT;

+

+    if (jiraproxydomain == null)

+      jiraproxydomain = JiraConfig.JIRA_PROXYDOMAIN_DEFAULT;

+    if (jiraproxyusername == null)

+      jiraproxyusername = JiraConfig.JIRA_PROXYUSERNAME_DEFAULT;

+    if (jiraproxypassword == null)

+      jiraproxypassword = JiraConfig.JIRA_PROXYPASSWORD_DEFAULT;

+    else

+      jiraproxypassword = mapper.mapPasswordToKey(jiraproxypassword);

+

+    newMap.put("JIRAPROXYHOST", jiraproxyhost);

+    newMap.put("JIRAPROXYPORT", jiraproxyport);

+    newMap.put("JIRAPROXYDOMAIN", jiraproxydomain);

+    newMap.put("JIRAPROXYUSERNAME", jiraproxyusername);

+    newMap.put("JIRAPROXYPASSWORD", jiraproxypassword);

+  }

+

+  /**

+   * View configuration. This method is called in the body section of the

+   * connector's view configuration page. Its purpose is to present the

+   * connection information to the user. The coder can presume that the HTML

+   * that is output from this configuration will be within appropriate <html>

+   * and <body> tags.

+   *

+   * @param threadContext is the local thread context.

+   * @param out is the output to which any HTML should be sent.

+   * @param parameters are the configuration parameters, as they currently

+   * exist, for this connection being configured.

+   */

+  @Override

+  public void viewConfiguration(IThreadContext threadContext, IHTTPOutput out,

+      Locale locale, ConfigParams parameters) throws ManifoldCFException, IOException {

+    Map<String, Object> paramMap = new HashMap<String, Object>();

+

+    // Fill in map from each tab

+    fillInServerConfigurationMap(paramMap, out, parameters);

+    fillInProxyConfigurationMap(paramMap, out, parameters);

+

+    Messages.outputResourceWithVelocity(out,locale,VIEW_CONFIG_FORWARD,paramMap);

+  }

+

+  /**

+   *

+   * Output the configuration header section. This method is called in the

+   * head section of the connector's configuration page. Its purpose is to add

+   * the required tabs to the list, and to output any javascript methods that

+   * might be needed by the configuration editing HTML.

+   *

+   * @param threadContext is the local thread context.

+   * @param out is the output to which any HTML should be sent.

+   * @param parameters are the configuration parameters, as they currently

+   * exist, for this connection being configured.

+   * @param tabsArray is an array of tab names. Add to this array any tab

+   * names that are specific to the connector.

+   */

+  @Override

+  public void outputConfigurationHeader(IThreadContext threadContext,

+      IHTTPOutput out, Locale locale, ConfigParams parameters, List<String> tabsArray)

+      throws ManifoldCFException, IOException {

+    // Add the Server tab

+    tabsArray.add(Messages.getString(locale, JIRA_SERVER_TAB_PROPERTY));

+    // Add the Proxy tab

+    tabsArray.add(Messages.getString(locale, JIRA_PROXY_TAB_PROPERTY));

+    // Map the parameters

+    Map<String, Object> paramMap = new HashMap<String, Object>();

+

+    // Fill in the parameters from each tab

+    fillInServerConfigurationMap(paramMap, out, parameters);

+    fillInProxyConfigurationMap(paramMap, out, parameters);

+

+    // Output the Javascript - only one Velocity template for all tabs

+    Messages.outputResourceWithVelocity(out,locale,EDIT_CONFIG_HEADER_FORWARD,paramMap);

+  }

+

+  @Override

+  public void outputConfigurationBody(IThreadContext threadContext,

+      IHTTPOutput out, Locale locale, ConfigParams parameters, String tabName)

+      throws ManifoldCFException, IOException {

+

+

+    // Call the Velocity templates for each tab

+    Map<String, Object> paramMap = new HashMap<String, Object>();

+    // Set the tab name

+    paramMap.put("TabName", tabName);

+

+    // Fill in the parameters

+    fillInServerConfigurationMap(paramMap, out, parameters);

+    fillInProxyConfigurationMap(paramMap, out, parameters);

+

+    // Server tab

+    Messages.outputResourceWithVelocity(out,locale,EDIT_CONFIG_FORWARD_SERVER,paramMap);

+    // Proxy tab

+    Messages.outputResourceWithVelocity(out,locale,EDIT_CONFIG_FORWARD_PROXY,paramMap);

+

+  }

+

+  /**

+   * Process a configuration post. This method is called at the start of the

+   * connector's configuration page, whenever there is a possibility that form

+   * data for a connection has been posted. Its purpose is to gather form

+   * information and modify the configuration parameters accordingly. The name

+   * of the posted form is "editconnection".

+   *

+   * @param threadContext is the local thread context.

+   * @param variableContext is the set of variables available from the post,

+   * including binary file post information.

+   * @param parameters are the configuration parameters, as they currently

+   * exist, for this connection being configured.

+   * @return null if all is well, or a string error message if there is an

+   * error that should prevent saving of the connection (and cause a

+   * redirection to an error page).

+   *

+   */

+  @Override

+  public String processConfigurationPost(IThreadContext threadContext,

+    IPostParameters variableContext, ConfigParams parameters)

+    throws ManifoldCFException {

+

+    // Server tab parameters

+

+    String jiraprotocol = variableContext.getParameter("jiraprotocol");

+    if (jiraprotocol != null)

+      parameters.setParameter(JiraConfig.JIRA_PROTOCOL_PARAM, jiraprotocol);

+

+    String jirahost = variableContext.getParameter("jirahost");

+    if (jirahost != null)

+      parameters.setParameter(JiraConfig.JIRA_HOST_PARAM, jirahost);

+

+    String jiraport = variableContext.getParameter("jiraport");

+    if (jiraport != null)

+      parameters.setParameter(JiraConfig.JIRA_PORT_PARAM, jiraport);

+

+    String jirapath = variableContext.getParameter("jirapath");

+    if (jirapath != null)

+      parameters.setParameter(JiraConfig.JIRA_PATH_PARAM, jirapath);

+

+    String clientid = variableContext.getParameter("clientid");

+    if (clientid != null)

+      parameters.setParameter(JiraConfig.CLIENT_ID_PARAM, clientid);

+

+    String clientsecret = variableContext.getParameter("clientsecret");

+    if (clientsecret != null)

+      parameters.setObfuscatedParameter(JiraConfig.CLIENT_SECRET_PARAM, variableContext.mapKeyToPassword(clientsecret));

+

+    // Proxy tab parameters

+    

+    String jiraproxyhost = variableContext.getParameter("jiraproxyhost");

+    if (jiraproxyhost != null)

+      parameters.setParameter(JiraConfig.JIRA_PROXYHOST_PARAM, jiraproxyhost);

+

+    String jiraproxyport = variableContext.getParameter("jiraproxyport");

+    if (jiraproxyport != null)

+      parameters.setParameter(JiraConfig.JIRA_PROXYPORT_PARAM, jiraproxyport);

+    

+    String jiraproxydomain = variableContext.getParameter("jiraproxydomain");

+    if (jiraproxydomain != null)

+      parameters.setParameter(JiraConfig.JIRA_PROXYDOMAIN_PARAM, jiraproxydomain);

+

+    String jiraproxyusername = variableContext.getParameter("jiraproxyusername");

+    if (jiraproxyusername != null)

+      parameters.setParameter(JiraConfig.JIRA_PROXYUSERNAME_PARAM, jiraproxyusername);

+

+    String jiraproxypassword = variableContext.getParameter("jiraproxypassword");

+    if (jiraproxypassword != null)

+      parameters.setObfuscatedParameter(JiraConfig.JIRA_PROXYPASSWORD_PARAM, variableContext.mapKeyToPassword(jiraproxypassword));

+

+    return null;

+  }

+

+  /**

+   * Fill in specification Velocity parameter map for JIRAQuery tab.

+   */

+  private static void fillInJIRAQuerySpecificationMap(Map<String, Object> newMap, DocumentSpecification ds) {

+    String JiraQuery = JiraConfig.JIRA_QUERY_DEFAULT;

+    for (int i = 0; i < ds.getChildCount(); i++) {

+      SpecificationNode sn = ds.getChild(i);

+      if (sn.getType().equals(JOB_STARTPOINT_NODE_TYPE)) {

+        JiraQuery = sn.getAttributeValue(JOB_QUERY_ATTRIBUTE);

+      }

+    }

+    newMap.put("JIRAQUERY", JiraQuery);

+  }

+

+  /**

+   * Fill in specification Velocity parameter map for JIRASecurity tab.

+   */

+  private static void fillInJIRASecuritySpecificationMap(Map<String, Object> newMap, DocumentSpecification ds) {

+    List<Map<String,String>> accessTokenList = new ArrayList<Map<String,String>>();

+    String securityValue = "on";

+    for (int i = 0; i < ds.getChildCount(); i++) {

+      SpecificationNode sn = ds.getChild(i);

+      if (sn.getType().equals(JOB_ACCESS_NODE_TYPE)) {

+        String token = sn.getAttributeValue(JOB_TOKEN_ATTRIBUTE);

+        Map<String,String> accessMap = new HashMap<String,String>();

+        accessMap.put("TOKEN",token);

+        accessTokenList.add(accessMap);

+      } else if (sn.getType().equals(JOB_SECURITY_NODE_TYPE)) {

+        securityValue = sn.getAttributeValue(JOB_VALUE_ATTRIBUTE);

+      }

+    }

+    newMap.put("ACCESSTOKENS", accessTokenList);

+    newMap.put("SECURITYON", securityValue);

+  }
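The two fill-in methods above read nodes of type "startpoint", "security", and "access" out of the document specification. A hedged sketch of how such a specification might be populated programmatically follows; processSpecificationPost() builds the same nodes from posted form data, and the query and value shown here are invented:

// Illustrative sketch only: builds the specification nodes the fill-in methods read.
import org.apache.manifoldcf.core.interfaces.SpecificationNode;
import org.apache.manifoldcf.crawler.interfaces.DocumentSpecification;

public class JiraSpecExample {
  public static void populate(DocumentSpecification ds) throws Exception {
    SpecificationNode query = new SpecificationNode("startpoint");
    query.setAttribute("query", "project = DEMO ORDER BY createdDate Asc");
    ds.addChild(ds.getChildCount(), query);

    SpecificationNode security = new SpecificationNode("security");
    security.setAttribute("value", "on");
    ds.addChild(ds.getChildCount(), security);
  }
}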

+

+  /**

+   * View specification. This method is called in the body section of a job's

+   * view page. Its purpose is to present the document specification

+   * information to the user. The coder can presume that the HTML that is

+   * output from this configuration will be within appropriate <html> and

+   * <body> tags.

+   *

+   * @param out is the output to which any HTML should be sent.

+   * @param ds is the current document specification for this job.

+   */

+  @Override

+  public void viewSpecification(IHTTPOutput out, Locale locale, DocumentSpecification ds)

+      throws ManifoldCFException, IOException {

+

+    Map<String, Object> paramMap = new HashMap<String, Object>();

+

+    // Fill in the map with data from all tabs

+    fillInJIRAQuerySpecificationMap(paramMap, ds);

+    fillInJIRASecuritySpecificationMap(paramMap, ds);

+

+    Messages.outputResourceWithVelocity(out,locale,VIEW_SPEC_FORWARD,paramMap);

+  }

+

+  /**

+   * Process a specification post. This method is called at the start of a job's

+   * edit or view page, whenever there is a possibility that form data for a

+   * connection has been posted. Its purpose is to gather form information and

+   * modify the document specification accordingly. The name of the posted

+   * form is "editjob".

+   *

+   * @param variableContext contains the post data, including binary

+   * file-upload information.

+   * @param ds is the current document specification for this job.

+   * @return null if all is well, or a string error message if there is an

+   * error that should prevent saving of the job (and cause a redirection to

+   * an error page).

+   */

+  @Override

+  public String processSpecificationPost(IPostParameters variableContext,

+      DocumentSpecification ds) throws ManifoldCFException {

+

+    String jiraDriveQuery = variableContext.getParameter("jiraquery");

+    if (jiraDriveQuery != null) {

+      int i = 0;

+      while (i < ds.getChildCount()) {

+        SpecificationNode oldNode = ds.getChild(i);

+        if (oldNode.getType().equals(JOB_STARTPOINT_NODE_TYPE)) {

+          ds.removeChild(i);

+          break;

+        }

+        i++;

+      }

+      SpecificationNode node = new SpecificationNode(JOB_STARTPOINT_NODE_TYPE);

+      node.setAttribute(JOB_QUERY_ATTRIBUTE, jiraDriveQuery);

+      ds.addChild(ds.getChildCount(), node);

+    }

+    

+    String securityOn = variableContext.getParameter("specsecurity");

+    if (securityOn != null) {

+      // Delete all security records first

+      int i = 0;

+      while (i < ds.getChildCount()) {

+        SpecificationNode sn = ds.getChild(i);

+        if (sn.getType().equals(JOB_SECURITY_NODE_TYPE))

+          ds.removeChild(i);

+        else

+          i++;

+      }

+      SpecificationNode node = new SpecificationNode(JOB_SECURITY_NODE_TYPE);

+      node.setAttribute(JOB_VALUE_ATTRIBUTE,securityOn);

+      ds.addChild(ds.getChildCount(),node);

+    }

+    

+    String xc = variableContext.getParameter("tokencount");

+    if (xc != null) {

+      // Delete all tokens first

+      int i = 0;

+      while (i < ds.getChildCount()) {

+        SpecificationNode sn = ds.getChild(i);

+        if (sn.getType().equals(JOB_ACCESS_NODE_TYPE))

+          ds.removeChild(i);

+        else

+          i++;

+      }

+

+      int accessCount = Integer.parseInt(xc);

+      i = 0;

+      while (i < accessCount) {

+        String accessDescription = "_"+Integer.toString(i);

+        String accessOpName = "accessop"+accessDescription;

+        xc = variableContext.getParameter(accessOpName);

+        if (xc != null && xc.equals("Delete")) {

+          // Next row

+          i++;

+          continue;

+        }

+        // Get the stuff we need

+        String accessSpec = variableContext.getParameter("spectoken"+accessDescription);

+        SpecificationNode node = new SpecificationNode(JOB_ACCESS_NODE_TYPE);

+        node.setAttribute(JOB_TOKEN_ATTRIBUTE,accessSpec);

+        ds.addChild(ds.getChildCount(),node);

+        i++;

+      }

+

+      String op = variableContext.getParameter("accessop");

+      if (op != null && op.equals("Add"))

+      {

+        String accessspec = variableContext.getParameter("spectoken");

+        SpecificationNode node = new SpecificationNode(JOB_ACCESS_NODE_TYPE);

+        node.setAttribute(JOB_TOKEN_ATTRIBUTE,accessspec);

+        ds.addChild(ds.getChildCount(),node);

+      }

+    }

+

+    return null;

+  }

+

+  /**

+   * Output the specification body section. This method is called in the body

+   * section of a job page which has selected a repository connection of the

+   * current type. Its purpose is to present the required form elements for

+   * editing. The coder can presume that the HTML that is output from this

+   * configuration will be within appropriate <html>, <body>, and <form> tags.

+   * The name of the form is "editjob".

+   *

+   * @param out is the output to which any HTML should be sent.

+   * @param ds is the current document specification for this job.

+   * @param tabName is the current tab name.

+   */

+  @Override

+  public void outputSpecificationBody(IHTTPOutput out,

+      Locale locale, DocumentSpecification ds, String tabName) throws ManifoldCFException,

+      IOException {

+

+    // Output JIRAQuery tab

+    Map<String, Object> paramMap = new HashMap<String, Object>();

+    paramMap.put("TabName", tabName);

+    fillInJIRAQuerySpecificationMap(paramMap, ds);

+    fillInJIRASecuritySpecificationMap(paramMap, ds);

+    Messages.outputResourceWithVelocity(out,locale,EDIT_SPEC_FORWARD_JIRAQUERY,paramMap);

+    Messages.outputResourceWithVelocity(out,locale,EDIT_SPEC_FORWARD_SECURITY,paramMap);

+  }

+

+  /**

+   * Output the specification header section. This method is called in the

+   * head section of a job page which has selected a repository connection of

+   * the current type. Its purpose is to add the required tabs to the list,

+   * and to output any javascript methods that might be needed by the job

+   * editing HTML.

+   *

+   * @param out is the output to which any HTML should be sent.

+   * @param ds is the current document specification for this job.

+   * @param tabsArray is an array of tab names. Add to this array any tab

+   * names that are specific to the connector.

+   */

+  @Override

+  public void outputSpecificationHeader(IHTTPOutput out,

+      Locale locale, DocumentSpecification ds, List<String> tabsArray)

+      throws ManifoldCFException, IOException {

+

+    tabsArray.add(Messages.getString(locale, JIRA_QUERY_TAB_PROPERTY));

+    tabsArray.add(Messages.getString(locale, JIRA_SECURITY_TAB_PROPERTY));

+

+    Map<String, Object> paramMap = new HashMap<String, Object>();

+

+    // Fill in the specification header map, using data from all tabs.

+    fillInJIRAQuerySpecificationMap(paramMap, ds);

+    fillInJIRASecuritySpecificationMap(paramMap, ds);

+

+    Messages.outputResourceWithVelocity(out,locale,EDIT_SPEC_HEADER_FORWARD,paramMap);

+  }

+

+  /**

+   * Queue "seed" documents. Seed documents are the starting places for

+   * crawling activity. Documents are seeded when this method calls

+   * appropriate methods in the passed in ISeedingActivity object.

+   *

+   * This method can choose to find repository changes that happen only during

+   * the specified time interval. The seeds recorded by this method will be

+   * viewed by the framework based on what the getConnectorModel() method

+   * returns.

+   *

+   * It is not a big problem if the connector chooses to create more seeds

+   * than are strictly necessary; it is merely a question of overall work

+   * required.

+   *

+   * The times passed to this method may be interpreted for greatest

+   * efficiency. The time ranges any given job uses with this connector will

+   * not overlap, but will proceed starting at 0 and going to the "current

+   * time", each time the job is run. For continuous crawling jobs, this

+   * method will be called once, when the job starts, and at various periodic

+   * intervals as the job executes.

+   *

+   * When a job's specification is changed, the framework automatically resets

+   * the seeding start time to 0. The seeding start time may also be set to 0

+   * on each job run, depending on the connector model returned by

+   * getConnectorModel().

+   *

+   * Note that it is always ok to send MORE documents rather than fewer to this

+   * method.

+   *

+   * @param activities is the interface this method should use to perform

+   * whatever framework actions are desired.

+   * @param spec is a document specification (that comes from the job).

+   * @param startTime is the beginning of the time range to consider,

+   * inclusive.

+   * @param endTime is the end of the time range to consider, exclusive.

+   * @param jobMode is an integer describing how the job is being run, whether

+   * continuous or once-only.

+   */

+  @Override

+  public void addSeedDocuments(ISeedingActivity activities,

+      DocumentSpecification spec, long startTime, long endTime, int jobMode)

+      throws ManifoldCFException, ServiceInterruption {

+

+    String jiraDriveQuery = JiraConfig.JIRA_QUERY_DEFAULT;

+    int i = 0;

+    while (i < spec.getChildCount()) {

+      SpecificationNode sn = spec.getChild(i);

+      if (sn.getType().equals(JOB_STARTPOINT_NODE_TYPE)) {

+        jiraDriveQuery = sn.getAttributeValue(JOB_QUERY_ATTRIBUTE);

+        break;

+      }

+      i++;

+    }

+

+    GetSeedsThread t = new GetSeedsThread(getSession(), jiraDriveQuery);

+    try {

+      t.start();

+      boolean wasInterrupted = false;

+      try {

+        XThreadStringBuffer seedBuffer = t.getBuffer();

+        // Pick up the paths, and add them to the activities, before we join with the child thread.

+        while (true) {

+          // The only kind of exceptions this can throw are going to shut the process down.

+          String issueKey = seedBuffer.fetch();

+          if (issueKey ==  null)

+            break;

+          // Add the issue key to the queue

+          activities.addSeedDocument("I-"+issueKey);

+        }

+      } catch (InterruptedException e) {

+        wasInterrupted = true;

+        throw e;

+      } catch (ManifoldCFException e) {

+        if (e.getErrorCode() == ManifoldCFException.INTERRUPTED)

+          wasInterrupted = true;

+        throw e;

+      } finally {

+        if (!wasInterrupted)

+          t.finishUp();

+      }

+    } catch (InterruptedException e) {

+      t.interrupt();

+      throw new ManifoldCFException("Interrupted: " + e.getMessage(), e,

+        ManifoldCFException.INTERRUPTED);

+    } catch (java.net.SocketTimeoutException e) {

+      handleIOException(e);

+    } catch (InterruptedIOException e) {

+      t.interrupt();

+      handleIOException(e);

+    } catch (IOException e) {

+      handleIOException(e);

+    } catch (ResponseException e) {

+      handleResponseException(e);

+    }

+  }
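The "I-" prefix applied above is the connector's document-identifier convention; processDocuments() strips it again before fetching the issue. A tiny illustration with a made-up key:

// Illustrative sketch only.
String issueKey = "DEMO-42";                            // hypothetical Jira issue key
String documentIdentifier = "I-" + issueKey;            // what addSeedDocument() receives
String recoveredKey = documentIdentifier.substring(2);  // "DEMO-42" again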

+  

+

+  

+  /**

+   * Process a set of documents. This is the method that should cause each

+   * document to be fetched, processed, and the results either added to the

+   * queue of documents for the current job, and/or entered into the

+   * incremental ingestion manager. The document specification allows this

+   * class to filter what is done based on the job.

+   *

+   * @param documentIdentifiers is the set of document identifiers to process.

+   * @param versions is the corresponding document versions to process, as

+   * returned by getDocumentVersions() above. The implementation may choose to

+   * ignore this parameter and always process the current version.

+   * @param activities is the interface this method should use to queue up new

+   * document references and ingest documents.

+   * @param spec is the document specification.

+   * @param scanOnly is an array corresponding to the document identifiers. It

+   * is set to true to indicate when the processing should only find other

+   * references, and should not actually call the ingestion methods.

+   * @param jobMode is an integer describing how the job is being run, whether

+   * continuous or once-only.

+   */

+  @SuppressWarnings("unchecked")

+  @Override

+  public void processDocuments(String[] documentIdentifiers, String[] versions,

+      IProcessActivity activities, DocumentSpecification spec,

+      boolean[] scanOnly) throws ManifoldCFException, ServiceInterruption {

+

+    Logging.connectors.debug("JIRA: Inside processDocuments");

+

+    for (int i = 0; i < documentIdentifiers.length; i++) {

+      String nodeId = documentIdentifiers[i];

+      String version = versions[i];

+      

+      long startTime = System.currentTimeMillis();

+      String errorCode = "FAILED";

+      String errorDesc = StringUtils.EMPTY;

+      Long fileSize = null;

+      boolean doLog = false;

+      

+      try {

+        if (Logging.connectors.isDebugEnabled()) {

+          Logging.connectors.debug("JIRA: Processing document identifier '"

+              + nodeId + "'");

+        }

+

+        if (!scanOnly[i]) {

+          doLog = true;

+

+          if (nodeId.startsWith("I-")) {

+            // It's an issue

+            String issueKey = nodeId.substring(2);

+            JiraIssue jiraFile = getIssue(issueKey);

+            if (jiraFile == null) {

+              activities.deleteDocument(nodeId, version);

+              continue;

+            }

+            

+            if (Logging.connectors.isDebugEnabled()) {

+              Logging.connectors.debug("JIRA: This issue exists: " + jiraFile.getKey());

+            }

+

+            // Unpack the version string

+            ArrayList acls = new ArrayList();

+            StringBuilder denyAclBuffer = new StringBuilder();

+            int index = unpackList(acls,version,0,'+');

+            if (index < version.length() && version.charAt(index++) == '+') {

+              index = unpack(denyAclBuffer,version,index,'+');

+            }

+

+            //otherwise process

+            RepositoryDocument rd = new RepositoryDocument();

+              

+            // Turn into acls and add into description

+            String[] aclArray = new String[acls.size()];

+            for (int j = 0; j < aclArray.length; j++) {

+              aclArray[j] = (String)acls.get(j);

+            }

+            rd.setACL(aclArray);

+            if (denyAclBuffer.length() > 0) {

+              String[] denyAclArray = new String[]{denyAclBuffer.toString()};

+              rd.setDenyACL(denyAclArray);

+            }

+

+            // Now do standard stuff

+              

+            String mimeType = "text/plain";

+            Date createdDate = jiraFile.getCreatedDate();

+            Date modifiedDate = jiraFile.getUpdatedDate();

+

+            rd.setMimeType(mimeType);

+            if (createdDate != null)

+              rd.setCreatedDate(createdDate);

+            if (modifiedDate != null)

+              rd.setModifiedDate(modifiedDate);

+            

+            // Get general document metadata

+            Map<String,String[]> metadataMap = jiraFile.getMetadata();

+              

+            for (Entry<String, String[]> entry : metadataMap.entrySet()) {

+              rd.addField(entry.getKey(), entry.getValue());

+            }

+

+            String documentURI = jiraFile.getSelf();

+            String document = getJiraBody(jiraFile);

+            try {

+              byte[] documentBytes = document.getBytes("UTF-8");

+              InputStream is = new ByteArrayInputStream(documentBytes);

+              try {

+                rd.setBinary(is, documentBytes.length);

+                activities.ingestDocument(nodeId, version, documentURI, rd);

+                // No errors.  Record the fact that we made it.

+                errorCode = "OK";

+                fileSize = new Long(documentBytes.length);

+              } finally {

+                is.close();

+              }

+            } catch (java.io.IOException e) {

+              throw new RuntimeException("UTF-8 encoding unknown!!");

+            }

+          }

+        }

+      } finally {

+        if (doLog)

+          activities.recordActivity(new Long(startTime), ACTIVITY_READ,

+            fileSize, nodeId, errorCode, errorDesc, null);

+      }

+    }

+  }

+

+  protected static String getJiraBody(JiraIssue jiraFile) {

+    String summary = jiraFile.getSummary();

+    String description = jiraFile.getDescription();

+    StringBuilder body = new StringBuilder();

+    if (summary != null)

+      body.append(summary);

+    if (description != null) {

+      if (body.length() > 0)

+        body.append(" : ");

+      body.append(description);

+    }

+    return body.toString();

+  }

+

+  /**

+   * The short version of getDocumentVersions. Get document versions given an

+   * array of document identifiers. This method is called for EVERY document

+   * that is considered. It is therefore important to perform as little work

+   * as possible here.

+   *

+   * @param documentIdentifiers is the array of local document identifiers, as

+   * understood by this connector.

+   * @param spec is the current document specification for the current job. If

+   * there is a dependency on this specification, then the version string

+   * should include the pertinent data, so that reingestion will occur when

+   * the specification changes. This is primarily useful for metadata.

+   * @return the corresponding version strings, with null in the places where

+   * the document no longer exists. Empty version strings indicate that there

+   * is no versioning ability for the corresponding document, and the document

+   * will always be processed.

+   */

+  @Override

+  public String[] getDocumentVersions(String[] documentIdentifiers,

+      DocumentSpecification spec) throws ManifoldCFException,

+      ServiceInterruption {

+

+    // Forced acls

+    String[] acls = getAcls(spec);

+    if (acls != null)

+      java.util.Arrays.sort(acls);

+

+    String[] rval = new String[documentIdentifiers.length];

+    for (int i = 0; i < rval.length; i++) {

+      String nodeId = documentIdentifiers[i];

+      if (nodeId.startsWith("I-")) {

+        // It is an issue

+        String issueID = nodeId.substring(2);

+        JiraIssue jiraFile = getIssue(issueID);

+        Date rev = jiraFile.getUpdatedDate();

+        if (rev != null) {

+          StringBuilder sb = new StringBuilder();

+

+          String[] aclsToUse;

+          if (acls == null) {

+            // Get acls from issue

+            List<String> users = getUsers(issueID);

+            aclsToUse = (String[])users.toArray(new String[0]);

+            java.util.Arrays.sort(aclsToUse);

+          } else {

+            aclsToUse = acls;

+          }

+          

+          // Acls

+          packList(sb,aclsToUse,'+');

+          if (aclsToUse.length > 0) {

+            sb.append('+');

+            pack(sb,defaultAuthorityDenyToken,'+');

+          } else

+            sb.append('-');

+          sb.append(rev.toString());

+          rval[i] = sb.toString();

+        } else {

+          // A Jira issue with no updated date yields a null version string, so the
+          // framework treats the document as no longer existing; it is not clear
+          // whether Jira can actually return an issue without an updated date.

+          rval[i] = null;

+        }

+      }

+    }

+    return rval;

+  }

+

+  /** Grab forced acl out of document specification.

+  *@param spec is the document specification.

+  *@return the acls, or null if security is on (and the acls need to be fetched)

+  */

+  protected static String[] getAcls(DocumentSpecification spec) {

+    Set<String> map = new HashSet<String>();

+    for (int i = 0; i < spec.getChildCount(); i++) {

+      SpecificationNode sn = spec.getChild(i);

+      if (sn.getType().equals(JOB_ACCESS_NODE_TYPE)) {

+        String token = sn.getAttributeValue(JOB_TOKEN_ATTRIBUTE);

+        map.add(token);

+      }

+      else if (sn.getType().equals(JOB_SECURITY_NODE_TYPE)) {

+        String onOff = sn.getAttributeValue(JOB_VALUE_ATTRIBUTE);

+        if (onOff != null && onOff.equals("on"))

+          return null;

+      }

+    }

+

+    String[] rval = new String[map.size()];

+    Iterator<String> iter = map.iterator();

+    int i = 0;

+    while (iter.hasNext()) {

+      rval[i++] = (String)iter.next();

+    }

+    return rval;

+  }

+

+  private static void handleIOException(IOException e)

+    throws ManifoldCFException, ServiceInterruption {

+    if (!(e instanceof java.net.SocketTimeoutException) && (e instanceof InterruptedIOException)) {

+      throw new ManifoldCFException("Interrupted: " + e.getMessage(), e,

+        ManifoldCFException.INTERRUPTED);

+    }

+    Logging.connectors.warn("JIRA: IO exception: "+e.getMessage(), e);

+    long currentTime = System.currentTimeMillis();

+    throw new ServiceInterruption("IO exception: "+e.getMessage(), e, currentTime + 300000L,

+      currentTime + 3 * 60 * 60000L,-1,false);

+  }

+  

+  private static void handleResponseException(ResponseException e)

+    throws ManifoldCFException, ServiceInterruption {

+    throw new ManifoldCFException("Unexpected response: "+e.getMessage(),e);

+  }

+

+  // Background threads

+

+  protected static class GetUsersThread extends Thread {

+

+    protected final JiraSession session;

+    protected final String issueKey;

+    protected Throwable exception = null;

+    protected List<String> result = null;

+

+    public GetUsersThread(JiraSession session, String issueKey) {

+      super();

+      this.session = session;

+      this.issueKey = issueKey;

+      setDaemon(true);

+    }

+

+    public void run() {

+      try {

+        result = session.getUsers(issueKey);

+      } catch (Throwable e) {

+        this.exception = e;

+      }

+    }

+

+    public void finishUp()

+      throws InterruptedException, IOException, ResponseException {

+      join();

+      Throwable thr = exception;

+      if (thr != null) {

+        if (thr instanceof IOException) {

+          throw (IOException) thr;

+        } else if (thr instanceof ResponseException) {

+          throw (ResponseException) thr;

+        } else if (thr instanceof RuntimeException) {

+          throw (RuntimeException) thr;

+        } else {

+          throw (Error) thr;

+        }

+      }

+    }

+    

+    public List<String> getResult() {

+      return result;

+    }

+

+  }

+

+  protected List<String> getUsers(String issueKey) throws ManifoldCFException, ServiceInterruption {

+    GetUsersThread t = new GetUsersThread(getSession(), issueKey);

+    try {

+      t.start();

+      t.finishUp();

+      return t.getResult();

+    } catch (InterruptedException e) {

+      t.interrupt();

+      throw new ManifoldCFException("Interrupted: " + e.getMessage(), e,

+        ManifoldCFException.INTERRUPTED);

+    } catch (java.net.SocketTimeoutException e) {

+      handleIOException(e);

+    } catch (InterruptedIOException e) {

+      t.interrupt();

+      handleIOException(e);

+    } catch (IOException e) {

+      handleIOException(e);

+    } catch (ResponseException e) {

+      handleResponseException(e);

+    }

+    return null;

+  }

+

+  protected static class CheckConnectionThread extends Thread {

+

+    protected final JiraSession session;

+    protected Throwable exception = null;

+

+    public CheckConnectionThread(JiraSession session) {

+      super();

+      this.session = session;

+      setDaemon(true);

+    }

+

+    public void run() {

+      try {

+        session.getRepositoryInfo();

+      } catch (Throwable e) {

+        this.exception = e;

+      }

+    }

+

+    public void finishUp()

+      throws InterruptedException, IOException, ResponseException {

+      join();

+      Throwable thr = exception;

+      if (thr != null) {

+        if (thr instanceof IOException) {

+          throw (IOException) thr;

+        } else if (thr instanceof ResponseException) {

+          throw (ResponseException) thr;

+        } else if (thr instanceof RuntimeException) {

+          throw (RuntimeException) thr;

+        } else {

+          throw (Error) thr;

+        }

+      }

+    }

+  }

+

+  protected void checkConnection() throws ManifoldCFException, ServiceInterruption {

+    CheckConnectionThread t = new CheckConnectionThread(getSession());

+    try {

+      t.start();

+      t.finishUp();

+      return;

+    } catch (InterruptedException e) {

+      t.interrupt();

+      throw new ManifoldCFException("Interrupted: " + e.getMessage(), e,

+        ManifoldCFException.INTERRUPTED);

+    } catch (java.net.SocketTimeoutException e) {

+      handleIOException(e);

+    } catch (InterruptedIOException e) {

+      t.interrupt();

+      handleIOException(e);

+    } catch (IOException e) {

+      handleIOException(e);

+    } catch (ResponseException e) {

+      handleResponseException(e);

+    }

+  }

+

+  protected static class GetSeedsThread extends Thread {

+

+    protected Throwable exception = null;

+    protected final JiraSession session;

+    protected final String jiraDriveQuery;

+    protected final XThreadStringBuffer seedBuffer;

+    

+    public GetSeedsThread(JiraSession session, String jiraDriveQuery) {

+      super();

+      this.session = session;

+      this.jiraDriveQuery = jiraDriveQuery;

+      this.seedBuffer = new XThreadStringBuffer();

+      setDaemon(true);

+    }

+

+    @Override

+    public void run() {

+      try {

+        session.getSeeds(seedBuffer, jiraDriveQuery);

+      } catch (Throwable e) {

+        this.exception = e;

+      } finally {

+        seedBuffer.signalDone();

+      }

+    }

+

+    public XThreadStringBuffer getBuffer() {

+      return seedBuffer;

+    }

+    

+    public void finishUp()

+      throws InterruptedException, IOException, ResponseException {

+      seedBuffer.abandon();

+      join();

+      Throwable thr = exception;

+      if (thr != null) {

+        if (thr instanceof IOException)

+          throw (IOException) thr;

+        else if (thr instanceof ResponseException)

+          throw (ResponseException) thr;

+        else if (thr instanceof RuntimeException)

+          throw (RuntimeException) thr;

+        else if (thr instanceof Error)

+          throw (Error) thr;

+        else

+          throw new RuntimeException("Unhandled exception of type: "+thr.getClass().getName(),thr);

+      }

+    }

+  }

+

+  protected JiraIssue getIssue(String issueID)

+    throws ManifoldCFException, ServiceInterruption {

+    GetIssueThread t = new GetIssueThread(getSession(), issueID);

+    try {

+      t.start();

+      t.finishUp();

+    } catch (InterruptedException e) {

+      t.interrupt();

+      throw new ManifoldCFException("Interrupted: " + e.getMessage(), e,

+        ManifoldCFException.INTERRUPTED);

+    } catch (java.net.SocketTimeoutException e) {

+      handleIOException(e);

+    } catch (InterruptedIOException e) {

+      t.interrupt();

+      handleIOException(e);

+    } catch (IOException e) {

+      handleIOException(e);

+    } catch (ResponseException e) {

+      handleResponseException(e);

+    }

+    return t.getResponse();

+  }

+  

+  protected static class GetIssueThread extends Thread {

+

+    protected final JiraSession session;

+    protected final String nodeId;

+    protected Throwable exception = null;

+    protected JiraIssue response = null;

+

+    public GetIssueThread(JiraSession session, String nodeId) {

+      super();

+      setDaemon(true);

+      this.session = session;

+      this.nodeId = nodeId;

+    }

+

+    public void run() {

+      try {

+        response = session.getIssue(nodeId);

+      } catch (Throwable e) {

+        this.exception = e;

+      }

+    }

+

+    public JiraIssue getResponse() {

+      return response;

+    }

+    

+    public void finishUp() throws InterruptedException, IOException, ResponseException {

+      join();

+      Throwable thr = exception;

+      if (thr != null) {

+        if (thr instanceof IOException) {

+          throw (IOException) thr;

+        } else if (thr instanceof ResponseException) {

+          throw (ResponseException) thr;

+        } else if (thr instanceof RuntimeException) {

+          throw (RuntimeException) thr;

+        } else {

+          throw (Error) thr;

+        }

+      }

+    }

+  }

+

+}

+
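
The GetUsersThread, CheckConnectionThread, GetIssueThread, and GetSeedsThread classes above all follow the same daemon-worker pattern: run the blocking JIRA call on a separate thread, capture anything it throws, and rethrow it from finishUp() on the calling thread. The following is a minimal standalone sketch of that pattern only; the BlockingCall interface and WorkerThread class are hypothetical names used for illustration and are not part of the connector.

import java.io.IOException;

public class BackgroundCallSketch {

  /** A blocking operation that may fail with IOException (hypothetical). */
  public interface BlockingCall<T> {
    T call() throws IOException;
  }

  /** Daemon worker that captures either a result or a Throwable. */
  public static class WorkerThread<T> extends Thread {
    private final BlockingCall<T> call;
    private volatile T result = null;
    private volatile Throwable exception = null;

    public WorkerThread(BlockingCall<T> call) {
      this.call = call;
      setDaemon(true); // never keeps the JVM alive
    }

    @Override
    public void run() {
      try {
        result = call.call();
      } catch (Throwable e) {
        exception = e;
      }
    }

    /** Wait for the worker and rethrow whatever it captured. */
    public T finishUp() throws InterruptedException, IOException {
      join();
      Throwable thr = exception;
      if (thr != null) {
        if (thr instanceof IOException)
          throw (IOException) thr;
        if (thr instanceof RuntimeException)
          throw (RuntimeException) thr;
        if (thr instanceof Error)
          throw (Error) thr;
        throw new RuntimeException("Unhandled exception type: " + thr.getClass().getName(), thr);
      }
      return result;
    }
  }

  public static void main(String[] args) throws Exception {
    WorkerThread<String> t = new WorkerThread<>(() -> "hello from worker");
    t.start();
    System.out.println(t.finishUp());
  }
}

The connector's variants additionally map the rethrown IOException and ResponseException into ManifoldCFException or ServiceInterruption via handleIOException and handleResponseException.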

diff --git a/connectors/jira/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/jira/JiraSession.java b/connectors/jira/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/jira/JiraSession.java
new file mode 100644
index 0000000..c25cf36
--- /dev/null
+++ b/connectors/jira/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/jira/JiraSession.java
@@ -0,0 +1,334 @@
+/* $Id: JiraSession.java 1490586 2013-06-07 11:14:52Z kwright $ */

+/**

+ * Licensed to the Apache Software Foundation (ASF) under one or more

+ * contributor license agreements. See the NOTICE file distributed with this

+ * work for additional information regarding copyright ownership. The ASF

+ * licenses this file to You under the Apache License, Version 2.0 (the

+ * "License"); you may not use this file except in compliance with the License.

+ * You may obtain a copy of the License at

+ * 

+ * http://www.apache.org/licenses/LICENSE-2.0

+ * 

+ * Unless required by applicable law or agreed to in writing, software

+ * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT

+ * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the

+ * License for the specific language governing permissions and limitations under

+ * the License.

+ */

+package org.apache.manifoldcf.crawler.connectors.jira;

+

+import org.apache.manifoldcf.core.common.*;

+import org.apache.manifoldcf.core.interfaces.KeystoreManagerFactory;

+import org.apache.manifoldcf.core.interfaces.ManifoldCFException;

+

+import java.io.Reader;

+import java.io.Writer;

+import java.io.ByteArrayInputStream;

+import java.io.StringWriter;

+import java.io.IOException;

+import java.io.InputStream;

+import java.io.InputStreamReader;

+import java.io.InterruptedIOException;

+import java.io.OutputStream;

+import java.net.URL;

+import java.net.URLEncoder;

+

+import java.util.Map;

+import java.util.HashMap;

+import java.util.List;

+import java.util.ArrayList;

+

+import org.apache.http.conn.ClientConnectionManager;

+import org.apache.http.client.HttpClient;

+import org.apache.http.impl.conn.PoolingClientConnectionManager;

+import org.apache.http.HttpEntity;

+import org.apache.http.HttpResponse;

+import org.apache.http.HttpHost;

+import org.apache.http.auth.AuthScope;

+import org.apache.http.auth.UsernamePasswordCredentials;

+import org.apache.http.auth.NTCredentials;

+import org.apache.http.client.methods.HttpGet;

+import org.apache.http.client.methods.HttpRequestBase;

+import org.apache.http.impl.client.DefaultHttpClient;

+import org.apache.http.impl.client.DefaultRedirectStrategy;

+import org.apache.http.util.EntityUtils;

+import org.apache.http.params.BasicHttpParams;

+import org.apache.http.params.HttpParams;

+import org.apache.http.params.CoreConnectionPNames;

+import org.apache.http.conn.params.ConnRoutePNames;

+import org.apache.http.client.params.ClientPNames;

+import org.apache.http.client.HttpRequestRetryHandler;

+import org.apache.http.protocol.HttpContext;

+import org.apache.http.conn.scheme.Scheme;

+import org.apache.http.conn.ssl.SSLSocketFactory;

+import org.apache.http.conn.ssl.AllowAllHostnameVerifier;

+import org.apache.http.params.CoreProtocolPNames;

+

+import org.json.simple.JSONObject;

+import org.json.simple.JSONValue;

+import org.json.simple.JSONArray;

+

+/**

+ * A session handle for talking to the JIRA REST API over HTTP.

+ * @author andrew

+ */

+public class JiraSession {

+

+  private final String URLbase;

+  private final String clientId;

+  private final String clientSecret;

+  

+  private ClientConnectionManager connectionManager;

+  private HttpClient httpClient;

+  

+  // Current host name

+  private static String currentHost = null;

+  static

+  {

+    // Find the current host name

+    try

+    {

+      java.net.InetAddress addr = java.net.InetAddress.getLocalHost();

+

+      // Get hostname

+      currentHost = addr.getHostName();

+    }

+    catch (java.net.UnknownHostException e)

+    {

+    }

+  }

+

+  /**

+   * Constructor. Create a session.

+   */

+  public JiraSession(String clientId, String clientSecret, String URLbase,

+    String proxyHost, String proxyPort, String proxyDomain, String proxyUsername, String proxyPassword)

+    throws ManifoldCFException {

+    this.URLbase = URLbase;

+    this.clientId = clientId;

+    this.clientSecret = clientSecret;

+

+    int socketTimeout = 900000;

+    int connectionTimeout = 60000;

+

+    javax.net.ssl.SSLSocketFactory httpsSocketFactory = KeystoreManagerFactory.getTrustingSecureSocketFactory();

+    SSLSocketFactory myFactory = new SSLSocketFactory(new InterruptibleSocketFactory(httpsSocketFactory,connectionTimeout),

+      new AllowAllHostnameVerifier());

+    Scheme myHttpsProtocol = new Scheme("https", 443, myFactory);

+

+    PoolingClientConnectionManager localConnectionManager = new PoolingClientConnectionManager();

+    localConnectionManager.setMaxTotal(1);

+    connectionManager = localConnectionManager;

+    // Set up protocol registry

+    connectionManager.getSchemeRegistry().register(myHttpsProtocol);

+

+    BasicHttpParams params = new BasicHttpParams();

+    params.setBooleanParameter(CoreProtocolPNames.USE_EXPECT_CONTINUE,true);

+    params.setIntParameter(CoreProtocolPNames.WAIT_FOR_CONTINUE,socketTimeout);

+    params.setBooleanParameter(CoreConnectionPNames.TCP_NODELAY,true);

+    params.setBooleanParameter(CoreConnectionPNames.STALE_CONNECTION_CHECK,false);

+    params.setIntParameter(CoreConnectionPNames.CONNECTION_TIMEOUT,connectionTimeout);

+    params.setIntParameter(CoreConnectionPNames.SO_TIMEOUT,socketTimeout);

+    params.setBooleanParameter(ClientPNames.ALLOW_CIRCULAR_REDIRECTS,true);

+    DefaultHttpClient localHttpClient = new DefaultHttpClient(connectionManager,params);

+    // No retries

+    localHttpClient.setHttpRequestRetryHandler(new HttpRequestRetryHandler()

+      {

+        public boolean retryRequest(

+          IOException exception,

+          int executionCount,

+          HttpContext context)

+        {

+          return false;

+        }

+       

+      });

+    localHttpClient.setRedirectStrategy(new DefaultRedirectStrategy());

+      

+    // If authentication needed, set that

+    if (clientId != null)

+    {

+      localHttpClient.getCredentialsProvider().setCredentials(

+        AuthScope.ANY,

+        new UsernamePasswordCredentials(clientId,clientSecret));

+    }

+

+    // If there's a proxy, set that too.

+    if (proxyHost != null && proxyHost.length() > 0)

+    {

+

+      int proxyPortInt;

+      if (proxyPort != null && proxyPort.length() > 0)

+      {

+        try

+        {

+          proxyPortInt = Integer.parseInt(proxyPort);

+        }

+        catch (NumberFormatException e)

+        {

+          throw new ManifoldCFException("Bad number: "+e.getMessage(),e);

+        }

+      }

+      else

+        proxyPortInt = 8080;

+

+      // Configure proxy authentication

+      if (proxyUsername != null && proxyUsername.length() > 0)

+      {

+        if (proxyPassword == null)

+          proxyPassword = "";

+        if (proxyDomain == null)

+          proxyDomain = "";

+

+        localHttpClient.getCredentialsProvider().setCredentials(

+          new AuthScope(proxyHost, proxyPortInt),

+          new NTCredentials(proxyUsername, proxyPassword, currentHost, proxyDomain));

+      }

+

+      HttpHost proxy = new HttpHost(proxyHost, proxyPortInt);

+

+      localHttpClient.getParams().setParameter(ConnRoutePNames.DEFAULT_PROXY, proxy);

+    }

+

+    httpClient = localHttpClient;

+  }

+

+  /**

+   * Close session.

+   */

+  public void close() {

+    httpClient = null;

+    if (connectionManager != null)

+      connectionManager.shutdown();

+    connectionManager = null;

+  }

+

+  private static Object convertToJSON(HttpResponse httpResponse)

+    throws IOException {

+    HttpEntity entity = httpResponse.getEntity();

+    if (entity != null) {

+      InputStream is = entity.getContent();

+      try {

+        String charSet = EntityUtils.getContentCharSet(entity);

+        if (charSet == null)

+          charSet = "utf-8";

+        Reader r = new InputStreamReader(is,charSet);

+        return JSONValue.parse(r);

+      } finally {

+        is.close();

+      }

+    }

+    return null;

+  }

+

+  private static String convertToString(HttpResponse httpResponse)

+    throws IOException {

+    HttpEntity entity = httpResponse.getEntity();

+    if (entity != null) {

+      InputStream is = entity.getContent();

+      try {

+        String charSet = EntityUtils.getContentCharSet(entity);

+        if (charSet == null)

+          charSet = "utf-8";

+        char[] buffer = new char[65536];

+        Reader r = new InputStreamReader(is,charSet);

+        Writer w = new StringWriter();

+        try {

+          while (true) {

+            int amt = r.read(buffer);

+            if (amt == -1)

+              break;

+            w.write(buffer,0,amt);

+          }

+        } finally {

+          w.flush();

+        }

+        return w.toString();

+      } finally {

+        is.close();

+      }

+    }

+    return "";

+  }

+

+  private void getRest(String rightside, JiraJSONResponse response) 

+    throws IOException, ResponseException {

+

+    final HttpRequestBase method = new HttpGet(URLbase + rightside);

+    method.addHeader("Accept", "application/json");

+

+    try {

+      HttpResponse httpResponse = httpClient.execute(method);

+      int resultCode = httpResponse.getStatusLine().getStatusCode();

+      if (resultCode != 200)

+        throw new IOException("Unexpected result code "+resultCode+": "+convertToString(httpResponse));

+      Object jo = convertToJSON(httpResponse);

+      response.acceptJSONObject(jo);

+    } finally {

+      method.abort();

+    }

+  }

+

+  /**

+   * Obtain repository information.

+   */

+  public Map<String, String> getRepositoryInfo()

+    throws IOException, ResponseException {

+    HashMap<String, String> statistics = new HashMap<String, String>();

+    JiraQueryResults qr = new JiraQueryResults();

+    getRest("search?maxResults=1&jql=", qr);

+    statistics.put("Total Issues", qr.getTotal().toString());

+    return statistics;

+  }

+

+  /**

+   * Get the list of matching root documents, e.g. seeds.

+   */

+  public void getSeeds(XThreadStringBuffer idBuffer, String jiraDriveQuery)

+      throws IOException, ResponseException, InterruptedException {

+    long startAt = 0L;

+    long setSize = 800L;

+    long totalAmt = 0L;

+    do {

+      JiraQueryResults qr = new JiraQueryResults();

+      getRest("search?maxResults=" + setSize + "&startAt=" + startAt + "&jql=" + URLEncoder.encode(jiraDriveQuery, "UTF-8"), qr);

+      Long total = qr.getTotal();

+      if (total == null)

+        return;

+      totalAmt = total.longValue();

+      qr.pushIds(idBuffer);

+      startAt += setSize;

+    } while (startAt < totalAmt);

+  }

+

+  /**

+  * Get the list of users that can see the specified issue.

+  */

+  public List<String> getUsers(String issueKey)

+    throws IOException, ResponseException {

+    List<String> rval = new ArrayList<String>();

+    long startAt = 0L;

+    long setSize = 800L;

+    while (true) {

+      JiraUserQueryResults qr = new JiraUserQueryResults();

+      getRest("user/viewissue/search?username=&issueKey="+URLEncoder.encode(issueKey,"utf-8")+"&maxResults=" + setSize + "&startAt=" + startAt, qr);

+      qr.getNames(rval);

+      startAt += setSize;

+      if (rval.size() < startAt)

+        break;

+    }

+    return rval;

+  }

+

+  /**

+   * Get an individual issue.

+   */

+  public JiraIssue getIssue(String issueKey)

+    throws IOException, ResponseException {

+    JiraIssue ji = new JiraIssue();

+    getRest("issue/" + URLEncoder.encode(issueKey,"utf-8"), ji);

+    return ji;

+  }

+

+

+}
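
The getSeeds() and getUsers() methods above page through the JIRA REST API with startAt/maxResults until the server-reported total (or a short page) is reached. Below is a minimal, self-contained sketch of that paging loop only; the Page class and fetchPage() helper are hypothetical stand-ins for the REST call and are not part of JiraSession.

import java.util.ArrayList;
import java.util.List;

public class PagingSketch {

  /** One page of ids plus the server-reported total (hypothetical shape). */
  public static class Page {
    final List<String> ids;
    final long total;
    Page(List<String> ids, long total) { this.ids = ids; this.total = total; }
  }

  /** Stand-in for a call like GET search?startAt=...&maxResults=...&jql=... */
  static Page fetchPage(long startAt, long maxResults) {
    long total = 1700L; // simulated result-set size
    List<String> ids = new ArrayList<>();
    for (long i = startAt; i < Math.min(startAt + maxResults, total); i++)
      ids.add("ISSUE-" + i);
    return new Page(ids, total);
  }

  /** Mirrors the do/while structure of getSeeds(): advance startAt by the page size. */
  public static List<String> fetchAll() {
    final long setSize = 800L;
    List<String> all = new ArrayList<>();
    long startAt = 0L;
    long total;
    do {
      Page page = fetchPage(startAt, setSize);
      all.addAll(page.ids);
      total = page.total;
      startAt += setSize;
    } while (startAt < total);
    return all;
  }

  public static void main(String[] args) {
    System.out.println(fetchAll().size()); // prints 1700
  }
}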

diff --git a/connectors/jira/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/jira/JiraUserQueryResults.java b/connectors/jira/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/jira/JiraUserQueryResults.java
new file mode 100644
index 0000000..a53dedf
--- /dev/null
+++ b/connectors/jira/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/jira/JiraUserQueryResults.java
@@ -0,0 +1,53 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+
+package org.apache.manifoldcf.crawler.connectors.jira;
+
+import org.apache.manifoldcf.core.common.*;
+
+import java.io.IOException;
+import java.util.List;
+import java.util.ArrayList;
+
+import org.json.simple.JSONObject;
+import org.json.simple.JSONArray;
+
+/** An instance of this class represents the results of a Jira user query, and
+* the ability to parse the corresponding JSON response.
+*/
+public class JiraUserQueryResults extends JiraJSONResponse {
+
+  // Specific keys we care about
+  private final static String KEY_NAME = "name";
+
+  public JiraUserQueryResults() {
+    super();
+  }
+
+  public void getNames(List<String> nameBuffer) {
+    JSONArray users = (JSONArray)object;
+    for (Object user : users) {
+      if (user instanceof JSONObject) {
+        JSONObject jo = (JSONObject)user;
+        nameBuffer.add(jo.get(KEY_NAME).toString());
+      }
+    }
+  }
+  
+}
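
JiraUserQueryResults.getNames() above walks a json-simple JSONArray of user objects and collects each "name" field. A small self-contained example of the same traversal using json-simple directly; the sample JSON string is made up for illustration only.

import java.util.ArrayList;
import java.util.List;

import org.json.simple.JSONArray;
import org.json.simple.JSONObject;
import org.json.simple.JSONValue;

public class UserNamesSketch {
  public static void main(String[] args) {
    // Hypothetical payload in the same shape as a JIRA user search response.
    String json = "[{\"name\":\"alice\"},{\"name\":\"bob\"}]";
    Object parsed = JSONValue.parse(json);
    List<String> names = new ArrayList<>();
    if (parsed instanceof JSONArray) {
      for (Object user : (JSONArray) parsed) {
        if (user instanceof JSONObject) {
          Object name = ((JSONObject) user).get("name");
          if (name != null)
            names.add(name.toString());
        }
      }
    }
    System.out.println(names); // [alice, bob]
  }
}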
diff --git a/connectors/jira/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/jira/Messages.java b/connectors/jira/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/jira/Messages.java
new file mode 100644
index 0000000..c8d4841
--- /dev/null
+++ b/connectors/jira/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/jira/Messages.java
@@ -0,0 +1,141 @@
+/* $Id: Messages.java 1488537 2013-06-01 15:30:15Z kwright $ */

+

+/**

+* Licensed to the Apache Software Foundation (ASF) under one or more

+* contributor license agreements. See the NOTICE file distributed with

+* this work for additional information regarding copyright ownership.

+* The ASF licenses this file to You under the Apache License, Version 2.0

+* (the "License"); you may not use this file except in compliance with

+* the License. You may obtain a copy of the License at

+*

+* http://www.apache.org/licenses/LICENSE-2.0

+*

+* Unless required by applicable law or agreed to in writing, software

+* distributed under the License is distributed on an "AS IS" BASIS,

+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

+* See the License for the specific language governing permissions and

+* limitations under the License.

+*/

+package org.apache.manifoldcf.crawler.connectors.jira;

+

+import java.util.Locale;

+import java.util.Map;

+import org.apache.manifoldcf.core.interfaces.ManifoldCFException;

+import org.apache.manifoldcf.core.interfaces.IHTTPOutput;

+

+public class Messages extends org.apache.manifoldcf.ui.i18n.Messages

+{

+  public static final String DEFAULT_BUNDLE_NAME="org.apache.manifoldcf.crawler.connectors.jira.common";

+  public static final String DEFAULT_PATH_NAME="org.apache.manifoldcf.crawler.connectors.jira";

+  

+  /** Constructor - do not instantiate

+  */

+  protected Messages()

+  {

+  }

+  

+  public static String getString(Locale locale, String messageKey)

+  {

+    return getString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);

+  }

+

+  public static String getAttributeString(Locale locale, String messageKey)

+  {

+    return getAttributeString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);

+  }

+

+  public static String getBodyString(Locale locale, String messageKey)

+  {

+    return getBodyString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);

+  }

+

+  public static String getAttributeJavascriptString(Locale locale, String messageKey)

+  {

+    return getAttributeJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);

+  }

+

+  public static String getBodyJavascriptString(Locale locale, String messageKey)

+  {

+    return getBodyJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);

+  }

+

+  public static String getString(Locale locale, String messageKey, Object[] args)

+  {

+    return getString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);

+  }

+

+  public static String getAttributeString(Locale locale, String messageKey, Object[] args)

+  {

+    return getAttributeString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);

+  }

+  

+  public static String getBodyString(Locale locale, String messageKey, Object[] args)

+  {

+    return getBodyString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);

+  }

+

+  public static String getAttributeJavascriptString(Locale locale, String messageKey, Object[] args)

+  {

+    return getAttributeJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);

+  }

+

+  public static String getBodyJavascriptString(Locale locale, String messageKey, Object[] args)

+  {

+    return getBodyJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);

+  }

+

+  // More general methods which allow bundle names and class loaders to be specified.

+  

+  public static String getString(String bundleName, Locale locale, String messageKey, Object[] args)

+  {

+    return getString(Messages.class, bundleName, locale, messageKey, args);

+  }

+

+  public static String getAttributeString(String bundleName, Locale locale, String messageKey, Object[] args)

+  {

+    return getAttributeString(Messages.class, bundleName, locale, messageKey, args);

+  }

+

+  public static String getBodyString(String bundleName, Locale locale, String messageKey, Object[] args)

+  {

+    return getBodyString(Messages.class, bundleName, locale, messageKey, args);

+  }

+  

+  public static String getAttributeJavascriptString(String bundleName, Locale locale, String messageKey, Object[] args)

+  {

+    return getAttributeJavascriptString(Messages.class, bundleName, locale, messageKey, args);

+  }

+

+  public static String getBodyJavascriptString(String bundleName, Locale locale, String messageKey, Object[] args)

+  {

+    return getBodyJavascriptString(Messages.class, bundleName, locale, messageKey, args);

+  }

+

+  // Resource output

+  

+  public static void outputResource(IHTTPOutput output, Locale locale, String resourceKey,

+    Map<String,String> substitutionParameters, boolean mapToUpperCase)

+    throws ManifoldCFException

+  {

+    outputResource(output,Messages.class,DEFAULT_PATH_NAME,locale,resourceKey,

+      substitutionParameters,mapToUpperCase);

+  }

+  

+  public static void outputResourceWithVelocity(IHTTPOutput output, Locale locale, String resourceKey,

+    Map<String,String> substitutionParameters, boolean mapToUpperCase)

+    throws ManifoldCFException

+  {

+    outputResourceWithVelocity(output,Messages.class,DEFAULT_BUNDLE_NAME,DEFAULT_PATH_NAME,locale,resourceKey,

+      substitutionParameters,mapToUpperCase);

+  }

+

+  public static void outputResourceWithVelocity(IHTTPOutput output, Locale locale, String resourceKey,

+    Map<String,Object> contextObjects)

+    throws ManifoldCFException

+  {

+    outputResourceWithVelocity(output,Messages.class,DEFAULT_BUNDLE_NAME,DEFAULT_PATH_NAME,locale,resourceKey,

+      contextObjects);

+  }

+  

+}

+
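
The Messages class above only pins the JIRA bundle and resource path names; lookups go through the inherited ManifoldCF i18n machinery. A hedged usage sketch, assuming the ManifoldCF UI jars are on the classpath (this is not a standalone program):

import java.util.Locale;

public class MessagesUsageSketch {
  /** Resolve the "Server" tab label from the JIRA connector bundle. */
  public static String serverTabName(Locale locale) {
    // Key defined in org.apache.manifoldcf.crawler.connectors.jira.common_en_US.properties
    return org.apache.manifoldcf.crawler.connectors.jira.Messages.getString(
      locale, "JiraRepositoryConnector.Server");
  }
}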

diff --git a/connectors/jira/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/jira/ResponseException.java b/connectors/jira/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/jira/ResponseException.java
new file mode 100644
index 0000000..94b4f4d
--- /dev/null
+++ b/connectors/jira/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/jira/ResponseException.java
@@ -0,0 +1,34 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+
+package org.apache.manifoldcf.crawler.connectors.jira;
+
+/** This exception is thrown when the response from REST is not what
+* was expected.
+ */
+public class ResponseException extends Exception {
+
+  public ResponseException(String msg) {
+    super(msg);
+  }
+  
+  public ResponseException(String msg, Throwable cause) {
+    super(msg, cause);
+  }
+}
diff --git a/connectors/jira/connector/src/main/native2ascii/org/apache/manifoldcf/authorities/authorities/jira/common_en_US.properties b/connectors/jira/connector/src/main/native2ascii/org/apache/manifoldcf/authorities/authorities/jira/common_en_US.properties
new file mode 100644
index 0000000..4c6dfd6
--- /dev/null
+++ b/connectors/jira/connector/src/main/native2ascii/org/apache/manifoldcf/authorities/authorities/jira/common_en_US.properties
@@ -0,0 +1,40 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+JiraAuthorityConnector.Server=Server
+JiraAuthorityConnector.Proxy=Proxy
+
+JiraAuthorityConnector.JiraProtocolColon=JIRA protocol:
+JiraAuthorityConnector.JiraHostColon=JIRA host:
+JiraAuthorityConnector.JiraPortColon=JIRA port:
+JiraAuthorityConnector.JiraRESTAPIPathColon=JIRA REST API path:
+JiraAuthorityConnector.ClientIDColon=Client ID (Optional):
+JiraAuthorityConnector.ClientSecretColon=Client Secret (Optional):
+
+JiraAuthorityConnector.JiraProxyHostColon=Proxy host:
+JiraAuthorityConnector.JiraProxyPortColon=Proxy port:
+JiraAuthorityConnector.JiraProxyDomainColon=Proxy authentication domain:
+JiraAuthorityConnector.JiraProxyUsernameColon=Proxy authentication user name:
+JiraAuthorityConnector.JiraProxyPasswordColon=Proxy authentication password:
+
+JiraAuthorityConnector.JiraHostMustNotBeNull=JIRA host must not be null
+JiraAuthorityConnector.JiraHostMustNotIncludeSlash=JIRA host must not include a '/' character
+JiraAuthorityConnector.JiraPortMustBeAnInteger=JIRA port must be an integer
+JiraAuthorityConnector.JiraPathMustNotBeNull=JIRA path must not be null
+JiraAuthorityConnector.JiraPathMustBeginWithASlash=JIRA path must begin with a '/' character
+
+JiraAuthorityConnector.JiraProxyPortMustBeAnInteger=Proxy port must be an integer
+JiraAuthorityConnector.JiraProxyHostMustNotIncludeSlash=Proxy host cannot include a '/' character
+
diff --git a/connectors/jira/connector/src/main/native2ascii/org/apache/manifoldcf/authorities/authorities/jira/common_ja_JP.properties b/connectors/jira/connector/src/main/native2ascii/org/apache/manifoldcf/authorities/authorities/jira/common_ja_JP.properties
new file mode 100644
index 0000000..96237dc
--- /dev/null
+++ b/connectors/jira/connector/src/main/native2ascii/org/apache/manifoldcf/authorities/authorities/jira/common_ja_JP.properties
@@ -0,0 +1,39 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+JiraAuthorityConnector.Server=Server
+JiraAuthorityConnector.Proxy=Proxy
+
+JiraAuthorityConnector.JiraProtocolColon=JIRA protocol:
+JiraAuthorityConnector.JiraHostColon=JIRA host:
+JiraAuthorityConnector.JiraPortColon=JIRA port:
+JiraAuthorityConnector.JiraRESTAPIPathColon=JIRA REST API path:
+JiraAuthorityConnector.ClientIDColon=Client ID (Optional):
+JiraAuthorityConnector.ClientSecretColon=Client Secret (Optional):
+
+JiraAuthorityConnector.JiraProxyHostColon=Proxy host:
+JiraAuthorityConnector.JiraProxyPortColon=Proxy port:
+JiraAuthorityConnector.JiraProxyDomainColon=Proxy authentication domain:
+JiraAuthorityConnector.JiraProxyUsernameColon=Proxy authentication user name:
+JiraAuthorityConnector.JiraProxyPasswordColon=Proxy authentication password:
+
+JiraAuthorityConnector.JiraHostMustNotBeNull=JIRA host must not be null
+JiraAuthorityConnector.JiraHostMustNotIncludeSlash=JIRA host must not include a '/' character
+JiraAuthorityConnector.JiraPortMustBeAnInteger=JIRA port must be an integer
+JiraAuthorityConnector.JiraPathMustNotBeNull=JIRA path must not be null
+JiraAuthorityConnector.JiraPathMustBeginWithASlash=JIRA path must begin with a '/' character
+
+JiraAuthorityConnector.JiraProxyPortMustBeAnInteger=Proxy port must be an integer
+JiraAuthorityConnector.JiraProxyHostMustNotIncludeSlash=Proxy host cannot include a '/' character
diff --git a/connectors/jira/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/jira/common_en_US.properties b/connectors/jira/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/jira/common_en_US.properties
new file mode 100644
index 0000000..bcb529a
--- /dev/null
+++ b/connectors/jira/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/jira/common_en_US.properties
@@ -0,0 +1,56 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more

+# contributor license agreements.  See the NOTICE file distributed with

+# this work for additional information regarding copyright ownership.

+# The ASF licenses this file to You under the Apache License, Version 2.0

+# (the "License"); you may not use this file except in compliance with

+# the License.  You may obtain a copy of the License at

+#

+#     http://www.apache.org/licenses/LICENSE-2.0

+#

+# Unless required by applicable law or agreed to in writing, software

+# distributed under the License is distributed on an "AS IS" BASIS,

+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

+# See the License for the specific language governing permissions and

+# limitations under the License.

+

+JiraRepositoryConnector.Server=Server

+JiraRepositoryConnector.Proxy=Proxy

+JiraRepositoryConnector.JiraQuery=Seed Query

+JiraRepositoryConnector.Security=Security

+

+JiraRepositoryConnector.JiraProtocolColon=JIRA protocol:

+JiraRepositoryConnector.JiraHostColon=JIRA host:

+JiraRepositoryConnector.JiraPortColon=JIRA port:

+JiraRepositoryConnector.JiraRESTAPIPathColon=JIRA REST API path:

+JiraRepositoryConnector.ClientIDColon=Client ID (Optional):

+JiraRepositoryConnector.ClientSecretColon=Client Secret (Optional):

+

+JiraRepositoryConnector.JiraProxyHostColon=Proxy host:

+JiraRepositoryConnector.JiraProxyPortColon=Proxy port:

+JiraRepositoryConnector.JiraProxyDomainColon=Proxy authentication domain:

+JiraRepositoryConnector.JiraProxyUsernameColon=Proxy authentication user name:

+JiraRepositoryConnector.JiraProxyPasswordColon=Proxy authentication password:

+

+JiraRepositoryConnector.JiraHostMustNotBeNull=JIRA host must not be null

+JiraRepositoryConnector.JiraHostMustNotIncludeSlash=JIRA host must not include a '/' character

+JiraRepositoryConnector.JiraPortMustBeAnInteger=JIRA port must be an integer

+JiraRepositoryConnector.JiraPathMustNotBeNull=JIRA path must not be null

+JiraRepositoryConnector.JiraPathMustBeginWithASlash=JIRA path must begin with a '/' character

+

+JiraRepositoryConnector.JiraProxyPortMustBeAnInteger=Proxy port must be an integer

+JiraRepositoryConnector.JiraProxyHostMustNotIncludeSlash=Proxy host cannot include a '/' character

+

+JiraRepositoryConnector.JiraQueryColon=JIRA query:

+JiraRepositoryConnector.SeedQueryCannotBeNull=Seed query cannot be null

+

+JiraRepositoryConnector.SecurityColon=Security:

+JiraRepositoryConnector.Enabled=Enabled

+JiraRepositoryConnector.Disabled=Disabled

+

+JiraRepositoryConnector.NoAccessTokensPresent=No access tokens present

+JiraRepositoryConnector.Add=Add

+JiraRepositoryConnector.AddAccessToken=Add access token

+JiraRepositoryConnector.Delete=Delete

+JiraRepositoryConnector.DeleteToken=Delete token #

+JiraRepositoryConnector.AccessTokensColon=Access tokens:

+JiraRepositoryConnector.TypeInAnAccessToken=Type in an access token

diff --git a/connectors/jira/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/jira/common_ja_JP.properties b/connectors/jira/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/jira/common_ja_JP.properties
new file mode 100644
index 0000000..bcb529a
--- /dev/null
+++ b/connectors/jira/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/jira/common_ja_JP.properties
@@ -0,0 +1,56 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more

+# contributor license agreements.  See the NOTICE file distributed with

+# this work for additional information regarding copyright ownership.

+# The ASF licenses this file to You under the Apache License, Version 2.0

+# (the "License"); you may not use this file except in compliance with

+# the License.  You may obtain a copy of the License at

+#

+#     http://www.apache.org/licenses/LICENSE-2.0

+#

+# Unless required by applicable law or agreed to in writing, software

+# distributed under the License is distributed on an "AS IS" BASIS,

+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

+# See the License for the specific language governing permissions and

+# limitations under the License.

+

+JiraRepositoryConnector.Server=Server

+JiraRepositoryConnector.Proxy=Proxy

+JiraRepositoryConnector.JiraQuery=Seed Query

+JiraRepositoryConnector.Security=Security

+

+JiraRepositoryConnector.JiraProtocolColon=JIRA protocol:

+JiraRepositoryConnector.JiraHostColon=JIRA host:

+JiraRepositoryConnector.JiraPortColon=JIRA port:

+JiraRepositoryConnector.JiraRESTAPIPathColon=JIRA REST API path:

+JiraRepositoryConnector.ClientIDColon=Client ID (Optional):

+JiraRepositoryConnector.ClientSecretColon=Client Secret (Optional):

+

+JiraRepositoryConnector.JiraProxyHostColon=Proxy host:

+JiraRepositoryConnector.JiraProxyPortColon=Proxy port:

+JiraRepositoryConnector.JiraProxyDomainColon=Proxy authentication domain:

+JiraRepositoryConnector.JiraProxyUsernameColon=Proxy authentication user name:

+JiraRepositoryConnector.JiraProxyPasswordColon=Proxy authentication password:

+

+JiraRepositoryConnector.JiraHostMustNotBeNull=JIRA host must not be null

+JiraRepositoryConnector.JiraHostMustNotIncludeSlash=JIRA host must not include a '/' character

+JiraRepositoryConnector.JiraPortMustBeAnInteger=JIRA port must be an integer

+JiraRepositoryConnector.JiraPathMustNotBeNull=JIRA path must not be null

+JiraRepositoryConnector.JiraPathMustBeginWithASlash=JIRA path must begin with a '/' character

+

+JiraRepositoryConnector.JiraProxyPortMustBeAnInteger=Proxy port must be an integer

+JiraRepositoryConnector.JiraProxyHostMustNotIncludeSlash=Proxy host cannot include a '/' character

+

+JiraRepositoryConnector.JiraQueryColon=JIRA query:

+JiraRepositoryConnector.SeedQueryCannotBeNull=Seed query cannot be null

+

+JiraRepositoryConnector.SecurityColon=Security:

+JiraRepositoryConnector.Enabled=Enabled

+JiraRepositoryConnector.Disabled=Disabled

+

+JiraRepositoryConnector.NoAccessTokensPresent=No access tokens present

+JiraRepositoryConnector.Add=Add

+JiraRepositoryConnector.AddAccessToken=Add access token

+JiraRepositoryConnector.Delete=Delete

+JiraRepositoryConnector.DeleteToken=Delete token #

+JiraRepositoryConnector.AccessTokensColon=Access tokens:

+JiraRepositoryConnector.TypeInAnAccessToken=Type in an access token

diff --git a/connectors/jira/connector/src/main/resources/org/apache/manifoldcf/authorities/authorities/jira/editConfiguration_jira.js b/connectors/jira/connector/src/main/resources/org/apache/manifoldcf/authorities/authorities/jira/editConfiguration_jira.js
new file mode 100644
index 0000000..ce40411
--- /dev/null
+++ b/connectors/jira/connector/src/main/resources/org/apache/manifoldcf/authorities/authorities/jira/editConfiguration_jira.js
@@ -0,0 +1,122 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<script type="text/javascript">
+<!--
+function checkConfig()
+{
+  if (editconnection.jiraport.value != "" && !isInteger(editconnection.jiraport.value))
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('JiraAuthorityConnector.JiraPortMustBeAnInteger'))");
+    editconnection.jiraport.focus();
+    return false;
+  }
+
+  if (editconnection.jirahost.value != "" && editconnection.jirahost.value.indexOf("/") != -1)
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('JiraAuthorityConnector.JiraHostMustNotIncludeSlash'))");
+    editconnection.jirahost.focus();
+    return false;
+  }
+
+  if (editconnection.jirapath.value != "" && !(editconnection.jirapath.value.indexOf("/") == 0))
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('JiraAuthorityConnector.JiraPathMustBeginWithASlash'))");
+    editconnection.jirapath.focus();
+    return false;
+  }
+
+  if (editconnection.jiraproxyport.value != "" && !isInteger(editconnection.jiraproxyport.value))
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('JiraAuthorityConnector.JiraProxyPortMustBeAnInteger'))");
+    editconnection.jiraproxyport.focus();
+    return false;
+  }
+
+  if (editconnection.jiraproxyhost.value != "" && editconnection.jiraproxyhost.value.indexOf("/") != -1)
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('JiraAuthorityConnector.JiraProxyHostMustNotIncludeSlash'))");
+    editconnection.jiraproxyhost.focus();
+    return false;
+  }
+
+  return true;
+}
+ 
+function checkConfigForSave()
+{
+    
+  if (editconnection.jirahost.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('JiraAuthorityConnector.JiraHostMustNotBeNull'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('JiraAuthorityConnector.Server'))");
+    editconnection.jirahost.focus();
+    return false;
+  }
+  
+  if (editconnection.jirahost.value != "" && editconnection.jirahost.value.indexOf("/") != -1)
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('JiraAuthorityConnector.JiraHostMustNotIncludeSlash'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('JiraAuthorityConnector.Server'))");
+    editconnection.jirahost.focus();
+    return false;
+  }
+
+  if (editconnection.jiraport.value != "" && !isInteger(editconnection.jiraport.value))
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('JiraAuthorityConnector.JiraPortMustBeAnInteger'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('JiraAuthorityConnector.Server'))");
+    editconnection.jiraport.focus();
+    return false;
+  }
+
+  if (editconnection.jirapath.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('JiraAuthorityConnector.JiraPathMustNotBeNull'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('JiraAuthorityConnector.Server'))");
+    editconnection.jirapath.focus();
+    return false;
+  }
+  
+  if (editconnection.jirapath.value != "" && !(editconnection.jirapath.value.indexOf("/") == 0))
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('JiraAuthorityConnector.JiraPathMustBeginWithASlash'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('JiraAuthorityConnector.Server'))");
+    editconnection.jirapath.focus();
+    return false;
+  }
+
+  if (editconnection.jiraproxyhost.value != "" && editconnection.jiraproxyhost.value.indexOf("/") != -1)
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('JiraAuthorityConnector.JiraProxyHostMustNotIncludeSlash'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('JiraAuthorityConnector.Proxy'))");
+    editconnection.jiraproxyhost.focus();
+    return false;
+  }
+
+  if (editconnection.jiraproxyport.value != "" && !isInteger(editconnection.jiraproxyport.value))
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('JiraAuthorityConnector.JiraProxyPortMustBeAnInteger'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('JiraAuthorityConnector.Proxy'))");
+    editconnection.jiraproxyport.focus();
+    return false;
+  }
+
+  return true;
+}
+//-->
+</script>
diff --git a/connectors/jira/connector/src/main/resources/org/apache/manifoldcf/authorities/authorities/jira/editConfiguration_jira_proxy.html b/connectors/jira/connector/src/main/resources/org/apache/manifoldcf/authorities/authorities/jira/editConfiguration_jira_proxy.html
new file mode 100644
index 0000000..07df4bf
--- /dev/null
+++ b/connectors/jira/connector/src/main/resources/org/apache/manifoldcf/authorities/authorities/jira/editConfiguration_jira_proxy.html
@@ -0,0 +1,78 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+#if($TabName == $ResourceBundle.getString('JiraAuthorityConnector.Proxy'))
+
+<table class="displaytable">
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraAuthorityConnector.JiraProxyHostColon'))</nobr>
+    </td>
+    <td class="value">
+      <input size="32" type="text" id="jiraproxyhost" name="jiraproxyhost" value="$Encoder.attributeEscape($JIRAPROXYHOST)" />
+    </td>
+  </tr>
+
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraAuthorityConnector.JiraProxyPortColon'))</nobr>
+    </td>
+    <td class="value">
+      <input size="5" type="text" id="jiraproxyport" name="jiraproxyport" value="$Encoder.attributeEscape($JIRAPROXYPORT)" />
+    </td>
+  </tr>
+
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraAuthorityConnector.JiraProxyDomainColon'))</nobr>
+    </td>
+    <td class="value">
+      <input size="32" type="text" id="jiraproxydomain" name="jiraproxydomain" value="$Encoder.attributeEscape($JIRAPROXYDOMAIN)" />
+    </td>
+  </tr>
+  
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraAuthorityConnector.JiraProxyUsernameColon'))</nobr>
+    </td>
+    <td class="value">
+      <input size="16" type="text" id="jiraproxyusername" name="jiraproxyusername" value="$Encoder.attributeEscape($JIRAPROXYUSERNAME)" />
+    </td>
+  </tr>
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraAuthorityConnector.JiraProxyPasswordColon'))</nobr>
+    </td>
+    <td class="value">
+      <input size="16" type="password" id="jiraproxypassword" name="jiraproxypassword" value="$Encoder.attributeEscape($JIRAPROXYPASSWORD)" />
+    </td>
+  </tr>
+</table>
+
+#else
+
+<input type="hidden" name="jiraproxyhost" value="$Encoder.attributeEscape($JIRAPROXYHOST)" />
+<input type="hidden" name="jiraproxyport" value="$Encoder.attributeEscape($JIRAPROXYPORT)" />
+<input type="hidden" name="jiraproxydomain" value="$Encoder.attributeEscape($JIRAPROXYDOMAIN)" />
+<input type="hidden" name="jiraproxyusername" value="$Encoder.attributeEscape($JIRAPROXYUSERNAME)" />
+<input type="hidden" name="jiraproxypassword" value="$Encoder.attributeEscape($JIRAPROXYPASSWORD)" />
+
+#end
diff --git a/connectors/jira/connector/src/main/resources/org/apache/manifoldcf/authorities/authorities/jira/editConfiguration_jira_server.html b/connectors/jira/connector/src/main/resources/org/apache/manifoldcf/authorities/authorities/jira/editConfiguration_jira_server.html
new file mode 100644
index 0000000..e43728e
--- /dev/null
+++ b/connectors/jira/connector/src/main/resources/org/apache/manifoldcf/authorities/authorities/jira/editConfiguration_jira_server.html
@@ -0,0 +1,99 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+#if($TabName == $ResourceBundle.getString('JiraAuthorityConnector.Server'))
+
+<table class="displaytable">
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraAuthorityConnector.JiraProtocolColon'))</nobr>
+    </td>
+    <td class="value">
+      <select size="2" name="jiraprotocol">
+#if($JIRAPROTOCOL == 'http')
+        <option value="http" selected="true">http</option>
+#else
+        <option value="http">http</option>
+#end
+#if($JIRAPROTOCOL == 'https')
+        <option value="https" selected="true">https</option>
+#else
+        <option value="https">https</option>
+#end
+      </select>
+    </td>
+  </tr>
+
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraAuthorityConnector.JiraHostColon'))</nobr>
+    </td>
+    <td class="value">
+      <input size="32" type="text" id="jirahost" name="jirahost" value="$Encoder.attributeEscape($JIRAHOST)" />
+    </td>
+  </tr>
+
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraAuthorityConnector.JiraPortColon'))</nobr>
+    </td>
+    <td class="value">
+      <input size="5" type="text" id="jiraport" name="jiraport" value="$Encoder.attributeEscape($JIRAPORT)" />
+    </td>
+  </tr>
+
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraAuthorityConnector.JiraRESTAPIPathColon'))</nobr>
+    </td>
+    <td class="value">
+      <input size="32" type="text" id="jirapath" name="jirapath" value="$Encoder.attributeEscape($JIRAPATH)" />
+    </td>
+  </tr>
+  
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+  
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraAuthorityConnector.ClientIDColon'))</nobr>
+    </td>
+    <td class="value">
+      <input size="16" type="text" id="clientid" name="clientid" value="$Encoder.attributeEscape($CLIENTID)" />
+    </td>
+  </tr>
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraAuthorityConnector.ClientSecretColon'))</nobr>
+    </td>
+    <td class="value">
+      <input size="16" type="password" id="clientsecret" name="clientsecret" value="$Encoder.attributeEscape($CLIENTSECRET)" />
+    </td>
+  </tr>
+</table>
+
+#else
+
+<input type="hidden" name="jiraprotocol" value="$Encoder.attributeEscape($JIRAPROTOCOL)" />
+<input type="hidden" name="jirahost" value="$Encoder.attributeEscape($JIRAHOST)" />
+<input type="hidden" name="jiraport" value="$Encoder.attributeEscape($JIRAPORT)" />
+<input type="hidden" name="jirapath" value="$Encoder.attributeEscape($JIRAPATH)" />
+<input type="hidden" name="clientid" value="$Encoder.attributeEscape($CLIENTID)" />
+<input type="hidden" name="clientsecret" value="$Encoder.attributeEscape($CLIENTSECRET)" />
+
+#end
diff --git a/connectors/jira/connector/src/main/resources/org/apache/manifoldcf/authorities/authorities/jira/viewConfiguration_jira.html b/connectors/jira/connector/src/main/resources/org/apache/manifoldcf/authorities/authorities/jira/viewConfiguration_jira.html
new file mode 100644
index 0000000..cb5cde2
--- /dev/null
+++ b/connectors/jira/connector/src/main/resources/org/apache/manifoldcf/authorities/authorities/jira/viewConfiguration_jira.html
@@ -0,0 +1,125 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<table class="displaytable">
+
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraAuthorityConnector.JiraProtocolColon'))</nobr>
+    </td>
+    <td class="value">
+      <nobr>$Encoder.bodyEscape($JIRAPROTOCOL)</nobr>
+    </td>
+  </tr>
+
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraAuthorityConnector.JiraHostColon'))</nobr>
+    </td>
+    <td class="value">
+      <nobr>$Encoder.bodyEscape($JIRAHOST)</nobr>
+    </td>
+  </tr>
+
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraAuthorityConnector.JiraPortColon'))</nobr>
+    </td>
+    <td class="value">
+      <nobr>$Encoder.bodyEscape($JIRAPORT)</nobr>
+    </td>
+  </tr>
+
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraAuthorityConnector.JiraRESTAPIPathColon'))</nobr>
+    </td>
+    <td class="value">
+      <nobr>$Encoder.bodyEscape($JIRAPATH)</nobr>
+    </td>
+  </tr>
+
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraAuthorityConnector.ClientIDColon'))</nobr>
+    </td>
+    <td class="value">
+      <nobr>$Encoder.bodyEscape($CLIENTID)</nobr>
+    </td>
+  </tr>
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraAuthorityConnector.ClientSecretColon'))</nobr>
+    </td>
+    <td class="value">
+      <nobr>********</nobr>
+    </td>
+  </tr>
+  
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraAuthorityConnector.JiraProxyHostColon'))</nobr>
+    </td>
+    <td class="value">
+      <nobr>$Encoder.bodyEscape($JIRAPROXYHOST)</nobr>
+    </td>
+  </tr>
+
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraAuthorityConnector.JiraProxyPortColon'))</nobr>
+    </td>
+    <td class="value">
+      <nobr>$Encoder.bodyEscape($JIRAPROXYPORT)</nobr>
+    </td>
+  </tr>
+
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraAuthorityConnector.JiraProxyDomainColon'))</nobr>
+    </td>
+    <td class="value">
+      <nobr>$Encoder.bodyEscape($JIRAPROXYDOMAIN)</nobr>
+    </td>
+  </tr>
+
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraAuthorityConnector.JiraProxyUsernameColon'))</nobr>
+    </td>
+    <td class="value">
+      <nobr>$Encoder.bodyEscape($JIRAPROXYUSERNAME)</nobr>
+    </td>
+  </tr>
+
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraAuthorityConnector.JiraProxyPasswordColon'))</nobr>
+    </td>
+    <td class="value">
+      <nobr>********</nobr>
+    </td>
+  </tr>
+
+</table>
+
diff --git a/connectors/jira/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/jira/editConfiguration_jira.js b/connectors/jira/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/jira/editConfiguration_jira.js
new file mode 100644
index 0000000..dfcde63
--- /dev/null
+++ b/connectors/jira/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/jira/editConfiguration_jira.js
@@ -0,0 +1,122 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<script type="text/javascript">
+<!--
+function checkConfig()
+{
+  if (editconnection.jiraport.value != "" && !isInteger(editconnection.jiraport.value))
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('JiraRepositoryConnector.JiraPortMustBeAnInteger'))");
+    editconnection.jiraport.focus();
+    return false;
+  }
+
+  if (editconnection.jirahost.value != "" && editconnection.jirahost.value.indexOf("/") != -1)
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('JiraRepositoryConnector.JiraHostMustNotIncludeSlash'))");
+    editconnection.jirahost.focus();
+    return false;
+  }
+
+  if (editconnection.jirapath.value != "" && !(editconnection.jirapath.value.indexOf("/") == 0))
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('JiraRepositoryConnector.JiraPathMustBeginWithASlash'))");
+    editconnection.jirapath.focus();
+    return false;
+  }
+
+  if (editconnection.jiraproxyport.value != "" && !isInteger(editconnection.jiraproxyport.value))
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('JiraRepositoryConnector.JiraProxyPortMustBeAnInteger'))");
+    editconnection.jiraproxyport.focus();
+    return false;
+  }
+
+  if (editconnection.jiraproxyhost.value != "" && editconnection.jiraproxyhost.value.indexOf("/") != -1)
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('JiraRepositoryConnector.JiraProxyHostMustNotIncludeSlash'))");
+    editconnection.jiraproxyhost.focus();
+    return false;
+  }
+
+  return true;
+}
+
+function checkConfigForSave()
+{
+
+  if (editconnection.jirahost.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('JiraRepositoryConnector.JiraHostMustNotBeNull'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('JiraRepositoryConnector.Server'))");
+    editconnection.jirahost.focus();
+    return false;
+  }
+
+  if (editconnection.jirahost.value != "" && editconnection.jirahost.value.indexOf("/") != -1)
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('JiraRepositoryConnector.JiraHostMustNotIncludeSlash'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('JiraRepositoryConnector.Server'))");
+    editconnection.jirahost.focus();
+    return false;
+  }
+
+  if (editconnection.jiraport.value != "" && !isInteger(editconnection.jiraport.value))
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('JiraRepositoryConnector.JiraPortMustBeAnInteger'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('JiraRepositoryConnector.Server'))");
+    editconnection.jiraport.focus();
+    return false;
+  }
+
+  if (editconnection.jirapath.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('JiraRepositoryConnector.JiraPathMustNotBeNull'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('JiraRepositoryConnector.Server'))");
+    editconnection.jirapath.focus();
+    return false;
+  }
+
+  if (editconnection.jirapath.value != "" && !(editconnection.jirapath.value.indexOf("/") == 0))
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('JiraRepositoryConnector.JiraPathMustBeginWithASlash'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('JiraRepositoryConnector.Server'))");
+    editconnection.jirapath.focus();
+    return false;
+  }
+
+  if (editconnection.jiraproxyhost.value != "" && editconnection.jiraproxyhost.value.indexOf("/") != -1)
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('JiraRepositoryConnector.JiraProxyHostMustNotIncludeSlash'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('JiraRepositoryConnector.Proxy'))");
+    editconnection.jiraproxyhost.focus();
+    return false;
+  }
+
+  if (editconnection.jiraproxyport.value != "" && !isInteger(editconnection.jiraproxyport.value))
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('JiraRepositoryConnector.JiraProxyPortMustBeAnInteger'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('JiraRepositoryConnector.Proxy'))");
+    editconnection.jiraproxyport.focus();
+    return false;
+  }
+
+  return true;
+}
+//-->
+</script>
diff --git a/connectors/jira/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/jira/editConfiguration_jira_proxy.html b/connectors/jira/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/jira/editConfiguration_jira_proxy.html
new file mode 100644
index 0000000..90e059b
--- /dev/null
+++ b/connectors/jira/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/jira/editConfiguration_jira_proxy.html
@@ -0,0 +1,78 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+#if($TabName == $ResourceBundle.getString('JiraRepositoryConnector.Proxy'))
+
+<table class="displaytable">
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraRepositoryConnector.JiraProxyHostColon'))</nobr>
+    </td>
+    <td class="value">
+      <input size="32" type="text" id="jiraproxyhost" name="jiraproxyhost" value="$Encoder.attributeEscape($JIRAPROXYHOST)" />
+    </td>
+  </tr>
+
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraRepositoryConnector.JiraProxyPortColon'))</nobr>
+    </td>
+    <td class="value">
+      <input size="5" type="text" id="jiraproxyport" name="jiraproxyport" value="$Encoder.attributeEscape($JIRAPROXYPORT)" />
+    </td>
+  </tr>
+
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraRepositoryConnector.JiraProxyDomainColon'))</nobr>
+    </td>
+    <td class="value">
+      <input size="32" type="text" id="jiraproxydomain" name="jiraproxydomain" value="$Encoder.attributeEscape($JIRAPROXYDOMAIN)" />
+    </td>
+  </tr>
+  
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraRepositoryConnector.JiraProxyUsernameColon'))</nobr>
+    </td>
+    <td class="value">
+      <input size="16" type="text" id="jiraproxyusername" name="jiraproxyusername" value="$Encoder.attributeEscape($JIRAPROXYUSERNAME)" />
+    </td>
+  </tr>
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraRepositoryConnector.JiraProxyPasswordColon'))</nobr>
+    </td>
+    <td class="value">
+      <input size="16" type="password" id="jiraproxypassword" name="jiraproxypassword" value="$Encoder.attributeEscape($JIRAPROXYPASSWORD)" />
+    </td>
+  </tr>
+</table>
+
+#else
+
+<input type="hidden" name="jiraproxyhost" value="$Encoder.attributeEscape($JIRAPROXYHOST)" />
+<input type="hidden" name="jiraproxyport" value="$Encoder.attributeEscape($JIRAPROXYPORT)" />
+<input type="hidden" name="jiraproxydomain" value="$Encoder.attributeEscape($JIRAPROXYDOMAIN)" />
+<input type="hidden" name="jiraproxyusername" value="$Encoder.attributeEscape($JIRAPROXYUSERNAME)" />
+<input type="hidden" name="jiraproxypassword" value="$Encoder.attributeEscape($JIRAPROXYPASSWORD)" />
+
+#end
diff --git a/connectors/jira/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/jira/editConfiguration_jira_server.html b/connectors/jira/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/jira/editConfiguration_jira_server.html
new file mode 100644
index 0000000..3f34a60
--- /dev/null
+++ b/connectors/jira/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/jira/editConfiguration_jira_server.html
@@ -0,0 +1,99 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+#if($TabName == $ResourceBundle.getString('JiraRepositoryConnector.Server'))
+
+<table class="displaytable">
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraRepositoryConnector.JiraProtocolColon'))</nobr>
+    </td>
+    <td class="value">
+      <select size="2" name="jiraprotocol">
+#if($JIRAPROTOCOL == 'http')
+        <option value="http" selected="true">http</option>
+#else
+        <option value="http">http</option>
+#end
+#if($JIRAPROTOCOL == 'https')
+        <option value="https" selected="true">https</option>
+#else
+        <option value="https">https</option>
+#end
+      </select>
+    </td>
+  </tr>
+
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraRepositoryConnector.JiraHostColon'))</nobr>
+    </td>
+    <td class="value">
+      <input size="32" type="text" id="jirahost" name="jirahost" value="$Encoder.attributeEscape($JIRAHOST)" />
+    </td>
+  </tr>
+
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraRepositoryConnector.JiraPortColon'))</nobr>
+    </td>
+    <td class="value">
+      <input size="5" type="text" id="jiraport" name="jiraport" value="$Encoder.attributeEscape($JIRAPORT)" />
+    </td>
+  </tr>
+
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraRepositoryConnector.JiraRESTAPIPathColon'))</nobr>
+    </td>
+    <td class="value">
+      <input size="32" type="text" id="jirapath" name="jirapath" value="$Encoder.attributeEscape($JIRAPATH)" />
+    </td>
+  </tr>
+  
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+  
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraRepositoryConnector.ClientIDColon'))</nobr>
+    </td>
+    <td class="value">
+      <input size="16" type="text" id="clientid" name="clientid" value="$Encoder.attributeEscape($CLIENTID)" />
+    </td>
+  </tr>
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraRepositoryConnector.ClientSecretColon'))</nobr>
+    </td>
+    <td class="value">
+      <input size="16" type="password" id="clientsecret" name="clientsecret" value="$Encoder.attributeEscape($CLIENTSECRET)" />
+    </td>
+  </tr>
+</table>
+
+#else
+
+<input type="hidden" name="jiraprotocol" value="$Encoder.attributeEscape($JIRAPROTOCOL)" />
+<input type="hidden" name="jirahost" value="$Encoder.attributeEscape($JIRAHOST)" />
+<input type="hidden" name="jiraport" value="$Encoder.attributeEscape($JIRAPORT)" />
+<input type="hidden" name="jirapath" value="$Encoder.attributeEscape($JIRAPATH)" />
+<input type="hidden" name="clientid" value="$Encoder.attributeEscape($CLIENTID)" />
+<input type="hidden" name="clientsecret" value="$Encoder.attributeEscape($CLIENTSECRET)" />
+
+#end
diff --git a/connectors/jira/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/jira/editSpecification_jira.js b/connectors/jira/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/jira/editSpecification_jira.js
new file mode 100644
index 0000000..d376997
--- /dev/null
+++ b/connectors/jira/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/jira/editSpecification_jira.js
@@ -0,0 +1,54 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<script type="text/javascript">
+<!--
+function checkSpecificationForSave()
+{
+  if (editjob.jiraquery.value == "") {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('JiraRepositoryConnector.SeedQueryCannotBeNull'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('JiraRepositoryConnector.JiraQuery'))");
+    editjob.jiraquery.focus();
+    return false;
+  }
+  return true;
+}
+
+function SpecOp(n, opValue, anchorvalue)
+{
+  eval("editjob."+n+".value = \""+opValue+"\"");
+  postFormSetAnchor(anchorvalue);
+}
+
+function SpecDeleteToken(i)
+{
+  SpecOp("accessop_"+i,"Delete","token_"+i);
+}
+
+function SpecAddToken(i)
+{
+  if (editjob.spectoken.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('JiraRepositoryConnector.TypeInAnAccessToken'))");
+    editjob.spectoken.focus();
+    return;
+  }
+  SpecOp("accessop","Add","token_"+i);
+}
+
+//-->
+</script>
diff --git a/connectors/jira/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/jira/editSpecification_jiraQuery.html b/connectors/jira/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/jira/editSpecification_jiraQuery.html
new file mode 100644
index 0000000..95aa98c
--- /dev/null
+++ b/connectors/jira/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/jira/editSpecification_jiraQuery.html
@@ -0,0 +1,40 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+#if($TabName == $ResourceBundle.getString('JiraRepositoryConnector.JiraQuery'))
+
+<table class="displaytable">
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+  <tr>
+    <td class="description">
+      <nobr>
+        $Encoder.bodyEscape($ResourceBundle.getString('JiraRepositoryConnector.JiraQueryColon'))
+      </nobr>
+    </td>
+    <td class="value">
+      <nobr>
+        <input type="text" size="120" name="jiraquery" value="$Encoder.attributeEscape($JIRAQUERY)" />
+      </nobr>
+    </td>
+  </tr>
+</table>
+
+#else
+
+<input type="hidden" name="jiraquery" value="$Encoder.attributeEscape($JIRAQUERY)" />
+
+#end
diff --git a/connectors/jira/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/jira/editSpecification_jiraSecurity.html b/connectors/jira/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/jira/editSpecification_jiraSecurity.html
new file mode 100644
index 0000000..81defc7
--- /dev/null
+++ b/connectors/jira/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/jira/editSpecification_jiraSecurity.html
@@ -0,0 +1,95 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+#if($TabName == $ResourceBundle.getString('JiraRepositoryConnector.Security'))
+
+<table class="displaytable">
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+  
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraRepositoryConnector.SecurityColon'))</nobr></td>
+    <td class="value">
+  #if($SECURITYON == 'on')
+      <input type="radio" name="specsecurity" value="on" checked="true"/>
+  #else
+      <input type="radio" name="specsecurity" value="on"/>
+  #end
+      $Encoder.bodyEscape($ResourceBundle.getString('JiraRepositoryConnector.Enabled'))
+  #if($SECURITYON == 'off')
+      <input type="radio" name="specsecurity" value="off" checked="true"/>
+  #else
+      <input type="radio" name="specsecurity" value="off"/>
+  #end
+      $Encoder.bodyEscape($ResourceBundle.getString('JiraRepositoryConnector.Disabled'))
+    </td>
+  </tr>
+  
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+
+  #set($atcounter = 0)
+  #foreach($atoken in $ACCESSTOKENS)
+
+  <tr>
+    <td class="description">
+      <input type="hidden" name="accessop_$atcounter" value=""/>
+      <input type="hidden" name="spectoken_$atcounter" value="$Encoder.attributeEscape($atoken.get('TOKEN'))"/>
+      <a name="token_$atcounter">
+        <input type="button" value="$Encoder.attributeEscape($ResourceBundle.getString('JiraRepositoryConnector.Delete'))" onClick='Javascript:SpecDeleteToken($atcounter)' alt="$Encoder.attributeEscape($ResourceBundle.getString('JiraRepositoryConnector.DeleteToken'))$atcounter"/>
+      </a>
+    </td>
+    <td class="value">$Encoder.bodyEscape($atoken.get('TOKEN'))</td>
+  </tr>
+
+    #set($atcounter = $atcounter + 1)
+  #end
+
+  #set($nexttoken = $atcounter + 1)
+
+  #if($atcounter == 0)
+  <tr>
+    <td class="message" colspan="2">$Encoder.bodyEscape($ResourceBundle.getString('JiraRepositoryConnector.NoAccessTokensPresent'))</td>
+  </tr>
+  #end
+
+  <tr><td class="lightseparator" colspan="2"><hr/></td></tr>
+  
+  <tr>
+    <td class="description">
+      <input type="hidden" name="tokencount" value="$atcounter"/>
+      <input type="hidden" name="accessop" value=""/>
+      <a name="token_$atcounter">
+        <input type="button" value="$Encoder.attributeEscape($ResourceBundle.getString('JiraRepositoryConnector.Add'))" onClick='Javascript:SpecAddToken($nexttoken)' alt="$Encoder.attributeEscape($ResourceBundle.getString('JiraRepositoryConnector.AddAccessToken'))"/>
+      </a>
+    </td>
+    <td class="value">
+      <input type="text" size="30" name="spectoken" value=""/>
+    </td>
+  </tr>
+</table>
+
+#else
+
+<input type="hidden" name="specsecurity" value="$SECURITYON"/>
+
+  #set($atcounter = 0)
+  #foreach($atoken in $ACCESSTOKENS)
+<input type="hidden" name="spectoken_$atcounter" value="$Encoder.attributeEscape($atoken.get('TOKEN'))"/>
+    #set($atcounter = $atcounter + 1)
+  #end
+<input type="hidden" name="tokencount" value="$atcounter"/>
+
+#end
diff --git a/connectors/jira/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/jira/viewConfiguration_jira.html b/connectors/jira/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/jira/viewConfiguration_jira.html
new file mode 100644
index 0000000..bb1073b
--- /dev/null
+++ b/connectors/jira/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/jira/viewConfiguration_jira.html
@@ -0,0 +1,126 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<table class="displaytable">
+
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraRepositoryConnector.JiraProtocolColon'))</nobr>
+    </td>
+    <td class="value">
+      <nobr>$Encoder.bodyEscape($JIRAPROTOCOL)</nobr>
+    </td>
+  </tr>
+
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraRepositoryConnector.JiraHostColon'))</nobr>
+    </td>
+    <td class="value">
+      <nobr>$Encoder.bodyEscape($JIRAHOST)</nobr>
+    </td>
+  </tr>
+
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraRepositoryConnector.JiraPortColon'))</nobr>
+    </td>
+    <td class="value">
+      <nobr>$Encoder.bodyEscape($JIRAPORT)</nobr>
+    </td>
+  </tr>
+
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraRepositoryConnector.JiraRESTAPIPathColon'))</nobr>
+    </td>
+    <td class="value">
+      <nobr>$Encoder.bodyEscape($JIRAPATH)</nobr>
+    </td>
+  </tr>
+
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraRepositoryConnector.ClientIDColon'))</nobr>
+    </td>
+    <td class="value">
+      <nobr>$Encoder.bodyEscape($CLIENTID)</nobr>
+    </td>
+  </tr>
+
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraRepositoryConnector.ClientSecretColon'))</nobr>
+    </td>
+    <td class="value">
+      <nobr>********</nobr>
+    </td>
+  </tr>
+  
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraRepositoryConnector.JiraProxyHostColon'))</nobr>
+    </td>
+    <td class="value">
+      <nobr>$Encoder.bodyEscape($JIRAPROXYHOST)</nobr>
+    </td>
+  </tr>
+
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraRepositoryConnector.JiraProxyPortColon'))</nobr>
+    </td>
+    <td class="value">
+      <nobr>$Encoder.bodyEscape($JIRAPROXYPORT)</nobr>
+    </td>
+  </tr>
+
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraRepositoryConnector.JiraProxyDomainColon'))</nobr>
+    </td>
+    <td class="value">
+      <nobr>$Encoder.bodyEscape($JIRAPROXYDOMAIN)</nobr>
+    </td>
+  </tr>
+
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraRepositoryConnector.JiraProxyUsernameColon'))</nobr>
+    </td>
+    <td class="value">
+      <nobr>$Encoder.bodyEscape($JIRAPROXYUSERNAME)</nobr>
+    </td>
+  </tr>
+
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraRepositoryConnector.JiraProxyPasswordColon'))</nobr>
+    </td>
+    <td class="value">
+      <nobr>********</nobr>
+    </td>
+  </tr>
+
+</table>
+
diff --git a/connectors/jira/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/jira/viewSpecification_jira.html b/connectors/jira/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/jira/viewSpecification_jira.html
new file mode 100644
index 0000000..9dde5ae
--- /dev/null
+++ b/connectors/jira/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/jira/viewSpecification_jira.html
@@ -0,0 +1,60 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<table class="displaytable">
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraRepositoryConnector.JiraQueryColon'))</nobr>
+    </td>
+    <td class="value">
+      <nobr>$Encoder.bodyEscape($JIRAQUERY)</nobr>
+    </td>
+  </tr>
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+  <tr>
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraRepositoryConnector.SecurityColon'))</nobr>
+    </td>
+    <td class="value">
+#if($SECURITYON == 'on')
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraRepositoryConnector.Enabled'))</nobr>
+#else
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraRepositoryConnector.Disabled'))</nobr>
+#end
+    </td>
+  </tr>
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+  <tr>
+#if($ACCESSTOKENS.size() == 0)
+    <td class="message" colspan="2">
+      $Encoder.bodyEscape($ResourceBundle.getString('JiraRepositoryConnector.NoAccessTokensPresent'))
+    </td>
+#else
+    <td class="description">
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('JiraRepositoryConnector.AccessTokensColon'))</nobr>
+    </td>
+    <td class="value">
+  #set($atcounter = 0)
+  #foreach($atoken in $ACCESSTOKENS)
+    <nobr>$Encoder.bodyEscape($atoken.get('TOKEN'))</nobr><br/>
+    #set($atcounter = $atcounter + 1)
+  #end
+    </td>
+#end
+  </tr>
+
+</table>
diff --git a/connectors/jira/pom.xml b/connectors/jira/pom.xml
new file mode 100644
index 0000000..a0d7bf2
--- /dev/null
+++ b/connectors/jira/pom.xml
@@ -0,0 +1,161 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+
+    <parent>
+        <groupId>org.apache.manifoldcf</groupId>
+        <artifactId>mcf-connectors</artifactId>
+        <version>1.5-SNAPSHOT</version>
+    </parent>
+    <modelVersion>4.0.0</modelVersion>
+
+    <packaging>jar</packaging>
+
+    <developers>
+        <developer>
+            <name>Andrew Janowczyk</name>
+            <organization>Searchbox</organization>
+            <organizationUrl>http://www.searchbox.com</organizationUrl>
+            <url>http://www.searchbox.com</url>
+        </developer>
+    </developers>
+
+    <artifactId>mcf-jira-connector</artifactId>
+    <name>ManifoldCF - Connectors - Jira</name>
+
+
+    <properties>
+        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
+        <project.http.version>1.14.1-beta</project.http.version>
+        <project.oauth.version>1.14.1-beta</project.oauth.version>
+    </properties>
+
+    <build>
+        <sourceDirectory>${basedir}/connector/src/main/java</sourceDirectory>
+        <testSourceDirectory>${basedir}/connector/src/test/java</testSourceDirectory>
+        <resources>
+            <resource>
+                <directory>${basedir}/connector/src/main/resources</directory>
+                <includes>
+                    <include>**/*.html</include>
+                    <include>**/*.js</include>
+                </includes>
+            </resource>
+            <resource>
+                <directory>${basedir}/connector/src/main/native2ascii</directory>
+                <includes>
+                    <include>**/*.properties</include>
+                </includes>
+            </resource>
+        </resources>
+        <plugins>
+            <plugin>
+                <groupId>org.codehaus.mojo</groupId>
+                <artifactId>native2ascii-maven-plugin</artifactId>
+                <version>1.0-beta-1</version>
+                <configuration>
+                    <workDir>target/classes</workDir>
+                </configuration>
+                <executions>
+                    <execution>
+                        <id>native2ascii-utf8</id>
+                        <goals>
+                            <goal>native2ascii</goal>
+                        </goals>
+                        <configuration>
+                            <encoding>UTF8</encoding>
+                            <includes>
+                                <include>**/*.properties</include>
+                            </includes>
+                        </configuration>
+                    </execution>
+                </executions>
+            </plugin>
+
+            <plugin>
+                <groupId>org.apache.maven.plugins</groupId>
+                <artifactId>maven-surefire-plugin</artifactId>
+                <configuration>
+                    <excludes>
+                        <exclude>**/*Postgresql*.java</exclude>
+                        <exclude>**/*MySQL*.java</exclude>
+                    </excludes>
+                    <forkMode>always</forkMode>
+                    <workingDirectory>target/test-output</workingDirectory>
+                </configuration>
+            </plugin>
+
+        </plugins>
+    </build>
+
+    <dependencies>
+        <dependency>
+            <groupId>${project.groupId}</groupId>
+            <artifactId>mcf-core</artifactId>
+            <version>${project.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>${project.groupId}</groupId>
+            <artifactId>mcf-pull-agent</artifactId>
+            <version>${project.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>${project.groupId}</groupId>
+            <artifactId>mcf-agents</artifactId>
+            <version>${project.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>${project.groupId}</groupId>
+            <artifactId>mcf-ui-core</artifactId>
+            <version>${project.version}</version>
+        </dependency>
+        <dependency>
+            <groupId>commons-lang</groupId>
+            <artifactId>commons-lang</artifactId>
+            <version>${commons-lang.version}</version>
+            <type>jar</type>
+        </dependency>
+
+
+        <dependency>
+            <groupId>commons-logging</groupId>
+            <artifactId>commons-logging</artifactId>
+            <version>1.1.1</version>
+            <scope>test</scope>
+        </dependency>
+        <dependency>
+            <groupId>log4j</groupId>
+            <artifactId>log4j</artifactId>
+            <version>1.2.16</version>
+            <scope>provided</scope>
+            <type>jar</type>
+        </dependency>
+        <dependency>
+            <groupId>com.googlecode.json-simple</groupId>
+            <artifactId>json-simple</artifactId>
+            <version>1.1</version>
+        </dependency>
+        <dependency>
+            <groupId>commons-codec</groupId>
+            <artifactId>commons-codec</artifactId>
+            <version>1.8</version>
+        </dependency>
+    </dependencies>
+</project>
diff --git a/connectors/ldap/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/ldap/LDAPAuthority.java b/connectors/ldap/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/ldap/LDAPAuthority.java
index b247543..9eda53d 100644
--- a/connectors/ldap/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/ldap/LDAPAuthority.java
+++ b/connectors/ldap/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/ldap/LDAPAuthority.java
@@ -6,9 +6,9 @@
  * licenses this file to You under the Apache License, Version 2.0 (the
  * "License"); you may not use this file except in compliance with the License.
  * You may obtain a copy of the License at
- * 
+ *
  * http://www.apache.org/licenses/LICENSE-2.0
- * 
+ *
  * Unless required by applicable law or agreed to in writing, software
  * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
  * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
@@ -39,39 +39,47 @@
 public class LDAPAuthority extends org.apache.manifoldcf.authorities.authorities.BaseAuthorityConnector {
 
   public static final String _rcsid = "@(#)$Id$";
+
   /**
-  * Session information for all DC's we talk with.
-  */
+   * Session information for all DC's we talk with.
+   */
   private LdapContext session = null;
+
   private long sessionExpirationTime = -1L;
-  
-  /**
-  * This is the active directory global deny token. This should be ingested
-  * with all documents.
-  */
-  private static final String globalDenyToken = "DEAD_AUTHORITY";
-  private static final AuthorizationResponse unreachableResponse = new AuthorizationResponse(new String[]{globalDenyToken},
-    AuthorizationResponse.RESPONSE_UNREACHABLE);
-  private static final AuthorizationResponse userNotFoundResponse = new AuthorizationResponse(new String[]{globalDenyToken},
-    AuthorizationResponse.RESPONSE_USERNOTFOUND);
 
   private ConfigParams parameters;
-  
+
   private String serverName;
+
   private String serverPort;
+
   private String serverBase;
+
   private String userBase;
+
   private String userSearch;
+
   private String groupBase;
+
   private String groupSearch;
+
   private String groupNameAttr;
+
   private boolean groupMemberDN;
+
   private boolean addUserRecord;
+
+  private List<String> forcedTokens;
+
   private String userNameAttr;
 
   private long responseLifetime = 60000L; //60sec
+
   private int LRUsize = 1000;
-  /** Cache manager. */
+
+  /**
+   * Cache manager.
+   */
   private ICacheManager cacheManager = null;
 
   /**
@@ -101,30 +109,38 @@
     parameters = configParams;
 
     // We get the parameters here, so we can check them in case they are missing
-    serverName = configParams.getParameter( "ldapServerName" );
-    serverPort = configParams.getParameter( "ldapServerPort" );
-    serverBase = configParams.getParameter( "ldapServerBase" );
+    serverName = configParams.getParameter("ldapServerName");
+    serverPort = configParams.getParameter("ldapServerPort");
+    serverBase = configParams.getParameter("ldapServerBase");
 
-    userBase = configParams.getParameter( "ldapUserBase" );
-    userSearch = configParams.getParameter( "ldapUserSearch" );
-    groupBase = configParams.getParameter( "ldapGroupBase" );
-    groupSearch = configParams.getParameter( "ldapGroupSearch" );
-    groupNameAttr = configParams.getParameter( "ldapGroupNameAttr" );
-    userNameAttr = configParams.getParameter( "ldapUserNameAttr" );
-    
+    userBase = configParams.getParameter("ldapUserBase");
+    userSearch = configParams.getParameter("ldapUserSearch");
+    groupBase = configParams.getParameter("ldapGroupBase");
+    groupSearch = configParams.getParameter("ldapGroupSearch");
+    groupNameAttr = configParams.getParameter("ldapGroupNameAttr");
+    userNameAttr = configParams.getParameter("ldapUserNameAttr");
     groupMemberDN = "1".equals(getParam(configParams, "ldapGroupMemberDn", ""));
     addUserRecord = "1".equals(getParam(configParams, "ldapAddUserRecord", ""));
+
+    forcedTokens = new ArrayList<String>();
+    int i = 0;
+    while (i < parameters.getChildCount()) {
+      ConfigNode sn = parameters.getChild(i++);
+      if (sn.getType().equals("access")) {
+        String token = "" + sn.getAttributeValue("token");
+        forcedTokens.add(token);
+      }
+    }
   }
 
   // All methods below this line will ONLY be called if a connect() call succeeded
   // on this instance!
-
-  /** Session setup.  Anything that might need to throw an exception should go
-  * here.
-  */
+  /**
+   * Session setup. Anything that might need to throw an exception should go
+   * here.
+   */
   protected LdapContext getSession()
-    throws ManifoldCFException
-  {
+    throws ManifoldCFException {
     if (serverName == null || serverName.length() == 0) {
       throw new ManifoldCFException("Server name parameter missing but required");
     }
@@ -155,13 +171,19 @@
 
     Hashtable env = new Hashtable();
     env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
-    env.put(Context.PROVIDER_URL, "ldap://"+serverName+":"+serverPort+"/"+serverBase);
+    env.put(Context.PROVIDER_URL, "ldap://" + serverName + ":" + serverPort + "/" + serverBase);
 
     //get bind credentials
-    String bindUser = getParam(parameters, "ldapBindUser", null);
-    String bindPass = getParam(parameters, "ldapBindPass", null);
-    if (bindPass != null && bindUser != null) {
-      bindPass = ManifoldCF.deobfuscate(bindPass);
+    String bindUser = getParam(parameters, "ldapBindUser", "");
+    String bindPass = "";
+    try {
+      bindPass = ManifoldCF.deobfuscate(getParam(parameters, "ldapBindPass", ""));
+    } catch (ManifoldCFException ex) {
+      if (!bindUser.isEmpty()) {
+        Logger.getLogger(LDAPAuthority.class.getName()).log(Level.SEVERE, "Deobfuscation error", ex);
+      }
+    }
+    if (!bindUser.isEmpty()) {
       env.put(Context.SECURITY_AUTHENTICATION, "simple");
       env.put(Context.SECURITY_PRINCIPAL, bindUser);
       env.put(Context.SECURITY_CREDENTIALS, bindPass);
@@ -178,30 +200,40 @@
     } catch (AuthenticationException e) {
       session = null;
       sessionExpirationTime = -1L;
-      throw new ManifoldCFException("Authentication error: "+e.getMessage(),e);
+      throw new ManifoldCFException("Authentication error: " + e.getMessage() + ", explanation: " + e.getExplanation(), e);
     } catch (CommunicationException e) {
       session = null;
       sessionExpirationTime = -1L;
-      throw new ManifoldCFException("Communication error: "+e.getMessage(),e);
+      throw new ManifoldCFException("Communication error: " + e.getMessage(), e);
     } catch (NamingException e) {
       session = null;
       sessionExpirationTime = -1L;
-      throw new ManifoldCFException("Naming error: "+e.getMessage(),e);
+      throw new ManifoldCFException("Naming error: " + e.getMessage(), e);
     }
   }
-    
+
   /**
-  * Check connection for sanity.
-  */
+   * Check connection for sanity.
+   */
   @Override
   public String check()
     throws ManifoldCFException {
     disconnectSession();
-    LdapContext fSession = getSession();
+    getSession();
     // MHL for a real check of all the search etc.
     return super.check();
   }
 
+  /** This method is called to assess whether this connector instance should
+  * actually be counted as being connected.
+  *@return true if the connector instance is actually connected.
+  */
+  @Override
+  public boolean isConnected()
+  {
+    return session != null;
+  }
+
   /**
    * Poll. The connection should be closed if it has been idle for too long.
    */
@@ -214,8 +246,9 @@
     super.poll();
   }
 
-  /** Disconnect a session.
-  */
+  /**
+   * Disconnect a session.
+   */
   protected void disconnectSession() {
     if (session != null) {
       try {
@@ -225,14 +258,12 @@
       }
       session = null;
       sessionExpirationTime = -1L;
-
     }
   }
-    
+
   /**
-  * Close the connection. Call this before discarding the repository
-  * connector.
-  */
+   * Close the connection. Call this before discarding the repository connector.
+   */
   @Override
   public void disconnect()
     throws ManifoldCFException {
@@ -248,7 +279,7 @@
     groupSearch = null;
     groupNameAttr = null;
     userNameAttr = null;
-
+    forcedTokens = null;
   }
 
   protected String createCacheConnectionString() {
@@ -268,19 +299,19 @@
     sb.append(groupBase).append("|").append(groupSearch).append("|").append(groupNameAttr).append("|").append(groupMemberDN ? 'Y' : 'N');
     return sb.toString();
   }
-  
+
   /**
-  * Obtain the access tokens for a given user name.
-  *
-  * @param userName is the user name or identifier.
-  * @return the response tokens (according to the current authority). (Should
-  * throws an exception only when a condition cannot be properly described
-  * within the authorization response object.)
-  */
+   * Obtain the access tokens for a given user name.
+   *
+   * @param userName is the user name or identifier.
+   * @return the response tokens (according to the current authority). (Should
+   * throw an exception only when a condition cannot be properly described
+   * within the authorization response object.)
+   */
   @Override
   public AuthorizationResponse getAuthorizationResponse(String userName)
     throws ManifoldCFException {
-    
+
     getSession();
     // Construct a cache description object
     ICacheDescription objectDescription = new LdapAuthorizationResponseDescription(userName,
@@ -312,44 +343,52 @@
 
   protected AuthorizationResponse getAuthorizationResponseUncached(String userName)
     throws ManifoldCFException {
-    LdapContext session = getSession();
+    getSession();
     try {
       //find user in LDAP tree
       SearchResult usrRecord = getUserEntry(session, userName);
       if (usrRecord == null) {
-        return userNotFoundResponse;
+        return RESPONSE_USERNOTFOUND;
       }
 
       ArrayList theGroups = new ArrayList();
+      theGroups.addAll(forcedTokens);
 
-      String usrName = userName;
+      String usrName = userName.split("@")[0];
       if (userNameAttr != null && !"".equals(userNameAttr)) {
         if (usrRecord.getAttributes() != null) {
           Attribute attr = usrRecord.getAttributes().get(userNameAttr);
           if (attr != null) {
             usrName = attr.get().toString();
+            if (addUserRecord) {
+              NamingEnumeration values = attr.getAll();
+              while (values.hasMore()) {
+                theGroups.add(values.next().toString());
+              }
+            }
           }
         }
       }
-      if (addUserRecord) {
-        theGroups.add(usrName);
-      }
 
-      //specify the LDAP search filter
-      String searchFilter = groupSearch.replaceAll("\\{0\\}", escapeLDAPSearchFilter(groupMemberDN ? usrRecord.getNameInNamespace() : usrName));
-      SearchControls searchCtls = new SearchControls();
-      searchCtls.setSearchScope(SearchControls.SUBTREE_SCOPE);
-      String returnedAtts[] = {groupNameAttr};
-      searchCtls.setReturningAttributes(returnedAtts);
+      if (groupSearch != null && !groupSearch.isEmpty()) {
+        //specify the LDAP search filter
+        String searchFilter = groupSearch.replaceAll("\\{0\\}", escapeLDAPSearchFilter(groupMemberDN ? usrRecord.getNameInNamespace() : usrName));
+        SearchControls searchCtls = new SearchControls();
+        searchCtls.setSearchScope(SearchControls.SUBTREE_SCOPE);
+        String returnedAtts[] = {groupNameAttr};
+        searchCtls.setReturningAttributes(returnedAtts);
 
-      //Search for tokens.  Since every user *must* have a SID, the "no user" detection should be safe.
-      NamingEnumeration answer = session.search(groupBase, searchFilter, searchCtls);
+        NamingEnumeration answer = session.search(groupBase, searchFilter, searchCtls);
 
-      while (answer.hasMoreElements()) {
-        SearchResult sr = (SearchResult) answer.next();
-        Attributes attrs = sr.getAttributes();
-        if (attrs != null) {
-          theGroups.add(attrs.get(groupNameAttr).get().toString());
+        while (answer.hasMoreElements()) {
+          SearchResult sr = (SearchResult) answer.next();
+          Attributes attrs = sr.getAttributes();
+          if (attrs != null) {
+            NamingEnumeration values = attrs.get(groupNameAttr).getAll();
+            while (values.hasMore()) {
+              theGroups.add(values.next().toString());
+            }
+          }
         }
       }
 
@@ -364,259 +403,257 @@
 
     } catch (NameNotFoundException e) {
       // This means that the user doesn't exist
-      return userNotFoundResponse;
+      return RESPONSE_USERNOTFOUND;
     } catch (NamingException e) {
       // Unreachable
-      return unreachableResponse;
+      return RESPONSE_UNREACHABLE;
     }
   }
 
   /**
-  * Obtain the default access tokens for a given user name.
-  *
-  * @param userName is the user name or identifier.
-  * @return the default response tokens, presuming that the connect method
-  * fails.
-  */
+   * Obtain the default access tokens for a given user name.
+   *
+   * @param userName is the user name or identifier.
+   * @return the default response tokens, presuming that the connect method
+   * fails.
+   */
   @Override
   public AuthorizationResponse getDefaultAuthorizationResponse(String userName) {
     // The default response if the getConnection method fails
-    return unreachableResponse;
+    return RESPONSE_UNREACHABLE;
   }
 
   // UI support methods.
   //
   // These support methods are involved in setting up authority connection configuration information. The configuration methods cannot assume that the
   // current authority object is connected.  That is why they receive a thread context argument.
-  
   /**
-  * Output the configuration header section. This method is called in the
-  * head section of the connector's configuration page. Its purpose is to add
-  * the required tabs to the list, and to output any javascript methods that
-  * might be needed by the configuration editing HTML.
-  *
-  * @param threadContext is the local thread context.
-  * @param out is the output to which any HTML should be sent.
-  * @param parameters are the configuration parameters, as they currently
-  * exist, for this connection being configured.
-  * @param tabsArray is an array of tab names. Add to this array any tab
-  * names that are specific to the connector.
-  */
+   * Output the configuration header section. This method is called in the head
+   * section of the connector's configuration page. Its purpose is to add the
+   * required tabs to the list, and to output any javascript methods that might
+   * be needed by the configuration editing HTML.
+   *
+   * @param threadContext is the local thread context.
+   * @param out is the output to which any HTML should be sent.
+   * @param parameters are the configuration parameters, as they currently
+   * exist, for this connection being configured.
+   * @param tabsArray is an array of tab names. Add to this array any tab names
+   * that are specific to the connector.
+   */
   @Override
   public void outputConfigurationHeader(IThreadContext threadContext, IHTTPOutput out, Locale locale, ConfigParams parameters, List<String> tabsArray)
     throws ManifoldCFException, IOException {
-    tabsArray.add(Messages.getString(locale,"LDAP.LDAP"));
+    tabsArray.add(Messages.getString(locale, "LDAP.ForcedTokens"));
+    tabsArray.add(Messages.getString(locale, "LDAP.LDAP"));
     out.print(
-"<script type=\"text/javascript\">\n"+
-"<!--\n"+
-"function checkConfig() {\n"+
-"  if (editconnection.ldapServerName.value.indexOf(\"/\") != -1) {\n"+
-"    alert(\""+Messages.getBodyJavascriptString(locale,"LDAP.ServerNameCannotIncludeSlash")+"\");\n"+
-"    editconnection.ldapServerName.focus();\n"+
-"    return false;\n"+
-"  }\n"+
-"  if (editconnection.ldapServerPort.value != \"\" && !isInteger(editconnection.ldapServerPort.value)) {\n"+
-"    alert(\""+Messages.getBodyJavascriptString(locale,"LDAP.ServerPortMustBeAnInteger")+"\");\n"+
-"    editconnection.ldapServerPort.focus();\n"+
-"    return false;\n"+
-"  }\n"+
-"  if (editconnection.ldapServerBase.value.indexOf(\"/\") != -1) {\n"+
-"    alert(\""+Messages.getBodyJavascriptString(locale,"LDAP.ServerBaseCannotIncludeSlash")+"\");\n"+
-"    editconnection.ldapServerBase.focus();\n"+
-"    return false;\n"+
-"  }\n"+
-"  if (editconnection.ldapUserSearch.value != \"\" && editconnection.ldapUserSearch.value.indexOf(\"{0}\") == -1) {\n"+
-"    alert(\""+Messages.getBodyJavascriptString(locale,"LDAP.UserSearchMustIncludeSubstitution")+"\");\n"+
-"    editconnection.ldapUserSearch.focus();\n"+
-"    return false;\n"+
-"  }\n"+
-"  if (editconnection.ldapGroupSearch.value != \"\" && editconnection.ldapGroupSearch.value.indexOf(\"{0}\") == -1) {\n"+
-"    alert(\""+Messages.getBodyJavascriptString(locale,"LDAP.GroupSearchMustIncludeSubstitution")+"\");\n"+
-"    editconnection.ldapGroupSearch.focus();\n"+
-"    return false;\n"+
-"  }\n"+
-"  return true;\n"+ 
-"}\n"+ 
-"\n"+
-"function checkConfigForSave() {\n"+ 
-"  if (editconnection.ldapServerName.value == \"\") {\n"+
-"    alert(\""+Messages.getBodyJavascriptString(locale,"LDAP.ServerNameCannotBeBlank")+"\");\n"+
-"    SelectTab(\""+Messages.getBodyJavascriptString(locale,"LDAP.LDAP")+"\");\n"+
-"    editconnection.ldapServerName.focus();\n"+
-"    return false;\n"+
-"  }\n"+
-"  if (editconnection.ldapServerPort.value == \"\") {\n"+
-"    alert(\""+Messages.getBodyJavascriptString(locale,"LDAP.ServerPortCannotBeBlank")+"\");\n"+
-"    SelectTab(\""+Messages.getBodyJavascriptString(locale,"LDAP.LDAP")+"\");\n"+
-"    editconnection.ldapServerPort.focus();\n"+
-"    return false;\n"+
-"  }\n"+
-"  if (editconnection.ldapUserSearch.value == \"\") {\n"+
-"    alert(\""+Messages.getBodyJavascriptString(locale,"LDAP.UserSearchCannotBeBlank")+"\");\n"+
-"    SelectTab(\""+Messages.getBodyJavascriptString(locale,"LDAP.LDAP")+"\");\n"+
-"    editconnection.ldapUserSearch.focus();\n"+
-"    return false;\n"+
-"  }\n"+
-"  if (editconnection.ldapGroupSearch.value == \"\") {\n"+
-"    alert(\""+Messages.getBodyJavascriptString(locale,"LDAP.GroupSearchCannotBeBlank")+"\");\n"+
-"    SelectTab(\""+Messages.getBodyJavascriptString(locale,"LDAP.LDAP")+"\");\n"+
-"    editconnection.ldapGroupSearch.focus();\n"+
-"    return false;\n"+
-"  }\n"+
-"  if (editconnection.ldapGroupNameAttr.value == \"\") {\n"+
-"    alert(\""+Messages.getBodyJavascriptString(locale,"LDAP.GroupNameAttrCannotBeBlank")+"\");\n"+
-"    SelectTab(\""+Messages.getBodyJavascriptString(locale,"LDAP.LDAP")+"\");\n"+
-"    editconnection.ldapGroupNameAttr.focus();\n"+
-"    return false;\n"+
-"  }\n"+
-"  if (editconnection.ldapUserSearch.value != \"\" && editconnection.ldapUserSearch.value.indexOf(\"{0}\") == -1) {\n"+
-"    alert(\""+Messages.getBodyJavascriptString(locale,"LDAP.UserSearchMustIncludeSubstitution")+"\");\n"+
-"    SelectTab(\""+Messages.getBodyJavascriptString(locale,"LDAP.LDAP")+"\");\n"+
-"    editconnection.ldapUserSearch.focus();\n"+
-"    return false;\n"+
-"  }\n"+
-"  if (editconnection.ldapGroupSearch.value != \"\" && editconnection.ldapGroupSearch.value.indexOf(\"{0}\") == -1) {\n"+
-"    alert(\""+Messages.getBodyJavascriptString(locale,"LDAP.GroupSearchMustIncludeSubstitution")+"\");\n"+
-"    SelectTab(\""+Messages.getBodyJavascriptString(locale,"LDAP.LDAP")+"\");\n"+
-"    editconnection.ldapGroupSearch.focus();\n"+
-"    return false;\n"+
-"  }\n"+
-"  if (editconnection.ldapServerPort.value != \"\" && !isInteger(editconnection.ldapServerPort.value)) {\n"+
-"    alert(\""+Messages.getBodyJavascriptString(locale,"LDAP.ServerPortMustBeAnInteger")+"\");\n"+
-"    SelectTab(\""+Messages.getBodyJavascriptString(locale,"LDAP.LDAP")+"\");\n"+
-"    editconnection.ldapServerPort.focus();\n"+
-"    return false;\n"+
-"  }\n"+
-"  if (editconnection.ldapServerName.value.indexOf(\"/\") != -1) {\n"+
-"    alert(\""+Messages.getBodyJavascriptString(locale,"LDAP.ServerNameCannotIncludeSlash")+"\");\n"+
-"    SelectTab(\""+Messages.getBodyJavascriptString(locale,"LDAP.LDAP")+"\");\n"+
-"    editconnection.ldapServerName.focus();\n"+
-"    return false;\n"+
-"  }\n"+
-"  if (editconnection.ldapServerBase.value.indexOf(\"/\") != -1) {\n"+
-"    alert(\""+Messages.getBodyJavascriptString(locale,"LDAP.ServerBaseCannotIncludeSlash")+"\");\n"+
-"    editconnection.ldapServerBase.focus();\n"+
-"    return false;\n"+
-"  }\n"+
-"  return true;\n"+ 
-"}\n"+ 
-"//-->\n"+
-"</script>\n"
-    );
+      "<script type=\"text/javascript\">\n"
+      + "<!--\n"
+      + "function checkConfig() {\n"
+      + "  if (editconnection.ldapServerName.value.indexOf(\"/\") != -1) {\n"
+      + "    alert(\"" + Messages.getBodyJavascriptString(locale, "LDAP.ServerNameCannotIncludeSlash") + "\");\n"
+      + "    editconnection.ldapServerName.focus();\n"
+      + "    return false;\n"
+      + "  }\n"
+      + "  if (editconnection.ldapServerPort.value != \"\" && !isInteger(editconnection.ldapServerPort.value)) {\n"
+      + "    alert(\"" + Messages.getBodyJavascriptString(locale, "LDAP.ServerPortMustBeAnInteger") + "\");\n"
+      + "    editconnection.ldapServerPort.focus();\n"
+      + "    return false;\n"
+      + "  }\n"
+      + "  if (editconnection.ldapServerBase.value.indexOf(\"/\") != -1) {\n"
+      + "    alert(\"" + Messages.getBodyJavascriptString(locale, "LDAP.ServerBaseCannotIncludeSlash") + "\");\n"
+      + "    editconnection.ldapServerBase.focus();\n"
+      + "    return false;\n"
+      + "  }\n"
+      + "  if (editconnection.ldapUserSearch.value != \"\" && editconnection.ldapUserSearch.value.indexOf(\"{0}\") == -1) {\n"
+      + "    alert(\"" + Messages.getBodyJavascriptString(locale, "LDAP.UserSearchMustIncludeSubstitution") + "\");\n"
+      + "    editconnection.ldapUserSearch.focus();\n"
+      + "    return false;\n"
+      + "  }\n"
+      + "  if (editconnection.ldapGroupSearch.value != \"\" && editconnection.ldapGroupSearch.value.indexOf(\"{0}\") == -1) {\n"
+      + "    alert(\"" + Messages.getBodyJavascriptString(locale, "LDAP.GroupSearchMustIncludeSubstitution") + "\");\n"
+      + "    editconnection.ldapGroupSearch.focus();\n"
+      + "    return false;\n"
+      + "  }\n"
+      + "  return true;\n"
+      + "}\n"
+      + "\n"
+      + "function checkConfigForSave() {\n"
+      + "  if (editconnection.ldapServerName.value == \"\") {\n"
+      + "    alert(\"" + Messages.getBodyJavascriptString(locale, "LDAP.ServerNameCannotBeBlank") + "\");\n"
+      + "    SelectTab(\"" + Messages.getBodyJavascriptString(locale, "LDAP.LDAP") + "\");\n"
+      + "    editconnection.ldapServerName.focus();\n"
+      + "    return false;\n"
+      + "  }\n"
+      + "  if (editconnection.ldapServerPort.value == \"\") {\n"
+      + "    alert(\"" + Messages.getBodyJavascriptString(locale, "LDAP.ServerPortCannotBeBlank") + "\");\n"
+      + "    SelectTab(\"" + Messages.getBodyJavascriptString(locale, "LDAP.LDAP") + "\");\n"
+      + "    editconnection.ldapServerPort.focus();\n"
+      + "    return false;\n"
+      + "  }\n"
+      + "  if (editconnection.ldapUserSearch.value == \"\") {\n"
+      + "    alert(\"" + Messages.getBodyJavascriptString(locale, "LDAP.UserSearchCannotBeBlank") + "\");\n"
+      + "    SelectTab(\"" + Messages.getBodyJavascriptString(locale, "LDAP.LDAP") + "\");\n"
+      + "    editconnection.ldapUserSearch.focus();\n"
+      + "    return false;\n"
+      + "  }\n"
+      + "  if (editconnection.ldapGroupSearch.value == \"\") {\n"
+      + "    alert(\"" + Messages.getBodyJavascriptString(locale, "LDAP.GroupSearchCannotBeBlank") + "\");\n"
+      + "    SelectTab(\"" + Messages.getBodyJavascriptString(locale, "LDAP.LDAP") + "\");\n"
+      + "    editconnection.ldapGroupSearch.focus();\n"
+      + "    return false;\n"
+      + "  }\n"
+      + "  if (editconnection.ldapGroupNameAttr.value == \"\") {\n"
+      + "    alert(\"" + Messages.getBodyJavascriptString(locale, "LDAP.GroupNameAttrCannotBeBlank") + "\");\n"
+      + "    SelectTab(\"" + Messages.getBodyJavascriptString(locale, "LDAP.LDAP") + "\");\n"
+      + "    editconnection.ldapGroupNameAttr.focus();\n"
+      + "    return false;\n"
+      + "  }\n"
+      + "  if (editconnection.ldapUserSearch.value != \"\" && editconnection.ldapUserSearch.value.indexOf(\"{0}\") == -1) {\n"
+      + "    alert(\"" + Messages.getBodyJavascriptString(locale, "LDAP.UserSearchMustIncludeSubstitution") + "\");\n"
+      + "    SelectTab(\"" + Messages.getBodyJavascriptString(locale, "LDAP.LDAP") + "\");\n"
+      + "    editconnection.ldapUserSearch.focus();\n"
+      + "    return false;\n"
+      + "  }\n"
+      + "  if (editconnection.ldapGroupSearch.value != \"\" && editconnection.ldapGroupSearch.value.indexOf(\"{0}\") == -1) {\n"
+      + "    alert(\"" + Messages.getBodyJavascriptString(locale, "LDAP.GroupSearchMustIncludeSubstitution") + "\");\n"
+      + "    SelectTab(\"" + Messages.getBodyJavascriptString(locale, "LDAP.LDAP") + "\");\n"
+      + "    editconnection.ldapGroupSearch.focus();\n"
+      + "    return false;\n"
+      + "  }\n"
+      + "  if (editconnection.ldapServerPort.value != \"\" && !isInteger(editconnection.ldapServerPort.value)) {\n"
+      + "    alert(\"" + Messages.getBodyJavascriptString(locale, "LDAP.ServerPortMustBeAnInteger") + "\");\n"
+      + "    SelectTab(\"" + Messages.getBodyJavascriptString(locale, "LDAP.LDAP") + "\");\n"
+      + "    editconnection.ldapServerPort.focus();\n"
+      + "    return false;\n"
+      + "  }\n"
+      + "  if (editconnection.ldapServerName.value.indexOf(\"/\") != -1) {\n"
+      + "    alert(\"" + Messages.getBodyJavascriptString(locale, "LDAP.ServerNameCannotIncludeSlash") + "\");\n"
+      + "    SelectTab(\"" + Messages.getBodyJavascriptString(locale, "LDAP.LDAP") + "\");\n"
+      + "    editconnection.ldapServerName.focus();\n"
+      + "    return false;\n"
+      + "  }\n"
+      + "  if (editconnection.ldapServerBase.value.indexOf(\"/\") != -1) {\n"
+      + "    alert(\"" + Messages.getBodyJavascriptString(locale, "LDAP.ServerBaseCannotIncludeSlash") + "\");\n"
+      + "    editconnection.ldapServerBase.focus();\n"
+      + "    return false;\n"
+      + "  }\n"
+      + "  return true;\n"
+      + "}\n"
+      + "function SpecOp(n, opValue, anchorvalue) {\n"
+      + "  eval(\"editconnection.\"+n+\".value = \\\"\"+opValue+\"\\\"\");\n"
+      + "  postFormSetAnchor(anchorvalue);\n"
+      + "}\n"
+      + "function SpecAddToken(anchorvalue) {\n"
+      + "  if (editconnection.spectoken.value == \"\")\n"
+      + "  {\n"
+      + "    alert(\"" + Messages.getBodyJavascriptString(locale, "LDAP.TypeInToken") + "\");\n"
+      + "    editconnection.spectoken.focus();\n"
+      + "    return;\n"
+      + "  }\n"
+      + "  SpecOp(\"accessop\",\"Add\",anchorvalue);\n"
+      + "}\n"
+      + "//-->\n"
+      + "</script>\n");
   }
 
   /**
-  * Output the configuration body section. This method is called in the body
-  * section of the authority connector's configuration page. Its purpose is
-  * to present the required form elements for editing. The coder can presume
-  * that the HTML that is output from this configuration will be within
-  * appropriate <html>, <body>, and <form> tags. The name of the form is
-  * "editconnection".
-  *
-  * @param threadContext is the local thread context.
-  * @param out is the output to which any HTML should be sent.
-  * @param parameters are the configuration parameters, as they currently
-  * exist, for this connection being configured.
-  * @param tabName is the current tab name.
-  */
+   * Output the configuration body section. This method is called in the body
+   * section of the authority connector's configuration page. Its purpose is to
+   * present the required form elements for editing. The coder can presume that
+   * the HTML that is output from this configuration will be within appropriate
+   * <html>, <body>, and <form> tags. The name of the form is "editconnection".
+   *
+   * @param threadContext is the local thread context.
+   * @param out is the output to which any HTML should be sent.
+   * @param parameters are the configuration parameters, as they currently
+   * exist, for this connection being configured.
+   * @param tabName is the current tab name.
+   */
   @Override
   public void outputConfigurationBody(IThreadContext threadContext, IHTTPOutput out, Locale locale, ConfigParams parameters, String tabName)
     throws ManifoldCFException, IOException {
-    String fServerName = getParam( parameters, "ldapServerName", "");
-    String fServerPort = getParam( parameters, "ldapServerPort", "389");
-    String fServerBase = getParam( parameters, "ldapServerBase", "");
-    
-    String fUserBase = getParam( parameters, "ldapUserBase", "ou=People" );
-    String fUserSearch = getParam( parameters, "ldapUserSearch", "(&(objectClass=inetOrgPerson)(uid={0}))" );
+    String fServerName = getParam(parameters, "ldapServerName", "");
+    String fServerPort = getParam(parameters, "ldapServerPort", "389");
+    String fServerBase = getParam(parameters, "ldapServerBase", "");
+
+    String fUserBase = getParam(parameters, "ldapUserBase", "ou=People");
+    String fUserSearch = getParam(parameters, "ldapUserSearch", "(&(objectClass=inetOrgPerson)(uid={0}))");
     String fUserNameAttr = getParam(parameters, "ldapUserNameAttr", "uid");
     boolean fAddUserRecord = "1".equals(getParam(parameters, "ldapAddUserRecord", ""));
-    
-    String fGroupBase = getParam( parameters, "ldapGroupBase", "ou=Groups" );
-    String fGroupSearch = getParam( parameters, "ldapGroupSearch", "(&(objectClass=groupOfNames)(member={0}))" );
-    String fGroupNameAttr = getParam( parameters, "ldapGroupNameAttr", "cn" );
+
+    String fGroupBase = getParam(parameters, "ldapGroupBase", "ou=Groups");
+    String fGroupSearch = getParam(parameters, "ldapGroupSearch", "(&(objectClass=groupOfNames)(member={0}))");
+    String fGroupNameAttr = getParam(parameters, "ldapGroupNameAttr", "cn");
     boolean fGroupMemberDN = "1".equals(getParam(parameters, "ldapGroupMemberDn", ""));
-    
+
     String fBindUser = getParam(parameters, "ldapBindUser", "");
-    String fBindPass = getParam(parameters, "ldapBindPass", null);
-    if (fBindPass != null)
-      fBindPass = ManifoldCF.deobfuscate(fBindPass);
-    else
-      fBindPass = "";
+    String fBindPass = "";
+    try {
+      fBindPass = ManifoldCF.deobfuscate(getParam(parameters, "ldapBindPass", ""));
+    } catch (ManifoldCFException ex) {
+      //ignore
+    }
+    fBindPass = out.mapPasswordToKey(fBindPass);
 
-    if (tabName.equals(Messages.getString(locale,"LDAP.LDAP"))) {
+    if (tabName.equals(Messages.getString(locale, "LDAP.LDAP"))) {
       out.print(
-"<table class=\"displaytable\">\n"+
-" <tr><td class=\"separator\" colspan=\"2\"><hr/></td></tr>\n"+
-                    
-" <tr>\n"+
-"  <td class=\"description\"><nobr>"+Messages.getBodyString(locale,"LDAP.LDAPServerNameColon")+"</nobr></td>\n"+
-"  <td class=\"value\"><input type=\"text\" size=\"32\" name=\"ldapServerName\" value=\""+Encoder.attributeEscape(fServerName)+"\"/></td>\n"+
-" </tr>\n"+
-
-" <tr>\n"+
-"  <td class=\"description\"><nobr>"+Messages.getBodyString(locale,"LDAP.LDAPServerPortColon")+"</nobr></td>\n"+
-"  <td class=\"value\"><input type=\"text\" size=\"5\" name=\"ldapServerPort\" value=\""+Encoder.attributeEscape(fServerPort)+"\"/></td>\n"+
-" </tr>\n"+
-
-" <tr>\n"+
-"  <td class=\"description\"><nobr>"+Messages.getBodyString(locale,"LDAP.LDAPServerBaseColon")+"</nobr></td>\n"+
-"  <td class=\"value\"><input type=\"text\" size=\"64\" name=\"ldapServerBase\" value=\""+Encoder.attributeEscape(fServerBase)+"\"/></td>\n"+
-" </tr>\n"+
-
-" <tr>\n"+
-"  <td class=\"description\"><nobr>"+Messages.getBodyString(locale,"LDAP.LDAPBindUserColon")+"</nobr></td>\n"+
-"  <td class=\"value\"><input type=\"text\" size=\"64\" name=\"ldapBindUser\" value=\""+Encoder.attributeEscape(fBindUser)+"\"/></td>\n"+
-" </tr>\n"+
-
-" <tr>\n"+
-"  <td class=\"description\"><nobr>"+Messages.getBodyString(locale,"LDAP.LDAPBindPasswordColon")+"</nobr></td>\n"+
-"  <td class=\"value\"><input type=\"password\" size=\"64\" name=\"ldapBindPass\" value=\""+Encoder.attributeEscape(fBindPass)+"\"/></td>\n"+
-" </tr>\n"+
-
-" <tr><td class=\"separator\" colspan=\"2\"><hr/></td></tr>\n"+
-" <tr>\n"+
-"  <td class=\"description\"><nobr>"+Messages.getBodyString(locale,"LDAP.UserSearchBaseColon")+"</nobr></td>\n"+
-"  <td class=\"value\"><input type=\"text\" size=\"64\" name=\"ldapUserBase\" value=\""+Encoder.attributeEscape(fUserBase)+"\"/></td>\n"+
-" </tr>\n"+
-
-" <tr>\n"+
-"  <td class=\"description\"><nobr>"+Messages.getBodyString(locale,"LDAP.UserSearchFilterColon")+"</nobr></td>\n"+
-"  <td class=\"value\"><input type=\"text\" size=\"64\" name=\"ldapUserSearch\" value=\""+Encoder.attributeEscape(fUserSearch)+"\"/></td>\n"+
-" </tr>\n"+
-
-" <tr>\n"+
-"  <td class=\"description\"><nobr>"+Messages.getBodyString(locale,"LDAP.AddUserAuthColon")+"</nobr></td>\n"+
-"  <td class=\"value\"><input type=\"checkbox\" value=\"1\" name=\"ldapAddUserRecord\" " + (fAddUserRecord ? "checked=\"true\"" : "") + "/></td>\n"+
-" </tr>\n"+
-
-" <tr>\n"+
-"  <td class=\"description\"><nobr>"+Messages.getBodyString(locale,"LDAP.UserNameAttrColon")+"</nobr></td>\n"+
-"  <td class=\"value\"><input type=\"text\" size=\"64\" name=\"ldapUserNameAttr\" value=\"" + Encoder.attributeEscape(fUserNameAttr) + "\"/></td>\n"+
-" </tr>\n"+
-
-" <tr><td class=\"separator\" colspan=\"2\"><hr/></td></tr>\n" +
-" <tr>\n"+
-"  <td class=\"description\"><nobr>"+Messages.getBodyString(locale,"LDAP.GroupSearchBaseColon")+"</nobr></td>\n"+
-"  <td class=\"value\"><input type=\"text\" size=\"64\" name=\"ldapGroupBase\" value=\""+Encoder.attributeEscape(fGroupBase)+"\"/></td>\n"+
-" </tr>\n"+
-
-" <tr>\n"+
-"  <td class=\"description\"><nobr>"+Messages.getBodyString(locale,"LDAP.GroupSearchFilterColon")+"</nobr></td>\n"+
-"  <td class=\"value\"><input type=\"text\" size=\"64\" name=\"ldapGroupSearch\" value=\""+Encoder.attributeEscape(fGroupSearch)+"\"/></td>\n"+
-" </tr>\n"+
-
-" <tr>\n"+
-"  <td class=\"description\"><nobr>"+Messages.getBodyString(locale,"LDAP.GroupNameAttributeColon")+"</nobr></td>\n"+
-"  <td class=\"value\"><input type=\"text\" size=\"64\" name=\"ldapGroupNameAttr\" value=\""+Encoder.attributeEscape(fGroupNameAttr)+"\"/></td>\n"+
-" </tr>\n"+
-
-" <tr>\n"+
-"  <td class=\"description\"><nobr>"+Messages.getBodyString(locale,"LDAP.GroupMemberDnColon")+"</nobr></td>\n"+
-"  <td class=\"value\"><input type=\"checkbox\" value=\"1\" name=\"ldapGroupMemberDn\" " + (fGroupMemberDN ? "checked=\"true\"" : "") + "/></td>\n"+
-" </tr>\n"+
-
-"</table>\n"
-      );
+        "<table class=\"displaytable\">\n"
+        + " <tr><td class=\"separator\" colspan=\"2\"><hr/></td></tr>\n"
+        + " <tr>\n"
+        + "  <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "LDAP.LDAPServerNameColon") + "</nobr></td>\n"
+        + "  <td class=\"value\"><input type=\"text\" size=\"32\" name=\"ldapServerName\" value=\"" + Encoder.attributeEscape(fServerName) + "\"/></td>\n"
+        + " </tr>\n"
+        + " <tr>\n"
+        + "  <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "LDAP.LDAPServerPortColon") + "</nobr></td>\n"
+        + "  <td class=\"value\"><input type=\"text\" size=\"5\" name=\"ldapServerPort\" value=\"" + Encoder.attributeEscape(fServerPort) + "\"/></td>\n"
+        + " </tr>\n"
+        + " <tr>\n"
+        + "  <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "LDAP.LDAPServerBaseColon") + "</nobr></td>\n"
+        + "  <td class=\"value\"><input type=\"text\" size=\"64\" name=\"ldapServerBase\" value=\"" + Encoder.attributeEscape(fServerBase) + "\"/></td>\n"
+        + " </tr>\n"
+        + " <tr>\n"
+        + "  <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "LDAP.LDAPBindUserColon") + "</nobr></td>\n"
+        + "  <td class=\"value\"><input type=\"text\" size=\"64\" name=\"ldapBindUser\" value=\"" + Encoder.attributeEscape(fBindUser) + "\"/></td>\n"
+        + " </tr>\n"
+        + " <tr>\n"
+        + "  <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "LDAP.LDAPBindPasswordColon") + "</nobr></td>\n"
+        + "  <td class=\"value\"><input type=\"password\" size=\"64\" name=\"ldapBindPass\" value=\"" + Encoder.attributeEscape(fBindPass) + "\"/></td>\n"
+        + " </tr>\n"
+        + " <tr><td class=\"separator\" colspan=\"2\"><hr/></td></tr>\n"
+        + " <tr>\n"
+        + "  <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "LDAP.UserSearchBaseColon") + "</nobr></td>\n"
+        + "  <td class=\"value\"><input type=\"text\" size=\"64\" name=\"ldapUserBase\" value=\"" + Encoder.attributeEscape(fUserBase) + "\"/></td>\n"
+        + " </tr>\n"
+        + " <tr>\n"
+        + "  <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "LDAP.UserSearchFilterColon") + "</nobr></td>\n"
+        + "  <td class=\"value\"><input type=\"text\" size=\"64\" name=\"ldapUserSearch\" value=\"" + Encoder.attributeEscape(fUserSearch) + "\"/></td>\n"
+        + " </tr>\n"
+        + " <tr>\n"
+        + "  <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "LDAP.AddUserAuthColon") + "</nobr></td>\n"
+        + "  <td class=\"value\"><input type=\"checkbox\" value=\"1\" name=\"ldapAddUserRecord\" " + (fAddUserRecord ? "checked=\"true\"" : "") + "/></td>\n"
+        + " </tr>\n"
+        + " <tr>\n"
+        + "  <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "LDAP.UserNameAttrColon") + "</nobr></td>\n"
+        + "  <td class=\"value\"><input type=\"text\" size=\"64\" name=\"ldapUserNameAttr\" value=\"" + Encoder.attributeEscape(fUserNameAttr) + "\"/></td>\n"
+        + " </tr>\n"
+        + " <tr><td class=\"separator\" colspan=\"2\"><hr/></td></tr>\n"
+        + " <tr>\n"
+        + "  <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "LDAP.GroupSearchBaseColon") + "</nobr></td>\n"
+        + "  <td class=\"value\"><input type=\"text\" size=\"64\" name=\"ldapGroupBase\" value=\"" + Encoder.attributeEscape(fGroupBase) + "\"/></td>\n"
+        + " </tr>\n"
+        + " <tr>\n"
+        + "  <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "LDAP.GroupSearchFilterColon") + "</nobr></td>\n"
+        + "  <td class=\"value\"><input type=\"text\" size=\"64\" name=\"ldapGroupSearch\" value=\"" + Encoder.attributeEscape(fGroupSearch) + "\"/></td>\n"
+        + " </tr>\n"
+        + " <tr>\n"
+        + "  <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "LDAP.GroupNameAttributeColon") + "</nobr></td>\n"
+        + "  <td class=\"value\"><input type=\"text\" size=\"64\" name=\"ldapGroupNameAttr\" value=\"" + Encoder.attributeEscape(fGroupNameAttr) + "\"/></td>\n"
+        + " </tr>\n"
+        + " <tr>\n"
+        + "  <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "LDAP.GroupMemberDnColon") + "</nobr></td>\n"
+        + "  <td class=\"value\"><input type=\"checkbox\" value=\"1\" name=\"ldapGroupMemberDn\" " + (fGroupMemberDN ? "checked=\"true\"" : "") + "/></td>\n"
+        + " </tr>\n"
+        + "</table>\n");
     } else {
       out.print("<input type=\"hidden\" name=\"ldapServerName\" value=\"" + Encoder.attributeEscape(fServerName) + "\"/>\n");
       out.print("<input type=\"hidden\" name=\"ldapServerPort\" value=\"" + Encoder.attributeEscape(fServerPort) + "\"/>\n");
@@ -632,174 +669,303 @@
       out.print("<input type=\"hidden\" name=\"ldapAddUserRecord\" value=\"" + (fAddUserRecord ? "1" : "0") + "\"/>\n");
       out.print("<input type=\"hidden\" name=\"ldapGroupMemberDn\" value=\"" + (fGroupMemberDN ? "1" : "0") + "\"/>\n");
     }
+
+    if (tabName.equals(Messages.getString(locale, "LDAP.ForcedTokens"))) {
+      out.print(
+        "<table class=\"displaytable\">\n"
+        + "  <tr><td class=\"separator\" colspan=\"2\"><hr/></td></tr>\n"
+        + "  <tr><td class=\"value\" colspan=\"2\">" + Messages.getBodyString(locale, "LDAP.ForcedTokensDisclaimer") + "</td></tr>\n"
+        + "  <tr><td class=\"separator\" colspan=\"2\"><hr/></td></tr>\n");
+
+      out.print("  <tr><td class=\"separator\" colspan=\"2\"><hr/></td></tr>\n");
+      // Go through forced ACL
+      int i = 0;
+      int k = 0;
+      while (i < parameters.getChildCount()) {
+        ConfigNode sn = parameters.getChild(i++);
+        if (sn.getType().equals("access")) {
+          String accessDescription = "_" + Integer.toString(k);
+          String accessOpName = "accessop" + accessDescription;
+          String token = sn.getAttributeValue("token");
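+          // Each existing token row carries a hidden op field, a hidden value field, and a
+          // Delete button wired to SpecOp, followed by the escaped token text.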
+          out.print(
+            "  <tr>\n"
+            + "    <td class=\"description\">\n"
+            + "      <input type=\"hidden\" name=\"" + accessOpName + "\" value=\"\"/>\n"
+            + "      <input type=\"hidden\" name=\"" + "spectoken" + accessDescription + "\" value=\"" + Encoder.attributeEscape(token) + "\"/>\n"
+            + "      <a name=\"" + "token_" + Integer.toString(k) + "\">\n"
+            + "        <input type=\"button\" value=\"" + Messages.getAttributeString(locale, "LDAP.Delete") + "\" onClick='Javascript:SpecOp(\"" + accessOpName + "\",\"Delete\",\"token_" + Integer.toString(k) + "\")' alt=\"" + Messages.getAttributeString(locale, "LDAP.DeleteToken") + Integer.toString(k) + "\"/>\n"
+            + "      </a>&nbsp;\n"
+            + "    </td>\n"
+            + "    <td class=\"value\">\n"
+            + "      " + Encoder.bodyEscape(token) + "\n"
+            + "    </td>\n"
+            + "  </tr>\n");
+          k++;
+        }
+      }
+      if (k == 0) {
+        out.print(
+          "  <tr>\n"
+          + "    <td class=\"message\" colspan=\"2\">" + Messages.getBodyString(locale, "LDAP.NoTokensPresent") + "</td>\n"
+          + "  </tr>\n");
+      }
+      out.print(
+        "  <tr><td class=\"lightseparator\" colspan=\"2\"><hr/></td></tr>\n"
+        + "  <tr>\n"
+        + "    <td class=\"description\">\n"
+        + "      <input type=\"hidden\" name=\"tokencount\" value=\"" + Integer.toString(k) + "\"/>\n"
+        + "      <input type=\"hidden\" name=\"accessop\" value=\"\"/>\n"
+        + "      <a name=\"" + "token_" + Integer.toString(k) + "\">\n"
+        + "        <input type=\"button\" value=\"" + Messages.getAttributeString(locale, "LDAP.Add") + "\" onClick='Javascript:SpecAddToken(\"token_" + Integer.toString(k + 1) + "\")' alt=\"" + Messages.getAttributeString(locale, "LDAP.AddToken") + "\"/>\n"
+        + "      </a>&nbsp;\n"
+        + "    </td>\n"
+        + "    <td class=\"value\">\n"
+        + "      <input type=\"text\" size=\"30\" name=\"spectoken\" value=\"\"/>\n"
+        + "    </td>\n"
+        + "  </tr>\n"
+        + "</table>\n");
+    } else {
+      // Tab not displayed: pass the forced ACL tokens through as hidden fields so they survive the post
+      int i = 0;
+      int k = 0;
+      while (i < parameters.getChildCount()) {
+        ConfigNode sn = parameters.getChild(i++);
+        if (sn.getType().equals("access")) {
+          String accessDescription = "_" + Integer.toString(k);
+          String token = "" + sn.getAttributeValue("token");
+          out.print(
+            "<input type=\"hidden\" name=\"" + "spectoken" + accessDescription + "\" value=\"" + Encoder.attributeEscape(token) + "\"/>\n");
+          k++;
+        }
+      }
+      out.print("<input type=\"hidden\" name=\"tokencount\" value=\"" + Integer.toString(k) + "\"/>\n");
+    }
   }
 
-  private String getParam( ConfigParams parameters, String name, String def) {
+  private String getParam(ConfigParams parameters, String name, String def) {
     return parameters.getParameter(name) != null ? parameters.getParameter(name) : def;
   }
 
-  private String getViewParam( ConfigParams parameters, String name) {
+  private String getViewParam(ConfigParams parameters, String name) {
     return parameters.getParameter(name) != null ? parameters.getParameter(name) : "";
   }
 
-  private boolean copyParam( IPostParameters variableContext, ConfigParams parameters, String name) {
-    String val = variableContext.getParameter( name );
-    if( val == null ){
+  private boolean copyParam(IPostParameters variableContext, ConfigParams parameters, String name) {
+    String val = variableContext.getParameter(name);
+    if (val == null) {
       return false;
     }
-    parameters.setParameter( name, val );
+    parameters.setParameter(name, val);
     return true;
   }
 
-  private void copyParam2(IPostParameters variableContext, ConfigParams parameters, String name) {
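+  /** Copy a posted parameter into the configuration, substituting a default when the field
+  * was not posted at all (for example an unchecked checkbox, which browsers omit).
+  */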
+  private boolean copyParam(IPostParameters variableContext, ConfigParams parameters, String name, String def) {
     String val = variableContext.getParameter(name);
     if (val == null) {
-      val = "";
+      val = def;
     }
     parameters.setParameter(name, val);
+    return true;
   }
 
   /**
-  * Process a configuration post. This method is called at the start of the
-  * authority connector's configuration page, whenever there is a possibility
-  * that form data for a connection has been posted. Its purpose is to gather
-  * form information and modify the configuration parameters accordingly. The
-  * name of the posted form is "editconnection".
-  *
-  * @param threadContext is the local thread context.
-  * @param variableContext is the set of variables available from the post,
-  * including binary file post information.
-  * @param parameters are the configuration parameters, as they currently
-  * exist, for this connection being configured.
-  * @return null if all is well, or a string error message if there is an
-  * error that should prevent saving of the connection (and cause a
-  * redirection to an error page).
-  */
+   * Process a configuration post. This method is called at the start of the
+   * authority connector's configuration page, whenever there is a possibility
+   * that form data for a connection has been posted. Its purpose is to gather
+   * form information and modify the configuration parameters accordingly. The
+   * name of the posted form is "editconnection".
+   *
+   * @param threadContext is the local thread context.
+   * @param variableContext is the set of variables available from the post,
+   * including binary file post information.
+   * @param parameters are the configuration parameters, as they currently
+   * exist, for this connection being configured.
+   * @return null if all is well, or a string error message if there is an error
+   * that should prevent saving of the connection (and cause a redirection to an
+   * error page).
+   */
   @Override
   public String processConfigurationPost(IThreadContext threadContext, IPostParameters variableContext, Locale locale, ConfigParams parameters)
     throws ManifoldCFException {
-    copyParam(variableContext, parameters, "ldapServerName" );
-    copyParam(variableContext, parameters, "ldapServerPort" );
-    copyParam(variableContext, parameters, "ldapServerBase" );
-    copyParam(variableContext, parameters, "ldapUserBase" );
-    copyParam(variableContext, parameters, "ldapUserSearch" );
-    copyParam(variableContext, parameters, "ldapUserNameAttr" );
-    copyParam(variableContext, parameters, "ldapGroupBase" );
-    copyParam(variableContext, parameters, "ldapGroupSearch" );
-    copyParam(variableContext, parameters, "ldapGroupNameAttr" );
-    
-    copyParam(variableContext, parameters, "ldapGroupMemberDn");
-    copyParam(variableContext, parameters, "ldapAddUserRecord");
+    copyParam(variableContext, parameters, "ldapServerName");
+    copyParam(variableContext, parameters, "ldapServerPort");
+    copyParam(variableContext, parameters, "ldapServerBase");
+    copyParam(variableContext, parameters, "ldapUserBase");
+    copyParam(variableContext, parameters, "ldapUserSearch");
+    copyParam(variableContext, parameters, "ldapUserNameAttr");
+    copyParam(variableContext, parameters, "ldapGroupBase");
+    copyParam(variableContext, parameters, "ldapGroupSearch");
+    copyParam(variableContext, parameters, "ldapGroupNameAttr");
+
+    copyParam(variableContext, parameters, "ldapGroupMemberDn", "0"); //checkbox boolean value
+    copyParam(variableContext, parameters, "ldapAddUserRecord", "0"); //checkbox boolean value
+
     copyParam(variableContext, parameters, "ldapBindUser");
     String bindPass = variableContext.getParameter("ldapBindPass");
     if (bindPass != null) {
-      parameters.setParameter("ldapBindPass", ManifoldCF.obfuscate(bindPass));
+      parameters.setObfuscatedParameter("ldapBindPass", variableContext.mapKeyToPassword(bindPass));
+    }
+
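+    // Rebuild the forced-token list from the posted form: drop the existing "access" nodes,
+    // re-add every row that was not marked for deletion, then append a new token if the Add
+    // button was pressed. When no "tokencount" field was posted, the existing nodes are kept.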
+    String xc = variableContext.getParameter("tokencount");
+    if (xc != null) {
+      // Delete all tokens first
+      int i = 0;
+      while (i < parameters.getChildCount()) {
+        ConfigNode sn = parameters.getChild(i);
+        if (sn.getType().equals("access")) {
+          parameters.removeChild(i);
+        } else {
+          i++;
+        }
+      }
+
+      int accessCount = Integer.parseInt(xc);
+      i = 0;
+      while (i < accessCount) {
+        String accessDescription = "_" + Integer.toString(i);
+        String accessOpName = "accessop" + accessDescription;
+        xc = variableContext.getParameter(accessOpName);
+        if (xc != null && xc.equals("Delete")) {
+          // Next row
+          i++;
+          continue;
+        }
+        // Get the stuff we need
+        String accessSpec = variableContext.getParameter("spectoken" + accessDescription);
+        ConfigNode node = new ConfigNode("access");
+        node.setAttribute("token", accessSpec);
+        parameters.addChild(parameters.getChildCount(), node);
+        i++;
+      }
+
+      String op = variableContext.getParameter("accessop");
+      if (op != null && op.equals("Add")) {
+        String accessspec = variableContext.getParameter("spectoken");
+        ConfigNode node = new ConfigNode("access");
+        node.setAttribute("token", accessspec);
+        parameters.addChild(parameters.getChildCount(), node);
+      }
     }
 
     return null;
   }
 
   /**
-  * View configuration. This method is called in the body section of the
-  * authority connector's view configuration page. Its purpose is to present
-  * the connection information to the user. The coder can presume that the
-  * HTML that is output from this configuration will be within appropriate
-  * <html> and <body> tags.
-  *
-  * @param threadContext is the local thread context.
-  * @param out is the output to which any HTML should be sent.
-  * @param parameters are the configuration parameters, as they currently
-  * exist, for this connection being configured.
-  */
+   * View configuration. This method is called in the body section of the
+   * authority connector's view configuration page. Its purpose is to present
+   * the connection information to the user. The coder can presume that the HTML
+   * that is output from this configuration will be within appropriate <html>
+   * and <body> tags.
+   *
+   * @param threadContext is the local thread context.
+   * @param out is the output to which any HTML should be sent.
+   * @param parameters are the configuration parameters, as they currently
+   * exist, for this connection being configured.
+   */
   @Override
   public void viewConfiguration(IThreadContext threadContext, IHTTPOutput out, Locale locale, ConfigParams parameters)
     throws ManifoldCFException, IOException {
-    String f_serverName = getViewParam( parameters, "ldapServerName" );
-    String f_serverPort = getViewParam( parameters, "ldapServerPort" );
-    String f_serverBase = getViewParam( parameters, "ldapServerBase" );
+    String f_serverName = getViewParam(parameters, "ldapServerName");
+    String f_serverPort = getViewParam(parameters, "ldapServerPort");
+    String f_serverBase = getViewParam(parameters, "ldapServerBase");
     String f_bindUser = getViewParam(parameters, "ldapBindUser");
 
-    String f_userBase = getViewParam( parameters, "ldapUserBase" );
-    String f_userSearch = getViewParam( parameters, "ldapUserSearch" );
-    String f_groupBase = getViewParam( parameters, "ldapGroupBase" );
-    String f_groupSearch = getViewParam( parameters, "ldapGroupSearch" );
-    String f_groupNameAttr = getViewParam( parameters, "ldapGroupNameAttr" );
-    
+    String f_userBase = getViewParam(parameters, "ldapUserBase");
+    String f_userSearch = getViewParam(parameters, "ldapUserSearch");
+    String f_groupBase = getViewParam(parameters, "ldapGroupBase");
+    String f_groupSearch = getViewParam(parameters, "ldapGroupSearch");
+    String f_groupNameAttr = getViewParam(parameters, "ldapGroupNameAttr");
+
     String f_userNameAttr = getViewParam(parameters, "ldapUserNameAttr");
     boolean f_groupMemberDN = "1".equals(getViewParam(parameters, "ldapGroupMemberDn"));
     boolean f_addUserRecord = "1".equals(getViewParam(parameters, "ldapAddUserRecord"));
 
     out.print(
-"<table class=\"displaytable\">\n"+
-" <tr><td class=\"separator\" colspan=\"2\"><hr/></td></tr>\n"+
-                    
-" <tr>\n"+
-"  <td class=\"description\"><nobr>"+Messages.getBodyString(locale,"LDAP.LDAPServerNameColon")+"</nobr></td>\n"+
-"  <td class=\"value\">"+Encoder.bodyEscape(f_serverName)+"</td>\n"+
-" </tr>\n"+
+      "<table class=\"displaytable\">\n"
+      + " <tr><td class=\"separator\" colspan=\"2\"><hr/></td></tr>\n"
+      + " <tr>\n"
+      + "  <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "LDAP.LDAPServerNameColon") + "</nobr></td>\n"
+      + "  <td class=\"value\">" + Encoder.bodyEscape(f_serverName) + "</td>\n"
+      + " </tr>\n"
+      + " <tr>\n"
+      + "  <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "LDAP.LDAPServerPortColon") + "</nobr></td>\n"
+      + "  <td class=\"value\">" + Encoder.bodyEscape(f_serverPort) + "</td>\n"
+      + " </tr>\n"
+      + " <tr>\n"
+      + "  <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "LDAP.LDAPServerBaseColon") + "</nobr></td>\n"
+      + "  <td class=\"value\">" + Encoder.bodyEscape(f_serverBase) + "</td>\n"
+      + " </tr>\n"
+      + " <tr>\n"
+      + "  <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "LDAP.LDAPBindUserColon") + "</nobr></td>\n"
+      + "  <td class=\"value\">" + Encoder.bodyEscape(f_bindUser) + "</td>\n"
+      + " </tr>\n"
+      + " <tr>\n"
+      + "  <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "LDAP.LDAPBindPasswordColon") + "</nobr></td>\n"
+      + "  <td class=\"value\">*******</td>\n"
+      + " </tr>\n"
+      + " <tr>\n"
+      + "  <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "LDAP.UserSearchBaseColon") + "</nobr></td>\n"
+      + "  <td class=\"value\">" + Encoder.bodyEscape(f_userBase) + "</td>\n"
+      + " </tr>\n"
+      + " <tr>\n"
+      + "  <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "LDAP.UserSearchFilterColon") + "</nobr></td>\n"
+      + "  <td class=\"value\">" + Encoder.bodyEscape(f_userSearch) + "</td>\n"
+      + " </tr>\n"
+      + " <tr>\n"
+      + "  <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "LDAP.AddUserAuthColon") + "</nobr></td>\n"
+      + "  <td class=\"value\">" + (f_addUserRecord ? "Y" : "N") + "</td>\n"
+      + " </tr>\n"
+      + " <tr>\n"
+      + "  <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "LDAP.UserNameAttrColon") + "</nobr></td>\n"
+      + "  <td class=\"value\">" + Encoder.bodyEscape(f_userNameAttr) + "</td>\n"
+      + " </tr>\n"
+      + " <tr>\n"
+      + "  <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "LDAP.GroupSearchBaseColon") + "</nobr></td>\n"
+      + "  <td class=\"value\">" + Encoder.bodyEscape(f_groupBase) + "</td>\n"
+      + " </tr>\n"
+      + " <tr>\n"
+      + "  <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "LDAP.GroupSearchFilterColon") + "</nobr></td>\n"
+      + "  <td class=\"value\">" + Encoder.bodyEscape(f_groupSearch) + "</td>\n"
+      + " </tr>\n"
+      + " <tr>\n"
+      + "  <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "LDAP.GroupNameAttributeColon") + "</nobr></td>\n"
+      + "  <td class=\"value\">" + Encoder.bodyEscape(f_groupNameAttr) + "</td>\n"
+      + " </tr>\n"
+      + " <tr>\n"
+      + "  <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "LDAP.GroupMemberDnColon") + "</nobr></td>\n"
+      + "  <td class=\"value\">" + (f_groupMemberDN ? "Y" : "N") + "</td>\n"
+      + " </tr>\n");
 
-" <tr>\n"+
-"  <td class=\"description\"><nobr>"+Messages.getBodyString(locale,"LDAP.LDAPServerPortColon")+"</nobr></td>\n"+
-"  <td class=\"value\">"+Encoder.bodyEscape(f_serverPort)+"</td>\n"+
-" </tr>\n"+
+    out.print("  <tr><td class=\"separator\" colspan=\"4\"><hr/></td></tr>\n");
+    boolean seenAny = false;
+    int i;
 
-" <tr>\n"+
-"  <td class=\"description\"><nobr>"+Messages.getBodyString(locale,"LDAP.LDAPServerBaseColon")+"</nobr></td>\n"+
-"  <td class=\"value\">"+Encoder.bodyEscape(f_serverBase)+"</td>\n"+
-" </tr>\n"+
+    // Go through looking for access tokens
+    i = 0;
+    while (i < parameters.getChildCount()) {
+      ConfigNode sn = parameters.getChild(i++);
+      if (sn.getType().equals("access")) {
+        if (seenAny == false) {
+          out.print(
+            "  <tr>\n"
+            + "    <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "LDAP.ForcedTokensColon") + "</nobr></td>\n"
+            + "    <td class=\"value\">\n");
+          seenAny = true;
+        }
+        String token = sn.getAttributeValue("token");
+        out.print(Encoder.bodyEscape(token) + "<br/>\n");
+      }
+    }
 
-" <tr>\n"+
-"  <td class=\"description\"><nobr>"+Messages.getBodyString(locale,"LDAP.LDAPBindUserColon")+"</nobr></td>\n"+
-"  <td class=\"value\">"+Encoder.bodyEscape(f_bindUser)+"</td>\n"+
-" </tr>\n"+
-
-" <tr>\n"+
-"  <td class=\"description\"><nobr>"+Messages.getBodyString(locale,"LDAP.LDAPBindPasswordColon")+"</nobr></td>\n"+
-"  <td class=\"value\">*******</td>\n"+
-" </tr>\n"+
-
-" <tr>\n"+
-"  <td class=\"description\"><nobr>"+Messages.getBodyString(locale,"LDAP.UserSearchBaseColon")+"</nobr></td>\n"+
-"  <td class=\"value\">"+Encoder.bodyEscape(f_userBase)+"</td>\n"+
-" </tr>\n"+
-
-" <tr>\n"+
-"  <td class=\"description\"><nobr>"+Messages.getBodyString(locale,"LDAP.UserSearchFilterColon")+"</nobr></td>\n"+
-"  <td class=\"value\">"+Encoder.bodyEscape(f_userSearch)+"</td>\n"+
-" </tr>\n"+
-
-" <tr>\n"+
-"  <td class=\"description\"><nobr>"+Messages.getBodyString(locale,"LDAP.AddUserAuthColon")+"</nobr></td>\n"+
-"  <td class=\"value\">" + (f_addUserRecord ? "Y" : "N") + "</td>\n"+
-" </tr>\n"+
-
-" <tr>\n"+
-"  <td class=\"description\"><nobr>"+Messages.getBodyString(locale,"LDAP.UserNameAttrColon")+"</nobr></td>\n"+
-"  <td class=\"value\">" + Encoder.bodyEscape(f_userNameAttr) + "</td>\n"+
-" </tr>\n"+
-
-" <tr>\n"+
-"  <td class=\"description\"><nobr>"+Messages.getBodyString(locale,"LDAP.GroupSearchBaseColon")+"</nobr></td>\n"+
-"  <td class=\"value\">"+Encoder.bodyEscape(f_groupBase)+"</td>\n"+
-" </tr>\n"+
-
-" <tr>\n"+
-"  <td class=\"description\"><nobr>"+Messages.getBodyString(locale,"LDAP.GroupSearchFilterColon")+"</nobr></td>\n"+
-"  <td class=\"value\">"+Encoder.bodyEscape(f_groupSearch)+"</td>\n"+
-" </tr>\n"+
-
-" <tr>\n"+
-"  <td class=\"description\"><nobr>"+Messages.getBodyString(locale,"LDAP.GroupNameAttributeColon")+"</nobr></td>\n"+
-"  <td class=\"value\">"+Encoder.bodyEscape(f_groupNameAttr)+"</td>\n"+
-" </tr>\n"+
-
-" <tr>\n"+
-"  <td class=\"description\"><nobr>"+Messages.getBodyString(locale,"LDAP.GroupMemberDnColon")+"</nobr></td>\n"+
-"  <td class=\"value\">"+(f_groupMemberDN?"Y":"N")+"</td>\n"+
-" </tr>\n"+
-
-"</table>\n"
-    );
+    if (seenAny) {
+      out.print(
+        "    </td>\n"
+        + "  </tr>\n");
+    } else {
+      out.print(
+        "  <tr><td class=\"message\" colspan=\"4\"><nobr>" + Messages.getBodyString(locale, "LDAP.NoTokensSpecified") + "</nobr></td></tr>\n");
+    }
+    out.print("</table>\n");
   }
 
   // Protected methods
@@ -810,12 +976,12 @@
    * @param userName (Domain Logon Name) is the user name or identifier.
    * @param searchBase (Full Domain Name for the search ie:
    * DC=qa-ad-76,DC=metacarta,DC=com)
-   * @return SearchResult for given domain user logon name. (Should throws
-   * an exception if user is not found.)
+   * @return SearchResult for the given domain user logon name. (Should throw an
+   * exception if the user is not found.)
    */
   protected SearchResult getUserEntry(LdapContext ctx, String userName)
     throws ManifoldCFException {
-    String searchFilter = userSearch.replaceAll("\\{0\\}", escapeDN(userName));
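+    // Use only the portion of the logon name before any '@', so that domain-qualified names
+    // (user@domain) still match the configured user search filter.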
+    String searchFilter = userSearch.replaceAll("\\{0\\}", escapeDN(userName.split("@")[0]));
     SearchControls searchCtls = new SearchControls();
     searchCtls.setSearchScope(SearchControls.SUBTREE_SCOPE);
 
@@ -915,7 +1081,7 @@
     }
     return sb.toString();
   }
-  
+
   protected static StringSet emptyStringSet = new StringSet();
 
   /**
@@ -928,22 +1094,27 @@
      * The user name
      */
     protected String userName;
+
     /**
      * LDAP connection string with server name and base DN
      */
     protected String connectionString;
+
     /**
      * User search definition
      */
     protected String userSearch;
+
     /**
      * Group search definition
      */
     protected String groupSearch;
+
     /**
      * The response lifetime
      */
     protected long responseLifetime;
+
     /**
      * The expiration time
      */
diff --git a/connectors/ldap/connector/src/main/native2ascii/org/apache/manifoldcf/authorities/authorities/ldap/common_en_US.properties b/connectors/ldap/connector/src/main/native2ascii/org/apache/manifoldcf/authorities/authorities/ldap/common_en_US.properties
index 3d55874..893a652 100644
--- a/connectors/ldap/connector/src/main/native2ascii/org/apache/manifoldcf/authorities/authorities/ldap/common_en_US.properties
+++ b/connectors/ldap/connector/src/main/native2ascii/org/apache/manifoldcf/authorities/authorities/ldap/common_en_US.properties
@@ -28,6 +28,16 @@
 LDAP.UserNameAttrColon=User name attribute:
 LDAP.GroupMemberDnColon=Member attribute is DN:
 
+LDAP.ForcedTokens=Forced tokens
+LDAP.ForcedTokensColon=Forced tokens:
+LDAP.Add=Add
+LDAP.Delete=Delete
+LDAP.AddToken=Add token
+LDAP.TypeInToken=Token cannot be empty
+LDAP.NoTokensSpecified=No tokens specified
+LDAP.NoTokensPresent=No tokens specified
+LDAP.ForcedTokensDisclaimer=Forced tokens are meant to enrich results with common tokens handled explicitly by the authorization center, such as "Everyone". Use with extreme care, as this mechanism can grant privileges to every user outside the authorization directory!
+
 LDAP.ServerNameCannotBeBlank=Server name cannot be blank
 LDAP.ServerPortCannotBeBlank=Server port cannot be blank
 LDAP.UserSearchCannotBeBlank=User search expression cannot be blank
@@ -38,13 +48,3 @@
 LDAP.ServerPortMustBeAnInteger=Server port must be an integer
 LDAP.ServerNameCannotIncludeSlash=Server name cannot include "/" character
 LDAP.ServerBaseCannotIncludeSlash=Server base cannot include "/" character
-
-
-
-
-
-
-
-
-
-
diff --git a/connectors/ldap/connector/src/main/native2ascii/org/apache/manifoldcf/authorities/authorities/ldap/common_ja_JP.properties b/connectors/ldap/connector/src/main/native2ascii/org/apache/manifoldcf/authorities/authorities/ldap/common_ja_JP.properties
index 4c81266..89ca93d 100644
--- a/connectors/ldap/connector/src/main/native2ascii/org/apache/manifoldcf/authorities/authorities/ldap/common_ja_JP.properties
+++ b/connectors/ldap/connector/src/main/native2ascii/org/apache/manifoldcf/authorities/authorities/ldap/common_ja_JP.properties
@@ -38,3 +38,13 @@
 LDAP.ServerPortMustBeAnInteger=サーバポートは整数でなければなりません
 LDAP.ServerNameCannotIncludeSlash=サーバ名は"/"文字を含むことができません
 LDAP.ServerBaseCannotIncludeSlash=サーバベースは"/"文字を含むことができません
+
+LDAP.ForcedTokens=Forced tokens
+LDAP.ForcedTokensColon=Forced tokens:
+LDAP.Add=Add
+LDAP.Delete=Delete
+LDAP.AddToken=Add token
+LDAP.TypeInToken=Token cannot be empty
+LDAP.NoTokensSpecified=No tokens specified
+LDAP.NoTokensPresent=No tokens specified
+LDAP.ForcedTokensDisclaimer=Forced tokens are meant to enrich results with common tokens handled explicitly by the authorization center, such as "Everyone". Use with extreme care, as this mechanism can grant privileges to every user outside the authorization directory!
diff --git a/connectors/ldap/connector/src/main/native2ascii/org/apache/manifoldcf/authorities/authorities/ldap/common_pl_PL.properties b/connectors/ldap/connector/src/main/native2ascii/org/apache/manifoldcf/authorities/authorities/ldap/common_pl_PL.properties
new file mode 100644
index 0000000..98b13d8
--- /dev/null
+++ b/connectors/ldap/connector/src/main/native2ascii/org/apache/manifoldcf/authorities/authorities/ldap/common_pl_PL.properties
@@ -0,0 +1,50 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+LDAP.LDAP=LDAP
+LDAP.LDAPServerNameColon=Serwer LDAP:
+LDAP.LDAPServerPortColon=Port LDAP:
+LDAP.LDAPServerBaseColon=Baza DN (np. 'dc=office,dc=com'):
+LDAP.LDAPBindUserColon=Podłącz do serwera jako użytkownik (pozostaw puste jeśli niepotrzebne):
+LDAP.LDAPBindPasswordColon=Podłącz do serwera z hasłem:
+LDAP.UserSearchBaseColon=Baza wyszukiwania użytkowników:
+LDAP.UserSearchFilterColon=Filtr użytkowników:
+LDAP.GroupSearchBaseColon=Baza wyszukiwania grup:
+LDAP.GroupSearchFilterColon=Filtr grup:
+LDAP.GroupNameAttributeColon=Atrybut nazwy grupy:
+LDAP.AddUserAuthColon=Dodaj nazwę użytkownika jako token:
+LDAP.UserNameAttrColon=Atrybut nazwy użytkownika:
+LDAP.GroupMemberDnColon=Elementy atrybutu "member" są w postaci DN:
+
+LDAP.ForcedTokens=Wymuszone tokeny
+LDAP.ForcedTokensColon=Wymuszone tokeny:
+LDAP.Add=Dodaj
+LDAP.Delete=Usuń
+LDAP.AddToken=Dodaj token
+LDAP.TypeInToken=Token nie może być pusty
+LDAP.NoTokensSpecified=Brak zdefiniowanych tokenów
+LDAP.NoTokensPresent=Brak zdefiniowanych tokenów
+LDAP.ForcedTokensDisclaimer=Wymuszone tokeny służą do wzbogacania zwracanych wyników o grupy obsługiwane niejawnie przez centra autoryzacji, jak np. "Wszyscy"/"Everyone". Używaj z rozwagą, gdyż można w ten sposób nadawać wszystkim użytkownikom dodatkowe uprawnienia poza centrum autoryzacji!
+
+LDAP.ServerNameCannotBeBlank=Nazwa serwera nie może być pusta
+LDAP.ServerPortCannotBeBlank=Port nie może być pusty
+LDAP.UserSearchCannotBeBlank=Filtr użytkowników nie może być pusty
+LDAP.GroupSearchCannotBeBlank=Filtr grup nie może być pusty
+LDAP.GroupNameAttrCannotBeBlank=Atrybut nazwy grupy nie może być pusty
+LDAP.UserSearchMustIncludeSubstitution=Filtr użytkowników musi zawierać odwołanie do nazwy użytkownika ({0})
+LDAP.GroupSearchMustIncludeSubstitution=Filtr grupy musi zawierać odwołanie do nazwy użytkownika ({0})
+LDAP.ServerPortMustBeAnInteger=Port musi być liczbą całkowitą
+LDAP.ServerNameCannotIncludeSlash=Nazwa serwera nie może zawierać znaku "/"
+LDAP.ServerBaseCannotIncludeSlash=Baza DN nie może zawierać znaku "/"
diff --git a/connectors/ldap/pom.xml b/connectors/ldap/pom.xml
index 36a1673..ed0fd05 100644
--- a/connectors/ldap/pom.xml
+++ b/connectors/ldap/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.manifoldcf</groupId>
     <artifactId>mcf-connectors</artifactId>
-    <version>1.2-SNAPSHOT</version>
+    <version>1.5-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
 
@@ -30,14 +30,22 @@
   <build>
     <sourceDirectory>${basedir}/connector/src/main/java</sourceDirectory>
     <testSourceDirectory>${basedir}/connector/src/test/java</testSourceDirectory>
+    <resources>
+      <resource>
+        <directory>${basedir}/connector/src/main/native2ascii</directory>
+        <includes>
+          <include>**/*.properties</include>
+        </includes>
+      </resource>
+    </resources> 
+
     <plugins>
       <plugin>
         <groupId>org.codehaus.mojo</groupId>
         <artifactId>native2ascii-maven-plugin</artifactId>
-        <version>1.0-alpha-1</version>
+        <version>1.0-beta-1</version>
         <configuration>
-            <dest>target/classes</dest>
-            <src>connector/src/main/native2ascii</src>
+            <workDir>target/classes</workDir>
         </configuration>
         <executions>
             <execution>
@@ -47,7 +55,9 @@
                 </goals>
                 <configuration>
                     <encoding>UTF8</encoding>
-                    <includes>**/*.properties</includes>
+                    <includes>
+                      <include>**/*.properties</include>
+                    </includes>
                 </configuration>
             </execution>
         </executions>
diff --git a/connectors/livelink/build-stub/src/main/java/com/opentext/api/LAPI_DOCUMENTS.java b/connectors/livelink/build-stub/src/main/java/com/opentext/api/LAPI_DOCUMENTS.java
index b7bad63..c955669 100644
--- a/connectors/livelink/build-stub/src/main/java/com/opentext/api/LAPI_DOCUMENTS.java
+++ b/connectors/livelink/build-stub/src/main/java/com/opentext/api/LAPI_DOCUMENTS.java
@@ -53,6 +53,11 @@
     return 0;
   }
   
+  public int FetchVersion(int vol, int id, int revNumber, java.io.OutputStream output)
+  {
+    return 0;
+  }
+
   public int GetObjectRights(int vol, int objID, LLValue objinfo)
   {
     return 0;
diff --git a/connectors/livelink/build-stub/src/main/java/com/opentext/api/LAPI_USERS.java b/connectors/livelink/build-stub/src/main/java/com/opentext/api/LAPI_USERS.java
index bb289cb..d1710bc 100644
--- a/connectors/livelink/build-stub/src/main/java/com/opentext/api/LAPI_USERS.java
+++ b/connectors/livelink/build-stub/src/main/java/com/opentext/api/LAPI_USERS.java
@@ -45,5 +45,9 @@
     return 0;
   }
   
+  public int ListUsers(LLValue rval)
+  {
+    return 0;
+  }
 }
 
diff --git a/connectors/livelink/build-stub/src/main/java/com/opentext/api/LLValue.java b/connectors/livelink/build-stub/src/main/java/com/opentext/api/LLValue.java
index 92577ee..5f4fcfc 100644
--- a/connectors/livelink/build-stub/src/main/java/com/opentext/api/LLValue.java
+++ b/connectors/livelink/build-stub/src/main/java/com/opentext/api/LLValue.java
@@ -31,6 +31,16 @@
   {
   }
   
+  public long toLong(int index, String attributeName)
+  {
+    return 0;
+  }
+  
+  public long toLong(String attributeName)
+  {
+    return 0;
+  }
+
   public int toInteger(int index, String attributeName)
   {
     return 0;
@@ -91,7 +101,7 @@
     return false;
   }
   
-  public Enumeration enumerateValues()
+  public LLValueEnumeration enumerateValues()
   {
     return null;
   }
diff --git a/connectors/livelink/build-stub/src/main/java/com/opentext/api/LLValueEnumeration.java b/connectors/livelink/build-stub/src/main/java/com/opentext/api/LLValueEnumeration.java
new file mode 100644
index 0000000..4d876bb
--- /dev/null
+++ b/connectors/livelink/build-stub/src/main/java/com/opentext/api/LLValueEnumeration.java
@@ -0,0 +1,50 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package com.opentext.api;
+
+import java.util.Enumeration;
+
+/** Stub classes to get connector to build.
+*/
+public class LLValueEnumeration implements Enumeration
+{
+  public LLValueEnumeration()
+  {
+  }
+  
+  public LLValueEnumeration(Enumeration e)
+  {
+  }
+  
+  public boolean hasMoreElements()
+  {
+    return false;
+  }
+  
+  public Object nextElement()
+  {
+    return null;
+  }
+  
+  public LLValue nextValue()
+  {
+    return null;
+  }
+}
+
diff --git a/connectors/livelink/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/livelink/LivelinkAuthority.java b/connectors/livelink/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/livelink/LivelinkAuthority.java
index c03dc3a..84b4194 100644
--- a/connectors/livelink/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/livelink/LivelinkAuthority.java
+++ b/connectors/livelink/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/livelink/LivelinkAuthority.java
@@ -89,11 +89,6 @@
   // Livelink does not have "deny" permissions, and there is no such thing as a document with no tokens, so it is safe to not have a local "deny" token.
   // However, people feel that a suspenders-and-belt approach is called for, so this restriction has been added.
   // Livelink tokens are numbers, "SYSTEM", or "GUEST", so they can't collide with the standard form.
-  private static final String denyToken = "DEAD_AUTHORITY";
-  private static final AuthorizationResponse unreachableResponse = new AuthorizationResponse(new String[]{denyToken},
-    AuthorizationResponse.RESPONSE_UNREACHABLE);
-  private static final AuthorizationResponse userNotFoundResponse = new AuthorizationResponse(new String[]{denyToken},
-    AuthorizationResponse.RESPONSE_USERNOTFOUND);
 
   /** Constructor.
   */
@@ -322,6 +317,16 @@
     }
   }
 
+  /** This method is called to assess whether this connector instance should
+  * actually be counted as being connected.
+  *@return true if the connector instance is actually connected.
+  */
+  @Override
+  public boolean isConnected()
+  {
+    return hasConnected;
+  }
+
   /** Close the connection.  Call this before discarding the repository connector.
   */
   @Override
@@ -446,14 +451,14 @@
           {
             if (Logging.authorityConnectors.isDebugEnabled())
               Logging.authorityConnectors.debug("Livelink: Livelink user '"+domainAndUser+"' does not exist");
-            return userNotFoundResponse;
+            return RESPONSE_USERNOTFOUND;
           }
 
           if (status != 0)
           {
             Logging.authorityConnectors.warn("Livelink: User '"+domainAndUser+"' GetUserInfo error # "+Integer.toString(status)+" "+llServer.getErrors());
             // The server is probably down.
-            return unreachableResponse;
+            return RESPONSE_UNREACHABLE;
           }
 
           int deleted = userObject.toInteger("Deleted");
@@ -462,7 +467,7 @@
             if (Logging.authorityConnectors.isDebugEnabled())
               Logging.authorityConnectors.debug("Livelink: Livelink user '"+domainAndUser+"' has been deleted");
             // Since the user cannot become undeleted, then this should be treated as 'user does not exist'.
-            return userNotFoundResponse;
+            return RESPONSE_USERNOTFOUND;
           }
           int privs = userObject.toInteger("UserPrivileges");
           if ((privs & LAPI_USERS.PRIV_PERM_WORLD) == LAPI_USERS.PRIV_PERM_WORLD)
@@ -476,7 +481,7 @@
           {
             if (Logging.authorityConnectors.isDebugEnabled())
               Logging.authorityConnectors.debug("Livelink: Livelink error looking up user rights for '"+domainAndUser+"' - user does not exist");
-            return userNotFoundResponse;
+            return RESPONSE_USERNOTFOUND;
           }
 
           if (status != 0)
@@ -485,7 +490,7 @@
             // right error code, so just stuff it in the log.
             Logging.authorityConnectors.warn("Livelink: For user '"+domainAndUser+"', ListRights error # "+Integer.toString(status)+" "+llServer.getErrors());
             // An error code at this level has to indicate a suddenly unreachable authority
-            return unreachableResponse;
+            return RESPONSE_UNREACHABLE;
           }
 
           // Go through the individual objects, and get their IDs.  These id's will be the access tokens
@@ -549,7 +554,7 @@
     catch (ServiceInterruption e)
     {
       Logging.authorityConnectors.warn("Livelink: Server seems to be down: "+e.getMessage(),e);
-      return unreachableResponse;
+      return RESPONSE_UNREACHABLE;
     }
   }
 
@@ -561,7 +566,7 @@
   public AuthorizationResponse getDefaultAuthorizationResponse(String userName)
   {
     // The default response if the getConnection method fails
-    return unreachableResponse;
+    return RESPONSE_UNREACHABLE;
   }
 
   // UI support methods.
@@ -716,6 +721,8 @@
     String serverPassword = parameters.getObfuscatedParameter(LiveLinkParameters.serverPassword);
     if (serverPassword == null)
       serverPassword = "";
+    else
+      serverPassword = out.mapPasswordToKey(serverPassword);
     String serverHTTPCgiPath = parameters.getParameter(LiveLinkParameters.serverHTTPCgiPath);
     if (serverHTTPCgiPath == null)
       serverHTTPCgiPath = "/livelink/livelink.exe";
@@ -728,6 +735,8 @@
     String serverHTTPNTLMPassword = parameters.getObfuscatedParameter(LiveLinkParameters.serverHTTPNTLMPassword);
     if (serverHTTPNTLMPassword == null)
       serverHTTPNTLMPassword = "";
+    else
+      serverHTTPNTLMPassword = out.mapPasswordToKey(serverHTTPNTLMPassword);
     String serverHTTPSKeystore = parameters.getParameter(LiveLinkParameters.serverHTTPSKeystore);
     IKeystoreManager localServerHTTPSKeystore;
     if (serverHTTPSKeystore == null)
@@ -974,7 +983,7 @@
       parameters.setParameter(LiveLinkParameters.serverUsername,serverUserName);
     String serverPassword = variableContext.getParameter("serverpassword");
     if (serverPassword != null)
-      parameters.setObfuscatedParameter(LiveLinkParameters.serverPassword,serverPassword);
+      parameters.setObfuscatedParameter(LiveLinkParameters.serverPassword,variableContext.mapKeyToPassword(serverPassword));
     String serverHTTPCgiPath = variableContext.getParameter("serverhttpcgipath");
     if (serverHTTPCgiPath != null)
       parameters.setParameter(LiveLinkParameters.serverHTTPCgiPath,serverHTTPCgiPath);
@@ -986,7 +995,7 @@
       parameters.setParameter(LiveLinkParameters.serverHTTPNTLMUsername,serverHTTPNTLMUserName);
     String serverHTTPNTLMPassword = variableContext.getParameter("serverhttpntlmpassword");
     if (serverHTTPNTLMPassword != null)
-      parameters.setObfuscatedParameter(LiveLinkParameters.serverHTTPNTLMPassword,serverHTTPNTLMPassword);
+      parameters.setObfuscatedParameter(LiveLinkParameters.serverHTTPNTLMPassword,variableContext.mapKeyToPassword(serverHTTPNTLMPassword));
     String serverHTTPSKeystoreValue = variableContext.getParameter("serverhttpskeystoredata");
     if (serverHTTPSKeystoreValue != null)
       parameters.setParameter(LiveLinkParameters.serverHTTPSKeystore,serverHTTPSKeystoreValue);
@@ -1265,7 +1274,10 @@
       this.serverHTTPNTLMDomain = (serverHTTPNTLMDomain==null)?"":serverHTTPNTLMDomain;
       this.serverHTTPNTLMUsername = (serverHTTPNTLMUsername==null)?"":serverHTTPNTLMUsername;
       this.serverHTTPNTLMPassword = (serverHTTPNTLMPassword==null)?"":serverHTTPNTLMPassword;
-      this.serverHTTPSKeystore = serverHTTPSKeystore.getString();
+      if (serverHTTPSKeystore != null)
+        this.serverHTTPSKeystore = serverHTTPSKeystore.getString();
+      else
+        this.serverHTTPSKeystore = null;
       this.responseLifetime = responseLifetime;
     }
 
@@ -1281,7 +1293,7 @@
       return getClass().getName() + "-" + userName + "-" + serverProtocol + "-" + serverName +
         "-" + Integer.toString(serverPort) + "-" + serverUsername + "-" + serverPassword +
         "-" + serverHTTPCgi + "-" + serverHTTPNTLMDomain + "-" + serverHTTPNTLMUsername +
-        "-" + serverHTTPNTLMPassword + "-" + serverHTTPSKeystore;
+        "-" + serverHTTPNTLMPassword + "-" + ((serverHTTPSKeystore==null)?"":serverHTTPSKeystore);
     }
 
     /** Return the object expiration interval */
@@ -1298,7 +1310,7 @@
         serverProtocol.hashCode() + serverName.hashCode() + new Integer(serverPort).hashCode() +
         serverUsername.hashCode() + serverPassword.hashCode() +
         serverHTTPCgi.hashCode() + serverHTTPNTLMDomain.hashCode() + serverHTTPNTLMUsername.hashCode() +
-        serverHTTPNTLMPassword.hashCode() + serverHTTPSKeystore.hashCode();
+        serverHTTPNTLMPassword.hashCode() + ((serverHTTPSKeystore==null)?0:serverHTTPSKeystore.hashCode());
     }
     
     public boolean equals(Object o)
@@ -1311,7 +1323,8 @@
         ard.serverUsername.equals(serverUsername) && ard.serverPassword.equals(serverPassword) &&
         ard.serverHTTPCgi.equals(serverHTTPCgi) && ard.serverHTTPNTLMDomain.equals(serverHTTPNTLMDomain) &&
         ard.serverHTTPNTLMUsername.equals(serverHTTPNTLMUsername) && ard.serverHTTPNTLMPassword.equals(serverHTTPNTLMPassword) &&
-        ard.serverHTTPSKeystore.equals(serverHTTPSKeystore);
+        ((ard.serverHTTPSKeystore != null && serverHTTPSKeystore != null && ard.serverHTTPSKeystore.equals(serverHTTPSKeystore)) ||
+          ((ard.serverHTTPSKeystore == null || serverHTTPSKeystore == null) && ard.serverHTTPSKeystore == serverHTTPSKeystore));
     }
     
   }
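Editor's sketch, not part of the patch: the null-tolerant keystore handling added to hashCode() and equals() above is the standard guard for a field that may legitimately be absent. On Java 7 and later the same checks could be written more compactly; this is only an equivalence note, with thisKeystore and otherKeystore standing in for the two serverHTTPSKeystore values being compared:

    import java.util.Objects;

    // Objects.equals returns true when both arguments are null, false when only one is.
    boolean sameKeystore = Objects.equals(thisKeystore, otherKeystore);
    int keystoreHash = Objects.hashCode(thisKeystore);  // 0 when null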
diff --git a/connectors/livelink/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/livelink/LivelinkConnector.java b/connectors/livelink/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/livelink/LivelinkConnector.java
index 58607cf..1be4ae7 100644
--- a/connectors/livelink/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/livelink/LivelinkConnector.java
+++ b/connectors/livelink/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/livelink/LivelinkConnector.java
@@ -24,6 +24,8 @@
 import org.apache.manifoldcf.crawler.system.Logging;
 import org.apache.manifoldcf.crawler.system.ManifoldCF;
 import org.apache.manifoldcf.core.common.XThreadInputStream;
+import org.apache.manifoldcf.core.common.XThreadOutputStream;
+import org.apache.manifoldcf.core.common.InterruptibleSocketFactory;
 
 import java.io.*;
 import java.util.*;
@@ -334,21 +336,31 @@
       serverHTTPNTLMPassword = params.getObfuscatedParameter(LiveLinkParameters.serverHTTPNTLMPassword);
 
       if (ingestProtocol == null || ingestProtocol.length() == 0)
-        ingestProtocol = "http";
+        ingestProtocol = null;
       if (viewProtocol == null || viewProtocol.length() == 0)
-        viewProtocol = ingestProtocol;
+      {
+        if (ingestProtocol == null)
+          viewProtocol = "http";
+        else
+          viewProtocol = ingestProtocol;
+      }
 
       if (ingestPort == null || ingestPort.length() == 0)
       {
-        if (!ingestProtocol.equals("https"))
-          ingestPort = "80";
+        if (ingestProtocol != null)
+        {
+          if (!ingestProtocol.equals("https"))
+            ingestPort = "80";
+          else
+            ingestPort = "443";
+        }
         else
-          ingestPort = "443";
+          ingestPort = null;
       }
 
       if (viewPort == null || viewPort.length() == 0)
       {
-        if (!viewProtocol.equals(ingestProtocol))
+        if (ingestProtocol == null || !viewProtocol.equals(ingestProtocol))
         {
           if (!viewProtocol.equals("https"))
             viewPort = "80";
@@ -359,13 +371,16 @@
           viewPort = ingestPort;
       }
 
-      try
+      if (ingestPort != null)
       {
-        ingestPortNumber = Integer.parseInt(ingestPort);
-      }
-      catch (NumberFormatException e)
-      {
-        throw new ManifoldCFException("Bad ingest port: "+e.getMessage(),e);
+        try
+        {
+          ingestPortNumber = Integer.parseInt(ingestPort);
+        }
+        catch (NumberFormatException e)
+        {
+          throw new ManifoldCFException("Bad ingest port: "+e.getMessage(),e);
+        }
       }
 
       String viewPortString;
@@ -459,6 +474,9 @@
     getSessionParameters();
     if (hasConnected == false)
     {
+      int socketTimeout = 900000;
+      int connectionTimeout = 300000;
+
       // Set up connection manager
       PoolingClientConnectionManager localConnectionManager = new PoolingClientConnectionManager();
       localConnectionManager.setMaxTotal(1);
@@ -466,7 +484,7 @@
       // Set up ingest ssl if indicated
       if (ingestKeystoreManager != null)
       {
-        SSLSocketFactory myFactory = new SSLSocketFactory(ingestKeystoreManager.getSecureSocketFactory(),
+        SSLSocketFactory myFactory = new SSLSocketFactory(new InterruptibleSocketFactory(ingestKeystoreManager.getSecureSocketFactory(), connectionTimeout),
           new BrowserCompatHostnameVerifier());
         Scheme myHttpsProtocol = new Scheme("https", 443, myFactory);
         localConnectionManager.getSchemeRegistry().register(myHttpsProtocol);
@@ -580,62 +598,67 @@
       getSession();
 
       // Now, set up trial of ingestion connection
-      String contextMsg = "for document access";
-      String ingestHttpAddress = ingestCgiPath;
-
-      HttpClient client = getInitializedClient(contextMsg);
-      HttpGet method = new HttpGet(getHost().toURI() + ingestHttpAddress);
-      method.setHeader(new BasicHeader("Accept","*/*"));
-      try
+      if (ingestProtocol != null)
       {
-        int statusCode = executeMethodViaThread(client,method);
-        switch (statusCode)
+        String contextMsg = "for document access";
+        String ingestHttpAddress = ingestCgiPath;
+
+        HttpClient client = getInitializedClient(contextMsg);
+        HttpGet method = new HttpGet(getHost().toURI() + ingestHttpAddress);
+        method.setHeader(new BasicHeader("Accept","*/*"));
+        try
         {
-        case 502:
-          return "Fetch test had transient 502 error response";
+          int statusCode = executeMethodViaThread(client,method);
+          switch (statusCode)
+          {
+          case 502:
+            return "Fetch test had transient 502 error response";
 
-        case HttpStatus.SC_UNAUTHORIZED:
-          return "Fetch test returned UNAUTHORIZED (401) response; check the security credentials and configuration";
+          case HttpStatus.SC_UNAUTHORIZED:
+            return "Fetch test returned UNAUTHORIZED (401) response; check the security credentials and configuration";
 
-        case HttpStatus.SC_OK:
-          return super.check();
+          case HttpStatus.SC_OK:
+            return super.check();
 
-        default:
-          return "Fetch test returned an unexpected response code of "+Integer.toString(statusCode);
+          default:
+            return "Fetch test returned an unexpected response code of "+Integer.toString(statusCode);
+          }
+        }
+        catch (InterruptedException e)
+        {
+          throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+        }
+        catch (java.net.SocketTimeoutException e)
+        {
+          return "Fetch test timed out reading from the Livelink HTTP Server: "+e.getMessage();
+        }
+        catch (java.net.SocketException e)
+        {
+          return "Fetch test received a socket error reading from Livelink HTTP Server: "+e.getMessage();
+        }
+        catch (javax.net.ssl.SSLHandshakeException e)
+        {
+          return "Fetch test was unable to set up a SSL connection to Livelink HTTP Server: "+e.getMessage();
+        }
+        catch (ConnectTimeoutException e)
+        {
+          return "Fetch test connection timed out reading from Livelink HTTP Server: "+e.getMessage();
+        }
+        catch (InterruptedIOException e)
+        {
+          throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+        }
+        catch (HttpException e)
+        {
+          return "Fetch test had an HTTP exception: "+e.getMessage();
+        }
+        catch (IOException e)
+        {
+          return "Fetch test had an IO failure: "+e.getMessage();
         }
       }
-      catch (InterruptedException e)
-      {
-        throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
-      }
-      catch (java.net.SocketTimeoutException e)
-      {
-        return "Fetch test timed out reading from the Livelink HTTP Server: "+e.getMessage();
-      }
-      catch (java.net.SocketException e)
-      {
-        return "Fetch test received a socket error reading from Livelink HTTP Server: "+e.getMessage();
-      }
-      catch (javax.net.ssl.SSLHandshakeException e)
-      {
-        return "Fetch test was unable to set up a SSL connection to Livelink HTTP Server: "+e.getMessage();
-      }
-      catch (ConnectTimeoutException e)
-      {
-        return "Fetch test connection timed out reading from Livelink HTTP Server: "+e.getMessage();
-      }
-      catch (InterruptedIOException e)
-      {
-        throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
-      }
-      catch (HttpException e)
-      {
-        return "Fetch test had an HTTP exception: "+e.getMessage();
-      }
-      catch (IOException e)
-      {
-        return "Fetch test had an IO failure: "+e.getMessage();
-      }
+      else
+        return super.check();
     }
     catch (ServiceInterruption e)
     {
@@ -922,8 +945,8 @@
 
     // Walk the specification for the "startpoint" types.  Amalgamate these into a list of strings.
     // Presume that all roots are startpoint nodes
-    int i = 0;
-    while (i < spec.getChildCount())
+    boolean doUserWorkspaces = false;
+    for (int i = 0; i < spec.getChildCount(); i++)
     {
       SpecificationNode n = spec.getChild(i);
       if (n.getType().equals("startpoint"))
@@ -948,7 +971,78 @@
             path,"NOT FOUND",null,null);
         }
       }
-      i++;
+      else if (n.getType().equals("userworkspace"))
+      {
+        String value = n.getAttributeValue("value");
+        if (value != null && value.equals("true"))
+          doUserWorkspaces = true;
+        else if (value != null && value.equals("false"))
+          doUserWorkspaces = false;
+      }
+      
+      if (doUserWorkspaces)
+      {
+        // Do ListUsers and enumerate the values.
+        int sanityRetryCount = FAILURE_RETRY_COUNT;
+        while (true)
+        {
+          ListUsersThread t = new ListUsersThread();
+          try
+          {
+            t.start();
+            t.join();
+            Throwable thr = t.getException();
+            if (thr != null)
+            {
+              if (thr instanceof RuntimeException)
+                throw (RuntimeException)thr;
+              else if (thr instanceof ManifoldCFException)
+              {
+                sanityRetryCount = assessRetry(sanityRetryCount,(ManifoldCFException)thr);
+                continue;
+              }
+              else
+                throw (Error)thr;
+            }
+
+            LLValue childrenDocs = t.getResponse();
+
+            int size = 0;
+
+            if (childrenDocs.isRecord())
+              size = 1;
+            if (childrenDocs.isTable())
+              size = childrenDocs.size();
+
+            // Do the scan
+            for (int j = 0; j < size; j++)
+            {
+              int childID = childrenDocs.toInteger(j, "ID");
+              
+              // Skip admin user
+              if (childID == 1000 || childID == 1001)
+                continue;
+              
+              if (Logging.connectors.isDebugEnabled())
+                Logging.connectors.debug("Livelink: Found a user: ID="+Integer.toString(childID));
+
+              activities.addSeedDocument("F0:"+Integer.toString(childID));
+            }
+            break;
+          }
+          catch (InterruptedException e)
+          {
+            t.interrupt();
+            throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+          }
+          catch (RuntimeException e)
+          {
+            sanityRetryCount = handleLivelinkRuntimeException(e,sanityRetryCount,true);
+            continue;
+          }
+        }
+      }
+      
     }
 
   }
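Editor's sketch, not part of the patch: the user-workspace path above is driven by a "userworkspace" node in the job's document specification, and each discovered user is then seeded as "F0:"+userID. A minimal illustration of building that node programmatically, mirroring what processSpecificationPost() writes further down (ds is the job's DocumentSpecification):

    SpecificationNode userWorkspaceNode = new SpecificationNode("userworkspace");
    userWorkspaceNode.setAttribute("value","true");
    ds.addChild(ds.getChildCount(),userWorkspaceNode);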
@@ -1616,16 +1710,23 @@
 "    editconnection.serverhttpcgipath.focus();\n"+
 "    return false;\n"+
 "  }\n"+
-"  if (editconnection.ingestcgipath.value == \"\")\n"+
+"  if (editconnection.viewprotocol.value == \"\" && editconnection.ingestprotocol.value == \"\")\n"+
 "  {\n"+
-"    alert(\""+Messages.getBodyJavascriptString(locale,"LivelinkConnector.EnterTheCrawlCgiPathToLivelink")+"\");\n"+
-"    SelectTab(\"" + Messages.getBodyJavascriptString(locale,"LivelinkConnector.DocumentAccess") + "\");\n"+
-"    editconnection.ingestcgipath.focus();\n"+
+"    alert(\""+Messages.getBodyJavascriptString(locale,"LivelinkConnector.SelectAViewProtocol")+"\");\n"+
+"    SelectTab(\"" + Messages.getBodyJavascriptString(locale,"LivelinkConnector.DocumentView") + "\");\n"+
+"    editconnection.viewprotocol.focus();\n"+
 "    return false;\n"+
 "  }\n"+
-"  if (editconnection.ingestcgipath.value.substring(0,1) != \"/\")\n"+
+"  if (editconnection.viewcgipath.value == \"\" && editconnection.ingestcgipath.value == \"\")\n"+
 "  {\n"+
-"    alert(\""+Messages.getBodyJavascriptString(locale,"LivelinkConnector.TheIngestCgiPathMustBeginWithACharacter")+"\");\n"+
+"    alert(\""+Messages.getBodyJavascriptString(locale,"LivelinkConnector.EnterTheViewCgiPathToLivelink")+"\");\n"+
+"    SelectTab(\"" + Messages.getBodyJavascriptString(locale,"LivelinkConnector.DocumentView") + "\");\n"+
+"    editconnection.viewcgipath.focus();\n"+
+"    return false;\n"+
+"  }\n"+
+"  if (editconnection.ingestcgipath.value != \"\" && editconnection.ingestcgipath.value.substring(0,1) != \"/\")\n"+
+"  {\n"+
+"    alert(\""+Messages.getBodyJavascriptString(locale,"LivelinkConnector.TheIngestCgiPathMustBeBlankOrBeginWithACharacter")+"\");\n"+
 "    SelectTab(\"" + Messages.getBodyJavascriptString(locale,"LivelinkConnector.DocumentAccess") + "\");\n"+
 "    editconnection.ingestcgipath.focus();\n"+
 "    return false;\n"+
@@ -1698,13 +1799,13 @@
     // Document access parameters
     String ingestProtocol = parameters.getParameter(LiveLinkParameters.ingestProtocol);
     if (ingestProtocol == null)
-      ingestProtocol = "http";
+      ingestProtocol = "";
     String ingestPort = parameters.getParameter(LiveLinkParameters.ingestPort);
     if (ingestPort == null)
       ingestPort = "";
     String ingestCgiPath = parameters.getParameter(LiveLinkParameters.ingestCgiPath);
     if (ingestCgiPath == null)
-      ingestCgiPath = "/livelink/livelink.exe";
+      ingestCgiPath = "";
     String ingestNtlmUsername = parameters.getParameter(LiveLinkParameters.ingestNtlmUsername);
     if (ingestNtlmUsername == null)
       ingestNtlmUsername = "";
@@ -1724,7 +1825,7 @@
     // Document view parameters
     String viewProtocol = parameters.getParameter(LiveLinkParameters.viewProtocol);
     if (viewProtocol == null)
-      viewProtocol = "";
+      viewProtocol = "http";
     String viewServerName = parameters.getParameter(LiveLinkParameters.viewServerName);
     if (viewServerName == null)
       viewServerName = "";
@@ -1733,7 +1834,7 @@
       viewPort = "";
     String viewCgiPath = parameters.getParameter(LiveLinkParameters.viewCgiPath);
     if (viewCgiPath == null)
-      viewCgiPath = "";
+      viewCgiPath = "/livelink/livelink.exe";
 
     // The "Server" tab
     // Always pass the whole keystore as a hidden.
@@ -1881,7 +1982,8 @@
 "  <tr>\n"+
 "    <td class=\"description\">"+Messages.getBodyString(locale,"LivelinkConnector.DocumentFetchProtocol")+"</td>\n"+
 "    <td class=\"value\">\n"+
-"      <select name=\"ingestprotocol\" size=\"2\">\n"+
+"      <select name=\"ingestprotocol\" size=\"3\">\n"+
+"        <option value=\"\" "+((ingestProtocol.equals(""))?"selected=\"selected\"":"")+">"+Messages.getBodyString(locale,"LivelinkConnector.UseLAPI")+"</option>\n"+
 "        <option value=\"http\" "+((ingestProtocol.equals("http"))?"selected=\"selected\"":"")+">http</option>\n"+
 "        <option value=\"https\" "+((ingestProtocol.equals("https"))?"selected=\"selected\"":"")+">https</option>\n"+
 "      </select>\n"+
@@ -2409,10 +2511,32 @@
     int k;
 
     // Paths tab
+    boolean userWorkspaces = false;
+    i = 0;
+    while (i < ds.getChildCount())
+    {
+      SpecificationNode sn = ds.getChild(i++);
+      if (sn.getType().equals("userworkspace"))
+      {
+        String value = sn.getAttributeValue("value");
+        if (value != null && value.equals("true"))
+          userWorkspaces = true;
+      }
+    }
     if (tabName.equals(Messages.getString(locale,"LivelinkConnector.Paths")))
     {
       out.print(
 "<table class=\"displaytable\">\n"+
+"  <tr><td class=\"separator\" colspan=\"2\"><hr/></td></tr>\n"+
+"  <tr>\n"+
+"    <td class=\"description\">\n"+
+"      <nobr>"+Messages.getBodyString(locale,"LivelinkConnector.CrawlUserWorkspaces")+"</nobr>\n"+
+"    </td>\n"+
+"    <td class=\"value\">\n"+
+"      <input type=\"checkbox\" name=\"userworkspace\" value=\"true\""+(userWorkspaces?" checked=\"true\"":"")+"/>\n"+
+"      <input type=\"hidden\" name=\"userworkspace_present\" value=\"true\"/>\n"+
+"    </td>\n"+
+"  </tr>\n"+
 "  <tr><td class=\"separator\" colspan=\"2\"><hr/></td></tr>\n"
       );
       // Now, loop through paths
@@ -2544,7 +2668,9 @@
         }
       }
       out.print(
-"<input type=\"hidden\" name=\"pathcount\" value=\""+Integer.toString(k)+"\"/>\n"
+"<input type=\"hidden\" name=\"pathcount\" value=\""+Integer.toString(k)+"\"/>\n"+
+"<input type=\"hidden\" name=\"userworkspace\" value=\""+(userWorkspaces?"true":"false")+"\"/>\n"+
+"<input type=\"hidden\" name=\"userworkspace_present\" value=\"true\"/>\n"
       );
     }
 
@@ -3137,6 +3263,24 @@
   public String processSpecificationPost(IPostParameters variableContext, Locale locale, DocumentSpecification ds)
     throws ManifoldCFException
   {
+    String userWorkspacesPresent = variableContext.getParameter("userworkspace_present");
+    if (userWorkspacesPresent != null)
+    {
+      String value = variableContext.getParameter("userworkspace");
+      int i = 0;
+      while (i < ds.getChildCount())
+      {
+        SpecificationNode sn = ds.getChild(i);
+        if (sn.getType().equals("userworkspace"))
+          ds.removeChild(i);
+        else
+          i++;
+      }
+      SpecificationNode sn = new SpecificationNode("userworkspace");
+      sn.setAttribute("value",value);
+      ds.addChild(ds.getChildCount(),sn);
+    }
+    
     String xc = variableContext.getParameter("pathcount");
     if (xc != null)
     {
@@ -3632,6 +3776,35 @@
 "  <tr>\n"
     );
     int i = 0;
+    boolean userWorkspaces = false;
+    while (i < ds.getChildCount())
+    {
+      SpecificationNode sn = ds.getChild(i++);
+      if (sn.getType().equals("userworkspace"))
+      {
+        String value = sn.getAttributeValue("value");
+        if (value != null && value.equals("true"))
+          userWorkspaces = true;
+      }
+    }
+
+    out.print(
+"    <td class=\"description\"/>\n"+
+"      <nobr>"+Messages.getBodyString(locale,"LivelinkConnector.CrawlUserWorkspaces")+"</nobr>\n"+
+"    </td>\n"+
+"    <td class=\"value\"/>\n"+
+"      "+(userWorkspaces?Messages.getBodyString(locale,"LivelinkConnector.Yes"):Messages.getBodyString(locale,"LivelinkConnector.No"))+"\n"+
+"    </td>\n"+
+"  </tr>"
+    );
+    out.print(
+"  <tr><td class=\"separator\" colspan=\"2\"><hr/></td></tr>\n"
+    );
+    out.print(
+"  <tr>"
+    );
+
+    i = 0;
     boolean seenAny = false;
     while (i < ds.getChildCount())
     {
@@ -4034,14 +4207,6 @@
 
     String contextMsg = "for '"+documentIdentifier+"'";
 
-    String ingestHttpAddress = convertToIngestURI(documentIdentifier);
-    if (ingestHttpAddress == null)
-    {
-      if (Logging.connectors.isDebugEnabled())
-        Logging.connectors.debug("Livelink: No fetch URI "+contextMsg+" - not ingesting");
-      return;
-    }
-
     String viewHttpAddress = convertToViewURI(documentIdentifier);
     if (viewHttpAddress == null)
     {
@@ -4050,13 +4215,17 @@
       return;
     }
 
-    RepositoryDocument rd = new RepositoryDocument();
-
-    int colonPos = documentIdentifier.indexOf(":",1);
-    
+    // Fetch logging
+    long startTime = System.currentTimeMillis();
+    String resultCode = "FAILED";
+    String resultDescription = null;
+    Long readSize = null;
+    boolean wasInterrupted = false;
     int objID;
     int vol;
 
+    int colonPos = documentIdentifier.indexOf(":",1);
+        
     if (colonPos == -1)
     {
       objID = new Integer(documentIdentifier.substring(1)).intValue();
@@ -4067,356 +4236,504 @@
       objID = new Integer(documentIdentifier.substring(colonPos+1)).intValue();
       vol = new Integer(documentIdentifier.substring(1,colonPos)).intValue();
     }
-
-    // Add general metadata
-    ObjectInformation objInfo = llc.getObjectInformation(vol,objID);
-    VersionInformation versInfo = llc.getVersionInformation(vol,objID,0);
-    if (!objInfo.exists())
-    {
-      Logging.connectors.debug("Livelink: No object "+contextMsg+": not ingesting");
-      return;
-    }
-    if (!versInfo.exists())
-    {
-      Logging.connectors.debug("Livelink: No version data "+contextMsg+": not ingesting");
-      return;
-    }
     
-    // Add general data we need for the output connector
-    String mimeType = versInfo.getMimeType();
-    if (mimeType != null)
-      rd.setMimeType(mimeType);
-    String fileName = versInfo.getFileName();
-    if (fileName != null)
-      rd.setFileName(fileName);
-    Date creationDate = objInfo.getCreationDate();
-    if (creationDate != null)
-      rd.setCreatedDate(creationDate);
-    Date modifyDate = versInfo.getModifyDate();
-    if (modifyDate != null)
-      rd.setModifiedDate(modifyDate);
-    
-    rd.addField(GENERAL_NAME_FIELD,objInfo.getName());
-    rd.addField(GENERAL_DESCRIPTION_FIELD,objInfo.getComments());
-    if (creationDate != null)
-      rd.addField(GENERAL_CREATIONDATE_FIELD,creationDate.toString());
-    if (modifyDate != null)
-      rd.addField(GENERAL_MODIFYDATE_FIELD,modifyDate.toString());
-    UserInformation owner = llc.getUserInformation(objInfo.getOwnerId().intValue());
-    UserInformation creator = llc.getUserInformation(objInfo.getCreatorId().intValue());
-    UserInformation modifier = llc.getUserInformation(versInfo.getOwnerId().intValue());
-    if (owner != null)
-      rd.addField(GENERAL_OWNER,owner.getName());
-    if (creator != null)
-      rd.addField(GENERAL_CREATOR,creator.getName());
-    if (modifier != null)
-      rd.addField(GENERAL_MODIFIER,modifier.getName());
-
-    // Iterate over the metadata items.  These are organized by category
-    // for speed of lookup.
-
-    // Unpack version string
-    int startPos = 0;
-
-    // Metadata items first
-    ArrayList metadataItems = new ArrayList();
-    startPos = unpackList(metadataItems,version,startPos,'+');
-    Iterator catIter = desc.getItems(metadataItems);
-    while (catIter.hasNext())
-    {
-      MetadataItem item = (MetadataItem)catIter.next();
-      MetadataPathItem pathItem = item.getPathItem();
-      if (pathItem != null)
-      {
-        int catID = pathItem.getCatID();
-        // grab the associated catversion
-        LLValue catVersion = getCatVersion(objID,catID);
-        if (catVersion != null)
-        {
-          // Go through attributes now
-          Iterator attrIter = item.getAttributeNames();
-          while (attrIter.hasNext())
-          {
-            String attrName = (String)attrIter.next();
-            // Create a unique metadata name
-            String metadataName = pathItem.getCatName()+":"+attrName;
-            // Fetch the metadata and stuff it into the RepositoryData structure
-            String[] metadataValue = getAttributeValue(catVersion,attrName);
-            if (metadataValue != null)
-              rd.addField(metadataName,metadataValue);
-            else
-              Logging.connectors.warn("Livelink: Metadata attribute '"+metadataName+"' does not seem to exist; please correct the job");
-          }
-        }
-
-      }
-    }
-
-    // Unpack acls (conditionally)
-    if (startPos < version.length())
-    {
-      char x = version.charAt(startPos++);
-      if (x == '+')
-      {
-        ArrayList acls = new ArrayList();
-        startPos = unpackList(acls,version,startPos,'+');
-        // Turn into acls and add into description
-        String[] aclArray = new String[acls.size()];
-        int j = 0;
-        while (j < aclArray.length)
-        {
-          aclArray[j] = (String)acls.get(j);
-          j++;
-        }
-        rd.setACL(aclArray);
-
-        StringBuilder denyBuffer = new StringBuilder();
-        startPos = unpack(denyBuffer,version,startPos,'+');
-        String denyAcl = denyBuffer.toString();
-        String[] denyAclArray = new String[1];
-        denyAclArray[0] = denyAcl;
-        rd.setDenyACL(denyAclArray);
-      }
-    }
-
-    // Add the path metadata item into the mix, if enabled
-    String pathAttributeName = sDesc.getPathAttributeName();
-    if (pathAttributeName != null && pathAttributeName.length() > 0)
-    {
-      String pathString = sDesc.getPathAttributeValue(documentIdentifier);
-      if (pathString != null)
-      {
-        if (Logging.connectors.isDebugEnabled())
-          Logging.connectors.debug("Livelink: Path attribute name is '"+pathAttributeName+"'"+contextMsg+", value is '"+pathString+"'");
-        rd.addField(pathAttributeName,pathString);
-      }
-    }
-
-    // Set up connection
-    HttpClient client = getInitializedClient(contextMsg);
-
-    long currentTime;
-
-    if (Logging.connectors.isInfoEnabled())
-      Logging.connectors.info("Livelink: " + ingestHttpAddress);
-
-    long startTime = System.currentTimeMillis();
-    String resultCode = "OK";
-    String resultDescription = null;
-    Long readSize = null;
-
-    HttpGet method = new HttpGet(getHost().toURI() + ingestHttpAddress);
-    method.setHeader(new BasicHeader("Accept","*/*"));
-
-    ExecuteMethodThread methodThread = new ExecuteMethodThread(client,method);
-    methodThread.start();
+    // Try/finally for fetch logging
     try
     {
-
-      int statusCode = methodThread.getResponseCode();
-      switch (statusCode)
+      // Check URL first
+      if (activities.checkURLIndexable(viewHttpAddress))
       {
-      case 500:
-      case 502:
-        Logging.connectors.warn("Livelink: Service interruption during fetch "+contextMsg+" with Livelink HTTP Server, retrying...");
-        throw new ServiceInterruption("Service interruption during fetch",new ManifoldCFException(Integer.toString(statusCode)+" error while fetching"),System.currentTimeMillis()+60000L,
-          System.currentTimeMillis()+600000L,-1,true);
 
-      case HttpStatus.SC_UNAUTHORIZED:
-        Logging.connectors.warn("Livelink: Document fetch unauthorized for "+ingestHttpAddress+" ("+contextMsg+")");
-        // Since we logged in, we should fail here if the ingestion user doesn't have access to the
-        // the document, but if we do, don't fail hard.
-        resultCode = "UNAUTHORIZED";
-        activities.deleteDocument(documentIdentifier,version);
-        return;
-
-      case HttpStatus.SC_OK:
-        if (Logging.connectors.isDebugEnabled())
-          Logging.connectors.debug("Livelink: Created http document connection to Livelink "+contextMsg);
-        long dataSize = methodThread.getResponseContentLength();
-        // The above replaces this, which required another access:
-        // long dataSize = (long)value.toInteger("DataSize");
-        // A non-existent content length will cause a value of -1 to be returned.  This seems to indicate that the session login did not work right.
-        if (dataSize >= 0)
+        // Add general metadata
+        ObjectInformation objInfo = llc.getObjectInformation(vol,objID);
+        VersionInformation versInfo = llc.getVersionInformation(vol,objID,0);
+        if (!objInfo.exists())
         {
-          if (Logging.connectors.isDebugEnabled())
-            Logging.connectors.debug("Livelink: Content length from livelink server "+contextMsg+"' = "+new Long(dataSize).toString());
-          if (activities.checkLengthIndexable(dataSize))
+          resultCode = "OBJECTNOTFOUND";
+          Logging.connectors.debug("Livelink: No object "+contextMsg+": not ingesting");
+          return;
+        }
+        if (!versInfo.exists())
+        {
+          resultCode = "VERSIONNOTFOUND";
+          Logging.connectors.debug("Livelink: No version data "+contextMsg+": not ingesting");
+          return;
+        }
+
+        String mimeType = versInfo.getMimeType();
+        if (activities.checkMimeTypeIndexable(mimeType))
+        {
+          Long dataSize = versInfo.getDataSize();
+          if (dataSize != null && activities.checkLengthIndexable(dataSize.longValue()))
           {
-            try
-            {
-              InputStream is = methodThread.getSafeInputStream();
-              try
-              {
-                rd.setBinary(is,dataSize);
-                
-                activities.ingestDocument(documentIdentifier,version,viewHttpAddress,rd);
+            String fileName = versInfo.getFileName();
+            Date creationDate = objInfo.getCreationDate();
+            Date modifyDate = versInfo.getModifyDate();
+            RepositoryDocument rd = new RepositoryDocument();
 
+            
+            // Add general data we need for the output connector
+            if (mimeType != null)
+              rd.setMimeType(mimeType);
+            if (fileName != null)
+              rd.setFileName(fileName);
+            if (creationDate != null)
+              rd.setCreatedDate(creationDate);
+            if (modifyDate != null)
+              rd.setModifiedDate(modifyDate);
+            
+            rd.addField(GENERAL_NAME_FIELD,objInfo.getName());
+            rd.addField(GENERAL_DESCRIPTION_FIELD,objInfo.getComments());
+            if (creationDate != null)
+              rd.addField(GENERAL_CREATIONDATE_FIELD,creationDate.toString());
+            if (modifyDate != null)
+              rd.addField(GENERAL_MODIFYDATE_FIELD,modifyDate.toString());
+            UserInformation owner = llc.getUserInformation(objInfo.getOwnerId().intValue());
+            UserInformation creator = llc.getUserInformation(objInfo.getCreatorId().intValue());
+            UserInformation modifier = llc.getUserInformation(versInfo.getOwnerId().intValue());
+            if (owner != null)
+              rd.addField(GENERAL_OWNER,owner.getName());
+            if (creator != null)
+              rd.addField(GENERAL_CREATOR,creator.getName());
+            if (modifier != null)
+              rd.addField(GENERAL_MODIFIER,modifier.getName());
+
+            // Iterate over the metadata items.  These are organized by category
+            // for speed of lookup.
+
+            // Unpack version string
+            int startPos = 0;
+
+            // Metadata items first
+            ArrayList metadataItems = new ArrayList();
+            startPos = unpackList(metadataItems,version,startPos,'+');
+            Iterator catIter = desc.getItems(metadataItems);
+            while (catIter.hasNext())
+            {
+              MetadataItem item = (MetadataItem)catIter.next();
+              MetadataPathItem pathItem = item.getPathItem();
+              if (pathItem != null)
+              {
+                int catID = pathItem.getCatID();
+                // grab the associated catversion
+                LLValue catVersion = getCatVersion(objID,catID);
+                if (catVersion != null)
+                {
+                  // Go through attributes now
+                  Iterator attrIter = item.getAttributeNames();
+                  while (attrIter.hasNext())
+                  {
+                    String attrName = (String)attrIter.next();
+                    // Create a unique metadata name
+                    String metadataName = pathItem.getCatName()+":"+attrName;
+                    // Fetch the metadata and stuff it into the RepositoryData structure
+                    String[] metadataValue = getAttributeValue(catVersion,attrName);
+                    if (metadataValue != null)
+                      rd.addField(metadataName,metadataValue);
+                    else
+                      Logging.connectors.warn("Livelink: Metadata attribute '"+metadataName+"' does not seem to exist; please correct the job");
+                  }
+                }
+
+              }
+            }
+
+            // Unpack acls (conditionally)
+            if (startPos < version.length())
+            {
+              char x = version.charAt(startPos++);
+              if (x == '+')
+              {
+                ArrayList acls = new ArrayList();
+                startPos = unpackList(acls,version,startPos,'+');
+                // Turn into acls and add into description
+                String[] aclArray = new String[acls.size()];
+                int j = 0;
+                while (j < aclArray.length)
+                {
+                  aclArray[j] = (String)acls.get(j);
+                  j++;
+                }
+                rd.setACL(aclArray);
+
+                StringBuilder denyBuffer = new StringBuilder();
+                startPos = unpack(denyBuffer,version,startPos,'+');
+                String denyAcl = denyBuffer.toString();
+                String[] denyAclArray = new String[1];
+                denyAclArray[0] = denyAcl;
+                rd.setDenyACL(denyAclArray);
+              }
+            }
+
+            // Add the path metadata item into the mix, if enabled
+            String pathAttributeName = sDesc.getPathAttributeName();
+            if (pathAttributeName != null && pathAttributeName.length() > 0)
+            {
+              String pathString = sDesc.getPathAttributeValue(documentIdentifier);
+              if (pathString != null)
+              {
                 if (Logging.connectors.isDebugEnabled())
-                  Logging.connectors.debug("Livelink: Ingesting done "+contextMsg);
+                  Logging.connectors.debug("Livelink: Path attribute name is '"+pathAttributeName+"'"+contextMsg+", value is '"+pathString+"'");
+                rd.addField(pathAttributeName,pathString);
+              }
+            }
 
-              }
-              finally
+            if (ingestProtocol != null)
+            {
+              // Use HTTP to fetch document!
+              String ingestHttpAddress = convertToIngestURI(documentIdentifier);
+              if (ingestHttpAddress != null)
               {
-                // Close stream via thread, since otherwise this can hang
-                is.close();
+
+                // Set up connection
+                HttpClient client = getInitializedClient(contextMsg);
+
+                long currentTime;
+
+                if (Logging.connectors.isInfoEnabled())
+                  Logging.connectors.info("Livelink: " + ingestHttpAddress);
+
+
+                HttpGet method = new HttpGet(getHost().toURI() + ingestHttpAddress);
+                method.setHeader(new BasicHeader("Accept","*/*"));
+
+                ExecuteMethodThread methodThread = new ExecuteMethodThread(client,method);
+                methodThread.start();
+                try
+                {
+
+                  int statusCode = methodThread.getResponseCode();
+                  switch (statusCode)
+                  {
+                  case 500:
+                  case 502:
+                    Logging.connectors.warn("Livelink: Service interruption during fetch "+contextMsg+" with Livelink HTTP Server, retrying...");
+                    throw new ServiceInterruption("Service interruption during fetch",new ManifoldCFException(Integer.toString(statusCode)+" error while fetching"),System.currentTimeMillis()+60000L,
+                      System.currentTimeMillis()+600000L,-1,true);
+
+                  case HttpStatus.SC_UNAUTHORIZED:
+                    Logging.connectors.warn("Livelink: Document fetch unauthorized for "+ingestHttpAddress+" ("+contextMsg+")");
+                    // Since we logged in, we should fail here if the ingestion user doesn't have access to the
+                    // document, but if we do, don't fail hard.
+                    resultCode = "UNAUTHORIZED";
+                    activities.deleteDocument(documentIdentifier,version);
+                    return;
+
+                  case HttpStatus.SC_OK:
+                    if (Logging.connectors.isDebugEnabled())
+                      Logging.connectors.debug("Livelink: Created http document connection to Livelink "+contextMsg);
+                    // A non-existent content length will cause a value of -1 to be returned.  This seems to indicate that the session login did not work right.
+                    if (methodThread.getResponseContentLength() >= 0)
+                    {
+                      try
+                      {
+                        InputStream is = methodThread.getSafeInputStream();
+                        try
+                        {
+                          rd.setBinary(is,dataSize);
+                            
+                          activities.ingestDocument(documentIdentifier,version,viewHttpAddress,rd);
+
+                          if (Logging.connectors.isDebugEnabled())
+                            Logging.connectors.debug("Livelink: Ingesting done "+contextMsg);
+
+                        }
+                        finally
+                        {
+                          // Close stream via thread, since otherwise this can hang
+                          is.close();
+                        }
+                      }
+                      catch (java.net.SocketTimeoutException e)
+                      {
+                        resultCode = "DATATIMEOUT";
+                        resultDescription = e.getMessage();
+                        currentTime = System.currentTimeMillis();
+                        Logging.connectors.warn("Livelink: Livelink socket timed out ingesting from the Livelink HTTP Server "+contextMsg+": "+e.getMessage(), e);
+                        throw new ServiceInterruption("Socket timed out: "+e.getMessage(),e,currentTime+300000L,currentTime+6*3600000L,-1,false);
+                      }
+                      catch (java.net.SocketException e)
+                      {
+                        resultCode = "DATASOCKETERROR";
+                        resultDescription = e.getMessage();
+                        currentTime = System.currentTimeMillis();
+                        Logging.connectors.warn("Livelink: Livelink socket error ingesting from the Livelink HTTP Server "+contextMsg+": "+e.getMessage(), e);
+                        throw new ServiceInterruption("Socket error: "+e.getMessage(),e,currentTime+300000L,currentTime+6*3600000L,-1,false);
+                      }
+                      catch (javax.net.ssl.SSLHandshakeException e)
+                      {
+                        resultCode = "DATASSLHANDSHAKEERROR";
+                        resultDescription = e.getMessage();
+                        currentTime = System.currentTimeMillis();
+                        Logging.connectors.warn("Livelink: SSL handshake failed authenticating "+contextMsg+": "+e.getMessage(),e);
+                        throw new ServiceInterruption("SSL handshake error: "+e.getMessage(),e,currentTime+60000L,currentTime+300000L,-1,true);
+                      }
+                      catch (ConnectTimeoutException e)
+                      {
+                        resultCode = "CONNECTTIMEOUT";
+                        resultDescription = e.getMessage();
+                        currentTime = System.currentTimeMillis();
+                        Logging.connectors.warn("Livelink: Livelink socket timed out connecting to the Livelink HTTP Server "+contextMsg+": "+e.getMessage(), e);
+                        throw new ServiceInterruption("Connect timed out: "+e.getMessage(),e,currentTime+300000L,currentTime+6*3600000L,-1,false);
+                      }
+                      catch (InterruptedException e)
+                      {
+                        wasInterrupted = true;
+                        throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+                      }
+                      catch (InterruptedIOException e)
+                      {
+                        wasInterrupted = true;
+                        throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+                      }
+                      catch (HttpException e)
+                      {
+                        resultCode = "HTTPEXCEPTION";
+                        resultDescription = e.getMessage();
+                        // Treat unknown error ingesting data as a transient condition
+                        currentTime = System.currentTimeMillis();
+                        Logging.connectors.warn("Livelink: HTTP exception ingesting "+contextMsg+": "+e.getMessage(),e);
+                        throw new ServiceInterruption("HTTP exception ingesting "+contextMsg+": "+e.getMessage(),e,currentTime+300000L,currentTime+6*3600000L,-1,false);
+                      }
+                      catch (IOException e)
+                      {
+                        resultCode = "DATAEXCEPTION";
+                        resultDescription = e.getMessage();
+                        // Treat unknown error ingesting data as a transient condition
+                        currentTime = System.currentTimeMillis();
+                        Logging.connectors.warn("Livelink: IO exception ingesting "+contextMsg+": "+e.getMessage(),e);
+                        throw new ServiceInterruption("IO exception ingesting "+contextMsg+": "+e.getMessage(),e,currentTime+300000L,currentTime+6*3600000L,-1,false);
+                      }
+                      readSize = dataSize;
+                    }
+                    else
+                    {
+                      resultCode = "SESSIONLOGINFAILED";
+                      activities.deleteDocument(documentIdentifier,version);
+                    }
+                    break;
+                  case HttpStatus.SC_BAD_REQUEST:
+                  case HttpStatus.SC_USE_PROXY:
+                  case HttpStatus.SC_GONE:
+                    resultCode = "ERROR "+Integer.toString(statusCode);
+                    throw new ManifoldCFException("Unrecoverable request failure; error = "+Integer.toString(statusCode));
+                  default:
+                    resultCode = "UNKNOWN";
+                    Logging.connectors.warn("Livelink: Attempt to retrieve document from '"+ingestHttpAddress+"' received a response of "+Integer.toString(statusCode)+"; retrying in one minute");
+                    currentTime = System.currentTimeMillis();
+                    throw new ServiceInterruption("Fetch failed; retrying in 1 minute",new ManifoldCFException("Fetch failed with unknown code "+Integer.toString(statusCode)),
+                      currentTime+60000L,currentTime+600000L,-1,true);
+                  }
+                }
+                catch (InterruptedException e)
+                {
+                  // Drop the connection on the floor
+                  methodThread.interrupt();
+                  methodThread = null;
+                  throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+                }
+                catch (java.net.SocketTimeoutException e)
+                {
+                  Logging.connectors.warn("Livelink: Socket timed out reading from the Livelink HTTP Server "+contextMsg+": "+e.getMessage(), e);
+                  resultCode = "TIMEOUT";
+                  resultDescription = e.getMessage();
+                  currentTime = System.currentTimeMillis();
+                  throw new ServiceInterruption("Socket timed out: "+e.getMessage(),e,currentTime+300000L,currentTime+6*3600000L,-1,true);
+                }
+                catch (java.net.SocketException e)
+                {
+                  Logging.connectors.warn("Livelink: Socket error reading from Livelink HTTP Server "+contextMsg+": "+e.getMessage(), e);
+                  resultCode = "SOCKETERROR";
+                  resultDescription = e.getMessage();
+                  currentTime = System.currentTimeMillis();
+                  throw new ServiceInterruption("Socket error: "+e.getMessage(),e,currentTime+300000L,currentTime+6*3600000L,-1,true);
+                }
+                catch (javax.net.ssl.SSLHandshakeException e)
+                {
+                  currentTime = System.currentTimeMillis();
+                  Logging.connectors.warn("Livelink: SSL handshake failed "+contextMsg+": "+e.getMessage(),e);
+                  resultCode = "SSLHANDSHAKEERROR";
+                  resultDescription = e.getMessage();
+                  throw new ServiceInterruption("SSL handshake error: "+e.getMessage(),e,currentTime+60000L,currentTime+300000L,-1,true);
+                }
+                catch (ConnectTimeoutException e)
+                {
+                  Logging.connectors.warn("Livelink: Connect timed out reading from the Livelink HTTP Server "+contextMsg+": "+e.getMessage(), e);
+                  resultCode = "CONNECTTIMEOUT";
+                  resultDescription = e.getMessage();
+                  currentTime = System.currentTimeMillis();
+                  throw new ServiceInterruption("Connect timed out: "+e.getMessage(),e,currentTime+300000L,currentTime+6*3600000L,-1,true);
+                }
+                catch (InterruptedIOException e)
+                {
+                  methodThread.interrupt();
+                  throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+                }
+                catch (HttpException e)
+                {
+                  resultCode = "EXCEPTION";
+                  resultDescription = e.getMessage();
+                  throw new ManifoldCFException("Exception getting response "+contextMsg+": "+e.getMessage(), e);
+                }
+                catch (IOException e)
+                {
+                  resultCode = "EXCEPTION";
+                  resultDescription = e.getMessage();
+                  throw new ManifoldCFException("Exception getting response "+contextMsg+": "+e.getMessage(), e);
+                }
+                finally
+                {
+                  if (methodThread != null)
+                  {
+                    methodThread.abort();
+                    if (!wasInterrupted)
+                    {
+                      try
+                      {
+                       methodThread.finishUp();
+                      }
+                      catch (InterruptedException e)
+                      {
+                        wasInterrupted = true;
+                        throw new ManifoldCFException(e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+                      }
+                    }
+                  }
+                }
+              }
+              else
+              {
+                if (Logging.connectors.isDebugEnabled())
+                  Logging.connectors.debug("Livelink: No fetch URI "+contextMsg+" - not ingesting");
+                resultCode = "NOURI";
+                return;
               }
             }
-            catch (java.net.SocketTimeoutException e)
+            else
             {
-              resultCode = "DATATIMEOUT";
-              resultDescription = e.getMessage();
-              currentTime = System.currentTimeMillis();
-              Logging.connectors.warn("Livelink: Livelink socket timed out ingesting from the Livelink HTTP Server "+contextMsg+": "+e.getMessage(), e);
-              throw new ServiceInterruption("Socket timed out: "+e.getMessage(),e,currentTime+300000L,currentTime+6*3600000L,-1,false);
+              // Use FetchVersion instead
+              long currentTime;
+              
+              // Fire up the document reading thread
+              DocumentReadingThread t = new DocumentReadingThread(vol,objID,0);
+              try 
+              {
+                t.start();
+                try
+                {
+                  InputStream is = t.getSafeInputStream();
+                  try 
+                  {
+                    // Can only index while background thread is running!
+                    rd.setBinary(is, dataSize);
+                    activities.ingestDocument(documentIdentifier, version, viewHttpAddress, rd);
+                  }
+                  finally
+                  {
+                    is.close();
+                  }
+                }
+                catch (ManifoldCFException e)
+                {
+                  if (e.getErrorCode() == ManifoldCFException.INTERRUPTED)
+                    wasInterrupted = true;
+                  throw e;
+                }
+                catch (java.net.SocketTimeoutException e)
+                {
+                  throw e;
+                }
+                catch (InterruptedIOException e)
+                {
+                  wasInterrupted = true;
+                  throw e;
+                }
+                finally
+                {
+                  if (!wasInterrupted)
+                    t.finishUp();
+                }
+
+                // No errors.  Record the fact that we made it.
+                resultCode = "OK";
+                readSize = dataSize;
+              }
+              catch (InterruptedException e) 
+              {
+                t.interrupt();
+                throw new ManifoldCFException("Interrupted: " + e.getMessage(), e,
+                  ManifoldCFException.INTERRUPTED);
+              }
+              catch (ConnectTimeoutException e)
+              {
+                Logging.connectors.warn("Livelink: Connect timed out "+contextMsg+": "+e.getMessage(), e);
+                resultCode = "CONNECTTIMEOUT";
+                resultDescription = e.getMessage();
+                currentTime = System.currentTimeMillis();
+                throw new ServiceInterruption("Connect timed out: "+e.getMessage(),e,currentTime+300000L,currentTime+6*3600000L,-1,true);
+              }
+              catch (InterruptedIOException e)
+              {
+                t.interrupt();
+                throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+              }
+              catch (IOException e)
+              {
+                resultCode = "EXCEPTION";
+                resultDescription = e.getMessage();
+                throw new ManifoldCFException("Exception getting response "+contextMsg+": "+e.getMessage(), e);
+              }
+              catch (ManifoldCFException e)
+              {
+                if (e.getErrorCode() != ManifoldCFException.INTERRUPTED)
+                {
+                  resultCode = "EXCEPTION";
+                  resultDescription = e.getMessage();
+                }
+                throw e;
+              }
+              catch (RuntimeException e)
+              {
+                resultCode = "EXCEPTION";
+                resultDescription = e.getMessage();
+                handleLivelinkRuntimeException(e,0,true);
+              }
             }
-            catch (java.net.SocketException e)
-            {
-              resultCode = "DATASOCKETERROR";
-              resultDescription = e.getMessage();
-              currentTime = System.currentTimeMillis();
-              Logging.connectors.warn("Livelink: Livelink socket error ingesting from the Livelink HTTP Server "+contextMsg+": "+e.getMessage(), e);
-              throw new ServiceInterruption("Socket error: "+e.getMessage(),e,currentTime+300000L,currentTime+6*3600000L,-1,false);
-            }
-            catch (javax.net.ssl.SSLHandshakeException e)
-            {
-              resultCode = "DATASSLHANDSHAKEERROR";
-              resultDescription = e.getMessage();
-              currentTime = System.currentTimeMillis();
-              Logging.connectors.warn("Livelink: SSL handshake failed authenticating "+contextMsg+": "+e.getMessage(),e);
-              throw new ServiceInterruption("SSL handshake error: "+e.getMessage(),e,currentTime+60000L,currentTime+300000L,-1,true);
-            }
-            catch (ConnectTimeoutException e)
-            {
-              resultCode = "CONNECTTIMEOUT";
-              resultDescription = e.getMessage();
-              currentTime = System.currentTimeMillis();
-              Logging.connectors.warn("Livelink: Livelink socket timed out connecting to the Livelink HTTP Server "+contextMsg+": "+e.getMessage(), e);
-              throw new ServiceInterruption("Connect timed out: "+e.getMessage(),e,currentTime+300000L,currentTime+6*3600000L,-1,false);
-            }
-            catch (InterruptedException e)
-            {
-              throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
-            }
-            catch (InterruptedIOException e)
-            {
-              throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
-            }
-            catch (HttpException e)
-            {
-              resultCode = "HTTPEXCEPTION";
-              resultDescription = e.getMessage();
-              // Treat unknown error ingesting data as a transient condition
-              currentTime = System.currentTimeMillis();
-              Logging.connectors.warn("Livelink: HTTP exception ingesting "+contextMsg+": "+e.getMessage(),e);
-              throw new ServiceInterruption("HTTP exception ingesting "+contextMsg+": "+e.getMessage(),e,currentTime+300000L,currentTime+6*3600000L,-1,false);
-            }
-            catch (IOException e)
-            {
-              resultCode = "DATAEXCEPTION";
-              resultDescription = e.getMessage();
-              // Treat unknown error ingesting data as a transient condition
-              currentTime = System.currentTimeMillis();
-              Logging.connectors.warn("Livelink: IO exception ingesting "+contextMsg+": "+e.getMessage(),e);
-              throw new ServiceInterruption("IO exception ingesting "+contextMsg+": "+e.getMessage(),e,currentTime+300000L,currentTime+6*3600000L,-1,false);
-            }
-            readSize = new Long(dataSize);
           }
           else
           {
+            // Document not indexable because of its length
+            resultDescription = "Document length ("+dataSize+") was rejected by output connector";
+            if (Logging.connectors.isDebugEnabled())
+              Logging.connectors.debug("Livelink: Excluding document "+documentIdentifier+" because its length ("+dataSize+") was rejected by output connector");
             resultCode = "DOCUMENTTOOLONG";
             activities.deleteDocument(documentIdentifier,version);
           }
         }
         else
         {
-          resultCode = "SESSIONLOGINFAILED";
+          // Document not indexable because of its mime type
+          resultDescription = "Mime type ("+mimeType+") was rejected by output connector";
+          if (Logging.connectors.isDebugEnabled())
+            Logging.connectors.debug("Livelink: Excluding document "+documentIdentifier+" because its mime type ("+mimeType+") was rejected by output connector");
+          resultCode = "MIMETYPEEXCLUSION";
           activities.deleteDocument(documentIdentifier,version);
         }
-        break;
-      case HttpStatus.SC_BAD_REQUEST:
-      case HttpStatus.SC_USE_PROXY:
-      case HttpStatus.SC_GONE:
-        resultCode = "ERROR "+Integer.toString(statusCode);
-        throw new ManifoldCFException("Unrecoverable request failure; error = "+Integer.toString(statusCode));
-      default:
-        resultCode = "UNKNOWN";
-        Logging.connectors.warn("Livelink: Attempt to retrieve document from '"+ingestHttpAddress+"' received a response of "+Integer.toString(statusCode)+"; retrying in one minute");
-        currentTime = System.currentTimeMillis();
-        throw new ServiceInterruption("Fetch failed; retrying in 1 minute",new ManifoldCFException("Fetch failed with unknown code "+Integer.toString(statusCode)),
-          currentTime+60000L,currentTime+600000L,-1,true);
       }
-    }
-    catch (InterruptedException e)
-    {
-      // Drop the connection on the floor
-      methodThread.interrupt();
-      methodThread = null;
-      throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
-    }
-    catch (java.net.SocketTimeoutException e)
-    {
-      Logging.connectors.warn("Livelink: Socket timed out reading from the Livelink HTTP Server "+contextMsg+": "+e.getMessage(), e);
-      resultCode = "TIMEOUT";
-      resultDescription = e.getMessage();
-      currentTime = System.currentTimeMillis();
-      throw new ServiceInterruption("Socket timed out: "+e.getMessage(),e,currentTime+300000L,currentTime+6*3600000L,-1,true);
-    }
-    catch (java.net.SocketException e)
-    {
-      Logging.connectors.warn("Livelink: Socket error reading from Livelink HTTP Server "+contextMsg+": "+e.getMessage(), e);
-      resultCode = "SOCKETERROR";
-      resultDescription = e.getMessage();
-      currentTime = System.currentTimeMillis();
-      throw new ServiceInterruption("Socket error: "+e.getMessage(),e,currentTime+300000L,currentTime+6*3600000L,-1,true);
-    }
-    catch (javax.net.ssl.SSLHandshakeException e)
-    {
-      currentTime = System.currentTimeMillis();
-      Logging.connectors.warn("Livelink: SSL handshake failed "+contextMsg+": "+e.getMessage(),e);
-      resultCode = "SSLHANDSHAKEERROR";
-      resultDescription = e.getMessage();
-      throw new ServiceInterruption("SSL handshake error: "+e.getMessage(),e,currentTime+60000L,currentTime+300000L,-1,true);
-    }
-    catch (ConnectTimeoutException e)
-    {
-      Logging.connectors.warn("Livelink: Connect timed out reading from the Livelink HTTP Server "+contextMsg+": "+e.getMessage(), e);
-      resultCode = "CONNECTTIMEOUT";
-      resultDescription = e.getMessage();
-      currentTime = System.currentTimeMillis();
-      throw new ServiceInterruption("Connect timed out: "+e.getMessage(),e,currentTime+300000L,currentTime+6*3600000L,-1,true);
-    }
-    catch (InterruptedIOException e)
-    {
-      throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
-    }
-    catch (HttpException e)
-    {
-      resultCode = "EXCEPTION";
-      resultDescription = e.getMessage();
-      throw new ManifoldCFException("Exception getting response "+contextMsg+": "+e.getMessage(), e);
-    }
-    catch (IOException e)
-    {
-      resultCode = "EXCEPTION";
-      resultDescription = e.getMessage();
-      throw new ManifoldCFException("Exception getting response "+contextMsg+": "+e.getMessage(), e);
+      else
+      {
+        // Document not ingestable due to URL
+        resultDescription = "URL ("+viewHttpAddress+") was rejected by output connector";
+        if (Logging.connectors.isDebugEnabled())
+          Logging.connectors.debug("Livelink: Excluding document "+documentIdentifier+" because its URL ("+viewHttpAddress+") was rejected by output connector");
+        resultCode = "URLEXCLUSION";
+        activities.deleteDocument(documentIdentifier,version);
+      }
     }
     finally
     {
-      if (methodThread != null)
-      {
-        methodThread.abort();
-        activities.recordActivity(new Long(startTime),ACTIVITY_FETCH,readSize,Integer.toString(objID),resultCode,resultDescription,null);
-        try
-        {
-         methodThread.finishUp();
-        }
-        catch (InterruptedException e)
-        {
-          throw new ManifoldCFException(e.getMessage(),e,ManifoldCFException.INTERRUPTED);
-        }
-      }
+      if (!wasInterrupted)
+        activities.recordActivity(new Long(startTime),ACTIVITY_FETCH,readSize,vol+":"+objID,resultCode,resultDescription,null);
     }
   }
 
@@ -4734,7 +5051,7 @@
           return null;
 
         String[] rval = new String[children.size()];
-        Enumeration en = children.enumerateValues();
+        LLValueEnumeration en = children.enumerateValues();
 
         int j = 0;
         while (en.hasMoreElements())
@@ -4955,7 +5272,7 @@
         if (children == null)
           return null;
         String[] rval = new String[children.size()];
-        Enumeration en = children.enumerateValues();
+        LLValueEnumeration en = children.enumerateValues();
 
         int j = 0;
         while (en.hasMoreElements())
@@ -5289,7 +5606,18 @@
     {
       return getVersionValue() != null;
     }
-    
+
+    /** Get data size.
+    */
+    public Long getDataSize()
+      throws ServiceInterruption, ManifoldCFException
+    {
+      LLValue elem = getVersionValue();
+      if (elem == null)
+        return null;
+      return new Long(elem.toInteger("FILEDATASIZE"));
+    }
+
     /** Get file name.
     */
     public String getFileName()
@@ -5446,7 +5774,6 @@
       return "(Volume: "+volumeID+", Object: "+objectID+")";
     }
     
-
     /**
     * Returns the object ID specified by the path name.
     * @param startPath is the folder name (a string with dots as separators)
@@ -5823,7 +6150,61 @@
     }
   }
 
-  
+  /** Thread we can abandon that lists all users (except admin).
+  */
+  protected class ListUsersThread extends Thread
+  {
+    protected LLValue rval = null;
+    protected Throwable exception = null;
+
+    public ListUsersThread()
+    {
+      super();
+      setDaemon(true);
+    }
+
+    public void run()
+    {
+      try
+      {
+        LLValue userList = new LLValue();
+        int status = LLUsers.ListUsers(userList);
+
+        if (Logging.connectors.isDebugEnabled())
+        {
+          Logging.connectors.debug("Livelink: User list retrieved: status="+Integer.toString(status));
+        }
+
+        if (status < 0)
+        {
+          Logging.connectors.debug("Livelink: User list inaccessible ("+llServer.getErrors()+")");
+          return;
+        }
+
+        if (status != 0)
+        {
+          throw new ManifoldCFException("Error retrieving user list: status="+Integer.toString(status)+" ("+llServer.getErrors()+")");
+        }
+        
+        rval = userList;
+      }
+      catch (Throwable e)
+      {
+        this.exception = e;
+      }
+    }
+
+    public Throwable getException()
+    {
+      return exception;
+    }
+
+    public LLValue getResponse()
+    {
+      return rval;
+    }
+  }
+
   /** Thread we can abandon that gets user information for a userID.
   */
   protected class GetUserInfoThread extends Thread
@@ -7080,6 +7461,79 @@
 
   }
 
+  /** This thread performs a LAPI FetchVersion command, streaming the resulting
+  * document back through an XThreadInputStream to the invoking thread.
+  */
+  protected class DocumentReadingThread extends Thread 
+  {
+
+    protected Throwable exception = null;
+    protected final int volumeID;
+    protected final int docID;
+    protected final int versionNumber;
+    protected final XThreadInputStream stream;
+    
+    public DocumentReadingThread(int volumeID, int docID, int versionNumber)
+    {
+      super();
+      this.volumeID = volumeID;
+      this.docID = docID;
+      this.versionNumber = versionNumber;
+      this.stream = new XThreadInputStream();
+      setDaemon(true);
+    }
+
+    @Override
+    public void run()
+    {
+      try
+      {
+        XThreadOutputStream outputStream = new XThreadOutputStream(stream);
+        try 
+        {
+          int status = LLDocs.FetchVersion(volumeID, docID, versionNumber, outputStream);
+          if (status != 0)
+          {
+            throw new ManifoldCFException("Error retrieving contents of document "+Integer.toString(volumeID)+":"+Integer.toString(docID)+" revision "+versionNumber+" : Status="+Integer.toString(status)+" ("+llServer.getErrors()+")");
+          }
+        }
+        finally
+        {
+          outputStream.close();
+        }
+      } catch (Throwable e) {
+        this.exception = e;
+      }
+    }
+
+    public InputStream getSafeInputStream() {
+      return stream;
+    }
+    
+    public void finishUp()
+      throws InterruptedException, ManifoldCFException
+    {
+      // This will be called during the finally
+      // block in the case where all is well (and
+      // the stream completed) and in the case where
+      // there were exceptions.
+      stream.abort();
+      join();
+      Throwable thr = exception;
+      if (thr != null) {
+        if (thr instanceof ManifoldCFException)
+          throw (ManifoldCFException) thr;
+        else if (thr instanceof RuntimeException)
+          throw (RuntimeException) thr;
+        else if (thr instanceof Error)
+          throw (Error) thr;
+        else
+          throw new RuntimeException("Unhandled exception of type: "+thr.getClass().getName(),thr);
+      }
+    }
+
+  }
+
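For readers of this patch, a minimal sketch of how the DocumentReadingThread handshake above is meant to be driven. It assumes the surrounding connector context (vol, objID, the java.io imports) and a hypothetical consumeStream helper; the real ingestion path earlier in this diff additionally records activity and result codes.

    DocumentReadingThread t = new DocumentReadingThread(vol, objID, 0);
    boolean wasInterrupted = false;
    t.start();
    try
    {
      InputStream is = t.getSafeInputStream();
      try
      {
        // Read while the background FetchVersion call is still streaming the document
        consumeStream(is);
      }
      finally
      {
        is.close();
      }
    }
    catch (InterruptedIOException e)
    {
      wasInterrupted = true;
      throw new ManifoldCFException(e.getMessage(), e, ManifoldCFException.INTERRUPTED);
    }
    finally
    {
      if (!wasInterrupted)
        t.finishUp();  // aborts the stream, joins the thread, and rethrows anything it captured
    }
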
   /** This thread does the actual socket communication with the server.
   * It's set up so that it can be abandoned at shutdown time.
   *
@@ -7105,6 +7559,7 @@
     protected HttpResponse response = null;
     protected Throwable responseException = null;
     protected XThreadInputStream threadStream = null;
+    protected InputStream bodyStream = null;
     protected boolean streamCreated = false;
     protected Throwable streamException = null;
     protected boolean abortThread = false;
@@ -7165,7 +7620,7 @@
               {
                 try
                 {
-                  InputStream bodyStream = response.getEntity().getContent();
+                  bodyStream = response.getEntity().getContent();
                   if (bodyStream != null)
                   {
                     threadStream = new XThreadInputStream(bodyStream);
@@ -7205,6 +7660,17 @@
         }
         finally
         {
+          if (bodyStream != null)
+          {
+            try
+            {
+              bodyStream.close();
+            }
+            catch (IOException e)
+            {
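+              // Failures closing the body stream are ignored; the connection is being torn down anyway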
+            }
+            bodyStream = null;
+          }
           synchronized (this)
           {
             try
diff --git a/connectors/livelink/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/livelink/common_en_US.properties b/connectors/livelink/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/livelink/common_en_US.properties
index c27079e..baecde2 100644
--- a/connectors/livelink/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/livelink/common_en_US.properties
+++ b/connectors/livelink/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/livelink/common_en_US.properties
@@ -88,7 +88,7 @@
 LivelinkConnector.Include=Include
 LivelinkConnector.Exclude=Exclude
 LivelinkConnector.SecurityColon=Security:
-LivelinkConnector.Enabled=Enabled&nbsp;
+LivelinkConnector.Enabled=Enabled
 LivelinkConnector.Disabled=Disabled
 LivelinkConnector.DeleteToken=Delete token #
 LivelinkConnector.AddAccessToken=Add access token
@@ -143,3 +143,9 @@
 LivelinkConnector.TheServerCgiPathMustBeginWithACharacter=The server CGI path must begin with a '/' character
 LivelinkConnector.Delete=Delete
 LivelinkConnector.Add=Add
+LivelinkConnector.CrawlUserWorkspaces=Crawl user workspaces?
+
+LivelinkConnector.EnterTheViewCgiPathToLivelink=Enter the view CGI path to Livelink
+LivelinkConnector.TheIngestCgiPathMustBeBlankOrBeginWithACharacter=The ingestion CGI path must be blank or begin with a '/' character
+LivelinkConnector.UseLAPI=Use LAPI
+LivelinkConnector.SelectAViewProtocol=Select a view protocol
diff --git a/connectors/livelink/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/livelink/common_ja_JP.properties b/connectors/livelink/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/livelink/common_ja_JP.properties
index 6870e95..d179f83 100644
--- a/connectors/livelink/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/livelink/common_ja_JP.properties
+++ b/connectors/livelink/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/livelink/common_ja_JP.properties
@@ -143,3 +143,9 @@
 LivelinkConnector.TheServerCgiPathMustBeginWithACharacter=The server CGI path must begin with a '/' character
 LivelinkConnector.Delete=削除
 LivelinkConnector.Add=追加
+LivelinkConnector.CrawlUserWorkspaces=Crawl user workspaces?
+
+LivelinkConnector.EnterTheViewCgiPathToLivelink=Enter the view CGI path to Livelink
+LivelinkConnector.TheIngestCgiPathMustBeBlankOrBeginWithACharacter=The ingestion CGI path must be blank or begin with a '/' character
+LivelinkConnector.UseLAPI=Use LAPI
+LivelinkConnector.SelectAViewProtocol=Select a view protocol
diff --git a/connectors/meridio/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/meridio/CommonsHTTPSender.java b/connectors/meridio/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/meridio/CommonsHTTPSender.java
index 6ad6e91..65c643e 100644
--- a/connectors/meridio/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/meridio/CommonsHTTPSender.java
+++ b/connectors/meridio/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/meridio/CommonsHTTPSender.java
@@ -614,6 +614,7 @@
     protected HttpResponse response = null;
     protected Throwable responseException = null;
     protected XThreadInputStream threadStream = null;
+    protected InputStream bodyStream = null;
     protected String charSet = null;
     protected boolean streamCreated = false;
     protected Throwable streamException = null;
@@ -676,7 +677,7 @@
                 try
                 {
                   HttpEntity entity = response.getEntity();
-                  InputStream bodyStream = entity.getContent();
+                  bodyStream = entity.getContent();
                   if (bodyStream != null)
                   {
                     threadStream = new XThreadInputStream(bodyStream);
@@ -717,6 +718,17 @@
         }
         finally
         {
+          if (bodyStream != null)
+          {
+            try
+            {
+              bodyStream.close();
+            }
+            catch (IOException e)
+            {
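+              // Failures closing the body stream are ignored; the connection is being torn down anyway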
+            }
+            bodyStream = null;
+          }
           synchronized (this)
           {
             try
diff --git a/connectors/meridio/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/meridio/MeridioAuthority.java b/connectors/meridio/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/meridio/MeridioAuthority.java
index e221bef..175ba37 100644
--- a/connectors/meridio/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/meridio/MeridioAuthority.java
+++ b/connectors/meridio/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/meridio/MeridioAuthority.java
@@ -73,15 +73,6 @@
   
   final private static int MANAGE_DOCUMENT_PRIVILEGE = 17;
 
-  /** Deny access token for Meridio.  All tokens begin with "U" or with "G", except the blanket "READ_ALL" that I create.
-  * However, we currently have code in the field, so I will continue ot use "DEAD_AUTHORITY" for that reason.
-  */
-  private final static String denyToken = "DEAD_AUTHORITY";
-
-  private final static AuthorizationResponse unreachableResponse = new AuthorizationResponse(new String[]{denyToken},AuthorizationResponse.RESPONSE_UNREACHABLE);
-  private final static AuthorizationResponse userNotFoundResponse = new AuthorizationResponse(new String[]{denyToken},AuthorizationResponse.RESPONSE_USERNOTFOUND);
-
-
   /** Constructor.
   */
   public MeridioAuthority() {}
@@ -551,7 +542,7 @@
         {
           if (Logging.authorityConnectors.isDebugEnabled())
             Logging.authorityConnectors.debug("Meridio: User '" + userName + "' does not exist");
-          return userNotFoundResponse;
+          return RESPONSE_USERNOTFOUND;
         }
         if (Logging.authorityConnectors.isDebugEnabled())
           Logging.authorityConnectors.debug("Meridio: Found user - the User Id for '" + userName +
@@ -677,7 +668,7 @@
   @Override
   public AuthorizationResponse getDefaultAuthorizationResponse(String userName)
   {
-    return unreachableResponse;
+    return RESPONSE_UNREACHABLE;
   }
   
   // UI support methods.
@@ -946,6 +937,8 @@
     String password = parameters.getObfuscatedParameter("Password");
     if (password == null)
       password = "";
+    else
+      password = out.mapPasswordToKey(password);
 
     String meridioKeystore = parameters.getParameter("MeridioKeystore");
     IKeystoreManager localKeystore;
@@ -1289,7 +1282,7 @@
 
     String password = variableContext.getParameter("password");
     if (password != null)
-      parameters.setObfuscatedParameter("Password",password);
+      parameters.setObfuscatedParameter("Password",variableContext.mapKeyToPassword(password));
 
     String configOp = variableContext.getParameter("configop");
     if (configOp != null)
diff --git a/connectors/meridio/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/meridio/MeridioConnector.java b/connectors/meridio/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/meridio/MeridioConnector.java
index 0f2db37..7a4e482 100644
--- a/connectors/meridio/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/meridio/MeridioConnector.java
+++ b/connectors/meridio/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/meridio/MeridioConnector.java
@@ -71,7 +71,7 @@
   protected String urlVersionBase = null;
 
   /** Deny access token for Meridio */
-  private final static String denyToken = "DEAD_AUTHORITY";
+  private final static String denyToken = GLOBAL_DENY_TOKEN;
 
   /** Deny access token for Active Directory, which is what we expect to be in place for forced acls */
   private final static String defaultAuthorityDenyToken = "DEAD_AUTHORITY";
@@ -1690,6 +1690,8 @@
     String password = parameters.getObfuscatedParameter("Password");
     if (password == null)
       password = "";
+    else
+      password = out.mapPasswordToKey(password);
 
     String webClientProtocol = parameters.getParameter("MeridioWebClientProtocol");
     if (webClientProtocol == null)
@@ -2007,7 +2009,7 @@
 
     String password = variableContext.getParameter("password");
     if (password != null)
-      parameters.setObfuscatedParameter("Password",password);
+      parameters.setObfuscatedParameter("Password",variableContext.mapKeyToPassword(password));
 
     String webClientProtocol = variableContext.getParameter("webClientProtocol");
     if (webClientProtocol != null)
diff --git a/connectors/nullauthority/pom.xml b/connectors/nullauthority/pom.xml
index b6c3428..289a864 100644
--- a/connectors/nullauthority/pom.xml
+++ b/connectors/nullauthority/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.manifoldcf</groupId>
     <artifactId>mcf-connectors</artifactId>
-    <version>1.2-SNAPSHOT</version>
+    <version>1.5-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
 
@@ -30,14 +30,22 @@
   <build>
     <sourceDirectory>${basedir}/connector/src/main/java</sourceDirectory>
     <testSourceDirectory>${basedir}/connector/src/test/java</testSourceDirectory>
+    <resources>
+      <resource>
+        <directory>${basedir}/connector/src/main/native2ascii</directory>
+        <includes>
+          <include>**/*.properties</include>
+        </includes>
+      </resource>
+    </resources> 
+
     <plugins>
       <plugin>
         <groupId>org.codehaus.mojo</groupId>
         <artifactId>native2ascii-maven-plugin</artifactId>
-        <version>1.0-alpha-1</version>
+        <version>1.0-beta-1</version>
         <configuration>
-            <dest>target/classes</dest>
-            <src>connector/src/main/native2ascii</src>
+            <workDir>target/classes</workDir>
         </configuration>
         <executions>
             <execution>
@@ -47,7 +55,9 @@
                 </goals>
                 <configuration>
                     <encoding>UTF8</encoding>
-                    <includes>**/*.properties</includes>
+                    <includes>
+                      <include>**/*.properties</include>
+                    </includes>
                 </configuration>
             </execution>
         </executions>
diff --git a/connectors/nulloutput/connector/src/main/java/org/apache/manifoldcf/agents/output/nullconnector/NullConnector.java b/connectors/nulloutput/connector/src/main/java/org/apache/manifoldcf/agents/output/nullconnector/NullConnector.java
index f8587ab..973f0ce 100644
--- a/connectors/nulloutput/connector/src/main/java/org/apache/manifoldcf/agents/output/nullconnector/NullConnector.java
+++ b/connectors/nulloutput/connector/src/main/java/org/apache/manifoldcf/agents/output/nullconnector/NullConnector.java
@@ -35,6 +35,8 @@
   public final static String INGEST_ACTIVITY = "document ingest";
   /** Document removal activity */
   public final static String REMOVE_ACTIVITY = "document deletion";
+  /** Job notify activity */
+  public final static String JOB_COMPLETE_ACTIVITY = "output notification";
 
   /** Constructor.
   */
@@ -48,7 +50,7 @@
   @Override
   public String[] getActivitiesList()
   {
-    return new String[]{INGEST_ACTIVITY,REMOVE_ACTIVITY};
+    return new String[]{INGEST_ACTIVITY,REMOVE_ACTIVITY,JOB_COMPLETE_ACTIVITY};
   }
 
   /** Connect.
@@ -153,5 +155,16 @@
     activities.recordActivity(null,REMOVE_ACTIVITY,null,documentURI,"OK",null);
   }
 
+  /** Notify the connector of a completed job.
+  * This is meant to allow the connector to flush any internal data structures it has been keeping around, or to tell the output repository that this
+  * is a good time to synchronize things.  It is called whenever a job is either completed or aborted.
+  *@param activities is the handle to an object that the implementer of an output connector may use to perform operations, such as logging processing activity.
+  */
+  @Override
+  public void noteJobComplete(IOutputNotifyActivity activities)
+    throws ManifoldCFException, ServiceInterruption
+  {
+    activities.recordActivity(null,JOB_COMPLETE_ACTIVITY,null,"","OK",null);
+  }
 
 }
diff --git a/connectors/nulloutput/pom.xml b/connectors/nulloutput/pom.xml
index c1c7651..b071e39 100644
--- a/connectors/nulloutput/pom.xml
+++ b/connectors/nulloutput/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.manifoldcf</groupId>
     <artifactId>mcf-connectors</artifactId>
-    <version>1.2-SNAPSHOT</version>
+    <version>1.5-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
 
@@ -30,14 +30,22 @@
   <build>
     <sourceDirectory>${basedir}/connector/src/main/java</sourceDirectory>
     <testSourceDirectory>${basedir}/connector/src/test/java</testSourceDirectory>
+    <resources>
+      <resource>
+        <directory>${basedir}/connector/src/main/native2ascii</directory>
+        <includes>
+          <include>**/*.properties</include>
+        </includes>
+      </resource>
+    </resources> 
+
     <plugins>
       <plugin>
         <groupId>org.codehaus.mojo</groupId>
         <artifactId>native2ascii-maven-plugin</artifactId>
-        <version>1.0-alpha-1</version>
+        <version>1.0-beta-1</version>
         <configuration>
-            <dest>target/classes</dest>
-            <src>connector/src/main/native2ascii</src>
+            <workDir>target/classes</workDir>
         </configuration>
         <executions>
             <execution>
@@ -47,7 +55,9 @@
                 </goals>
                 <configuration>
                     <encoding>UTF8</encoding>
-                    <includes>**/*.properties</includes>
+                    <includes>
+                      <include>**/*.properties</include>
+                    </includes>
                 </configuration>
             </execution>
         </executions>
diff --git a/connectors/opensearchserver/connector/src/main/java/org/apache/manifoldcf/agents/output/opensearchserver/OpenSearchServerConnector.java b/connectors/opensearchserver/connector/src/main/java/org/apache/manifoldcf/agents/output/opensearchserver/OpenSearchServerConnector.java
index b533c81..7576a01 100644
--- a/connectors/opensearchserver/connector/src/main/java/org/apache/manifoldcf/agents/output/opensearchserver/OpenSearchServerConnector.java
+++ b/connectors/opensearchserver/connector/src/main/java/org/apache/manifoldcf/agents/output/opensearchserver/OpenSearchServerConnector.java
@@ -199,6 +199,16 @@
     }
   }
 
+  /** This method is called to assess whether this connector instance should
+  * actually be counted as being connected.
+  *@return true if the connector instance is actually connected.
+  */
+  @Override
+  public boolean isConnected()
+  {
+    return connectionManager != null;
+  }
+
   @Override
   public String[] getActivitiesList() {
     return OPENSEARCHSERVER_ACTIVITIES;
@@ -340,9 +350,7 @@
   @Override
   public boolean checkDocumentIndexable(String outputDescription, File localFile)
       throws ManifoldCFException, ServiceInterruption {
-    OpenSearchServerSpecs specs = getSpecsCache(outputDescription);
-    return specs
-        .checkExtension(FilenameUtils.getExtension(localFile.getName()));
+    return true;
   }
 
   @Override
@@ -352,6 +360,19 @@
     return specs.checkMimeType(mimeType);
   }
 
+  /** Pre-determine whether a document's URL is indexable by this connector.  This method is used by participating repository connectors
+  * to help filter out documents that are not worth indexing.
+  *@param outputDescription is the document's output version.
+  *@param url is the URL of the document.
+  *@return true if the file is indexable.
+  */
+  @Override
+  public boolean checkURLIndexable(String outputDescription, String url)
+    throws ManifoldCFException, ServiceInterruption {
+    OpenSearchServerSpecs specs = getSpecsCache(outputDescription);
+    return specs.checkExtension(FilenameUtils.getExtension(url));
+  }
+    
   @Override
   public void viewConfiguration(IThreadContext threadContext, IHTTPOutput out,
       Locale locale, ConfigParams parameters) throws ManifoldCFException, IOException {
diff --git a/connectors/opensearchserver/connector/src/main/java/org/apache/manifoldcf/agents/output/opensearchserver/OpenSearchServerSpecs.java b/connectors/opensearchserver/connector/src/main/java/org/apache/manifoldcf/agents/output/opensearchserver/OpenSearchServerSpecs.java
index 1914bdf..f0ff202 100644
--- a/connectors/opensearchserver/connector/src/main/java/org/apache/manifoldcf/agents/output/opensearchserver/OpenSearchServerSpecs.java
+++ b/connectors/opensearchserver/connector/src/main/java/org/apache/manifoldcf/agents/output/opensearchserver/OpenSearchServerSpecs.java
@@ -145,10 +145,15 @@
   }
 
   public boolean checkExtension(String extension) {
+    if (extension == null || extension.length() == 0)
+      // Special character to match - see CONNECTORS-707
+      extension = ".";
     return extensionSet.contains(extension);
   }
 
   public boolean checkMimeType(String mimeType) {
+    if (mimeType == null)
+      mimeType = "application/unknown";
     return mimeTypeSet.contains(mimeType);
   }
 }
diff --git a/connectors/opensearchserver/pom.xml b/connectors/opensearchserver/pom.xml
index f7e307a..5bf2b38 100644
--- a/connectors/opensearchserver/pom.xml
+++ b/connectors/opensearchserver/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.manifoldcf</groupId>
     <artifactId>mcf-connectors</artifactId>
-    <version>1.2-SNAPSHOT</version>
+    <version>1.5-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
   
@@ -42,16 +42,26 @@
     <resources>
       <resource>
         <directory>${basedir}/connector/src/main/resources</directory>
+        <includes>
+          <include>**/*.html</include>
+          <include>**/*.js</include>
+        </includes>
       </resource>
-    </resources>
+      <resource>
+        <directory>${basedir}/connector/src/main/native2ascii</directory>
+        <includes>
+          <include>**/*.properties</include>
+        </includes>
+      </resource>
+    </resources> 
+
     <plugins>
       <plugin>
         <groupId>org.codehaus.mojo</groupId>
         <artifactId>native2ascii-maven-plugin</artifactId>
-        <version>1.0-alpha-1</version>
+        <version>1.0-beta-1</version>
         <configuration>
-            <dest>target/classes</dest>
-            <src>connector/src/main/native2ascii</src>
+            <workDir>target/classes</workDir>
         </configuration>
         <executions>
             <execution>
@@ -61,7 +71,9 @@
                 </goals>
                 <configuration>
                     <encoding>UTF8</encoding>
-                    <includes>**/*.properties</includes>
+                    <includes>
+                      <include>**/*.properties</include>
+                    </includes>
                 </configuration>
             </execution>
         </executions>
@@ -93,7 +105,7 @@
     <dependency>
       <groupId>org.apache.httpcomponents</groupId>
       <artifactId>httpclient</artifactId>
-      <version>${httpcomponent.version}</version>
+      <version>${httpcomponent.httpclient.version}</version>
     </dependency>
     <dependency>
       <groupId>commons-logging</groupId>
diff --git a/connectors/pom.xml b/connectors/pom.xml
index 75a3245..44ecdfd 100644
--- a/connectors/pom.xml
+++ b/connectors/pom.xml
@@ -20,13 +20,13 @@
   <parent>
     <groupId>org.apache.manifoldcf</groupId>
     <artifactId>mcf-parent</artifactId>
-    <version>1.2-SNAPSHOT</version>
+    <version>1.5-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
 
   <groupId>org.apache.manifoldcf</groupId>
   <artifactId>mcf-connectors</artifactId>
-  <version>1.2-SNAPSHOT</version>
+  <version>1.5-SNAPSHOT</version>
 
   <name>ManifoldCF - Connectors</name>
   <packaging>pom</packaging>
@@ -36,6 +36,7 @@
     <module>activedirectory</module>
     <module>filesystem</module>
     <module>gts</module>
+    <module>hdfs</module>
     <module>jcifs</module>
     <module>jdbc</module>
     <module>ldap</module>
@@ -50,6 +51,12 @@
     <module>wiki</module>
     <module>alfresco</module>
     <module>elasticsearch</module>
+    <module>dropbox</module>
+    <module>googledrive</module>
+    <module>jira</module>
+    <module>generic</module>
+    <module>regexpmapper</module>
+    <module>email</module>
   </modules>
 
 </project>
diff --git a/connectors/regexpmapper/build.xml b/connectors/regexpmapper/build.xml
new file mode 100644
index 0000000..c5b4f4f
--- /dev/null
+++ b/connectors/regexpmapper/build.xml
@@ -0,0 +1,23 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<project name="regexpmapper" default="all">
+
+    <import file="../connector-build.xml"/>
+    
+    
+</project>
diff --git a/connectors/regexpmapper/connector/src/main/java/org/apache/manifoldcf/authorities/mappers/regexp/MatchMap.java b/connectors/regexpmapper/connector/src/main/java/org/apache/manifoldcf/authorities/mappers/regexp/MatchMap.java
new file mode 100644
index 0000000..8346234
--- /dev/null
+++ b/connectors/regexpmapper/connector/src/main/java/org/apache/manifoldcf/authorities/mappers/regexp/MatchMap.java
@@ -0,0 +1,587 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.authorities.mappers.regexp;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import java.util.*;
+import java.util.regex.*;
+
+/** An instance of this class describes a "match map", which describes a translation of an input
+* string using regexp technology.
+* A match map consists of multiple clauses, which are fired in sequence.  Each clause is a regexp
+* search and replace, where the replace string can include references to the groups present in the
+* search regexp.
+* MatchMaps can be converted to strings in two different ways.  The first way is to build a single
+* string of the form "match1=replace1&match2=replace2...".  Strings of this kind must escape & and =
+* characters in the match and replace strings, where found.  The second way is to generate an array
+* of match strings and a corresponding array of replace strings.  This method requires no escaping
+* of the string contents.
+*/
+public class MatchMap
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  /** This is the set of match regexp strings */
+  protected List<String> matchStrings;
+  /** This is the set of Pattern objects corresponding to the match regexp strings.
+  * It's null if the patterns have not been built yet. */
+  protected Pattern[] matchPatterns = null;
+  /** This is the set of replace strings */
+  protected List<String> replaceStrings;
+
+  /** Constructor.  Build an empty matchmap. */
+  public MatchMap()
+  {
+    matchStrings = new ArrayList<String>();
+    replaceStrings = new ArrayList<String>();
+  }
+
+  /** Constructor.  Build a matchmap from a single string. */
+  public MatchMap(String stringForm)
+  {
+    matchStrings = new ArrayList<String>();
+    replaceStrings = new ArrayList<String>();
+    StringBuilder matchString = new StringBuilder();
+    StringBuilder replaceString = new StringBuilder();
+    int i = 0;
+    while (i < stringForm.length())
+    {
+      matchString.setLength(0);
+      replaceString.setLength(0);
+      while (i < stringForm.length())
+      {
+        char x = stringForm.charAt(i);
+        if (x == '&' || x == '=')
+          break;
+        i++;
+        if (x == '\\' && i < stringForm.length())
+          x = stringForm.charAt(i++);
+        matchString.append(x);
+      }
+
+      if (i < stringForm.length())
+      {
+        char x = stringForm.charAt(i);
+        if (x == '=')
+        {
+          i++;
+          // Pick up the second string
+          while (i < stringForm.length())
+          {
+            x = stringForm.charAt(i);
+            if (x == '&')
+              break;
+            i++;
+            if (x == '\\' && i < stringForm.length())
+              x = stringForm.charAt(i++);
+            replaceString.append(x);
+          }
+        }
+      }
+
+      matchStrings.add(matchString.toString());
+      replaceStrings.add(replaceString.toString());
+
+      if (i < stringForm.length())
+      {
+        char x = stringForm.charAt(i);
+        if (x == '&')
+          i++;
+      }
+    }
+  }
+
+  /** Constructor.  Build a matchmap from two lists representing match and replace strings */
+  public MatchMap(List<String> matchStrings, List<String> replaceStrings)
+  {
+    this.matchStrings = matchStrings;
+    this.replaceStrings = replaceStrings;
+  }
+
+  /** Get the number of match/replace strings */
+  public int getMatchCount()
+  {
+    return matchStrings.size();
+  }
+
+  /** Get a specific match string */
+  public String getMatchString(int index)
+  {
+    return matchStrings.get(index);
+  }
+
+  /** Get a specific replace string */
+  public String getReplaceString(int index)
+  {
+    return replaceStrings.get(index);
+  }
+
+  /** Delete a specified match/replace string pair */
+  public void deleteMatchPair(int index)
+  {
+    matchStrings.remove(index);
+    replaceStrings.remove(index);
+    matchPatterns = null;
+  }
+
+  /** Insert a match/replace string pair */
+  public void insertMatchPair(int index, String match, String replace)
+  {
+    matchStrings.add(index,match);
+    replaceStrings.add(index,replace);
+    matchPatterns = null;
+  }
+
+  /** Append a match/replace string pair */
+  public void appendMatchPair(String match, String replace)
+  {
+    matchStrings.add(match);
+    replaceStrings.add(replace);
+    matchPatterns = null;
+  }
+
+  /** Append old-style match/replace pair.
+  * This method translates old-style regexp and group output form to the
+  * current style before adding to the map.
+  */
+  public void appendOldstyleMatchPair(String oldstyleMatch, String oldstyleReplace)
+  {
+    String newStyleMatch = "^" + oldstyleMatch + "$";
+
+    // Need to build a new-style replace string from the old one.  To do that, use the
+    // original parser (which basically will guarantee that we get it right)
+
+    EvaluatorTokenStream et = new EvaluatorTokenStream(oldstyleReplace);
+    StringBuilder newStyleReplace = new StringBuilder();
+
+    while (true)
+    {
+      EvaluatorToken t = et.peek();
+      if (t == null)
+        break;
+      switch (t.getType())
+      {
+      case EvaluatorToken.TYPE_COMMA:
+        et.advance();
+        break;
+      case EvaluatorToken.TYPE_GROUP:
+        et.advance();
+        int groupNumber = t.getGroupNumber();
+        switch (t.getGroupStyle())
+        {
+        case EvaluatorToken.GROUPSTYLE_NONE:
+          newStyleReplace.append("$(").append(Integer.toString(groupNumber)).append(")");
+          break;
+        case EvaluatorToken.GROUPSTYLE_LOWER:
+          newStyleReplace.append("$(").append(Integer.toString(groupNumber)).append("l)");
+          break;
+        case EvaluatorToken.GROUPSTYLE_UPPER:
+          newStyleReplace.append("$(").append(Integer.toString(groupNumber)).append("u)");
+          break;
+        case EvaluatorToken.GROUPSTYLE_MIXED:
+          newStyleReplace.append("$(").append(Integer.toString(groupNumber)).append("m)");
+          break;
+        default:
+          break;
+        }
+        break;
+      case EvaluatorToken.TYPE_TEXT:
+        et.advance();
+        escape(newStyleReplace,t.getTextValue());
+        break;
+      default:
+        break;
+      }
+    }
+
+    appendMatchPair(newStyleMatch,newStyleReplace.toString());
+  }
+
+  /** Escape a string so it is verbatim */
+  protected static void escape(StringBuilder output, String input)
+  {
+    int i = 0;
+    while (i < input.length())
+    {
+      char x = input.charAt(i++);
+      if (x == '$')
+        output.append(x);
+      output.append(x);
+    }
+  }
+
+  /** Convert the matchmap to string form. */
+  public String toString()
+  {
+    int i = 0;
+    StringBuilder rval = new StringBuilder();
+    while (i < matchStrings.size())
+    {
+      String matchString = matchStrings.get(i);
+      String replaceString = replaceStrings.get(i);
+      if (i > 0)
+        rval.append('&');
+      stuff(rval,matchString);
+      rval.append('=');
+      stuff(rval,replaceString);
+      i++;
+    }
+    return rval.toString();
+  }
+
+  /** Stuff characters */
+  protected static void stuff(StringBuilder sb, String value)
+  {
+    int i = 0;
+    while (i < value.length())
+    {
+      char x = value.charAt(i++);
+      if (x == '\\' || x == '&' || x == '=')
+        sb.append('\\');
+      sb.append(x);
+    }
+  }
+
+  /** Perform a translation.
+  */
+  public String translate(String input)
+    throws ManifoldCFException
+  {
+    // Build pattern vector if not already there
+    if (matchPatterns == null)
+    {
+      matchPatterns = new Pattern[matchStrings.size()];
+      int i = 0;
+      while (i < matchPatterns.length)
+      {
+        String regexp = matchStrings.get(i);
+        try
+        {
+          matchPatterns[i] = Pattern.compile(regexp);
+        }
+        catch (java.util.regex.PatternSyntaxException e)
+        {
+          matchPatterns = null;
+          throw new ManifoldCFException("For match expression '"+regexp+"', found pattern syntax error: "+e.getMessage(),e);
+        }
+        i++;
+      }
+    }
+
+    int j = 0;
+    while (j < matchPatterns.length)
+    {
+      Pattern p = matchPatterns[j];
+      // Construct a matcher
+      Matcher m = p.matcher(input);
+      // Grab the output description
+      String outputDescription = replaceStrings.get(j);
+      j++;
+      // Create a copy buffer
+      StringBuilder outputBuffer = new StringBuilder();
+      // Keep track of how far into the original string we have copied so far
+      int currentIndex = 0;
+      // Scan the string using find, and for each one found, do a translation
+      while (true)
+      {
+        boolean foundOne = m.find();
+        if (foundOne == false)
+        {
+          // No subsequent match found.
+          // Copy everything from currentIndex until the end of input
+          outputBuffer.append(input.substring(currentIndex));
+          break;
+        }
+
+        // Do a translation.  This involves copying everything in the input
+        // string up until the start of the match, then doing a replace for
+        // the match itself, and finally setting the currentIndex to the end
+        // of the match.
+
+        int matchStart = m.start(0);
+        int matchEnd = m.end(0);
+        if (matchStart == -1)
+        {
+          // The expression was degenerate; treat this as the end.
+          outputBuffer.append(input.substring(currentIndex));
+          break;
+        }
+        outputBuffer.append(input.substring(currentIndex,matchStart));
+
+        // Process translation description!
+        int i = 0;
+        while (i < outputDescription.length())
+        {
+          char x = outputDescription.charAt(i++);
+          if (x == '$' && i < outputDescription.length())
+          {
+            x = outputDescription.charAt(i++);
+            if (x == '(')
+            {
+              // Process evaluation expression
+              StringBuilder numberBuf = new StringBuilder();
+              boolean upper = false;
+              boolean lower = false;
+              boolean mixed = false;
+              while (i < outputDescription.length())
+              {
+                char y = outputDescription.charAt(i++);
+                if (y == ')')
+                  break;
+                else if (y >= '0' && y <= '9')
+                  numberBuf.append(y);
+                else if (y == 'u' || y == 'U')
+                  upper = true;
+                else if (y == 'l' || y == 'L')
+                  lower = true;
+                else if (y == 'm' || y == 'M')
+                  mixed = true;
+              }
+              String number = numberBuf.toString();
+              try
+              {
+                int groupnum = Integer.parseInt(number);
+                String groupValue = m.group(groupnum);
+                if (upper)
+                  outputBuffer.append(groupValue.toUpperCase());
+                else if (lower)
+                  outputBuffer.append(groupValue.toLowerCase());
+                else if (mixed && groupValue.length() > 0)
+                  outputBuffer.append(groupValue.substring(0,1).toUpperCase()).append(groupValue.substring(1).toLowerCase());
+                else
+                  outputBuffer.append(groupValue);
+
+              }
+              catch (NumberFormatException e)
+              {
+                // Silently skip, because it's an illegal group number, so nothing
+                // gets added.
+              }
+
+              // Go back around, so we don't add the $ in
+              continue;
+            }
+          }
+          outputBuffer.append(x);
+        }
+
+        currentIndex = matchEnd;
+      }
+
+      input = outputBuffer.toString();
+    }
+
+    return input;
+  }
+
+
+  // Protected classes
+
+  // These classes are used to process the old token-based replacement strings
+
+  /** Evaluator token.
+  */
+  protected static class EvaluatorToken
+  {
+    public final static int TYPE_GROUP = 0;
+    public final static int TYPE_TEXT = 1;
+    public final static int TYPE_COMMA = 2;
+
+    public final static int GROUPSTYLE_NONE = 0;
+    public final static int GROUPSTYLE_LOWER = 1;
+    public final static int GROUPSTYLE_UPPER = 2;
+    public final static int GROUPSTYLE_MIXED = 3;
+
+    protected int type;
+    protected int groupNumber = -1;
+    protected int groupStyle = GROUPSTYLE_NONE;
+    protected String textValue = null;
+
+    public EvaluatorToken()
+    {
+      type = TYPE_COMMA;
+    }
+
+    public EvaluatorToken(int groupNumber, int groupStyle)
+    {
+      type = TYPE_GROUP;
+      this.groupNumber = groupNumber;
+      this.groupStyle = groupStyle;
+    }
+
+    public EvaluatorToken(String text)
+    {
+      type = TYPE_TEXT;
+      this.textValue = text;
+    }
+
+    public int getType()
+    {
+      return type;
+    }
+
+    public int getGroupNumber()
+    {
+      return groupNumber;
+    }
+
+    public int getGroupStyle()
+    {
+      return groupStyle;
+    }
+
+    public String getTextValue()
+    {
+      return textValue;
+    }
+
+  }
+
+
+  /** Token stream.
+  */
+  protected static class EvaluatorTokenStream
+  {
+    protected String text;
+    protected int pos;
+    protected EvaluatorToken token = null;
+
+    /** Constructor.
+    */
+    public EvaluatorTokenStream(String text)
+    {
+      this.text = text;
+      this.pos = 0;
+    }
+
+    /** Get current token.
+    */
+    public EvaluatorToken peek()
+    {
+      if (token == null)
+      {
+        token = nextToken();
+      }
+      return token;
+    }
+
+    /** Go on to next token.
+    */
+    public void advance()
+    {
+      token = null;
+    }
+
+    protected EvaluatorToken nextToken()
+    {
+      char x;
+      // Fetch the next token
+      while (true)
+      {
+        if (pos == text.length())
+          return null;
+        x = text.charAt(pos);
+        if (x > ' ')
+          break;
+        pos++;
+      }
+
+      StringBuilder sb;
+
+      if (x == '"')
+      {
+        // Parse text
+        pos++;
+        sb = new StringBuilder();
+        while (true)
+        {
+          if (pos == text.length())
+            break;
+          x = text.charAt(pos);
+          pos++;
+          if (x == '"')
+          {
+            break;
+          }
+          if (x == '\\')
+          {
+            if (pos == text.length())
+              break;
+            x = text.charAt(pos++);
+          }
+          sb.append(x);
+        }
+
+        return new EvaluatorToken(sb.toString());
+      }
+
+      if (x == ',')
+      {
+        pos++;
+        return new EvaluatorToken();
+      }
+
+      // Eat number at beginning
+      sb = new StringBuilder();
+      while (true)
+      {
+        if (pos == text.length())
+          break;
+        x = text.charAt(pos);
+        if (x >= '0' && x <= '9')
+        {
+          sb.append(x);
+          pos++;
+          continue;
+        }
+        break;
+      }
+      String numberValue = sb.toString();
+      int groupNumber = 0;
+      if (numberValue.length() > 0)
+        groupNumber = new Integer(numberValue).intValue();
+      // Save the next char position
+      int modifierPos = pos;
+      // Go to the end of the word
+      while (true)
+      {
+        if (pos == text.length())
+          break;
+        x = text.charAt(pos);
+        if (x == ',' || x >= '0' && x <= '9' || x <= ' ' && x >= 0)
+          break;
+        pos++;
+      }
+
+      int style = EvaluatorToken.GROUPSTYLE_NONE;
+      if (modifierPos != pos)
+      {
+        String modifier = text.substring(modifierPos,pos);
+        if (modifier.startsWith("u"))
+          style = EvaluatorToken.GROUPSTYLE_UPPER;
+        else if (modifier.startsWith("l"))
+          style = EvaluatorToken.GROUPSTYLE_LOWER;
+        else if (modifier.startsWith("m"))
+          style = EvaluatorToken.GROUPSTYLE_MIXED;
+      }
+      return new EvaluatorToken(groupNumber,style);
+    }
+  }
+
+}
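A brief usage sketch of the MatchMap class added above (the patterns, inputs, and the MatchMapSketch wrapper are illustrative only, not part of the connector):

    import org.apache.manifoldcf.authorities.mappers.regexp.MatchMap;

    public class MatchMapSketch
    {
      public static void main(String[] args) throws Exception
      {
        MatchMap map = new MatchMap();
        // New-style pair: a regexp plus a replace string with $(group[modifier]) references;
        // the "l", "u", and "m" modifiers lowercase, uppercase, or mixed-case the captured group.
        map.appendMatchPair("(.*)@EXAMPLE\\.COM", "$(1l)");
        System.out.println(map.translate("JSMITH@EXAMPLE.COM"));  // prints "jsmith"

        // Old-style pairs are rewritten on the way in:
        //   match   (.*)@example\.com   becomes   ^(.*)@example\.com$
        //   replace 1u,"@CORP"          becomes   $(1u)@CORP
        map.appendOldstyleMatchPair("(.*)@example\\.com", "1u,\"@CORP\"");

        // The whole map round-trips through its single-string form ("match1=replace1&match2=replace2").
        MatchMap copy = new MatchMap(map.toString());
        System.out.println(copy.getMatchCount());  // prints 2
      }
    }
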
diff --git a/connectors/regexpmapper/connector/src/main/java/org/apache/manifoldcf/authorities/mappers/regexp/Messages.java b/connectors/regexpmapper/connector/src/main/java/org/apache/manifoldcf/authorities/mappers/regexp/Messages.java
new file mode 100644
index 0000000..49f8d85
--- /dev/null
+++ b/connectors/regexpmapper/connector/src/main/java/org/apache/manifoldcf/authorities/mappers/regexp/Messages.java
@@ -0,0 +1,141 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.authorities.mappers.regexp;
+
+import java.util.Locale;
+import java.util.Map;
+import org.apache.manifoldcf.core.interfaces.ManifoldCFException;
+import org.apache.manifoldcf.core.interfaces.IHTTPOutput;
+
+public class Messages extends org.apache.manifoldcf.ui.i18n.Messages
+{
+  public static final String DEFAULT_BUNDLE_NAME="org.apache.manifoldcf.authorities.mappers.regexp.common";
+  public static final String DEFAULT_PATH_NAME="org.apache.manifoldcf.authorities.mappers.regexp";
+  
+  /** Constructor - do not instantiate
+  */
+  protected Messages()
+  {
+  }
+  
+  public static String getString(Locale locale, String messageKey)
+  {
+    return getString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getAttributeString(Locale locale, String messageKey)
+  {
+    return getAttributeString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getBodyString(Locale locale, String messageKey)
+  {
+    return getBodyString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getAttributeJavascriptString(Locale locale, String messageKey)
+  {
+    return getAttributeJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getBodyJavascriptString(Locale locale, String messageKey)
+  {
+    return getBodyJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getString(Locale locale, String messageKey, Object[] args)
+  {
+    return getString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+
+  public static String getAttributeString(Locale locale, String messageKey, Object[] args)
+  {
+    return getAttributeString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+  
+  public static String getBodyString(Locale locale, String messageKey, Object[] args)
+  {
+    return getBodyString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+
+  public static String getAttributeJavascriptString(Locale locale, String messageKey, Object[] args)
+  {
+    return getAttributeJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+
+  public static String getBodyJavascriptString(Locale locale, String messageKey, Object[] args)
+  {
+    return getBodyJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+
+  // More general methods which allow bundle names and class loaders to be specified.
+  
+  public static String getString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getString(Messages.class, bundleName, locale, messageKey, args);
+  }
+
+  public static String getAttributeString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getAttributeString(Messages.class, bundleName, locale, messageKey, args);
+  }
+
+  public static String getBodyString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getBodyString(Messages.class, bundleName, locale, messageKey, args);
+  }
+  
+  public static String getAttributeJavascriptString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getAttributeJavascriptString(Messages.class, bundleName, locale, messageKey, args);
+  }
+
+  public static String getBodyJavascriptString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getBodyJavascriptString(Messages.class, bundleName, locale, messageKey, args);
+  }
+
+  // Resource output
+  
+  public static void outputResource(IHTTPOutput output, Locale locale, String resourceKey,
+    Map<String,String> substitutionParameters, boolean mapToUpperCase)
+    throws ManifoldCFException
+  {
+    outputResource(output,Messages.class,DEFAULT_PATH_NAME,locale,resourceKey,
+      substitutionParameters,mapToUpperCase);
+  }
+  
+  public static void outputResourceWithVelocity(IHTTPOutput output, Locale locale, String resourceKey,
+    Map<String,String> substitutionParameters, boolean mapToUpperCase)
+    throws ManifoldCFException
+  {
+    outputResourceWithVelocity(output,Messages.class,DEFAULT_BUNDLE_NAME,DEFAULT_PATH_NAME,locale,resourceKey,
+      substitutionParameters,mapToUpperCase);
+  }
+
+  public static void outputResourceWithVelocity(IHTTPOutput output, Locale locale, String resourceKey,
+    Map<String,Object> contextObjects)
+    throws ManifoldCFException
+  {
+    outputResourceWithVelocity(output,Messages.class,DEFAULT_BUNDLE_NAME,DEFAULT_PATH_NAME,locale,resourceKey,
+      contextObjects);
+  }
+  
+}
+
diff --git a/connectors/regexpmapper/connector/src/main/java/org/apache/manifoldcf/authorities/mappers/regexp/RegexpMapper.java b/connectors/regexpmapper/connector/src/main/java/org/apache/manifoldcf/authorities/mappers/regexp/RegexpMapper.java
new file mode 100644
index 0000000..83737f3
--- /dev/null
+++ b/connectors/regexpmapper/connector/src/main/java/org/apache/manifoldcf/authorities/mappers/regexp/RegexpMapper.java
@@ -0,0 +1,246 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.authorities.mappers.regexp;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+import org.apache.manifoldcf.authorities.interfaces.*;
+import org.apache.manifoldcf.authorities.system.Logging;
+import org.apache.manifoldcf.authorities.system.ManifoldCF;
+
+import java.io.*;
+import java.util.*;
+
+/** This is the regexp mapper implementation, which uses a regular expression to manipulate a user name.
+*/
+public class RegexpMapper extends org.apache.manifoldcf.authorities.mappers.BaseMappingConnector
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  // Match map for username
+  private MatchMap matchMap = null;
+
+  /** Constructor.
+  */
+  public RegexpMapper()
+  {
+  }
+
+  /** Close the connection.  Call this before discarding the mapping connection.
+  */
+  @Override
+  public void disconnect()
+    throws ManifoldCFException
+  {
+    matchMap = null;
+    super.disconnect();
+  }
+
+  // All methods below this line will ONLY be called if a connect() call succeeded
+  // on this instance!
+
+  private MatchMap getSession()
+    throws ManifoldCFException
+  {
+    if (matchMap == null)
+      matchMap = new MatchMap(params.getParameter(RegexpParameters.userNameMapping));
+    return matchMap;
+  }
+
+  /** Map an input user name to an output name.
+  *@param userName is the name to map
+  *@return the mapped user name
+  */
+  @Override
+  public String mapUser(String userName)
+    throws ManifoldCFException
+  {
+    MatchMap mm = getSession();
+    
+    String outputUserName = mm.translate(userName);
+    
+    if (Logging.mappingConnectors.isDebugEnabled())
+      Logging.mappingConnectors.debug("RegexpMapper: Input user name '"+userName+"'; output user name '"+outputUserName+"'");
+    
+    return outputUserName;
+  }
+
+  // UI support methods.
+  //
+  // These support methods are involved in setting up mapping connection configuration information. The configuration methods cannot assume that the
+  // current mapping connector object is connected.  That is why they receive a thread context argument.
+    
+  /** Output the configuration header section.
+  * This method is called in the head section of the connector's configuration page.  Its purpose is to add the required tabs to the list, and to output any
+  * javascript methods that might be needed by the configuration editing HTML.
+  *@param threadContext is the local thread context.
+  *@param out is the output to which any HTML should be sent.
+  *@param parameters are the configuration parameters, as they currently exist, for this connection being configured.
+  *@param tabsArray is an array of tab names.  Add to this array any tab names that are specific to the connector.
+  */
+  @Override
+  public void outputConfigurationHeader(IThreadContext threadContext, IHTTPOutput out,
+    Locale locale, ConfigParams parameters, List<String> tabsArray)
+    throws ManifoldCFException, IOException
+  {
+    tabsArray.add(Messages.getString(locale,"RegexpMapper.UserMapping"));
+
+    out.print(
+"<script type=\"text/javascript\">\n"+
+"<!--\n"+
+"function checkConfig()\n"+
+"{\n"+
+"  if (editconnection.usernameregexp.value != \"\" && !isRegularExpression(editconnection.usernameregexp.value))\n"+
+"  {\n"+
+"    alert(\"" + Messages.getBodyJavascriptString(locale,"RegexpMapper.UserNameRegularExpressionMustBeValidRegularExpression") + "\");\n"+
+"    editconnection.usernameregexp.focus();\n"+
+"    return false;\n"+
+"  }\n"+
+"  return true;\n"+
+"}\n"+
+"\n"+
+"function checkConfigForSave()\n"+
+"{\n"+
+"  if (editconnection.usernameregexp.value == \"\")\n"+
+"  {\n"+
+"    alert(\"" + Messages.getBodyJavascriptString(locale,"RegexpMapper.UserNameRegularExpressionCannotBeNull") + "\");\n"+
+"    SelectTab(\"" + Messages.getBodyJavascriptString(locale,"RegexpMapper.UserMapping") + "\");\n"+
+"    editconnection.usernameregexp.focus();\n"+
+"    return false;\n"+
+"  }\n"+
+"  return true;\n"+
+"}\n"+
+"//-->\n"+
+"</script>\n"
+    );
+  }
+  
+  /** Output the configuration body section.
+  * This method is called in the body section of the mapping connector's configuration page.  Its purpose is to present the required form elements for editing.
+  * The coder can presume that the HTML that is output from this configuration will be within appropriate <html>, <body>, and <form> tags.  The name of the
+  * form is "editconnection".
+  *@param threadContext is the local thread context.
+  *@param out is the output to which any HTML should be sent.
+  *@param parameters are the configuration parameters, as they currently exist, for this connection being configured.
+  *@param tabName is the current tab name.
+  */
+  @Override
+  public void outputConfigurationBody(IThreadContext threadContext, IHTTPOutput out,
+    Locale locale, ConfigParams parameters, String tabName)
+    throws ManifoldCFException, IOException
+  {
+    String userNameMapping = parameters.getParameter(RegexpParameters.userNameMapping);
+    if (userNameMapping == null)
+      userNameMapping = "^(.*)\\\\@([A-Z|a-z|0-9|_|-]*)\\\\.(.*)$=$(2)\\\\$(1l)";
+    MatchMap matchMap = new MatchMap(userNameMapping);
+
+    String usernameRegexp = matchMap.getMatchString(0);
+    String livelinkUserExpr = matchMap.getReplaceString(0);
+
+    // The "User Mapping" tab
+    if (tabName.equals(Messages.getString(locale,"RegexpMapper.UserMapping")))
+    {
+      out.print(
+"<table class=\"displaytable\">\n"+
+"  <tr><td class=\"separator\" colspan=\"2\"><hr/></td></tr>\n"+
+"  <tr>\n"+
+"    <td class=\"description\"><nobr>" + Messages.getBodyString(locale,"RegexpMapper.UserNameRegularExpressionColon") + "</nobr></td>\n"+
+"    <td class=\"value\"><input type=\"text\" size=\"40\" name=\"usernameregexp\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(usernameRegexp)+"\"/></td>\n"+
+"  </tr>\n"+
+"  <tr>\n"+
+"    <td class=\"description\"><nobr>" + Messages.getBodyString(locale,"RegexpMapper.UserExpressionColon") + "</nobr></td>\n"+
+"    <td class=\"value\"><input type=\"text\" size=\"40\" name=\"livelinkuserexpr\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(livelinkUserExpr)+"\"/></td>\n"+
+"  </tr>\n"+
+"</table>\n"
+      );
+    }
+    else
+    {
+      // Hiddens for "User Mapping" tab
+      out.print(
+"<input type=\"hidden\" name=\"usernameregexp\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(usernameRegexp)+"\"/>\n"+
+"<input type=\"hidden\" name=\"livelinkuserexpr\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(livelinkUserExpr)+"\"/>\n"
+      );
+    }
+
+  }
+  
+  /** Process a configuration post.
+  * This method is called at the start of the mapping connector's configuration page, whenever there is a possibility that form data for a connection has been
+  * posted.  Its purpose is to gather form information and modify the configuration parameters accordingly.
+  * The name of the posted form is "editconnection".
+  *@param threadContext is the local thread context.
+  *@param variableContext is the set of variables available from the post, including binary file post information.
+  *@param parameters are the configuration parameters, as they currently exist, for this connection being configured.
+  *@return null if all is well, or a string error message if there is an error that should prevent saving of the connection (and cause a redirection to an error page).
+  */
+  @Override
+  public String processConfigurationPost(IThreadContext threadContext, IPostParameters variableContext,
+    Locale locale, ConfigParams parameters)
+    throws ManifoldCFException
+  {
+    // User name parameters
+    String usernameRegexp = variableContext.getParameter("usernameregexp");
+    String livelinkUserExpr = variableContext.getParameter("livelinkuserexpr");
+    if (usernameRegexp != null && livelinkUserExpr != null)
+    {
+      MatchMap matchMap = new MatchMap();
+      matchMap.appendMatchPair(usernameRegexp,livelinkUserExpr);
+      parameters.setParameter(RegexpParameters.userNameMapping,matchMap.toString());
+    }
+
+    return null;
+  }
+  
+  /** View configuration.
+  * This method is called in the body section of the mapping connector's view configuration page.  Its purpose is to present the connection information to the user.
+  * The coder can presume that the HTML that is output from this configuration will be within appropriate <html> and <body> tags.
+  *@param threadContext is the local thread context.
+  *@param out is the output to which any HTML should be sent.
+  *@param parameters are the configuration parameters, as they currently exist, for this connection being configured.
+  */
+  @Override
+  public void viewConfiguration(IThreadContext threadContext, IHTTPOutput out,
+    Locale locale, ConfigParams parameters)
+    throws ManifoldCFException, IOException
+  {
+    MatchMap matchMap = new MatchMap(parameters.getParameter(RegexpParameters.userNameMapping));
+
+    String usernameRegexp = matchMap.getMatchString(0);
+    String livelinkUserExpr = matchMap.getReplaceString(0);
+
+    out.print(
+"<table class=\"displaytable\">\n"+
+"  <tr><td class=\"separator\" colspan=\"2\"><hr/></td></tr>\n"+
+"  <tr>\n"+
+"    <td class=\"description\"><nobr>" + Messages.getBodyString(locale,"RegexpMapper.UserNameRegularExpressionColon") + "</nobr></td>\n"+
+"    <td class=\"value\"><nobr>"+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(usernameRegexp)+"</nobr></td>\n"+
+"  </tr>\n"+
+"  <tr>\n"+
+"    <td class=\"description\"><nobr>" + Messages.getBodyString(locale,"RegexpMapper.UserExpressionColon") + "</nobr></td>\n"+
+"    <td class=\"value\"><nobr>"+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(livelinkUserExpr)+"</nobr></td>\n"+
+"  </tr>\n"+
+"</table>\n"
+    );
+
+  }
+
+}
+
+
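
Note: the default mapping expression above turns an e-mail-style login into a DOMAIN\user form with the user part lowercased via the $(1l) group reference; for example, Karl@CORP.example.com would become CORP\karl. Below is a self-contained sketch of the equivalent transformation using plain java.util.regex; the class is illustrative only and is not part of the patch, MatchMap does the actual expression parsing in the connector, the character class is simplified (the connector's default also contains literal '|' characters), and the pass-through behavior for non-matching names is a simplification of this sketch, not a statement about MatchMap.

import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustration only: the same effect as the connector's default mapping,
// user@DOMAIN.suffix  ->  DOMAIN\user (user lowercased), written with plain
// java.util.regex instead of the framework's MatchMap class.
public class DefaultMappingSketch
{
  public static String map(String userName)
  {
    Matcher m = Pattern.compile("^(.*)@([A-Za-z0-9_-]*)\\.(.*)$").matcher(userName);
    if (!m.matches())
      return userName;  // simplification: pass non-matching names through unchanged
    return m.group(2) + "\\" + m.group(1).toLowerCase();
  }

  public static void main(String[] args)
  {
    System.out.println(map("Karl@CORP.example.com"));  // prints: CORP\karl
  }
}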
diff --git a/connectors/regexpmapper/connector/src/main/java/org/apache/manifoldcf/authorities/mappers/regexp/RegexpParameters.java b/connectors/regexpmapper/connector/src/main/java/org/apache/manifoldcf/authorities/mappers/regexp/RegexpParameters.java
new file mode 100644
index 0000000..6b39353
--- /dev/null
+++ b/connectors/regexpmapper/connector/src/main/java/org/apache/manifoldcf/authorities/mappers/regexp/RegexpParameters.java
@@ -0,0 +1,30 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.authorities.mappers.regexp;
+
+/** This class describes regexp mapper parameters.
+*/
+public class RegexpParameters
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  /** User name mapping description. */
+  public final static String userNameMapping = "User name map";
+
+}
diff --git a/connectors/regexpmapper/connector/src/main/native2ascii/org/apache/manifoldcf/authorities/mappers/regexp/common_en_US.properties b/connectors/regexpmapper/connector/src/main/native2ascii/org/apache/manifoldcf/authorities/mappers/regexp/common_en_US.properties
new file mode 100644
index 0000000..60cd844
--- /dev/null
+++ b/connectors/regexpmapper/connector/src/main/native2ascii/org/apache/manifoldcf/authorities/mappers/regexp/common_en_US.properties
@@ -0,0 +1,20 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+RegexpMapper.UserMapping=User Mapping
+RegexpMapper.UserNameRegularExpressionColon=User name regular expression:
+RegexpMapper.UserExpressionColon=User expression:
+RegexpMapper.UserNameRegularExpressionMustBeValidRegularExpression=User name regular expression must be a valid regular expression
+RegexpMapper.UserNameRegularExpressionCannotBeNull=User name regular expression cannot be null
diff --git a/connectors/regexpmapper/connector/src/main/native2ascii/org/apache/manifoldcf/authorities/mappers/regexp/common_ja_JP.properties b/connectors/regexpmapper/connector/src/main/native2ascii/org/apache/manifoldcf/authorities/mappers/regexp/common_ja_JP.properties
new file mode 100644
index 0000000..60cd844
--- /dev/null
+++ b/connectors/regexpmapper/connector/src/main/native2ascii/org/apache/manifoldcf/authorities/mappers/regexp/common_ja_JP.properties
@@ -0,0 +1,20 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+RegexpMapper.UserMapping=User Mapping
+RegexpMapper.UserNameRegularExpressionColon=User name regular expression:
+RegexpMapper.UserExpressionColon=User expression:
+RegexpMapper.UserNameRegularExpressionMustBeValidRegularExpression=User name regular expression must be a valid regular expression
+RegexpMapper.UserNameRegularExpressionCannotBeNull=User name regular expression cannot be null
diff --git a/connectors/regexpmapper/pom.xml b/connectors/regexpmapper/pom.xml
new file mode 100644
index 0000000..830917d
--- /dev/null
+++ b/connectors/regexpmapper/pom.xml
@@ -0,0 +1,98 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+  <parent>
+    <groupId>org.apache.manifoldcf</groupId>
+    <artifactId>mcf-connectors</artifactId>
+    <version>1.5-SNAPSHOT</version>
+  </parent>
+  <modelVersion>4.0.0</modelVersion>
+
+  <artifactId>mcf-regexpmapper-connector</artifactId>
+  <name>ManifoldCF - Connectors - Regexp Mapper</name>
+
+  <build>
+    <sourceDirectory>${basedir}/connector/src/main/java</sourceDirectory>
+    <testSourceDirectory>${basedir}/connector/src/test/java</testSourceDirectory>
+    <resources>
+      <resource>
+        <directory>${basedir}/connector/src/main/resources</directory>
+        <includes>
+          <include>**/*.html</include>
+          <include>**/*.js</include>
+        </includes>
+      </resource>
+      <resource>
+        <directory>${basedir}/connector/src/main/native2ascii</directory>
+        <includes>
+          <include>**/*.properties</include>
+        </includes>
+      </resource>
+    </resources> 
+
+    <plugins>
+      <plugin>
+        <groupId>org.codehaus.mojo</groupId>
+        <artifactId>native2ascii-maven-plugin</artifactId>
+        <version>1.0-beta-1</version>
+        <configuration>
+            <workDir>target/classes</workDir>
+        </configuration>
+        <executions>
+            <execution>
+                <id>native2ascii-utf8</id>
+                <goals>
+                    <goal>native2ascii</goal>
+                </goals>
+                <configuration>
+                    <encoding>UTF8</encoding>
+                    <includes>
+                      <include>**/*.properties</include>
+                    </includes>
+                </configuration>
+            </execution>
+        </executions>
+      </plugin>
+    </plugins>
+
+  </build>
+
+  <dependencies>
+    <dependency>
+      <groupId>${project.groupId}</groupId>
+      <artifactId>mcf-core</artifactId>
+      <version>${project.version}</version>
+    </dependency>
+    <dependency>
+      <groupId>${project.groupId}</groupId>
+      <artifactId>mcf-agents</artifactId>
+      <version>${project.version}</version>
+    </dependency>
+    <dependency>
+      <groupId>${project.groupId}</groupId>
+      <artifactId>mcf-pull-agent</artifactId>
+      <version>${project.version}</version>
+    </dependency>
+    <dependency>
+      <groupId>${project.groupId}</groupId>
+      <artifactId>mcf-ui-core</artifactId>
+      <version>${project.version}</version>
+    </dependency>
+  </dependencies>
+</project>
\ No newline at end of file
diff --git a/connectors/rss/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/rss/RSSConnector.java b/connectors/rss/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/rss/RSSConnector.java
index dda19e3..7cc4b7b 100644
--- a/connectors/rss/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/rss/RSSConnector.java
+++ b/connectors/rss/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/rss/RSSConnector.java
@@ -52,7 +52,7 @@
 {
   public static final String _rcsid = "@(#)$Id: RSSConnector.java 994959 2010-09-08 10:04:42Z kwright $";
 
-
+  protected final static String rssThrottleGroupType = "_RSS_";
 
   // Usage flag values
   protected static final int ROBOTS_NONE = 0;
@@ -105,7 +105,7 @@
   protected Robots robots = null;
 
   /** Storage for fetcher objects */
-  protected static Map fetcherMap = new HashMap();
+  protected static Map<String,ThrottledFetcher> fetcherMap = new HashMap<String,ThrottledFetcher>();
   /** Storage for robots objects */
   protected static Map robotsMap = new HashMap();
 
@@ -231,10 +231,16 @@
 
       }
 
+      IThrottleGroups tg = ThrottleGroupsFactory.make(currentContext);
+      // Create the throttle group
+      tg.createOrUpdateThrottleGroup(rssThrottleGroupType, throttleGroupName, new ThrottleSpec(maxOpenConnectionsPerServer,
+        minimumMillisecondsPerFetchPerServer, minimumMillisecondsPerBytePerServer));
+      
       isInitialized = true;
     }
   }
 
+  
   /** Return the list of activities that this connector supports (i.e. writes into the log).
   *@return the list.
   */
@@ -815,11 +821,15 @@
                 String[] pubDates = activities.retrieveParentData(urlValue,"pubdate");
                 String[] sources = activities.retrieveParentData(urlValue,"source");
                 String[] titles = activities.retrieveParentData(urlValue,"title");
+                String[] authorNames = activities.retrieveParentData(urlValue,"authorname");
+                String[] authorEmails = activities.retrieveParentData(urlValue,"authoremail");
                 String[] categories = activities.retrieveParentData(urlValue,"category");
                 String[] descriptions = activities.retrieveParentData(urlValue,"description");
                 java.util.Arrays.sort(pubDates);
                 java.util.Arrays.sort(sources);
                 java.util.Arrays.sort(titles);
+                java.util.Arrays.sort(authorNames);
+                java.util.Arrays.sort(authorEmails);
                 java.util.Arrays.sort(categories);
                 java.util.Arrays.sort(descriptions);
 
@@ -852,6 +862,10 @@
                 packList(sb,categories,'+');
                 // The descriptions
                 packList(sb,descriptions,'+');
+                // The author names
+                packList(sb,authorNames,'+');
+                // The author emails
+                packList(sb,authorEmails,'+');
 
                 // Do the checksum part, which does not need to be parseable.
                 sb.append(new Long(checkSum).toString());
@@ -928,11 +942,9 @@
             String pathPart = url.getFile();
 
             // Check with robots to see if it's allowed
-            if (robotsUsage >= ROBOTS_DATA && !robots.isFetchAllowed(protocol,port,hostName,url.getPath(),
+            if (robotsUsage >= ROBOTS_DATA && !robots.isFetchAllowed(currentContext,throttleGroupName,
+              protocol,port,hostName,url.getPath(),
               userAgent,from,
-              minimumMillisecondsPerBytePerServer,
-              maxOpenConnectionsPerServer,
-              minimumMillisecondsPerFetchPerServer,
               proxyHost, proxyPort, proxyAuthDomain, proxyAuthUsername, proxyAuthPassword,
               activities, connectionLimit))
             {
@@ -947,10 +959,9 @@
             {
 
               // Now, use the fetcher, and get the file.
-              IThrottledConnection connection = fetcher.createConnection(hostName,
-                minimumMillisecondsPerBytePerServer,
-                maxOpenConnectionsPerServer,
-                minimumMillisecondsPerFetchPerServer,
+              IThrottledConnection connection = fetcher.createConnection(currentContext,
+                throttleGroupName,
+                hostName,
                 connectionLimit,
                 feedTimeout,
                 proxyHost,
@@ -1055,11 +1066,15 @@
                           String[] pubDates = activities.retrieveParentData(urlValue,"pubdate");
                           String[] sources = activities.retrieveParentData(urlValue,"source");
                           String[] titles = activities.retrieveParentData(urlValue,"title");
+                          String[] authorNames = activities.retrieveParentData(urlValue,"authorname");
+                          String[] authorEmails = activities.retrieveParentData(urlValue,"authoremail");
                           String[] categories = activities.retrieveParentData(urlValue,"category");
                           String[] descriptions = activities.retrieveParentData(urlValue,"description");
                           java.util.Arrays.sort(pubDates);
                           java.util.Arrays.sort(sources);
                           java.util.Arrays.sort(titles);
+                          java.util.Arrays.sort(authorNames);
+                          java.util.Arrays.sort(authorEmails);
                           java.util.Arrays.sort(categories);
                           java.util.Arrays.sort(descriptions);
 
@@ -1092,7 +1107,10 @@
                           packList(sb,categories,'+');
                           // The descriptions
                           packList(sb,descriptions,'+');
-
+                          // The author names
+                          packList(sb,authorNames,'+');
+                          // The author emails
+                          packList(sb,authorEmails,'+');
                         }
                         else
                         {
@@ -1322,6 +1340,10 @@
           startPos = unpackList(categories,version,startPos,'+');
           ArrayList descriptions = new ArrayList();
           startPos = unpackList(descriptions,version,startPos,'+');
+          ArrayList authorNames = new ArrayList();
+          startPos = unpackList(authorNames,version,startPos,'+');
+          ArrayList authorEmails = new ArrayList();
+          startPos = unpackList(authorEmails,version,startPos,'+');
 
           if (ingestURL.length() > 0)
           {
@@ -1389,6 +1411,28 @@
             }
             if (k > 0)
               rd.addField("title",titleValues);
+
+            // Loop through the author names to add those to the metadata
+            String[] authorNameValues = new String[authorNames.size()];
+            k = 0;
+            while (k < authorNameValues.length)
+            {
+              authorNameValues[k] = (String)authorNames.get(k);
+              k++;
+            }
+            if (k > 0)
+              rd.addField("authorname",authorNameValues);
+
+            // Loop through the author emails to add those to the metadata
+            String[] authorEmailValues = new String[authorEmails.size()];
+            k = 0;
+            while (k < authorEmailValues.length)
+            {
+              authorEmailValues[k] = (String)authorEmails.get(k);
+              k++;
+            }
+            if (k > 0)
+              rd.addField("authoremail",authorEmailValues);
             
             // Loop through the descriptions to add those to the metadata
             String[] descriptionValues = new String[descriptions.size()];
@@ -1651,6 +1695,8 @@
     String proxyAuthPassword = parameters.getObfuscatedParameter(RSSConfig.PARAMETER_PROXYAUTHPASSWORD);
     if (proxyAuthPassword == null)
       proxyAuthPassword = "";
+    else
+      proxyAuthPassword = out.mapPasswordToKey(proxyAuthPassword);
       
     // Email tab
     if (tabName.equals(Messages.getString(locale,"RSSConnector.Email")))
@@ -1819,7 +1865,7 @@
       parameters.setParameter(RSSConfig.PARAMETER_PROXYAUTHUSERNAME,proxyAuthUsername);
     String proxyAuthPassword = variableContext.getParameter("proxyauthpassword");
     if (proxyAuthPassword != null)
-      parameters.setObfuscatedParameter(RSSConfig.PARAMETER_PROXYAUTHPASSWORD,proxyAuthPassword);
+      parameters.setObfuscatedParameter(RSSConfig.PARAMETER_PROXYAUTHPASSWORD,variableContext.mapKeyToPassword(proxyAuthPassword));
 
     return null;
   }
@@ -3806,6 +3852,8 @@
     protected String pubDateField = null;
     protected String titleField = null;
     protected String descriptionField = null;
+    protected String authorEmailField = null;
+    protected String authorNameField = null;
     protected ArrayList categoryField = new ArrayList();
     protected File contentsFile = null;
 
@@ -3845,6 +3893,16 @@
         // "category" tag
         return new XMLStringParsingContext(theStream,namespace,localName,qName,atts);
       }
+      else if (localName.equals("author"))
+      {
+        // "author" tag, which contains email
+        return new XMLStringParsingContext(theStream,namespace,localName,qName,atts);
+      }
+      else if (localName.equals("creator"))
+      {
+        // "creator" tag which contains name (like dc:creator)
+        return new XMLStringParsingContext(theStream,namespace,localName,qName,atts);
+      }
       else
       {
         // Handle potentially longer fields.  Both "description" and "content" fields can potentially be large; they are thus
@@ -3941,6 +3999,14 @@
       {
         categoryField.add(((XMLStringParsingContext)theContext).getValue());
       }
+      else if (theTag.equals("author"))
+      {
+        authorEmailField = ((XMLStringParsingContext)theContext).getValue();
+      }
+      else if (theTag.equals("creator"))
+      {
+        authorNameField = ((XMLStringParsingContext)theContext).getValue();
+      }
       else
       {
         // What we want is: (a) if dechromed mode is NONE, just put the description file in the description field; (b)
@@ -4038,22 +4104,26 @@
               if (contentsFile == null && filter.getChromedContentMode() != CHROMED_METADATA_ONLY)
               {
                 // It's a reference!  Add it.
-                String[] dataNames = new String[]{"pubdate","title","source","category","description"};
+                String[] dataNames = new String[]{"pubdate","title","source","authoremail","authorname","category","description"};
                 String[][] dataValues = new String[dataNames.length][];
                 if (origDate != null)
                   dataValues[0] = new String[]{origDate.toString()};
                 if (titleField != null)
                   dataValues[1] = new String[]{titleField};
                 dataValues[2] = new String[]{documentIdentifier};
-                dataValues[3] = new String[categoryField.size()];
+                if (authorEmailField != null)
+                  dataValues[3] = new String[]{authorEmailField};
+                if (authorNameField != null)
+                  dataValues[4] = new String[]{authorNameField};
+                dataValues[5] = new String[categoryField.size()];
                 int q = 0;
                 while (q < categoryField.size())
                 {
-                  (dataValues[3])[q] = (String)categoryField.get(q);
+                  (dataValues[5])[q] = (String)categoryField.get(q);
                   q++;
                 }
                 if (descriptionField != null)
-                  dataValues[4] = new String[]{descriptionField};
+                  dataValues[6] = new String[]{descriptionField};
                 // Add document reference, not including the data to pass down, but including a description
                 activities.addDocumentReference(newIdentifier,documentIdentifier,null,dataNames,dataValues,origDate);
               }
@@ -4067,30 +4137,34 @@
                 // Since the dechromed data is available from the feed, the possibility remains of passing the document
 
                 // Now, set up the carrydown info
-                String[] dataNames = new String[]{"pubdate","title","source","category","data","description"};
+                String[] dataNames = new String[]{"pubdate","title","source","authoremail","authorname","category","data","description"};
                 Object[][] dataValues = new Object[dataNames.length][];
                 if (origDate != null)
                   dataValues[0] = new String[]{origDate.toString()};
                 if (titleField != null)
                   dataValues[1] = new String[]{titleField};
                 dataValues[2] = new String[]{documentIdentifier};
-                dataValues[3] = new String[categoryField.size()];
+                if (authorEmailField != null)
+                  dataValues[3] = new String[]{authorEmailField};
+                if (authorNameField != null)
+                  dataValues[4] = new String[]{authorNameField};
+                dataValues[5] = new String[categoryField.size()];
                 int q = 0;
                 while (q < categoryField.size())
                 {
-                  (dataValues[3])[q] = (String)categoryField.get(q);
+                  (dataValues[5])[q] = (String)categoryField.get(q);
                   q++;
                 }
 
                 if (descriptionField != null)
-                  dataValues[5] = new String[]{descriptionField};
+                  dataValues[7] = new String[]{descriptionField};
                   
                 if (contentsFile == null)
                 {
                   CharacterInput ci = new NullCharacterInput();
                   try
                   {
-                    dataValues[4] = new Object[]{ci};
+                    dataValues[6] = new Object[]{ci};
 
                     // Add document reference, including the data to pass down, and the dechromed content too
                     activities.addDocumentReference(newIdentifier,documentIdentifier,null,dataNames,dataValues,origDate);
@@ -4106,7 +4180,7 @@
                   try
                   {
                     contentsFile = null;
-                    dataValues[4] = new Object[]{ci};
+                    dataValues[6] = new Object[]{ci};
 
                     // Add document reference, including the data to pass down, and the dechromed content too
                     activities.addDocumentReference(newIdentifier,documentIdentifier,null,dataNames,dataValues,origDate);
@@ -4252,6 +4326,7 @@
     protected String linkField = null;
     protected String pubDateField = null;
     protected String titleField = null;
+    protected String authorNameField = null;
     protected String descriptionField = null;
     protected File contentsFile = null;
 
@@ -4281,6 +4356,11 @@
         // "title" tag
         return new XMLStringParsingContext(theStream,namespace,localName,qName,atts);
       }
+      else if (localName.equals("creator"))
+      {
+        // "creator" tag (e.g. "dc:creator")
+        return new XMLStringParsingContext(theStream,namespace,localName,qName,atts);
+      }
       else
       {
         switch (dechromedContentMode)
@@ -4366,6 +4446,10 @@
       {
         titleField = ((XMLStringParsingContext)theContext).getValue();
       }
+      else if (theTag.equals("creator"))
+      {
+        authorNameField = ((XMLStringParsingContext)theContext).getValue();
+      }
       else
       {
         switch (dechromedContentMode)
@@ -4450,15 +4534,17 @@
               if (contentsFile == null && filter.getChromedContentMode() != CHROMED_METADATA_ONLY)
               {
                 // It's a reference!  Add it.
-                String[] dataNames = new String[]{"pubdate","title","source","description"};
+                String[] dataNames = new String[]{"pubdate","title","source","authorname","description"};
                 String[][] dataValues = new String[dataNames.length][];
                 if (origDate != null)
                   dataValues[0] = new String[]{origDate.toString()};
                 if (titleField != null)
                   dataValues[1] = new String[]{titleField};
                 dataValues[2] = new String[]{documentIdentifier};
+                if (authorNameField != null)
+                  dataValues[3] = new String[]{authorNameField};
                 if (descriptionField != null)
-                  dataValues[3] = new String[]{descriptionField};
+                  dataValues[4] = new String[]{descriptionField};
 
                 // Add document reference, including the data to pass down
                 activities.addDocumentReference(newIdentifier,documentIdentifier,null,dataNames,dataValues,origDate);
@@ -4471,22 +4557,24 @@
                 // right here.
 
                 // Now, set up the carrydown info
-                String[] dataNames = new String[]{"pubdate","title","source","data","description"};
+                String[] dataNames = new String[]{"pubdate","title","source","authorname","data","description"};
                 Object[][] dataValues = new Object[dataNames.length][];
                 if (origDate != null)
                   dataValues[0] = new String[]{origDate.toString()};
                 if (titleField != null)
                   dataValues[1] = new String[]{titleField};
                 dataValues[2] = new String[]{documentIdentifier};
+                if (authorNameField != null)
+                  dataValues[3] = new String[]{authorNameField};
                 if (descriptionField != null)
-                  dataValues[4] = new String[]{descriptionField};
+                  dataValues[5] = new String[]{descriptionField};
                   
                 if (contentsFile == null)
                 {
                   CharacterInput ci = new NullCharacterInput();
                   try
                   {
-                    dataValues[3] = new Object[]{ci};
+                    dataValues[4] = new Object[]{ci};
 
                     // Add document reference, including the data to pass down, and the dechromed content too
                     activities.addDocumentReference(newIdentifier,documentIdentifier,null,dataNames,dataValues,origDate);
@@ -4502,7 +4590,7 @@
                   try
                   {
                     contentsFile = null;
-                    dataValues[3] = new Object[]{ci};
+                    dataValues[4] = new Object[]{ci};
 
                     // Add document reference, including the data to pass down, and the dechromed content too
                     activities.addDocumentReference(newIdentifier,documentIdentifier,null,dataNames,dataValues,origDate);
@@ -4648,6 +4736,8 @@
     protected List<String> linkField = new ArrayList<String>();
     protected String pubDateField = null;
     protected String titleField = null;
+    protected String authorNameField = null;
+    protected String authorEmailField = null;
     protected ArrayList categoryField = new ArrayList();
     protected File contentsFile = null;
     protected String descriptionField = null;
@@ -4681,6 +4771,10 @@
         // "title" tag
         return new XMLStringParsingContext(theStream,namespace,localName,qName,atts);
       }
+      else if (localName.equals("author"))
+      {
+        return new FeedAuthorContextClass(theStream,namespace,localName,qName,atts);
+      }
       else if (localName.equals("category"))
       {
         String category = atts.get("term");
@@ -4769,6 +4863,12 @@
       {
         titleField = ((XMLStringParsingContext)theContext).getValue();
       }
+      else if (theTag.equals("author"))
+      {
+        FeedAuthorContextClass authorContext = (FeedAuthorContextClass)theContext;
+        authorEmailField = authorContext.getAuthorEmail();
+        authorNameField = authorContext.getAuthorName();
+      }
       else
       {
         switch (dechromedContentMode)
@@ -4951,6 +5051,69 @@
     }
   }
   
+  protected class FeedAuthorContextClass extends XMLParsingContext
+  {
+    protected String authorNameField = null;
+    protected String authorEmailField = null;
+
+    public FeedAuthorContextClass(XMLFuzzyHierarchicalParseState theStream, String namespace, String localName, String qName, Map<String,String> atts)
+    {
+      super(theStream,namespace,localName,qName,atts);
+    }
+
+    @Override
+    protected XMLParsingContext beginTag(String namespace, String localName, String qName, Map<String,String> atts)
+      throws ManifoldCFException
+    {
+      if (localName.equals("name"))
+      {
+        // "name" tag
+        return new XMLStringParsingContext(theStream,namespace,localName,qName,atts);
+      }
+      else if (localName.equals("email"))
+      {
+        // "email" tag
+        return new XMLStringParsingContext(theStream,namespace,localName,qName,atts);
+      }
+      else
+      {
+        // Skip everything else.
+        return super.beginTag(namespace,localName,qName,atts);
+      }
+    }
+
+    /** Convert the individual sub-fields of the item context into their final forms */
+    @Override
+    protected void endTag()
+      throws ManifoldCFException
+    {
+      XMLParsingContext theContext = theStream.getContext();
+      String theTag = theContext.getLocalname();
+      if (theTag.equals("name"))
+      {
+        authorNameField = ((XMLStringParsingContext)theContext).getValue();
+      }
+      else if (theTag.equals("email"))
+      {
+        authorEmailField = ((XMLStringParsingContext)theContext).getValue();
+      }
+      else
+      {
+        super.endTag();
+      }
+    }
+    
+    public String getAuthorName()
+    {
+      return authorNameField;
+    }
+    
+    public String getAuthorEmail()
+    {
+      return authorEmailField;
+    }
+  }
+
   protected class UrlsetContextClass extends XMLParsingContext
   {
     /** The document identifier */
@@ -5244,7 +5407,7 @@
   {
     synchronized (fetcherMap)
     {
-      ThrottledFetcher tf = (ThrottledFetcher)fetcherMap.get(throttleGroupName);
+      ThrottledFetcher tf = fetcherMap.get(throttleGroupName);
       if (tf == null)
       {
         tf = new ThrottledFetcher();
@@ -5337,6 +5500,47 @@
 
   // Protected classes
 
+  /** The throttle specification class.  Each server name is a different bin in this model.
+  */
+  protected static class ThrottleSpec implements IThrottleSpec
+  {
+    protected final int maxOpenConnectionsPerServer;
+    protected final long minimumMillisecondsPerFetchPerServer;
+    protected final double minimumMillisecondsPerBytePerServer;
+    
+    public ThrottleSpec(int maxOpenConnectionsPerServer, long minimumMillisecondsPerFetchPerServer,
+      double minimumMillisecondsPerBytePerServer)
+    {
+      this.maxOpenConnectionsPerServer = maxOpenConnectionsPerServer;
+      this.minimumMillisecondsPerFetchPerServer = minimumMillisecondsPerFetchPerServer;
+      this.minimumMillisecondsPerBytePerServer = minimumMillisecondsPerBytePerServer;
+    }
+    
+    /** Given a bin name, find the max open connections to use for that bin.
+    *@return Integer.MAX_VALUE if no limit found.
+    */
+    public int getMaxOpenConnections(String binName)
+    {
+      return maxOpenConnectionsPerServer;
+    }
+
+    /** Look up minimum milliseconds per byte for a bin.
+    *@return 0.0 if no limit found.
+    */
+    public double getMinimumMillisecondsPerByte(String binName)
+    {
+      return minimumMillisecondsPerBytePerServer;
+    }
+
+    /** Look up minimum milliseconds for a fetch for a bin.
+    *@return 0 if no limit found.
+    */
+    public long getMinimumMillisecondsPerFetch(String binName)
+    {
+      return minimumMillisecondsPerFetchPerServer;
+    }
+  }
+
   /** Name/value class */
   protected static class NameValue
   {
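
Note: taken together, the RSSConnector, Robots, and ThrottledFetcher changes stop threading minimumMillisecondsPerBytePerServer, maxOpenConnectionsPerServer, and minimumMillisecondsPerFetchPerServer through every call chain; instead the connector registers them once as a ThrottleSpec and lets the framework's throttle groups gate connections and fetches. The following is a condensed walk-through assembled from the hunks above, for orientation only; all names are the ones used in the patch, and the InterruptedException handling that the patch wraps into ManifoldCFException is omitted for brevity.

// 1. Registration, once per connector initialization:
IThrottleGroups tg = ThrottleGroupsFactory.make(currentContext);
tg.createOrUpdateThrottleGroup(rssThrottleGroupType, throttleGroupName,
  new ThrottleSpec(maxOpenConnectionsPerServer,
    minimumMillisecondsPerFetchPerServer, minimumMillisecondsPerBytePerServer));

// 2. Per-server connection throttler, obtained lazily in ThrottledFetcher.createConnection():
IConnectionThrottler server = tg.obtainConnectionThrottler(
  RSSConnector.rssThrottleGroupType, throttleGroupName, new String[]{serverName});

// 3. Per-connection and per-fetch gating, in the ThrottledConnection constructor and beginFetch():
int result = server.waitConnectionAvailable();
if (result != IConnectionThrottler.CONNECTION_FROM_CREATION)
  throw new IllegalStateException("Got back unexpected value from waitConnectionAvailable() of "+result);
IFetchThrottler fetchThrottler = server.getNewConnectionFetchThrottler();
if (!fetchThrottler.obtainFetchDocumentPermission())
  throw new IllegalStateException("obtainFetchDocumentPermission() had unexpected return value");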
diff --git a/connectors/rss/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/rss/Robots.java b/connectors/rss/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/rss/Robots.java
index d9c0e56..1ff565d 100644
--- a/connectors/rss/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/rss/Robots.java
+++ b/connectors/rss/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/rss/Robots.java
@@ -109,10 +109,9 @@
   *@param pathString is the path (non-query) part of the URL
   *@return true if fetch is allowed, false otherwise.
   */
-  public boolean isFetchAllowed(String protocol, int port, String hostName, String pathString,
+  public boolean isFetchAllowed(IThreadContext threadContext, String throttleGroupName,
+    String protocol, int port, String hostName, String pathString,
     String userAgent, String from,
-    double minimumMillisecondsPerBytePerServer, int maxOpenConnectionsPerServer,
-    long minimumMillisecondsPerFetchPerServer,
     String proxyHost, int proxyPort, String proxyAuthDomain, String proxyAuthUsername, String proxyAuthPassword,
     IVersionActivity activities, int connectionLimit)
     throws ManifoldCFException, ServiceInterruption
@@ -134,9 +133,9 @@
       }
     }
 
-    return host.isFetchAllowed(System.currentTimeMillis(),pathString,
+    return host.isFetchAllowed(threadContext,throttleGroupName,
+      System.currentTimeMillis(),pathString,
       userAgent,from,
-      minimumMillisecondsPerBytePerServer,maxOpenConnectionsPerServer,minimumMillisecondsPerFetchPerServer,
       proxyHost, proxyPort,proxyAuthDomain,proxyAuthUsername,proxyAuthPassword,activities,connectionLimit);
   }
 
@@ -257,10 +256,9 @@
     *@param pathString is the path string to check.
     *@return true if crawling is allowed, false otherwise.
     */
-    public boolean isFetchAllowed(long currentTime, String pathString,
+    public boolean isFetchAllowed(IThreadContext threadContext, String throttleGroupName,
+      long currentTime, String pathString,
       String userAgent, String from,
-      double minimumMillisecondsPerBytePerServer, int maxOpenConnectionsPerServer,
-      long minimumMillisecondsPerFetchPerServer,
       String proxyHost, int proxyPort, String proxyAuthDomain, String proxyAuthUsername, String proxyAuthPassword,
       IVersionActivity activities, int connectionLimit)
       throws ServiceInterruption, ManifoldCFException
@@ -323,9 +321,7 @@
 
         if (readingRobots)
           // This doesn't need to be synchronized because readingRobots blocks all other threads from getting at this object
-          makeValid(currentTime,userAgent,from,
-          minimumMillisecondsPerBytePerServer,maxOpenConnectionsPerServer,
-          minimumMillisecondsPerFetchPerServer,
+          makeValid(threadContext,throttleGroupName,currentTime,userAgent,from,
           proxyHost, proxyPort, proxyAuthDomain, proxyAuthUsername, proxyAuthPassword,
           hostName, activities, connectionLimit);
 
@@ -435,9 +431,8 @@
     /** Initialize the record.  This method reads the robots file on the specified protocol/host/port,
     * and parses it according to the rules.
     */
-    protected void makeValid(long currentTime, String userAgent, String from,
-      double minimumMillisecondsPerBytePerServer, int maxOpenConnectionsPerServer,
-      long minimumMillisecondsPerFetchPerServer,
+    protected void makeValid(IThreadContext threadContext, String throttleGroupName,
+      long currentTime, String userAgent, String from,
       String proxyHost, int proxyPort, String proxyAuthDomain, String proxyAuthUsername, String proxyAuthPassword,
       String hostName, IVersionActivity activities, int connectionLimit)
       throws ServiceInterruption, ManifoldCFException
@@ -445,8 +440,8 @@
       invalidTime = currentTime + 24L * 60L * 60L * 1000L;
 
       // Do the fetch
-      IThrottledConnection connection = fetcher.createConnection(hostName,minimumMillisecondsPerBytePerServer,
-        maxOpenConnectionsPerServer,minimumMillisecondsPerFetchPerServer,connectionLimit,ROBOT_TIMEOUT_MILLISECONDS,
+      IThrottledConnection connection = fetcher.createConnection(threadContext,throttleGroupName,
+        hostName,connectionLimit,ROBOT_TIMEOUT_MILLISECONDS,
         proxyHost,proxyPort,proxyAuthDomain,proxyAuthUsername,proxyAuthPassword);
       try
       {
diff --git a/connectors/rss/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/rss/ThrottledFetcher.java b/connectors/rss/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/rss/ThrottledFetcher.java
index 43bf168..83a0e34 100644
--- a/connectors/rss/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/rss/ThrottledFetcher.java
+++ b/connectors/rss/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/rss/ThrottledFetcher.java
@@ -20,6 +20,7 @@
 
 import org.apache.manifoldcf.core.interfaces.*;
 import org.apache.manifoldcf.core.common.XThreadInputStream;
+import org.apache.manifoldcf.core.common.InterruptibleSocketFactory;
 import org.apache.manifoldcf.agents.interfaces.*;
 import org.apache.manifoldcf.crawler.interfaces.*;
 import org.apache.manifoldcf.crawler.system.Logging;
@@ -87,9 +88,9 @@
   /** This is the lock object for that global handle counter */
   protected static Integer globalHandleCounterLock = new Integer(0);
 
-  /** This hash maps the server string (without port) to a server object, where
+  /** This hash maps the server string (without port) to a pool throttling object, where
   * we can track the statistics and make sure we throttle appropriately */
-  protected Map serverMap = new HashMap();
+  protected final Map<String,IConnectionThrottler> serverMap = new HashMap<String,IConnectionThrottler>();
 
   /** Reference count for how many connections to this pool there are */
   protected int refCount = 0;
@@ -150,35 +151,25 @@
 
   /** Establish a connection to a specified URL.
   * @param serverName is the FQDN of the server, e.g. foo.metacarta.com
-  * @param minimumMillisecondsPerBytePerServer is the average number of milliseconds to wait
-  *       between bytes, on
-  *       average, over all streams reading from this server.  That means that the
-  *       stream will block on fetch until the number of bytes being fetched, done
-  *       in the average time interval required for that fetch, would not exceed
-  *       the desired bandwidth.
-  * @param minimumMillisecondsPerFetchPerServer is the number of milliseconds
-  *        between fetches, as a minimum, on a per-server basis.  Set
-  *        to zero for no limit.
-  * @param maxOpenConnectionsPerServer is the maximum number of open connections to allow for a single server.
-  *        If more than this number of connections would need to be open, then this connection request will block
-  *        until this number will no longer be exceeded.
   * @param connectionLimit is the maximum desired outstanding connections at any one time.
   * @param connectionTimeoutMilliseconds is the number of milliseconds to wait for the connection before timing out.
   */
-  public synchronized IThrottledConnection createConnection(String serverName, double minimumMillisecondsPerBytePerServer,
-    int maxOpenConnectionsPerServer, long minimumMillisecondsPerFetchPerServer, int connectionLimit, int connectionTimeoutMilliseconds,
+  public synchronized IThrottledConnection createConnection(IThreadContext threadContext, String throttleGroupName,
+    String serverName, int connectionLimit, int connectionTimeoutMilliseconds,
     String proxyHost, int proxyPort, String proxyAuthDomain, String proxyAuthUsername, String proxyAuthPassword)
     throws ManifoldCFException, ServiceInterruption
   {
-    Server server;
-    server = (Server)serverMap.get(serverName);
+    IConnectionThrottler server;
+    server = serverMap.get(serverName);
     if (server == null)
     {
-      server = new Server(serverName);
+      // Create a connection throttler for this server
+      IThrottleGroups tg = ThrottleGroupsFactory.make(threadContext);
+      server = tg.obtainConnectionThrottler(RSSConnector.rssThrottleGroupType, throttleGroupName, new String[]{serverName});
       serverMap.put(serverName,server);
     }
 
-    return new ThrottledConnection(server,minimumMillisecondsPerBytePerServer,maxOpenConnectionsPerServer,minimumMillisecondsPerFetchPerServer,
+    return new ThrottledConnection(serverName, server,
       connectionTimeoutMilliseconds,connectionLimit,
       proxyHost,proxyPort,proxyAuthDomain,proxyAuthUsername,proxyAuthPassword);
   }
@@ -205,14 +196,8 @@
     refCount--;
     if (refCount == 0)
     {
-      // Close all the servers one by one
-      Iterator iter = serverMap.keySet().iterator();
-      while (iter.hasNext())
-      {
-        String serverName = (String)iter.next();
-        Server server = (Server)serverMap.get(serverName);
-        server.discard();
-      }
+      // Since we don't have any actual pools here, this can be a no-op for now
+      // MHL
       serverMap.clear();
     }
   }
@@ -221,14 +206,12 @@
   */
   protected static class ThrottledConnection implements IThrottledConnection
   {
-    /** The connection bandwidth we want */
-    protected final double minimumMillisecondsPerBytePerServer;
-    /** The maximum open connections per server */
-    protected final int maxOpenConnectionsPerServer;
-    /** The minimum time between fetches */
-    protected final long minimumMillisecondsPerFetchPerServer;
-    /** The server object we use to track connections and fetches. */
-    protected final Server server;
+    /** The server fqdn */
+    protected final String serverName;
+    /** The throttling object we use to track connections */
+    protected final IConnectionThrottler connectionThrottler;
+    /** The throttling object we use to track fetches */
+    protected final IFetchThrottler fetchThrottler;
     /** Connection timeout in milliseconds */
     protected final int connectionTimeoutMilliseconds;
     /** The client connection manager */
@@ -258,15 +241,14 @@
     
     /** Constructor.
     */
-    public ThrottledConnection(Server server, double minimumMillisecondsPerBytePerServer, int maxOpenConnectionsPerServer,
-      long minimumMillisecondsPerFetchPerServer, int connectionTimeoutMilliseconds, int connectionLimit,
+    public ThrottledConnection(String serverName,
+      IConnectionThrottler connectionThrottler,
+      int connectionTimeoutMilliseconds, int connectionLimit,
       String proxyHost, int proxyPort, String proxyAuthDomain, String proxyAuthUsername, String proxyAuthPassword)
       throws ManifoldCFException
     {
-      this.minimumMillisecondsPerBytePerServer = minimumMillisecondsPerBytePerServer;
-      this.maxOpenConnectionsPerServer = maxOpenConnectionsPerServer;
-      this.minimumMillisecondsPerFetchPerServer = minimumMillisecondsPerFetchPerServer;
-      this.server = server;
+      this.serverName = serverName;
+      this.connectionThrottler = connectionThrottler;
       this.connectionTimeoutMilliseconds = connectionTimeoutMilliseconds;
 
       // Create the https scheme for this connection
@@ -329,7 +311,17 @@
       httpClient = localHttpClient;
 
       registerGlobalHandle(connectionLimit);
-      server.registerConnection(maxOpenConnectionsPerServer);
+      try
+      {
+        int result = connectionThrottler.waitConnectionAvailable();
+        if (result != IConnectionThrottler.CONNECTION_FROM_CREATION)
+          throw new IllegalStateException("Got back unexpected value from waitConnectionAvailable() of "+result);
+        fetchThrottler = connectionThrottler.getNewConnectionFetchThrottler();
+      }
+      catch (InterruptedException e)
+      {
+        throw new ManifoldCFException(e.getMessage(),ManifoldCFException.INTERRUPTED);
+      }
     }
 
     /** Begin the fetch process.
@@ -343,7 +335,8 @@
       fetchCounter = 0L;
       try
       {
-        server.beginFetch(minimumMillisecondsPerFetchPerServer);
+        if (fetchThrottler.obtainFetchDocumentPermission() == false)
+          throw new IllegalStateException("obtainFetchDocumentPermission() had unexpected return value");
       }
       catch (InterruptedException e)
       {
@@ -385,7 +378,7 @@
     {
 
       StringBuilder sb = new StringBuilder(protocol);
-      sb.append("://").append(server.getServerName());
+      sb.append("://").append(serverName);
       if (port != -1)
         sb.append(":").append(Integer.toString(port));
       sb.append(urlPath);
@@ -406,8 +399,8 @@
       if (lastModified != null)
         executeMethod.setHeader(new BasicHeader("Last-Modified",lastModified));
       // Create the execution thread.
-      methodThread = new ExecuteMethodThread(this, server,
-        minimumMillisecondsPerBytePerServer, httpClient, executeMethod);
+      methodThread = new ExecuteMethodThread(this, fetchThrottler,
+        httpClient, executeMethod);
       // Start the method thread, which will start the transaction
       try
       {
@@ -701,7 +694,6 @@
         if (methodThread != null && threadStarted)
           methodThread.abort();
         long endTime = System.currentTimeMillis();
-        server.endFetch();
 
         activities.recordActivity(new Long(startFetchTime),RSSConnector.ACTIVITY_FETCH,
           new Long(fetchCounter),myUrl,Integer.toString(statusCode),(throwable==null)?null:throwable.getMessage(),null);
@@ -748,7 +740,7 @@
     {
       // Clean up the connection pool.  This should do the necessary bookkeeping to release the one connection that's sitting there.
       connectionManager.shutdown();
-      server.releaseConnection();
+      connectionThrottler.noteConnectionDestroyed();
       releaseGlobalHandle();
     }
 
@@ -759,23 +751,20 @@
   */
   protected static class ThrottledInputstream extends InputStream
   {
-    /** Stream throttling parameters */
-    protected double minimumMillisecondsPerBytePerServer;
-    /** The throttled connection we belong to */
-    protected ThrottledConnection throttledConnection;
-    /** The server object we use to track throttling */
-    protected Server server;
+    /** Throttled connection */
+    protected final ThrottledConnection throttledConnection;
+    /** Stream throttler */
+    protected final IStreamThrottler streamThrottler;
     /** The stream we are wrapping. */
-    protected InputStream inputStream;
+    protected final InputStream inputStream;
 
     /** Constructor.
     */
-    public ThrottledInputstream(ThrottledConnection connection, Server server, InputStream is, double minimumMillisecondsPerBytePerServer)
+    public ThrottledInputstream(ThrottledConnection throttledConnection, IStreamThrottler streamThrottler, InputStream is)
     {
-      this.throttledConnection = connection;
-      this.server = server;
+      this.throttledConnection = throttledConnection;
+      this.streamThrottler = streamThrottler;
       this.inputStream = is;
-      this.minimumMillisecondsPerBytePerServer = minimumMillisecondsPerBytePerServer;
     }
 
     /** Read a byte.
@@ -838,7 +827,8 @@
     {
       try
       {
-        server.beginRead(len,minimumMillisecondsPerBytePerServer);
+        if (streamThrottler.obtainReadPermission(len) == false)
+          throw new IllegalStateException("Throttler shut down while still active");
         int amt = 0;
         try
         {
@@ -848,10 +838,10 @@
         finally
         {
           if (amt == -1)
-            server.endRead(len,0);
+            streamThrottler.releaseReadPermission(len,0);
           else
           {
-            server.endRead(len,amt);
+            streamThrottler.releaseReadPermission(len,amt);
             throttledConnection.logFetchCount(amt);
           }
         }
@@ -908,294 +898,16 @@
     public void close()
       throws IOException
     {
-      inputStream.close();
-    }
-
-  }
-
-  /** This class represents the throttling stuff kept around for a single server.
-  *
-  * In order to calculate
-  * the effective "burst" fetches per second and bytes per second, we need to have some idea what the window is.
-  * For example, a long hiatus from fetching could cause overuse of the server when fetching resumes, if the
-  * window length is too long.
-  *
-  * One solution to this problem would be to keep a list of the individual fetches as records.  Then, we could
-  * "expire" a fetch by discarding the old record.  However, this is quite memory consumptive for all but the
-  * smallest intervals.
-  *
-  * Another, better, solution is to hook into the start and end of individual fetches.  These will, presumably, occur
-  * at the fastest possible rate without long pauses spent doing something else.  The only complication is that
-  * fetches may well overlap, so we need to "reference count" the fetches to know when to reset the counters.
-  * For "fetches per second", we can simply make sure we "schedule" the next fetch at an appropriate time, rather
-  * than keep records around.  The overall rate may therefore be somewhat less than the specified rate, but that's perfectly
-  * acceptable.
-  *
-  * For the "maximum open connections" limit, the best thing would be to establish a separate MultiThreadedConnectionPool
-  * for each Server.  Then, the limit would be automatic.
-  *
-  * Some notes on the algorithms used to limit server bandwidth impact
-  * ==================================================================
-  *
-  * In a single connection case, the algorithm we'd want to use works like this.  On the first chunk of a series,
-  * the total length of time and the number of bytes are recorded.  Then, prior to each subsequent chunk, a calculation
-  * is done which attempts to hit the bandwidth target by the end of the chunk read, using the rate of the first chunk
-  * access as a way of estimating how long it will take to fetch those next n bytes.
-  *
-  * For a multi-connection case, which this is, it's harder to either come up with a good maximum bandwidth estimate,
-  * and harder still to "hit the target", because simultaneous fetches will intrude.  The strategy is therefore:
-  *
-  * 1) The first chunk of any series should proceed without interference from other connections to the same server.
-  *    The goal here is to get a decent quality estimate without any possibility of overwhelming the server.
-  *
-  * 2) The bandwidth of the first chunk is treated as the "maximum bandwidth per connection".  That is, if other
-  *    connections are going on, we can presume that each connection will use at most the bandwidth that the first fetch
-  *    took.  Thus, by generating end-time estimates based on this number, we are actually being conservative and
-  *    using less server bandwidth.
-  *
-  * 3) For chunks that have started but not finished, we keep track of their size and estimated elapsed time in order to schedule when
-  *    new chunks from other connections can start.
-  *
-  */
-  protected class Server
-  {
-    /** The fqdn of the server */
-    protected String serverName;
-    /** This is the time of the next allowed fetch (in ms since epoch) */
-    protected long nextFetchTime = 0L;
-
-    // Bandwidth throttling variables
-    /** Reference count for bandwidth variables */
-    protected int refCount = 0;
-    /** The inverse rate estimate of the first fetch, in ms/byte */
-    protected double rateEstimate = 0.0;
-    /** Flag indicating whether a rate estimate is needed */
-    protected boolean estimateValid = false;
-    /** Flag indicating whether rate estimation is in progress yet */
-    protected boolean estimateInProgress = false;
-    /** The start time of this series */
-    protected long seriesStartTime = -1L;
-    /** Total actual bytes read in this series; this includes fetches in progress */
-    protected long totalBytesRead = -1L;
-
-    /** This object is used to gate access while the first chunk is being read */
-    protected Integer firstChunkLock = new Integer(0);
-
-    /** Outstanding connection counter */
-    protected int outstandingConnections = 0;
-
-    /** Constructor */
-    public Server(String serverName)
-    {
-      this.serverName = serverName;
-    }
-
-    /** Get the fqdn of the server */
-    public String getServerName()
-    {
-      return serverName;
-    }
-
-    /** Register an outstanding connection (and wait until it can be obtained before proceeding) */
-    public synchronized void registerConnection(int maxOutstandingConnections)
-      throws ManifoldCFException
-    {
       try
       {
-        while (outstandingConnections >= maxOutstandingConnections)
-        {
-          wait();
-        }
-        outstandingConnections++;
+        inputStream.close();
       }
-      catch (InterruptedException e)
+      finally
       {
-        throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+        streamThrottler.closeStream();
       }
     }
 
-    /** Release an outstanding connection back into the pool */
-    public synchronized void releaseConnection()
-    {
-      outstandingConnections--;
-      notifyAll();
-    }
-
-    /** Note the start of a fetch operation.  Call this method just before the actual stream access begins.
-    * May wait until schedule allows.
-    */
-    public void beginFetch(long minimumMillisecondsPerFetchPerServer)
-      throws InterruptedException
-    {
-      //if (Logging.connectors.isTraceEnabled())
-      //      Logging.connectors.trace("RSS: Note begin fetch for '"+serverName+"'");
-      // First, do any waiting, and reschedule as needed
-      long waitAmount = 0L;
-      long currentTime = System.currentTimeMillis();
-
-      // System.out.println("Begin fetch for server "+this.toString()+" with minimum milliseconds per fetch of "+new Long(minimumMillisecondsPerFetchPerServer).toString()+
-      //      " Current time: "+new Long(currentTime).toString()+ " Next fetch time: "+new Long(nextFetchTime).toString());
-
-      synchronized (this)
-      {
-        if (currentTime < nextFetchTime)
-        {
-          waitAmount = nextFetchTime-currentTime;
-          nextFetchTime = nextFetchTime + minimumMillisecondsPerFetchPerServer;
-        }
-        else
-          nextFetchTime = currentTime + minimumMillisecondsPerFetchPerServer;
-      }
-      if (waitAmount > 0L)
-      {
-        if (Logging.connectors.isDebugEnabled())
-          Logging.connectors.debug("RSS: Performing a fetch wait for server '"+serverName+"' for "+
-          new Long(waitAmount).toString()+" ms.");
-        ManifoldCF.sleep(waitAmount);
-      }
-
-      // System.out.println("For server "+this.toString()+", at "+new Long(System.currentTimeMillis()).toString()+", the next fetch time is now "+new Long(nextFetchTime).toString());
-
-      synchronized (this)
-      {
-        if (refCount == 0)
-        {
-          // Now, reset bandwidth throttling counters
-          estimateValid = false;
-          rateEstimate = 0.0;
-          totalBytesRead = 0L;
-          estimateInProgress = false;
-          seriesStartTime = -1L;
-        }
-        refCount++;
-      }
-      //if (Logging.connectors.isTraceEnabled())
-      //      Logging.connectors.trace("RSS: Begin fetch noted for '"+serverName+"'");
-
-    }
-
-    /** Note the end of a fetch operation.  Call this method just after the fetch completes.
-    */
-    public void endFetch()
-    {
-      //if (Logging.connectors.isTraceEnabled())
-      //      Logging.connectors.trace("RSS: Note end fetch for '"+serverName+"'");
-
-      synchronized (this)
-      {
-        refCount--;
-      }
-
-      //if (Logging.connectors.isTraceEnabled())
-      //      Logging.connectors.trace("RSS: End fetch noted for '"+serverName+"'");
-
-    }
-
-    /** Note the start of an individual byte read of a specified size.  Call this method just before the
-    * read request takes place.  Performs the necessary delay prior to reading specified number of bytes from the server.
-    */
-    public void beginRead(int byteCount, double minimumMillisecondsPerBytePerServer)
-      throws InterruptedException
-    {
-      //if (Logging.connectors.isTraceEnabled())
-      //      Logging.connectors.trace("RSS: Note begin read for '"+serverName+"'");
-
-      long currentTime = System.currentTimeMillis();
-
-      synchronized (firstChunkLock)
-      {
-        while (estimateInProgress)
-          firstChunkLock.wait();
-        if (estimateValid == false)
-        {
-          seriesStartTime = currentTime;
-          estimateInProgress = true;
-          // Add these bytes to the estimated total
-          synchronized (this)
-          {
-            totalBytesRead += (long)byteCount;
-          }
-          // Exit early; this thread isn't going to do any waiting
-          //if (Logging.connectors.isTraceEnabled())
-          //      Logging.connectors.trace("RSS: Read begin noted; gathering stats for '"+serverName+"'");
-
-          return;
-        }
-      }
-
-      long waitTime = 0L;
-      synchronized (this)
-      {
-        // Add these bytes to the estimated total
-        totalBytesRead += (long)byteCount;
-
-        // Estimate the time this read will take, and wait accordingly
-        long estimatedTime = (long)(rateEstimate * (double)byteCount);
-
-        // Figure out how long the total byte count should take, to meet the constraint
-        long desiredEndTime = seriesStartTime + (long)(((double)totalBytesRead) * minimumMillisecondsPerBytePerServer);
-
-        // The wait time is the different between our desired end time, minus the estimated time to read the data, and the
-        // current time.  But it can't be negative.
-        waitTime = (desiredEndTime - estimatedTime) - currentTime;
-      }
-
-      if (waitTime > 0L)
-      {
-        if (Logging.connectors.isDebugEnabled())
-          Logging.connectors.debug("RSS: Performing a read wait on server '"+serverName+"' of "+
-          new Long(waitTime).toString()+" ms.");
-        ManifoldCF.sleep(waitTime);
-      }
-
-      //if (Logging.connectors.isTraceEnabled())
-      //      Logging.connectors.trace("RSS: Begin read noted for '"+serverName+"'");
-
-    }
-
-    /** Note the end of an individual read from the server.  Call this just after an individual read completes.
-    * Pass the actual number of bytes read to the method.
-    */
-    public void endRead(int originalCount, int actualCount)
-    {
-      //if (Logging.connectors.isTraceEnabled())
-      //      Logging.connectors.trace("RSS: Note end read for '"+serverName+"'");
-
-      long currentTime = System.currentTimeMillis();
-
-      synchronized (this)
-      {
-        totalBytesRead = totalBytesRead + (long)actualCount - (long)originalCount;
-      }
-
-      // Only one thread should get here if it's the first chunk, but we synchronize to be sure
-      synchronized (firstChunkLock)
-      {
-        if (estimateInProgress)
-        {
-          if (actualCount == 0)
-            // Didn't actually get any bytes, so use 0.0
-            rateEstimate = 0.0;
-          else
-            rateEstimate = ((double)(currentTime - seriesStartTime))/(double)actualCount;
-          estimateValid = true;
-          estimateInProgress = false;
-          firstChunkLock.notifyAll();
-        }
-      }
-
-      //if (Logging.connectors.isTraceEnabled())
-      //      Logging.connectors.trace("RSS: End read noted for '"+serverName+"'");
-
-    }
-
-    /** Discard this server.
-    */
-    public void discard()
-    {
-      // Nothing needed anymore
-    }
-
   }
 
   /** This thread does the actual socket communication with the server.
@@ -1218,10 +930,8 @@
   {
     /** The connection */
     protected final ThrottledConnection theConnection;
-    /** The connection bandwidth we want */
-    protected final double minimumMillisecondsPerBytePerServer;
-    /** The server object we use to track connections and fetches. */
-    protected final Server server;
+    /** The fetch throttler */
+    protected final IFetchThrottler fetchThrottler;
     /** Client and method, all preconfigured */
     protected final HttpClient httpClient;
     protected final HttpRequestBase executeMethod;
@@ -1229,6 +939,7 @@
     protected HttpResponse response = null;
     protected Throwable responseException = null;
     protected XThreadInputStream threadStream = null;
+    protected InputStream bodyStream = null;
     protected boolean streamCreated = false;
     protected Throwable streamException = null;
 
@@ -1238,15 +949,13 @@
 
     protected Throwable generalException = null;
     
-    public ExecuteMethodThread(ThrottledConnection theConnection, Server server,
-      double minimumMillisecondsPerBytePerServer,
+    public ExecuteMethodThread(ThrottledConnection theConnection, IFetchThrottler fetchThrottler,
       HttpClient httpClient, HttpRequestBase executeMethod)
     {
       super();
       setDaemon(true);
       this.theConnection = theConnection;
-      this.server = server;
-      this.minimumMillisecondsPerBytePerServer = minimumMillisecondsPerBytePerServer;
+      this.fetchThrottler = fetchThrottler;
       this.httpClient = httpClient;
       this.executeMethod = executeMethod;
     }
@@ -1295,10 +1004,10 @@
               {
                 try
                 {
-                  InputStream bodyStream = response.getEntity().getContent();
+                  bodyStream = response.getEntity().getContent();
                   if (bodyStream != null)
                   {
-                    bodyStream = new ThrottledInputstream(theConnection,server,bodyStream,minimumMillisecondsPerBytePerServer);
+                    bodyStream = new ThrottledInputstream(theConnection,fetchThrottler.createFetchStream(),bodyStream);
                     threadStream = new XThreadInputStream(bodyStream);
                   }
                   streamCreated = true;
@@ -1336,6 +1045,17 @@
         }
         finally
         {
+          if (bodyStream != null)
+          {
+            try
+            {
+              bodyStream.close();
+            }
+            catch (IOException e)
+            {
+            }
+            bodyStream = null;
+          }
           synchronized (this)
           {
             try
@@ -1457,176 +1177,5 @@
 
   }
 
-  /** SSL Socket factory which wraps another socket factory but allows timeout on socket
-  * creation.
-  */
-  protected static class InterruptibleSocketFactory extends javax.net.ssl.SSLSocketFactory
-  {
-    protected final javax.net.ssl.SSLSocketFactory wrappedFactory;
-    protected final long connectTimeoutMilliseconds;
-    
-    public InterruptibleSocketFactory(javax.net.ssl.SSLSocketFactory wrappedFactory, long connectTimeoutMilliseconds)
-    {
-      this.wrappedFactory = wrappedFactory;
-      this.connectTimeoutMilliseconds = connectTimeoutMilliseconds;
-    }
-
-    @Override
-    public Socket createSocket()
-      throws IOException
-    {
-      // Socket isn't open
-      return wrappedFactory.createSocket();
-    }
-    
-    @Override
-    public Socket createSocket(String host, int port)
-      throws IOException, UnknownHostException
-    {
-      return fireOffThread(InetAddress.getByName(host),port,null,-1);
-    }
-
-    @Override
-    public Socket createSocket(InetAddress host, int port)
-      throws IOException
-    {
-      return fireOffThread(host,port,null,-1);
-    }
-    
-    @Override
-    public Socket createSocket(String host, int port, InetAddress localHost, int localPort)
-      throws IOException, UnknownHostException
-    {
-      return fireOffThread(InetAddress.getByName(host),port,localHost,localPort);
-    }
-    
-    @Override
-    public Socket createSocket(InetAddress address, int port, InetAddress localAddress, int localPort)
-      throws IOException
-    {
-      return fireOffThread(address,port,localAddress,localPort);
-    }
-    
-    @Override
-    public Socket createSocket(Socket s, String host, int port, boolean autoClose)
-      throws IOException
-    {
-      // Socket's already open
-      return wrappedFactory.createSocket(s,host,port,autoClose);
-    }
-    
-    @Override
-    public String[] getDefaultCipherSuites()
-    {
-      return wrappedFactory.getDefaultCipherSuites();
-    }
-    
-    @Override
-    public String[] getSupportedCipherSuites()
-    {
-      return wrappedFactory.getSupportedCipherSuites();
-    }
-    
-    protected Socket fireOffThread(InetAddress address, int port, InetAddress localHost, int localPort)
-      throws IOException
-    {
-      SocketCreateThread thread = new SocketCreateThread(wrappedFactory,address,port,localHost,localPort);
-      thread.start();
-      try
-      {
-        // Wait for thread to complete for only a certain amount of time!
-        thread.join(connectTimeoutMilliseconds);
-        // If join() times out, then the thread is going to still be alive.
-        if (thread.isAlive())
-        {
-          // Kill the thread - not that this will necessarily work, but we need to try
-          thread.interrupt();
-          throw new ConnectTimeoutException("Secure connection timed out");
-        }
-        // The thread terminated.  Throw an error if there is one, otherwise return the result.
-        Throwable t = thread.getException();
-        if (t != null)
-        {
-          if (t instanceof java.net.SocketTimeoutException)
-            throw (java.net.SocketTimeoutException)t;
-          else if (t instanceof ConnectTimeoutException)
-            throw (ConnectTimeoutException)t;
-          else if (t instanceof InterruptedIOException)
-            throw (InterruptedIOException)t;
-          else if (t instanceof IOException)
-            throw (IOException)t;
-          else if (t instanceof Error)
-            throw (Error)t;
-          else if (t instanceof RuntimeException)
-            throw (RuntimeException)t;
-          throw new Error("Received an unexpected exception: "+t.getMessage(),t);
-        }
-        return thread.getResult();
-      }
-      catch (InterruptedException e)
-      {
-        throw new InterruptedIOException("Interrupted: "+e.getMessage());
-      }
-
-    }
-    
-  }
-  
-  /** Create a secure socket in a thread, so that we can "give up" after a while if the socket fails to connect.
-  */
-  protected static class SocketCreateThread extends Thread
-  {
-    // Socket factory
-    protected javax.net.ssl.SSLSocketFactory socketFactory;
-    protected InetAddress host;
-    protected int port;
-    protected InetAddress clientHost;
-    protected int clientPort;
-
-    // The return socket
-    protected Socket rval = null;
-    // The return error
-    protected Throwable throwable = null;
-
-    /** Create the thread */
-    public SocketCreateThread(javax.net.ssl.SSLSocketFactory socketFactory,
-      InetAddress host,
-      int port,
-      InetAddress clientHost,
-      int clientPort)
-    {
-      this.socketFactory = socketFactory;
-      this.host = host;
-      this.port = port;
-      this.clientHost = clientHost;
-      this.clientPort = clientPort;
-      setDaemon(true);
-    }
-
-    public void run()
-    {
-      try
-      {
-        if (clientHost == null)
-          rval = socketFactory.createSocket(host,port);
-        else
-          rval = socketFactory.createSocket(host,port,clientHost,clientPort);
-      }
-      catch (Throwable e)
-      {
-        throwable = e;
-      }
-    }
-
-    public Throwable getException()
-    {
-      return throwable;
-    }
-
-    public Socket getResult()
-    {
-      return rval;
-    }
-  }
 
 }
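
For reference, the connection/fetch/stream throttler lifecycle that the rewritten ThrottledFetcher relies on can be summarized as follows.  This is a minimal sketch using only the calls that appear in the hunks above; the wrapper class, the method, the wildcard import, and the way the throttle group type is passed in are illustrative assumptions rather than part of the patch.

// Editor's sketch of the throttler lifecycle used by the rewritten ThrottledFetcher.
// Assumptions: the throttling interfaces live in org.apache.manifoldcf.core.interfaces,
// and throttleGroupType stands in for RSSConnector.rssThrottleGroupType.
import org.apache.manifoldcf.core.interfaces.*;

public class ThrottlerLifecycleSketch
{
  public static void fetchOnce(IThreadContext threadContext, String throttleGroupType,
    String throttleGroupName, String serverName, int byteCount)
    throws ManifoldCFException, InterruptedException
  {
    // Obtain (or create) the per-server connection throttler for this throttle group
    IThrottleGroups tg = ThrottleGroupsFactory.make(threadContext);
    IConnectionThrottler connectionThrottler =
      tg.obtainConnectionThrottler(throttleGroupType, throttleGroupName, new String[]{serverName});

    // Block until a new connection may be created
    if (connectionThrottler.waitConnectionAvailable() != IConnectionThrottler.CONNECTION_FROM_CREATION)
      throw new IllegalStateException("Expected permission to create a new connection");
    IFetchThrottler fetchThrottler = connectionThrottler.getNewConnectionFetchThrottler();
    try
    {
      // Block until one fetch may proceed on this connection
      if (!fetchThrottler.obtainFetchDocumentPermission())
        throw new IllegalStateException("Fetch permission unexpectedly denied");

      // Throttle reads of the response body
      IStreamThrottler streamThrottler = fetchThrottler.createFetchStream();
      try
      {
        if (streamThrottler.obtainReadPermission(byteCount))
        {
          int actuallyRead = byteCount;  // stand-in for the real InputStream.read() result
          streamThrottler.releaseReadPermission(byteCount, actuallyRead);
        }
      }
      finally
      {
        streamThrottler.closeStream();
      }
    }
    finally
    {
      // Tell the throttler that this connection no longer exists
      connectionThrottler.noteConnectionDestroyed();
    }
  }
}

In the connector code above, the waitConnectionAvailable()/getNewConnectionFetchThrottler() pair lives in the ThrottledConnection constructor, beginFetch() maps to obtainFetchDocumentPermission(), ThrottledInputstream handles the read permissions, and noteConnectionDestroyed() replaces the old releaseConnection() call when the connection is shut down.
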
diff --git a/connectors/rss/pom.xml b/connectors/rss/pom.xml
index 83c9df5..8884e1c 100644
--- a/connectors/rss/pom.xml
+++ b/connectors/rss/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.manifoldcf</groupId>
     <artifactId>mcf-connectors</artifactId>
-    <version>1.2-SNAPSHOT</version>
+    <version>1.5-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
 
@@ -30,14 +30,22 @@
   <build>
     <sourceDirectory>${basedir}/connector/src/main/java</sourceDirectory>
     <testSourceDirectory>${basedir}/connector/src/test/java</testSourceDirectory>
+    <resources>
+      <resource>
+        <directory>${basedir}/connector/src/main/native2ascii</directory>
+        <includes>
+          <include>**/*.properties</include>
+        </includes>
+      </resource>
+    </resources> 
+
     <plugins>
       <plugin>
         <groupId>org.codehaus.mojo</groupId>
         <artifactId>native2ascii-maven-plugin</artifactId>
-        <version>1.0-alpha-1</version>
+        <version>1.0-beta-1</version>
         <configuration>
-            <dest>target/classes</dest>
-            <src>connector/src/main/native2ascii</src>
+            <workDir>target/classes</workDir>
         </configuration>
         <executions>
             <execution>
@@ -47,7 +55,9 @@
                 </goals>
                 <configuration>
                     <encoding>UTF8</encoding>
-                    <includes>**/*.properties</includes>
+                    <includes>
+                      <include>**/*.properties</include>
+                    </includes>
                 </configuration>
             </execution>
         </executions>
@@ -91,7 +101,7 @@
     <dependency>
       <groupId>org.apache.httpcomponents</groupId>
       <artifactId>httpclient</artifactId>
-      <version>${httpcomponent.version}</version>
+      <version>${httpcomponent.httpclient.version}</version>
     </dependency>
     <dependency>
       <groupId>commons-logging</groupId>
diff --git a/connectors/sharepoint/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/sharepoint/Messages.java b/connectors/sharepoint/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/sharepoint/Messages.java
new file mode 100644
index 0000000..5385b1c
--- /dev/null
+++ b/connectors/sharepoint/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/sharepoint/Messages.java
@@ -0,0 +1,141 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.authorities.authorities.sharepoint;
+
+import java.util.Locale;
+import java.util.Map;
+import org.apache.manifoldcf.core.interfaces.ManifoldCFException;
+import org.apache.manifoldcf.core.interfaces.IHTTPOutput;
+
+public class Messages extends org.apache.manifoldcf.ui.i18n.Messages
+{
+  public static final String DEFAULT_BUNDLE_NAME="org.apache.manifoldcf.authorities.authorities.sharepoint.common";
+  public static final String DEFAULT_PATH_NAME="org.apache.manifoldcf.authorities.authorities.sharepoint";
+  
+  /** Constructor - do not instantiate
+  */
+  protected Messages()
+  {
+  }
+  
+  public static String getString(Locale locale, String messageKey)
+  {
+    return getString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getAttributeString(Locale locale, String messageKey)
+  {
+    return getAttributeString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getBodyString(Locale locale, String messageKey)
+  {
+    return getBodyString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getAttributeJavascriptString(Locale locale, String messageKey)
+  {
+    return getAttributeJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getBodyJavascriptString(Locale locale, String messageKey)
+  {
+    return getBodyJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, null);
+  }
+
+  public static String getString(Locale locale, String messageKey, Object[] args)
+  {
+    return getString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+
+  public static String getAttributeString(Locale locale, String messageKey, Object[] args)
+  {
+    return getAttributeString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+  
+  public static String getBodyString(Locale locale, String messageKey, Object[] args)
+  {
+    return getBodyString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+
+  public static String getAttributeJavascriptString(Locale locale, String messageKey, Object[] args)
+  {
+    return getAttributeJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+
+  public static String getBodyJavascriptString(Locale locale, String messageKey, Object[] args)
+  {
+    return getBodyJavascriptString(DEFAULT_BUNDLE_NAME, locale, messageKey, args);
+  }
+
+  // More general methods which allow bundlenames and class loaders to be specified.
+  
+  public static String getString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getString(Messages.class, bundleName, locale, messageKey, args);
+  }
+
+  public static String getAttributeString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getAttributeString(Messages.class, bundleName, locale, messageKey, args);
+  }
+
+  public static String getBodyString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getBodyString(Messages.class, bundleName, locale, messageKey, args);
+  }
+  
+  public static String getAttributeJavascriptString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getAttributeJavascriptString(Messages.class, bundleName, locale, messageKey, args);
+  }
+
+  public static String getBodyJavascriptString(String bundleName, Locale locale, String messageKey, Object[] args)
+  {
+    return getBodyJavascriptString(Messages.class, bundleName, locale, messageKey, args);
+  }
+
+  // Resource output
+  
+  public static void outputResource(IHTTPOutput output, Locale locale, String resourceKey,
+    Map<String,String> substitutionParameters, boolean mapToUpperCase)
+    throws ManifoldCFException
+  {
+    outputResource(output,Messages.class,DEFAULT_PATH_NAME,locale,resourceKey,
+      substitutionParameters,mapToUpperCase);
+  }
+  
+  public static void outputResourceWithVelocity(IHTTPOutput output, Locale locale, String resourceKey,
+    Map<String,String> substitutionParameters, boolean mapToUpperCase)
+    throws ManifoldCFException
+  {
+    outputResourceWithVelocity(output,Messages.class,DEFAULT_BUNDLE_NAME,DEFAULT_PATH_NAME,locale,resourceKey,
+      substitutionParameters,mapToUpperCase);
+  }
+
+  public static void outputResourceWithVelocity(IHTTPOutput output, Locale locale, String resourceKey,
+    Map<String,Object> contextObjects)
+    throws ManifoldCFException
+  {
+    outputResourceWithVelocity(output,Messages.class,DEFAULT_BUNDLE_NAME,DEFAULT_PATH_NAME,locale,resourceKey,
+      contextObjects);
+  }
+  
+}
+
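
The Messages class above follows the standard ManifoldCF UI i18n pattern: it pins the resource bundle and resource path for the SharePoint authority and delegates to the framework helpers.  A minimal usage sketch follows; the message key, resource name, and substitution parameter are hypothetical placeholders rather than keys added by this patch.

// Hypothetical usage of the SharePoint authority Messages class defined above.
// "SharePointAuthority.Server" and "editConnection.js" are illustrative names only.
package org.apache.manifoldcf.authorities.authorities.sharepoint;

import java.util.HashMap;
import java.util.Locale;
import java.util.Map;
import org.apache.manifoldcf.core.interfaces.IHTTPOutput;
import org.apache.manifoldcf.core.interfaces.ManifoldCFException;

public class MessagesUsageSketch
{
  public static void outputEditHeader(IHTTPOutput out, Locale locale)
    throws ManifoldCFException
  {
    // Plain localized string lookup from the authority's common bundle
    String label = Messages.getString(locale, "SharePointAuthority.Server");

    // Velocity-rendered resource; the final boolean is the mapToUpperCase flag
    Map<String,String> params = new HashMap<String,String>();
    params.put("ServerLabel", label);
    Messages.outputResourceWithVelocity(out, locale, "editConnection.js", params, true);
  }
}
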
diff --git a/connectors/sharepoint/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/sharepoint/SPSProxyHelper.java b/connectors/sharepoint/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/sharepoint/SPSProxyHelper.java
new file mode 100644
index 0000000..9c5be78
--- /dev/null
+++ b/connectors/sharepoint/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/sharepoint/SPSProxyHelper.java
@@ -0,0 +1,643 @@
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+
+package org.apache.manifoldcf.authorities.authorities.sharepoint;
+
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.Hashtable;
+import java.util.Iterator;
+import java.util.List;
+import java.util.regex.*;
+
+import java.io.InputStream;
+
+import javax.xml.soap.*;
+
+import org.apache.manifoldcf.core.common.XMLDoc;
+import org.apache.manifoldcf.core.interfaces.ManifoldCFException;
+import org.apache.manifoldcf.agents.interfaces.ServiceInterruption;
+import org.apache.manifoldcf.authorities.system.Logging;
+
+import com.microsoft.schemas.sharepoint.dsp.*;
+import com.microsoft.schemas.sharepoint.soap.*;
+
+import org.apache.http.client.HttpClient;
+
+import org.apache.axis.EngineConfiguration;
+
+import javax.xml.namespace.QName;
+
+import org.apache.axis.message.MessageElement;
+import org.apache.axis.AxisEngine;
+import org.apache.axis.ConfigurationException;
+import org.apache.axis.Handler;
+import org.apache.axis.WSDDEngineConfiguration;
+import org.apache.axis.components.logger.LogFactory;
+import org.apache.axis.deployment.wsdd.WSDDDeployment;
+import org.apache.axis.deployment.wsdd.WSDDDocument;
+import org.apache.axis.deployment.wsdd.WSDDGlobalConfiguration;
+import org.apache.axis.encoding.TypeMappingRegistry;
+import org.apache.axis.handlers.soap.SOAPService;
+import org.apache.axis.utils.Admin;
+import org.apache.axis.utils.Messages;
+import org.apache.axis.utils.XMLUtils;
+import org.w3c.dom.Document;
+
+/**
+*
+* @author Michael Cummings
+*
+*/
+public class SPSProxyHelper {
+
+
+  public static final String HTTPCLIENT_PROPERTY = org.apache.manifoldcf.sharepoint.CommonsHTTPSender.HTTPCLIENT_PROPERTY;
+
+  private String serverUrl;
+  private String serverLocation;
+  private String decodedServerLocation;
+  private String baseUrl;
+  private String userName;
+  private String password;
+  private EngineConfiguration configuration;
+  private HttpClient httpClient;
+
+  /** Constructor.
+  *
+  * @param serverUrl is the SharePoint server URL.
+  * @param userName is the user name to connect as.
+  * @param password is the password to connect with.
+  */
+  public SPSProxyHelper( String serverUrl, String serverLocation, String decodedServerLocation, String userName, String password,
+    Class resourceClass, String configFileName, HttpClient httpClient )
+  {
+    this.serverUrl = serverUrl;
+    this.serverLocation = serverLocation;
+    this.decodedServerLocation = decodedServerLocation;
+    if (serverLocation.equals("/"))
+      baseUrl = serverUrl;
+    else
+      baseUrl = serverUrl + serverLocation;
+    this.userName = userName;
+    this.password = password;
+    this.configuration = new ResourceProvider(resourceClass,configFileName);
+    this.httpClient = httpClient;
+  }
+
+  /**
+  * Get the access tokens for a user principal.
+  */
+  public List<String> getAccessTokens( String site, String userLoginName )
+    throws ManifoldCFException
+  {
+    try
+    {
+      if ( site.compareTo("/") == 0 )
+        site = ""; // root case
+      
+      UserGroupWS userService = new UserGroupWS( baseUrl + site, userName, password, configuration, httpClient  );
+      com.microsoft.schemas.sharepoint.soap.directory.UserGroupSoap userCall = userService.getUserGroupSoapHandler( );
+
+      com.microsoft.schemas.sharepoint.soap.directory.GetUserInfoResponseGetUserInfoResult userResp = userCall.getUserInfo( userLoginName );
+      org.apache.axis.message.MessageElement[] usersList = userResp.get_any();
+
+      /* Response looks like this:
+          <GetUserInfo xmlns="http://schemas.microsoft.com/sharepoint/soap/directory/">
+             <User ID="4" Sid="S-1-5-21-2127521184-1604012920-1887927527-34577" Name="User1_Display_Name" 
+                LoginName="DOMAIN\User1_Alias" Email="User1_E-mail" 
+                Notes="Notes" IsSiteAdmin="False" IsDomainGroup="False" />
+          </GetUserInfo>
+        */
+
+      if (usersList.length != 1)
+        throw new ManifoldCFException("Bad response - expecting one outer 'GetUserInfo' node, saw "+Integer.toString(usersList.length));
+      
+      if (Logging.authorityConnectors.isDebugEnabled()){
+        Logging.authorityConnectors.debug("SharePoint authority: getUserInfo xml response: '" + usersList[0].toString() + "'");
+      }
+
+      MessageElement users = usersList[0];
+      if (!users.getElementName().getLocalName().equals("GetUserInfo"))
+        throw new ManifoldCFException("Bad response - outer node should have been 'GetUserInfo' node");
+          
+      String userID = null;
+      String userName = null;
+      
+      Iterator userIter = users.getChildElements();
+      while (userIter.hasNext())
+      {
+        MessageElement child = (MessageElement)userIter.next();
+        if (child.getElementName().getLocalName().equals("User"))
+        {
+          userID = child.getAttribute("ID");
+          userName = child.getAttribute("LoginName");
+        }
+      }
+
+      // If userID is null, no such user
+      if (userID == null)
+        return null;
+
+      List<String> accessTokens = new ArrayList<String>();
+      accessTokens.add("U"+userName);
+      
+      com.microsoft.schemas.sharepoint.soap.directory.GetGroupCollectionFromUserResponseGetGroupCollectionFromUserResult userGroupResp =
+        userCall.getGroupCollectionFromUser( userLoginName );
+      org.apache.axis.message.MessageElement[] groupsList = userGroupResp.get_any();
+      
+      /* Response looks like this:
+          <GetGroupCollectionFromUser xmlns=
+             "http://schemas.microsoft.com/sharepoint/soap/directory/">
+             <Groups>
+                <Group ID="3" Name="Group1" Description="Description" OwnerID="1" 
+                   OwnerIsUser="False" />
+                <Group ID="15" Name="Group2" Description="Description" 
+                   OwnerID="12" OwnerIsUser="True" />
+                <Group ID="16" Name="Group3" Description="Description" 
+                   OwnerID="7" OwnerIsUser="False" />
+             </Groups>
+          </GetGroupCollectionFromUser>
+        */
+
+      if (groupsList.length != 1)
+        throw new ManifoldCFException("Bad response - expecting one outer 'GetGroupCollectionFromUser' node, saw "+Integer.toString(groupsList.length));
+
+      if (Logging.authorityConnectors.isDebugEnabled()){
+        Logging.authorityConnectors.debug("SharePoint authority: getGroupCollectionFromUser xml response: '" + groupsList[0].toString() + "'");
+      }
+
+      MessageElement groups = groupsList[0];
+      if (!groups.getElementName().getLocalName().equals("GetGroupCollectionFromUser"))
+        throw new ManifoldCFException("Bad response - outer node should have been 'GetGroupCollectionFromUser' node");
+          
+      Iterator groupsIter = groups.getChildElements();
+      while (groupsIter.hasNext())
+      {
+        MessageElement child = (MessageElement)groupsIter.next();
+        if (child.getElementName().getLocalName().equals("Groups"))
+        {
+          Iterator groupIter = child.getChildElements();
+          while (groupIter.hasNext())
+          {
+            MessageElement group = (MessageElement)groupIter.next();
+            if (group.getElementName().getLocalName().equals("Group"))
+            {
+              String groupID = group.getAttribute("ID");
+              String groupName = group.getAttribute("Name");
+              // Add to the access token list
+              accessTokens.add("G"+groupName);
+            }
+          }
+        }
+      }
+
+      com.microsoft.schemas.sharepoint.soap.directory.GetRoleCollectionFromUserResponseGetRoleCollectionFromUserResult userRoleResp =
+        userCall.getRoleCollectionFromUser( userLoginName );
+      org.apache.axis.message.MessageElement[] rolesList = userRoleResp.get_any();
+
+      if (rolesList.length != 1)
+        throw new ManifoldCFException("Bad response - expecting one outer 'GetRoleCollectionFromUser' node, saw "+Integer.toString(rolesList.length));
+      
+      if (Logging.authorityConnectors.isDebugEnabled()){
+        Logging.authorityConnectors.debug("SharePoint authority: getRoleCollectionFromUser xml response: '" + rolesList[0].toString() + "'");
+      }
+
+      // Not specified in doc and must be determined experimentally
+      /*
+<ns1:GetRoleCollectionFromUser xmlns:ns1="http://schemas.microsoft.com/sharepoint/soap/directory/">
+  <ns1:Roles>
+    <ns1:Role ID="1073741825" Name="Limited Access" Description="Can view specific lists, document libraries, list items, folders, or documents when given permissions."
+      Order="160" Hidden="True" Type="Guest" BasePermissions="ViewFormPages, Open, BrowseUserInfo, UseClientIntegration, UseRemoteAPIs"/>
+  </ns1:Roles>
+</ns1:GetRoleCollectionFromUser>
+      */
+      
+      MessageElement roles = rolesList[0];
+      if (!roles.getElementName().getLocalName().equals("GetRoleCollectionFromUser"))
+        throw new ManifoldCFException("Bad response - outer node should have been 'GetRoleCollectionFromUser' node");
+          
+      Iterator rolesIter = roles.getChildElements();
+      while (rolesIter.hasNext())
+      {
+        MessageElement child = (MessageElement)rolesIter.next();
+        if (child.getElementName().getLocalName().equals("Roles"))
+        {
+          Iterator roleIter = child.getChildElements();
+          while (roleIter.hasNext())
+          {
+            MessageElement role = (MessageElement)roleIter.next();
+            if (role.getElementName().getLocalName().equals("Role"))
+            {
+              String roleID = role.getAttribute("ID");
+              String roleName = role.getAttribute("Name");
+              // Add to the access token list
+              accessTokens.add("R"+roleName);
+            }
+          }
+        }
+      }
+      
+      return accessTokens;
+    }
+    catch (java.net.MalformedURLException e)
+    {
+      throw new ManifoldCFException("Bad SharePoint url: "+e.getMessage(),e);
+    }
+    catch (javax.xml.rpc.ServiceException e)
+    {
+      if (Logging.authorityConnectors.isDebugEnabled())
+        Logging.authorityConnectors.debug("SharePoint: Got a service exception getting the access tokens for site "+site,e);
+      throw new ManifoldCFException("Service exception: "+e.getMessage(), e);
+    }
+    catch (org.apache.axis.AxisFault e)
+    {
+      if (e.getFaultCode().equals(new javax.xml.namespace.QName("http://xml.apache.org/axis/","HTTP")))
+      {
+        org.w3c.dom.Element elem = e.lookupFaultDetail(new javax.xml.namespace.QName("http://xml.apache.org/axis/","HttpErrorCode"));
+        if (elem != null)
+        {
+          elem.normalize();
+          String httpErrorCode = elem.getFirstChild().getNodeValue().trim();
+          if (httpErrorCode.equals("404"))
+          {
+            // Page did not exist
+            if (Logging.authorityConnectors.isDebugEnabled())
+              Logging.authorityConnectors.debug("SharePoint: The page at "+baseUrl+site+" did not exist");
+            throw new ManifoldCFException("The page at "+baseUrl+site+" did not exist");
+          }
+          else if (httpErrorCode.equals("401"))
+          {
+            // User did not have permission to access the usergroups service
+            if (Logging.authorityConnectors.isDebugEnabled())
+              Logging.authorityConnectors.debug("SharePoint: The user did not have access to the usergroups service for "+baseUrl+site);
+            throw new ManifoldCFException("The user did not have access to the usergroups service at "+baseUrl+site);
+          }
+          else if (httpErrorCode.equals("403"))
+            throw new ManifoldCFException("Http error "+httpErrorCode+" while reading from "+baseUrl+site+" - check IIS and SharePoint security settings! "+e.getMessage(),e);
+          else
+            throw new ManifoldCFException("Unexpected http error code "+httpErrorCode+" accessing SharePoint at "+baseUrl+site+": "+e.getMessage(),e);
+        }
+        throw new ManifoldCFException("Unknown http error occurred: "+e.getMessage(),e);
+      }
+      else if (e.getFaultCode().equals(new javax.xml.namespace.QName("http://schemas.xmlsoap.org/soap/envelope/","Server")))
+      {
+        org.w3c.dom.Element elem = e.lookupFaultDetail(new javax.xml.namespace.QName("http://schemas.microsoft.com/sharepoint/soap/","errorcode"));
+        if (elem != null)
+        {
+          elem.normalize();
+          String sharepointErrorCode = elem.getFirstChild().getNodeValue().trim();
+          if (sharepointErrorCode.equals("0x80131600"))
+          {
+            // No such user
+            return null;
+          }
+          if (Logging.authorityConnectors.isDebugEnabled())
+          {
+            org.w3c.dom.Element elem2 = e.lookupFaultDetail(new javax.xml.namespace.QName("http://schemas.microsoft.com/sharepoint/soap/","errorstring"));
+            String errorString = "";
+            if (elem2 != null)
+              errorString = elem2.getFirstChild().getNodeValue().trim();
+
+            Logging.authorityConnectors.debug("SharePoint: Getting usergroups in site "+site+" failed with unexpected SharePoint error code "+sharepointErrorCode+": "+errorString,e);
+          }
+          throw new ManifoldCFException("SharePoint server error code: "+sharepointErrorCode);
+        }
+        if (Logging.authorityConnectors.isDebugEnabled())
+          Logging.authorityConnectors.debug("SharePoint: Unknown SharePoint server error getting usergroups for site "+site+" - axis fault = "+e.getFaultCode().getLocalPart()+", detail = "+e.getFaultString(),e);
+
+        throw new ManifoldCFException("Unknown SharePoint server error: "+e.getMessage());
+      }
+
+      if (e.getFaultCode().equals(new javax.xml.namespace.QName("http://schemas.xmlsoap.org/soap/envelope/","Server.userException")))
+      {
+        String exceptionName = e.getFaultString();
+        if (exceptionName.equals("java.lang.InterruptedException"))
+          throw new ManifoldCFException("Interrupted",ManifoldCFException.INTERRUPTED);
+      }
+
+      if (Logging.authorityConnectors.isDebugEnabled())
+        Logging.authorityConnectors.debug("SharePoint: Got an unknown remote exception getting usergroups for "+site+" - axis fault = "+e.getFaultCode().getLocalPart()+", detail = "+e.getFaultString(),e);
+      throw new ManifoldCFException("Remote procedure exception: "+e.getMessage(), e);
+    }
+    catch (java.rmi.RemoteException e)
+    {
+      // We expect the axis exception to be thrown, not this generic one!
+      // So, fail hard if we see it.
+      if (Logging.authorityConnectors.isDebugEnabled())
+        Logging.authorityConnectors.debug("SharePoint: Got an unexpected remote exception getting usergroups for site "+site,e);
+      throw new ManifoldCFException("Unexpected remote procedure exception: "+e.getMessage(), e);
+    }
+  }
+
+  /** Check the connection by looking up the connecting user's info.
+  *
+  * @param site is the site path to check ("/" denotes the root site).
+  * @return true if connection OK
+  * @throws ManifoldCFException if the site could not be reached or the
+  *   user/group service call failed.
+  */
+  public boolean checkConnection( String site )
+    throws ManifoldCFException
+  {
+    try
+    {
+      if (site.equals("/"))
+        site = "";
+
+      UserGroupWS userService = new UserGroupWS( baseUrl + site, userName, password, configuration, httpClient  );
+      com.microsoft.schemas.sharepoint.soap.directory.UserGroupSoap userCall = userService.getUserGroupSoapHandler( );
+
+      // Get the info for the admin user
+      com.microsoft.schemas.sharepoint.soap.directory.GetUserInfoResponseGetUserInfoResult userResp = userCall.getUserInfo( userName );
+      org.apache.axis.message.MessageElement[] userList = userResp.get_any();
+
+      // MHL
+
+      return true;
+    }
+    catch (java.net.MalformedURLException e)
+    {
+      throw new ManifoldCFException("Bad SharePoint url: "+e.getMessage(),e);
+    }
+    catch (javax.xml.rpc.ServiceException e)
+    {
+      if (Logging.authorityConnectors.isDebugEnabled())
+        Logging.authorityConnectors.debug("SharePoint: Got a service exception checking connection",e);
+      throw new ManifoldCFException("Service exception: "+e.getMessage(), e);
+    }
+    catch (org.apache.axis.AxisFault e)
+    {
+      if (e.getFaultCode().equals(new javax.xml.namespace.QName("http://xml.apache.org/axis/","HTTP")))
+      {
+        org.w3c.dom.Element elem = e.lookupFaultDetail(new javax.xml.namespace.QName("http://xml.apache.org/axis/","HttpErrorCode"));
+        if (elem != null)
+        {
+          elem.normalize();
+          String httpErrorCode = elem.getFirstChild().getNodeValue().trim();
+          if (httpErrorCode.equals("404"))
+          {
+            // Page did not exist
+            throw new ManifoldCFException("The site at "+baseUrl+site+" did not exist");
+          }
+          else if (httpErrorCode.equals("401"))
+            throw new ManifoldCFException("User did not authenticate properly, or has insufficient permissions to access "+baseUrl+site+": "+e.getMessage(),e);
+          else if (httpErrorCode.equals("403"))
+            throw new ManifoldCFException("Http error "+httpErrorCode+" while reading from "+baseUrl+site+" - check IIS and SharePoint security settings! "+e.getMessage(),e);
+          else
+            throw new ManifoldCFException("Unexpected http error code "+httpErrorCode+" accessing SharePoint at "+baseUrl+site+": "+e.getMessage(),e);
+        }
+        throw new ManifoldCFException("Unknown http error occurred: "+e.getMessage(),e);
+      }
+      else if (e.getFaultCode().equals(new javax.xml.namespace.QName("http://schemas.xmlsoap.org/soap/envelope/","Server")))
+      {
+        org.w3c.dom.Element elem = e.lookupFaultDetail(new javax.xml.namespace.QName("http://schemas.microsoft.com/sharepoint/soap/","errorcode"));
+        if (elem != null)
+        {
+          elem.normalize();
+          String sharepointErrorCode = elem.getFirstChild().getNodeValue().trim();
+          org.w3c.dom.Element elem2 = e.lookupFaultDetail(new javax.xml.namespace.QName("http://schemas.microsoft.com/sharepoint/soap/","errorstring"));
+          String errorString = "";
+          if (elem2 != null)
+            errorString = elem2.getFirstChild().getNodeValue().trim();
+
+          throw new ManifoldCFException("Accessing site "+site+" failed with unexpected SharePoint error code "+sharepointErrorCode+": "+errorString,e);
+        }
+        throw new ManifoldCFException("Unknown SharePoint server error accessing site "+site+" - axis fault = "+e.getFaultCode().getLocalPart()+", detail = "+e.getFaultString(),e);
+      }
+
+      if (e.getFaultCode().equals(new javax.xml.namespace.QName("http://schemas.xmlsoap.org/soap/envelope/","Server.userException")))
+      {
+        String exceptionName = e.getFaultString();
+        if (exceptionName.equals("java.lang.InterruptedException"))
+          throw new ManifoldCFException("Interrupted",ManifoldCFException.INTERRUPTED);
+      }
+
+      throw new ManifoldCFException("Got an unknown remote exception accessing site "+site+" - axis fault = "+e.getFaultCode().getLocalPart()+", detail = "+e.getFaultString(),e);
+    }
+    catch (java.rmi.RemoteException e)
+    {
+      // We expect the axis exception to be thrown, not this generic one!
+      // So, fail hard if we see it.
+      throw new ManifoldCFException("Got an unexpected remote exception accessing site "+site+": "+e.getMessage(),e);
+    }
+  }
+
+  /**
+  * SharePoint UserGroup Service Wrapper Class
+  */
+  protected static class UserGroupWS extends com.microsoft.schemas.sharepoint.soap.directory.UserGroupLocator
+  {
+    /**
+    *
+    */
+    private static final long serialVersionUID = -2052484076803624502L;
+    private java.net.URL endPoint;
+    private String userName;
+    private String password;
+    private HttpClient httpClient;
+
+    public UserGroupWS ( String siteUrl, String userName, String password, EngineConfiguration configuration, HttpClient httpClient )
+      throws java.net.MalformedURLException
+    {
+      super(configuration);
+      endPoint = new java.net.URL(siteUrl + "/_vti_bin/usergroup.asmx");
+      this.userName = userName;
+      this.password = password;
+      this.httpClient = httpClient;
+    }
+
+    public com.microsoft.schemas.sharepoint.soap.directory.UserGroupSoap getUserGroupSoapHandler( )
+      throws javax.xml.rpc.ServiceException, org.apache.axis.AxisFault
+    {
+      com.microsoft.schemas.sharepoint.soap.directory.UserGroupSoapStub _stub = new com.microsoft.schemas.sharepoint.soap.directory.UserGroupSoapStub(endPoint, this);
+      _stub.setPortName(getUserGroupSoapWSDDServiceName());
+      _stub.setUsername( userName );
+      _stub.setPassword( password );
+      _stub._setProperty( HTTPCLIENT_PROPERTY, httpClient );
+      return _stub;
+    }
+  }
+
+  /** Implementation of EngineConfiguration that we'll use to get the wsdd file from a
+  * local resource.
+  */
+  protected static class ResourceProvider implements WSDDEngineConfiguration
+  {
+    private WSDDDeployment deployment = null;
+
+    private Class resourceClass;
+    private String resourceName;
+
+    /**
+     * Constructor setting the resource name.
+     */
+    public ResourceProvider(Class resourceClass, String resourceName)
+    {
+      this.resourceClass = resourceClass;
+      this.resourceName = resourceName;
+    }
+
+    public WSDDDeployment getDeployment() {
+        return deployment;
+    }
+
+    public void configureEngine(AxisEngine engine)
+      throws ConfigurationException
+    {
+      try
+      {
+        InputStream resourceStream = resourceClass.getResourceAsStream(resourceName);
+        if (resourceStream == null)
+          throw new ConfigurationException("Resource not found: '"+resourceName+"'");
+        try
+        {
+          WSDDDocument doc = new WSDDDocument(XMLUtils.newDocument(resourceStream));
+          deployment = doc.getDeployment();
+
+          deployment.configureEngine(engine);
+          engine.refreshGlobalOptions();
+        }
+        finally
+        {
+          resourceStream.close();
+        }
+      }
+      catch (ConfigurationException e)
+      {
+        throw e;
+      }
+      catch (Exception e)
+      {
+        throw new ConfigurationException(e);
+      }
+    }
+
+    public void writeEngineConfig(AxisEngine engine)
+      throws ConfigurationException
+    {
+      // Do nothing
+    }
+
+    /**
+     * Retrieve an instance of the named handler.
+     * @param qname the QName under which the handler is deployed
+     * @return the corresponding Handler, or null if none is deployed under that name
+     * @throws ConfigurationException if the deployment cannot supply the handler
+     */
+    public Handler getHandler(QName qname) throws ConfigurationException
+    {
+      return deployment.getHandler(qname);
+    }
+
+    /**
+     * Retrieve an instance of the named service.
+     * @param qname the QName under which the service is deployed
+     * @return the corresponding SOAPService
+     * @throws ConfigurationException if no service is deployed under that name
+     */
+    public SOAPService getService(QName qname) throws ConfigurationException
+    {
+      SOAPService service = deployment.getService(qname);
+      if (service == null)
+      {
+        throw new ConfigurationException(Messages.getMessage("noService10",
+          qname.toString()));
+      }
+      return service;
+    }
+
+    /**
+     * Get a service which has been mapped to a particular namespace
+     *
+     * @param namespace a namespace URI
+     * @return an instance of the appropriate Service, or null
+     */
+    public SOAPService getServiceByNamespaceURI(String namespace)
+      throws ConfigurationException
+    {
+      return deployment.getServiceByNamespaceURI(namespace);
+    }
+
+    /**
+     * Retrieve an instance of the named transport.
+     * @param qname the QName under which the transport is deployed
+     * @return the corresponding transport Handler, or null if none is deployed under that name
+     * @throws ConfigurationException if the deployment cannot supply the transport
+     */
+    public Handler getTransport(QName qname) throws ConfigurationException
+    {
+      return deployment.getTransport(qname);
+    }
+
+    public TypeMappingRegistry getTypeMappingRegistry()
+        throws ConfigurationException
+    {
+      return deployment.getTypeMappingRegistry();
+    }
+
+    /**
+     * Returns a global request handler.
+     */
+    public Handler getGlobalRequest() throws ConfigurationException
+    {
+      return deployment.getGlobalRequest();
+    }
+
+    /**
+     * Returns a global response handler.
+     */
+    public Handler getGlobalResponse() throws ConfigurationException
+    {
+      return deployment.getGlobalResponse();
+    }
+
+    /**
+     * Returns the global configuration options.
+     */
+    public Hashtable getGlobalOptions() throws ConfigurationException
+    {
+      WSDDGlobalConfiguration globalConfig = deployment.getGlobalConfiguration();
+
+      if (globalConfig != null)
+        return globalConfig.getParametersTable();
+
+      return null;
+    }
+
+    /**
+     * Get an enumeration of the services deployed to this engine
+     */
+    public Iterator getDeployedServices() throws ConfigurationException
+    {
+      return deployment.getDeployedServices();
+    }
+
+    /**
+     * Get a list of roles that this engine plays globally.  Services
+     * within the engine configuration may also add additional roles.
+     *
+     * @return a <code>List</code> of the roles for this engine
+     */
+    public List getRoles()
+    {
+      return deployment.getRoles();
+    }
+  }
+
+}
diff --git a/connectors/sharepoint/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/sharepoint/SharePointADAuthority.java b/connectors/sharepoint/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/sharepoint/SharePointADAuthority.java
new file mode 100644
index 0000000..b8ec091
--- /dev/null
+++ b/connectors/sharepoint/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/sharepoint/SharePointADAuthority.java
@@ -0,0 +1,1128 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.authorities.authorities.sharepoint;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+import org.apache.manifoldcf.authorities.interfaces.*;
+import org.apache.manifoldcf.authorities.system.Logging;
+import org.apache.manifoldcf.authorities.system.ManifoldCF;
+
+import java.io.*;
+import java.util.*;
+import java.net.*;
+import java.util.concurrent.TimeUnit;
+import javax.naming.*;
+import javax.naming.ldap.*;
+import javax.naming.directory.*;
+
+import org.apache.http.conn.ClientConnectionManager;
+import org.apache.http.client.HttpClient;
+import org.apache.http.impl.conn.PoolingClientConnectionManager;
+import org.apache.http.conn.scheme.Scheme;
+import org.apache.http.conn.ssl.SSLSocketFactory;
+import org.apache.http.conn.ssl.BrowserCompatHostnameVerifier;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpResponse;
+import org.apache.http.auth.AuthScope;
+import org.apache.http.auth.NTCredentials;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.impl.client.DefaultHttpClient;
+import org.apache.http.impl.client.DefaultRedirectStrategy;
+import org.apache.http.util.EntityUtils;
+import org.apache.http.params.BasicHttpParams;
+import org.apache.http.params.HttpParams;
+import org.apache.http.params.CoreConnectionPNames;
+import org.apache.http.client.params.ClientPNames;
+import org.apache.http.client.HttpRequestRetryHandler;
+import org.apache.http.protocol.HttpContext;
+
+
+/** This is the Active Directory implementation of the IAuthorityConnector interface, as used
+* by SharePoint in claim space mode.  It is meant to be used in conjunction with other SharePoint authorities,
+* and should ONLY be used if SharePoint native authorization is being performed in claim space mode.
+*/
+public class SharePointADAuthority extends org.apache.manifoldcf.authorities.authorities.BaseAuthorityConnector
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  // Data from the parameters
+  
+  /** The list of suffixes and the associated domain controllers */
+  private List<DCRule> dCRules = null;
+  /** How to create a connection for a DC, keyed by DC name */
+  private Map<String,DCConnectionParameters> dCConnectionParameters = null;
+  
+  private boolean hasSessionParameters = false;
+  private String cacheLifetime = null;
+  private String cacheLRUsize = null;
+  private long responseLifetime = 60000L;
+  private int LRUsize = 1000;
+
+  /** Session information for all DCs we talk with. */
+  private Map<String,DCSessionInfo> sessionInfo = null;
+  
+  /** Cache manager. */
+  private ICacheManager cacheManager = null;
+  
+  /** The length of time in milliseconds that a connection remains idle before expiring.  Currently 5 minutes. */
+  private static final long ADExpirationInterval = 300000L;
+  
+  /** Constructor.
+  */
+  public SharePointADAuthority()
+  {
+  }
+
+  /** Set thread context.
+  */
+  @Override
+  public void setThreadContext(IThreadContext tc)
+    throws ManifoldCFException
+  {
+    super.setThreadContext(tc);
+    cacheManager = CacheManagerFactory.make(tc);
+  }
+  
+  /** Clear thread context.
+  */
+  @Override
+  public void clearThreadContext()
+  {
+    super.clearThreadContext();
+    cacheManager = null;
+  }
+  
+  /** Connect.  The configuration parameters are included.
+  *@param configParams are the configuration parameters for this connection.
+  */
+  @Override
+  public void connect(ConfigParams configParams)
+  {
+    super.connect(configParams);
+
+
+    // Allocate the session data, currently empty
+    sessionInfo = new HashMap<String,DCSessionInfo>();
+    
+    // Set up the DC param set, and the rules
+    dCRules = new ArrayList<DCRule>();
+    dCConnectionParameters = new HashMap<String,DCConnectionParameters>();
+    // Read DC info from the config parameters
+    for (int i = 0; i < params.getChildCount(); i++)
+    {
+      ConfigNode cn = params.getChild(i);
+      if (cn.getType().equals(SharePointConfig.NODE_DOMAINCONTROLLER))
+      {
+        // Domain controller name is the actual key...
+        String dcName = cn.getAttributeValue(SharePointConfig.ATTR_DOMAINCONTROLLER);
+        // Set up the parameters for the domain controller
+        dCConnectionParameters.put(dcName,new DCConnectionParameters(cn.getAttributeValue(SharePointConfig.ATTR_USERNAME),
+          deobfuscate(cn.getAttributeValue(SharePointConfig.ATTR_PASSWORD)),
+          cn.getAttributeValue(SharePointConfig.ATTR_AUTHENTICATION),
+          cn.getAttributeValue(SharePointConfig.ATTR_USERACLsUSERNAME)));
+        // Order-based rule, as well
+        dCRules.add(new DCRule(cn.getAttributeValue(SharePointConfig.ATTR_SUFFIX),dcName));
+      }
+    }
+    
+    cacheLifetime = params.getParameter(SharePointConfig.PARAM_CACHELIFETIME);
+    if (cacheLifetime == null)
+      cacheLifetime = "1";
+    cacheLRUsize = params.getParameter(SharePointConfig.PARAM_CACHELRUSIZE);
+    if (cacheLRUsize == null)
+      cacheLRUsize = "1000";    
+  }
+
+  protected static String deobfuscate(String input)
+  {
+    if (input == null)
+      return null;
+    try
+    {
+      return ManifoldCF.deobfuscate(input);
+    }
+    catch (ManifoldCFException e)
+    {
+      return "";
+    }
+  }
+  
+  // All methods below this line will ONLY be called if a connect() call succeeded
+  // on this instance!
+
+  /** Check connection for sanity.
+  */
+  @Override
+  public String check()
+    throws ManifoldCFException
+  {
+    // Set up the basic AD session...
+    getSessionParameters();
+    // Clear the DC session info, so we're forced to redo it
+    for (Map.Entry<String,DCSessionInfo> sessionEntry : sessionInfo.entrySet())
+    {
+      sessionEntry.getValue().closeConnection();
+    }
+    // Loop through all domain controllers and attempt to establish a session with each one.
+    for (String domainController : dCConnectionParameters.keySet())
+    {
+      createDCSession(domainController);
+    }
+    
+    return super.check();
+  }
+
+  /** Create or lookup a session for a domain controller.
+  */
+  protected LdapContext createDCSession(String domainController)
+    throws ManifoldCFException
+  {
+    getSessionParameters();
+    DCConnectionParameters parms = dCConnectionParameters.get(domainController);
+    // Find the session in the hash, if it exists
+    DCSessionInfo session = sessionInfo.get(domainController);
+    if (session == null)
+    {
+      session = new DCSessionInfo();
+      sessionInfo.put(domainController,session);
+    }
+    return session.getADSession(domainController,parms);
+  }
+  
+  /** Poll.  The connection should be closed if it has been idle for too long.
+  */
+  @Override
+  public void poll()
+    throws ManifoldCFException
+  {
+    long currentTime = System.currentTimeMillis();
+    for (Map.Entry<String,DCSessionInfo> sessionEntry : sessionInfo.entrySet())
+    {
+      sessionEntry.getValue().closeIfExpired(currentTime);
+    }
+    super.poll();
+  }
+  
+  /** This method is called to assess whether this connector instance should
+  * actually be counted as being connected.
+  *@return true if the connector instance is actually connected.
+  */
+  @Override
+  public boolean isConnected()
+  {
+    for (Map.Entry<String,DCSessionInfo> sessionEntry : sessionInfo.entrySet())
+    {
+      if (sessionEntry.getValue().isOpen())
+        return true;
+    }
+    return false;
+  }
+
+  /** Close the connection.  Call this before discarding the repository connector.
+  */
+  @Override
+  public void disconnect()
+    throws ManifoldCFException
+  {
+    // Clean up caching parameters
+    
+    cacheLifetime = null;
+    cacheLRUsize = null;
+    
+    // Clean up AD parameters
+    
+    hasSessionParameters = false;
+
+    // Close all connections
+    for (Map.Entry<String,DCSessionInfo> sessionEntry : sessionInfo.entrySet())
+    {
+      sessionEntry.getValue().closeConnection();
+    }
+    sessionInfo = null;
+    
+    super.disconnect();
+  }
+
+  /** Obtain the access tokens for a given user name.
+  *@param userName is the user name or identifier.
+  *@return the response tokens (according to the current authority).
+  * (Should throw an exception only when a condition cannot be properly described within the authorization response object.)
+  */
+  @Override
+  public AuthorizationResponse getAuthorizationResponse(String userName)
+    throws ManifoldCFException
+  {
+    // This sets up parameters we need to construct the response description
+    getSessionParameters();
+
+    // Construct a cache description object
+    ICacheDescription objectDescription = new AuthorizationResponseDescription(userName,
+      dCConnectionParameters,dCRules,this.responseLifetime,this.LRUsize);
+    
+    // Enter the cache
+    ICacheHandle ch = cacheManager.enterCache(new ICacheDescription[]{objectDescription},null,null);
+    try
+    {
+      ICacheCreateHandle createHandle = cacheManager.enterCreateSection(ch);
+      try
+      {
+        // Lookup the object
+        AuthorizationResponse response = (AuthorizationResponse)cacheManager.lookupObject(createHandle,objectDescription);
+        if (response != null)
+          return response;
+        // Create the object.
+        response = getAuthorizationResponseUncached(userName);
+        // Save it in the cache
+        cacheManager.saveObject(createHandle,objectDescription,response);
+        // And return it...
+        return response;
+      }
+      finally
+      {
+        cacheManager.leaveCreateSection(createHandle);
+      }
+    }
+    finally
+    {
+      cacheManager.leaveCache(ch);
+    }
+  }
+  
+  /** Obtain the access tokens for a given user name, uncached.
+  *@param userName is the user name or identifier.
+  *@return the response tokens (according to the current authority).
+  * (Should throw an exception only when a condition cannot be properly described within the authorization response object.)
+  */
+  protected AuthorizationResponse getAuthorizationResponseUncached(String userName)
+    throws ManifoldCFException
+  {
+    //String searchBase = "CN=Administrator,CN=Users,DC=qa-ad-76,DC=metacarta,DC=com";
+    int index = userName.indexOf("@");
+    if (index == -1)
+      throw new ManifoldCFException("Username is in unexpected form (no @): '"+userName+"'");
+
+    String userPart = userName.substring(0,index);
+    String domainPart = userName.substring(index+1);
+
+    try
+    {
+      List<String> adTokens = getADTokens(userPart,domainPart,userName);
+      if (adTokens == null)
+        return RESPONSE_USERNOTFOUND_ADDITIVE;
+      return new AuthorizationResponse(adTokens.toArray(new String[0]),AuthorizationResponse.RESPONSE_OK);
+    }
+    catch (NameNotFoundException e)
+    {
+      // This means that the user doesn't exist
+      return RESPONSE_USERNOTFOUND_ADDITIVE;
+    }
+    catch (NamingException e)
+    {
+      // Unreachable
+      return RESPONSE_UNREACHABLE_ADDITIVE;
+    }
+    
+  }
+
+  /** Obtain the default access tokens for a given user name.
+  *@param userName is the user name or identifier.
+  *@return the default response tokens, presuming that the connect method fails.
+  */
+  @Override
+  public AuthorizationResponse getDefaultAuthorizationResponse(String userName)
+  {
+    // The default response if the getConnection method fails
+    return RESPONSE_UNREACHABLE_ADDITIVE;
+  }
+
+  /** Get the AD-derived access tokens for a user and domain */
+  protected List<String> getADTokens(String userPart, String domainPart, String userName)
+    throws NameNotFoundException, NamingException, ManifoldCFException
+  {
+    // Now, look through the rules for the matching domain controller
+    String domainController = null;
+    for (DCRule rule : dCRules)
+    {
+      String suffix = rule.getSuffix();
+      if (suffix.length() == 0 || (domainPart.toLowerCase(Locale.ROOT).endsWith(suffix.toLowerCase(Locale.ROOT)) &&
+        (suffix.length() == domainPart.length() || domainPart.charAt((domainPart.length()-suffix.length())-1) == '.')))
+      {
+        domainController = rule.getDomainControllerName();
+        break;
+      }
+    }
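+    // Worked example (illustrative only): given a rule whose suffix is "example.com", a user
+    // "jdoe@sales.example.com" has domainPart "sales.example.com", which ends with the suffix and
+    // has a '.' immediately before it, so that rule's domain controller is selected.  An empty
+    // suffix acts as a catch-all rule.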
+    
+    if (domainController == null)
+      // No AD user
+      return null;
+    
+    // Look up connection parameters
+    DCConnectionParameters dcParams = dCConnectionParameters.get(domainController);
+    if (dcParams == null)
+      // No AD user
+      return null;
+        
+    // Use the complete fqn if the field is the "userPrincipalName"
+    String userBase;
+    String userACLsUsername = dcParams.getUserACLsUsername();
+    if (userACLsUsername != null && userACLsUsername.equals("userPrincipalName")){
+      userBase = userName;
+    }
+    else
+    {
+      userBase = userPart;
+    }
+        
+    //Build the DN searchBase from domain part
+    StringBuilder domainsb = new StringBuilder();
+    int j = 0;
+    while (true)
+    {
+      if (j > 0)
+        domainsb.append(",");
+
+      int k = domainPart.indexOf(".",j);
+      if (k == -1)
+      {
+        domainsb.append("DC=").append(ldapEscape(domainPart.substring(j)));
+        break;
+      }
+      domainsb.append("DC=").append(ldapEscape(domainPart.substring(j,k)));
+      j = k+1;
+    }
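+    // For example, a domainPart of "qa-ad-76.metacarta.com" yields the search base
+    // "DC=qa-ad-76,DC=metacarta,DC=com"; commas inside a segment are escaped by ldapEscape().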
+
+    // Establish a session with the selected domain controller
+    LdapContext ctx = createDCSession(domainController);  
+        
+    //Get DistinguishedName (for this method we are using DomainPart as a searchBase ie: DC=qa-ad-76,DC=metacarta,DC=com")
+    String searchBase = getDistinguishedName(ctx, userBase, domainsb.toString(), userACLsUsername);
+    if (searchBase == null)
+      return null;
+
+    //specify the LDAP search filter
+    String searchFilter = "(objectClass=user)";
+
+    //Create the search controls for finding the access tokens	
+    SearchControls searchCtls = new SearchControls();
+
+    //Specify the search scope, must be base level search for tokenGroups
+    searchCtls.setSearchScope(SearchControls.OBJECT_SCOPE);
+       
+    //Specify the attributes to return
+    String[] returnedAtts = {"tokenGroups","objectSid"};
+    searchCtls.setReturningAttributes(returnedAtts);
+
+    //Search for tokens.  Since every user *must* have a SID, the "no user" detection should be safe.
+    NamingEnumeration answer = ctx.search(searchBase, searchFilter, searchCtls);
+
+    List<String> theGroups = new ArrayList<String>();
+    String userToken = userTokenFromLoginName(domainPart + "\\" + userPart);
+    if (userToken != null)
+      theGroups.add(userToken);
+    
+    //Loop through the search results
+    while (answer.hasMoreElements())
+    {
+      SearchResult sr = (SearchResult)answer.next();
+     
+      //sr.getName() should be null, as it is relative to the base object
+            
+      Attributes attrs = sr.getAttributes();
+      if (attrs != null)
+      {
+        try
+        {
+          for (NamingEnumeration ae = attrs.getAll();ae.hasMore();) 
+          {
+            Attribute attr = (Attribute)ae.next();
+            for (NamingEnumeration e = attr.getAll();e.hasMore();)
+            {
+              String sid = sid2String((byte[])e.next());
+              String token = attr.getID().equals("objectSid")?userTokenFromSID(sid):groupTokenFromSID(sid);
+              theGroups.add(token);
+            }
+          }
+        }	 
+        catch (NamingException e)
+        {
+          throw new ManifoldCFException(e.getMessage(),e);
+        }
+      }
+    }
+
+    if (theGroups.size() == 0)
+      return null;
+    
+    // User is in AD, so add the 'everyone' group
+    theGroups.add(everyoneGroup());
+    return theGroups;
+  }
+
+  protected static String everyoneGroup()
+  {
+    return "c:0!.s|windows";
+  }
+  
+  protected static String groupTokenFromSID(String SID)
+  {
+    return "c:0+.w|"+SID.toLowerCase(Locale.ROOT);
+  }
+
+  protected static String userTokenFromSID(String SID)
+  {
+    return "i:0+.w|"+SID.toLowerCase(Locale.ROOT);
+  }
+  
+  protected static String userTokenFromLoginName(String loginName)
+  {
+    try
+    {
+      return "i:0#.w|"+URLEncoder.encode(loginName,"utf-8");
+    }
+    catch (UnsupportedEncodingException e)
+    {
+      throw new RuntimeException("Utf-8 encoding unrecognized");
+    }
+  }
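+  // The helpers above emit SharePoint claims-encoded tokens.  As an illustration (based on common
+  // SharePoint claims encoding, not on anything this code guarantees): a login name "EXAMPLE\jdoe"
+  // becomes "i:0#.w|EXAMPLE%5Cjdoe" after URL encoding, a group SID "S-1-5-32-544" becomes
+  // "c:0+.w|s-1-5-32-544", and everyoneGroup() returns "c:0!.s|windows", the claim covering all
+  // Windows users.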
+  
+  // UI support methods.
+  //
+  // These support methods are involved in setting up authority connection configuration information. The configuration methods cannot assume that the
+  // current authority object is connected.  That is why they receive a thread context argument.
+    
+  /** Output the configuration header section.
+  * This method is called in the head section of the connector's configuration page.  Its purpose is to add the required tabs to the list, and to output any
+  * javascript methods that might be needed by the configuration editing HTML.
+  *@param threadContext is the local thread context.
+  *@param out is the output to which any HTML should be sent.
+  *@param parameters are the configuration parameters, as they currently exist, for this connection being configured.
+  *@param tabsArray is an array of tab names.  Add to this array any tab names that are specific to the connector.
+  */
+  @Override
+  public void outputConfigurationHeader(IThreadContext threadContext, IHTTPOutput out, Locale locale, ConfigParams parameters, List<String> tabsArray)
+    throws ManifoldCFException, IOException
+  {
+    tabsArray.add(Messages.getString(locale,"SharePointAuthority.DomainController"));
+    tabsArray.add(Messages.getString(locale,"SharePointAuthority.Cache"));
+    Messages.outputResourceWithVelocity(out,locale,"editADConfiguration.js",null);
+  }
+  
+  /** Output the configuration body section.
+  * This method is called in the body section of the authority connector's configuration page.  Its purpose is to present the required form elements for editing.
+  * The coder can presume that the HTML that is output from this configuration will be within appropriate <html>, <body>, and <form> tags.  The name of the
+  * form is "editconnection".
+  *@param threadContext is the local thread context.
+  *@param out is the output to which any HTML should be sent.
+  *@param parameters are the configuration parameters, as they currently exist, for this connection being configured.
+  *@param tabName is the current tab name.
+  */
+  @Override
+  public void outputConfigurationBody(IThreadContext threadContext, IHTTPOutput out, Locale locale, ConfigParams parameters, String tabName)
+    throws ManifoldCFException, IOException
+  {
+    Map<String,Object> velocityContext = new HashMap<String,Object>();
+    velocityContext.put("TabName",tabName);
+    fillInDomainControllerTab(velocityContext,out,parameters);
+    fillInCacheTab(velocityContext,out,parameters);
+    Messages.outputResourceWithVelocity(out,locale,"editADConfiguration_DomainController.html",velocityContext);
+    Messages.outputResourceWithVelocity(out,locale,"editADConfiguration_Cache.html",velocityContext);
+  }
+
+  protected static void fillInDomainControllerTab(Map<String,Object> velocityContext, IPasswordMapperActivity mapper, ConfigParams parameters)
+  {
+    List<Map<String,String>> domainControllers = new ArrayList<Map<String,String>>();
+    
+    // Go through nodes looking for DC nodes
+    for (int i = 0; i < parameters.getChildCount(); i++)
+    {
+      ConfigNode cn = parameters.getChild(i);
+      if (cn.getType().equals(SharePointConfig.NODE_DOMAINCONTROLLER))
+      {
+        // Grab the info
+        String dcSuffix = cn.getAttributeValue(SharePointConfig.ATTR_SUFFIX);
+        String dcDomainController = cn.getAttributeValue(SharePointConfig.ATTR_DOMAINCONTROLLER);
+        String dcUserName = cn.getAttributeValue(SharePointConfig.ATTR_USERNAME);
+        String dcPassword = deobfuscate(cn.getAttributeValue(SharePointConfig.ATTR_PASSWORD));
+        String dcAuthentication = cn.getAttributeValue(SharePointConfig.ATTR_AUTHENTICATION);
+        String dcUserACLsUsername = cn.getAttributeValue(SharePointConfig.ATTR_USERACLsUSERNAME);
+        domainControllers.add(createDomainControllerMap(mapper,dcSuffix,dcDomainController,dcUserName,dcPassword,dcAuthentication,dcUserACLsUsername));
+      }
+    }
+    velocityContext.put("DOMAINCONTROLLERS",domainControllers);
+  }
+
+  protected static Map<String,String> createDomainControllerMap(IPasswordMapperActivity mapper, String suffix, String domainControllerName,
+    String userName, String password, String authentication, String userACLsUsername)
+  {
+    Map<String,String> defaultMap = new HashMap<String,String>();
+    if (suffix != null)
+      defaultMap.put("SUFFIX",suffix);
+    if (domainControllerName != null)
+      defaultMap.put("DOMAINCONTROLLER",domainControllerName);
+    if (userName != null)
+      defaultMap.put("USERNAME",userName);
+    if (password != null)
+      defaultMap.put("PASSWORD",mapper.mapPasswordToKey(password));
+    if (authentication != null)
+      defaultMap.put("AUTHENTICATION",authentication);
+    if (userACLsUsername != null)
+      defaultMap.put("USERACLsUSERNAME",userACLsUsername);
+    return defaultMap;
+  }
+  
+  protected static void fillInCacheTab(Map<String,Object> velocityContext, IPasswordMapperActivity mapper, ConfigParams parameters)
+  {
+    String cacheLifetime = parameters.getParameter(SharePointConfig.PARAM_CACHELIFETIME);
+    if (cacheLifetime == null)
+      cacheLifetime = "1";
+    velocityContext.put("CACHELIFETIME",cacheLifetime);
+    String cacheLRUsize = parameters.getParameter(SharePointConfig.PARAM_CACHELRUSIZE);
+    if (cacheLRUsize == null)
+      cacheLRUsize = "1000";
+    velocityContext.put("CACHELRUSIZE",cacheLRUsize);
+  }
+  
+  /** Process a configuration post.
+  * This method is called at the start of the authority connector's configuration page, whenever there is a possibility that form data for a connection has been
+  * posted.  Its purpose is to gather form information and modify the configuration parameters accordingly.
+  * The name of the posted form is "editconnection".
+  *@param threadContext is the local thread context.
+  *@param variableContext is the set of variables available from the post, including binary file post information.
+  *@param parameters are the configuration parameters, as they currently exist, for this connection being configured.
+  *@return null if all is well, or a string error message if there is an error that should prevent saving of the connection (and cause a redirection to an error page).
+  */
+  @Override
+  public String processConfigurationPost(IThreadContext threadContext, IPostParameters variableContext, Locale locale, ConfigParams parameters)
+    throws ManifoldCFException
+  {
+    String x = variableContext.getParameter("dcrecord_count");
+    if (x != null)
+    {
+      // Delete old nodes
+      int i = 0;
+      while (i < parameters.getChildCount())
+      {
+        ConfigNode cn = parameters.getChild(i);
+        if (cn.getType().equals(SharePointConfig.NODE_DOMAINCONTROLLER))
+          parameters.removeChild(i);
+        else
+          i++;
+      }
+      // Scan form fields and apply operations
+      int count = Integer.parseInt(x);
+      i = 0;
+      String op;
+      
+      Set<String> seenDomains = new HashSet<String>();
+      
+      while (i < count)
+      {
+        op = variableContext.getParameter("dcrecord_op_"+i);
+        if (op != null && op.equals("Insert"))
+        {
+          // Insert a new record right here
+          addDomainController(seenDomains,parameters,
+            variableContext.getParameter("dcrecord_suffix"),
+            variableContext.getParameter("dcrecord_domaincontrollername"),
+            variableContext.getParameter("dcrecord_username"),
+            variableContext.mapKeyToPassword(variableContext.getParameter("dcrecord_password")),
+            variableContext.getParameter("dcrecord_authentication"),
+            variableContext.getParameter("dcrecord_userACLsUsername"));
+        }
+        if (op == null || !op.equals("Delete"))
+        {
+          // Add this record back in
+          addDomainController(seenDomains,parameters,
+            variableContext.getParameter("dcrecord_suffix_"+i),
+            variableContext.getParameter("dcrecord_domaincontrollername_"+i),
+            variableContext.getParameter("dcrecord_username_"+i),
+            variableContext.mapKeyToPassword(variableContext.getParameter("dcrecord_password_"+i)),
+            variableContext.getParameter("dcrecord_authentication_"+i),
+            variableContext.getParameter("dcrecord_userACLsUsername_"+i));
+        }
+        i++;
+      }
+      op = variableContext.getParameter("dcrecord_op");
+      if (op != null && op.equals("Add"))
+      {
+        // Insert a new record right here
+        addDomainController(seenDomains,parameters,
+          variableContext.getParameter("dcrecord_suffix"),
+          variableContext.getParameter("dcrecord_domaincontrollername"),
+          variableContext.getParameter("dcrecord_username"),
+          variableContext.getParameter("dcrecord_password"),
+          variableContext.getParameter("dcrecord_authentication"),
+          variableContext.getParameter("dcrecord_userACLsUsername"));
+      }
+    }
+
+    // Cache parameters
+    
+    String cacheLifetime = variableContext.getParameter("cachelifetime");
+    if (cacheLifetime != null)
+      parameters.setParameter(SharePointConfig.PARAM_CACHELIFETIME,cacheLifetime);
+    String cacheLRUsize = variableContext.getParameter("cachelrusize");
+    if (cacheLRUsize != null)
+      parameters.setParameter(SharePointConfig.PARAM_CACHELRUSIZE,cacheLRUsize);
+    
+    return null;
+  }
+  
+  protected static void addDomainController(Set<String> seenDomains, ConfigParams parameters,
+    String suffix, String domainControllerName, String userName, String password, String authentication,
+    String userACLsUsername)
+    throws ManifoldCFException
+  {
+    if (!seenDomains.contains(domainControllerName))
+    {
+      ConfigNode cn = new ConfigNode(SharePointConfig.NODE_DOMAINCONTROLLER);
+      cn.setAttribute(SharePointConfig.ATTR_SUFFIX,suffix);
+      cn.setAttribute(SharePointConfig.ATTR_DOMAINCONTROLLER,domainControllerName);
+      cn.setAttribute(SharePointConfig.ATTR_USERNAME,userName);
+      cn.setAttribute(SharePointConfig.ATTR_PASSWORD,ManifoldCF.obfuscate(password));
+      cn.setAttribute(SharePointConfig.ATTR_AUTHENTICATION,authentication);
+      cn.setAttribute(SharePointConfig.ATTR_USERACLsUSERNAME,userACLsUsername);
+      parameters.addChild(parameters.getChildCount(),cn);
+      seenDomains.add(domainControllerName);
+    }
+  }
+  
+  /** View configuration.
+  * This method is called in the body section of the authority connector's view configuration page.  Its purpose is to present the connection information to the user.
+  * The coder can presume that the HTML that is output from this configuration will be within appropriate <html> and <body> tags.
+  *@param threadContext is the local thread context.
+  *@param out is the output to which any HTML should be sent.
+  *@param parameters are the configuration parameters, as they currently exist, for this connection being configured.
+  */
+  @Override
+  public void viewConfiguration(IThreadContext threadContext, IHTTPOutput out, Locale locale, ConfigParams parameters)
+    throws ManifoldCFException, IOException
+  {
+    Map<String,Object> velocityContext = new HashMap<String,Object>();
+    fillInDomainControllerTab(velocityContext,out,parameters);
+    fillInCacheTab(velocityContext,out,parameters);
+    Messages.outputResourceWithVelocity(out,locale,"viewADConfiguration.html",velocityContext);
+  }
+
+  // Protected methods
+
+  /** Get parameters needed for caching.
+  */
+  protected void getSessionParameters()
+    throws ManifoldCFException
+  {
+    if (!hasSessionParameters)
+    {
+      try
+      {
+        responseLifetime = Long.parseLong(this.cacheLifetime) * 60L * 1000L;
+        LRUsize = Integer.parseInt(this.cacheLRUsize);
+      }
+      catch (NumberFormatException e)
+      {
+        throw new ManifoldCFException("Cache lifetime or Cache LRU size must be an integer: "+e.getMessage(),e);
+      }
+      hasSessionParameters = true;
+    }
+  }
+  
+  /** Obtain the distinguished name for a given user logon name.
+  *@param ctx is the ldap context to use.
+  *@param userName (Domain Logon Name) is the user name or identifier.
+  *@param searchBase is the full domain name for the search, e.g. DC=qa-ad-76,DC=metacarta,DC=com.
+  *@param userACLsUsername is the AD attribute used to match the user name (for example, userPrincipalName).
+  *@return the distinguished name for the given domain user logon name, or null if the user is not found.
+  */
+  protected String getDistinguishedName(LdapContext ctx, String userName, String searchBase, String userACLsUsername)
+    throws ManifoldCFException
+  {
+    String[] returnedAtts = {"distinguishedName"};
+    String searchFilter = "(&(objectClass=user)(" + userACLsUsername + "=" + userName + "))";
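+    // For instance, with userACLsUsername "userPrincipalName" and userName "jdoe@example.com",
+    // the filter becomes "(&(objectClass=user)(userPrincipalName=jdoe@example.com))".  Note that
+    // userName is not LDAP-filter-escaped here.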
+    SearchControls searchCtls = new SearchControls();
+    //Specify the search scope
+    searchCtls.setSearchScope(SearchControls.SUBTREE_SCOPE);
+    searchCtls.setReturningAttributes(returnedAtts);
+
+    try
+    {
+      NamingEnumeration answer = ctx.search(searchBase, searchFilter, searchCtls);
+      while (answer.hasMoreElements())
+      {
+        SearchResult sr = (SearchResult)answer.next();
+        Attributes attrs = sr.getAttributes();
+        if (attrs != null)
+        {
+          String dn = attrs.get("distinguishedName").get().toString();
+          return dn;
+        }
+      }
+      return null;
+    }
+    catch (NamingException e)
+    {
+      throw new ManifoldCFException(e.getMessage(),e);
+    }
+  }
+   
+  /** LDAP escape a string.
+  */
+  protected static String ldapEscape(String input)
+  {
+    //Add escape sequence to all commas
+    StringBuilder sb = new StringBuilder();
+    int index = 0;
+    while (true)
+    {
+      int oldIndex = index;
+      index = input.indexOf(",",oldIndex);
+      if (index == -1)
+      {
+        sb.append(input.substring(oldIndex));
+        break;
+      }
+      sb.append(input.substring(oldIndex,index)).append("\\,");
+      index++;
+    }
+    return sb.toString();
+  }
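+  // For example, ldapEscape("acme,inc") returns "acme\,inc", so a comma inside a domain
+  // component cannot break the DN being constructed.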
+    	
+  /** Convert a binary SID to a string */
+  protected static String sid2String(byte[] SID)
+  {
+    StringBuilder strSID = new StringBuilder("S");
+    long version = SID[0];
+    strSID.append("-").append(Long.toString(version));
+    long authority = SID[4];
+    for (int i = 0;i<4;i++)
+    {
+      authority <<= 8;
+      authority += SID[4+i] & 0xFF;
+    }
+    strSID.append("-").append(Long.toString(authority));
+    long count = SID[2];
+    count <<= 8;
+    count += SID[1] & 0xFF;
+    for (int j=0;j<count;j++)
+    {
+      long rid = SID[11 + (j*4)] & 0xFF;
+      for (int k=1;k<4;k++)
+      {
+        rid <<= 8;
+        rid += SID[11-k + (j*4)] & 0xFF;
+      }
+      strSID.append("-").append(Long.toString(rid));
+    }
+    return strSID.toString();
+  }
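+  // Worked example (illustrative): the 16-byte binary SID
+  // { 1, 2, 0,0,0,0,0,5, 32,0,0,0, 0x20,0x02,0,0 } decodes to "S-1-5-32-544"
+  // (revision 1, authority 5, two little-endian sub-authorities: 32 and 544).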
+
+  /** Class representing the session information for a specific domain controller
+  * connection.
+  */
+  protected static class DCSessionInfo
+  {
+    /** The initialized LDAP context (which functions as a session) */
+    private LdapContext ctx = null;
+    /** The time of last access to this ctx object */
+    private long expiration = -1L;
+    
+    public DCSessionInfo()
+    {
+    }
+
+    /** Initialize the session. */
+    public LdapContext getADSession(String domainControllerName, DCConnectionParameters params)
+      throws ManifoldCFException
+    {
+      String authentication = params.getAuthentication();
+      String userName = params.getUserName();
+      String password = params.getPassword();
+      
+      while (true)
+      {
+        if (ctx == null)
+        {
+          // Calculate the ldap url first
+          String ldapURL = "ldap://" + domainControllerName + ":389";
+          
+          Hashtable env = new Hashtable();
+          env.put(Context.INITIAL_CONTEXT_FACTORY,"com.sun.jndi.ldap.LdapCtxFactory");
+          env.put(Context.SECURITY_AUTHENTICATION,authentication);      
+          env.put(Context.SECURITY_PRINCIPAL,userName);
+          env.put(Context.SECURITY_CREDENTIALS,password);
+                                    
+          //connect to my domain controller
+          env.put(Context.PROVIDER_URL,ldapURL);
+                    
+          //specify attributes to be returned in binary format
+          env.put("java.naming.ldap.attributes.binary","tokenGroups objectSid");
+     
+          // Now, try the connection...
+          try
+          {
+            ctx = new InitialLdapContext(env,null);
+            // If successful, break
+            break;
+          }
+          catch (AuthenticationException e)
+          {
+            // This means we couldn't authenticate!
+            throw new ManifoldCFException("Authentication problem authenticating admin user '"+userName+"': "+e.getMessage(),e);
+          }
+          catch (CommunicationException e)
+          {
+            // This means we couldn't connect, most likely
+            throw new ManifoldCFException("Couldn't communicate with domain controller '"+domainControllerName+"': "+e.getMessage(),e);
+          }
+          catch (NamingException e)
+          {
+            throw new ManifoldCFException(e.getMessage(),e);
+          }
+        }
+        else
+        {
+          // Attempt to reconnect.  I *hope* this is efficient and doesn't do unnecessary work.
+          try
+          {
+            ctx.reconnect(null);
+            // Break on apparent success
+            break;
+          }
+          catch (AuthenticationException e)
+          {
+            // This means we couldn't authenticate!  Log it and retry creating a whole new context.
+            Logging.authorityConnectors.warn("Reconnect: Authentication problem authenticating admin user '"+userName+"': "+e.getMessage(),e);
+          }
+          catch (CommunicationException e)
+          {
+            // This means we couldn't connect, most likely.  Log it and retry creating a whole new context.
+            Logging.authorityConnectors.warn("Reconnect: Couldn't communicate with domain controller '"+domainControllerName+"': "+e.getMessage(),e);
+          }
+          catch (NamingException e)
+          {
+            Logging.authorityConnectors.warn("Reconnect: Naming exception: "+e.getMessage(),e);
+          }
+          
+          // So we have no chance of leaking resources, attempt to close the context.
+          closeConnection();
+          // Loop back around to try our luck with a fresh connection.
+
+        }
+      }
+      
+      // Set the expiration time anew
+      expiration = System.currentTimeMillis() + ADExpirationInterval;
+      return ctx;
+    }
+    
+    /** Close the connection handle. */
+    protected void closeConnection()
+    {
+      if (ctx != null)
+      {
+        try
+        {
+          ctx.close();
+        }
+        catch (NamingException e)
+        {
+          // Eat this error
+        }
+        ctx = null;
+        expiration = -1L;
+      }
+    }
+
+    /** Close connection if it has expired. */
+    protected void closeIfExpired(long currentTime)
+    {
+      if (expiration != -1L && currentTime > expiration)
+        closeConnection();
+    }
+
+    /** Check if open */
+    protected boolean isOpen()
+    {
+      return ctx != null;
+    }
+
+  }
+
+  /** Class describing a domain suffix and corresponding domain controller name rule.
+  */
+  protected static class DCRule
+  {
+    private String suffix;
+    private String domainControllerName;
+    
+    public DCRule(String suffix, String domainControllerName)
+    {
+      this.suffix = suffix;
+      this.domainControllerName = domainControllerName;
+    }
+    
+    public String getSuffix()
+    {
+      return suffix;
+    }
+    
+    public String getDomainControllerName()
+    {
+      return domainControllerName;
+    }
+  }
+  
+  /** Class describing the connection parameters to a domain controller.
+  */
+  protected static class DCConnectionParameters
+  {
+    private String userName;
+    private String password;
+    private String authentication;
+    private String userACLsUsername;
+
+    public DCConnectionParameters(String userName, String password, String authentication, String userACLsUsername)
+    {
+      this.userName = userName;
+      this.password = password;
+      this.authentication = authentication;
+      this.userACLsUsername = userACLsUsername;
+    }
+    
+    public String getUserName()
+    {
+      return userName;
+    }
+    
+    public String getPassword()
+    {
+      return password;
+    }
+    
+    public String getAuthentication()
+    {
+      return authentication;
+    }
+    
+    public String getUserACLsUsername()
+    {
+      return userACLsUsername;
+    }
+  }
+  
+  protected static StringSet emptyStringSet = new StringSet();
+  
+  /** This is the cache object descriptor for cached access tokens from
+  * this connector.
+  */
+  protected static class AuthorizationResponseDescription extends org.apache.manifoldcf.core.cachemanager.BaseDescription
+  {
+    /** The user name */
+    protected String userName;
+    /** Connection parameters */
+    protected Map<String,DCConnectionParameters> dcConnectionParams;
+    /** Rules */
+    protected List<DCRule> dcRules;
+    /** The response lifetime */
+    protected long responseLifetime;
+    /** The expiration time */
+    protected long expirationTime = -1;
+    
+    /** Constructor. */
+    public AuthorizationResponseDescription(String userName, Map<String,DCConnectionParameters> dcConnectionParams,
+      List<DCRule> dcRules, long responseLifetime, int LRUsize)
+    {
+      super("SharePointADAuthority",LRUsize);
+      this.userName = userName;
+      this.dcConnectionParams = dcConnectionParams;
+      this.dcRules = dcRules;
+      this.responseLifetime = responseLifetime;
+    }
+
+    /** Return the invalidation keys for this object. */
+    public StringSet getObjectKeys()
+    {
+      return emptyStringSet;
+    }
+
+    /** Get the critical section name, used for synchronizing the creation of the object */
+    public String getCriticalSectionName()
+    {
+      StringBuilder sb = new StringBuilder(getClass().getName());
+      sb.append("-").append(userName);
+      for (DCRule rule : dcRules)
+      {
+        sb.append("-").append(rule.getSuffix());
+        String domainController = rule.getDomainControllerName();
+        DCConnectionParameters params = dcConnectionParams.get(domainController);
+        sb.append("-").append(domainController).append("-").append(params.getUserName()).append("-").append(params.getPassword());
+      }
+      return sb.toString();
+    }
+
+    /** Return the object expiration interval */
+    public long getObjectExpirationTime(long currentTime)
+    {
+      if (expirationTime == -1)
+        expirationTime = currentTime + responseLifetime;
+      return expirationTime;
+    }
+
+    public int hashCode()
+    {
+      int rval = userName.hashCode();
+      for (DCRule rule : dcRules)
+      {
+        String domainController = rule.getDomainControllerName();
+        DCConnectionParameters params = dcConnectionParams.get(domainController);
+        rval += rule.getSuffix().hashCode() + domainController.hashCode() + params.getUserName().hashCode() + params.getPassword().hashCode();
+      }
+      return rval;
+    }
+    
+    public boolean equals(Object o)
+    {
+      if (!(o instanceof AuthorizationResponseDescription))
+        return false;
+      AuthorizationResponseDescription ard = (AuthorizationResponseDescription)o;
+      if (!ard.userName.equals(userName))
+        return false;
+      if (ard.dcRules.size() != dcRules.size())
+        return false;
+      for (int i = 0 ; i < dcRules.size() ; i++)
+      {
+        DCRule rule = dcRules.get(i);
+        DCRule ardRule = ard.dcRules.get(i);
+        if (!rule.getSuffix().equals(ardRule.getSuffix()) || !rule.getDomainControllerName().equals(ardRule.getDomainControllerName()))
+          return false;
+        String domainController = rule.getDomainControllerName();
+        DCConnectionParameters params = dcConnectionParams.get(domainController);
+        DCConnectionParameters ardParams = ard.dcConnectionParams.get(domainController);
+        if (!params.getUserName().equals(ardParams.getUserName()) || !params.getPassword().equals(ardParams.getPassword()))
+          return false;
+      }
+      return true;
+    }
+    
+  }
+  
+}
+
+
diff --git a/connectors/sharepoint/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/sharepoint/SharePointAuthority.java b/connectors/sharepoint/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/sharepoint/SharePointAuthority.java
new file mode 100644
index 0000000..f1841ca
--- /dev/null
+++ b/connectors/sharepoint/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/sharepoint/SharePointAuthority.java
@@ -0,0 +1,931 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.authorities.authorities.sharepoint;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+import org.apache.manifoldcf.authorities.interfaces.*;
+import org.apache.manifoldcf.authorities.system.Logging;
+import org.apache.manifoldcf.authorities.system.ManifoldCF;
+
+import java.io.*;
+import java.util.*;
+import java.net.*;
+import java.util.concurrent.TimeUnit;
+import javax.naming.*;
+import javax.naming.ldap.*;
+import javax.naming.directory.*;
+
+import org.apache.http.conn.ClientConnectionManager;
+import org.apache.http.client.HttpClient;
+import org.apache.http.impl.conn.PoolingClientConnectionManager;
+import org.apache.http.conn.scheme.Scheme;
+import org.apache.http.conn.ssl.SSLSocketFactory;
+import org.apache.http.conn.ssl.BrowserCompatHostnameVerifier;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpResponse;
+import org.apache.http.auth.AuthScope;
+import org.apache.http.auth.NTCredentials;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.impl.client.DefaultHttpClient;
+import org.apache.http.impl.client.DefaultRedirectStrategy;
+import org.apache.http.util.EntityUtils;
+import org.apache.http.params.BasicHttpParams;
+import org.apache.http.params.HttpParams;
+import org.apache.http.params.CoreConnectionPNames;
+import org.apache.http.client.params.ClientPNames;
+import org.apache.http.client.HttpRequestRetryHandler;
+import org.apache.http.protocol.HttpContext;
+
+
+/** This is the native SharePoint implementation of the IAuthorityConnector interface.
+*/
+public class SharePointAuthority extends org.apache.manifoldcf.authorities.authorities.BaseAuthorityConnector
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  // Data from the parameters
+  
+  /** Cache manager. */
+  private ICacheManager cacheManager = null;
+  
+  private boolean hasSessionParameters = false;
+  
+  /** Length of time that a SharePoint session can remain idle */
+  private static final long SharePointExpirationInterval = 300000L;
+  
+  // SharePoint server parameters
+  // These are needed for caching, so they are set at connect() time
+  private String serverProtocol = null;
+  private String serverUrl = null;
+  private String fileBaseUrl = null;
+  private String serverUserName = null;
+  private String password = null;
+  private String ntlmDomain = null;
+  private String serverName = null;
+  private String serverPortString = null;
+  private String serverLocation = null;
+  private String strippedUserName = null;
+  private String encodedServerLocation = null;
+  private String keystoreData = null;
+  
+  private String cacheLRUsize = null;
+  private String cacheLifetime = null;
+  
+  // These are calculated when the session is set up
+  private int serverPort = -1;
+  private SPSProxyHelper proxy = null;
+  private long sharepointSessionTimeout;
+  
+  private long responseLifetime = -1L;
+  private int LRUsize = -1;
+  
+  private IKeystoreManager keystoreManager = null;
+  
+  private ClientConnectionManager connectionManager = null;
+  private HttpClient httpClient = null;
+
+
+  // Current host name
+  private static String currentHost = null;
+  static
+  {
+    // Find the current host name
+    try
+    {
+      java.net.InetAddress addr = java.net.InetAddress.getLocalHost();
+
+      // Get hostname
+      currentHost = addr.getHostName();
+    }
+    catch (UnknownHostException e)
+    {
+      // Ignore: currentHost simply remains null if the local host name cannot be determined
+    }
+  }
+
+  /** Constructor.
+  */
+  public SharePointAuthority()
+  {
+  }
+
+  /** Set thread context.
+  */
+  @Override
+  public void setThreadContext(IThreadContext tc)
+    throws ManifoldCFException
+  {
+    super.setThreadContext(tc);
+    cacheManager = CacheManagerFactory.make(tc);
+  }
+  
+  /** Clear thread context.
+  */
+  @Override
+  public void clearThreadContext()
+  {
+    super.clearThreadContext();
+    cacheManager = null;
+  }
+  
+  /** Connect.  The configuration parameters are included.
+  *@param configParams are the configuration parameters for this connection.
+  */
+  @Override
+  public void connect(ConfigParams configParams)
+  {
+    super.connect(configParams);
+
+    // Pick up all the parameters that go into the cache key here
+    cacheLifetime = configParams.getParameter(SharePointConfig.PARAM_CACHELIFETIME);
+    if (cacheLifetime == null)
+      cacheLifetime = "1";
+    cacheLRUsize = configParams.getParameter(SharePointConfig.PARAM_CACHELRUSIZE);
+    if (cacheLRUsize == null)
+      cacheLRUsize = "1000";
+    
+    String serverVersion = configParams.getParameter( SharePointConfig.PARAM_SERVERVERSION );
+    if (serverVersion == null)
+      serverVersion = "2.0";
+    // The authority does not currently need the SharePoint version.
+      
+    serverProtocol = configParams.getParameter( SharePointConfig.PARAM_SERVERPROTOCOL );
+    if (serverProtocol == null)
+      serverProtocol = "http";
+      
+    serverName = configParams.getParameter( SharePointConfig.PARAM_SERVERNAME );
+    serverPortString = configParams.getParameter( SharePointConfig.PARAM_SERVERPORT );
+    serverLocation = configParams.getParameter(SharePointConfig.PARAM_SERVERLOCATION);
+    if (serverLocation == null)
+      serverLocation = "";
+    if (serverLocation.endsWith("/"))
+      serverLocation = serverLocation.substring(0,serverLocation.length()-1);
+    if (serverLocation.length() > 0 && !serverLocation.startsWith("/"))
+      serverLocation = "/" + serverLocation;
+    encodedServerLocation = serverLocation;
+    serverLocation = decodePath(serverLocation);
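+    // For example, a configured server location of "sites/test/" is normalized to "/sites/test";
+    // encodedServerLocation keeps the configured (URL-encoded) form, while serverLocation holds
+    // the decoded path.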
+
+    serverUserName = configParams.getParameter(SharePointConfig.PARAM_SERVERUSERNAME);
+    password = configParams.getObfuscatedParameter(SharePointConfig.PARAM_SERVERPASSWORD);
+    int index = serverUserName.indexOf("\\");
+    if (index != -1)
+    {
+      strippedUserName = serverUserName.substring(index+1);
+      ntlmDomain = serverUserName.substring(0,index);
+    }
+    else
+    {
+      strippedUserName = null;
+      ntlmDomain = null;
+    }
+    
+    keystoreData = params.getParameter(SharePointConfig.PARAM_SERVERKEYSTORE);
+
+  }
+
+  // All methods below this line will ONLY be called if a connect() call succeeded
+  // on this instance!
+
+  /** Check connection for sanity.
+  */
+  @Override
+  public String check()
+    throws ManifoldCFException
+  {
+    getSharePointSession();
+    try
+    {
+      URL urlServer = new URL( serverUrl );
+    }
+    catch ( MalformedURLException e )
+    {
+      return "Illegal SharePoint url: "+e.getMessage();
+    }
+
+    try
+    {
+      proxy.checkConnection( "/" );
+    }
+    catch (ManifoldCFException e)
+    {
+      return e.getMessage();
+    }
+
+    return super.check();
+  }
+
+  /** Poll.  The connection should be closed if it has been idle for too long.
+  */
+  @Override
+  public void poll()
+    throws ManifoldCFException
+  {
+    long currentTime = System.currentTimeMillis();
+    if (proxy != null && currentTime >= sharepointSessionTimeout)
+      expireSharePointSession();
+    if (connectionManager != null)
+      connectionManager.closeIdleConnections(60000L,TimeUnit.MILLISECONDS);
+    super.poll();
+  }
+
+  /** This method is called to assess whether this connector instance should
+  * actually be counted as being connected.
+  *@return true if the connector instance is actually connected.
+  */
+  @Override
+  public boolean isConnected()
+  {
+    return connectionManager != null;
+  }
+
+  /** Close the connection.  Call this before discarding the repository connector.
+  */
+  @Override
+  public void disconnect()
+    throws ManifoldCFException
+  {
+    // Clean up caching parameters
+    
+    cacheLifetime = null;
+    cacheLRUsize = null;
+    
+    // Clean up SharePoint parameters
+    
+    serverUrl = null;
+    fileBaseUrl = null;
+    serverUserName = null;
+    strippedUserName = null;
+    password = null;
+    ntlmDomain = null;
+    serverName = null;
+    serverLocation = null;
+    encodedServerLocation = null;
+    serverPort = -1;
+
+    keystoreData = null;
+    keystoreManager = null;
+
+    proxy = null;
+    httpClient = null;
+    if (connectionManager != null)
+      connectionManager.shutdown();
+    connectionManager = null;
+
+    hasSessionParameters = false;
+    
+    super.disconnect();
+  }
+
+  /** Obtain the access tokens for a given user name.
+  *@param userName is the user name or identifier.
+  *@return the response tokens (according to the current authority).
+  * (Should throw an exception only when a condition cannot be properly described within the authorization response object.)
+  */
+  @Override
+  public AuthorizationResponse getAuthorizationResponse(String userName)
+    throws ManifoldCFException
+  {
+    getSessionParameters();
+    // Construct a cache description object
+    ICacheDescription objectDescription = new AuthorizationResponseDescription(userName,
+      serverName,serverPortString,serverLocation,serverProtocol,serverUserName,password,
+      this.responseLifetime,this.LRUsize);
+    
+    // Enter the cache
+    ICacheHandle ch = cacheManager.enterCache(new ICacheDescription[]{objectDescription},null,null);
+    try
+    {
+      ICacheCreateHandle createHandle = cacheManager.enterCreateSection(ch);
+      try
+      {
+        // Lookup the object
+        AuthorizationResponse response = (AuthorizationResponse)cacheManager.lookupObject(createHandle,objectDescription);
+        if (response != null)
+          return response;
+        // Create the object.
+        response = getAuthorizationResponseUncached(userName);
+        // Save it in the cache
+        cacheManager.saveObject(createHandle,objectDescription,response);
+        // And return it...
+        return response;
+      }
+      finally
+      {
+        cacheManager.leaveCreateSection(createHandle);
+      }
+    }
+    finally
+    {
+      cacheManager.leaveCache(ch);
+    }
+  }
+  
+  /** Obtain the access tokens for a given user name, uncached.
+  *@param userName is the user name or identifier.
+  *@return the response tokens (according to the current authority).
+  * (Should throw an exception only when a condition cannot be properly described within the authorization response object.)
+  */
+  protected AuthorizationResponse getAuthorizationResponseUncached(String userName)
+    throws ManifoldCFException
+  {
+    //String searchBase = "CN=Administrator,CN=Users,DC=qa-ad-76,DC=metacarta,DC=com";
+    int index = userName.indexOf("@");
+    if (index == -1)
+      throw new ManifoldCFException("Username is in unexpected form (no @): '"+userName+"'");
+
+    String userPart = userName.substring(0,index);
+    String domainPart = userName.substring(index+1);
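+    // For example (hypothetical input), "jsmith@example.com" yields userPart "jsmith" and
+    // domainPart "example.com", which is passed to SharePoint below as "example.com\jsmith".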
+
+    // First, look up user in SharePoint.
+    getSharePointSession();
+    List<String> sharePointTokens = proxy.getAccessTokens("/", domainPart + "\\" + userPart);
+    if (sharePointTokens == null)
+      return RESPONSE_USERNOTFOUND_ADDITIVE;
+    
+    return new AuthorizationResponse(sharePointTokens.toArray(new String[0]),AuthorizationResponse.RESPONSE_OK);
+  }
+
+  /** Obtain the default access tokens for a given user name.
+  *@param userName is the user name or identifier.
+  *@return the default response tokens, presuming that the connect method fails.
+  */
+  @Override
+  public AuthorizationResponse getDefaultAuthorizationResponse(String userName)
+  {
+    // The default response if the connect method fails
+    return RESPONSE_UNREACHABLE_ADDITIVE;
+  }
+
+  // UI support methods.
+  //
+  // These support methods are involved in setting up authority connection configuration information. The configuration methods cannot assume that the
+  // current authority object is connected.  That is why they receive a thread context argument.
+    
+  /** Output the configuration header section.
+  * This method is called in the head section of the connector's configuration page.  Its purpose is to add the required tabs to the list, and to output any
+  * javascript methods that might be needed by the configuration editing HTML.
+  *@param threadContext is the local thread context.
+  *@param out is the output to which any HTML should be sent.
+  *@param parameters are the configuration parameters, as they currently exist, for this connection being configured.
+  *@param tabsArray is an array of tab names.  Add to this array any tab names that are specific to the connector.
+  */
+  @Override
+  public void outputConfigurationHeader(IThreadContext threadContext, IHTTPOutput out, Locale locale, ConfigParams parameters, List<String> tabsArray)
+    throws ManifoldCFException, IOException
+  {
+    tabsArray.add(Messages.getString(locale,"SharePointAuthority.Server"));
+    tabsArray.add(Messages.getString(locale,"SharePointAuthority.Cache"));
+    Messages.outputResourceWithVelocity(out,locale,"editConfiguration.js",null);
+  }
+  
+  /** Output the configuration body section.
+  * This method is called in the body section of the authority connector's configuration page.  Its purpose is to present the required form elements for editing.
+  * The coder can presume that the HTML that is output from this configuration will be within appropriate <html>, <body>, and <form> tags.  The name of the
+  * form is "editconnection".
+  *@param threadContext is the local thread context.
+  *@param out is the output to which any HTML should be sent.
+  *@param parameters are the configuration parameters, as they currently exist, for this connection being configured.
+  *@param tabName is the current tab name.
+  */
+  @Override
+  public void outputConfigurationBody(IThreadContext threadContext, IHTTPOutput out, Locale locale, ConfigParams parameters, String tabName)
+    throws ManifoldCFException, IOException
+  {
+    Map<String,Object> velocityContext = new HashMap<String,Object>();
+    velocityContext.put("TabName",tabName);
+    fillInCacheTab(velocityContext,out,parameters);
+    fillInServerTab(velocityContext,out,parameters);
+    Messages.outputResourceWithVelocity(out,locale,"editConfiguration_Cache.html",velocityContext);
+    Messages.outputResourceWithVelocity(out,locale,"editConfiguration_Server.html",velocityContext);
+  }
+
+  protected static void fillInServerTab(Map<String,Object> velocityContext, IHTTPOutput out, ConfigParams parameters)
+    throws ManifoldCFException
+  {
+    String serverVersion = parameters.getParameter(SharePointConfig.PARAM_SERVERVERSION);
+    if (serverVersion == null)
+      serverVersion = "2.0";
+
+    String serverProtocol = parameters.getParameter(SharePointConfig.PARAM_SERVERPROTOCOL);
+    if (serverProtocol == null)
+      serverProtocol = "http";
+
+    String serverName = parameters.getParameter(SharePointConfig.PARAM_SERVERNAME);
+    if (serverName == null)
+      serverName = "localhost";
+
+    String serverPort = parameters.getParameter(SharePointConfig.PARAM_SERVERPORT);
+    if (serverPort == null)
+      serverPort = "";
+
+    String serverLocation = parameters.getParameter(SharePointConfig.PARAM_SERVERLOCATION);
+    if (serverLocation == null)
+      serverLocation = "";
+      
+    String userName = parameters.getParameter(SharePointConfig.PARAM_SERVERUSERNAME);
+    if (userName == null)
+      userName = "";
+
+    String password = parameters.getObfuscatedParameter(SharePointConfig.PARAM_SERVERPASSWORD);
+    if (password == null)
+      password = "";
+    else
+      password = out.mapPasswordToKey(password);
+
+    String keystore = parameters.getParameter(SharePointConfig.PARAM_SERVERKEYSTORE);
+    IKeystoreManager localKeystore;
+    if (keystore == null)
+      localKeystore = KeystoreManagerFactory.make("");
+    else
+      localKeystore = KeystoreManagerFactory.make("",keystore);
+
+    List<Map<String,String>> certificates = new ArrayList<Map<String,String>>();
+    
+    String[] contents = localKeystore.getContents();
+    for (String alias : contents)
+    {
+      String description = localKeystore.getDescription(alias);
+      if (description.length() > 128)
+        description = description.substring(0,125) + "...";
+      Map<String,String> certificate = new HashMap<String,String>();
+      certificate.put("ALIAS", alias);
+      certificate.put("DESCRIPTION", description);
+      certificates.add(certificate);
+    }
+    
+    // Fill in context
+    velocityContext.put("SERVERVERSION", serverVersion);
+    velocityContext.put("SERVERPROTOCOL", serverProtocol);
+    velocityContext.put("SERVERNAME", serverName);
+    velocityContext.put("SERVERPORT", serverPort);
+    velocityContext.put("SERVERLOCATION", serverLocation);
+    velocityContext.put("USERNAME", userName);
+    velocityContext.put("PASSWORD", password);
+    if (keystore != null)
+      velocityContext.put("KEYSTORE", keystore);
+    velocityContext.put("CERTIFICATELIST", certificates);
+    
+  }
+
+  protected static void fillInCacheTab(Map<String,Object> velocityContext, IPasswordMapperActivity mapper, ConfigParams parameters)
+  {
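+    // When no values have been configured yet, fall back to the defaults below:
+    // a cache lifetime of 1 minute and an LRU size of 1000 entries.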
+    String cacheLifetime = parameters.getParameter(SharePointConfig.PARAM_CACHELIFETIME);
+    if (cacheLifetime == null)
+      cacheLifetime = "1";
+    velocityContext.put("CACHELIFETIME",cacheLifetime);
+    String cacheLRUsize = parameters.getParameter(SharePointConfig.PARAM_CACHELRUSIZE);
+    if (cacheLRUsize == null)
+      cacheLRUsize = "1000";
+    velocityContext.put("CACHELRUSIZE",cacheLRUsize);
+  }
+  
+  /** Process a configuration post.
+  * This method is called at the start of the authority connector's configuration page, whenever there is a possibility that form data for a connection has been
+  * posted.  Its purpose is to gather form information and modify the configuration parameters accordingly.
+  * The name of the posted form is "editconnection".
+  *@param threadContext is the local thread context.
+  *@param variableContext is the set of variables available from the post, including binary file post information.
+  *@param parameters are the configuration parameters, as they currently exist, for this connection being configured.
+  *@return null if all is well, or a string error message if there is an error that should prevent saving of the connection (and cause a redirection to an error page).
+  */
+  @Override
+  public String processConfigurationPost(IThreadContext threadContext, IPostParameters variableContext, Locale locale, ConfigParams parameters)
+    throws ManifoldCFException
+  {
+    // Cache parameters
+    
+    String cacheLifetime = variableContext.getParameter("cachelifetime");
+    if (cacheLifetime != null)
+      parameters.setParameter(SharePointConfig.PARAM_CACHELIFETIME,cacheLifetime);
+    String cacheLRUsize = variableContext.getParameter("cachelrusize");
+    if (cacheLRUsize != null)
+      parameters.setParameter(SharePointConfig.PARAM_CACHELRUSIZE,cacheLRUsize);
+    
+    // SharePoint server parameters
+    
+    String serverVersion = variableContext.getParameter("serverVersion");
+    if (serverVersion != null)
+      parameters.setParameter(SharePointConfig.PARAM_SERVERVERSION,serverVersion);
+
+    String serverProtocol = variableContext.getParameter("serverProtocol");
+    if (serverProtocol != null)
+      parameters.setParameter(SharePointConfig.PARAM_SERVERPROTOCOL,serverProtocol);
+
+    String serverName = variableContext.getParameter("serverName");
+
+    if (serverName != null)
+      parameters.setParameter(SharePointConfig.PARAM_SERVERNAME,serverName);
+
+    String serverPort = variableContext.getParameter("serverPort");
+    if (serverPort != null)
+      parameters.setParameter(SharePointConfig.PARAM_SERVERPORT,serverPort);
+
+    String serverLocation = variableContext.getParameter("serverLocation");
+    if (serverLocation != null)
+      parameters.setParameter(SharePointConfig.PARAM_SERVERLOCATION,serverLocation);
+
+    String userName = variableContext.getParameter("userName");
+    if (userName != null)
+      parameters.setParameter(SharePointConfig.PARAM_SERVERUSERNAME,userName);
+
+    String password = variableContext.getParameter("password");
+    if (password != null)
+      parameters.setObfuscatedParameter(SharePointConfig.PARAM_SERVERPASSWORD,variableContext.mapKeyToPassword(password));
+
+    String keystoreValue = variableContext.getParameter("keystoredata");
+    if (keystoreValue != null)
+      parameters.setParameter(SharePointConfig.PARAM_SERVERKEYSTORE,keystoreValue);
+
+    String configOp = variableContext.getParameter("configop");
+    if (configOp != null)
+    {
+      if (configOp.equals("Delete"))
+      {
+        String alias = variableContext.getParameter("shpkeystorealias");
+        keystoreValue = parameters.getParameter(SharePointConfig.PARAM_SERVERKEYSTORE);
+        IKeystoreManager mgr;
+        if (keystoreValue != null)
+          mgr = KeystoreManagerFactory.make("",keystoreValue);
+        else
+          mgr = KeystoreManagerFactory.make("");
+        mgr.remove(alias);
+        parameters.setParameter(SharePointConfig.PARAM_SERVERKEYSTORE,mgr.getString());
+      }
+      else if (configOp.equals("Add"))
+      {
+        String alias = IDFactory.make(threadContext);
+        byte[] certificateValue = variableContext.getBinaryBytes("shpcertificate");
+        keystoreValue = parameters.getParameter(SharePointConfig.PARAM_SERVERKEYSTORE);
+        IKeystoreManager mgr;
+        if (keystoreValue != null)
+          mgr = KeystoreManagerFactory.make("",keystoreValue);
+        else
+          mgr = KeystoreManagerFactory.make("");
+        java.io.InputStream is = new java.io.ByteArrayInputStream(certificateValue);
+        String certError = null;
+        try
+        {
+          mgr.importCertificate(alias,is);
+        }
+        catch (Throwable e)
+        {
+          certError = e.getMessage();
+        }
+        finally
+        {
+          try
+          {
+            is.close();
+          }
+          catch (IOException e)
+          {
+            // Don't report anything
+          }
+        }
+
+        if (certError != null)
+        {
+          // Redirect to error page
+          return "Illegal certificate: "+certError;
+        }
+        parameters.setParameter(SharePointConfig.PARAM_SERVERKEYSTORE,mgr.getString());
+      }
+    }
+    
+    return null;
+  }
+  
+  /** View configuration.
+  * This method is called in the body section of the authority connector's view configuration page.  Its purpose is to present the connection information to the user.
+  * The coder can presume that the HTML that is output from this configuration will be within appropriate <html> and <body> tags.
+  *@param threadContext is the local thread context.
+  *@param out is the output to which any HTML should be sent.
+  *@param parameters are the configuration parameters, as they currently exist, for this connection being configured.
+  */
+  @Override
+  public void viewConfiguration(IThreadContext threadContext, IHTTPOutput out, Locale locale, ConfigParams parameters)
+    throws ManifoldCFException, IOException
+  {
+    Map<String,Object> velocityContext = new HashMap<String,Object>();
+    fillInCacheTab(velocityContext,out,parameters);
+    fillInServerTab(velocityContext,out,parameters);
+    Messages.outputResourceWithVelocity(out,locale,"viewConfiguration.html",velocityContext);
+  }
+
+  // Protected methods
+
+  /** Get parameters needed for caching.
+  */
+  protected void getSessionParameters()
+    throws ManifoldCFException
+  {
+    if (!hasSessionParameters)
+    {
+      try
+      {
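+        // cacheLifetime is expressed in minutes; convert it to milliseconds here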
+        responseLifetime = Long.parseLong(this.cacheLifetime) * 60L * 1000L;
+        LRUsize = Integer.parseInt(this.cacheLRUsize);
+      }
+      catch (NumberFormatException e)
+      {
+        throw new ManifoldCFException("Cache lifetime or Cache LRU size must be an integer: "+e.getMessage(),e);
+      }
+      hasSessionParameters = true;
+    }
+  }
+  
+  protected void getSharePointSession()
+    throws ManifoldCFException
+  {
+    if (proxy == null)
+    {
+      // Set up server URL
+      try
+      {
+        if (serverPortString == null || serverPortString.length() == 0)
+        {
+          if (serverProtocol.equals("https"))
+            this.serverPort = 443;
+          else
+            this.serverPort = 80;
+        }
+        else
+          this.serverPort = Integer.parseInt(serverPortString);
+      }
+      catch (NumberFormatException e)
+      {
+        throw new ManifoldCFException(e.getMessage(),e);
+      }
+      
+      serverUrl = serverProtocol + "://" + serverName;
+      if (serverProtocol.equals("https"))
+      {
+        if (serverPort != 443)
+          serverUrl += ":" + Integer.toString(serverPort);
+      }
+      else
+      {
+        if (serverPort != 80)
+          serverUrl += ":" + Integer.toString(serverPort);
+      }
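+      // For example (hypothetical values), protocol "https", server "sp.example.com" and
+      // port 8443 would produce the server URL "https://sp.example.com:8443".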
+
+      // Set up ssl if indicated
+
+      PoolingClientConnectionManager localConnectionManager = new PoolingClientConnectionManager();
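+      // Limit the pool to a single connection for this authority instance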
+      localConnectionManager.setMaxTotal(1);
+      connectionManager = localConnectionManager;
+
+      if (keystoreData != null)
+      {
+        keystoreManager = KeystoreManagerFactory.make("",keystoreData);
+        SSLSocketFactory myFactory = new SSLSocketFactory(keystoreManager.getSecureSocketFactory(), new BrowserCompatHostnameVerifier());
+        Scheme myHttpsProtocol = new Scheme("https", 443, myFactory);
+        connectionManager.getSchemeRegistry().register(myHttpsProtocol);
+      }
+
+      fileBaseUrl = serverUrl + encodedServerLocation;
+
+      BasicHttpParams params = new BasicHttpParams();
+      params.setBooleanParameter(CoreConnectionPNames.TCP_NODELAY,true);
+      params.setBooleanParameter(CoreConnectionPNames.STALE_CONNECTION_CHECK,false);
+      params.setIntParameter(CoreConnectionPNames.CONNECTION_TIMEOUT,60000);
+      params.setIntParameter(CoreConnectionPNames.SO_TIMEOUT,900000);
+      params.setBooleanParameter(ClientPNames.ALLOW_CIRCULAR_REDIRECTS,true);
+      DefaultHttpClient localHttpClient = new DefaultHttpClient(connectionManager,params);
+      // No retries
+      localHttpClient.setHttpRequestRetryHandler(new HttpRequestRetryHandler()
+        {
+          public boolean retryRequest(
+            IOException exception,
+            int executionCount,
+            HttpContext context)
+          {
+            return false;
+          }
+       
+        });
+      localHttpClient.setRedirectStrategy(new DefaultRedirectStrategy());
+      if (strippedUserName != null)
+      {
+        localHttpClient.getCredentialsProvider().setCredentials(
+          new AuthScope(serverName,serverPort),
+          new NTCredentials(strippedUserName, password, currentHost, ntlmDomain));
+      }
+
+      httpClient = localHttpClient;
+      
+      proxy = new SPSProxyHelper( serverUrl, encodedServerLocation, serverLocation, serverUserName, password,
+        org.apache.manifoldcf.sharepoint.CommonsHTTPSender.class, "sharepoint-client-config.wsdd",
+        httpClient );
+      
+    }
+    sharepointSessionTimeout = System.currentTimeMillis() + SharePointExpirationInterval;
+  }
+  
+  protected void expireSharePointSession()
+    throws ManifoldCFException
+  {
+    serverPort = -1;
+    serverUrl = null;
+    fileBaseUrl = null;
+    keystoreManager = null;
+    proxy = null;
+    httpClient = null;
+    if (connectionManager != null)
+      connectionManager.shutdown();
+    connectionManager = null;
+  }
+
+  /** Decode a path item.
+  */
+  public static String pathItemDecode(String pathItem)
+  {
+    try
+    {
+      return java.net.URLDecoder.decode(pathItem.replaceAll("\\%20","+"),"utf-8");
+    }
+    catch (UnsupportedEncodingException e)
+    {
+      // Bad news, utf-8 not available!
+      throw new RuntimeException("No utf-8 encoding available");
+    }
+  }
+
+  /** Encode a path item.
+  */
+  public static String pathItemEncode(String pathItem)
+  {
+    try
+    {
+      String output = java.net.URLEncoder.encode(pathItem,"utf-8");
+      return output.replaceAll("\\+","%20");
+    }
+    catch (UnsupportedEncodingException e)
+    {
+      // Bad news, utf-8 not available!
+      throw new RuntimeException("No utf-8 encoding available");
+    }
+  }
+
+  /** Given a path that is /-separated, and otherwise encoded, decode properly to convert to
+  * unencoded form.
+  */
+  public static String decodePath(String relPath)
+  {
+    StringBuilder sb = new StringBuilder();
+    String[] pathEntries = relPath.split("/");
+    int k = 0;
+
+    boolean isFirst = true;
+    while (k < pathEntries.length)
+    {
+      if (isFirst)
+        isFirst = false;
+      else
+        sb.append("/");
+      sb.append(pathItemDecode(pathEntries[k++]));
+    }
+    return sb.toString();
+  }
+
+  /** Given a path that is /-separated, and otherwise unencoded, encode properly for an actual
+  * URI.
+  */
+  public static String encodePath(String relPath)
+  {
+    StringBuilder sb = new StringBuilder();
+    String[] pathEntries = relPath.split("/");
+    int k = 0;
+
+    boolean isFirst = true;
+    while (k < pathEntries.length)
+    {
+      if (isFirst)
+        isFirst = false;
+      else
+        sb.append("/");
+      sb.append(pathItemEncode(pathEntries[k++]));
+    }
+    return sb.toString();
+  }
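+
+  // For example (hypothetical input), encodePath("My Library/My File.docx") produces
+  // "My%20Library/My%20File.docx", and decodePath reverses that transformation.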
+
+  protected static StringSet emptyStringSet = new StringSet();
+  
+  /** This is the cache object descriptor for cached access tokens from
+  * this connector.
+  */
+  protected static class AuthorizationResponseDescription extends org.apache.manifoldcf.core.cachemanager.BaseDescription
+  {
+    /** The user name */
+    protected final String userName;
+    /** The response lifetime */
+    protected final long responseLifetime;
+    /** The expiration time */
+    protected long expirationTime = -1;
+    // Parameters designed to guarantee cache key uniqueness
+    protected final String serverName;
+    protected final String serverPortString;
+    protected final String serverLocation;
+    protected final String serverProtocol;
+    protected final String serverUserName;
+    protected final String password;
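+    // These connection-identifying parameters participate in equals(), hashCode(), and the
+    // critical section name, so cached responses are never shared across differently-configured connections.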
+    
+    /** Constructor. */
+    public AuthorizationResponseDescription(String userName,
+      String serverName, String serverPortString, String serverLocation, String serverProtocol, String serverUserName, String password,
+      long responseLifetime, int LRUsize)
+    {
+      super("SharePointAuthority",LRUsize);
+      this.userName = userName;
+      this.responseLifetime = responseLifetime;
+      this.serverName = serverName;
+      this.serverPortString = serverPortString;
+      this.serverLocation = serverLocation;
+      this.serverProtocol = serverProtocol;
+      this.serverUserName = serverUserName;
+      this.password = password;
+    }
+
+    /** Return the invalidation keys for this object. */
+    public StringSet getObjectKeys()
+    {
+      return emptyStringSet;
+    }
+
+    /** Get the critical section name, used for synchronizing the creation of the object */
+    public String getCriticalSectionName()
+    {
+      StringBuilder sb = new StringBuilder(getClass().getName());
+      sb.append("-").append(userName);
+      sb.append("-").append(serverName);
+      sb.append("-").append(serverPortString);
+      sb.append("-").append(serverLocation);
+      sb.append("-").append(serverProtocol);
+      sb.append("-").append(serverUserName);
+      sb.append("-").append(password);
+      return sb.toString();
+    }
+
+    /** Return the object expiration time */
+    public long getObjectExpirationTime(long currentTime)
+    {
+      if (expirationTime == -1)
+        expirationTime = currentTime + responseLifetime;
+      return expirationTime;
+    }
+
+    public int hashCode()
+    {
+      int rval = userName.hashCode();
+      rval += serverName.hashCode();
+      rval += serverPortString.hashCode();
+      rval += serverLocation.hashCode();
+      rval += serverProtocol.hashCode();
+      rval += serverUserName.hashCode();
+      rval += password.hashCode();
+      return rval;
+    }
+    
+    public boolean equals(Object o)
+    {
+      if (!(o instanceof AuthorizationResponseDescription))
+        return false;
+      AuthorizationResponseDescription ard = (AuthorizationResponseDescription)o;
+      if (!ard.userName.equals(userName))
+        return false;
+      if (!ard.serverName.equals(serverName))
+        return false;
+      if (!ard.serverPortString.equals(serverPortString))
+        return false;
+      if (!ard.serverLocation.equals(serverLocation))
+        return false;
+      if (!ard.serverProtocol.equals(serverProtocol))
+        return false;
+      if (!ard.serverUserName.equals(serverUserName))
+        return false;
+      if (!ard.password.equals(password))
+        return false;
+      return true;
+    }
+    
+  }
+  
+}
+
+
diff --git a/connectors/sharepoint/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/sharepoint/SharePointConfig.java b/connectors/sharepoint/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/sharepoint/SharePointConfig.java
new file mode 100644
index 0000000..c97308d
--- /dev/null
+++ b/connectors/sharepoint/connector/src/main/java/org/apache/manifoldcf/authorities/authorities/sharepoint/SharePointConfig.java
@@ -0,0 +1,72 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.authorities.authorities.sharepoint;
+
+
+/** Parameters and output data for SharePoint authority.
+*/
+public class SharePointConfig
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  // Configuration parameters
+
+  /** Cache lifetime */
+  public static final String PARAM_CACHELIFETIME = "Cache lifetime";
+  /** Cache LRU size */
+  public static final String PARAM_CACHELRUSIZE = "Cache LRU size";
+
+  /** SharePoint server version */
+  public static final String PARAM_SERVERVERSION = "serverVersion";
+  /** SharePoint server protocol */
+  public static final String PARAM_SERVERPROTOCOL = "serverProtocol";
+  /** SharePoint server name */
+  public static final String PARAM_SERVERNAME = "serverName";
+  /** SharePoint server port */
+  public static final String PARAM_SERVERPORT = "serverPort";
+  /** SharePoint server location */
+  public static final String PARAM_SERVERLOCATION = "serverLocation";
+  /** SharePoint server user name */
+  public static final String PARAM_SERVERUSERNAME = "userName";
+  /** SharePoint server password */
+  public static final String PARAM_SERVERPASSWORD = "password";
+  /** SharePoint server certificate store */
+  public static final String PARAM_SERVERKEYSTORE = "keystore";
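+
+  // Example (hypothetical) values for the parameters above, as they might appear in a
+  // connection configuration: serverVersion "2.0", serverProtocol "https",
+  // serverName "sp.example.com", serverPort "443", serverLocation "/sites/intranet".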
+
+  // Nodes
+  
+  /** Domain controller node */
+  public static final String NODE_DOMAINCONTROLLER = "domaincontroller";
+  
+  // Attributes
+  
+  /** Domain suffix */
+  public static final String ATTR_SUFFIX = "suffix";
+  /** DC server name */
+  public static final String ATTR_DOMAINCONTROLLER = "domaincontroller";
+  /** DC user name */
+  public static final String ATTR_USERNAME = "username";
+  /** DC password */
+  public static final String ATTR_PASSWORD = "password";
+  /** DC authentication method */
+  public static final String ATTR_AUTHENTICATION = "authentication";
+  /** DC user ACLs username attribute name */
+  public static final String ATTR_USERACLsUSERNAME = "useraclsusername";
+
+}
diff --git a/connectors/sharepoint/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/sharepoint/CommonsHTTPSender.java b/connectors/sharepoint/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/sharepoint/CommonsHTTPSender.java
deleted file mode 100644
index 3ecb7e9..0000000
--- a/connectors/sharepoint/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/sharepoint/CommonsHTTPSender.java
+++ /dev/null
@@ -1,909 +0,0 @@
-/*
-* Copyright 2001-2004 The Apache Software Foundation.
-*
-* Licensed under the Apache License, Version 2.0 (the "License");
-* you may not use this file except in compliance with the License.
-* You may obtain a copy of the License at
-*
-*      http://www.apache.org/licenses/LICENSE-2.0
-*
-* Unless required by applicable law or agreed to in writing, software
-* distributed under the License is distributed on an "AS IS" BASIS,
-* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-* See the License for the specific language governing permissions and
-* limitations under the License.
-*
-* $Id: CommonsHTTPSender.java 988245 2010-08-23 18:39:35Z kwright $
-*/
-package org.apache.manifoldcf.crawler.connectors.sharepoint;
-
-import org.apache.manifoldcf.core.common.XThreadInputStream;
-
-import org.apache.axis.AxisFault;
-import org.apache.axis.Constants;
-import org.apache.axis.Message;
-import org.apache.axis.MessageContext;
-import org.apache.axis.components.logger.LogFactory;
-import org.apache.axis.components.net.CommonsHTTPClientProperties;
-import org.apache.axis.components.net.CommonsHTTPClientPropertiesFactory;
-import org.apache.axis.components.net.TransportClientProperties;
-import org.apache.axis.components.net.TransportClientPropertiesFactory;
-import org.apache.axis.transport.http.HTTPConstants;
-import org.apache.axis.handlers.BasicHandler;
-import org.apache.axis.soap.SOAP12Constants;
-import org.apache.axis.soap.SOAPConstants;
-import org.apache.axis.utils.JavaUtils;
-import org.apache.axis.utils.Messages;
-import org.apache.axis.utils.NetworkUtils;
-
-import org.apache.http.client.HttpClient;
-import org.apache.http.client.methods.HttpRequestBase;
-import org.apache.http.client.methods.HttpGet;
-import org.apache.http.client.methods.HttpPost;
-import org.apache.http.HttpEntity;
-import org.apache.http.HttpResponse;
-import org.apache.http.Header;
-import org.apache.http.params.CoreProtocolPNames;
-import org.apache.http.params.HttpProtocolParams;
-import org.apache.http.ProtocolVersion;
-import org.apache.http.util.EntityUtils;
-import org.apache.http.message.BasicHeader;
-
-import org.apache.http.conn.ConnectTimeoutException;
-import org.apache.http.client.RedirectException;
-import org.apache.http.client.CircularRedirectException;
-import org.apache.http.NoHttpResponseException;
-import org.apache.http.HttpException;
-
-import org.apache.commons.logging.Log;
-
-import javax.xml.soap.MimeHeader;
-import javax.xml.soap.MimeHeaders;
-import javax.xml.soap.SOAPException;
-import java.io.ByteArrayOutputStream;
-import java.io.FilterInputStream;
-import java.io.InterruptedIOException;
-import java.io.IOException;
-import java.io.InputStream;
-import java.io.OutputStream;
-import java.io.Reader;
-import java.io.InputStreamReader;
-import java.io.Writer;
-import java.io.StringWriter;
-import java.io.File;
-import java.io.FileOutputStream;
-import java.io.FileInputStream;
-import java.net.URL;
-import java.util.ArrayList;
-import java.util.Hashtable;
-import java.util.Iterator;
-import java.util.HashMap;
-import java.util.Map;
-import java.util.List;
-
-/* Class to use httpcomponents to communicate with a SOAP server.
-* I've replaced the original rather complicated class with a much simpler one that
-* relies on having an HttpClient object passed into the invoke() method.  Since
-* the object is already set up, not much needs to be done in here.
-*/
-
-public class CommonsHTTPSender extends BasicHandler {
-
-  /** Field log           */
-  protected static Log log =
-    LogFactory.getLog(CommonsHTTPSender.class.getName());
-
-  /** Properties */
-  protected CommonsHTTPClientProperties clientProperties;
-
-  public CommonsHTTPSender() {
-    this.clientProperties = CommonsHTTPClientPropertiesFactory.create();
-  }
-
-  /**
-  * invoke creates a socket connection, sends the request SOAP message and then
-  * reads the response SOAP message back from the SOAP server
-  *
-  * @param msgContext the messsage context
-  *
-  * @throws AxisFault
-  */
-  public void invoke(MessageContext msgContext) throws AxisFault {
-    if (log.isDebugEnabled())
-    {
-      log.debug(Messages.getMessage("enter00",
-        "CommonsHTTPSender::invoke"));
-    }
-    
-    // Catch all exceptions and turn them into AxisFaults
-    try
-    {
-      // Get the URL
-      URL targetURL =
-        new URL(msgContext.getStrProp(MessageContext.TRANS_URL));
-
-      // Get the HttpClient
-      HttpClient httpClient = (HttpClient)msgContext.getProperty(SPSProxyHelper.HTTPCLIENT_PROPERTY);
-
-      boolean posting = true;
-      // If we're SOAP 1.2, allow the web method to be set from the
-      // MessageContext.
-      if (msgContext.getSOAPConstants() == SOAPConstants.SOAP12_CONSTANTS) {
-        String webMethod = msgContext.getStrProp(SOAP12Constants.PROP_WEBMETHOD);
-        if (webMethod != null) {
-          posting = webMethod.equals(HTTPConstants.HEADER_POST);
-        }
-      }
-
-      boolean http10 = false;
-      String httpVersion = msgContext.getStrProp(MessageContext.HTTP_TRANSPORT_VERSION);
-      if (httpVersion != null) {
-        if (httpVersion.equals(HTTPConstants.HEADER_PROTOCOL_V10)) {
-          http10 = true;
-        }
-        // assume 1.1
-      }
-
-      HttpRequestBase method;
-        
-      if (posting) {
-        HttpPost postMethod = new HttpPost(targetURL.toString());
-          
-        // set false as default, addContetInfo can overwrite
-        HttpProtocolParams.setUseExpectContinue(postMethod.getParams(),false);
-
-        Message reqMessage = msgContext.getRequestMessage();
-          
-        boolean httpChunkStream = addContextInfo(postMethod, msgContext);
-
-        HttpEntity requestEntity = null;
-        requestEntity = new MessageRequestEntity(reqMessage, httpChunkStream,
-          http10 || !httpChunkStream);
-        postMethod.setEntity(requestEntity);
-        method = postMethod;
-      } else {
-        method = new HttpGet(targetURL.toString());
-      }
-        
-      if (http10)
-        HttpProtocolParams.setVersion(method.getParams(),new ProtocolVersion("HTTP",1,0));
-
-      BackgroundHTTPThread methodThread = new BackgroundHTTPThread(httpClient,method);
-      methodThread.start();
-      try
-      {
-        int returnCode = methodThread.getResponseCode();
-          
-        String contentType =
-          getHeader(methodThread, HTTPConstants.HEADER_CONTENT_TYPE);
-        String contentLocation =
-          getHeader(methodThread, HTTPConstants.HEADER_CONTENT_LOCATION);
-        String contentLength =
-          getHeader(methodThread, HTTPConstants.HEADER_CONTENT_LENGTH);
-        
-        if ((returnCode > 199) && (returnCode < 300)) {
-
-          // SOAP return is OK - so fall through
-        } else if (msgContext.getSOAPConstants() ==
-          SOAPConstants.SOAP12_CONSTANTS) {
-          // For now, if we're SOAP 1.2, fall through, since the range of
-          // valid result codes is much greater
-        } else if ((contentType != null) && !contentType.equals("text/html")
-          && ((returnCode > 499) && (returnCode < 600))) {
-
-          // SOAP Fault should be in here - so fall through
-        } else {
-          String statusMessage = methodThread.getResponseStatus();
-          AxisFault fault = new AxisFault("HTTP",
-            "(" + returnCode + ")"
-          + statusMessage, null,
-            null);
-
-          fault.setFaultDetailString(
-            Messages.getMessage("return01",
-            "" + returnCode,
-            getResponseBodyAsString(methodThread)));
-          fault.addFaultDetail(Constants.QNAME_FAULTDETAIL_HTTPERRORCODE,
-            Integer.toString(returnCode));
-          throw fault;
-        }
-
-        String contentEncoding =
-         methodThread.getFirstHeader(HTTPConstants.HEADER_CONTENT_ENCODING);
-        if (contentEncoding != null) {
-          AxisFault fault = new AxisFault("HTTP",
-            "unsupported content-encoding of '"
-          + contentEncoding
-          + "' found", null, null);
-          throw fault;
-        }
-
-        Map<String,List<String>> responseHeaders = methodThread.getResponseHeaders();
-
-        InputStream dataStream = methodThread.getSafeInputStream();
-
-        Message outMsg = new Message(new BackgroundInputStream(methodThread,dataStream),
-          false, contentType, contentLocation);
-          
-        // Transfer HTTP headers of HTTP message to MIME headers of SOAP message
-        MimeHeaders responseMimeHeaders = outMsg.getMimeHeaders();
-        for (String name : responseHeaders.keySet())
-        {
-          List<String> values = responseHeaders.get(name);
-          for (String value : values) {
-            responseMimeHeaders.addHeader(name,value);
-          }
-        }
-        outMsg.setMessageType(Message.RESPONSE);
-          
-        // Put the message in the message context.
-        msgContext.setResponseMessage(outMsg);
-        
-        // Pass off the method thread to the stream for closure
-        methodThread = null;
-      }
-      finally
-      {
-        if (methodThread != null)
-        {
-          methodThread.abort();
-          methodThread.finishUp();
-        }
-      }
-
-    } catch (AxisFault af) {
-      log.debug(af);
-      throw af;
-    } catch (Exception e) {
-      log.debug(e);
-      throw AxisFault.makeFault(e);
-    }
-
-    if (log.isDebugEnabled()) {
-      log.debug(Messages.getMessage("exit00",
-        "CommonsHTTPSender::invoke"));
-    }
-  }
-
-  /**
-  * Extracts info from message context.
-  *
-  * @param method Post or get method
-  * @param msgContext the message context
-  */
-  private static boolean addContextInfo(HttpPost method,
-    MessageContext msgContext)
-    throws AxisFault {
-
-    boolean httpChunkStream = false;
-
-    // Get SOAPAction, default to ""
-    String action = msgContext.useSOAPAction()
-      ? msgContext.getSOAPActionURI()
-      : "";
-
-    if (action == null) {
-      action = "";
-    }
-
-    Message msg = msgContext.getRequestMessage();
-
-    if (msg != null){
-
-      // First, transfer MIME headers of SOAPMessage to HTTP headers.
-      // Some of these might be overridden later.
-      MimeHeaders mimeHeaders = msg.getMimeHeaders();
-      if (mimeHeaders != null) {
-        for (Iterator i = mimeHeaders.getAllHeaders(); i.hasNext(); ) {
-          MimeHeader mimeHeader = (MimeHeader) i.next();
-          method.addHeader(mimeHeader.getName(),
-            mimeHeader.getValue());
-        }
-      }
-
-      method.setHeader(new BasicHeader(HTTPConstants.HEADER_CONTENT_TYPE,
-        msg.getContentType(msgContext.getSOAPConstants())));
-    }
-    
-    method.setHeader(new BasicHeader("Accept","*/*"));
-
-    method.setHeader(new BasicHeader(HTTPConstants.HEADER_SOAP_ACTION,
-      "\"" + action + "\""));
-    method.setHeader(new BasicHeader(HTTPConstants.HEADER_USER_AGENT, Messages.getMessage("axisUserAgent")));
-
-
-    // process user defined headers for information.
-    Hashtable userHeaderTable =
-      (Hashtable) msgContext.getProperty(HTTPConstants.REQUEST_HEADERS);
-
-    if (userHeaderTable != null) {
-      for (Iterator e = userHeaderTable.entrySet().iterator();
-        e.hasNext();) {
-        Map.Entry me = (Map.Entry) e.next();
-        Object keyObj = me.getKey();
-
-        if (null == keyObj) {
-          continue;
-        }
-        String key = keyObj.toString().trim();
-        String value = me.getValue().toString().trim();
-
-        if (key.equalsIgnoreCase(HTTPConstants.HEADER_EXPECT) &&
-          value.equalsIgnoreCase(HTTPConstants.HEADER_EXPECT_100_Continue)) {
-          HttpProtocolParams.setUseExpectContinue(method.getParams(),true);
-        } else if (key.equalsIgnoreCase(HTTPConstants.HEADER_TRANSFER_ENCODING_CHUNKED)) {
-          String val = me.getValue().toString();
-          if (null != val)  {
-            httpChunkStream = JavaUtils.isTrue(val);
-          }
-        } else {
-          method.addHeader(key, value);
-        }
-      }
-    }
-    
-    return httpChunkStream;
-  }
-
-  private static String getHeader(BackgroundHTTPThread methodThread, String headerName)
-    throws IOException, InterruptedException, HttpException {
-    String header = methodThread.getFirstHeader(headerName);
-    return (header == null) ? null : header.trim();
-  }
-
-  private static String getResponseBodyAsString(BackgroundHTTPThread methodThread)
-    throws IOException, InterruptedException, HttpException {
-    InputStream is = methodThread.getSafeInputStream();
-    if (is != null)
-    {
-      try
-      {
-        String charSet = methodThread.getCharSet();
-        if (charSet == null)
-          charSet = "utf-8";
-        char[] buffer = new char[65536];
-        Reader r = new InputStreamReader(is,charSet);
-        Writer w = new StringWriter();
-        try
-        {
-          while (true)
-          {
-            int amt = r.read(buffer);
-            if (amt == -1)
-              break;
-            w.write(buffer,0,amt);
-          }
-        }
-        finally
-        {
-          w.flush();
-        }
-        return w.toString();
-      }
-      finally
-      {
-        is.close();
-      }
-    }
-    return "";
-  }
-  
-  private static class MessageRequestEntity implements HttpEntity {
-
-    private final Message message;
-    private final boolean httpChunkStream; //Use HTTP chunking or not.
-    private final boolean contentLengthNeeded;
-
-    public MessageRequestEntity(Message message, boolean httpChunkStream, boolean contentLengthNeeded) {
-      this.message = message;
-      this.httpChunkStream = httpChunkStream;
-      this.contentLengthNeeded = contentLengthNeeded;
-    }
-
-    @Override
-    public boolean isChunked() {
-      return httpChunkStream;
-    }
-    
-    @Override
-    public void consumeContent()
-      throws IOException {
-      EntityUtils.consume(this);
-    }
-    
-    @Override
-    public boolean isRepeatable() {
-      return true;
-    }
-
-    @Override
-    public boolean isStreaming() {
-      return false;
-    }
-    
-    @Override
-    public InputStream getContent()
-      throws IOException, IllegalStateException {
-      // MHL
-      return null;
-    }
-    
-    @Override
-    public void writeTo(OutputStream out)
-      throws IOException {
-      try {
-        this.message.writeTo(out);
-      } catch (SOAPException e) {
-        throw new IOException(e.getMessage());
-      }
-    }
-
-    @Override
-    public long getContentLength() {
-      if (contentLengthNeeded) {
-        try {
-          return message.getContentLength();
-        } catch (Exception e) {
-        }
-      }
-      // Unknown (chunked) length
-      return -1L;
-    }
-
-    @Override
-    public Header getContentType() {
-      return null; // a separate header is added
-    }
-
-    @Override
-    public Header getContentEncoding() {
-      return null;
-    }
-  }
-
-  /** This input stream wraps a background http transaction thread, so that
-  * the thread is ended when the stream is closed.
-  */
-  private static class BackgroundInputStream extends InputStream {
-    
-    private BackgroundHTTPThread methodThread = null;
-    private InputStream xThreadInputStream = null;
-    
-    /** Construct an http transaction stream.  The stream is driven by a background
-    * thread, whose existence is tied to this class.  The sequence of activity that
-    * this class expects is as follows:
-    * (1) Construct the httpclient and request object and initialize them
-    * (2) Construct a background method thread, and start it
-    * (3) If the response calls for it, call this constructor, and put the resulting stream
-    *    into the message response
-    * (4) Otherwise, terminate the background method thread in the standard manner,
-    *    being sure NOT
-    */
-    public BackgroundInputStream(BackgroundHTTPThread methodThread, InputStream xThreadInputStream)
-    {
-      this.methodThread = methodThread;
-      this.xThreadInputStream = xThreadInputStream;
-    }
-    
-    @Override
-    public int available()
-      throws IOException
-    {
-      if (xThreadInputStream != null)
-        return xThreadInputStream.available();
-      return super.available();
-    }
-    
-    @Override
-    public void close()
-      throws IOException
-    {
-      try
-      {
-        if (xThreadInputStream != null)
-        {
-          xThreadInputStream.close();
-          xThreadInputStream = null;
-        }
-      }
-      finally
-      {
-        if (methodThread != null)
-        {
-          methodThread.abort();
-          try
-          {
-            methodThread.finishUp();
-          }
-          catch (InterruptedException e)
-          {
-            throw new InterruptedIOException(e.getMessage());
-          }
-          methodThread = null;
-        }
-      }
-    }
-    
-    @Override
-    public void mark(int readlimit)
-    {
-      if (xThreadInputStream != null)
-        xThreadInputStream.mark(readlimit);
-      else
-        super.mark(readlimit);
-    }
-    
-    @Override
-    public void reset()
-      throws IOException
-    {
-      if (xThreadInputStream != null)
-        xThreadInputStream.reset();
-      else
-        super.reset();
-    }
-    
-    @Override
-    public boolean markSupported()
-    {
-      if (xThreadInputStream != null)
-        return xThreadInputStream.markSupported();
-      return super.markSupported();
-    }
-    
-    @Override
-    public long skip(long n)
-      throws IOException
-    {
-      if (xThreadInputStream != null)
-        return xThreadInputStream.skip(n);
-      return super.skip(n);
-    }
-    
-    @Override
-    public int read(byte[] b, int off, int len)
-      throws IOException
-    {
-      if (xThreadInputStream != null)
-        return xThreadInputStream.read(b,off,len);
-      return super.read(b,off,len);
-    }
-
-    @Override
-    public int read(byte[] b)
-      throws IOException
-    {
-      if (xThreadInputStream != null)
-        return xThreadInputStream.read(b);
-      return super.read(b);
-    }
-    
-    @Override
-    public int read()
-      throws IOException
-    {
-      if (xThreadInputStream != null)
-        return xThreadInputStream.read();
-      return -1;
-    }
-    
-  }
-
-  /** This thread does the actual socket communication with the server.
-  * It's set up so that it can be abandoned at shutdown time.
-  *
-  * The way it works is as follows:
-  * - it starts the transaction
-  * - it receives the response, and saves that for the calling class to inspect
-  * - it transfers the data part to an input stream provided to the calling class
-  * - it shuts the connection down
-  *
-  * If there is an error, the sequence is aborted, and an exception is recorded
-  * for the calling class to examine.
-  *
-  * The calling class basically accepts the sequence above.  It starts the
-  * thread, and tries to get a response code.  If instead an exception is seen,
-  * the exception is thrown up the stack.
-  */
-  protected static class BackgroundHTTPThread extends Thread
-  {
-    /** Client and method, all preconfigured */
-    protected final HttpClient httpClient;
-    protected final HttpRequestBase executeMethod;
-    
-    protected HttpResponse response = null;
-    protected Throwable responseException = null;
-    protected XThreadInputStream threadStream = null;
-    protected String charSet = null;
-    protected boolean streamCreated = false;
-    protected Throwable streamException = null;
-    protected boolean abortThread = false;
-
-    protected Throwable shutdownException = null;
-
-    protected Throwable generalException = null;
-    
-    public BackgroundHTTPThread(HttpClient httpClient, HttpRequestBase executeMethod)
-    {
-      super();
-      setDaemon(true);
-      this.httpClient = httpClient;
-      this.executeMethod = executeMethod;
-    }
-
-    public void run()
-    {
-      try
-      {
-        try
-        {
-          // Call the execute method appropriately
-          synchronized (this)
-          {
-            if (!abortThread)
-            {
-              try
-              {
-                response = httpClient.execute(executeMethod);
-              }
-              catch (java.net.SocketTimeoutException e)
-              {
-                responseException = e;
-              }
-              catch (ConnectTimeoutException e)
-              {
-                responseException = e;
-              }
-              catch (InterruptedIOException e)
-              {
-                throw e;
-              }
-              catch (Throwable e)
-              {
-                responseException = e;
-              }
-              this.notifyAll();
-            }
-          }
-          
-          // Start the transfer of the content
-          if (responseException == null)
-          {
-            synchronized (this)
-            {
-              if (!abortThread)
-              {
-                try
-                {
-                  HttpEntity entity = response.getEntity();
-                  InputStream bodyStream = entity.getContent();
-                  if (bodyStream != null)
-                  {
-                    threadStream = new XThreadInputStream(bodyStream);
-                    charSet = EntityUtils.getContentCharSet(entity);
-                  }
-                  streamCreated = true;
-                }
-                catch (java.net.SocketTimeoutException e)
-                {
-                  streamException = e;
-                }
-                catch (ConnectTimeoutException e)
-                {
-                  streamException = e;
-                }
-                catch (InterruptedIOException e)
-                {
-                  throw e;
-                }
-                catch (Throwable e)
-                {
-                  streamException = e;
-                }
-                this.notifyAll();
-              }
-            }
-          }
-          
-          if (responseException == null && streamException == null)
-          {
-            if (threadStream != null)
-            {
-              // Stuff the content until we are done
-              threadStream.stuffQueue();
-            }
-          }
-          
-        }
-        finally
-        {
-          synchronized (this)
-          {
-            try
-            {
-              executeMethod.abort();
-            }
-            catch (Throwable e)
-            {
-              shutdownException = e;
-            }
-            this.notifyAll();
-          }
-        }
-      }
-      catch (Throwable e)
-      {
-        // We catch exceptions here that should ONLY be InterruptedExceptions, as a result of the thread being aborted.
-        this.generalException = e;
-      }
-    }
-
-    public int getResponseCode()
-      throws InterruptedException, IOException, HttpException
-    {
-      // Must wait until the response object is there
-      while (true)
-      {
-        synchronized (this)
-        {
-          checkException(responseException);
-          if (response != null)
-            return response.getStatusLine().getStatusCode();
-          wait();
-        }
-      }
-    }
-
-    public String getResponseStatus()
-      throws InterruptedException, IOException, HttpException
-    {
-      // Must wait until the response object is there
-      while (true)
-      {
-        synchronized (this)
-        {
-          checkException(responseException);
-          if (response != null)
-            return response.getStatusLine().toString();
-          wait();
-        }
-      }
-    }
-
-    public Map<String,List<String>> getResponseHeaders()
-      throws InterruptedException, IOException, HttpException
-    {
-      // Must wait for the response object to appear
-      while (true)
-      {
-        synchronized (this)
-        {
-          checkException(responseException);
-          if (response != null)
-          {
-            Header[] headers = response.getAllHeaders();
-            Map<String,List<String>> rval = new HashMap<String,List<String>>();
-            int i = 0;
-            while (i < headers.length)
-            {
-              Header h = headers[i++];
-              String name = h.getName();
-              String value = h.getValue();
-              List<String> values = rval.get(name);
-              if (values == null)
-              {
-                values = new ArrayList<String>();
-                rval.put(name,values);
-              }
-              values.add(value);
-            }
-            return rval;
-          }
-          wait();
-        }
-      }
-
-    }
-    
-    public String getFirstHeader(String headerName)
-      throws InterruptedException, IOException, HttpException
-    {
-      // Must wait for the response object to appear
-      while (true)
-      {
-        synchronized (this)
-        {
-          checkException(responseException);
-          if (response != null)
-          {
-            Header h = response.getFirstHeader(headerName);
-            if (h == null)
-              return null;
-            return h.getValue();
-          }
-          wait();
-        }
-      }
-    }
-
-    public InputStream getSafeInputStream()
-      throws InterruptedException, IOException, HttpException
-    {
-      // Must wait until stream is created, or until we note an exception was thrown.
-      while (true)
-      {
-        synchronized (this)
-        {
-          if (responseException != null)
-            throw new IllegalStateException("Check for response before getting stream");
-          checkException(streamException);
-          if (streamCreated)
-            return threadStream;
-          wait();
-        }
-      }
-    }
-    
-    public String getCharSet()
-      throws InterruptedException, IOException, HttpException
-    {
-      while (true)
-      {
-        synchronized (this)
-        {
-          if (responseException != null)
-            throw new IllegalStateException("Check for response before getting charset");
-          checkException(streamException);
-          if (streamCreated)
-            return charSet;
-          wait();
-        }
-      }
-    }
-    
-    public void abort()
-    {
-      // This will be called during the finally
-      // block in the case where all is well (and
-      // the stream completed) and in the case where
-      // there were exceptions.
-      synchronized (this)
-      {
-        if (streamCreated)
-        {
-          if (threadStream != null)
-            threadStream.abort();
-        }
-        abortThread = true;
-      }
-    }
-    
-    public void finishUp()
-      throws InterruptedException
-    {
-      join();
-    }
-    
-    protected synchronized void checkException(Throwable exception)
-      throws IOException, HttpException
-    {
-      if (exception != null)
-      {
-        Throwable e = exception;
-        if (e instanceof IOException)
-          throw (IOException)e;
-        else if (e instanceof HttpException)
-          throw (HttpException)e;
-        else if (e instanceof RuntimeException)
-          throw (RuntimeException)e;
-        else if (e instanceof Error)
-          throw (Error)e;
-        else
-          throw new RuntimeException("Unhandled exception of type: "+e.getClass().getName(),e);
-      }
-    }
-
-  }
-
-}
-
diff --git a/connectors/sharepoint/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/sharepoint/IFileStream.java b/connectors/sharepoint/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/sharepoint/IFileStream.java
index 1b67fb1..97355de 100644
--- a/connectors/sharepoint/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/sharepoint/IFileStream.java
+++ b/connectors/sharepoint/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/sharepoint/IFileStream.java
@@ -20,6 +20,6 @@
 
 public interface IFileStream
 {
-  public void addFile(String relPath)
+  public void addFile(String relPath, String displayURI)
     throws ManifoldCFException;
 }
diff --git a/connectors/sharepoint/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/sharepoint/SPSProxyHelper.java b/connectors/sharepoint/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/sharepoint/SPSProxyHelper.java
index 7dbea1b..39d37b1 100644
--- a/connectors/sharepoint/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/sharepoint/SPSProxyHelper.java
+++ b/connectors/sharepoint/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/sharepoint/SPSProxyHelper.java
@@ -23,6 +23,8 @@
 import java.util.Hashtable;
 import java.util.Iterator;
 import java.util.List;
+import java.util.Set;
+import java.util.HashSet;
 import java.util.regex.*;
 
 import java.io.InputStream;
@@ -67,7 +69,7 @@
 public class SPSProxyHelper {
 
 
-  public static final String HTTPCLIENT_PROPERTY = "ManifoldCF_HttpClient";
+  public static final String HTTPCLIENT_PROPERTY = org.apache.manifoldcf.sharepoint.CommonsHTTPSender.HTTPCLIENT_PROPERTY;
 
   private String serverUrl;
   private String serverLocation;
@@ -107,7 +109,7 @@
   * @return array of sids
   * @throws Exception
   */
-  public String[] getACLs(String site, String guid )
+  public String[] getACLs(String site, String guid, boolean activeDirectoryAuthority )
     throws ManifoldCFException, ServiceInterruption
   {
     long currentTime;
@@ -145,7 +147,7 @@
       parent = nodeList.get(0);
       nodeList.clear();
       doc.processPath( nodeList, "*", parent );
-      java.util.HashSet sids = new java.util.HashSet();
+      Set<String> sids = new HashSet<String>();
       int i = 0;
       for (; i< nodeList.size(); i++ )
       {
@@ -161,34 +163,33 @@
           {
             // Use AD user or group
             String userLogin = doc.getValue( node, "UserLogin" );
-            String userSid = getSidForUser( userCall, userLogin );
+            String userSid = getSidForUser( userCall, userLogin, activeDirectoryAuthority );
             sids.add( userSid );
           }
           else
           {
             // Role
-            String[] roleSids;
+            List<String> roleSids;
             String roleName = doc.getValue( node, "RoleName" );
             if ( roleName.length() == 0)
             {
               roleName = doc.getValue(node,"GroupName");
-              roleSids = getSidsForGroup(userCall, roleName);
+              roleSids = getSidsForGroup(userCall, roleName, activeDirectoryAuthority);
             }
             else
             {
-              roleSids = getSidsForRole(userCall, roleName);
+              roleSids = getSidsForRole(userCall, roleName, activeDirectoryAuthority);
             }
 
-            int j = 0;
-            for (; j < roleSids.length; j++ )
+            for (String sid : roleSids)
             {
-              sids.add( roleSids[ j ] );
+              sids.add( sid );
             }
           }
         }
       }
 
-      return (String[]) sids.toArray( new String[0] );
+      return sids.toArray( new String[0] );
     }
     catch (java.net.MalformedURLException e)
     {
@@ -299,7 +300,7 @@
   * @throws ManifoldCFException
   * @throws ServiceInterruption
   */
-  public String[] getDocumentACLs(String site, String file)
+  public String[] getDocumentACLs(String site, String file, boolean activeDirectoryAuthority)
     throws ManifoldCFException, ServiceInterruption
   {
     long currentTime;
@@ -356,7 +357,7 @@
       parent = nodeList.get(0);
       nodeList.clear();
       doc.processPath( nodeList, "*", parent );
-      java.util.HashSet sids = new java.util.HashSet();
+      Set<String> sids = new HashSet<String>();
       int i = 0;
       for (; i< nodeList.size(); i++ )
       {
@@ -372,34 +373,33 @@
           {
             // Use AD user or group
             String userLogin = doc.getValue( node, "UserLogin" );
-            String userSid = getSidForUser( userCall, userLogin );
+            String userSid = getSidForUser( userCall, userLogin, activeDirectoryAuthority );
             sids.add( userSid );
           }
           else
           {
             // Role
-            String[] roleSids;
+            List<String> roleSids;
             String roleName = doc.getValue( node, "RoleName" );
             if ( roleName.length() == 0)
             {
               roleName = doc.getValue(node,"GroupName");
-              roleSids = getSidsForGroup(userCall, roleName);
+              roleSids = getSidsForGroup(userCall, roleName, activeDirectoryAuthority);
             }
             else
             {
-              roleSids = getSidsForRole(userCall, roleName);
+              roleSids = getSidsForRole(userCall, roleName, activeDirectoryAuthority);
             }
 
-            int j = 0;
-            for (; j < roleSids.length; j++ )
+            for (String sid : roleSids)
             {
-              sids.add( roleSids[ j ] );
+              sids.add( sid );
             }
           }
         }
       }
 
-      return (String[]) sids.toArray( new String[0] );
+      return sids.toArray( new String[0] );
     }
     catch (java.net.MalformedURLException e)
     {
@@ -589,22 +589,7 @@
           Object node = nodeDocs.get(j);
           Logging.connectors.debug( node.toString() );
           String relPath = docs.getData( docs.getElement( node, "FileRef" ) );
-
-          // This relative path is apparently from the domain on down; if there's a location offset we therefore
-          // need to get rid of it before checking the document against the site/library tuples.  The recorded
-          // document identifier should also not include it.
-
-          if (!relPath.toLowerCase().startsWith(serverLocation.toLowerCase()))
-          {
-            // Unexpected processing error; the path to the folder or document did not start with the location
-            // offset, so throw up.
-            throw new ManifoldCFException("Internal error: Relative path '"+relPath+"' was expected to start with '"+
-              serverLocation+"'");
-          }
-
-          relPath = relPath.substring(serverLocation.length());
-
-          fileStream.addFile( relPath );
+          fileStream.addFile( relPath, null );
         }
       }
       else
@@ -649,10 +634,8 @@
                 {
                   resultCount++;
                   String relPath = result.getAttribute("FileRef");
-
-                  relPath = "/" + relPath;
-
-                  fileStream.addFile( relPath );
+                  String displayURL = result.getAttribute("ListItemURL");
+                  fileStream.addFile( relPath, displayURL );
                 }
               }
               
@@ -768,6 +751,7 @@
     }
   }
 
+
   /**
   *
   * @param parentSite
@@ -815,7 +799,7 @@
       nodeList.clear();
       doc.processPath(nodeList, "*", parent);  // <ns1:Lists>
 
-      int chuckIndex = decodedServerLocation.length() + parentSiteDecoded.length();
+      String prefixPath = decodedServerLocation + parentSiteDecoded + "/";
 
       int i = 0;
       while (i < nodeList.size())
@@ -833,22 +817,29 @@
           // If it has no view url, we don't have any idea what to do with it
           if (urlPath != null && urlPath.length() > 0)
           {
-            if (urlPath.length() < chuckIndex)
-              throw new ManifoldCFException("Library view url is not in the expected form: '"+urlPath+"'");
-            urlPath = urlPath.substring(chuckIndex);
+            // Normalize conditionally
             if (!urlPath.startsWith("/"))
-              throw new ManifoldCFException("Library view url without site is not in the expected form: '"+urlPath+"'");
-            // We're at the library name.  Figure out where the end of it is.
-            int index = urlPath.indexOf("/",1);
-            if (index == -1)
-              throw new ManifoldCFException("Bad library view url without site: '"+urlPath+"'");
-            String pathpart = urlPath.substring(1,index);
-
-            if ( pathpart.equals(docLibrary) )
+              urlPath = prefixPath + urlPath;
+            // Get rid of what we don't want, unconditionally
+            if (urlPath.startsWith(prefixPath))
             {
-              // We found it!
-              // Return its ID
-              return doc.getValue( o, "ID" );
+              urlPath = urlPath.substring(prefixPath.length());
+              // We're at the library name.  Figure out where the end of it is.
+              int index = urlPath.indexOf("/");
+              if (index == -1)
+                throw new ManifoldCFException("Bad library view url without site: '"+urlPath+"'");
+              String pathpart = urlPath.substring(0,index);
+
+              if ( pathpart.equals(docLibrary) )
+              {
+                // We found it!
+                // Return its ID
+                return doc.getValue( o, "ID" );
+              }
+            }
+            else
+            {
+              Logging.connectors.warn("SharePoint: Library view url is not in the expected form: '"+urlPath+"'; it should start with '"+prefixPath+"'; skipping");
             }
           }
         }
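
Illustrative only (outside the patch): the normalization above can be shown in isolation. A rough sketch with hypothetical names; where this sketch returns null, the connector skips or throws ManifoldCFException:

// Given the decoded server location plus the decoded parent site, a relative view url
// is first anchored under that prefix, the prefix is stripped, and the first remaining
// path segment (the library name) is extracted.
class ViewUrlNormalizer
{
  static String firstSegmentUnderSite(String urlPath, String decodedServerLocation, String parentSiteDecoded)
  {
    String prefixPath = decodedServerLocation + parentSiteDecoded + "/";
    // Normalize conditionally: relative paths are assumed to live under the prefix
    if (!urlPath.startsWith("/"))
      urlPath = prefixPath + urlPath;
    // Strip the prefix unconditionally; anything else is unexpected and gets skipped
    if (!urlPath.startsWith(prefixPath))
      return null;
    urlPath = urlPath.substring(prefixPath.length());
    int index = urlPath.indexOf("/");
    if (index == -1)
      return null;
    return urlPath.substring(0, index);
  }

  public static void main(String[] args)
  {
    // Prints "Shared Documents"
    System.out.println(firstSegmentUnderSite(
      "/sites/root/subsite/Shared Documents/Forms/AllItems.aspx", "/sites/root", "/subsite"));
  }
}
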
@@ -1004,7 +995,7 @@
       nodeList.clear();
       doc.processPath(nodeList, "*", parent);  // <ns1:Lists>
 
-      int chuckIndex = decodedServerLocation.length() + parentSiteDecoded.length();
+      String prefixPath = decodedServerLocation + parentSiteDecoded + "/";
 
       int i = 0;
       while (i < nodeList.size())
@@ -1022,29 +1013,36 @@
           // If it has no view url, we don't have any idea what to do with it
           if (urlPath != null && urlPath.length() > 0)
           {
-            if (urlPath.length() < chuckIndex)
-              throw new ManifoldCFException("List view url is not in the expected form: '"+urlPath+"'");
-            urlPath = urlPath.substring(chuckIndex);
+            // Normalize conditionally
             if (!urlPath.startsWith("/"))
-              throw new ManifoldCFException("List view url without site is not in the expected form: '"+urlPath+"'");
-            // We're at the /Lists/listname part of the name.  Figure out where the end of it is.
-            int index = urlPath.indexOf("/",1);
-            if (index == -1)
-              throw new ManifoldCFException("Bad list view url without site: '"+urlPath+"'");
-            String pathpart = urlPath.substring(1,index);
-            if("Lists".equals(pathpart))
+              urlPath = prefixPath + urlPath;
+            // Get rid of what we don't want, unconditionally
+            if (urlPath.startsWith(prefixPath))
             {
-              int k = urlPath.indexOf("/",index+1);
-              if (k == -1)
-                throw new ManifoldCFException("Bad list view url without 'Lists': '"+urlPath+"'");
-              pathpart = urlPath.substring(index+1,k);
-            }
+              urlPath = urlPath.substring(prefixPath.length());
+              // We're at the Lists/listname part of the name.  Figure out where the end of it is.
+              int index = urlPath.indexOf("/");
+              if (index == -1)
+                throw new ManifoldCFException("Bad list view url without site: '"+urlPath+"'");
+              String pathpart = urlPath.substring(0,index);
+              if("Lists".equals(pathpart))
+              {
+                int k = urlPath.indexOf("/",index+1);
+                if (k == -1)
+                  throw new ManifoldCFException("Bad list view url without 'Lists': '"+urlPath+"'");
+                pathpart = urlPath.substring(index+1,k);
+              }
 
-            if ( pathpart.equals(listName) )
+              if ( pathpart.equals(listName) )
+              {
+                // We found it!
+                // Return its ID
+                return doc.getValue( o, "ID" );
+              }
+            }
+            else
             {
-              // We found it!
-              // Return its ID
-              return doc.getValue( o, "ID" );
+              Logging.connectors.warn("SharePoint: List view url is not in the expected form: '"+urlPath+"'; expected something beginning with '"+prefixPath+"'; skipping");
             }
           }
         }
@@ -1298,36 +1296,48 @@
   * @return
   * @throws Exception
   */
-  private String getSidForUser(com.microsoft.schemas.sharepoint.soap.directory.UserGroupSoap userCall, String userLogin )
-  throws ManifoldCFException, java.net.MalformedURLException, javax.xml.rpc.ServiceException,
-    java.rmi.RemoteException
+  private String getSidForUser(com.microsoft.schemas.sharepoint.soap.directory.UserGroupSoap userCall, String userLogin,
+    boolean activeDirectoryAuthority)
+    throws ManifoldCFException, java.net.MalformedURLException, javax.xml.rpc.ServiceException, java.rmi.RemoteException
   {
-    com.microsoft.schemas.sharepoint.soap.directory.GetUserInfoResponseGetUserInfoResult userResp = userCall.getUserInfo( userLogin );
-    org.apache.axis.message.MessageElement[] userList = userResp.get_any();
-
-    XMLDoc doc = new XMLDoc( userList[0].toString() );
-    ArrayList nodeList = new ArrayList();
-
-    doc.processPath(nodeList, "*", null);
-    if (nodeList.size() != 1)
+    String rval;
+    
+    if (!activeDirectoryAuthority)
     {
-      throw new ManifoldCFException("Bad xml - missing outer 'ns1:GetUserInfo' node - there are "+Integer.toString(nodeList.size())+" nodes");
+      // Do we want to return user ID via getUserInfo?
+      // MHL
+      rval = "U"+userLogin;
     }
-    Object parent = nodeList.get(0);
-    if (!doc.getNodeName(parent).equals("ns1:GetUserInfo"))
-      throw new ManifoldCFException("Bad xml - outer node is not 'ns1:GetUserInfo'");
-
-    nodeList.clear();
-    doc.processPath(nodeList, "*", parent);  // ns1:User
-
-    if ( nodeList.size() != 1 )
+    else
     {
-      throw new ManifoldCFException( " No User found." );
+      com.microsoft.schemas.sharepoint.soap.directory.GetUserInfoResponseGetUserInfoResult userResp = userCall.getUserInfo( userLogin );
+      org.apache.axis.message.MessageElement[] userList = userResp.get_any();
+
+      if (userList.length != 1)
+        throw new ManifoldCFException("Bad response - expecting one outer 'GetUserInfo' node, saw "+Integer.toString(userList.length));
+      
+      MessageElement users = userList[0];
+      if (!users.getElementName().getLocalName().equals("GetUserInfo"))
+        throw new ManifoldCFException("Bad response - outer node should have been 'GetUserInfo' node");
+          
+      String userID = null;
+      
+      Iterator userIter = users.getChildElements();
+      while (userIter.hasNext())
+      {
+        MessageElement child = (MessageElement)userIter.next();
+        if (child.getElementName().getLocalName().equals("User"))
+        {
+          userID = child.getAttribute("Sid");
+        }
+      }
+      
+      if (userID == null)
+        throw new ManifoldCFException("Could not find user login '"+userLogin+"' so could not get SID");
+
+      rval = userID;
     }
-    parent = nodeList.get(0);
-    nodeList.clear();
-    String sid = doc.getValue( parent, "Sid" );
-    return sid;
+    return rval;
   }
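
Illustrative only: when the authority is not Active Directory, the branches above reduce to a simple prefixing scheme; a sketch of the token shapes, with hypothetical sample inputs:

// Users, groups, and roles are distinguished by a one-character prefix rather than
// being resolved to AD SIDs via the UserGroup web service.
class NativeAuthorityTokens
{
  static String userToken(String userLogin)  { return "U" + userLogin; }
  static String groupToken(String groupName) { return "G" + groupName; }
  static String roleToken(String roleName)   { return "R" + roleName; }

  public static void main(String[] args)
  {
    System.out.println(userToken("MYDOMAIN\\jsmith")); // UMYDOMAIN\jsmith
    System.out.println(groupToken("Site Members"));    // GSite Members
    System.out.println(roleToken("Contribute"));       // RContribute
  }
}
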
 
   /**
@@ -1337,46 +1347,49 @@
   * @return
   * @throws Exception
   */
-  private String[] getSidsForGroup(com.microsoft.schemas.sharepoint.soap.directory.UserGroupSoap userCall, String groupName)
+  private List<String> getSidsForGroup(com.microsoft.schemas.sharepoint.soap.directory.UserGroupSoap userCall, String groupName,
+    boolean activeDirectoryAuthority)
     throws ManifoldCFException, java.net.MalformedURLException, javax.xml.rpc.ServiceException, java.rmi.RemoteException
   {
-    com.microsoft.schemas.sharepoint.soap.directory.GetUserCollectionFromGroupResponseGetUserCollectionFromGroupResult roleResp = userCall.getUserCollectionFromGroup(groupName);
-    org.apache.axis.message.MessageElement[] roleList = roleResp.get_any();
-
-    XMLDoc doc = new XMLDoc(roleList[0].toString());
-    ArrayList nodeList = new ArrayList();
-
-    doc.processPath(nodeList, "*", null);
-    if (nodeList.size() != 1)
+    List<String> rval = new ArrayList<String>();
+    if (!activeDirectoryAuthority)
     {
-      throw new ManifoldCFException("Bad xml - missing outer 'ns1:GetUserCollectionFromGroup' node - there are "
-      + Integer.toString(nodeList.size()) + " nodes");
+      // Do we want to map this to an ID using usergroup.getGroupInfo?  Or will we be unable to find the group
+      // then if it is an AD group?
+      // MHL
+      rval.add("G"+groupName);
     }
-    Object parent = nodeList.get(0);
-    if (!doc.getNodeName(parent).equals("ns1:GetUserCollectionFromGroup"))
-      throw new ManifoldCFException("Bad xml - outer node is not 'ns1:GetUserCollectionFromGroup'");
-
-    nodeList.clear();
-    doc.processPath(nodeList, "*", parent); // <ns1:Users>
-
-    if (nodeList.size() != 1)
+    else
     {
-      throw new ManifoldCFException(" No Users collection found.");
-    }
-    parent = nodeList.get(0);
-    nodeList.clear();
-    doc.processPath(nodeList, "*", parent); // <ns1:User>
+      com.microsoft.schemas.sharepoint.soap.directory.GetUserCollectionFromGroupResponseGetUserCollectionFromGroupResult roleResp = userCall.getUserCollectionFromGroup(groupName);
+      org.apache.axis.message.MessageElement[] roleList = roleResp.get_any();
 
-    ArrayList sidsList = new ArrayList();
-    String[] sids = new String[0];
-    int i = 0;
-    while (i < nodeList.size())
-    {
-      Object o = nodeList.get(i++);
-      sidsList.add(doc.getValue(o, "Sid"));
+      if (roleList.length != 1)
+        throw new ManifoldCFException("Bad response - expecting one outer 'GetUserCollectionFromGroup' node, saw "+Integer.toString(roleList.length));
+
+      MessageElement roles = roleList[0];
+      if (!roles.getElementName().getLocalName().equals("GetUserCollectionFromGroup"))
+        throw new ManifoldCFException("Bad response - outer node should have been 'GetUserCollectionFromGroup' node");
+
+      Iterator rolesIter = roles.getChildElements();
+      while (rolesIter.hasNext())
+      {
+        MessageElement child = (MessageElement)rolesIter.next();
+        if (child.getElementName().getLocalName().equals("Users"))
+        {
+          Iterator usersIterator = child.getChildElements();
+          while (usersIterator.hasNext())
+          {
+            MessageElement user = (MessageElement)usersIterator.next();
+            if (user.getElementName().getLocalName().equals("User"))
+            {
+              rval.add(user.getAttribute("Sid"));
+            }
+          }
+        }
+      }      
     }
-    sids = (String[]) sidsList.toArray((Object[]) sids);
-    return sids;
+    return rval;
   }
 
   /**
@@ -1386,47 +1399,48 @@
   * @return
   * @throws Exception
   */
-  private String[] getSidsForRole( com.microsoft.schemas.sharepoint.soap.directory.UserGroupSoap userCall, String roleName )
-  throws ManifoldCFException, java.net.MalformedURLException, javax.xml.rpc.ServiceException,
-    java.rmi.RemoteException
+  private List<String> getSidsForRole( com.microsoft.schemas.sharepoint.soap.directory.UserGroupSoap userCall, String roleName,
+    boolean activeDirectoryAuthority)
+    throws ManifoldCFException, java.net.MalformedURLException, javax.xml.rpc.ServiceException, java.rmi.RemoteException
   {
-
-    com.microsoft.schemas.sharepoint.soap.directory.GetUserCollectionFromRoleResponseGetUserCollectionFromRoleResult roleResp = userCall.getUserCollectionFromRole( roleName );
-    org.apache.axis.message.MessageElement[] roleList = roleResp.get_any();
-
-    XMLDoc doc = new XMLDoc( roleList[0].toString() );
-    ArrayList nodeList = new ArrayList();
-
-    doc.processPath(nodeList, "*", null);
-    if (nodeList.size() != 1)
+    List<String> rval = new ArrayList<String>();
+    if (!activeDirectoryAuthority)
     {
-      throw new ManifoldCFException("Bad xml - missing outer 'ns1:GetUserCollectionFromRole' node - there are "+Integer.toString(nodeList.size())+" nodes");
+      // Do we want to look up role ID, using usergroup.getRoleInfo?
+      // MHL
+      rval.add("R"+roleName);
     }
-    Object parent = nodeList.get(0);
-    if (!doc.getNodeName(parent).equals("ns1:GetUserCollectionFromRole"))
-      throw new ManifoldCFException("Bad xml - outer node is not 'ns1:GetUserCollectionFromRole'");
-
-    nodeList.clear();
-    doc.processPath(nodeList, "*", parent);  // <ns1:Users>
-
-    if ( nodeList.size() != 1 )
+    else
     {
-      throw new ManifoldCFException( " No Users collection found." );
-    }
-    parent = nodeList.get(0);
-    nodeList.clear();
-    doc.processPath( nodeList, "*", parent ); // <ns1:User>
+      com.microsoft.schemas.sharepoint.soap.directory.GetUserCollectionFromRoleResponseGetUserCollectionFromRoleResult roleResp = userCall.getUserCollectionFromRole( roleName );
+      org.apache.axis.message.MessageElement[] roleList = roleResp.get_any();
 
-    ArrayList sidsList = new ArrayList();
-    String[] sids = new String[0];
-    int i = 0;
-    while (i < nodeList.size())
-    {
-      Object o = nodeList.get( i++ );
-      sidsList.add( doc.getValue( o, "Sid" ) );
+      if (roleList.length != 1)
+        throw new ManifoldCFException("Bad response - expecting one outer 'GetUserCollectionFromRole' node, saw "+Integer.toString(roleList.length));
+
+      MessageElement roles = roleList[0];
+      if (!roles.getElementName().getLocalName().equals("GetUserCollectionFromRole"))
+        throw new ManifoldCFException("Bad response - outer node should have been 'GetUserCollectionFromRole' node");
+
+      Iterator rolesIter = roles.getChildElements();
+      while (rolesIter.hasNext())
+      {
+        MessageElement child = (MessageElement)rolesIter.next();
+        if (child.getElementName().getLocalName().equals("Users"))
+        {
+          Iterator usersIterator = child.getChildElements();
+          while (usersIterator.hasNext())
+          {
+            MessageElement user = (MessageElement)usersIterator.next();
+            if (user.getElementName().getLocalName().equals("User"))
+            {
+              rval.add(user.getAttribute("Sid"));
+            }
+          }
+        }
+      }      
     }
-    sids = (String[])sidsList.toArray( (Object[])sids );
-    return sids;
+    return rval;
   }
 
   /**
@@ -1455,14 +1469,9 @@
       {
         // The web service allows us to get acls for a site, so that's what we will attempt
 
-        // This fails:
         MCPermissionsWS aclService = new MCPermissionsWS( baseUrl + site, userName, password, configuration, httpClient );
         com.microsoft.sharepoint.webpartpages.PermissionsSoap aclCall = aclService.getPermissionsSoapHandler( );
 
-        // This works:
-        //PermissionsWS aclService = new PermissionsWS( baseUrl + site, userName, password, myFactory, configuration );
-        //com.microsoft.schemas.sharepoint.soap.directory.PermissionsSoap aclCall = aclService.getPermissionsSoapHandler( );
-
         aclCall.getPermissionCollection( "/", "Web" );
       }
 
@@ -1499,7 +1508,7 @@
           else if (httpErrorCode.equals("403"))
             throw new ManifoldCFException("Http error "+httpErrorCode+" while reading from "+baseUrl+site+" - check IIS and SharePoint security settings! "+e.getMessage(),e);
 	  else if (httpErrorCode.equals("302"))
-	    throw new ManifoldCFException("ManifoldCF's MCPermissions web service may not be installed on the target SharePoint server.  MCPermissions service is needed for SharePoint repositories version 3.0 or higher, to allow access to security information for files and folders.  Consult your system administrator.");
+	    throw new ManifoldCFException("The correct version of ManifoldCF's MCPermissions web service may not be installed on the target SharePoint server.  MCPermissions service is needed for SharePoint repositories version 3.0 or higher, to allow access to security information for files and folders.  Consult your system administrator.");
           else
             throw new ManifoldCFException("Unexpected http error code "+httpErrorCode+" accessing SharePoint at "+baseUrl+site+": "+e.getMessage(),e);
         }
@@ -1539,6 +1548,125 @@
     }
   }
 
+  /** Gets a list of attachment URLs, given a site, list name, and list item ID.  These are returned
+  * as NameValue objects, where the value is the attachment file name and the pretty name is the
+  * path portion of the attachment URL.
+  */
+  public List<NameValue> getAttachmentNames( String site, String listName, String itemID )
+    throws ManifoldCFException, ServiceInterruption
+  {
+    long currentTime;
+    try
+    {
+      ArrayList<NameValue> result = new ArrayList<NameValue>();
+      
+      if (Logging.connectors.isDebugEnabled())
+        Logging.connectors.debug("SharePoint: In getAttachmentNames; site='"+site+"', listName='"+listName+"', itemID='"+itemID+"'");
+
+      // The docLibrary must be a GUID, because we don't have a title.
+
+      if ( site.compareTo( "/") == 0 )
+        site = "";
+      ListsWS listService = new ListsWS( baseUrl + site, userName, password, configuration, httpClient );
+      ListsSoap listCall = listService.getListsSoapHandler();
+
+      GetAttachmentCollectionResponseGetAttachmentCollectionResult listResponse =
+        listCall.getAttachmentCollection( listName, itemID );
+      org.apache.axis.message.MessageElement[] List = listResponse.get_any();
+
+      XMLDoc doc = new XMLDoc( List[0].toString() );
+      ArrayList nodeList = new ArrayList();
+
+      doc.processPath(nodeList, "*", null);
+      if (nodeList.size() != 1)
+      {
+        throw new ManifoldCFException("Bad xml - missing outer node - there are "+Integer.toString(nodeList.size())+" nodes");
+      }
+
+      Object attachments = nodeList.get(0);
+      if ( !doc.getNodeName(attachments).equals("ns1:Attachments") )
+        throw new ManifoldCFException( "Bad xml - outer node '" + doc.getNodeName(attachments) + "' is not 'ns1:Attachments'");
+
+      nodeList.clear();
+      doc.processPath(nodeList, "*", attachments);
+
+      int i = 0;
+      while (i < nodeList.size())
+      {
+        Object o = nodeList.get( i++ );
+        if ( !doc.getNodeName(o).equals("ns1:Attachment") )
+          throw new ManifoldCFException( "Bad xml - inner node '" + doc.getNodeName(o) + "' is not 'ns1:Attachment'");
+        String attachmentURL = doc.getData( o );
+        if (attachmentURL != null)
+        {
+          int index = attachmentURL.lastIndexOf("/");
+          if (index == -1)
+            throw new ManifoldCFException("Unexpected attachment URL form: '"+attachmentURL+"'");
+          result.add(new NameValue(attachmentURL.substring(index+1), new java.net.URL(attachmentURL).getPath()));
+        }
+      }
+
+      return result;
+    }
+    catch (java.net.MalformedURLException e)
+    {
+      throw new ManifoldCFException("Bad SharePoint url: "+e.getMessage(),e);
+    }
+    catch (javax.xml.rpc.ServiceException e)
+    {
+      if (Logging.connectors.isDebugEnabled())
+        Logging.connectors.debug("SharePoint: Got a service exception getting attachments for site "+site+" listName "+listName+" itemID "+itemID+" - retrying",e);
+      currentTime = System.currentTimeMillis();
+      throw new ServiceInterruption("Service exception: "+e.getMessage(), e, currentTime + 300000L,
+        currentTime + 12 * 60 * 60000L,-1,true);
+    }
+    catch (org.apache.axis.AxisFault e)
+    {
+      if (e.getFaultCode().equals(new javax.xml.namespace.QName("http://xml.apache.org/axis/","HTTP")))
+      {
+        org.w3c.dom.Element elem = e.lookupFaultDetail(new javax.xml.namespace.QName("http://xml.apache.org/axis/","HttpErrorCode"));
+        if (elem != null)
+        {
+          elem.normalize();
+          String httpErrorCode = elem.getFirstChild().getNodeValue().trim();
+          if (httpErrorCode.equals("404"))
+            return null;
+          else if (httpErrorCode.equals("403"))
+            throw new ManifoldCFException("Remote procedure exception: "+e.getMessage(),e);
+          else if (httpErrorCode.equals("401"))
+          {
+            if (Logging.connectors.isDebugEnabled())
+              Logging.connectors.debug("SharePoint: Crawl user does not have sufficient privileges to get attachment list for site "+site+" listName "+listName+" itemID "+itemID+" - skipping",e);
+            return null;
+          }
+          throw new ManifoldCFException("Unexpected http error code "+httpErrorCode+" accessing SharePoint at "+baseUrl+site+": "+e.getMessage(),e);
+        }
+        throw new ManifoldCFException("Unknown http error occurred: "+e.getMessage(),e);
+      }
+
+      if (e.getFaultCode().equals(new javax.xml.namespace.QName("http://schemas.xmlsoap.org/soap/envelope/","Server.userException")))
+      {
+        String exceptionName = e.getFaultString();
+        if (exceptionName.equals("java.lang.InterruptedException"))
+          throw new ManifoldCFException("Interrupted",ManifoldCFException.INTERRUPTED);
+      }
+
+      // I don't know if this is what you get when the library is missing, but here's hoping.
+      if (e.getMessage().indexOf("List does not exist") != -1)
+        return null;
+
+      if (Logging.connectors.isDebugEnabled())
+        Logging.connectors.debug("SharePoint: Got a remote exception getting attachments for site "+site+" listName "+listName+" itemID "+itemID+" - retrying",e);
+      currentTime = System.currentTimeMillis();
+      throw new ServiceInterruption("Remote procedure exception: "+e.getMessage(), e, currentTime + 300000L,
+        currentTime + 3 * 60 * 60000L,-1,false);
+    }
+    catch (java.rmi.RemoteException e)
+    {
+      throw new ManifoldCFException("Unexpected remote exception occurred: "+e.getMessage(),e);
+    }
+
+  }
+  
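
A hedged usage sketch of the new method (outside the patch; the helper class and variable names are hypothetical, and the exception types are the ones SPSProxyHelper already imports):

import java.util.List;

// Hypothetical caller: lists the attachments of one list item.  A null return from
// getAttachmentNames means the item or list could not be read (the 404/401 cases above).
class AttachmentLister
{
  static void printAttachmentNames(SPSProxyHelper proxy, String site, String listID, String itemID)
    throws ManifoldCFException, ServiceInterruption
  {
    List<NameValue> attachments = proxy.getAttachmentNames(site, listID, itemID);
    if (attachments == null)
      return;
    for (NameValue attachment : attachments)
    {
      String fileName = attachment.getValue();          // attachment file name
      String relativePath = attachment.getPrettyName(); // path portion of the attachment URL
      System.out.println(fileName + " -> " + relativePath);
    }
  }
}
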
   /**
   * Gets a list of field names of the given document library
   * @param site
@@ -1558,8 +1686,9 @@
 
       // The docLibrary must be a GUID, because we don't have a title.
 
-      if ( site.compareTo( "/") == 0 ) site = "";
-        ListsWS listService = new ListsWS( baseUrl + site, userName, password, configuration, httpClient );
+      if ( site.compareTo( "/") == 0 )
+        site = "";
+      ListsWS listService = new ListsWS( baseUrl + site, userName, password, configuration, httpClient );
       ListsSoap listCall = listService.getListsSoapHandler();
 
       GetListResponseGetListResult listResponse = listCall.getList( listName );
@@ -1674,13 +1803,13 @@
   * @param docId
   * @return set of the field values
   */
-  public Map getFieldValues( ArrayList fieldNames, String site, String docLibrary, String docId, boolean dspStsWorks )
+  public Map<String,String> getFieldValues( ArrayList fieldNames, String site, String docLibrary, String docId, boolean dspStsWorks )
     throws ManifoldCFException, ServiceInterruption
   {
     long currentTime;
     try
     {
-      HashMap result = new HashMap();
+      HashMap<String,String> result = new HashMap<String,String>();
 
       if ( site.compareTo("/") == 0 ) site = ""; // root case
 
@@ -1810,7 +1939,7 @@
         ListsWS lservice = new ListsWS(baseUrl + site, userName, password, configuration, httpClient );
         ListsSoapStub stub1 = (ListsSoapStub)lservice.getListsSoapHandler();
         
-        String sitePlusDocId = serverLocation + site + "/" + docId;
+        String sitePlusDocId = serverLocation + site + docId;
         if (sitePlusDocId.startsWith("/"))
           sitePlusDocId = sitePlusDocId.substring(1);
         
@@ -1869,7 +1998,7 @@
           String attrValue = doc.getValue(o,"ows_"+(String)attrName);
           if (attrValue != null)
           {
-            result.put(attrName,valueMunge(attrValue));
+            result.put(attrName.toString(),valueMunge(attrValue));
           }
         }
       }
@@ -1944,14 +2073,15 @@
   * @param parentSite the site to search for subsites, empty string for root
   * @return lists of sites as an arraylist of NameValue objects
   */
-  public ArrayList getSites( String parentSite )
+  public List<NameValue> getSites( String parentSite )
     throws ManifoldCFException, ServiceInterruption
   {
     long currentTime;
     try
     {
-      ArrayList result = new ArrayList();
+      ArrayList<NameValue> result = new ArrayList<NameValue>();
 
+      // Call the webs service
       if ( parentSite.equals( "/") ) parentSite = "";
         WebsWS webService = new WebsWS( baseUrl + parentSite, userName, password, configuration, httpClient );
       WebsSoap webCall = webService.getWebsSoapHandler();
@@ -1986,7 +2116,7 @@
         // Leave here for now
         if (Logging.connectors.isDebugEnabled())
           Logging.connectors.debug("SharePoint: Subsite list: '"+url+"', '"+title+"'");
-
+        
         // A full path to the site is tacked on the front of each one of these.  However, due to nslookup differences, we cannot guarantee that
         // the server name part of the path will actually match what got passed in.  Therefore, we want to look only at the last path segment, whatever that is.
         if (url != null && url.length() > 0)
@@ -2004,7 +2134,7 @@
           }
         }
       }
-
+      
       return result;
     }
     catch (java.net.MalformedURLException e)
@@ -2068,13 +2198,13 @@
   * @param parentSite the site to search for document libraries, empty string for root
   * @return lists of NameValue objects, representing document libraries
   */
-  public ArrayList getDocumentLibraries( String parentSite, String parentSiteDecoded )
+  public List<NameValue> getDocumentLibraries( String parentSite, String parentSiteDecoded )
     throws ManifoldCFException, ServiceInterruption
   {
     long currentTime;
     try
     {
-      ArrayList result = new ArrayList();
+      ArrayList<NameValue> result = new ArrayList<NameValue>();
 
       String parentSiteRequest = parentSite;
 
@@ -2107,7 +2237,7 @@
       nodeList.clear();
       doc.processPath(nodeList, "*", parent);  // <ns1:Lists>
 
-      int chuckIndex = decodedServerLocation.length() + parentSiteDecoded.length();
+      String prefixPath = decodedServerLocation + parentSiteDecoded + "/";
 
       int i = 0;
       while (i < nodeList.size())
@@ -2131,22 +2261,29 @@
           // It's a library.  If it has no view url, we don't have any idea what to do with it
           if (urlPath != null && urlPath.length() > 0)
           {
-            if (urlPath.length() < chuckIndex)
-              throw new ManifoldCFException("Library view url is not in the expected form: '"+urlPath+"'");
-            urlPath = urlPath.substring(chuckIndex);
+            // Normalize conditionally
             if (!urlPath.startsWith("/"))
-              throw new ManifoldCFException("Library view url without site is not in the expected form: '"+urlPath+"'");
-            // We're at the library name.  Figure out where the end of it is.
-            int index = urlPath.indexOf("/",1);
-            if (index == -1)
-              throw new ManifoldCFException("Bad library view url without site: '"+urlPath+"'");
-            String pathpart = urlPath.substring(1,index);
-
-            if ( pathpart.length() != 0 && !pathpart.equals("_catalogs"))
+              urlPath = prefixPath + urlPath;
+            // Get rid of what we don't want, unconditionally
+            if (urlPath.startsWith(prefixPath))
             {
-              if (title == null || title.length() == 0)
-                title = pathpart;
-              result.add( new NameValue(pathpart, title) );
+              urlPath = urlPath.substring(prefixPath.length());
+              // We're at the library name.  Figure out where the end of it is.
+              int index = urlPath.indexOf("/");
+              if (index == -1)
+                throw new ManifoldCFException("Bad library view url without site: '"+urlPath+"'");
+              String pathpart = urlPath.substring(0,index);
+
+              if ( pathpart.length() != 0 && !pathpart.equals("_catalogs"))
+              {
+                if (title == null || title.length() == 0)
+                  title = pathpart;
+                result.add( new NameValue(pathpart, title) );
+              }
+            }
+            else
+            {
+              Logging.connectors.warn("SharePoint: Library view url is not in the expected form: '"+urlPath+"'; expected something beginning with '"+prefixPath+"'; skipping");
             }
           }
         }
@@ -2212,13 +2349,13 @@
   * @param parentSite the site to search for lists, empty string for root
   * @return lists of NameValue objects, representing lists
   */
-  public ArrayList getLists( String parentSite, String parentSiteDecoded )
+  public List<NameValue> getLists( String parentSite, String parentSiteDecoded )
     throws ManifoldCFException, ServiceInterruption
   {
     long currentTime;
     try
     {
-      ArrayList result = new ArrayList();
+      ArrayList<NameValue> result = new ArrayList<NameValue>();
 
       String parentSiteRequest = parentSite;
 
@@ -2251,7 +2388,7 @@
       nodeList.clear();
       doc.processPath(nodeList, "*", parent);  // <ns1:Lists>
 
-      int chuckIndex = decodedServerLocation.length() + parentSiteDecoded.length();
+      String prefixPath = decodedServerLocation + parentSiteDecoded + "/";
 
       int i = 0;
       while (i < nodeList.size())
@@ -2275,30 +2412,37 @@
           // If it has no view url, we don't have any idea what to do with it
           if (urlPath != null && urlPath.length() > 0)
           {
-            if (urlPath.length() < chuckIndex)
-              throw new ManifoldCFException("List view url is not in the expected form: '"+urlPath+"'");
-            urlPath = urlPath.substring(chuckIndex);
+            // Normalize conditionally
             if (!urlPath.startsWith("/"))
-              throw new ManifoldCFException("List view url without site is not in the expected form: '"+urlPath+"'");
-            // We're at the /Lists/listname part of the name.  Figure out where the end of it is.
-            int index = urlPath.indexOf("/",1);
-            if (index == -1)
-              throw new ManifoldCFException("Bad list view url without site: '"+urlPath+"'");
-            String pathpart = urlPath.substring(1,index);
-
-            if("Lists".equals(pathpart))
+              urlPath = prefixPath + urlPath;
+            // Get rid of what we don't want, unconditionally
+            if (urlPath.startsWith(prefixPath))
             {
-              int k = urlPath.indexOf("/",index+1);
-              if (k == -1)
-                throw new ManifoldCFException("Bad list view url without 'Lists': '"+urlPath+"'");
-              pathpart = urlPath.substring(index+1,k);
+              urlPath = urlPath.substring(prefixPath.length());
+              // We're at the /Lists/listname part of the name.  Figure out where the end of it is.
+              int index = urlPath.indexOf("/");
+              if (index == -1)
+                throw new ManifoldCFException("Bad list view url without site: '"+urlPath+"'");
+              String pathpart = urlPath.substring(0,index);
+
+              if("Lists".equals(pathpart))
+              {
+                int k = urlPath.indexOf("/",index+1);
+                if (k == -1)
+                  throw new ManifoldCFException("Bad list view url without 'Lists': '"+urlPath+"'");
+                pathpart = urlPath.substring(index+1,k);
+              }
+
+              if ( pathpart.length() != 0 && !pathpart.equals("_catalogs"))
+              {
+                if (title == null || title.length() == 0)
+                  title = pathpart;
+                result.add( new NameValue(pathpart, title) );
+              }
             }
-
-            if ( pathpart.length() != 0 && !pathpart.equals("_catalogs"))
+            else
             {
-              if (title == null || title.length() == 0)
-                title = pathpart;
-              result.add( new NameValue(pathpart, title) );
+              Logging.connectors.warn("SharePoint: List view url is not in the expected form: '"+urlPath+"'; expected something beginning with '"+prefixPath+"'; skipping");
             }
           }
         }
diff --git a/connectors/sharepoint/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/sharepoint/SharePointConfig.java b/connectors/sharepoint/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/sharepoint/SharePointConfig.java
new file mode 100644
index 0000000..32d3b10
--- /dev/null
+++ b/connectors/sharepoint/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/sharepoint/SharePointConfig.java
@@ -0,0 +1,49 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.crawler.connectors.sharepoint;
+
+
+/** Parameters and output data for SharePoint repository.
+*/
+public class SharePointConfig
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  // Configuration parameters
+
+  /** SharePoint server version */
+  public static final String PARAM_SERVERVERSION = "serverVersion";
+  /** SharePoint server protocol */
+  public static final String PARAM_SERVERPROTOCOL = "serverProtocol";
+  /** SharePoint server name */
+  public static final String PARAM_SERVERNAME = "serverName";
+  /** SharePoint server port */
+  public static final String PARAM_SERVERPORT = "serverPort";
+  /** SharePoint server location */
+  public static final String PARAM_SERVERLOCATION = "serverLocation";
+  /** SharePoint server user name */
+  public static final String PARAM_SERVERUSERNAME = "userName";
+  /** SharePoint server password */
+  public static final String PARAM_SERVERPASSWORD = "password";
+  /** SharePoint server certificate store */
+  public static final String PARAM_SERVERKEYSTORE = "keystore";
+	
+  /** Authority type */
+  public static final String PARAM_AUTHORITYTYPE = "authorityType";
+}
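
Illustrative only: a brief fragment showing how these constants replace the earlier string literals, mirroring the getSession() changes further down in this patch ('params' stands for the connector's configuration parameters object, as used in getSession()):

// Hypothetical fragment; defaulting behavior matches getSession() below.
String serverVersion = params.getParameter(SharePointConfig.PARAM_SERVERVERSION);
if (serverVersion == null)
  serverVersion = "2.0";

String authorityType = params.getParameter(SharePointConfig.PARAM_AUTHORITYTYPE);
if (authorityType == null)
  authorityType = "ActiveDirectory";
boolean activeDirectoryAuthority = authorityType.equals("ActiveDirectory");
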
diff --git a/connectors/sharepoint/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/sharepoint/SharePointRepository.java b/connectors/sharepoint/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/sharepoint/SharePointRepository.java
index fdaca5e..e4ace2f 100644
--- a/connectors/sharepoint/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/sharepoint/SharePointRepository.java
+++ b/connectors/sharepoint/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/sharepoint/SharePointRepository.java
@@ -39,6 +39,9 @@
 import java.util.concurrent.TimeUnit;
 import java.net.*;
 
+import org.apache.log4j.Logger;
+import org.apache.log4j.Level;
+
 import org.apache.http.conn.ClientConnectionManager;
 import org.apache.http.client.HttpClient;
 import org.apache.http.impl.conn.PoolingClientConnectionManager;
@@ -64,12 +67,14 @@
 * Document identifiers for this connector come in three forms:
 * (1) An "S" followed by the encoded subsite/library path, which represents the encoded relative path from the root site to a library. [deprecated and no longer supported];
 * (2) A "D" followed by a subsite/library/folder/file path, which represents the relative path from the root site to a file. [deprecated and no longer supported]
-* (3) Five different kinds of unencoded path, each of which starts with a "/" at the beginning, where the "/" represents the root site of the connection, as follows:
+* (3) Six different kinds of unencoded path, each of which starts with a "/" at the beginning, where the "/" represents the root site of the connection, as follows:
 *   /sitepath/ - the relative path to a site.  The path MUST both begin and end with a single "/".
 *   /sitepath/libraryname// - the relative path to a library.  The path MUST begin with a single "/" and end with "//".
 *   /sitepath/libraryname//folderfilepath - the relative path to a file.  The path MUST begin with a single "/" and MUST include a "//" after the library, and must NOT end with a "/".
 *   /sitepath/listname/// - the relative path to a list.  The path MUST begin with a single "/" and end with "///".
 *   /sitepath/listname///rowid - the relative path to a list item.  The path MUST begin with a single "/" and MUST include a "///" after the list name, and must NOT end in a "/".
+*   /sitepath/listname///rowid//attachment_filename - the relative path to a list attachment.  The path MUST begin with a single "/", MUST include a "///" after the list name, and
+*      MUST include a "//" separating the rowid from the filename.
 */
 public class SharePointRepository extends org.apache.manifoldcf.crawler.connectors.BaseRepositoryConnector
 {
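
The identifier grammar in the class comment above can be illustrated outside the patch; a simplified sketch (hypothetical helper; the real parsing lives in the connector's document processing code):

// "///" separates a list from its item id; a further "//" separates a list item from an
// attachment file name; a "//" after a library name separates the library from a file path.
class IdentifierKind
{
  static String classify(String documentIdentifier)
  {
    int listSep = documentIdentifier.indexOf("///");
    if (listSep != -1)
    {
      String afterList = documentIdentifier.substring(listSep + 3);
      if (afterList.length() == 0)
        return "list";
      if (afterList.indexOf("//") != -1)
        return "list item attachment";
      return "list item";
    }
    int libSep = documentIdentifier.indexOf("//");
    if (libSep != -1)
    {
      if (libSep == documentIdentifier.length() - 2)
        return "library";
      return "file";
    }
    return "site";
  }

  public static void main(String[] args)
  {
    System.out.println(classify("/subsite/"));                        // site
    System.out.println(classify("/subsite/Shared Documents//"));      // library
    System.out.println(classify("/subsite/Shared Documents//a.doc")); // file
    System.out.println(classify("/subsite/MyList///"));               // list
    System.out.println(classify("/subsite/MyList///3"));              // list item
    System.out.println(classify("/subsite/MyList///3//photo.jpg"));   // list item attachment
  }
}
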
@@ -85,6 +90,9 @@
   
   private boolean supportsItemSecurity = false;
   private boolean dspStsWorks = true;
+  private boolean attachmentsSupported = false;
+  private boolean activeDirectoryAuthority = true;
+  
   private String serverProtocol = null;
   private String serverUrl = null;
   private String fileBaseUrl = null;
@@ -125,8 +133,15 @@
     }
   }
 
+  // Turn off AXIS debug output that we don't want
+  static
+  {
+    Logger logger = Logger.getLogger("org.apache.axis.ConfigurationException");
+    logger.setLevel(Level.INFO);
+  }
+  
   /** Deny access token for default authority */
-  private final static String defaultAuthorityDenyToken = "DEAD_AUTHORITY";
+  private final static String defaultAuthorityDenyToken = GLOBAL_DENY_TOKEN;
 
   /** Constructor.
   */
@@ -140,18 +155,25 @@
   {
     if (proxy == null)
     {
-      String serverVersion = params.getParameter( "serverVersion" );
+      String serverVersion = params.getParameter( SharePointConfig.PARAM_SERVERVERSION );
       if (serverVersion == null)
         serverVersion = "2.0";
       supportsItemSecurity = !serverVersion.equals("2.0");
       dspStsWorks = !serverVersion.equals("4.0");
+      attachmentsSupported = !serverVersion.equals("2.0");
+      
+      String authorityType = params.getParameter( SharePointConfig.PARAM_AUTHORITYTYPE );
+      if (authorityType == null)
+        authorityType = "ActiveDirectory";
+      
+      activeDirectoryAuthority = authorityType.equals("ActiveDirectory");
 
-      serverProtocol = params.getParameter( "serverProtocol" );
+      serverProtocol = params.getParameter( SharePointConfig.PARAM_SERVERPROTOCOL );
       if (serverProtocol == null)
         serverProtocol = "http";
       try
       {
-        String serverPort = params.getParameter( "serverPort" );
+        String serverPort = params.getParameter( SharePointConfig.PARAM_SERVERPORT );
         if (serverPort == null || serverPort.length() == 0)
         {
           if (serverProtocol.equals("https"))
@@ -166,7 +188,7 @@
       {
         throw new ManifoldCFException(e.getMessage(),e);
       }
-      serverLocation = params.getParameter("serverLocation");
+      serverLocation = params.getParameter(SharePointConfig.PARAM_SERVERLOCATION);
       if (serverLocation == null)
         serverLocation = "";
       if (serverLocation.endsWith("/"))
@@ -176,8 +198,8 @@
       encodedServerLocation = serverLocation;
       serverLocation = decodePath(serverLocation);
 
-      userName = params.getParameter( "userName" );
-      password = params.getObfuscatedParameter( "password" );
+      userName = params.getParameter(SharePointConfig.PARAM_SERVERUSERNAME);
+      password = params.getObfuscatedParameter(SharePointConfig.PARAM_SERVERPASSWORD);
       int index = userName.indexOf("\\");
       if (index != -1)
       {
@@ -203,7 +225,7 @@
       }
 
       // Set up ssl if indicated
-      keystoreData = params.getParameter( "keystore" );
+      keystoreData = params.getParameter(SharePointConfig.PARAM_SERVERKEYSTORE);
 
       PoolingClientConnectionManager localConnectionManager = new PoolingClientConnectionManager();
       localConnectionManager.setMaxTotal(1);
@@ -249,7 +271,7 @@
       httpClient = localHttpClient;
       
       proxy = new SPSProxyHelper( serverUrl, encodedServerLocation, serverLocation, userName, password,
-        getClass(), "sharepoint-client-config.wsdd",
+        org.apache.manifoldcf.sharepoint.CommonsHTTPSender.class, "sharepoint-client-config.wsdd",
         httpClient );
       
     }
@@ -298,7 +320,7 @@
   {
     super.connect(configParameters);
     // This is needed by getBins()
-    serverName = configParameters.getParameter( "serverName" );
+    serverName = configParameters.getParameter( SharePointConfig.PARAM_SERVERNAME );
   }
 
   /** Close the connection.  Call this before discarding the repository connector.
@@ -351,8 +373,8 @@
   @Override
   public int getMaxDocumentRequest()
   {
-    // Since we pick up acls on a per-lib basis, it helps to have this bigger than 1.
-    return 10;
+    // Since we went to a carrydown-based implementation, having this greater than 1 does not help.
+    return 1;
   }
 
   /** Test the connection.  Returns a string describing the connection integrity.
@@ -401,6 +423,16 @@
       connectionManager.closeIdleConnections(60000L,TimeUnit.MILLISECONDS);
   }
 
+  /** This method is called to assess whether this connector instance should
+  * actually be counted as being connected.
+  *@return true if the connector instance is actually connected.
+  */
+  @Override
+  public boolean isConnected()
+  {
+    return connectionManager != null;
+  }
+
   /** Request arbitrary connector information.
   * This method is called directly from the API in order to allow API users to perform any one of several connector-specific
   * queries.
@@ -433,12 +465,12 @@
           sitePath = remainder.substring(index+1);
         }
         
-        Map fieldSet = getLibFieldList(sitePath,library);
-        Iterator iter = fieldSet.keySet().iterator();
+        Map<String,String> fieldSet = getLibFieldList(sitePath,library);
+        Iterator<String> iter = fieldSet.keySet().iterator();
         while (iter.hasNext())
         {
-          String fieldName = (String)iter.next();
-          String displayName = (String)fieldSet.get(fieldName);
+          String fieldName = iter.next();
+          String displayName = fieldSet.get(fieldName);
           ConfigurationNode node = new ConfigurationNode("field");
           ConfigurationNode child;
           child = new ConfigurationNode("name");
@@ -480,12 +512,12 @@
           sitePath = remainder.substring(index+1);
         }
         
-        Map fieldSet = getListFieldList(sitePath,listName);
-        Iterator iter = fieldSet.keySet().iterator();
+        Map<String,String> fieldSet = getListFieldList(sitePath,listName);
+        Iterator<String> iter = fieldSet.keySet().iterator();
         while (iter.hasNext())
         {
-          String fieldName = (String)iter.next();
-          String displayName = (String)fieldSet.get(fieldName);
+          String fieldName = iter.next();
+          String displayName = fieldSet.get(fieldName);
           ConfigurationNode node = new ConfigurationNode("field");
           ConfigurationNode child;
           child = new ConfigurationNode("name");
@@ -511,13 +543,19 @@
       try
       {
         String sitePath = command.substring("sites/".length());
-        ArrayList sites = getSites(sitePath);
+        List<NameValue> sites = getSites(sitePath);
         int i = 0;
         while (i < sites.size())
         {
-          String site = (String)sites.get(i++);
+          NameValue site = sites.get(i++);
           ConfigurationNode node = new ConfigurationNode("site");
-          node.setValue(site);
+          ConfigurationNode child;
+          child = new ConfigurationNode("name");
+          child.setValue(site.getValue());
+          node.addChild(node.getChildCount(),child);
+          child = new ConfigurationNode("display_name");
+          child.setValue(site.getPrettyName());
+          node.addChild(node.getChildCount(),child);
           output.addChild(output.getChildCount(),node);
         }
       }
@@ -535,13 +573,19 @@
       try
       {
         String sitePath = command.substring("libraries/".length());
-        ArrayList libs = getDocLibsBySite(sitePath);
+        List<NameValue> libs = getDocLibsBySite(sitePath);
         int i = 0;
         while (i < libs.size())
         {
-          String lib = (String)libs.get(i++);
+          NameValue lib = libs.get(i++);
           ConfigurationNode node = new ConfigurationNode("library");
-          node.setValue(lib);
+          ConfigurationNode child;
+          child = new ConfigurationNode("name");
+          child.setValue(lib.getValue());
+          node.addChild(node.getChildCount(),child);
+          child = new ConfigurationNode("display_name");
+          child.setValue(lib.getPrettyName());
+          node.addChild(node.getChildCount(),child);
           output.addChild(output.getChildCount(),node);
         }
       }
@@ -559,13 +603,19 @@
       try
       {
         String sitePath = command.substring("lists/".length());
-        ArrayList libs = getListsBySite(sitePath);
+        List<NameValue> libs = getListsBySite(sitePath);
         int i = 0;
         while (i < libs.size())
         {
-          String lib = (String)libs.get(i++);
+          NameValue lib = libs.get(i++);
           ConfigurationNode node = new ConfigurationNode("list");
-          node.setValue(lib);
+          ConfigurationNode child;
+          child = new ConfigurationNode("name");
+          child.setValue(lib.getValue());
+          node.addChild(node.getChildCount(),child);
+          child = new ConfigurationNode("display_name");
+          child.setValue(lib.getPrettyName());
+          node.addChild(node.getChildCount(),child);
           output.addChild(output.getChildCount(),node);
         }
       }
@@ -645,16 +695,8 @@
   {
     getSession();
 
-    // Before we begin looping, make sure we know what to add to the version string to account for the forced acls.
-
-    // Read the forced acls.  A null return indicates that security is disabled!!!
-    // A zero-length return indicates that the native acls should be used.
-    // All of this is germane to how we ingest the document, so we need to note it in
-    // the version string completely.
-    String[] acls = getAcls(spec);
-    // Make sure they are in sorted order, since we need the version strings to be comparable
-    if (acls != null)
-      java.util.Arrays.sort(acls);
+    // Get the forced acls.  (We need this only for the case where documents have their own acls)
+    String[] forcedAcls = getAcls(spec);
 
     // Look at the metadata attributes.
     // So that the version strings are comparable, we will put them in an array first, and sort them.
@@ -677,14 +719,6 @@
 
     }
 
-    // This is the cached map of sitelibstring to library identifier
-    Map<String,String> libIDMap = new HashMap<String,String>();
-    // This is the cached map of siteliststring to list identifier
-    Map<String,String> listIDMap = new HashMap<String,String>();
-    
-    // This is the cached map if guid to field list
-    Map<String,Map<String,String>> fieldListMap = new HashMap<String,Map<String,String>>();
-    
     // Calculate the part of the version string that comes from path name and mapping.
     // This starts with = since ; is used by another optional component (the forced acls)
     StringBuilder pathNameAttributeVersion = new StringBuilder();
@@ -693,10 +727,6 @@
 
     String[] rval = new String[documentIdentifiers.length];
     
-    // Build a cache of the acls for a given site, guid.
-    // The key is the guid, and the value is a String[]
-    Map<String,String[]> ACLmap = new HashMap<String,String[]>();
-    
     i = 0;
     while (i < rval.length)
     {
@@ -724,7 +754,7 @@
           // === List-style identifier ===
           if (dListSeparatorIndex == documentIdentifier.length() - 3)
           {
-            // List path!
+            // == List path! ==
             if (checkIncludeList(documentIdentifier.substring(0,documentIdentifier.length()-3),spec))
               // This is the path for the list: No versioning
               rval[i] = "";
@@ -737,46 +767,66 @@
           }
           else
           {
-            // List item path!
+            // == List item or attachment path! ==
             // Convert the modified document path to an unmodified one, plus a library path.
             String decodedListPath = documentIdentifier.substring(0,dListSeparatorIndex);
-            String decodedItemPath = decodedListPath + documentIdentifier.substring(dListSeparatorIndex+2);
-            if (checkIncludeListItem(decodedItemPath,spec))
+            String itemAndAttachment = documentIdentifier.substring(dListSeparatorIndex+2);
+            String decodedItemPath = decodedListPath + itemAndAttachment;
+            
+            int cutoff = decodedListPath.lastIndexOf("/");
+            String sitePath = decodedListPath.substring(0,cutoff);
+            String list = decodedListPath.substring(cutoff+1);
+
+            String encodedSitePath = encodePath(sitePath);
+
+            int attachmentSeparatorIndex = itemAndAttachment.indexOf("//",1);
+            if (attachmentSeparatorIndex == -1)
             {
-              // This file is included, so calculate a version string.  This will include metadata info, so get that first.
-              MetadataInformation metadataInfo = getMetadataSpecification(decodedItemPath,spec);
-
-              int lastIndex = decodedListPath.lastIndexOf("/");
-              String sitePath = decodedListPath.substring(0,lastIndex);
-              String list = decodedListPath.substring(lastIndex+1);
-
-              String encodedSitePath = encodePath(sitePath);
-
-              // Need to get the library id.  Cache it if we need to calculate it.
-              String listID = listIDMap.get(decodedListPath);
-              if (listID == null)
+              // == List item path! ==
+              if (checkIncludeListItem(decodedItemPath,spec))
               {
-                listID = proxy.getListID(encodedSitePath, sitePath, list);
-                if (listID != null)
-                  listIDMap.put(decodedListPath,listID);
-              }
+                // This file is included, so calculate a version string.  This will include metadata info, so get that first.
+                MetadataInformation metadataInfo = getMetadataSpecification(decodedItemPath,spec);
 
-              if (listID != null)
-              {
-                String[] sortedMetadataFields = getInterestingFieldSetSorted(metadataInfo,encodedSitePath,listID,fieldListMap);
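+                // These values were queued as carry-down data by the parent list document,
+                // so no extra SharePoint calls are needed to version the item.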
+                String[] accessTokens = activities.retrieveParentData(documentIdentifier, "accessTokens");
+                String[] denyTokens = activities.retrieveParentData(documentIdentifier, "denyTokens");
+                String[] listIDs = activities.retrieveParentData(documentIdentifier, "guids");
+                String[] listFields = activities.retrieveParentData(documentIdentifier, "fields");
+                String[] displayURLs = activities.retrieveParentData(documentIdentifier, "displayURLs");
                 
-                if (sortedMetadataFields != null)
+                String listID;
+                if (listIDs.length >= 1)
+                  listID = listIDs[0];
+                else
+                  listID = null;
+
+                String displayURL;
+                if (displayURLs.length >= 1)
+                  displayURL = displayURLs[0];
+                else
+                  displayURL = null;
+
+                if (listID != null)
                 {
+                  String[] sortedMetadataFields = getInterestingFieldSetSorted(metadataInfo,listFields);
+                  
+                  // Sort access tokens so they are comparable in the version string
+                  java.util.Arrays.sort(accessTokens);
+                  java.util.Arrays.sort(denyTokens);
+
                   // Next, get the actual timestamp field for the file.
                   ArrayList metadataDescription = new ArrayList();
                   metadataDescription.add("Modified");
                   metadataDescription.add("Created");
+                  metadataDescription.add("ID");
+                  metadataDescription.add("GUID");
                   // The document path includes the library, with no leading slash, and is decoded.
-                  int cutoff = decodedListPath.lastIndexOf("/");
                   String decodedItemPathWithoutSite = decodedItemPath.substring(cutoff+1);
-                  Map values = proxy.getFieldValues( metadataDescription, encodedSitePath, listID, "/Lists/" + decodedItemPathWithoutSite, dspStsWorks );
-                  String modifiedDate = (String)values.get("Modified");
-                  String createdDate = (String)values.get("Created");
+                  Map<String,String> values = proxy.getFieldValues( metadataDescription, encodedSitePath, listID, "/Lists/" + decodedItemPathWithoutSite, dspStsWorks );
+                  String modifiedDate = values.get("Modified");
+                  String createdDate = values.get("Created");
+                  String id = values.get("ID");
+                  String guid = values.get("GUID");
                   if (modifiedDate != null)
                   {
                     // Item has a modified date so we presume it exists.
@@ -786,64 +836,28 @@
                     
                     // Build version string
                     String versionToken = modifiedDate;
+                      
+                    // Revamped version string on 9/21/2013 to make parseability better
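+                    // Packed layout: metadata field names, access tokens, deny tokens, modified date,
+                    // created date, item ID, item GUID, display URL, then the unparseable tail.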
                     
-                    // Revamped version string on 11/8/2006 to make parseability better
-
                     StringBuilder sb = new StringBuilder();
 
                     packList(sb,sortedMetadataFields,'+');
-
-                    // Do the acls.
-                    boolean foundAcls = true;
-                    if (acls != null)
-                    {
-                      sb.append('+');
-
-                      // If there are forced acls, use those in the version string instead.
-                      String[] accessTokens;
-                      if (acls.length == 0)
-                      {
-                        // The goal here is simply to record what should get ingested with the document, so that
-                        // we can compare against future values.
-                        // Grab the acls for this combo, if we haven't already
-                        accessTokens = lookupAccessTokensSorted(encodedSitePath,listID,ACLmap);
-                          
-                        if (accessTokens == null)
-                          foundAcls = false;
-                        
-                      }
-                      else
-                        accessTokens = acls;
-                      // Only pack access tokens if they are non-null; we'll be giving up anyhow otherwise
-                      if (foundAcls)
-                      {
-                        packList(sb,accessTokens,'+');
-                        // Added 4/21/2008 to handle case when AD authority is down
-                        pack(sb,defaultAuthorityDenyToken,'+');
-                      }
-                    }
-                    else
-                      sb.append('-');
-                    if (foundAcls)
-                    {
-                      packDate(sb,modifiedDateValue);
-                      packDate(sb,createdDateValue);
-                      // The rest of this is unparseable
-                      sb.append(versionToken);
-                      sb.append(pathNameAttributeVersion);
-                      // Added 9/7/07
-                      sb.append("_").append(fileBaseUrl);
-                      //
-                      rval[i] = sb.toString();
-                      if (Logging.connectors.isDebugEnabled())
-                        Logging.connectors.debug( "SharePoint: Complete version string for '"+documentIdentifier+"': " + rval[i]);
-                    }
-                    else
-                    {
-                      if (Logging.connectors.isDebugEnabled())
-                        Logging.connectors.debug("SharePoint: Couldn't get access tokens for list '"+decodedListPath+"'; removing list item '"+documentIdentifier+"'");
-                      rval[i] = null;
-                    }
+                    packList(sb,accessTokens,'+');
+                    packList(sb,denyTokens,'+');
+                    packDate(sb,modifiedDateValue);
+                    packDate(sb,createdDateValue);
+                    pack(sb,id,'+');
+                    pack(sb,guid,'+');
+                    pack(sb,displayURL,'+');
+                    // The rest of this is unparseable
+                    sb.append(versionToken);
+                    sb.append(pathNameAttributeVersion);
+                    // Added 9/7/07
+                    sb.append("_").append(fileBaseUrl);
+                    //
+                    rval[i] = sb.toString();
+                    if (Logging.connectors.isDebugEnabled())
+                      Logging.connectors.debug( "SharePoint: Complete version string for '"+documentIdentifier+"': " + rval[i]);
                   }
                   else
                   {
@@ -855,22 +869,106 @@
                 else
                 {
                   if (Logging.connectors.isDebugEnabled())
-                    Logging.connectors.debug("SharePoint: Can't get version of '"+documentIdentifier+"' because list '"+decodedListPath+"' doesn't respond to metadata requests - removing");
+                    Logging.connectors.debug("SharePoint: Can't get version of '"+documentIdentifier+"' because list '"+decodedListPath+"' does not exist - removing");
                   rval[i] = null;
                 }
               }
               else
               {
                 if (Logging.connectors.isDebugEnabled())
-                  Logging.connectors.debug("SharePoint: Can't get version of '"+documentIdentifier+"' because list '"+decodedListPath+"' does not exist - removing");
+                  Logging.connectors.debug("SharePoint: List item '"+documentIdentifier+"' is no longer included - removing");
                 rval[i] = null;
               }
             }
             else
             {
-              if (Logging.connectors.isDebugEnabled())
-                Logging.connectors.debug("SharePoint: List item '"+documentIdentifier+"' is no longer included - removing");
-              rval[i] = null;
+              // == List item attachment path! ==
+              if (checkIncludeListItemAttachment(decodedItemPath,spec))
+              {
+
+                // To save work, we retrieve most of what we need in version info from the parent.
+
+                // Retrieve modified and created dates
+                String[] modifiedDateSet = activities.retrieveParentData(documentIdentifier, "modifiedDate");
+                String[] createdDateSet = activities.retrieveParentData(documentIdentifier, "createdDate");
+                String[] accessTokens = activities.retrieveParentData(documentIdentifier, "accessTokens");
+                String[] denyTokens = activities.retrieveParentData(documentIdentifier, "denyTokens");
+                String[] urlSet = activities.retrieveParentData(documentIdentifier, "url");
+
+                // Only one modifiedDate and createdDate can be used.  If there's more than one, just pick one - the item will be reindexed
+                // anyhow.
+                String modifiedDate;
+                if (modifiedDateSet.length >= 1)
+                  modifiedDate = modifiedDateSet[0];
+                else
+                  modifiedDate = null;
+                String createdDate;
+                if (createdDateSet.length >= 1)
+                  createdDate = createdDateSet[0];
+                else
+                  createdDate = null;
+                String url;
+                if (urlSet.length >= 1)
+                  url = urlSet[0];
+                else
+                  url = null;
+
+                // If we have no modified date or url, it means that the parent item has gone away, so we go away too.
+                if (modifiedDate != null && url != null)
+                {
+                  // Item has a modified date so we presume it exists.
+                      
+                  Date modifiedDateValue;
+                  if (modifiedDate != null)
+                    modifiedDateValue = new Date(new Long(modifiedDate).longValue());
+                  else
+                    modifiedDateValue = null;
+                  Date createdDateValue;
+                  if (createdDate != null)
+                    createdDateValue = new Date(new Long(createdDate).longValue());
+                  else
+                    createdDateValue = null;
+                      
+                  // Build version string
+                  String versionToken = modifiedDate;
+                      
+                  StringBuilder sb = new StringBuilder();
+
+                  // Pack the URL to get the data from
+                  pack(sb,url,'+');
+                  
+                  // Do the acls.  If we get this far, we are guaranteed to have them, but we need to sort.
+                  java.util.Arrays.sort(accessTokens);
+                  java.util.Arrays.sort(denyTokens);
+                  
+                  packList(sb,accessTokens,'+');
+                  packList(sb,denyTokens,'+');
+                  packDate(sb,modifiedDateValue);
+                  packDate(sb,createdDateValue);
+
+                  // The rest of this is unparseable
+                  sb.append(versionToken);
+                  sb.append(pathNameAttributeVersion);
+                  sb.append("_").append(fileBaseUrl);
+                  //
+                  rval[i] = sb.toString();
+                  if (Logging.connectors.isDebugEnabled())
+                    Logging.connectors.debug( "SharePoint: Complete version string for '"+documentIdentifier+"': " + rval[i]);
+                }
+                else
+                {
+                  // No modified date or url, which means the parent list item is gone, so delete
+                  if (Logging.connectors.isDebugEnabled())
+                    Logging.connectors.debug("SharePoint: Can't get version of '"+documentIdentifier+"' because modified date or attachment url not found");
+                  rval[i] = null;
+                }
+              }
+              else
+              {
+                if (Logging.connectors.isDebugEnabled())
+                  Logging.connectors.debug("SharePoint: List item attachment '"+documentIdentifier+"' is no longer included - removing");
+                rval[i] = null;
+              }
             }
           }
         }
@@ -892,7 +990,7 @@
           }
           else
           {
-            // Document path!
+            // == Document path ==
             // Convert the modified document path to an unmodified one, plus a library path.
             String decodedLibPath = documentIdentifier.substring(0,dLibSeparatorIndex);
             String decodedDocumentPath = decodedLibPath + documentIdentifier.substring(dLibSeparatorIndex+1);
@@ -905,132 +1003,110 @@
               String sitePath = decodedLibPath.substring(0,lastIndex);
               String lib = decodedLibPath.substring(lastIndex+1);
 
-              String encodedSitePath = encodePath(sitePath);
+              // Retrieve the carry-down data we will be using.
+              // Note well: for SharePoint versions that include document/folder acls, these access tokens will be ignored,
+              // but they are still carried down, in case someone switches versions on us.
+              String[] accessTokens = activities.retrieveParentData(documentIdentifier, "accessTokens");
+              String[] denyTokens = activities.retrieveParentData(documentIdentifier, "denyTokens");
+              String[] libIDs = activities.retrieveParentData(documentIdentifier, "guids");
+              String[] libFields = activities.retrieveParentData(documentIdentifier, "fields");
 
-              // Need to get the library id.  Cache it if we need to calculate it.
-              String libID = libIDMap.get(decodedLibPath);
-              if (libID == null)
-              {
-                libID = proxy.getDocLibID(encodedSitePath, sitePath, lib);
-                if (libID != null)
-                  libIDMap.put(decodedLibPath,libID);
-              }
-
+              String libID;
+              if (libIDs.length >= 1)
+                libID = libIDs[0];
+              else
+                libID = null;
+              
               if (libID != null)
               {
-                String[] sortedMetadataFields = getInterestingFieldSetSorted(metadataInfo,encodedSitePath,libID,fieldListMap);
+                String encodedSitePath = encodePath(sitePath);
+                String[] sortedMetadataFields = getInterestingFieldSetSorted(metadataInfo,libFields);
                 
-                if (sortedMetadataFields != null)
+                // Sort access tokens
+                java.util.Arrays.sort(accessTokens);
+                java.util.Arrays.sort(denyTokens);
+
+                // Next, get the actual timestamp field for the file.
+                ArrayList metadataDescription = new ArrayList();
+                metadataDescription.add("Last_x0020_Modified");
+                metadataDescription.add("Modified");
+                metadataDescription.add("Created");
+                metadataDescription.add("GUID");
+                // The document path includes the library, with no leading slash, and is decoded.
+                int cutoff = decodedLibPath.lastIndexOf("/");
+                String decodedDocumentPathWithoutSite = decodedDocumentPath.substring(cutoff);
+                Map<String,String> values = proxy.getFieldValues( metadataDescription, encodedSitePath, libID, decodedDocumentPathWithoutSite, dspStsWorks );
+
+                String modifiedDate = values.get("Modified");
+                String createdDate = values.get("Created");
+                String guid = values.get("GUID");
+                String modifyDate = values.get("Last_x0020_Modified");
+
+                if (modifyDate != null)
                 {
-                  // Next, get the actual timestamp field for the file.
-                  ArrayList metadataDescription = new ArrayList();
-                  metadataDescription.add("Last_x0020_Modified");
-                  metadataDescription.add("Modified");
-                  metadataDescription.add("Created");
-                  // The document path includes the library, with no leading slash, and is decoded.
-                  int cutoff = decodedLibPath.lastIndexOf("/");
-                  String decodedDocumentPathWithoutSite = decodedDocumentPath.substring(cutoff+1);
-                  Map values = proxy.getFieldValues( metadataDescription, encodedSitePath, libID, decodedDocumentPathWithoutSite, dspStsWorks );
+                  // Item has a modified date, so we presume it exists
+                  Date modifiedDateValue = DateParser.parseISO8601Date(modifiedDate);
+                  Date createdDateValue = DateParser.parseISO8601Date(createdDate);
 
-                  String modifiedDate = (String)values.get("Modified");
-                  String createdDate = (String)values.get("Created");
-                  
-                  String modifyDate = (String)values.get("Last_x0020_Modified");
-                  if (modifyDate != null)
+                  // Build version string
+                  String versionToken = modifyDate;
+
+                  if (supportsItemSecurity)
                   {
-                    // Item has a modified date, so we presume it exists
-                    Date modifiedDateValue = DateParser.parseISO8601Date(modifiedDate);
-                    Date createdDateValue = DateParser.parseISO8601Date(createdDate);
-
-                    // Build version string
-                    String versionToken = modifyDate;
-                    // Revamped version string on 11/8/2006 to make parseability better
+                    // Do the acls.
+                    if (forcedAcls == null)
+                    {
+                      // Security is off
+                      accessTokens = new String[0];
+                      denyTokens = new String[0];
+                    }
+                    else if (forcedAcls.length > 0)
+                    {
+                      // Security on, forced acls
+                      accessTokens = forcedAcls;
+                      denyTokens = new String[0];
+                    }
+                    else
+                    {
+                      // Security on, is native
+                      accessTokens = proxy.getDocumentACLs( encodedSitePath, encodePath(decodedDocumentPath), activeDirectoryAuthority );
+                      denyTokens = new String[]{defaultAuthorityDenyToken};
+                    }
+                  }
+                  
+                  if (accessTokens != null)
+                  {
+                    // Revamped version string on 9/21/2013 to make parseability better
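+                    // Packed layout: metadata field names, access tokens, deny tokens, modified date,
+                    // created date, GUID, then the unparseable tail.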
 
                     StringBuilder sb = new StringBuilder();
 
                     packList(sb,sortedMetadataFields,'+');
-
-                    // Do the acls.
-                    boolean foundAcls = true;
-                    if (acls != null)
-                    {
-                      sb.append('+');
-
-                      // If there are forced acls, use those in the version string instead.
-                      String[] accessTokens;
-                      if (acls.length == 0)
-                      {
-                        if (supportsItemSecurity)
-                        {
-                          // For documents, just fetch
-                          accessTokens = proxy.getDocumentACLs( encodedSitePath, encodePath(decodedDocumentPath) );
-                          if (accessTokens != null)
-                          {
-                            java.util.Arrays.sort(accessTokens);
-                            if (Logging.connectors.isDebugEnabled())
-                              Logging.connectors.debug( "SharePoint: Received " + accessTokens.length + " acls for '" +decodedDocumentPath+"'");
-                          }
-                          else
-                          {
-                            foundAcls = false;
-                          }
-                        }
-                        else
-                        {
-                          // The goal here is simply to record what should get ingested with the document, so that
-                          // we can compare against future values.
-                          // Grab the acls for this combo, if we haven't already
-                          accessTokens = lookupAccessTokensSorted(encodedSitePath,libID,ACLmap);
-                          
-                          if (accessTokens == null)
-                            foundAcls = false;
-
-                        }
-                      }
-                      else
-                        accessTokens = acls;
-                      // Only pack access tokens if they are non-null; we'll be giving up anyhow otherwise
-                      if (foundAcls)
-                      {
-                        packList(sb,accessTokens,'+');
-                        // Added 4/21/2008 to handle case when AD authority is down
-                        pack(sb,defaultAuthorityDenyToken,'+');
-                      }
-                    }
-                    else
-                      sb.append('-');
-                    if (foundAcls)
-                    {
-                      packDate(sb,modifiedDateValue);
-                      packDate(sb,createdDateValue);
-                      // The rest of this is unparseable
-                      sb.append(versionToken);
-                      sb.append(pathNameAttributeVersion);
-                      // Added 9/7/07
-                      sb.append("_").append(fileBaseUrl);
-                      //
-                      rval[i] = sb.toString();
-                      if (Logging.connectors.isDebugEnabled())
-                        Logging.connectors.debug( "SharePoint: Complete version string for '"+documentIdentifier+"': " + rval[i]);
-                    }
-                    else
-                    {
-                      if (Logging.connectors.isDebugEnabled())
-                        Logging.connectors.debug("SharePoint: Couldn't get access tokens for library '"+decodedLibPath+"'; removing document '"+documentIdentifier+"'");
-                      rval[i] = null;
-                    }
+                    packList(sb,accessTokens,'+');
+                    packList(sb,denyTokens,'+');
+                    packDate(sb,modifiedDateValue);
+                    packDate(sb,createdDateValue);
+                    pack(sb,guid,'+');
+                    // The rest of this is unparseable
+                    sb.append(versionToken);
+                    sb.append(pathNameAttributeVersion);
+                    // Added 9/7/07
+                    sb.append("_").append(fileBaseUrl);
+                    //
+                    rval[i] = sb.toString();
+                    if (Logging.connectors.isDebugEnabled())
+                      Logging.connectors.debug( "SharePoint: Complete version string for '"+documentIdentifier+"': " + rval[i]);
                   }
                   else
                   {
                     if (Logging.connectors.isDebugEnabled())
-                      Logging.connectors.debug("SharePoint: Can't get version of '"+documentIdentifier+"' because it has no modify date");
+                      Logging.connectors.debug("SharePoint: Couldn't get access tokens for item '"+decodedDocumentPath+"'; removing document '"+documentIdentifier+"'");
                     rval[i] = null;
                   }
                 }
                 else
                 {
                   if (Logging.connectors.isDebugEnabled())
-                    Logging.connectors.debug("SharePoint: Can't get version of '"+documentIdentifier+"' because library '"+decodedLibPath+"' doesn't respond to metadata requests - removing");
+                    Logging.connectors.debug("SharePoint: Can't get version of '"+documentIdentifier+"' because it has no modify date");
                   rval[i] = null;
                 }
               }
@@ -1101,81 +1177,41 @@
     return index;
   }
   
-  protected String[] lookupAccessTokensSorted(String encodedSitePath, String guid, Map<String,String[]> ACLmap)
-    throws ManifoldCFException, ServiceInterruption
+  protected String[] getInterestingFieldSetSorted(MetadataInformation metadataInfo, String[] allFields)
   {
-    String[] accessTokens = ACLmap.get(guid);
-    if (accessTokens == null)
-    {
-      if (Logging.connectors.isDebugEnabled())
-        Logging.connectors.debug( "SharePoint: Compiling acl list for guid "+guid+"... ");
-      accessTokens = proxy.getACLs( encodedSitePath, guid );
-      if (accessTokens != null)
-      {
-        java.util.Arrays.sort(accessTokens);
-        if (Logging.connectors.isDebugEnabled())
-          Logging.connectors.debug( "SharePoint: Received " + accessTokens.length + " acls for  guid " +guid);
-        ACLmap.put(guid,accessTokens);
-      }
-    }
-    return accessTokens;
-  }
-
-  protected String[] getInterestingFieldSetSorted(MetadataInformation metadataInfo,
-    String encodedSitePath, String guid, Map<String,Map<String,String>> fieldListMap)
-    throws ManifoldCFException, ServiceInterruption
-  {
-    Set<String> metadataFields = null;
+    Set<String> metadataFields = new HashSet<String>();
 
     // Figure out the actual metadata fields we will request
     if (metadataInfo.getAllMetadata())
     {
-      // Fetch the fields
-      Map<String,String> fieldNames = fieldListMap.get(guid);
-      if (fieldNames == null)
+      for (String field : allFields)
       {
-        fieldNames = proxy.getFieldList( encodedSitePath, guid );
-        if (fieldNames != null)
-          fieldListMap.put(guid,fieldNames);
-      }
-
-      if (fieldNames != null)
-      {
-        metadataFields = new HashSet<String>();
-        for (Iterator<String> e = fieldNames.keySet().iterator(); e.hasNext();)
-        {
-          String key = e.next();
-          metadataFields.add(key);
-        }
+        metadataFields.add(field);
       }
     }
     else
     {
-      metadataFields = new HashSet<String>();
       String[] fields = metadataInfo.getMetadataFields();
-      int q = 0;
-      while (q < fields.length)
+      for (String field : fields)
       {
-        String field = fields[q++];
         metadataFields.add(field);
       }
     }
-    if (metadataFields == null)
-      return null;
     
     // Convert the hashtable to an array and sort it.
     String[] sortedMetadataFields = new String[metadataFields.size()];
     int z = 0;
-    Iterator<String> iter = metadataFields.iterator();
-    while (iter.hasNext())
+    for (String field : metadataFields)
     {
-      sortedMetadataFields[z++] = iter.next();
+      sortedMetadataFields[z++] = field;
     }
     java.util.Arrays.sort(sortedMetadataFields);
 
     return sortedMetadataFields;
   }
 
+  protected static final String[] attachmentDataNames = new String[]{"createdDate","modifiedDate","accessTokens","denyTokens","url","guids"};
+
   /** Process a set of documents.
   * This is the method that should cause each document to be fetched, processed, and the results either added
   * to the queue of documents for the current job, and/or entered into the incremental ingestion manager.
@@ -1193,12 +1229,15 @@
   {
     getSession();
 
+    // Read the forced acls.  A null return indicates that security is disabled!!!
+    // A zero-length return indicates that the native acls should be used.
+    // All of this is germane to how we ingest the document, so we need to note it in
+    // the version string completely.
+    String[] forcedAcls = getAcls(spec);
+
     // Decode the system metadata part of the specification
     SystemMetadataDescription sDesc = new SystemMetadataDescription(spec);
 
-    Map<String,String> docLibIDMap = new HashMap<String,String>();
-    Map<String,String> listIDMap = new HashMap<String,String>();
-
     int i = 0;
     while (i < documentIdentifiers.length)
     {
@@ -1228,19 +1267,68 @@
             if (Logging.connectors.isDebugEnabled())
               Logging.connectors.debug( "SharePoint: Document identifier is a list: '" + siteListPath + "'" );
 
-            // Calculate the start of the path part that would contain the list item name
-            int listItemPathIndex = site.length() + 1 + listName.length();
-
             String listID = proxy.getListID( encodePath(site), site, listName );
             if (listID != null)
             {
-              ListItemStream fs = new ListItemStream( activities, listItemPathIndex, spec );
-              boolean success = proxy.getChildren( fs, encodePath(site) , listID, dspStsWorks );
-              if (!success)
+              String encodedSitePath = encodePath(site);
+              
+              // Get the list's fields
+              Map<String,String> fieldNames = proxy.getFieldList( encodedSitePath, listID );
+              if (fieldNames != null)
               {
-                // Site/list no longer exists, so delete entry
+                String[] fields = new String[fieldNames.size()];
+                int j = 0;
+                for (String field : fieldNames.keySet())
+                {
+                  fields[j++] = field;
+                }
+                
+                String[] accessTokens;
+                String[] denyTokens;
+                
+                if (forcedAcls == null)
+                {
+                  // Security is off
+                  accessTokens = new String[0];
+                  denyTokens = new String[0];
+                }
+                else if (forcedAcls.length != 0)
+                {
+                  // Forced security
+                  accessTokens = forcedAcls;
+                  denyTokens = new String[0];
+                }
+                else
+                {
+                  // Security enabled, native security
+                  accessTokens = proxy.getACLs( encodedSitePath, listID, activeDirectoryAuthority );
+                  denyTokens = new String[]{defaultAuthorityDenyToken};
+                }
+
+                if (accessTokens != null)
+                {
+                  ListItemStream fs = new ListItemStream( activities, encodedServerLocation, site, siteListPath, spec,
+                    documentIdentifier, accessTokens, denyTokens, listID, fields );
+                  boolean success = proxy.getChildren( fs, encodedSitePath , listID, dspStsWorks );
+                  if (!success)
+                  {
+                    // Site/list no longer exists, so delete entry
+                    if (Logging.connectors.isDebugEnabled())
+                      Logging.connectors.debug("SharePoint: No list found for list '"+siteListPath+"' - deleting");
+                    activities.deleteDocument(documentIdentifier,version);
+                  }
+                }
+                else
+                {
+                  if (Logging.connectors.isDebugEnabled())
+                    Logging.connectors.debug("SharePoint: Access token lookup failed for list '"+siteListPath+"' - deleting");
+                  activities.deleteDocument(documentIdentifier,version);
+                }
+              }
+              else
+              {
                 if (Logging.connectors.isDebugEnabled())
-                  Logging.connectors.debug("SharePoint: No list found for list '"+siteListPath+"' - deleting");
+                  Logging.connectors.debug("SharePoint: Field list lookup failed for list '"+siteListPath+"' - deleting");
                 activities.deleteDocument(documentIdentifier,version);
               }
             }
@@ -1253,42 +1341,35 @@
           }
           else
           {
-            // List item identifier
-            if ( !scanOnly[ i ] )
+            // == List item or attachment identifier ==
+            
+            // Get the item part of the path
+            String decodedListPath = documentIdentifier.substring(0,dListSeparatorIndex);
+            String itemAndAttachment = documentIdentifier.substring(dListSeparatorIndex+2);
+            String decodedItemPath = decodedListPath + itemAndAttachment;
+            
+            // If the item part has a second "//" separator, we're looking at an attachment
+            int attachmentSeparatorIndex = itemAndAttachment.indexOf("//",1);
+            if (attachmentSeparatorIndex == -1)
             {
-              // Convert the modified document path to an unmodified one, plus a library path.
-              String decodedListPath = documentIdentifier.substring(0,dListSeparatorIndex);
-              String decodedItemPath = decodedListPath + documentIdentifier.substring(dListSeparatorIndex+2);
+              // == List item identifier ==
               
-              int cutoff = decodedListPath.lastIndexOf("/");
-
-              String encodedItemPath = encodePath(decodedListPath.substring(cutoff) + "/Lists/" + decodedItemPath.substring(cutoff+1));
-
+              // Before we index, we queue up any attachments
               int listCutoff = decodedListPath.lastIndexOf( "/" );
               String site = decodedListPath.substring(0,listCutoff);
               String listName = decodedListPath.substring( listCutoff + 1 );
 
-              // Parse what we need out of version string.
-
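+              // Unpack the version string in the same order it was packed above:
+              // metadata field names, access tokens, deny tokens, dates, item ID, GUID, item URL.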
               // Placeholder for metadata specification
               ArrayList metadataDescription = new ArrayList();
               int startPosition = unpackList(metadataDescription,version,0,'+');
 
               // Acls
-              ArrayList acls = null;
-              String denyAcl = null;
-              if (startPosition < version.length() && version.charAt(startPosition++) == '+')
-              {
-                acls = new ArrayList();
-                startPosition = unpackList(acls,version,startPosition,'+');
-                if (startPosition < version.length())
-                {
-                  StringBuilder denyAclBuffer = new StringBuilder();
-                  startPosition = unpack(denyAclBuffer,version,startPosition,'+');
-                  denyAcl = denyAclBuffer.toString();
-                }
-              }
+              ArrayList acls = new ArrayList();
+              ArrayList denyAcls = new ArrayList();
+              startPosition = unpackList(acls,version,startPosition,'+');
+              startPosition = unpackList(denyAcls,version,startPosition,'+');
 
+              // Dates
               Date modifiedDate = new Date(0L);
               startPosition = unpackDate(version,startPosition,modifiedDate);
               if (modifiedDate.getTime() == 0L)
@@ -1297,93 +1378,218 @@
               startPosition = unpackDate(version,startPosition,createdDate);
               if (createdDate.getTime() == 0L)
                 createdDate = null;
+
+              // ID (for looking up attachments)
+              StringBuilder idBuffer = new StringBuilder();
+              startPosition = unpack(idBuffer,version,startPosition,'+');
+
+              // List item GUID (for metadata)
+              StringBuilder guidBuffer = new StringBuilder();
+              startPosition = unpack(guidBuffer,version,startPosition,'+');
+              String guid = guidBuffer.toString();
               
-              // Generate the URL we are going to use
-              String itemUrl = fileBaseUrl + encodedItemPath;
-              if (Logging.connectors.isDebugEnabled())
-                Logging.connectors.debug( "SharePoint: Processing item '"+documentIdentifier+"'; url: '" + itemUrl + "'" );
+              // List item URL
+              StringBuilder relURLBuffer = new StringBuilder();
+              startPosition = unpack(relURLBuffer,version,startPosition,'+');
+              String relURL = relURLBuffer.toString();
+              
+              // We need the list ID, which we've already fetched, so grab that from the parent data.
+              String[] listIDs = activities.retrieveParentData(documentIdentifier, "guids");
 
-              if (activities.checkLengthIndexable(0L))
+              String listID;
+              if (listIDs.length >= 1)
+                listID = listIDs[0];
+              else
+                listID = null;
+
+              if (listID == null)
               {
-                InputStream is = new ByteArrayInputStream(new byte[0]);
-                try
+                if (Logging.connectors.isDebugEnabled())
+                  Logging.connectors.debug("SharePoint: List '"+decodedListPath+"' no longer exists - deleting item '"+documentIdentifier+"'");
+                activities.deleteDocument(documentIdentifier,version);
+                i++;
+                continue;
+              }
+
+              // Now, do any queuing that is needed.
+              if (attachmentsSupported)
+              {
+                String itemNumber = idBuffer.toString();
+
+                List<NameValue> attachmentNames = proxy.getAttachmentNames( site, listID, itemNumber );
+                // Now, queue up each attachment as a separate entry
+                for (NameValue attachmentName : attachmentNames)
                 {
-                  RepositoryDocument data = new RepositoryDocument();
-                  data.setBinary( is, 0L );
-
-                  if (modifiedDate != null)
-                    data.setModifiedDate(modifiedDate);
-                  if (createdDate != null)
-                    data.setCreatedDate(createdDate);
+                  // For attachments, we use the carry-down feature to get the data where we need it.  That's why
+                  // we unpacked the version information early above.
                   
-                  setDataACLs(data,acls,denyAcl);
-                  
-                  setPathAttribute(data,sDesc,documentIdentifier);
+                  // No check for inclusion; if the list item is included, so is this
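+                  // dataValues indices follow attachmentDataNames order:
+                  // createdDate, modifiedDate, accessTokens, denyTokens, url, guids.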
+                  String[][] dataValues = new String[attachmentDataNames.length][];
+                  if (createdDate == null)
+                    dataValues[0] = new String[0];
+                  else
+                    dataValues[0] = new String[]{new Long(createdDate.getTime()).toString()};
+                  if (modifiedDate == null)
+                    dataValues[1] = new String[0];
+                  else
+                    dataValues[1] = new String[]{new Long(modifiedDate.getTime()).toString()};
+                  if (acls == null)
+                    dataValues[2] = new String[0];
+                  else
+                    dataValues[2] = (String[])acls.toArray(new String[0]);
+                  if (denyAcls == null)
+                    dataValues[3] = new String[0];
+                  else
+                    dataValues[3] = (String[])denyAcls.toArray(new String[0]);
+                  dataValues[4] = new String[]{attachmentName.getPrettyName()};
+                  dataValues[5] = new String[]{guid};
 
-                  // Retrieve field values from SharePoint
-                  if (metadataDescription.size() > 0)
+                  activities.addDocumentReference(documentIdentifier + "//" + attachmentName.getValue(),
+                    documentIdentifier, null, attachmentDataNames, dataValues);
+                  
+                }
+              }
+              
+              if ( !scanOnly[ i ] )
+              {
+                // Convert the modified document path to an unmodified one, plus a library path.
+                String encodedItemPath = encodePath(decodedListPath.substring(0,listCutoff) + "/Lists/" + decodedItemPath.substring(listCutoff+1));
+                
+                // Generate the URL we are going to use
+                String itemUrl = fileBaseUrl + relURL;  //fileBaseUrl + encodedItemPath;
+                
+                if (Logging.connectors.isDebugEnabled())
+                  Logging.connectors.debug( "SharePoint: Processing list item '"+documentIdentifier+"'; url: '" + itemUrl + "'" );
+
+                // Fetch the metadata we will be indexing
+                Map<String,String> metadataValues = null;
+                if (metadataDescription.size() > 0)
+                {
+                  metadataValues = proxy.getFieldValues( metadataDescription, encodePath(site), listID, "/Lists/" + decodedItemPath.substring(listCutoff+1), dspStsWorks );
+                  if (metadataValues == null)
                   {
-                    String listID = listIDMap.get(decodedListPath);
-                    if (listID == null)
-                    {
-                      listID = proxy.getListID( encodePath(site), site, listName);
-                      if (listID == null)
-                        listID = "";
-                      listIDMap.put(decodedListPath,listID);
-                    }
+                    // Item has vanished
+                    if (Logging.connectors.isDebugEnabled())
+                      Logging.connectors.debug("SharePoint: Item metadata fetch failure indicated that item is gone: '"+documentIdentifier+"' - removing");
+                    activities.deleteDocument(documentIdentifier,version);
+                    i++;
+                    continue;
+                  }
+                }
+                
+                if (activities.checkLengthIndexable(0L))
+                {
+                  InputStream is = new ByteArrayInputStream(new byte[0]);
+                  try
+                  {
+                    RepositoryDocument data = new RepositoryDocument();
+                    data.setBinary( is, 0L );
 
-                    if (listID.length() == 0)
-                    {
-                      if (Logging.connectors.isDebugEnabled())
-                        Logging.connectors.debug("SharePoint: List '"+decodedListPath+"' no longer exists - deleting item '"+documentIdentifier+"'");
-                      activities.deleteDocument(documentIdentifier,version);
-                      i++;
-                      continue;
-                    }
+                    if (modifiedDate != null)
+                      data.setModifiedDate(modifiedDate);
+                    if (createdDate != null)
+                      data.setCreatedDate(createdDate);
+                    
+                    setDataACLs(data,acls,denyAcls);
+                    
+                    setPathAttribute(data,sDesc,documentIdentifier);
 
-                    Map values = proxy.getFieldValues( metadataDescription, encodePath(site), listID, "/Lists/" + decodedItemPath.substring(cutoff+1), dspStsWorks );
-                    if (values != null)
+                    if (metadataValues != null)
                     {
-                      Iterator iter = values.keySet().iterator();
+                      Iterator<String> iter = metadataValues.keySet().iterator();
                       while (iter.hasNext())
                       {
-                        String fieldName = (String)iter.next();
-                        String fieldData = (String)values.get(fieldName);
+                        String fieldName = iter.next();
+                        String fieldData = metadataValues.get(fieldName);
                         data.addField(fieldName,fieldData);
                       }
                     }
-                    else
+                    data.addField("GUID",guid);
+                    
+                    activities.ingestDocument( documentIdentifier, version, itemUrl , data );
+                  }
+                  finally
+                  {
+                    try
                     {
-                      // Item has vanished
-                      if (Logging.connectors.isDebugEnabled())
-                        Logging.connectors.debug("SharePoint: Item metadata fetch failure indicated that item is gone: '"+documentIdentifier+"' - removing");
-                      activities.deleteDocument(documentIdentifier,version);
-                      i++;
-                      continue;
+                      is.close();
+                    }
+                    catch (InterruptedIOException e)
+                    {
+                      throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+                    }
+                    catch (IOException e)
+                    {
+                      // This should never happen; we're closing a bytearrayinputstream
                     }
                   }
-
-                  activities.ingestDocument( documentIdentifier, version, itemUrl , data );
                 }
-                finally
-                {
-                  try
-                  {
-                    is.close();
-                  }
-                  catch (InterruptedIOException e)
-                  {
-                    throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
-                  }
-                  catch (IOException e)
-                  {
-                    // This should never happen; we're closing a bytearrayinputstream
-                  }
-                }
+                else
+                  // Document too long (should never happen; length is 0)
+                  activities.deleteDocument( documentIdentifier, version );
               }
-              else
-                // Document too long (should never happen; length is 0)
-                activities.deleteDocument( documentIdentifier, version );
+            }
+            else
+            {
+              // == List item attachment identifier ==
+              if (!scanOnly[i])
+              {
+                // Unpack the version info.
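+                // The order mirrors the attachment version string packed above:
+                // url, access tokens, deny tokens, modified date, created date.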
+                int startPosition = 0;
+                StringBuilder urlBuffer = new StringBuilder();
+                ArrayList accessTokens = new ArrayList();
+                ArrayList denyTokens = new ArrayList();
+                Date modifiedDate = new Date(0L);
+                Date createdDate = new Date(0L);
+                
+                startPosition = unpack(urlBuffer,version,startPosition,'+');
+                startPosition = unpackList(accessTokens,version,startPosition,'+');
+                startPosition = unpackList(denyTokens,version,startPosition,'+');
+                startPosition = unpackDate(version,startPosition,modifiedDate);
+                startPosition = unpackDate(version,startPosition,createdDate);
+
+                if (modifiedDate.getTime() == 0L)
+                  modifiedDate = null;
+                if (createdDate.getTime() == 0L)
+                  createdDate = null;
+
+                // We need the list ID, which we've already fetched, so grab that from the parent data.
+                String[] guids = activities.retrieveParentData(documentIdentifier, "guids");
+                String guid;
+                if (guids.length >= 1)
+                  guid = guids[0];
+                else
+                  guid = null;
+                
+                if (guid != null)
+                {
+                  String url = urlBuffer.toString();
+                  int lastIndex = url.lastIndexOf("/");
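+                  // Append the attachment file name to the parent item's GUID so each attachment has a unique id.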
+                  guid = guid + ":" + url.substring(lastIndex+1);
+                  
+                  // Fetch and index.  This also filters documents based on output connector restrictions.
+                  String fileUrl = serverUrl + encodePath(url);
+                  String fetchUrl = fileUrl;
+                  if (!fetchAndIndexFile(activities, documentIdentifier, version, fileUrl, fetchUrl,
+                    accessTokens, denyTokens, createdDate, modifiedDate, null, guid, sDesc))
+                  {
+                    // Document not indexed for whatever reason
+                    activities.deleteDocument(documentIdentifier,version);
+                    i++;
+                    continue;
+                  }
+                }
+                else
+                {
+                  if (Logging.connectors.isDebugEnabled())
+                    Logging.connectors.debug("SharePoint: Skipping attachment '"+documentIdentifier+"' because no parent guid found");
+                  activities.deleteDocument(documentIdentifier,version);
+                  i++;
+                  continue;
+                }
+                
+              }
             }
           }
         }
@@ -1401,19 +1607,68 @@
             if (Logging.connectors.isDebugEnabled())
               Logging.connectors.debug( "SharePoint: Document identifier is a library: '" + siteLibPath + "'" );
 
-            // Calculate the start of the path part that would contain the folders/file
-            int foldersFilePathIndex = site.length() + 1 + libName.length();
-
             String libID = proxy.getDocLibID( encodePath(site), site, libName );
             if (libID != null)
             {
-              FileStream fs = new FileStream( activities, foldersFilePathIndex, spec );
-              boolean success = proxy.getChildren( fs, encodePath(site) , libID, dspStsWorks );
-              if (!success)
+              String encodedSitePath = encodePath(site);
+              
+              // Get the lib's fields
+              Map<String,String> fieldNames = proxy.getFieldList( encodedSitePath, libID );
+              if (fieldNames != null)
               {
-                // Site/library no longer exists, so delete entry
+                String[] fields = new String[fieldNames.size()];
+                int j = 0;
+                for (String field : fieldNames.keySet())
+                {
+                  fields[j++] = field;
+                }
+                
+                String[] accessTokens;
+                String[] denyTokens;
+                
+                if (forcedAcls == null)
+                {
+                  // Security is off
+                  accessTokens = new String[0];
+                  denyTokens = new String[0];
+                }
+                else if (forcedAcls.length != 0)
+                {
+                  // Forced security
+                  accessTokens = forcedAcls;
+                  denyTokens = new String[0];
+                }
+                else
+                {
+                  // Security enabled, native security
+                  accessTokens = proxy.getACLs( encodedSitePath, libID, activeDirectoryAuthority );
+                  denyTokens = new String[]{defaultAuthorityDenyToken};
+                }
+
+                if (accessTokens != null)
+                {
+                  FileStream fs = new FileStream( activities, encodedServerLocation, site, siteLibPath, spec,
+                    documentIdentifier, accessTokens, denyTokens, libID, fields );
+                  boolean success = proxy.getChildren( fs, encodedSitePath , libID, dspStsWorks );
+                  if (!success)
+                  {
+                    // Site/library no longer exists, so delete entry
+                    if (Logging.connectors.isDebugEnabled())
+                      Logging.connectors.debug("SharePoint: No list found for library '"+siteLibPath+"' - deleting");
+                    activities.deleteDocument(documentIdentifier,version);
+                  }
+                }
+                else
+                {
+                  if (Logging.connectors.isDebugEnabled())
+                    Logging.connectors.debug("SharePoint: Access token lookup failed for library '"+siteLibPath+"' - deleting");
+                  activities.deleteDocument(documentIdentifier,version);
+                }
+              }
+              else
+              {
                 if (Logging.connectors.isDebugEnabled())
-                  Logging.connectors.debug("SharePoint: No list found for library '"+siteLibPath+"' - deleting");
+                  Logging.connectors.debug("SharePoint: Field list lookup failed for library '"+siteLibPath+"' - deleting");
                 activities.deleteDocument(documentIdentifier,version);
               }
             }
@@ -1445,20 +1700,12 @@
               int startPosition = unpackList(metadataDescription,version,0,'+');
 
               // Acls
-              ArrayList acls = null;
-              String denyAcl = null;
-              if (startPosition < version.length() && version.charAt(startPosition++) == '+')
-              {
-                acls = new ArrayList();
-                startPosition = unpackList(acls,version,startPosition,'+');
-                if (startPosition < version.length())
-                {
-                  StringBuilder denyAclBuffer = new StringBuilder();
-                  startPosition = unpack(denyAclBuffer,version,startPosition,'+');
-                  denyAcl = denyAclBuffer.toString();
-                }
-              }
+              ArrayList acls = new ArrayList();
+              ArrayList denyAcls = new ArrayList();
+              startPosition = unpackList(acls,version,startPosition,'+');
+              startPosition = unpackList(denyAcls,version,startPosition,'+');
 
+              // Dates
               Date modifiedDate = new Date(0L);
               startPosition = unpackDate(version,startPosition,modifiedDate);
               if (modifiedDate.getTime() == 0L)
@@ -1468,247 +1715,59 @@
               if (createdDate.getTime() == 0L)
                 createdDate = null;
               
+              // Document GUID (for metadata)
+              StringBuilder guidBuffer = new StringBuilder();
+              startPosition = unpack(guidBuffer,version,startPosition,'+');
+              String guid = guidBuffer.toString();
+
               // Generate the URL we are going to use
               String fileUrl = fileBaseUrl + encodedDocumentPath;
               if (Logging.connectors.isDebugEnabled())
                 Logging.connectors.debug( "SharePoint: Processing file '"+documentIdentifier+"'; url: '" + fileUrl + "'" );
 
-
-              // Set stuff up for fetch activity logging
-              long startFetchTime = System.currentTimeMillis();
-              try
+              // First, fetch the metadata we plan to index.
+              Map<String,String> metadataValues = null;
+              if (metadataDescription.size() > 0)
               {
-                // Read the document into a local temporary file, so I get a reliable length.
-                File tempFile = File.createTempFile("__shp__",".tmp");
-                try
+                // Retrieve the library guid from carrydown data
+                String[] libIDs = activities.retrieveParentData(documentIdentifier, "guids");
+
+                String documentLibID;
+                if (libIDs.length >= 1)
+                  documentLibID = libIDs[0];
+                else
+                  documentLibID = null;
+
+                if (documentLibID == null)
                 {
-                  // Open the output stream
-                  OutputStream os = new FileOutputStream(tempFile);
-                  try
-                  {
-                    // Catch all exceptions having to do with reading the document
-                    try
-                    {
-                      ExecuteMethodThread emt = new ExecuteMethodThread(httpClient,
-                        serverUrl + encodedServerLocation + encodedDocumentPath, os);
-                      emt.start();
-                      emt.join();
-                      Throwable t = emt.getException();
-                      if (t instanceof InterruptedException)
-                        throw (InterruptedException)t;
-                      if (t instanceof IOException)
-                        throw (IOException)t;
-                      else if (t instanceof Error)
-                        throw (Error)t;
-                      else if (t instanceof org.apache.http.HttpException)
-                        throw (org.apache.http.HttpException)t;
-                      else if (t instanceof RuntimeException)
-                        throw (RuntimeException)t;
-                      
-                      int returnCode = emt.getResponse();
-                        
-                      if (returnCode == 404 || returnCode == 401 || returnCode == 400)
-                      {
-                        // Well, sharepoint thought the document was there, but it really isn't, so delete it.
-                        if (Logging.connectors.isDebugEnabled())
-                          Logging.connectors.debug("SharePoint: Document at '"+encodedServerLocation+encodedDocumentPath+"' failed to fetch with code "+Integer.toString(returnCode)+", deleting");
-                        activities.recordActivity(new Long(startFetchTime),ACTIVITY_FETCH,
-                          null,documentIdentifier,"Not found",Integer.toString(returnCode),null);
-                        activities.deleteDocument(documentIdentifier,version);
-                        i++;
-                        continue;
-                      }
-                      else if (returnCode != 200)
-                      {
-                        activities.recordActivity(new Long(startFetchTime),ACTIVITY_FETCH,
-                          null,documentIdentifier,"Error","Http status "+Integer.toString(returnCode),null);
-                        throw new ManifoldCFException("Error fetching document '"+fileUrl+"': "+Integer.toString(returnCode));
-                      }
-
-                      // Log the normal fetch activity
-                      activities.recordActivity(new Long(startFetchTime),ACTIVITY_FETCH,
-                        new Long(tempFile.length()),documentIdentifier,"Success",null,null);
-
-                    }
-                    catch (InterruptedException e)
-                    {
-                      throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
-                    }
-                    catch (java.net.SocketTimeoutException e)
-                    {
-                      activities.recordActivity(new Long(startFetchTime),ACTIVITY_FETCH,
-                        new Long(tempFile.length()),documentIdentifier,"Error",e.getMessage(),null);
-                      Logging.connectors.warn("SharePoint: SocketTimeoutException thrown: "+e.getMessage(),e);
-                      long currentTime = System.currentTimeMillis();
-                      throw new ServiceInterruption("SharePoint is down attempting to read '"+fileUrl+"', retrying: "+e.getMessage(),e,currentTime + 300000L,
-                        currentTime + 12 * 60 * 60000L,-1,true);
-                    }
-                    catch (org.apache.http.conn.ConnectTimeoutException e)
-                    {
-                      activities.recordActivity(new Long(startFetchTime),ACTIVITY_FETCH,
-                        new Long(tempFile.length()),documentIdentifier,"Error",e.getMessage(),null);
-                      Logging.connectors.warn("SharePoint: ConnectTimeoutException thrown: "+e.getMessage(),e);
-                      long currentTime = System.currentTimeMillis();
-                      throw new ServiceInterruption("SharePoint is down attempting to read '"+fileUrl+"', retrying: "+e.getMessage(),e,currentTime + 300000L,
-                        currentTime + 12 * 60 * 60000L,-1,true);
-                    }
-                    catch (InterruptedIOException e)
-                    {
-                      throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
-                    }
-                    catch (IllegalArgumentException e)
-                    {
-                      Logging.connectors.error("SharePoint: Illegal argument", e);
-                      activities.recordActivity(new Long(startFetchTime),ACTIVITY_FETCH,
-                        new Long(tempFile.length()),documentIdentifier,"Error",e.getMessage(),null);
-                      throw new ManifoldCFException("SharePoint: Illegal argument: "+e.getMessage(),e);
-                    }
-                    catch (org.apache.http.HttpException e)
-                    {
-                      Logging.connectors.warn("SharePoint: HttpException thrown",e);
-                      activities.recordActivity(new Long(startFetchTime),ACTIVITY_FETCH,
-                        new Long(tempFile.length()),documentIdentifier,"Error",e.getMessage(),null);
-                      long currentTime = System.currentTimeMillis();
-                      throw new ServiceInterruption("SharePoint is down attempting to read '"+fileUrl+"', retrying: "+e.getMessage(),e,currentTime + 300000L,
-                        currentTime + 12 * 60 * 60000L,-1,true);
-                    }
-                    catch (IOException e)
-                    {
-                      activities.recordActivity(new Long(startFetchTime),ACTIVITY_FETCH,
-                        new Long(tempFile.length()),documentIdentifier,"Error",e.getMessage(),null);
-                      Logging.connectors.warn("SharePoint: IOException thrown: "+e.getMessage(),e);
-                      long currentTime = System.currentTimeMillis();
-                      throw new ServiceInterruption("SharePoint is down attempting to read '"+fileUrl+"', retrying: "+e.getMessage(),e,currentTime + 300000L,
-                        currentTime + 12 * 60 * 60000L,-1,true);
-                    }
-                  }
-                  finally
-                  {
-                    os.close();
-                  }
-                  
-                  // Ingest the document
-                  long documentLength = tempFile.length();
-                  if (activities.checkLengthIndexable(documentLength))
-                  {
-                    InputStream is = new FileInputStream(tempFile);
-                    try
-                    {
-                      RepositoryDocument data = new RepositoryDocument();
-                      data.setBinary( is, documentLength );
-
-                      data.setFileName(mapToFileName(documentIdentifier));
-                      
-		      String contentType = mapExtensionToMimeType(documentIdentifier);
-		      if (contentType != null)
-                        data.setMimeType(contentType);
-                      
-                      setDataACLs(data,acls,denyAcl);
-
-                      setPathAttribute(data,sDesc,documentIdentifier);
-                      
-                      if (modifiedDate != null)
-                        data.setModifiedDate(modifiedDate);
-                      if (createdDate != null)
-                        data.setCreatedDate(createdDate);
-
-                      // Retrieve field values from SharePoint
-                      if (metadataDescription.size() > 0)
-                      {
-                        String documentLibID = docLibIDMap.get(decodedLibPath);
-                        if (documentLibID == null)
-                        {
-                          documentLibID = proxy.getDocLibID( encodePath(site), site, libName);
-                          if (documentLibID == null)
-                            documentLibID = "";
-                          docLibIDMap.put(decodedLibPath,documentLibID);
-                        }
-
-                        if (documentLibID.length() == 0)
-                        {
-                          if (Logging.connectors.isDebugEnabled())
-                            Logging.connectors.debug("SharePoint: Library '"+decodedLibPath+"' no longer exists - deleting document '"+documentIdentifier+"'");
-                          activities.deleteDocument(documentIdentifier,version);
-                          i++;
-                          continue;
-                        }
-
-                        int cutoff = decodedLibPath.lastIndexOf("/");
-                        Map values = proxy.getFieldValues( metadataDescription, encodePath(site), documentLibID, decodedDocumentPath.substring(cutoff+1), dspStsWorks );
-                        if (values != null)
-                        {
-                          Iterator iter = values.keySet().iterator();
-                          while (iter.hasNext())
-                          {
-                            String fieldName = (String)iter.next();
-                            String fieldData = (String)values.get(fieldName);
-                            data.addField(fieldName,fieldData);
-                          }
-                        }
-                        else
-                        {
-                          // Document has vanished
-                          if (Logging.connectors.isDebugEnabled())
-                            Logging.connectors.debug("SharePoint: Document metadata fetch failure indicated that document is gone: '"+documentIdentifier+"' - removing");
-                          activities.deleteDocument(documentIdentifier,version);
-                          i++;
-                          continue;
-                        }
-                      }
-
-                      activities.ingestDocument( documentIdentifier, version, fileUrl , data );
-                    }
-                    finally
-                    {
-                      try
-                      {
-                        is.close();
-                      }
-                      catch (java.net.SocketTimeoutException e)
-                      {
-                        // This is not fatal
-                        Logging.connectors.debug("SharePoint: Timeout before read could finish for '"+fileUrl+"': "+e.getMessage(),e);
-                      }
-                      catch (org.apache.http.conn.ConnectTimeoutException e)
-                      {
-                        // This is not fatal
-                        Logging.connectors.debug("SharePoint: Connect timeout before read could finish for '"+fileUrl+"': "+e.getMessage(),e);
-                      }
-                      catch (InterruptedIOException e)
-                      {
-                        throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
-                      }
-                      catch (IOException e)
-                      {
-                        // This is not fatal
-                        Logging.connectors.debug("SharePoint: Server closed connection before read could finish for '"+fileUrl+"': "+e.getMessage(),e);
-                      }
-                    }
-                  }
-                  else
-                    // Document too long
-                    activities.deleteDocument( documentIdentifier, version );
+                  if (Logging.connectors.isDebugEnabled())
+                    Logging.connectors.debug("SharePoint: Library '"+decodedLibPath+"' no longer exists - deleting document '"+documentIdentifier+"'");
+                  activities.deleteDocument(documentIdentifier,version);
+                  i++;
+                  continue;
                 }
-                finally
+
+                int cutoff = decodedLibPath.lastIndexOf("/");
+                metadataValues = proxy.getFieldValues( metadataDescription, encodePath(site), documentLibID, decodedDocumentPath.substring(cutoff+1), dspStsWorks );
+                if (metadataValues == null)
                 {
-                  tempFile.delete();
+                  // Document has vanished
+                  if (Logging.connectors.isDebugEnabled())
+                    Logging.connectors.debug("SharePoint: Document metadata fetch failure indicated that document is gone: '"+documentIdentifier+"' - removing");
+                  activities.deleteDocument(documentIdentifier,version);
+                  i++;
+                  continue;
                 }
               }
-              catch (java.net.SocketTimeoutException e)
+
+              // Fetch and index.  This also filters documents based on output connector restrictions.
+              if (!fetchAndIndexFile(activities, documentIdentifier, version, fileUrl, serverUrl + encodedServerLocation + encodedDocumentPath,
+                acls, denyAcls, createdDate, modifiedDate, metadataValues, guid, sDesc))
               {
-                throw new ManifoldCFException("Socket timeout error writing '"+fileUrl+"' to temporary file: "+e.getMessage(),e);
-              }
-              catch (org.apache.http.conn.ConnectTimeoutException e)
-              {
-                throw new ManifoldCFException("Connect timeout error writing '"+fileUrl+"' to temporary file: "+e.getMessage(),e);
-              }
-              catch (InterruptedIOException e)
-              {
-                throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
-              }
-              catch (IOException e)
-              {
-                throw new ManifoldCFException("IO error writing '"+fileUrl+"' to temporary file: "+e.getMessage(),e);
+                // Document not indexed for whatever reason
+                activities.deleteDocument(documentIdentifier,version);
+                i++;
+                continue;
               }
             }
           }
@@ -1723,13 +1782,13 @@
             Logging.connectors.debug( "SharePoint: Document identifier is a site: '" + decodedSitePath + "'" );
 
           // Look at subsites
-          ArrayList subsites = proxy.getSites( encodePath(decodedSitePath) );
+          List<NameValue> subsites = proxy.getSites( encodePath(decodedSitePath) );
           if (subsites != null)
           {
             int j = 0;
             while (j < subsites.size())
             {
-              NameValue subSiteName = (NameValue)subsites.get(j++);
+              NameValue subSiteName = subsites.get(j++);
               String newPath = decodedSitePath + "/" + subSiteName.getValue();
 
               String encodedNewPath = encodePath(newPath);
@@ -1744,13 +1803,13 @@
           }
 
           // Look at libraries
-          ArrayList libraries = proxy.getDocumentLibraries( encodePath(decodedSitePath), decodedSitePath );
+          List<NameValue> libraries = proxy.getDocumentLibraries( encodePath(decodedSitePath), decodedSitePath );
           if (libraries != null)
           {
             int j = 0;
             while (j < libraries.size())
             {
-              NameValue library = (NameValue)libraries.get(j++);
+              NameValue library = libraries.get(j++);
               String newPath = decodedSitePath + "/" + library.getValue();
 
               if (checkIncludeLibrary(newPath,spec))
@@ -1765,13 +1824,13 @@
           }
 
           // Look at lists
-          ArrayList lists = proxy.getLists( encodePath(decodedSitePath), decodedSitePath );
+          List<NameValue> lists = proxy.getLists( encodePath(decodedSitePath), decodedSitePath );
           if (lists != null)
           {
             int j = 0;
             while (j < lists.size())
             {
-              NameValue list = (NameValue)lists.get(j++);
+              NameValue list = lists.get(j++);
               String newPath = decodedSitePath + "/" + list.getValue();
 
               if (checkIncludeList(newPath,spec))
@@ -1794,6 +1853,231 @@
     }
   }
 
+  /** Fetch and index a file from a SharePoint URL, with appropriate error handling.
+  * Returns false if the document could not be indexed, so that the caller can delete it.
+  */
+  protected boolean fetchAndIndexFile(IProcessActivity activities, String documentIdentifier, String version,
+    String fileUrl, String fetchUrl, ArrayList acls, ArrayList denyAcls, Date createdDate, Date modifiedDate,
+    Map<String,String> metadataValues, String guid, SystemMetadataDescription sDesc)
+    throws ManifoldCFException, ServiceInterruption
+  {
+    // Before we fetch, confirm that the output connector will accept the document
+    if (activities.checkURLIndexable(fileUrl))
+    {
+      // Also check mime type
+      String contentType = mapExtensionToMimeType(documentIdentifier);
+      if (activities.checkMimeTypeIndexable(contentType))
+      {
+        // Set stuff up for fetch activity logging
+        long startFetchTime = System.currentTimeMillis();
+        try
+        {
+          // Read the document into a local temporary file, so I get a reliable length.
+          File tempFile = File.createTempFile("__shp__",".tmp");
+          try
+          {
+            // Open the output stream
+            OutputStream os = new FileOutputStream(tempFile);
+            try
+            {
+              // Catch all exceptions having to do with reading the document
+              try
+              {
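+                // Fetch the document to the temporary file on a worker thread; finishUp()
+                // waits for it to complete and returns the HTTP status code.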
+                ExecuteMethodThread emt = new ExecuteMethodThread(httpClient, fetchUrl, os);
+                emt.start();
+                int returnCode = emt.finishUp();
+                  
+                if (returnCode == 404 || returnCode == 401 || returnCode == 400)
+                {
+                  // SharePoint thought the document was there, but it really isn't, so tell the caller to delete it.
+                  if (Logging.connectors.isDebugEnabled())
+                    Logging.connectors.debug("SharePoint: Document at '"+fileUrl+"' failed to fetch with code "+Integer.toString(returnCode)+", deleting");
+                  activities.recordActivity(new Long(startFetchTime),ACTIVITY_FETCH,
+                    null,documentIdentifier,"Not found",Integer.toString(returnCode),null);
+                  return false;
+                }
+                else if (returnCode != 200)
+                {
+                  activities.recordActivity(new Long(startFetchTime),ACTIVITY_FETCH,
+                    null,documentIdentifier,"Error","Http status "+Integer.toString(returnCode),null);
+                  throw new ManifoldCFException("Error fetching document '"+fileUrl+"': "+Integer.toString(returnCode));
+                }
+
+                // Log the normal fetch activity
+                activities.recordActivity(new Long(startFetchTime),ACTIVITY_FETCH,
+                  new Long(tempFile.length()),documentIdentifier,"Success",null,null);
+                
+              }
+              catch (InterruptedException e)
+              {
+                throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+              }
+              catch (java.net.SocketTimeoutException e)
+              {
+                activities.recordActivity(new Long(startFetchTime),ACTIVITY_FETCH,
+                  new Long(tempFile.length()),documentIdentifier,"Error",e.getMessage(),null);
+                Logging.connectors.warn("SharePoint: SocketTimeoutException thrown: "+e.getMessage(),e);
+                long currentTime = System.currentTimeMillis();
+                throw new ServiceInterruption("SharePoint is down attempting to read '"+fileUrl+"', retrying: "+e.getMessage(),e,currentTime + 300000L,
+                  currentTime + 12 * 60 * 60000L,-1,true);
+              }
+              catch (org.apache.http.conn.ConnectTimeoutException e)
+              {
+                activities.recordActivity(new Long(startFetchTime),ACTIVITY_FETCH,
+                  new Long(tempFile.length()),documentIdentifier,"Error",e.getMessage(),null);
+                Logging.connectors.warn("SharePoint: ConnectTimeoutException thrown: "+e.getMessage(),e);
+                long currentTime = System.currentTimeMillis();
+                throw new ServiceInterruption("SharePoint is down attempting to read '"+fileUrl+"', retrying: "+e.getMessage(),e,currentTime + 300000L,
+                  currentTime + 12 * 60 * 60000L,-1,true);
+              }
+              catch (InterruptedIOException e)
+              {
+                throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+              }
+              catch (IllegalArgumentException e)
+              {
+                Logging.connectors.error("SharePoint: Illegal argument", e);
+                activities.recordActivity(new Long(startFetchTime),ACTIVITY_FETCH,
+                  new Long(tempFile.length()),documentIdentifier,"Error",e.getMessage(),null);
+                throw new ManifoldCFException("SharePoint: Illegal argument: "+e.getMessage(),e);
+              }
+              catch (org.apache.http.HttpException e)
+              {
+                Logging.connectors.warn("SharePoint: HttpException thrown",e);
+                activities.recordActivity(new Long(startFetchTime),ACTIVITY_FETCH,
+                  new Long(tempFile.length()),documentIdentifier,"Error",e.getMessage(),null);
+                long currentTime = System.currentTimeMillis();
+                throw new ServiceInterruption("SharePoint is down attempting to read '"+fileUrl+"', retrying: "+e.getMessage(),e,currentTime + 300000L,
+                  currentTime + 12 * 60 * 60000L,-1,true);
+              }
+              catch (IOException e)
+              {
+                activities.recordActivity(new Long(startFetchTime),ACTIVITY_FETCH,
+                  new Long(tempFile.length()),documentIdentifier,"Error",e.getMessage(),null);
+                Logging.connectors.warn("SharePoint: IOException thrown: "+e.getMessage(),e);
+                long currentTime = System.currentTimeMillis();
+                throw new ServiceInterruption("SharePoint is down attempting to read '"+fileUrl+"', retrying: "+e.getMessage(),e,currentTime + 300000L,
+                  currentTime + 12 * 60 * 60000L,-1,true);
+              }
+            }
+            finally
+            {
+              os.close();
+            }
+                      
+            // Ingest the document
+            long documentLength = tempFile.length();
+            if (activities.checkLengthIndexable(documentLength))
+            {
+              InputStream is = new FileInputStream(tempFile);
+              try
+              {
+                RepositoryDocument data = new RepositoryDocument();
+                data.setBinary( is, documentLength );
+                
+                data.setFileName(mapToFileName(documentIdentifier));
+                          
+                if (contentType != null)
+                  data.setMimeType(contentType);
+                
+                setDataACLs(data,acls,denyAcls);
+
+                setPathAttribute(data,sDesc,documentIdentifier);
+                          
+                if (modifiedDate != null)
+                  data.setModifiedDate(modifiedDate);
+                if (createdDate != null)
+                  data.setCreatedDate(createdDate);
+
+                if (metadataValues != null)
+                {
+                  Iterator<String> iter = metadataValues.keySet().iterator();
+                  while (iter.hasNext())
+                  {
+                    String fieldName = iter.next();
+                    String fieldData = metadataValues.get(fieldName);
+                    data.addField(fieldName,fieldData);
+                  }
+                }
+                data.addField("GUID",guid);
+                
+                activities.ingestDocument( documentIdentifier, version, fileUrl , data );
+                return true;
+              }
+              finally
+              {
+                try
+                {
+                  is.close();
+                }
+                catch (java.net.SocketTimeoutException e)
+                {
+                  // This is not fatal
+                  Logging.connectors.debug("SharePoint: Timeout before read could finish for '"+fileUrl+"': "+e.getMessage(),e);
+                }
+                catch (org.apache.http.conn.ConnectTimeoutException e)
+                {
+                  // This is not fatal
+                  Logging.connectors.debug("SharePoint: Connect timeout before read could finish for '"+fileUrl+"': "+e.getMessage(),e);
+                }
+                catch (InterruptedIOException e)
+                {
+                  throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+                }
+                catch (IOException e)
+                {
+                  // This is not fatal
+                  Logging.connectors.debug("SharePoint: Server closed connection before read could finish for '"+fileUrl+"': "+e.getMessage(),e);
+                }
+              }
+            }
+            else
+            {
+              // Document too long
+              if (Logging.connectors.isDebugEnabled())
+                Logging.connectors.debug("SharePoint: Document '"+documentIdentifier+"' was too long, according to output connector");
+              return false;
+            }
+          }
+          finally
+          {
+            tempFile.delete();
+          }
+        }
+        catch (java.net.SocketTimeoutException e)
+        {
+          throw new ManifoldCFException("Socket timeout error writing '"+fileUrl+"' to temporary file: "+e.getMessage(),e);
+        }
+        catch (org.apache.http.conn.ConnectTimeoutException e)
+        {
+          throw new ManifoldCFException("Connect timeout error writing '"+fileUrl+"' to temporary file: "+e.getMessage(),e);
+        }
+        catch (InterruptedIOException e)
+        {
+          throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+        }
+        catch (IOException e)
+        {
+          throw new ManifoldCFException("IO error writing '"+fileUrl+"' to temporary file: "+e.getMessage(),e);
+        }
+      }
+      else
+      {
+        // Mime type failed
+        if (Logging.connectors.isDebugEnabled())
+          Logging.connectors.debug("SharePoint: Skipping document '"+documentIdentifier+"' because output connector says mime type '"+((contentType==null)?"null":contentType)+"' is not indexable");
+        return false;
+      }
+    }
+    else
+    {
+      // URL failed
+      if (Logging.connectors.isDebugEnabled())
+        Logging.connectors.debug("SharePoint: Skipping document '"+documentIdentifier+"' because output connector says URL '"+fileUrl+"' is not indexable");
+      return false;
+    }
+  }
+
   /** Map an extension to a mime type */
   protected static String mapExtensionToMimeType(String fileName)
   {
@@ -1815,25 +2099,22 @@
     return fileName;
   }
   
-  protected static void setDataACLs(RepositoryDocument data, ArrayList acls, String denyAcl)
+  protected static void setDataACLs(RepositoryDocument data, ArrayList acls, ArrayList denyAcls)
   {
     if (acls != null)
     {
       String[] actualAcls = new String[acls.size()];
-      int j = 0;
-      while (j < actualAcls.length)
+      for (int j = 0; j < actualAcls.length; j++)
       {
         actualAcls[j] = (String)acls.get(j);
-        j++;
       }
 
       if (Logging.connectors.isDebugEnabled())
       {
-        j = 0;
         StringBuilder sb = new StringBuilder("SharePoint: Acls: [ ");
-        while (j < actualAcls.length)
+        for (int j = 0; j < actualAcls.length; j++)
         {
-          sb.append(actualAcls[j++]).append(" ");
+          sb.append(actualAcls[j]).append(" ");
         }
         sb.append("]");
         Logging.connectors.debug( sb.toString() );
@@ -1842,9 +2123,25 @@
       data.setACL( actualAcls );
     }
 
-    if (denyAcl != null)
+    if (denyAcls != null)
     {
-      String[] actualDenyAcls = new String[]{denyAcl};
+      String[] actualDenyAcls = new String[denyAcls.size()];
+      for (int j = 0; j < actualDenyAcls.length; j++)
+      {
+        actualDenyAcls[j] = (String)denyAcls.get(j);
+      }
+
+      if (Logging.connectors.isDebugEnabled())
+      {
+        StringBuilder sb = new StringBuilder("SharePoint: DenyAcls: [ ");
+        for (int j = 0; j < actualDenyAcls.length; j++)
+        {
+          sb.append(actualDenyAcls[j]).append(" ");
+        }
+        sb.append("]");
+        Logging.connectors.debug( sb.toString() );
+      }
+
       data.setDenyACL(actualDenyAcls);
     }
   }
@@ -1867,60 +2164,164 @@
       Logging.connectors.debug("SharePoint: Path attribute name is null");
   }
 
+  protected final static String[] fileStreamDataNames = new String[]{"accessTokens", "denyTokens", "guids", "fields"};
+
   protected class FileStream implements IFileStream
   {
-    protected IProcessActivity activities;
-    protected int foldersFilePathIndex;
-    protected DocumentSpecification spec;
+    protected final IProcessActivity activities;
+    protected final DocumentSpecification spec;
+    protected final String rootPath;
+    protected final String sitePath;
+    protected final String siteLibPath;
     
-    public FileStream(IProcessActivity activities, int foldersFilePathIndex, DocumentSpecification spec)
+    // For carry-down
+    protected final String documentIdentifier;
+    protected final String[][] dataValues;
+    
+    public FileStream(IProcessActivity activities, String rootPath, String sitePath, String siteLibPath, DocumentSpecification spec,
+      String documentIdentifier, String[] accessTokens, String[] denyTokens, String libID, String[] fields)
     {
       this.activities = activities;
-      this.foldersFilePathIndex = foldersFilePathIndex;
       this.spec = spec;
+      this.rootPath = rootPath;
+      this.sitePath = sitePath;
+      this.siteLibPath = siteLibPath;
+      this.documentIdentifier = documentIdentifier;
+      this.dataValues = new String[fileStreamDataNames.length][];
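+      // Carry-down values are positional; the order must match fileStreamDataNames above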
+      this.dataValues[0] = accessTokens;
+      this.dataValues[1] = denyTokens;
+      this.dataValues[2] = new String[]{libID};
+      this.dataValues[3] = fields;
     }
     
-    public void addFile(String relPath)
+    @Override
+    public void addFile(String relPath, String displayURL)
       throws ManifoldCFException
     {
-      if ( checkIncludeFile( relPath, spec ) )
-      {
-        // Since the processing for a file needs to know the library path, we need a way to signal the cutoff between library and folder levels.
-        // The way I've chosen to do this is to use a double slash at that point, as a separator.
-        String modifiedPath = relPath.substring(0,foldersFilePathIndex) + "/" + relPath.substring(foldersFilePathIndex);
 
-        activities.addDocumentReference( modifiedPath );
+      // First, convert the relative path to a full path
+      if ( !relPath.startsWith("/") )
+      {
+        relPath = rootPath + sitePath + "/" + relPath;
+      }
+      
+      // Now, strip away what we don't want - namely, the root path.  This makes the path relative to the root.
+      if ( relPath.startsWith(rootPath) )
+      {
+        relPath = relPath.substring(rootPath.length());
+      
+        if ( checkIncludeFile( relPath, spec ) )
+        {
+          // Since the processing for a file needs to know the library path, we need a way to signal the cutoff between library and folder levels.
+          // The way I've chosen to do this is to use a double slash at that point, as a separator.
+          if (relPath.startsWith(siteLibPath))
+          {
+            // Split at the libpath/file boundary
+            String modifiedPath = siteLibPath + "/" + relPath.substring(siteLibPath.length());
+            activities.addDocumentReference( modifiedPath, documentIdentifier, null, fileStreamDataNames, dataValues );
+          }
+          else
+          {
+            Logging.connectors.warn("SharePoint: Unexpected relPath structure; path is '"+relPath+"', but expected to see something beginning with '"+siteLibPath+"'");
+          }
+        }
+      }
+      else
+      {
+        Logging.connectors.warn("SharePoint: Unexpected relPath structure; path is '"+relPath+"', but expected to see something beginning with '"+rootPath+"'");
       }
     }
   }
   
+  protected final static String[] listItemStreamDataNames = new String[]{"accessTokens", "denyTokens", "guids", "fields", "displayURLs"};
+
   protected class ListItemStream implements IFileStream
   {
-    protected IProcessActivity activities;
-    protected int foldersFilePathIndex;
-    protected DocumentSpecification spec;
-    
-    public ListItemStream(IProcessActivity activities, int foldersFilePathIndex, DocumentSpecification spec)
+    protected final IProcessActivity activities;
+    protected final DocumentSpecification spec;
+    protected final String rootPath;
+    protected final String sitePath;
+    protected final String siteListPath;
+
+    // For carry-down
+    protected final String documentIdentifier;
+    protected final String[][] dataValues;
+
+    public ListItemStream(IProcessActivity activities, String rootPath, String sitePath, String siteListPath, DocumentSpecification spec,
+      String documentIdentifier, String[] accessTokens, String[] denyTokens, String listID, String[] fields)
     {
       this.activities = activities;
-      this.foldersFilePathIndex = foldersFilePathIndex;
       this.spec = spec;
+      this.rootPath = rootPath;
+      this.sitePath = sitePath;
+      this.siteListPath = siteListPath;
+      this.documentIdentifier = documentIdentifier;
+      this.dataValues = new String[listItemStreamDataNames.length][];
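+      // Carry-down values are positional; the order must match listItemStreamDataNames above.
+      // Index 4 (displayURLs) is filled in per item by addFile().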
+      this.dataValues[0] = accessTokens;
+      this.dataValues[1] = denyTokens;
+      this.dataValues[2] = new String[]{listID};
+      this.dataValues[3] = fields;
     }
     
-    public void addFile(String relPath)
+    @Override
+    public void addFile(String relPath, String displayURL)
       throws ManifoldCFException
     {
-      // First, strip "Lists" from relPath
-      if (!relPath.startsWith("/Lists/"))
-        throw new ManifoldCFException("Expected path to start with /Lists/");
-      relPath = relPath.substring("/Lists".length());
-      if ( checkIncludeListItem( relPath, spec ) )
+      // First, convert the relative path to a full path
+      if ( !relPath.startsWith("/") )
       {
-        // Since the processing for a item needs to know the list path, we need a way to signal the cutoff between list and item levels.
-        // The way I've chosen to do this is to use a triple slash at that point, as a separator.
-        String modifiedPath = relPath.substring(0,foldersFilePathIndex) + "//" + relPath.substring(foldersFilePathIndex);
+        relPath = rootPath + sitePath + "/" + relPath;
+      }
 
-        activities.addDocumentReference( modifiedPath );
+      String fullPath = relPath;
+
+      // Now, strip away what we don't want - namely, the root path.  This makes the path relative to the root.
+      if ( relPath.startsWith(rootPath) )
+      {
+        relPath = relPath.substring(rootPath.length());
+
+        if (relPath.startsWith(sitePath))
+        {
+          relPath = relPath.substring(sitePath.length());
+          
+          // Now, strip "Lists" from relPath.  If it doesn't start with /Lists/, ignore it.
+          if (relPath.startsWith("/Lists/"))
+          {
+            relPath = sitePath + relPath.substring("/Lists".length());
+            if ( checkIncludeListItem( relPath, spec ) )
+            {
+              if (relPath.startsWith(siteListPath))
+              {
+                // Since the processing for an item needs to know the list path, we need a way to signal the cutoff between list and item levels.
+                // The way I've chosen to do this is to use a triple slash at that point, as a separator.
+                String modifiedPath = relPath.substring(0,siteListPath.length()) + "//" + relPath.substring(siteListPath.length());
+                
+                if (displayURL != null)
+                  dataValues[4] = new String[]{displayURL};
+                else
+                  dataValues[4] = new String[]{fullPath};
+
+                activities.addDocumentReference( modifiedPath, documentIdentifier, null, listItemStreamDataNames, dataValues );
+              }
+              else
+              {
+                Logging.connectors.warn("SharePoint: Unexpected relPath structure; list item path is '"+relPath+"', but expected to see something beginning with '"+siteListPath+"'");
+              }
+            }
+          }
+          else
+          {
+            Logging.connectors.warn("SharePoint: Unexpected relPath structure; rel path is '"+relPath+"', but expected to see something beginning with '/Lists/'");
+          }
+        }
+        else
+        {
+          Logging.connectors.warn("SharePoint: Unexpected relPath structure; path is '"+relPath+"', but expected to see something beginning with '"+sitePath+"'");
+        }
+      }
+      else
+      {
+        Logging.connectors.warn("SharePoint: Unexpected relPath structure; path is '"+relPath+"', but expected to see something beginning with '"+rootPath+"'");
       }
     }
 
@@ -1949,117 +2350,8 @@
     throws ManifoldCFException, IOException
   {
     tabsArray.add(Messages.getString(locale,"SharePointRepository.Server"));
-    out.print(
-"<script type=\"text/javascript\">\n"+
-"<!--\n"+
-"function ShpDeleteCertificate(aliasName)\n"+
-"{\n"+
-"  editconnection.shpkeystorealias.value = aliasName;\n"+
-"  editconnection.configop.value = \"Delete\";\n"+
-"  postForm();\n"+
-"}\n"+
-"\n"+
-"function ShpAddCertificate()\n"+
-"{\n"+
-"  if (editconnection.shpcertificate.value == \"\")\n"+
-"  {\n"+
-"    alert(\""+Messages.getBodyJavascriptString(locale,"SharePointRepository.ChooseACertificateFile")+"\");\n"+
-"    editconnection.shpcertificate.focus();\n"+
-"  }\n"+
-"  else\n"+
-"  {\n"+
-"    editconnection.configop.value = \"Add\";\n"+
-"    postForm();\n"+
-"  }\n"+
-"}\n"+
-"\n"+
-"function checkConfig()\n"+
-"{\n"+
-"  if (editconnection.serverPort.value != \"\" && !isInteger(editconnection.serverPort.value))\n"+
-"  {\n"+
-"    alert(\""+Messages.getBodyJavascriptString(locale,"SharePointRepository.PleaseSupplyAValidNumber")+"\");\n"+
-"    editconnection.serverPort.focus();\n"+
-"    return false;\n"+
-"  }\n"+
-"  if (editconnection.serverName.value.indexOf(\"/\") >= 0)\n"+
-"  {\n"+
-"    alert(\""+Messages.getBodyJavascriptString(locale,"SharePointRepository.PleaseSpecifyAnyServerPathInformation")+"\");\n"+
-"    editconnection.serverName.focus();\n"+
-"    return false;\n"+
-"  }\n"+
-"  var svrloc = editconnection.serverLocation.value;\n"+
-"  if (svrloc != \"\" && svrloc.charAt(0) != \"/\")\n"+
-"  {\n"+
-"    alert(\""+Messages.getBodyJavascriptString(locale,"SharePointRepository.SitePathMustBeginWithWCharacter")+"\");\n"+
-"    editconnection.serverLocation.focus();\n"+
-"    return false;\n"+
-"  }\n"+
-"  if (svrloc != \"\" && svrloc.charAt(svrloc.length - 1) == \"/\")\n"+
-"  {\n"+
-"    alert(\""+Messages.getBodyJavascriptString(locale,"SharePointRepository.SitePathCannotEndWithACharacter")+"\");\n"+
-"    editconnection.serverLocation.focus();\n"+
-"    return false;\n"+
-"  }\n"+
-"  if (editconnection.userName.value != \"\" && editconnection.userName.value.indexOf(\"\\\\\") <= 0)\n"+
-"  {\n"+
-"    alert(\""+Messages.getBodyJavascriptString(locale,"SharePointRepository.AValidSharePointUserNameHasTheForm")+"\");\n"+
-"    editconnection.userName.focus();\n"+
-"    return false;\n"+
-"  }\n"+
-"  return true;\n"+
-"}\n"+
-"\n"+
-"function checkConfigForSave() \n"+
-"{\n"+
-"  if (editconnection.serverName.value == \"\")\n"+
-"  {\n"+
-"    alert(\""+Messages.getBodyJavascriptString(locale,"SharePointRepository.PleaseFillInASharePointServerName")+"\");\n"+
-"    SelectTab(\"" + Messages.getBodyJavascriptString(locale,"SharePointRepository.Server") + "\");\n"+
-"    editconnection.serverName.focus();\n"+
-"    return false;\n"+
-"  }\n"+
-"  if (editconnection.serverName.value.indexOf(\"/\") >= 0)\n"+
-"  {\n"+
-"    alert(\""+Messages.getBodyJavascriptString(locale,"SharePointRepository.PleaseSpecifyAnyServerPathInformationInTheSitePathField")+"\");\n"+
-"    SelectTab(\"" + Messages.getBodyJavascriptString(locale,"SharePointRepository.Server") + "\");\n"+
-"    editconnection.serverName.focus();\n"+
-"    return false;\n"+
-"  }\n"+
-"  var svrloc = editconnection.serverLocation.value;\n"+
-"  if (svrloc != \"\" && svrloc.charAt(0) != \"/\")\n"+
-"  {\n"+
-"    alert(\""+Messages.getBodyJavascriptString(locale,"SharePointRepository.SitePathMustBeginWithWCharacter")+"\");\n"+
-"    SelectTab(\"" + Messages.getBodyJavascriptString(locale,"SharePointRepository.Server") + "\");\n"+
-"    editconnection.serverLocation.focus();\n"+
-"    return false;\n"+
-"  }\n"+
-"  if (svrloc != \"\" && svrloc.charAt(svrloc.length - 1) == \"/\")\n"+
-"  {\n"+
-"    alert(\""+Messages.getBodyJavascriptString(locale,"SharePointRepository.SitePathCannotEndWithACharacter")+"\");\n"+
-"    SelectTab(\"" + Messages.getBodyJavascriptString(locale,"SharePointRepository.Server") + "\");\n"+
-"    editconnection.serverLocation.focus();\n"+
-"    return false;\n"+
-"  }\n"+
-"  if (editconnection.serverPort.value != \"\" && !isInteger(editconnection.serverPort.value))\n"+
-"  {\n"+
-"    alert(\""+Messages.getBodyJavascriptString(locale,"SharePointRepository.PleaseSupplyASharePointPortNumber")+"\");\n"+
-"    SelectTab(\"" + Messages.getBodyJavascriptString(locale,"SharePointRepository.Server") + "\");\n"+
-"    editconnection.serverPort.focus();\n"+
-"    return false;\n"+
-"  }\n"+
-"  if (editconnection.userName.value != \"\" && editconnection.userName.value.indexOf(\"\\\\\") <= 0)\n"+
-"  {\n"+
-"    alert(\""+Messages.getBodyJavascriptString(locale,"SharePointRepository.TheConnectionRequiresAValidSharePointUserName")+"\");\n"+
-"    SelectTab(\"" + Messages.getBodyJavascriptString(locale,"SharePointRepository.Server") + "\");\n"+
-"    editconnection.userName.focus();\n"+
-"    return false;\n"+
-"  }\n"+
-"  return true;\n"+
-"}\n"+
-"\n"+
-"//-->\n"+
-"</script>\n"
-    );
+    tabsArray.add(Messages.getString(locale,"SharePointRepository.AuthorityType"));
+    Messages.outputResourceWithVelocity(out,locale,"editConfiguration.js",null);
   }
   
   /** Output the configuration body section.
@@ -2076,152 +2368,15 @@
     Locale locale, ConfigParams parameters, String tabName)
     throws ManifoldCFException, IOException
   {
-    String serverVersion = parameters.getParameter("serverVersion");
-    if (serverVersion == null)
-      serverVersion = "2.0";
-
-    String serverProtocol = parameters.getParameter("serverProtocol");
-    if (serverProtocol == null)
-      serverProtocol = "http";
-
-    String serverName = parameters.getParameter("serverName");
-    if (serverName == null)
-      serverName = "localhost";
-
-    String serverPort = parameters.getParameter("serverPort");
-    if (serverPort == null)
-      serverPort = "";
-
-    String serverLocation = parameters.getParameter("serverLocation");
-    if (serverLocation == null)
-      serverLocation = "";
-      
-    String userName = parameters.getParameter("userName");
-    if (userName == null)
-      userName = "";
-
-    String password = parameters.getObfuscatedParameter("password");
-    if (password == null)
-      password = "";
-
-    String keystore = parameters.getParameter("keystore");
-    IKeystoreManager localKeystore;
-    if (keystore == null)
-      localKeystore = KeystoreManagerFactory.make("");
-    else
-      localKeystore = KeystoreManagerFactory.make("",keystore);
-
-    // "Server" tab
-    // Always send along the keystore.
-    if (keystore != null)
-    {
-      out.print(
-"<input type=\"hidden\" name=\"keystoredata\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(keystore)+"\"/>\n"
-      );
-    }
-
-    if (tabName.equals(Messages.getString(locale,"SharePointRepository.Server")))
-    {
-      out.print(
-"<table class=\"displaytable\">\n"+
-"  <tr>\n"+
-"    <td class=\"description\"><nobr>" + Messages.getBodyString(locale,"SharePointRepository.ServerSharePointVersion") + "</nobr></td>\n"+
-"    <td class=\"value\">\n"+
-"      <select name=\"serverVersion\">\n"+
-"        <option value=\"2.0\" "+((serverVersion.equals("2.0"))?"selected=\"true\"":"")+">SharePoint Services 2.0 (2003)</option>\n"+
-"        <option value=\"3.0\" "+(serverVersion.equals("3.0")?"selected=\"true\"":"")+">SharePoint Services 3.0 (2007)</option>\n"+
-"        <option value=\"4.0\" "+(serverVersion.equals("4.0")?"selected=\"true\"":"")+">SharePoint Services 4.0 (2010)</option>\n"+
-"      </select>\n"+
-"    </td>\n"+
-"  </tr>\n"+
-"  <tr>\n"+
-"    <td class=\"description\"><nobr>" + Messages.getBodyString(locale,"SharePointRepository.ServerProtocol") + "</nobr></td>\n"+
-"    <td class=\"value\">\n"+
-"      <select name=\"serverProtocol\">\n"+
-"        <option value=\"http\" "+((serverProtocol.equals("http"))?"selected=\"true\"":"")+">http</option>\n"+
-"        <option value=\"https\" "+(serverProtocol.equals("https")?"selected=\"true\"":"")+">https</option>\n"+
-"      </select>\n"+
-"    </td>\n"+
-"  </tr>\n"+
-"  <tr>\n"+
-"    <td class=\"description\"><nobr>" + Messages.getBodyString(locale,"SharePointRepository.ServerName") + "</nobr></td>\n"+
-"    <td class=\"value\"><input type=\"text\" size=\"64\" name=\"serverName\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(serverName)+"\"/></td>\n"+
-"  </tr>\n"+
-"  <tr>\n"+
-"    <td class=\"description\"><nobr>" + Messages.getBodyString(locale,"SharePointRepository.ServerPort") + "</nobr></td>\n"+
-"    <td class=\"value\"><input type=\"text\" size=\"5\" name=\"serverPort\" value=\""+serverPort+"\"/></td>\n"+
-"  </tr>\n"+
-"  <tr>\n"+
-"    <td class=\"description\"><nobr>" + Messages.getBodyString(locale,"SharePointRepository.SitePath") + "</nobr></td>\n"+
-"    <td class=\"value\"><input type=\"text\" size=\"64\" name=\"serverLocation\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(serverLocation)+"\"/></td>\n"+
-"  </tr>\n"+
-"  <tr>\n"+
-"    <td class=\"description\"><nobr>" + Messages.getBodyString(locale,"SharePointRepository.UserName") + "</nobr></td>\n"+
-"    <td class=\"value\"><input type=\"text\" size=\"32\" name=\"userName\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(userName)+"\"/></td>\n"+
-"  </tr>\n"+
-"  <tr>\n"+
-"    <td class=\"description\"><nobr>" + Messages.getBodyString(locale,"SharePointRepository.Password") + "</nobr></td>\n"+
-"    <td class=\"value\"><input type=\"password\" size=\"32\" name=\"password\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(password)+"\"/></td>\n"+
-"  </tr>\n"+
-"  <tr>\n"+
-"    <td class=\"description\"><nobr>" + Messages.getBodyString(locale,"SharePointRepository.SSLCertificateList") + "</nobr></td>\n"+
-"    <td class=\"value\">\n"+
-"      <input type=\"hidden\" name=\"configop\" value=\"Continue\"/>\n"+
-"      <input type=\"hidden\" name=\"shpkeystorealias\" value=\"\"/>\n"+
-"      <table class=\"displaytable\">\n"
-      );
-      // List the individual certificates in the store, with a delete button for each
-      String[] contents = localKeystore.getContents();
-      if (contents.length == 0)
-      {
-        out.print(
-"        <tr><td class=\"message\" colspan=\"2\">" + Messages.getBodyString(locale,"SharePointRepository.NoCertificatesPresent") + "</td></tr>\n"
-        );
-      }
-      else
-      {
-        int i = 0;
-        while (i < contents.length)
-        {
-          String alias = contents[i];
-          String description = localKeystore.getDescription(alias);
-          if (description.length() > 128)
-            description = description.substring(0,125) + "...";
-          out.print(
-"        <tr>\n"+
-"          <td class=\"value\">\n"+
-"            <input type=\"button\" onclick='Javascript:ShpDeleteCertificate(\""+org.apache.manifoldcf.ui.util.Encoder.attributeJavascriptEscape(alias)+"\")' alt=\""+Messages.getAttributeString(locale,"SharePointRepository.DeleteCert")+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(alias)+"\" value=\"" + Messages.getAttributeString(locale,"SharePointRepository.Delete") + "\"/>\n"+
-"          </td>\n"+
-"          <td>"+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(description)+"</td>\n"+
-"        </tr>\n"
-          );
-
-          i++;
-        }
-      }
-      out.print(
-"      </table>\n"+
-"      <input type=\"button\" onclick='Javascript:ShpAddCertificate()' alt=\"" + Messages.getAttributeString(locale,"SharePointRepository.AddCert") + "\" value=\"" + Messages.getAttributeString(locale,"SharePointRepository.Add") + "\"/>&nbsp;\n"+
-"      "+Messages.getBodyString(locale,"SharePointRepository.Certificate")+"&nbsp;<input name=\"shpcertificate\" size=\"50\" type=\"file\"/>\n"+
-"    </td>\n"+
-"  </tr>\n"+
-"</table>\n"
-      );
-    }
-    else
-    {
-      // Server tab hiddens
-      out.print(
-"<input type=\"hidden\" name=\"serverProtocol\" value=\""+serverProtocol+"\"/>\n"+
-"<input type=\"hidden\" name=\"serverName\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(serverName)+"\"/>\n"+
-"<input type=\"hidden\" name=\"serverPort\" value=\""+serverPort+"\"/>\n"+
-"<input type=\"hidden\" name=\"serverLocation\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(serverLocation)+"\"/>\n"+
-"<input type=\"hidden\" name=\"userName\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(userName)+"\"/>\n"+
-"<input type=\"hidden\" name=\"password\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(password)+"\"/>\n"
-      );
-    }
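+    // Render the Server and Authority Type tabs from Velocity templates rather than inline HTML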
+    Map<String,Object> velocityContext = new HashMap<String,Object>();
+    velocityContext.put("TabName",tabName);
+    fillInServerTab(velocityContext,out,parameters);
+    fillInAuthorityTypeTab(velocityContext,out,parameters);
+    Messages.outputResourceWithVelocity(out,locale,"editConfiguration_Server.html",velocityContext);
+    Messages.outputResourceWithVelocity(out,locale,"editConfiguration_AuthorityType.html",velocityContext);
   }
   
+  
   /** Process a configuration post.
   * This method is called at the start of the connector's configuration page, whenever there is a possibility that form data for a connection has been
   * posted.  Its purpose is to gather form information and modify the configuration parameters accordingly.
@@ -2238,36 +2393,36 @@
   {
     String serverVersion = variableContext.getParameter("serverVersion");
     if (serverVersion != null)
-      parameters.setParameter("serverVersion",serverVersion);
+      parameters.setParameter(SharePointConfig.PARAM_SERVERVERSION,serverVersion);
 
     String serverProtocol = variableContext.getParameter("serverProtocol");
     if (serverProtocol != null)
-      parameters.setParameter("serverProtocol",serverProtocol);
+      parameters.setParameter(SharePointConfig.PARAM_SERVERPROTOCOL,serverProtocol);
 
     String serverName = variableContext.getParameter("serverName");
 
     if (serverName != null)
-      parameters.setParameter("serverName",serverName);
+      parameters.setParameter(SharePointConfig.PARAM_SERVERNAME,serverName);
 
     String serverPort = variableContext.getParameter("serverPort");
     if (serverPort != null)
-      parameters.setParameter("serverPort",serverPort);
+      parameters.setParameter(SharePointConfig.PARAM_SERVERPORT,serverPort);
 
     String serverLocation = variableContext.getParameter("serverLocation");
     if (serverLocation != null)
-      parameters.setParameter("serverLocation",serverLocation);
+      parameters.setParameter(SharePointConfig.PARAM_SERVERLOCATION,serverLocation);
 
     String userName = variableContext.getParameter("userName");
     if (userName != null)
-      parameters.setParameter("userName",userName);
+      parameters.setParameter(SharePointConfig.PARAM_SERVERUSERNAME,userName);
 
     String password = variableContext.getParameter("password");
     if (password != null)
-      parameters.setObfuscatedParameter("password",password);
+      parameters.setObfuscatedParameter(SharePointConfig.PARAM_SERVERPASSWORD,variableContext.mapKeyToPassword(password));
 
     String keystoreValue = variableContext.getParameter("keystoredata");
     if (keystoreValue != null)
-      parameters.setParameter("keystore",keystoreValue);
+      parameters.setParameter(SharePointConfig.PARAM_SERVERKEYSTORE,keystoreValue);
 
     String configOp = variableContext.getParameter("configop");
     if (configOp != null)
@@ -2275,20 +2430,20 @@
       if (configOp.equals("Delete"))
       {
         String alias = variableContext.getParameter("shpkeystorealias");
-        keystoreValue = parameters.getParameter("keystore");
+        keystoreValue = parameters.getParameter(SharePointConfig.PARAM_SERVERKEYSTORE);
         IKeystoreManager mgr;
         if (keystoreValue != null)
           mgr = KeystoreManagerFactory.make("",keystoreValue);
         else
           mgr = KeystoreManagerFactory.make("");
         mgr.remove(alias);
-        parameters.setParameter("keystore",mgr.getString());
+        parameters.setParameter(SharePointConfig.PARAM_SERVERKEYSTORE,mgr.getString());
       }
       else if (configOp.equals("Add"))
       {
         String alias = IDFactory.make(threadContext);
         byte[] certificateValue = variableContext.getBinaryBytes("shpcertificate");
-        keystoreValue = parameters.getParameter("keystore");
+        keystoreValue = parameters.getParameter(SharePointConfig.PARAM_SERVERKEYSTORE);
         IKeystoreManager mgr;
         if (keystoreValue != null)
           mgr = KeystoreManagerFactory.make("",keystoreValue);
@@ -2321,9 +2476,14 @@
           // Redirect to error page
           return "Illegal certificate: "+certError;
         }
-        parameters.setParameter("keystore",mgr.getString());
+        parameters.setParameter(SharePointConfig.PARAM_SERVERKEYSTORE,mgr.getString());
       }
     }
+    
+    String authorityType = variableContext.getParameter("authorityType");
+    if (authorityType != null)
+      parameters.setParameter(SharePointConfig.PARAM_AUTHORITYTYPE,authorityType);
+
     return null;
   }
   
@@ -2339,44 +2499,90 @@
     Locale locale, ConfigParams parameters)
     throws ManifoldCFException, IOException
   {
-    out.print(
-"<table class=\"displaytable\">\n"+
-"  <tr>\n"+
-"    <td class=\"description\" colspan=\"1\"><nobr>" + Messages.getBodyString(locale,"SharePointRepository.Parameters") + "</nobr></td>\n"+
-"    <td class=\"value\" colspan=\"3\">\n"
-    );
-    Iterator iter = parameters.listParameters();
-    while (iter.hasNext())
-    {
-      String param = (String)iter.next();
-      String value = parameters.getParameter(param);
-      if (param.length() >= "password".length() && param.substring(param.length()-"password".length()).equalsIgnoreCase("password"))
-      {
-        out.print(
-"      <nobr>"+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(param)+"=********</nobr><br/>\n"
-        );
-      }
-      else if (param.length() >="keystore".length() && param.substring(param.length()-"keystore".length()).equalsIgnoreCase("keystore"))
-      {
-        IKeystoreManager kmanager = KeystoreManagerFactory.make("",value);
-        out.print(
-"      <nobr>"+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(param)+"=&lt;"+Integer.toString(kmanager.getContents().length)+" certificate(s)&gt;</nobr><br/>\n"
-        );
-      }
-      else
-      {
-        out.print(
-"      <nobr>"+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(param)+"="+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(value)+"</nobr><br/>\n"
-        );
-      }
-    }
-    out.print(
-"    </td>\n"+
-"  </tr>\n"+
-"</table>\n"
-    );
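+    // Render the view page from a Velocity template, reusing the same tab fill-in methods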
+    Map<String,Object> velocityContext = new HashMap<String,Object>();
+    fillInServerTab(velocityContext,out,parameters);
+    fillInAuthorityTypeTab(velocityContext,out,parameters);
+    Messages.outputResourceWithVelocity(out,locale,"viewConfiguration.html",velocityContext);
+  }
+
+  protected static void fillInAuthorityTypeTab(Map<String,Object> velocityContext, IHTTPOutput out, ConfigParams parameters)
+    throws ManifoldCFException
+  {
+    // Default to Active Directory, for backwards compatibility
+    String authorityType = parameters.getParameter(SharePointConfig.PARAM_AUTHORITYTYPE);
+    if (authorityType == null)
+      authorityType = "ActiveDirectory";
+    velocityContext.put("AUTHORITYTYPE", authorityType);
   }
   
+  protected static void fillInServerTab(Map<String,Object> velocityContext, IHTTPOutput out, ConfigParams parameters)
+    throws ManifoldCFException
+  {
+    String serverVersion = parameters.getParameter(SharePointConfig.PARAM_SERVERVERSION);
+    if (serverVersion == null)
+      serverVersion = "2.0";
+
+    String serverProtocol = parameters.getParameter(SharePointConfig.PARAM_SERVERPROTOCOL);
+    if (serverProtocol == null)
+      serverProtocol = "http";
+
+    String serverName = parameters.getParameter(SharePointConfig.PARAM_SERVERNAME);
+    if (serverName == null)
+      serverName = "localhost";
+
+    String serverPort = parameters.getParameter(SharePointConfig.PARAM_SERVERPORT);
+    if (serverPort == null)
+      serverPort = "";
+
+    String serverLocation = parameters.getParameter(SharePointConfig.PARAM_SERVERLOCATION);
+    if (serverLocation == null)
+      serverLocation = "";
+      
+    String userName = parameters.getParameter(SharePointConfig.PARAM_SERVERUSERNAME);
+    if (userName == null)
+      userName = "";
+
+    String password = parameters.getObfuscatedParameter(SharePointConfig.PARAM_SERVERPASSWORD);
+    if (password == null)
+      password = "";
+    else
+      password = out.mapPasswordToKey(password);
+
+    String keystore = parameters.getParameter(SharePointConfig.PARAM_SERVERKEYSTORE);
+    IKeystoreManager localKeystore;
+    if (keystore == null)
+      localKeystore = KeystoreManagerFactory.make("");
+    else
+      localKeystore = KeystoreManagerFactory.make("",keystore);
+
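+    // Build a display summary of the certificates currently in the keystore.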
+    List<Map<String,String>> certificates = new ArrayList<Map<String,String>>();
+    
+    String[] contents = localKeystore.getContents();
+    for (String alias : contents)
+    {
+      String description = localKeystore.getDescription(alias);
+      if (description.length() > 128)
+        description = description.substring(0,125) + "...";
+      Map<String,String> certificate = new HashMap<String,String>();
+      certificate.put("ALIAS", alias);
+      certificate.put("DESCRIPTION", description);
+      certificates.add(certificate);
+    }
+    
+    // Fill in context
+    velocityContext.put("SERVERVERSION", serverVersion);
+    velocityContext.put("SERVERPROTOCOL", serverProtocol);
+    velocityContext.put("SERVERNAME", serverName);
+    velocityContext.put("SERVERPORT", serverPort);
+    velocityContext.put("SERVERLOCATION", serverLocation);
+    velocityContext.put("USERNAME", userName);
+    velocityContext.put("PASSWORD", password);
+    if (keystore != null)
+      velocityContext.put("KEYSTORE", keystore);
+    velocityContext.put("CERTIFICATELIST", certificates);
+    
+  }
+
   /** Output the specification header section.
   * This method is called in the head section of a job page which has selected a repository connection of the current type.  Its purpose is to add the required tabs
   * to the list, and to output any javascript methods that might be needed by the job editing HTML.
@@ -2391,188 +2597,7 @@
     tabsArray.add(Messages.getString(locale,"SharePointRepository.Paths"));
     tabsArray.add(Messages.getString(locale,"SharePointRepository.Security"));
     tabsArray.add(Messages.getString(locale,"SharePointRepository.Metadata"));
-    out.print(
-"<script type=\"text/javascript\">\n"+
-"<!--\n"+
-"\n"+
-"function checkSpecification()\n"+
-"{\n"+
-"  // Does nothing right now.\n"+
-"  return true;\n"+
-"}\n"+
-"\n"+
-"function SpecRuleAddPath(anchorvalue)\n"+
-"{\n"+
-"  if (editjob.spectype.value==\"\")\n"+
-"  {\n"+
-"    alert(\""+Messages.getBodyJavascriptString(locale,"SharePointRepository.PleaseSelectATypeFirst")+"\");\n"+
-"    editjob.spectype.focus();\n"+
-"  }\n"+
-"  else if (editjob.specflavor.value==\"\")\n"+
-"  {\n"+
-"    alert(\""+Messages.getBodyJavascriptString(locale,"SharePointRepository.PleaseSelectAnActionFirst")+"\");\n"+
-"    editjob.specflavor.focus();\n"+
-"  }\n"+
-"  else\n"+
-"    SpecOp(\"specop\",\"Add\",anchorvalue);\n"+
-"}\n"+
-"  \n"+
-"function SpecPathReset(anchorvalue)\n"+
-"{\n"+
-"  SpecOp(\"specpathop\",\"Reset\",anchorvalue);\n"+
-"}\n"+
-"  \n"+
-"function SpecPathAppendSite(anchorvalue)\n"+
-"{\n"+
-"  if (editjob.specsite.value == \"\")\n"+
-"  {\n"+
-"    alert(\""+Messages.getBodyJavascriptString(locale,"SharePointRepository.PleaseSelectASiteFirst")+"\");\n"+
-"    editjob.specsite.focus();\n"+
-"  }\n"+
-"  else\n"+
-"    SpecOp(\"specpathop\",\"AppendSite\",anchorvalue);\n"+
-"}\n"+
-"\n"+
-"function SpecPathAppendLibrary(anchorvalue)\n"+
-"{\n"+
-"  if (editjob.speclibrary.value == \"\")\n"+
-"  {\n"+
-"    alert(\""+Messages.getBodyJavascriptString(locale,"SharePointRepository.PleaseSelectALibraryFirst")+"\");\n"+
-"    editjob.speclibrary.focus();\n"+
-"  }\n"+
-"  else\n"+
-"    SpecOp(\"specpathop\",\"AppendLibrary\",anchorvalue);\n"+
-"}\n"+
-"\n"+
-"function SpecPathAppendList(anchorvalue)\n"+
-"{\n"+
-"  if (editjob.speclist.value == \"\")\n"+
-"  {\n"+
-"    alert(\""+Messages.getBodyJavascriptString(locale,"SharePointRepository.PleaseSelectAListFirst")+"\");\n"+
-"    editjob.speclist.focus();\n"+
-"  }\n"+
-"  else\n"+
-"    SpecOp(\"specpathop\",\"AppendList\",anchorvalue);\n"+
-"}\n"+
-"\n"+
-"function SpecPathAppendText(anchorvalue)\n"+
-"{\n"+
-"  if (editjob.specmatch.value == \"\")\n"+
-"  {\n"+
-"    alert(\""+Messages.getBodyJavascriptString(locale,"SharePointRepository.PleaseProvideMatchTextFirst")+"\");\n"+
-"    editjob.specmatch.focus();\n"+
-"  }\n"+
-"  else\n"+
-"    SpecOp(\"specpathop\",\"AppendText\",anchorvalue);\n"+
-"}\n"+
-"\n"+
-"function SpecPathRemove(anchorvalue)\n"+
-"{\n"+
-"  SpecOp(\"specpathop\",\"Remove\",anchorvalue);\n"+
-"}\n"+
-"\n"+
-"function MetaRuleAddPath(anchorvalue)\n"+
-"{\n"+
-"  if (editjob.metaflavor.value==\"\")\n"+
-"  {\n"+
-"    alert(\""+Messages.getBodyJavascriptString(locale,"SharePointRepository.PleaseSelectAnActionFirst")+"\");\n"+
-"    editjob.metaflavor.focus();\n"+
-"  }\n"+
-"  else\n"+
-"    SpecOp(\"metaop\",\"Add\",anchorvalue);\n"+
-"}\n"+
-"\n"+
-"function MetaPathReset(anchorvalue)\n"+
-"{\n"+
-"  SpecOp(\"metapathop\",\"Reset\",anchorvalue);\n"+
-"}\n"+
-"  \n"+
-"function MetaPathAppendSite(anchorvalue)\n"+
-"{\n"+
-"  if (editjob.metasite.value == \"\")\n"+
-"  {\n"+
-"    alert(\""+Messages.getBodyJavascriptString(locale,"SharePointRepository.PleaseSelectASiteFirst")+"\");\n"+
-"    editjob.metasite.focus();\n"+
-"  }\n"+
-"  else\n"+
-"    SpecOp(\"metapathop\",\"AppendSite\",anchorvalue);\n"+
-"}\n"+
-"\n"+
-"function MetaPathAppendLibrary(anchorvalue)\n"+
-"{\n"+
-"  if (editjob.metalibrary.value == \"\")\n"+
-"  {\n"+
-"    alert(\""+Messages.getBodyJavascriptString(locale,"SharePointRepository.PleaseSelectALibraryFirst")+"\");\n"+
-"    editjob.metalibrary.focus();\n"+
-"  }\n"+
-"  else\n"+
-"    SpecOp(\"metapathop\",\"AppendLibrary\",anchorvalue);\n"+
-"}\n"+
-"\n"+
-"function MetaPathAppendList(anchorvalue)\n"+
-"{\n"+
-"  if (editjob.metalist.value == \"\")\n"+
-"  {\n"+
-"    alert(\""+Messages.getBodyJavascriptString(locale,"SharePointRepository.PleaseSelectAListFirst")+"\");\n"+
-"    editjob.metalist.focus();\n"+
-"  }\n"+
-"  else\n"+
-"    SpecOp(\"metapathop\",\"AppendList\",anchorvalue);\n"+
-"}\n"+
-"\n"+
-"function MetaPathAppendText(anchorvalue)\n"+
-"{\n"+
-"  if (editjob.metamatch.value == \"\")\n"+
-"  {\n"+
-"    alert(\""+Messages.getBodyJavascriptString(locale,"SharePointRepository.PleaseProvideMatchTextFirst")+"\");\n"+
-"    editjob.metamatch.focus();\n"+
-"  }\n"+
-"  else\n"+
-"    SpecOp(\"metapathop\",\"AppendText\",anchorvalue);\n"+
-"}\n"+
-"\n"+
-"function MetaPathRemove(anchorvalue)\n"+
-"{\n"+
-"  SpecOp(\"metapathop\",\"Remove\",anchorvalue);\n"+
-"}\n"+
-"\n"+
-"function SpecAddAccessToken(anchorvalue)\n"+
-"{\n"+
-"  if (editjob.spectoken.value == \"\")\n"+
-"  {\n"+
-"    alert(\""+Messages.getBodyJavascriptString(locale,"SharePointRepository.AccessTokenCannotBeNull")+"\");\n"+
-"    editjob.spectoken.focus();\n"+
-"  }\n"+
-"  else\n"+
-"    SpecOp(\"accessop\",\"Add\",anchorvalue);\n"+
-"}\n"+
-"\n"+
-"function SpecAddMapping(anchorvalue)\n"+
-"{\n"+
-"  if (editjob.specmatch.value == \"\")\n"+
-"  {\n"+
-"    alert(\""+Messages.getBodyJavascriptString(locale,"SharePointRepository.MatchStringCannotBeEmpty")+"\");\n"+
-"    editjob.specmatch.focus();\n"+
-"    return;\n"+
-"  }\n"+
-"  if (!isRegularExpression(editjob.specmatch.value))\n"+
-"  {\n"+
-"    alert(\""+Messages.getBodyJavascriptString(locale,"SharePointRepository.MatchStringMustBeValidRegularExpression")+"\");\n"+
-"    editjob.specmatch.focus();\n"+
-"    return;\n"+
-"  }\n"+
-"  SpecOp(\"specmappingop\",\"Add\",anchorvalue);\n"+
-"}\n"+
-"\n"+
-"function SpecOp(n, opValue, anchorvalue)\n"+
-"{\n"+
-"  eval(\"editjob.\"+n+\".value = \\\"\"+opValue+\"\\\"\");\n"+
-"  postFormSetAnchor(anchorvalue);\n"+
-"}\n"+
-"\n"+
-"//-->\n"+
-"</script>\n"
-    );
+    Messages.outputResourceWithVelocity(out,locale,"editSpecification.js",null);
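+    // The specification-editing javascript helpers formerly emitted inline above are now supplied by the editSpecification.js resource.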
   }
   
   /** Output the specification body section.
@@ -2587,1346 +2612,439 @@
   public void outputSpecificationBody(IHTTPOutput out, Locale locale, DocumentSpecification ds, String tabName)
     throws ManifoldCFException, IOException
   {
-    int i;
-    int k;
-    int l;
-
-    // Paths tab
-
-
+    Map<String,Object> velocityContext = new HashMap<String,Object>();
+    velocityContext.put("TabName",tabName);
+    
+    fillInSecurityTab(velocityContext,out,ds);
+    fillInPathsTab(velocityContext,out,ds);
+    fillInMetadataTab(velocityContext,out,ds);
+    
+    // Now fill in the parts of the tabs that require transient context logic
     if (tabName.equals(Messages.getString(locale,"SharePointRepository.Paths")))
-    {
-      out.print(
-"<table class=\"displaytable\">\n"+
-"  <tr><td class=\"separator\" colspan=\"2\"><hr/></td></tr>\n"+
-"  <tr>\n"+
-"    <td class=\"description\"><nobr>" + Messages.getBodyString(locale,"SharePointRepository.PathRules") + "</nobr></td>\n"+
-"    <td class=\"boxcell\">\n"+
-"      <table class=\"formtable\">\n"+
-"        <tr class=\"formheaderrow\">\n"+
-"          <td class=\"formcolumnheader\"></td>\n"+
-"          <td class=\"formcolumnheader\"><nobr>" + Messages.getBodyString(locale,"SharePointRepository.PathMatch") + "</nobr></td>\n"+
-"          <td class=\"formcolumnheader\"><nobr>" + Messages.getBodyString(locale,"SharePointRepository.Type") + "</nobr></td>\n"+
-"          <td class=\"formcolumnheader\"><nobr>" + Messages.getBodyString(locale,"SharePointRepository.Action") + "</nobr></td>\n"+
-"        </tr>\n"
-      );
-      i = 0;
-      l = 0;
-      k = 0;
-      while (i < ds.getChildCount())
-      {
-        SpecificationNode sn = ds.getChild(i++);
-        if (sn.getType().equals("startpoint"))
-        {
-          String site = sn.getAttributeValue("site");
-          String lib = sn.getAttributeValue("lib");
-          String siteLib = site + "/" + lib + "/";
-
-          // Go through all the file/folder rules for the startpoint, and generate new "rules" corresponding to each.
-          int j = 0;
-          while (j < sn.getChildCount())
-          {
-            SpecificationNode node = sn.getChild(j++);
-            if (node.getType().equals("include") || node.getType().equals("exclude"))
-            {
-              String matchPart = node.getAttributeValue("match");
-              String ruleType = node.getAttributeValue("type");
-              
-              String theFlavor = node.getType();
-
-              String pathDescription = "_"+Integer.toString(k);
-              String pathOpName = "specop"+pathDescription;
-              String thePath = siteLib + matchPart;
-              out.print(
-"        <tr class=\""+(((l % 2)==0)?"evenformrow":"oddformrow")+"\">\n"+
-"          <td class=\"formcolumncell\">\n"+
-"            <nobr>\n"+
-"              <a name=\""+"path_"+Integer.toString(k)+"\"/>\n"+
-"              <input type=\"hidden\" name=\""+pathOpName+"\" value=\"\"/>\n"+
-"              <input type=\"button\" value=\"" + Messages.getAttributeString(locale,"SharePointRepository.InsertNewRule") + "\" onClick='Javascript:SpecOp(\""+pathOpName+"\",\"Insert Here\",\"path_"+Integer.toString(k)+"\")' alt=\""+"Insert new rule before rule #"+Integer.toString(k)+"\"/>\n"+
-"            </nobr>\n"+
-"          </td>\n"+
-"          <td class=\"formcolumncell\" colspan=\"3\"></td>\n"+
-"        </tr>\n"
-              );
-              l++;
-              out.print(
-"        <tr class=\""+(((l % 2)==0)?"evenformrow":"oddformrow")+"\">\n"+
-"          <td class=\"formcolumncell\">\n"+
-"            <nobr>\n"+
-"              <input type=\"button\" value=\"" + Messages.getAttributeString(locale,"SharePointRepository.Delete") + "\" onClick='Javascript:SpecOp(\""+pathOpName+"\",\"Delete\",\"path_"+Integer.toString(k)+"\")' alt=\""+Messages.getAttributeString(locale,"SharePointRepository.DeleteRule")+Integer.toString(k)+"\"/>\n"+
-"            </nobr>\n"+
-"          </td>\n"+
-"          <td class=\"formcolumncell\">\n"+
-"            <nobr>\n"+
-"              <input type=\"hidden\" name=\""+"specpath"+pathDescription+"\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(thePath)+"\"/>\n"+
-"              "+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(thePath)+"\n"+
-"            </nobr>\n"+
-"          </td>\n"+
-"          <td class=\"formcolumncell\">\n"+
-"            <nobr>\n"+
-"              <input type=\"hidden\" name=\""+"spectype"+pathDescription+"\" value=\"file\"/>\n"+
-"              file\n"+
-"            </nobr>\n"+
-"          </td>\n"+
-"          <td class=\"formcolumncell\">\n"+
-"            <nobr>\n"+
-"              <input type=\"hidden\" name=\""+"specflav"+pathDescription+"\" value=\""+theFlavor+"\"/>\n"+
-"              "+theFlavor+"\n"+
-"            </nobr>\n"+
-"          </td>\n"+
-"        </tr>\n"
-              );
-              l++;
-              k++;
-              if (ruleType.equals("file") && !matchPart.startsWith("*"))
-              {
-                // Generate another rule corresponding to all matching paths.
-                pathDescription = "_"+Integer.toString(k);
-                pathOpName = "specop"+pathDescription;
-
-                thePath = siteLib + "*/" + matchPart;
-                out.print(
-"        <tr class=\""+(((l % 2)==0)?"evenformrow":"oddformrow")+"\">\n"+
-"          <td class=\"formcolumncell\">\n"+
-"            <nobr>\n"+
-"              <a name=\""+"path_"+Integer.toString(k)+"\"/>\n"+
-"              <input type=\"hidden\" name=\""+pathOpName+"\" value=\"\"/>\n"+
-"              <input type=\"button\" value=\"" + Messages.getAttributeString(locale,"SharePointRepository.InsertNewRule") + "\" onClick='Javascript:SpecOp(\""+pathOpName+"\",\"Insert Here\",\"path_"+Integer.toString(k)+"\")' alt=\""+Messages.getAttributeString(locale,"SharePointRepository.InsertNewRuleBeforeRule")+Integer.toString(k)+"\"/>\n"+
-"            </nobr>\n"+
-"          </td>\n"+
-"          <td class=\"formcolumncell\" colspan=\"3\"></td>\n"+
-"        </tr>\n"
-                );
-                l++;
-                out.print(
-"        <tr class=\""+(((l % 2)==0)?"evenformrow":"oddformrow")+"\">\n"+
-"          <td class=\"formcolumncell\">\n"+
-"            <nobr>\n"+
-"              <input type=\"button\" value=\"" + Messages.getAttributeString(locale,"SharePointRepository.Delete") + "\" onClick='Javascript:SpecOp(\""+pathOpName+"\",\"Delete\",\"path_"+Integer.toString(k)+"\")' alt=\""+Messages.getAttributeString(locale,"SharePointRepository.DeleteRule")+Integer.toString(k)+"\"/>\n"+
-"            </nobr>\n"+
-"          </td>\n"+
-"          <td class=\"formcolumncell\">\n"+
-"            <nobr>\n"+
-"              <input type=\"hidden\" name=\""+"specpath"+pathDescription+"\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(thePath)+"\"/>\n"+
-"              "+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(thePath)+"\n"+
-"            </nobr>\n"+
-"          </td>\n"+
-"          <td class=\"formcolumncell\">\n"+
-"            <nobr>\n"+
-"              <input type=\"hidden\" name=\""+"spectype"+pathDescription+"\" value=\"file\"/>\n"+
-"              file\n"+
-"            </nobr>\n"+
-"          </td>\n"+
-"          <td class=\"formcolumncell\">\n"+
-"            <nobr>\n"+
-"              <input type=\"hidden\" name=\""+"specflav"+pathDescription+"\" value=\""+theFlavor+"\"/>\n"+
-"              "+theFlavor+"\n"+
-"            </nobr>\n"+
-"          </td>\n"+
-"        </tr>\n"
-                );
-                l++;
-                  
-                k++;
-              }
-            }
-          }
-        }
-        else if (sn.getType().equals("pathrule"))
-        {
-          String match = sn.getAttributeValue("match");
-          String type = sn.getAttributeValue("type");
-          String action = sn.getAttributeValue("action");
-          
-          String pathDescription = "_"+Integer.toString(k);
-          String pathOpName = "specop"+pathDescription;
-
-          out.print(
-"        <tr class=\""+(((l % 2)==0)?"evenformrow":"oddformrow")+"\">\n"+
-"          <td class=\"formcolumncell\">\n"+
-"            <nobr>\n"+
-"              <a name=\""+"path_"+Integer.toString(k)+"\"/>\n"+
-"              <input type=\"hidden\" name=\""+pathOpName+"\" value=\"\"/>\n"+
-"              <input type=\"button\" value=\"" + Messages.getAttributeString(locale,"SharePointRepository.InsertNewRule") + "\" onClick='Javascript:SpecOp(\""+pathOpName+"\",\"Insert Here\",\"path_"+Integer.toString(k)+"\")' alt=\""+Messages.getAttributeString(locale,"SharePointRepository.InsertNewRuleBeforeRule")+Integer.toString(k)+"\"/>\n"+
-"            </nobr>\n"+
-"          </td>\n"+
-"          <td class=\"formcolumncell\" colspan=\"3\"></td>\n"+
-"        </tr>\n"
-          );
-          l++;
-          out.print(
-"        <tr class=\""+(((l % 2)==0)?"evenformrow":"oddformrow")+"\">\n"+
-"          <td class=\"formcolumncell\">\n"+
-"            <nobr>\n"+
-"              <input type=\"button\" value=\"" + Messages.getAttributeString(locale,"SharePointRepository.Delete") + "\" onClick='Javascript:SpecOp(\""+pathOpName+"\",\"Delete\",\"path_"+Integer.toString(k)+"\")' alt=\""+Messages.getAttributeString(locale,"SharePointRepository.DeleteRule")+Integer.toString(k)+"\"/>\n"+
-"            </nobr>\n"+
-"          </td>\n"+
-"          <td class=\"formcolumncell\">\n"+
-"            <nobr>\n"+
-"              <input type=\"hidden\" name=\""+"specpath"+pathDescription+"\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(match)+"\"/>\n"+
-"              "+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(match)+"\n"+
-"            </nobr>\n"+
-"          </td>\n"+
-"          <td class=\"formcolumncell\">\n"+
-"            <nobr>\n"+
-"              <input type=\"hidden\" name=\""+"spectype"+pathDescription+"\" value=\""+type+"\"/>\n"+
-"              "+type+"\n"+
-"            </nobr>\n"+
-"          </td>\n"+
-"          <td class=\"formcolumncell\">\n"+
-"            <nobr>\n"+
-"              <input type=\"hidden\" name=\""+"specflav"+pathDescription+"\" value=\""+action+"\"/>\n"+
-"              "+action+"\n"+
-"            </nobr>\n"+
-"          </td>\n"+
-"        </tr>\n"
-          );
-          l++;
-          k++;
-        }
-      }
-      if (k == 0)
-      {
-        out.print(
-"        <tr class=\"formrow\"><td colspan=\"4\" class=\"formmessage\">" + Messages.getBodyString(locale,"SharePointRepository.NoDocumentsCurrentlyIncluded") + "</td></tr>\n"
-        );
-      }
-      out.print(
-"        <tr class=\""+(((l % 2)==0)?"evenformrow":"oddformrow")+"\">\n"+
-"          <td class=\"formcolumncell\">\n"+
-"            <nobr>\n"+
-"              <a name=\""+"path_"+Integer.toString(k)+"\"/>\n"+
-"              <input type=\"hidden\" name=\"specop\" value=\"\"/>\n"+
-"              <input type=\"hidden\" name=\"specpathcount\" value=\""+Integer.toString(k)+"\"/>\n"+
-"              <input type=\"button\" value=\"" + Messages.getAttributeString(locale,"SharePointRepository.AddNewRule") + "\" onClick='Javascript:SpecRuleAddPath(\"path_"+Integer.toString(k)+"\")' alt=\"Add rule\"/>\n"+
-"            </nobr>\n"+
-"          </td>\n"+
-"          <td class=\"formcolumncell\" colspan=\"3\"></td>\n"+
-"        </tr>\n"+
-"        <tr class=\"formrow\"><td colspan=\"4\" class=\"formseparator\"><hr/></td></tr>\n"+
-"        <tr class=\"formrow\">\n"+
-"          <td class=\"formcolumncell\">\n"+
-"            <nobr>"+Messages.getBodyString(locale,"SharePointRepository.NewRule")+"</nobr>\n"
-      );
-
-      // The following variables may be in the thread context because postspec.jsp put them there:
-      // (1) "specpath", which contains the rule path as it currently stands;
-      // (2) "specpathstate", which describes what the current path represents.  Values are "unknown", "site", "library", "list".
-      // Once the widget is in the state "unknown", it can only be reset, and cannot be further modified
-      // specsitepath may be in the thread context, put there by postspec.jsp 
-      String pathSoFar = (String)currentContext.get("specpath");
-      String pathState = (String)currentContext.get("specpathstate");
-      String pathLibrary = (String)currentContext.get("specpathlibrary");
-      if (pathState == null)
-      {
-        pathState = "unknown";
-        pathLibrary = null;
-      }
-      if (pathSoFar == null)
-      {
-        pathSoFar = "/";
-        pathState = "site";
-        pathLibrary = null;
-      }
-
-      // Grab next site list and lib list
-      ArrayList childSiteList = null;
-      ArrayList childLibList = null;
-      ArrayList childListList = null;
-      String message = null;
-      if (pathState.equals("site"))
-      {
-        try
-        {
-          String queryPath = pathSoFar;
-          if (queryPath.equals("/"))
-            queryPath = "";
-          childSiteList = getSites(queryPath);
-          if (childSiteList == null)
-          {
-            // Illegal path - state becomes "unknown".
-            pathState = "unknown";
-            pathLibrary = null;
-          }
-          childLibList = getDocLibsBySite(queryPath);
-          if (childLibList == null)
-          {
-            // Illegal path - state becomes "unknown"
-            pathState = "unknown";
-            pathLibrary = null;
-          }
-          childListList = getListsBySite(queryPath);
-          if (childListList == null)
-          {
-            // Illegal path - state becomes "unknown"
-            pathState = "unknown";
-            pathLibrary = null;
-          }
-        }
-        catch (ManifoldCFException e)
-        {
-          e.printStackTrace();
-          message = e.getMessage();
-        }
-        catch (ServiceInterruption e)
-        {
-          message = "SharePoint unavailable: "+e.getMessage();
-        }
-      }
-      out.print(
-"          </td>\n"+
-"          <td class=\"formcolumncell\">\n"+
-"            <nobr>\n"+
-"              <input type=\"hidden\" name=\"specpathop\" value=\"\"/>\n"+
-"              <input type=\"hidden\" name=\"specpath\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(pathSoFar)+"\"/>\n"+
-"              <input type=\"hidden\" name=\"specpathstate\" value=\""+pathState+"\"/>\n"
-      );
-      if (pathLibrary != null)
-      {
-        out.print(
-"              <input type=\"hidden\" name=\"specpathlibrary\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(pathLibrary)+"\"/>\n"
-        );
-      }
-      out.print(
-"              "+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(pathSoFar)+"\n"+
-"            </nobr>\n"+
-"          </td>\n"+
-"          <td class=\"formcolumncell\">\n"+
-"            <nobr>\n"
-      );
-      if (pathState.equals("unknown"))
-      {
-        if (pathLibrary == null)
-        {
-          out.print(
-"              <select name=\"spectype\" size=\"4\">\n"+
-"                <option value=\"file\" selected=\"true\">" + Messages.getBodyString(locale,"SharePointRepository.File") + "</option>\n"+
-"                <option value=\"library\">" + Messages.getBodyString(locale,"SharePointRepository.Library") + "</option>\n"+
-"                <option value=\"list\">" + Messages.getBodyString(locale,"SharePointRepository.List") + "</option>\n"+
-"                <option value=\"site\">" + Messages.getBodyString(locale,"SharePointRepository.Site") + "</option>\n"+
-"              </select>\n"
-          );
-        }
-        else
-        {
-          out.print(
-"              <input type=\"hidden\" name=\"spectype\" value=\"file\"/>\n"+
-"              file\n"
-          );
-        }
-      }
-      else
-      {
-        out.print(
-"              <input type=\"hidden\" name=\"spectype\" value=\""+pathState+"\"/>\n"+
-"              "+pathState+"\n"
-        );
-      }
-      out.print(
-"            </nobr>\n"+
-"          </td>\n"+
-"          <td class=\"formcolumncell\">\n"+
-"            <nobr>\n"+
-"              <select name=\"specflavor\" size=\"2\">\n"+
-"                <option value=\"include\" selected=\"true\">" + Messages.getBodyString(locale,"SharePointRepository.Include") + "</option>\n"+
-"                <option value=\"exclude\">" + Messages.getBodyString(locale,"SharePointRepository.Exclude") + "</option>\n"+
-
-"              </select>\n"+
-"            </nobr>\n"+
-"          </td>\n"+
-"        </tr>\n"+
-"        <tr class=\"formrow\"><td colspan=\"4\" class=\"formseparator\"><hr/></td></tr>\n"+
-"        <tr class=\"formrow\">\n"
-      );
-      if (message != null)
-      {
-        // Display the error message, with no widgets
-        out.print(
-"          <td class=\"formmessage\" colspan=\"4\">"+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(message)+"</td>\n"
-        );
-      }
-      else
-      {
-        // What we display depends on the determined state of the path.  If the path is a library or is unknown, all we can do is allow a type-in to append
-        // to it, or allow a reset.  If the path is a site, then we can optionally display libraries, sites, lists, OR allow a type-in.
-        // The path buttons are on the left; they consist of "Reset" (to reset the path), "+" (to add to the path), and "-" (to remove from the path).
-        out.print(
-"          <td class=\"formcolumncell\">\n"+
-"            <nobr>\n"+
-"              <a name=\"pathwidget\"/>\n"+
-"              <input type=\"button\" value=\"Reset Path\" onClick='Javascript:SpecPathReset(\"pathwidget\")' alt=\""+Messages.getAttributeString(locale,"SharePointRepository.ResetRulePath")+"\"/>\n"
-        );
-        if (pathSoFar.length() > 1 && (pathState.equals("site") || pathState.equals("library") || pathState.equals("list")))
-        {
-          out.print(
-"              <input type=\"button\" value=\"-\" onClick='Javascript:SpecPathRemove(\"pathwidget\")' alt=\""+Messages.getAttributeString(locale,"SharePointRepository.RemoveFromRulePath")+"\"/>\n"
-          );
-        }
-        out.print(
-"            </nobr>\n"+
-"          </td>\n"+
-"          <td class=\"formcolumncell\" colspan=\"3\">\n"+
-"            <nobr>\n"
-        );
-        if (pathState.equals("site") && childSiteList != null && childSiteList.size() > 0)
-        {
-          out.print(
-"              <input type=\"button\" value=\"" + Messages.getAttributeString(locale,"SharePointRepository.AddSite") + "\" onClick='Javascript:SpecPathAppendSite(\"pathwidget\")' alt=\""+Messages.getAttributeString(locale,"SharePointRepository.AddSiteToRulePath")+"\"/>\n"+
-"              <select name=\"specsite\" size=\"5\">\n"+
-"                <option value=\"\" selected=\"true\">-- " + Messages.getBodyString(locale,"SharePointRepository.SelectSite") + " --</option>\n"
-          );
-          int q = 0;
-          while (q < childSiteList.size())
-          {
-            org.apache.manifoldcf.crawler.connectors.sharepoint.NameValue childSite = (org.apache.manifoldcf.crawler.connectors.sharepoint.NameValue)childSiteList.get(q++);
-            out.print(
-"                <option value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(childSite.getValue())+"\">"+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(childSite.getPrettyName())+"</option>\n"
-            );
-          }
-          out.print(
-"              </select>\n"
-          );
-        }
-        
-        if (pathState.equals("site") && childLibList != null && childLibList.size() > 0)
-        {
-          out.print(
-"              <input type=\"button\" value=\"" + Messages.getAttributeString(locale,"SharePointRepository.AddLibrary") + "\" onClick='Javascript:SpecPathAppendLibrary(\"pathwidget\")' alt=\""+Messages.getAttributeString(locale,"SharePointRepository.AddLibraryToRulePath")+"\"/>\n"+
-"              <select name=\"speclibrary\" size=\"5\">\n"+
-"                <option value=\"\" selected=\"true\">-- " + Messages.getBodyString(locale,"SharePointRepository.SelectLibrary") + " --</option>\n"
-          );
-          int q = 0;
-          while (q < childLibList.size())
-          {
-            org.apache.manifoldcf.crawler.connectors.sharepoint.NameValue childLib = (org.apache.manifoldcf.crawler.connectors.sharepoint.NameValue)childLibList.get(q++);
-            out.print(
-"                <option value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(childLib.getValue())+"\">"+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(childLib.getPrettyName())+"</option>\n"
-            );
-          }
-          out.print(
-"              </select>\n"
-          );
-        }
-
-        if (pathState.equals("site") && childListList != null && childListList.size() > 0)
-        {
-          out.print(
-"              <input type=\"button\" value=\"" + Messages.getAttributeString(locale,"SharePointRepository.AddList") + "\" onClick='Javascript:SpecPathAppendList(\"pathwidget\")' alt=\""+Messages.getAttributeString(locale,"SharePointRepository.AddListToRulePath")+"\"/>\n"+
-"              <select name=\"speclist\" size=\"5\">\n"+
-"                <option value=\"\" selected=\"true\">-- " + Messages.getBodyString(locale,"SharePointRepository.SelectList") + " --</option>\n"
-          );
-          int q = 0;
-          while (q < childListList.size())
-          {
-            org.apache.manifoldcf.crawler.connectors.sharepoint.NameValue childList = (org.apache.manifoldcf.crawler.connectors.sharepoint.NameValue)childListList.get(q++);
-            out.print(
-"                <option value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(childList.getValue())+"\">"+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(childList.getPrettyName())+"</option>\n"
-            );
-          }
-          out.print(
-"              </select>\n"
-          );
-        }
-        
-        // If it's a list name, we're done; no text allowed
-        if (!pathState.equals("list"))
-        {
-          out.print(
-"              <input type=\"button\" value=\"" + Messages.getAttributeString(locale,"SharePointRepository.AddText") + "\" onClick='Javascript:SpecPathAppendText(\"pathwidget\")' alt=\""+Messages.getAttributeString(locale,"SharePointRepository.AddTextToRulePath")+"\"/>\n"+
-"              <input type=\"text\" name=\"specmatch\" size=\"32\" value=\"\"/>\n"
-          );
-        }
-        
-        out.print(
-"            </nobr>\n"+
-"          </td>\n"
-        );
-      }
-      out.print(
-"        </tr>\n"+
-"      </table>\n"+
-"    </td>\n"+
-"  </tr>\n"+
-"</table>\n"
-      );
-    }
-    else
-    {
-      // Hiddens for path rules
-      i = 0;
-      k = 0;
-      while (i < ds.getChildCount())
-      {
-        SpecificationNode sn = ds.getChild(i++);
-        if (sn.getType().equals("startpoint"))
-        {
-          String site = sn.getAttributeValue("site");
-          String lib = sn.getAttributeValue("lib");
-          String siteLib = site + "/" + lib + "/";
-
-          // Go through all the file/folder rules for the startpoint, and generate new "rules" corresponding to each.
-          int j = 0;
-          while (j < sn.getChildCount())
-          {
-            SpecificationNode node = sn.getChild(j++);
-            if (node.getType().equals("include") || node.getType().equals("exclude"))
-            {
-              String matchPart = node.getAttributeValue("match");
-              String ruleType = node.getAttributeValue("type");
-              
-              String theFlavor = node.getType();
-
-              String pathDescription = "_"+Integer.toString(k);
-              
-              String thePath = siteLib + matchPart;
-              out.print(
-"<input type=\"hidden\" name=\""+"specpath"+pathDescription+"\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(thePath)+"\"/>\n"+
-"<input type=\"hidden\" name=\""+"spectype"+pathDescription+"\" value=\"file\"/>\n"+
-"<input type=\"hidden\" name=\""+"specflav"+pathDescription+"\" value=\""+theFlavor+"\"/>\n"
-              );
-              k++;
-
-              if (ruleType.equals("file") && !matchPart.startsWith("*"))
-              {
-                // Generate another rule corresponding to all matching paths.
-                pathDescription = "_"+Integer.toString(k);
-
-                thePath = siteLib + "*/" + matchPart;
-                out.print(
-"<input type=\"hidden\" name=\""+"specpath"+pathDescription+"\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(thePath)+"\"/>\n"+
-"<input type=\"hidden\" name=\""+"spectype"+pathDescription+"\" value=\"file\"/>\n"+
-"<input type=\"hidden\" name=\""+"specflav"+pathDescription+"\" value=\""+theFlavor+"\"/>\n"
-                );
-                k++;
-              }
-            }
-          }
-        }
-        else if (sn.getType().equals("pathrule"))
-        {
-          String match = sn.getAttributeValue("match");
-          String type = sn.getAttributeValue("type");
-          String action = sn.getAttributeValue("action");
-          
-          String pathDescription = "_"+Integer.toString(k);
-          out.print(
-"<input type=\"hidden\" name=\""+"specpath"+pathDescription+"\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(match)+"\"/>\n"+
-"<input type=\"hidden\" name=\""+"spectype"+pathDescription+"\" value=\""+type+"\"/>\n"+
-"<input type=\"hidden\" name=\""+"specflav"+pathDescription+"\" value=\""+action+"\"/>\n"
-          );
-          k++;
-        }
-      }
-      out.print(
-"<input type=\"hidden\" name=\"specpathcount\" value=\""+Integer.toString(k)+"\"/>\n"
-      );
-    }
-
-    // Security tab
-
-    // Find whether security is on or off
-    i = 0;
-    boolean securityOn = true;
-    while (i < ds.getChildCount())
-    {
-      SpecificationNode sn = ds.getChild(i++);
-      if (sn.getType().equals("security"))
-      {
-        String securityValue = sn.getAttributeValue("value");
-        if (securityValue.equals("off"))
-          securityOn = false;
-        else if (securityValue.equals("on"))
-          securityOn = true;
-      }
-    }
-
-    if (tabName.equals(Messages.getString(locale,"SharePointRepository.Security")))
-    {
-      out.print(
-"<table class=\"displaytable\">\n"+
-"  <tr><td class=\"separator\" colspan=\"2\"><hr/></td></tr>\n"+
-"  <tr>\n"+
-"    <td class=\"description\"><nobr>" + Messages.getBodyString(locale,"SharePointRepository.Security2") + "</nobr></td>\n"+
-"    <td class=\"value\" colspan=\"1\">\n"+
-"      <nobr>\n"+
-"        <input type=\"radio\" name=\"specsecurity\" value=\"on\" "+(securityOn?"checked=\"true\"":"")+" />"+Messages.getBodyString(locale,"SharePointRepository.Enabled")+"&nbsp;\n"+
-"        <input type=\"radio\" name=\"specsecurity\" value=\"off\" "+((securityOn==false)?"checked=\"true\"":"")+" />"+Messages.getBodyString(locale,"SharePointRepository.Disabled")+"\n"+
-"      </nobr>\n"+
-"    </td>\n"+
-"  </tr>\n"+
-"\n"+
-"  <tr><td class=\"separator\" colspan=\"2\"><hr/></td></tr>\n"
-      );
-      // Finally, go through forced ACL
-      i = 0;
-      k = 0;
-      while (i < ds.getChildCount())
-      {
-        SpecificationNode sn = ds.getChild(i++);
-        if (sn.getType().equals("access"))
-        {
-          String accessDescription = "_"+Integer.toString(k);
-          String accessOpName = "accessop"+accessDescription;
-          String token = sn.getAttributeValue("token");
-          out.print(
-"  <tr>\n"+
-"    <td class=\"description\">\n"+
-"      <input type=\"hidden\" name=\""+accessOpName+"\" value=\"\"/>\n"+
-"      <input type=\"hidden\" name=\""+"spectoken"+accessDescription+"\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(token)+"\"/>\n"+
-"      <a name=\""+"token_"+Integer.toString(k)+"\">\n"+
-"        <input type=\"button\" value=\"" + Messages.getAttributeString(locale,"SharePointRepository.Delete") + "\" onClick='Javascript:SpecOp(\""+accessOpName+"\",\"Delete\",\"token_"+Integer.toString(k)+"\")' alt=\""+Messages.getAttributeString(locale,"SharePointRepository.DeleteToken")+Integer.toString(k)+"\"/>\n"+
-"      </a>\n"+
-"    </td>\n"+
-"    <td class=\"value\">\n"+
-"      <nobr>"+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(token)+"</nobr>\n"+
-"    </td>\n"+
-"  </tr>\n"
-          );
-          k++;
-        }
-      }
-      if (k == 0)
-      {
-        out.print(
-"  <tr>\n"+
-"    <td class=\"message\" colspan=\"2\">" + Messages.getBodyString(locale,"SharePointRepository.NoAccessTokensPresent") + "</td>\n"+
-"  </tr>\n"
-        );
-      }
-      out.print(
-"  <tr><td class=\"lightseparator\" colspan=\"2\"><hr/></td></tr>\n"+
-"  <tr>\n"+
-"    <td class=\"description\">\n"+
-"      <input type=\"hidden\" name=\"tokencount\" value=\""+Integer.toString(k)+"\"/>\n"+
-"      <input type=\"hidden\" name=\"accessop\" value=\"\"/>\n"+
-"      <a name=\""+"token_"+Integer.toString(k)+"\">\n"+
-"        <input type=\"button\" value=\"" + Messages.getAttributeString(locale,"SharePointRepository.Add") + "\" onClick='Javascript:SpecAddAccessToken(\"token_"+Integer.toString(k+1)+"\")' alt=\""+Messages.getAttributeString(locale,"SharePointRepository.AddAccessToken")+"\"/>\n"+
-"      </a>\n"+
-"    </td>\n"+
-"    <td class=\"value\">\n"+
-"      <input type=\"text\" size=\"30\" name=\"spectoken\" value=\"\"/>\n"+
-"    </td>\n"+
-"  </tr>\n"+
-"</table>\n"
-      );
-    }
-    else
-    {
-      out.print(
-"<input type=\"hidden\" name=\"specsecurity\" value=\""+(securityOn?"on":"off")+"\"/>\n"
-      );
-      // Finally, go through forced ACL
-      i = 0;
-      k = 0;
-      while (i < ds.getChildCount())
-      {
-        SpecificationNode sn = ds.getChild(i++);
-        if (sn.getType().equals("access"))
-        {
-          String accessDescription = "_"+Integer.toString(k);
-          String token = sn.getAttributeValue("token");
-          out.print(
-"<input type=\"hidden\" name=\""+"spectoken"+accessDescription+"\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(token)+"\"/>\n"
-          );
-          k++;
-        }
-      }
-      out.print(
-"<input type=\"hidden\" name=\"tokencount\" value=\""+Integer.toString(k)+"\"/>\n"
-      );
-    }
-
-    // Metadata tab
-
+      fillInTransientPathsInfo(velocityContext);
+    else if (tabName.equals(Messages.getString(locale,"SharePointRepository.Metadata")))
+      fillInTransientMetadataInfo(velocityContext);
+    
+    Messages.outputResourceWithVelocity(out,locale,"editSpecification_Security.html",velocityContext);
+    Messages.outputResourceWithVelocity(out,locale,"editSpecification_Paths.html",velocityContext);
+    Messages.outputResourceWithVelocity(out,locale,"editSpecification_Metadata.html",velocityContext);
+  }
+  
+  /** Fill in metadata tab */
+  protected static void fillInMetadataTab(Map<String,Object> velocityContext, IHTTPOutput out, DocumentSpecification ds)
+  {
     // Find the path-value metadata attribute name
-    i = 0;
     String pathNameAttribute = "";
-    while (i < ds.getChildCount())
+    MatchMap matchMap = new MatchMap();
+    List<Map<String,Object>> metadataRules = new ArrayList<Map<String,Object>>();
+    for (int i = 0; i < ds.getChildCount(); i++)
     {
-      SpecificationNode sn = ds.getChild(i++);
+      SpecificationNode sn = ds.getChild(i);
       if (sn.getType().equals("pathnameattribute"))
       {
         pathNameAttribute = sn.getAttributeValue("value");
       }
-    }
-
-    // Find the path-value mapping data
-    i = 0;
-    org.apache.manifoldcf.crawler.connectors.sharepoint.MatchMap matchMap = new org.apache.manifoldcf.crawler.connectors.sharepoint.MatchMap();
-    while (i < ds.getChildCount())
-    {
-      SpecificationNode sn = ds.getChild(i++);
-      if (sn.getType().equals("pathmap"))
+      else if (sn.getType().equals("pathmap"))
       {
         String pathMatch = sn.getAttributeValue("match");
         String pathReplace = sn.getAttributeValue("replace");
         matchMap.appendMatchPair(pathMatch,pathReplace);
       }
-    }
-
-    if (tabName.equals(Messages.getString(locale,"SharePointRepository.Metadata")))
-    {
-      out.print(
-"<input type=\"hidden\" name=\"specmappingcount\" value=\""+Integer.toString(matchMap.getMatchCount())+"\"/>\n"+
-"<input type=\"hidden\" name=\"specmappingop\" value=\"\"/>\n"+
-"\n"+
-"<table class=\"displaytable\">\n"+
-"<tr><td class=\"separator\" colspan=\"4\"><hr/></td></tr>\n"+
-"<tr>\n"+
-"  <td class=\"description\" colspan=\"1\"><nobr>" + Messages.getBodyString(locale,"SharePointRepository.MetadataRules") + "</nobr></td>\n"+
-"    <td class=\"boxcell\" colspan=\"3\">\n"+
-"      <table class=\"formtable\">\n"+
-"        <tr class=\"formheaderrow\">\n"+
-"          <td class=\"formcolumnheader\"></td>\n"+
-"          <td class=\"formcolumnheader\"><nobr>" + Messages.getBodyString(locale,"SharePointRepository.PathMatch") + "</nobr></td>\n"+
-"          <td class=\"formcolumnheader\"><nobr>" + Messages.getBodyString(locale,"SharePointRepository.Action") + "</nobr></td>\n"+
-"          <td class=\"formcolumnheader\"><nobr>" + Messages.getBodyString(locale,"SharePointRepository.AllMetadata") + "</nobr></td>\n"+
-"          <td class=\"formcolumnheader\"><nobr>" + Messages.getBodyString(locale,"SharePointRepository.Fields") + "</nobr></td>\n"+
-"        </tr>\n"
-      );
-      i = 0;
-      l = 0;
-      k = 0;
-      while (i < ds.getChildCount())
+      else if (sn.getType().equals("startpoint"))
       {
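+        // Legacy startpoint nodes are displayed as equivalent "include" metadata rules for the site/lib path.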
-        SpecificationNode sn = ds.getChild(i++);
-        if (sn.getType().equals("startpoint"))
+        String site = sn.getAttributeValue("site");
+        String lib = sn.getAttributeValue("lib");
+        String path = site + "/" + lib + "/*";
+        String allmetadata = sn.getAttributeValue("allmetadata");
+        StringBuilder metadataFieldList = new StringBuilder();
+        List<String> metadataFieldArray = new ArrayList<String>();
+        if (allmetadata == null || !allmetadata.equals("true"))
         {
-          String site = sn.getAttributeValue("site");
-          String lib = sn.getAttributeValue("lib");
-          String path = site + "/" + lib + "/*";
-          String allmetadata = sn.getAttributeValue("allmetadata");
-          StringBuilder metadataFieldList = new StringBuilder();
-          ArrayList metadataFieldArray = new ArrayList();
+          for (int j = 0; j < sn.getChildCount(); j++)
+          {
+            SpecificationNode node = sn.getChild(j);
+            if (node.getType().equals("metafield"))
+            {
+              if (metadataFieldList.length() > 0)
+                metadataFieldList.append(", ");
+              String val = node.getAttributeValue("value");
+              metadataFieldList.append(val);
+              metadataFieldArray.add(val);
+            }
+          }
+          allmetadata = "false";
+        }
+          
+        if (allmetadata.equals("true") || metadataFieldList.length() > 0)
+        {
+          Map<String,Object> item = new HashMap<String,Object>();
+          item.put("THEPATH",path);
+          item.put("THEACTION","include");
+          item.put("ALLFLAG",allmetadata);
+          item.put("FIELDLIST",metadataFieldArray);
+          item.put("FIELDS",metadataFieldList.toString());
+          metadataRules.add(item);
+        }
+      }
+      else if (sn.getType().equals("metadatarule"))
+      {
+        String path = sn.getAttributeValue("match");
+        String action = sn.getAttributeValue("action");
+        String allmetadata = sn.getAttributeValue("allmetadata");
+        StringBuilder metadataFieldList = new StringBuilder();
+        List<String> metadataFieldArray = new ArrayList<String>();
+        if (action.equals("include"))
+        {
           if (allmetadata == null || !allmetadata.equals("true"))
           {
-            int j = 0;
-            while (j < sn.getChildCount())
+            for (int j = 0; j < sn.getChildCount(); j++)
             {
-              SpecificationNode node = sn.getChild(j++);
+              SpecificationNode node = sn.getChild(j);
               if (node.getType().equals("metafield"))
               {
+                String val = node.getAttributeValue("value");
                 if (metadataFieldList.length() > 0)
                   metadataFieldList.append(", ");
-                String val = node.getAttributeValue("value");
                 metadataFieldList.append(val);
                 metadataFieldArray.add(val);
               }
             }
-            allmetadata = "false";
-          }
-          
-          if (allmetadata.equals("true") || metadataFieldList.length() > 0)
-          {
-            String pathDescription = "_"+Integer.toString(k);
-            String pathOpName = "metaop"+pathDescription;
-            out.print(
-"        <tr class=\""+(((l % 2)==0)?"evenformrow":"oddformrow")+"\">\n"+
-"          <td class=\"formcolumncell\">\n"+
-"            <nobr>\n"+
-"              <a name=\""+"meta_"+Integer.toString(k)+"\"/>\n"+
-"              <input type=\"hidden\" name=\""+pathOpName+"\" value=\"\"/>\n"+
-"              <input type=\"button\" value=\"" + Messages.getAttributeString(locale,"SharePointRepository.InsertNewRule") + "\" onClick='Javascript:SpecOp(\""+pathOpName+"\",\"Insert Here\",\"meta_"+Integer.toString(k)+"\")' alt=\""+Messages.getAttributeString(locale,"SharePointRepository.InsertNewMetadataRuleBeforeRule")+Integer.toString(k)+"\"/>\n"+
-"            </nobr>\n"+
-"          </td>\n"+
-"          <td class=\"formcolumncell\" colspan=\"4\"></td>\n"+
-"        </tr>\n"
-            );
-            l++;
-            out.print(
-"        <tr class=\""+(((l % 2)==0)?"evenformrow":"oddformrow")+"\">\n"+
-"          <td class=\"formcolumncell\">\n"+
-"            <nobr>\n"+
-"              <input type=\"button\" value=\"" + Messages.getAttributeString(locale,"SharePointRepository.Delete") + "\" onClick='Javascript:SpecOp(\""+pathOpName+"\",\"Delete\",\"meta_"+Integer.toString(k)+"\")' alt=\""+Messages.getAttributeString(locale,"SharePointRepository.DeleteMetadataRule")+Integer.toString(k)+"\"/>\n"+
-"            </nobr>\n"+
-"          </td>\n"+
-"          <td class=\"formcolumncell\">\n"+
-"            <nobr>\n"+
-"              <input type=\"hidden\" name=\""+"metapath"+pathDescription+"\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(path)+"\"/>\n"+
-"              "+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(path)+"\n"+
-"            </nobr>\n"+
-"          </td>\n"+
-"          <td class=\"formcolumncell\">\n"+
-"            <nobr>\n"+
-"              <input type=\"hidden\" name=\""+"metaflav"+pathDescription+"\" value=\"include\"/>\n"+
-"              "+Messages.getBodyString(locale,"SharePointRepository.include")+"\n"+
-"            </nobr>\n"+
-"          </td>\n"+
-"          <td class=\"formcolumncell\">\n"+
-"            <nobr>\n"+
-"              <input type=\"hidden\" name=\""+"metaall"+pathDescription+"\" value=\""+allmetadata+"\"/>\n"+
-"              "+allmetadata+"\n"+
-"            </nobr>\n"+
-"          </td>\n"+
-"          <td class=\"formcolumncell\">\n"
-            );
-            int q = 0;
-            while (q < metadataFieldArray.size())
-            {
-              String field = (String)metadataFieldArray.get(q++);
-              out.print(
-"              <input type=\"hidden\" name=\""+"metafields"+pathDescription+"\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(field)+"\"/>\n"
-              );
-            }
-            out.print(
-"            "+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(metadataFieldList.toString())+"\n"+
-"          </td>\n"+
-"        </tr>\n"
-            );
-            l++;
-            k++;
+            allmetadata = "false";
           }
         }
-        else if (sn.getType().equals("metadatarule"))
-        {
-          String path = sn.getAttributeValue("match");
-          String action = sn.getAttributeValue("action");
-          String allmetadata = sn.getAttributeValue("allmetadata");
-          StringBuilder metadataFieldList = new StringBuilder();
-          ArrayList metadataFieldArray = new ArrayList();
-          if (action.equals("include"))
-          {
-            if (allmetadata == null || !allmetadata.equals("true"))
-            {
-              int j = 0;
-              while (j < sn.getChildCount())
-              {
-                SpecificationNode node = sn.getChild(j++);
-                if (node.getType().equals("metafield"))
-                {
-                  String val = node.getAttributeValue("value");
-                  if (metadataFieldList.length() > 0)
-                    metadataFieldList.append(", ");
-                  metadataFieldList.append(val);
-                  metadataFieldArray.add(val);
-                }
-              }
-              allmetadata="false";
-            }
-          }
-          else
-            allmetadata = "";
-          
-          String pathDescription = "_"+Integer.toString(k);
-          String pathOpName = "metaop"+pathDescription;
-          out.print(
-"        <tr class=\""+(((l % 2)==0)?"evenformrow":"oddformrow")+"\">\n"+
-"          <td class=\"formcolumncell\">\n"+
-"            <nobr>\n"+
-"              <a name=\""+"meta_"+Integer.toString(k)+"\"/>\n"+
-"              <input type=\"hidden\" name=\""+pathOpName+"\" value=\"\"/>\n"+
-"              <input type=\"button\" value=\"" + Messages.getAttributeString(locale,"SharePointRepository.InsertNewRule") + "\" onClick='Javascript:SpecOp(\""+pathOpName+"\",\"Insert Here\",\"meta_"+Integer.toString(k)+"\")' alt=\""+Messages.getAttributeString(locale,"SharePointRepository.InsertNewMetadataRuleBeforeRule")+Integer.toString(k)+"\"/>\n"+
-"            </nobr>\n"+
-"          </td>\n"+
-"          <td class=\"formcolumncell\" colspan=\"4\"></td>\n"+
-"        </tr>\n"
-          );
-          l++;
-          out.print(
-"        <tr class=\""+(((l % 2)==0)?"evenformrow":"oddformrow")+"\">\n"+
-"          <td class=\"formcolumncell\">\n"+
-"            <nobr>\n"+
-"              <input type=\"button\" value=\"" + Messages.getAttributeString(locale,"SharePointRepository.Delete") + "\" onClick='Javascript:SpecOp(\""+pathOpName+"\",\"Delete\",\"meta_"+Integer.toString(k)+"\")' alt=\""+Messages.getAttributeString(locale,"SharePointRepository.DeleteMetadataRule")+Integer.toString(k)+"\"/>\n"+
-"            </nobr>\n"+
-"          </td>\n"+
-"          <td class=\"formcolumncell\">\n"+
-"            <nobr>\n"+
-"              <input type=\"hidden\" name=\""+"metapath"+pathDescription+"\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(path)+"\"/>\n"+
-"              "+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(path)+"\n"+
-"            </nobr>\n"+
-"          </td>\n"+
-"          <td class=\"formcolumncell\">\n"+
-"            <nobr>\n"+
-"              <input type=\"hidden\" name=\""+"metaflav"+pathDescription+"\" value=\""+action+"\"/>\n"+
-"              "+action+"\n"+
-"            </nobr>\n"+
-"          </td>\n"+
-"          <td class=\"formcolumncell\">\n"+
-"            <nobr>\n"+
-"              <input type=\"hidden\" name=\""+"metaall"+pathDescription+"\" value=\""+allmetadata+"\"/>\n"+
-"              "+allmetadata+"\n"+
-"            </nobr>\n"+
-"          </td>\n"+
-"          <td class=\"formcolumncell\">\n"
-          );
-          int q = 0;
-          while (q < metadataFieldArray.size())
-          {
-            String field = (String)metadataFieldArray.get(q++);
-            out.print(
-"              <input type=\"hidden\" name=\""+"metafields"+pathDescription+"\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(field)+"\"/>\n"
-            );
-          }
-          out.print(
-"            "+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(metadataFieldList.toString())+"\n"+
-"          </td>\n"+
-"        </tr>\n"
-          );
-          l++;
-          k++;
-
-        }
-      }
-
-      if (k == 0)
-      {
-        out.print(
-"        <tr class=\"formrow\"><td class=\"formmessage\" colspan=\"5\">"+Messages.getBodyString(locale,"SharePointRepository.NoMetadataIncluded")+"</td></tr>\n"
-        );
-      }
-      out.print(
-"        <tr class=\""+(((l % 2)==0)?"evenformrow":"oddformrow")+"\">\n"+
-"          <td class=\"formcolumncell\">\n"+
-"            <nobr>\n"+
-"              <a name=\""+"meta_"+Integer.toString(k)+"\"/>\n"+
-"              <input type=\"hidden\" name=\"metaop\" value=\"\"/>\n"+
-"              <input type=\"hidden\" name=\"metapathcount\" value=\""+Integer.toString(k)+"\"/>\n"+
-"              <input type=\"button\" value=\"" + Messages.getAttributeString(locale,"SharePointRepository.AddNewRule") + "\" onClick='Javascript:MetaRuleAddPath(\"meta_"+Integer.toString(k)+"\")' alt=\""+Messages.getAttributeString(locale,"SharePointRepository.AddRule")+"\"/>\n"+
-"            </nobr>\n"+
-"          </td>\n"+
-"          <td class=\"formcolumncell\" colspan=\"4\"></td>\n"+
-"        </tr>\n"+
-"        <tr class=\"formrow\"><td colspan=\"5\" class=\"formseparator\"><hr/></td></tr>\n"+
-"        <tr class=\"formrow\">\n"+
-"          <td class=\"formcolumncell\">\n"+
-"            <nobr>"+Messages.getBodyString(locale,"SharePointRepository.NewRule")+"</nobr>\n"
-      );
-      // The following variables may be in the thread context because postspec.jsp put them there:
-      // (1) "metapath", which contains the rule path as it currently stands;
-      // (2) "metapathstate", which describes what the current path represents.  Values are "unknown", "site", "library".
-      // (3) "metapathlibrary" is the library or list path (if this is known yet).
-      // Once the widget is in the state "unknown", it can only be reset, and cannot be further modified
-      String metaPathSoFar = (String)currentContext.get("metapath");
-      String metaPathState = (String)currentContext.get("metapathstate");
-      String metaPathLibrary = (String)currentContext.get("metapathlibrary");
-      if (metaPathState == null)
-        metaPathState = "unknown";
-      if (metaPathSoFar == null)
-      {
-        metaPathSoFar = "/";
-        metaPathState = "site";
-      }
-
-      String message = null;
-      String[] fields = null;
-      if (metaPathLibrary != null)
-      {
-        // Look up metadata fields
-        int index = metaPathLibrary.lastIndexOf("/");
-        String site = metaPathLibrary.substring(0,index);
-        String libOrList = metaPathLibrary.substring(index+1);
-        Map metaFieldList = null;
-        try
-        {
-          if (metaPathState.equals("library") || metaPathState.equals("file"))
-            metaFieldList = getLibFieldList(site,libOrList);
-          else if (metaPathState.equals("list"))
-            metaFieldList = getListFieldList(site,libOrList);
-        }
-        catch (ManifoldCFException e)
-        {
-          e.printStackTrace();
-          message = e.getMessage();
-        }
-        catch (ServiceInterruption e)
-        {
-          message = "SharePoint unavailable: "+e.getMessage();
-        }
-        if (metaFieldList != null)
-        {
-          fields = new String[metaFieldList.size()];
-          int j = 0;
-          Iterator iter = metaFieldList.keySet().iterator();
-          while (iter.hasNext())
-          {
-            fields[j++] = (String)iter.next();
-          }
-          java.util.Arrays.sort(fields);
-        }
-      }
-      
-      // Grab next site list and lib list
-      ArrayList childSiteList = null;
-      ArrayList childLibList = null;
-      ArrayList childListList = null;
-
-      if (message == null && metaPathState.equals("site"))
-      {
-        try
-        {
-          String queryPath = metaPathSoFar;
-          if (queryPath.equals("/"))
-            queryPath = "";
-          childSiteList = getSites(queryPath);
-          if (childSiteList == null)
-          {
-            // Illegal path - state becomes "unknown".
-            metaPathState = "unknown";
-            metaPathLibrary = null;
-          }
-          childLibList = getDocLibsBySite(queryPath);
-          if (childLibList == null)
-          {
-            // Illegal path - state becomes "unknown"
-            metaPathState = "unknown";
-            metaPathLibrary = null;
-          }
-          childListList = getListsBySite(queryPath);
-          if (childListList == null)
-          {
-            // Illegal path - state becomes "unknown"
-            metaPathState = "unknown";
-            metaPathLibrary = null;
-          }
-        }
-        catch (ManifoldCFException e)
-        {
-          e.printStackTrace();
-          message = e.getMessage();
-        }
-        catch (ServiceInterruption e)
-        {
-          message = "SharePoint unavailable: "+e.getMessage();
-        }
-      }
-      out.print(
-"          </td>\n"+
-"          <td class=\"formcolumncell\">\n"+
-"            <nobr>\n"+
-"              <input type=\"hidden\" name=\"metapathop\" value=\"\"/>\n"+
-"              <input type=\"hidden\" name=\"metapath\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(metaPathSoFar)+"\"/>\n"+
-"              <input type=\"hidden\" name=\"metapathstate\" value=\""+metaPathState+"\"/>\n"
-      );
-      if (metaPathLibrary != null)
-      {
-        out.print(
-"              <input type=\"hidden\" name=\"metapathlibrary\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(metaPathLibrary)+"\"/>\n"
-        );
-      }
-      out.print(
-"              "+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(metaPathSoFar)+"\n"+
-"            </nobr>\n"+
-"          </td>\n"+
-"          <td class=\"formcolumncell\">\n"+
-"            <nobr>\n"+
-"              <select name=\"metaflavor\" size=\"2\">\n"+
-"                <option value=\"include\" selected=\"true\">" + Messages.getBodyString(locale,"SharePointRepository.Include") + "</option>\n"+
-"                <option value=\"exclude\">" + Messages.getBodyString(locale,"SharePointRepository.Exclude") + "</option>\n"+
-"              </select>\n"+
-"            </nobr>\n"+
-"          </td>\n"+
-"          <td class=\"formcolumncell\">\n"+
-"            <nobr>\n"+
-"              <input type=\"checkbox\" name=\"metaall\" value=\"true\"/>"+Messages.getBodyString(locale,"SharePointRepository.IncludeAllMetadata")+"\n"+
-"            </nobr>\n"+
-"          </td>\n"+
-"          <td class=\"formcolumncell\">\n"+
-"            <nobr>\n"
-      );
-      if (fields != null && fields.length > 0)
-      {
-        out.print(
-"              <select name=\"metafields\" multiple=\"true\" size=\"5\">\n"
-        );
-        int q = 0;
-        while (q < fields.length)
-        {
-          String field = fields[q++];
-          out.print(
-"                <option value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(field)+"\"/>"+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(field)+"</option>\n"
-          );
-        }
-        out.print(
-"              </select>\n"
-        );
-      }
-      out.print(
-"            </nobr>\n"+
-"          </td>\n"+
-"        </tr>\n"+
-"        <tr class=\"formrow\"><td colspan=\"5\" class=\"formseparator\"><hr/></td></tr>\n"+
-"        <tr class=\"formrow\">\n"
-      );
-      if (message != null)
-      {
-        out.print(
-"          <td class=\"formmessage\" colspan=\"5\">"+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(message)+"</td></tr>\n"
-        );
-      }
-      else
-      {
-        // What we display depends on the determined state of the path.  If the path is a library or is unknown, all we can do is allow a type-in to append
-        // to it, or allow a reset.  If the path is a site, then we can optionally display libraries, sites, OR allow a type-in.
-        // The path buttons are on the left; they consist of "Reset" (to reset the path), "+" (to add to the path), and "-" (to remove from the path).
-        out.print(
-"          <td class=\"formcolumncell\">\n"+
-"            <nobr>\n"+
-"              <a name=\"metapathwidget\"/>\n"+
-"              <input type=\"button\" value=\"" + Messages.getAttributeString(locale,"SharePointRepository.ResetPath") + "\" onClick='Javascript:MetaPathReset(\"metapathwidget\")' alt=\""+Messages.getAttributeString(locale,"SharePointRepository.ResetMetadataRulePath")+"\"/>\n"
-        );
-        if (metaPathSoFar.length() > 1 && (metaPathState.equals("site") || metaPathState.equals("library")))
-        {
-          out.print(
-"              <input type=\"button\" value=\"-\" onClick='Javascript:MetaPathRemove(\"metapathwidget\")' alt=\""+Messages.getAttributeString(locale,"SharePointRepository.RemoveFromMetadataRulePath")+"\"/>\n"
-          );
-        }
-        out.print(
-"            </nobr>\n"+
-"          </td>\n"+
-"          <td class=\"formcolumncell\" colspan=\"4\">\n"+
-"            <nobr>\n"
-        );
-        if (metaPathState.equals("site") && childSiteList != null && childSiteList.size() > 0)
-        {
-          out.print(
-"              <input type=\"button\" value=\"" + Messages.getAttributeString(locale,"SharePointRepository.AddSite") + "\" onClick='Javascript:MetaPathAppendSite(\"metapathwidget\")' alt=\""+Messages.getAttributeString(locale,"SharePointRepository.AddSiteToMetadataRulePath")+"\"/>\n"+
-"              <select name=\"metasite\" size=\"5\">\n"+
-"                <option value=\"\" selected=\"true\">"+Messages.getBodyString(locale,"SharePointRepository.SelectSite")+"</option>\n"
-          );
-          int q = 0;
-          while (q < childSiteList.size())
-          {
-            org.apache.manifoldcf.crawler.connectors.sharepoint.NameValue childSite = (org.apache.manifoldcf.crawler.connectors.sharepoint.NameValue)childSiteList.get(q++);
-            out.print(
-"                <option value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(childSite.getValue())+"\">"+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(childSite.getPrettyName())+"</option>\n"
-            );
-          }
-          out.print(
-"              </select>\n"
-          );
-        }
+        else
+          allmetadata = "";
         
-        if (metaPathState.equals("site") && childLibList != null && childLibList.size() > 0)
-        {
-          out.print(
-"              <input type=\"button\" value=\"" + Messages.getAttributeString(locale,"SharePointRepository.AddLibrary") + "\" onClick='Javascript:MetaPathAppendLibrary(\"metapathwidget\")' alt=\""+Messages.getAttributeString(locale,"SharePointRepository.AddLibraryToMetadataRulePath")+"\"/>\n"+
-"              <select name=\"metalibrary\" size=\"5\">\n"+
-"                <option value=\"\" selected=\"true\">"+Messages.getBodyString(locale,"SharePointRepository.SelectLibrary")+"</option>\n"
-          );
-          int q = 0;
-          while (q < childLibList.size())
-          {
-            org.apache.manifoldcf.crawler.connectors.sharepoint.NameValue childLib = (org.apache.manifoldcf.crawler.connectors.sharepoint.NameValue)childLibList.get(q++);
-            out.print(
-"                <option value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(childLib.getValue())+"\">"+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(childLib.getPrettyName())+"</option>\n"
-            );
-          }
-          out.print(
-"              </select>\n"
-          );
-        }
-        
-        if (metaPathState.equals("site") && childListList != null && childListList.size() > 0)
-        {
-          out.print(
-"              <input type=\"button\" value=\"" + Messages.getAttributeString(locale,"SharePointRepository.AddList") + "\" onClick='Javascript:MetaPathAppendList(\"metapathwidget\")' alt=\""+Messages.getAttributeString(locale,"SharePointRepository.AddListToMetadataRulePath")+"\"/>\n"+
-"              <select name=\"metalist\" size=\"5\">\n"+
-"                <option value=\"\" selected=\"true\">"+Messages.getBodyString(locale,"SharePointRepository.SelectList")+"</option>\n"
-          );
-          int q = 0;
-          while (q < childListList.size())
-          {
-            org.apache.manifoldcf.crawler.connectors.sharepoint.NameValue childList = (org.apache.manifoldcf.crawler.connectors.sharepoint.NameValue)childListList.get(q++);
-            out.print(
-"                <option value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(childList.getValue())+"\">"+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(childList.getPrettyName())+"</option>\n"
-            );
-          }
-          out.print(
-"              </select>\n"
-          );
-        }
-
-        if (!metaPathState.equals("list"))
-        {
-          out.print(
-  "              <input type=\"button\" value=\"" + Messages.getAttributeString(locale,"SharePointRepository.AddText") + "\" onClick='Javascript:MetaPathAppendText(\"metapathwidget\")' alt=\""+Messages.getAttributeString(locale,"SharePointRepository.AddTextToMetadataRulePath")+"\"/>\n"+
-  "              <input type=\"text\" name=\"metamatch\" size=\"32\" value=\"\"/>\n"
-          );
-        }
-        
-        out.print(
-"            </nobr>\n"+
-"          </td>\n"
-        );
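+        // Package this metadata rule as a map entry for the Velocity template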
+        Map<String,Object> item = new HashMap<String,Object>();
+        item.put("THEPATH",path);
+        item.put("THEACTION",action);
+        item.put("ALLFLAG",allmetadata);
+        item.put("FIELDLIST",metadataFieldArray);
+        item.put("FIELDS",metadataFieldList.toString());
+        metadataRules.add(item);
       }
-      out.print(
-"        </tr>\n"+
-"      </table>\n"+
-"    </td>\n"+
-"  </tr>\n"+
-"  <tr><td class=\"separator\" colspan=\"4\"><hr/></td></tr>\n"+
-"  <tr>\n"+
-"    <td class=\"description\" colspan=\"1\"><nobr>" + Messages.getBodyString(locale,"SharePointRepository.PathMetadata") + "</nobr></td>\n"+
-"    <td class=\"boxcell\" colspan=\"3\">\n"+
-"      <table class=\"displaytable\">\n"+
-"        <tr>\n"+
-"          <td class=\"description\"><nobr>" + Messages.getBodyString(locale,"SharePointRepository.AttributeName") + "</nobr></td>\n"+
-"          <td class=\"value\" colspan=\"3\">\n"+
-"            <nobr>\n"+
-"              <input type=\"text\" name=\"specpathnameattribute\" size=\"20\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(pathNameAttribute)+"\"/>\n"+
-"            </nobr>\n"+
-"          </td>\n"+
-"        </tr>\n"+
-"        <tr><td class=\"separator\" colspan=\"4\"><hr/></td></tr>\n"
-      );
-      i = 0;
-      while (i < matchMap.getMatchCount())
-      {
-        String matchString = matchMap.getMatchString(i);
-        String replaceString = matchMap.getReplaceString(i);
-        out.print(
-"        <tr>\n"+
-"          <td class=\"description\">\n"+
-"            <input type=\"hidden\" name=\""+"specmappingop_"+Integer.toString(i)+"\" value=\"\"/>\n"+
-"            <a name=\""+"mapping_"+Integer.toString(i)+"\">\n"+
-"              <input type=\"button\" onClick='Javascript:SpecOp(\"specmappingop_"+Integer.toString(i)+"\",\"Delete\",\"mapping_"+Integer.toString(i)+"\")' alt=\""+Messages.getAttributeString(locale,"SharePointRepository.DeleteMapping")+Integer.toString(i)+"\" value=\""+Messages.getAttributeString(locale,"SharePointRepository.DeletePathMapping")+"\"/>\n"+
-"            </a>\n"+
-"          </td>\n"+
-"          <td class=\"value\">\n"+
-"            <nobr>\n"+
-"              <input type=\"hidden\" name=\""+"specmatch_"+Integer.toString(i)+"\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(matchString)+"\"/>\n"+
-"              "+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(matchString)+"\n"+
-"            </nobr>\n"+
-"          </td>\n"+
-"          <td class=\"value\">==></td>\n"+
-"          <td class=\"value\">\n"+
-"            <nobr>\n"+
-"              <input type=\"hidden\" name=\""+"specreplace_"+Integer.toString(i)+"\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(replaceString)+"\"/>\n"+
-"              "+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(replaceString)+"\n"+
-"            </nobr>\n"+
-"          </td>\n"+
-"        </tr>\n"
-        );
-        i++;
-      }
-      if (i == 0)
-      {
-        out.print(
-"        <tr><td colspan=\"4\" class=\"message\">" + Messages.getBodyString(locale,"SharePointRepository.NoMappingsSpecified") + "</td></tr>\n"
-        );
-      }
-      out.print(
-"        <tr><td class=\"lightseparator\" colspan=\"4\"><hr/></td></tr>\n"+
-"\n"+
-"        <tr>\n"+
-"          <td class=\"description\">\n"+
-"            <a name=\""+"mapping_"+Integer.toString(i)+"\">\n"+
-"              <input type=\"button\" onClick='Javascript:SpecAddMapping(\"mapping_"+Integer.toString(i+1)+"\")' alt=\"Add to mappings\" value=\"Add Path Mapping\"/>\n"+
-"            </a>\n"+
-"          </td>\n"+
-"          <td class=\"value\"><nobr>"+Messages.getBodyString(locale,"SharePointRepository.MatchRegexp") + "&nbsp;<input type=\"text\" name=\"specmatch\" size=\"32\" value=\"\"/></nobr></td>\n"+
-"          <td class=\"value\">==></td>\n"+
-"          <td class=\"value\"><nobr>" + Messages.getBodyString(locale,"SharePointRepository.ReplaceString") + "&nbsp;<input type=\"text\" name=\"specreplace\" size=\"32\" value=\"\"/></nobr></td>\n"+
-"        </tr>\n"+
-"      </table>\n"+
-"    </td>\n"+
-"  </tr>\n"+
-"</table>\n"
-      );
     }
-    else
+    
+    List<Map<String,String>> mapList = new ArrayList<Map<String,String>>();
+    for (int i = 0; i < matchMap.getMatchCount(); i++)
     {
-      // Hiddens for metadata rules
-      i = 0;
-      k = 0;
-      while (i < ds.getChildCount())
+      String matchString = matchMap.getMatchString(i);
+      String replaceString = matchMap.getReplaceString(i);
+
+      Map<String,String> item = new HashMap<String,String>();
+      item.put("MATCH",matchString);
+      item.put("REPLACE",replaceString);
+      mapList.add(item);
+    }
+    
+    velocityContext.put("PATHNAMEATTRIBUTE",pathNameAttribute);
+    velocityContext.put("MAPLIST",mapList);
+    velocityContext.put("METADATARULES",metadataRules);
+  }
+  
+  /** Fill in transient metadata info */
+  protected void fillInTransientMetadataInfo(Map<String,Object> velocityContext)
+  {
+    // The following variables may be in the thread context because postspec.jsp put them there:
+    // (1) "metapath", which contains the rule path as it currently stands;
+    // (2) "metapathstate", which describes what the current path represents.  Values are "unknown", "site", "library", "list", or "file".
+    // (3) "metapathlibrary" is the library or list path (if this is known yet).
+    // Once the widget is in the state "unknown", it can only be reset, and cannot be further modified
+    String metaPathSoFar = (String)currentContext.get("metapath");
+    String metaPathState = (String)currentContext.get("metapathstate");
+    String metaPathLibrary = (String)currentContext.get("metapathlibrary");
+    if (metaPathState == null)
+      metaPathState = "unknown";
+    if (metaPathSoFar == null)
+    {
+      metaPathSoFar = "/";
+      metaPathState = "site";
+    }
+
+    String message = null;
+    List<NameValue> fieldList = null;
+    if (metaPathLibrary != null)
+    {
+      // Look up metadata fields
+      int index = metaPathLibrary.lastIndexOf("/");
+      String site = metaPathLibrary.substring(0,index);
+      String libOrList = metaPathLibrary.substring(index+1);
+      Map<String,String> metaFieldList = null;
+      try
       {
-        SpecificationNode sn = ds.getChild(i++);
-        if (sn.getType().equals("startpoint"))
-        {
-          String site = sn.getAttributeValue("site");
-          String lib = sn.getAttributeValue("lib");
-          String path = site + "/" + lib + "/*";
-          
-          String allmetadata = sn.getAttributeValue("allmetadata");
-          ArrayList metadataFieldArray = new ArrayList();
-          if (allmetadata == null || !allmetadata.equals("true"))
-          {
-            int j = 0;
-            while (j < sn.getChildCount())
-            {
-              SpecificationNode node = sn.getChild(j++);
-              if (node.getType().equals("metafield"))
-              {
-                String val = node.getAttributeValue("value");
-                metadataFieldArray.add(val);
-              }
-            }
-            allmetadata = "false";
-          }
-          
-          if (allmetadata.equals("true") || metadataFieldArray.size() > 0)
-          {
-            String pathDescription = "_"+Integer.toString(k);
-            out.print(
-"<input type=\"hidden\" name=\""+"metapath"+pathDescription+"\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(path)+"\"/>\n"+
-"<input type=\"hidden\" name=\""+"metaflav"+pathDescription+"\" value=\"include\"/>\n"+
-"<input type=\"hidden\" name=\""+"metaall"+pathDescription+"\" value=\""+allmetadata+"\"/>\n"
-            );
-            int q = 0;
-            while (q < metadataFieldArray.size())
-            {
-              String field = (String)metadataFieldArray.get(q++);
-              out.print(
-"<input type=\"hidden\" name=\""+"metafields"+pathDescription+"\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(field)+"\"/>\n"
-              );
-            }
-            k++;
-          }
-        }
-        else if (sn.getType().equals("metadatarule"))
-        {
-          String match = sn.getAttributeValue("match");
-          String action = sn.getAttributeValue("action");
-          String allmetadata = sn.getAttributeValue("allmetadata");
-          
-          String pathDescription = "_"+Integer.toString(k);
-          out.print(
-"<input type=\"hidden\" name=\""+"metapath"+pathDescription+"\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(match)+"\"/>\n"+
-"<input type=\"hidden\" name=\""+"metaflav"+pathDescription+"\" value=\""+action+"\"/>\n"
-          );
-          if (action.equals("include"))
-          {
-            if (allmetadata == null || allmetadata.length() == 0)
-              allmetadata = "false";
-            out.print(
-"<input type=\"hidden\" name=\""+"metaall"+pathDescription+"\" value=\""+allmetadata+"\"/>\n"
-            );
-            int j = 0;
-            while (j < sn.getChildCount())
-            {
-              SpecificationNode node = sn.getChild(j++);
-              if (node.getType().equals("metafield"))
-              {
-                String value = node.getAttributeValue("value");
-                out.print(
-"<input type=\"hidden\" name=\""+"metafields"+pathDescription+"\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(value)+"\"/>\n"
-                );
-              }
-            }
-          }
-          k++;
-        }
+        if (metaPathState.equals("library") || metaPathState.equals("file"))
+          metaFieldList = getLibFieldList(site,libOrList);
+        else if (metaPathState.equals("list"))
+          metaFieldList = getListFieldList(site,libOrList);
       }
-      out.print(
-"<input type=\"hidden\" name=\"metapathcount\" value=\""+Integer.toString(k)+"\"/>\n"+
-"\n"+
-"<input type=\"hidden\" name=\"specpathnameattribute\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(pathNameAttribute)+"\"/>\n"+
-"<input type=\"hidden\" name=\"specmappingcount\" value=\""+Integer.toString(matchMap.getMatchCount())+"\"/>\n"
-      );
-      i = 0;
-      while (i < matchMap.getMatchCount())
+      catch (ManifoldCFException e)
       {
-        String matchString = matchMap.getMatchString(i);
-        String replaceString = matchMap.getReplaceString(i);
-        out.print(
-"<input type=\"hidden\" name=\""+"specmatch_"+Integer.toString(i)+"\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(matchString)+"\"/>\n"+
-"<input type=\"hidden\" name=\""+"specreplace_"+Integer.toString(i)+"\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(replaceString)+"\"/>\n"
-        );
-        i++;
+        e.printStackTrace();
+        message = e.getMessage();
+      }
+      catch (ServiceInterruption e)
+      {
+        message = "SharePoint unavailable: "+e.getMessage();
+      }
+      if (metaFieldList != null)
+      {
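+        // Sort the field names and repackage them as NameValue pairs for display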
+        String[] fields = new String[metaFieldList.size()];
+        int j = 0;
+        Iterator<String> iter = metaFieldList.keySet().iterator();
+        while (iter.hasNext())
+        {
+          fields[j++] = iter.next();
+        }
+        java.util.Arrays.sort(fields);
+        fieldList = new ArrayList<NameValue>();
+        for (String field : fields)
+        {
+          fieldList.add(new NameValue(field,metaFieldList.get(field)));
+        }
       }
     }
+      
+    // Grab next site list and lib list
+    List<NameValue> childSiteList = null;
+    List<NameValue> childLibList = null;
+    List<NameValue> childListList = null;
+
+    if (message == null && metaPathState.equals("site"))
+    {
+      try
+      {
+        String queryPath = metaPathSoFar;
+        if (queryPath.equals("/"))
+          queryPath = "";
+        childSiteList = getSites(queryPath);
+        if (childSiteList == null)
+        {
+          if (queryPath.length() == 0)
+            throw new ManifoldCFException("Root site is unreachable, or user has no permissions");
+          // Illegal path - state becomes "unknown".
+          metaPathState = "unknown";
+          metaPathLibrary = null;
+        }
+        childLibList = getDocLibsBySite(queryPath);
+        if (childLibList == null)
+        {
+          // Illegal path - state becomes "unknown"
+          if (queryPath.length() == 0)
+            throw new ManifoldCFException("Root site is unreachable, or user has no permissions");
+          metaPathState = "unknown";
+          metaPathLibrary = null;
+        }
+        childListList = getListsBySite(queryPath);
+        if (childListList == null)
+        {
+          // Illegal path - state becomes "unknown"
+          if (queryPath.length() == 0)
+            throw new ManifoldCFException("Root site is unreachable, or user has no permissions");
+          metaPathState = "unknown";
+          metaPathLibrary = null;
+        }
+      }
+      catch (ManifoldCFException e)
+      {
+        Logging.connectors.warn(e.getMessage(),e);
+        message = e.getMessage();
+      }
+      catch (ServiceInterruption e)
+      {
+        message = "SharePoint unavailable: "+e.getMessage();
+      }
+    }
+    
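+    // Hand the current widget state off to the Velocity template; only non-null values are published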
+    if (metaPathSoFar != null)
+      velocityContext.put("METAPATHSOFAR",metaPathSoFar);
+    if (metaPathState != null)
+      velocityContext.put("METAPATHSTATE",metaPathState);
+    if (metaPathLibrary != null)
+      velocityContext.put("METAPATHLIBRARY",metaPathLibrary);
+    if (message != null)
+      velocityContext.put("METAMESSAGE",message);
+    if (fieldList != null)
+      velocityContext.put("METAFIELDLIST",fieldList);
+    if (childSiteList != null)
+      velocityContext.put("METACHILDSITELIST",childSiteList);
+    if (childLibList != null)
+      velocityContext.put("METACHILDLIBLIST",childLibList);
+    if (childListList != null)
+      velocityContext.put("METACHILDLISTLIST",childListList);
+  }
+  
+  /** Fill in paths tab */
+  protected static void fillInPathsTab(Map<String,Object> velocityContext, IHTTPOutput out, DocumentSpecification ds)
+  {
+    List<Map<String,String>> rules = new ArrayList<Map<String,String>>();
+    for (int i = 0; i < ds.getChildCount(); i++)
+    {
+      SpecificationNode sn = ds.getChild(i);
+      if (sn.getType().equals("startpoint"))
+      {
+        String site = sn.getAttributeValue("site");
+        String lib = sn.getAttributeValue("lib");
+        String siteLib = site + "/" + lib + "/";
+
+        // Go through all the file/folder rules for the startpoint, and generate new "rules" corresponding to each.
+        for (int j = 0; j < sn.getChildCount(); j++)
+        {
+          SpecificationNode node = sn.getChild(j);
+          if (node.getType().equals("include") || node.getType().equals("exclude"))
+          {
+            String matchPart = node.getAttributeValue("match");
+            String ruleType = node.getAttributeValue("type");
+            String theFlavor = node.getType();
+            String thePath = siteLib + matchPart;
+            
+            Map<String,String> item = new HashMap<String,String>();
+            item.put("THEPATH",thePath);
+            item.put("THETYPE","file");
+            item.put("THEACTION",theFlavor);
+            rules.add(item);
+            
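+            // A "file" match applies only to the file part of the path, so emulate it with a second rule of the form siteLib/*/<match>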
+            if (ruleType.equals("file") && !matchPart.startsWith("*"))
+            {
+              thePath = siteLib + "*/" + matchPart;
+              item = new HashMap<String,String>();
+              item.put("THEPATH",thePath);
+              item.put("THETYPE","file");
+              item.put("THEACTION",theFlavor);
+              rules.add(item);
+            }
+          }
+        }
+      }
+      else if (sn.getType().equals("pathrule"))
+      {
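+        // New-style path rule: carry it over to the rule list as-is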
+        String match = sn.getAttributeValue("match");
+        String type = sn.getAttributeValue("type");
+        String action = sn.getAttributeValue("action");
+        
+        Map<String,String> item = new HashMap<String,String>();
+        item.put("THEPATH",match);
+        item.put("THETYPE",type);
+        item.put("THEACTION",action);
+        rules.add(item);
+        
+      }
+    }
+    
+    velocityContext.put("RULES",rules);
+  }
+  
+  /** Fill in the transient portion of the Paths tab */
+  protected void fillInTransientPathsInfo(Map<String,Object> velocityContext)
+  {
+    // The following variables may be in the thread context because postspec.jsp put them there:
+    // (1) "specpath", which contains the rule path as it currently stands;
+    // (2) "specpathstate", which describes what the current path represents.  Values are "unknown", "site", "library", "list".
+    // Once the widget is in the state "unknown", it can only be reset, and cannot be further modified
+    // specsitepath may be in the thread context, put there by postspec.jsp 
+    String pathSoFar = (String)currentContext.get("specpath");
+    String pathState = (String)currentContext.get("specpathstate");
+    String pathLibrary = (String)currentContext.get("specpathlibrary");
+    if (pathState == null)
+    {
+      pathState = "unknown";
+      pathLibrary = null;
+    }
+    if (pathSoFar == null)
+    {
+      pathSoFar = "/";
+      pathState = "site";
+      pathLibrary = null;
+    }
+
+    // Grab next site list and lib list
+    List<NameValue> childSiteList = null;
+    List<NameValue> childLibList = null;
+    List<NameValue> childListList = null;
+    String message = null;
+    if (pathState.equals("site"))
+    {
+      try
+      {
+        String queryPath = pathSoFar;
+        if (queryPath.equals("/"))
+          queryPath = "";
+        childSiteList = getSites(queryPath);
+        if (childSiteList == null)
+        {
+          // Illegal path - state becomes "unknown".
+          if (queryPath.length() == 0)
+            throw new ManifoldCFException("Root site is unreachable, or user has no permissions");
+          pathState = "unknown";
+          pathLibrary = null;
+        }
+        childLibList = getDocLibsBySite(queryPath);
+        if (childLibList == null)
+        {
+          // Illegal path - state becomes "unknown"
+          if (queryPath.length() == 0)
+            throw new ManifoldCFException("Root site is unreachable, or user has no permissions");
+          pathState = "unknown";
+          pathLibrary = null;
+        }
+        childListList = getListsBySite(queryPath);
+        if (childListList == null)
+        {
+          // Illegal path - state becomes "unknown"
+          if (queryPath.length() == 0)
+            throw new ManifoldCFException("Root site is unreachable, or user has no permissions");
+          pathState = "unknown";
+          pathLibrary = null;
+        }
+      }
+      catch (ManifoldCFException e)
+      {
+        Logging.connectors.warn(e.getMessage(),e);
+        message = e.getMessage();
+      }
+      catch (ServiceInterruption e)
+      {
+        message = "SharePoint unavailable: "+e.getMessage();
+      }
+    }
+      
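+    // Publish the transient path-widget state for the Velocity template; only non-null values are set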
+    if (pathSoFar != null)
+      velocityContext.put("PATHSOFAR",pathSoFar);
+    if (pathState != null)
+      velocityContext.put("PATHSTATE",pathState);
+    if (pathLibrary != null)
+      velocityContext.put("PATHLIBRARY",pathLibrary);
+    if (message != null)
+      velocityContext.put("MESSAGE",message);
+    if (childSiteList != null)
+      velocityContext.put("CHILDSITELIST",childSiteList);
+    if (childLibList != null)
+      velocityContext.put("CHILDLIBLIST",childLibList);
+    if (childListList != null)
+      velocityContext.put("CHILDLISTLIST",childListList);
+  }
+  
+  /** Fill in security tab */
+  protected static void fillInSecurityTab(Map<String,Object> velocityContext, IHTTPOutput out, DocumentSpecification ds)
+  {
+    // Security tab
+    String security = "on";
+    List<String> accessTokens = new ArrayList<String>();
+    for (int i = 0; i < ds.getChildCount(); i++)
+    {
+      SpecificationNode sn = ds.getChild(i);
+      if (sn.getType().equals("security"))
+      {
+        security = sn.getAttributeValue("value");
+      }
+      else if (sn.getType().equals("access"))
+      {
+        String token = sn.getAttributeValue("token");
+        accessTokens.add(token);
+      }
+    }
+
+    velocityContext.put("SECURITY",security);
+    velocityContext.put("ACCESSTOKENS",accessTokens);
   }
   
   /** Process a specification post.
@@ -4507,403 +3625,13 @@
   public void viewSpecification(IHTTPOutput out, Locale locale, DocumentSpecification ds)
     throws ManifoldCFException, IOException
   {
-    // Display path rules
-    out.print(
-"<table class=\"displaytable\">\n"+
-"  <tr>\n"
-    );
-    int i = 0;
-    int l = 0;
-    boolean seenAny = false;
-    while (i < ds.getChildCount())
-    {
-      SpecificationNode sn = ds.getChild(i++);
-      if (sn.getType().equals("startpoint"))
-      {
-        String site = sn.getAttributeValue("site");
-        String lib = sn.getAttributeValue("lib");
-        String siteLib = site + "/" + lib + "/";
-
-        // Old-style path.
-        // There will be an inclusion or exclusion rule for every entry in the path rules for this startpoint, so loop through them.
-        int j = 0;
-        while (j < sn.getChildCount())
-        {
-          SpecificationNode node = sn.getChild(j++);
-          if (node.getType().equals("include") || node.getType().equals("exclude"))
-          {
-            String matchPart = node.getAttributeValue("match");
-            String ruleType = node.getAttributeValue("type");
-            // Whatever happens, we're gonna display a rule here, so go ahead and set that up.
-            if (seenAny == false)
-            {
-              seenAny = true;
-              out.print(
-"    <td class=\"description\"><nobr>" + Messages.getBodyString(locale,"SharePointRepository.PathRules") + "</nobr></td>\n"+
-"    <td class=\"boxcell\">\n"+
-"      <table class=\"formtable\">\n"+
-"        <tr class=\"formheaderrow\">\n"+
-"          <td class=\"formcolumnheader\"><nobr>" + Messages.getBodyString(locale,"SharePointRepository.PathMatch") + "</nobr></td>\n"+
-"          <td class=\"formcolumnheader\"><nobr>" + Messages.getBodyString(locale,"SharePointRepository.RuleType") + "</nobr></td>\n"+
-"          <td class=\"formcolumnheader\"><nobr>" + Messages.getBodyString(locale,"SharePointRepository.Action") + "</nobr></td>\n"+
-"        </tr>\n"
-              );
-            }
-            String action = node.getType();
-            // Display the path rule corresponding to this match rule
-            // The first part comes from the site/library
-            String completePath;
-            // The match applies to only the file portion.  Therefore, there are TWO rules needed to emulate: sitelib/<match>, and sitelib/*/<match>
-            completePath = siteLib + matchPart;
-            out.print(
-"        <tr class=\""+(((l % 2)==0)?"evenformrow":"oddformrow")+"\">\n"+
-"          <td class=\"formcolumncell\"><nobr>"+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(completePath)+"</nobr></td>\n"+
-"          <td class=\"formcolumncell\"><nobr>" + Messages.getBodyString(locale,"SharePointRepository.file") + "</nobr></td>\n"+
-"          <td class=\"formcolumncell\"><nobr>"+action+"</nobr></td>\n"+
-"        </tr>\n"
-            );
-            l++;
-            if (ruleType.equals("file") && !matchPart.startsWith("*"))
-            {
-              completePath = siteLib + "*/" + matchPart;
-              out.print(
-"        <tr class=\""+(((l % 2)==0)?"evenformrow":"oddformrow")+"\">\n"+
-"          <td class=\"formcolumncell\"><nobr>"+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(completePath)+"</nobr></td>\n"+
-"          <td class=\"formcolumncell\"><nobr>" + Messages.getBodyString(locale,"SharePointRepository.file") + "</nobr></td>\n"+
-"          <td class=\"formcolumncell\"><nobr>"+action+"</nobr></td>\n"+
-"        </tr>\n"
-              );
-              l++;
-            }
-          }
-        }
-      }
-      else if (sn.getType().equals("pathrule"))
-      {
-        String path = sn.getAttributeValue("match");
-        String action = sn.getAttributeValue("action");
-        String ruleType = sn.getAttributeValue("type");
-        if (seenAny == false)
-        {
-          seenAny = true;
-          out.print(
-"    <td class=\"description\"><nobr>" + Messages.getBodyString(locale,"SharePointRepository.PathRules") + "</nobr></td>\n"+
-"    <td class=\"boxcell\">\n"+
-"      <table class=\"formtable\">\n"+
-"        <tr class=\"formheaderrow\">\n"+
-"          <td class=\"formcolumnheader\"><nobr>" + Messages.getBodyString(locale,"SharePointRepository.PathMatch") + "</nobr></td>\n"+
-"          <td class=\"formcolumnheader\"><nobr>" + Messages.getBodyString(locale,"SharePointRepository.RuleType") + "</nobr></td>\n"+
-"          <td class=\"formcolumnheader\"><nobr>" + Messages.getBodyString(locale,"SharePointRepository.Action") + "</nobr></td>\n"+
-"        </tr>\n"
-          );
-        }
-        out.print(
-"        <tr class=\""+(((l % 2)==0)?"evenformrow":"oddformrow")+"\">\n"+
-"          <td class=\"formcolumncell\"><nobr>"+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(path)+"</nobr></td>\n"+
-"          <td class=\"formcolumncell\"><nobr>"+ruleType+"</nobr></td>\n"+
-"          <td class=\"formcolumncell\"><nobr>"+action+"</nobr></td>\n"+
-"        </tr>\n"
-        );
-        l++;
-      }
-    }
-    if (seenAny)
-    {
-      out.print(
-"      </table>\n"+
-"    </td>\n"
-      );
-    }
-    else
-    {
-      out.print(
-"    <td colspan=\"2\" class=\"message\"><nobr>" + Messages.getBodyString(locale,"SharePointRepository.NoDocumentsWillBeIncluded") + "</nobr></td>\n"
-      );
-    }
-    out.print(
-"  </tr>\n"
-    );
-  
-    // Finally, display metadata rules
-    out.print(
-"  <tr>\n"
-    );
-
-    i = 0;
-    l = 0;
-    seenAny = false;
-    while (i < ds.getChildCount())
-    {
-      SpecificationNode sn = ds.getChild(i++);
-      if (sn.getType().equals("startpoint"))
-      {
-        // Old-style
-        String site = sn.getAttributeValue("site");
-        String lib = sn.getAttributeValue("lib");
-        String path = site + "/" + lib + "/*";
-        
-        String allmetadata = sn.getAttributeValue("allmetadata");
-        StringBuilder metadataFieldList = new StringBuilder();
-        if (allmetadata == null || !allmetadata.equals("true"))
-        {
-          int j = 0;
-          while (j < sn.getChildCount())
-          {
-            SpecificationNode node = sn.getChild(j++);
-            if (node.getType().equals("metafield"))
-            {
-              String value = node.getAttributeValue("value");
-              if (metadataFieldList.length() > 0)
-                metadataFieldList.append(", ");
-              metadataFieldList.append(value);
-            }
-          }
-          allmetadata = "false";
-        }
-        if (allmetadata.equals("true") || metadataFieldList.length() > 0)
-        {
-          if (seenAny == false)
-          {
-            seenAny = true;
-            out.print(
-"    <td class=\"description\"><nobr>" + Messages.getBodyString(locale,"SharePointRepository.Metadata2") + "</nobr></td>\n"+
-"    <td class=\"boxcell\">\n"+
-"      <table class=\"formtable\">\n"+
-"        <tr class=\"formheaderrow\">\n"+
-"          <td class=\"formcolumnheader\"><nobr>" + Messages.getBodyString(locale,"SharePointRepository.PathMatch") + "</nobr></td>\n"+
-"          <td class=\"formcolumnheader\"><nobr>" + Messages.getBodyString(locale,"SharePointRepository.Action") + "</nobr></td>\n"+
-"          <td class=\"formcolumnheader\"><nobr>" + Messages.getBodyString(locale,"SharePointRepository.AllMetadata") + "</nobr></td>\n"+
-"          <td class=\"formcolumnheader\"><nobr>" + Messages.getBodyString(locale,"SharePointRepository.Fields") + "</nobr></td>\n"+
-"        </tr>\n"
-            );
-          }
-          out.print(
-"        <tr class=\""+(((l % 2)==0)?"evenformrow":"oddformrow")+"\">\n"+
-"          <td class=\"formcolumncell\"><nobr>"+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(path)+"</nobr></td>\n"+
-"          <td class=\"formcolumncell\"><nobr>" + Messages.getBodyString(locale,"SharePointRepository.include2") + "</nobr></td>\n"+
-"          <td class=\"formcolumncell\"><nobr>"+allmetadata+"</nobr></td>\n"+
-"          <td class=\"formcolumncell\">"+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(metadataFieldList.toString())+"</td>\n"+
-"        </tr>\n"
-          );
-          l++;
-        }
-      }
-      else if (sn.getType().equals("metadatarule"))
-      {
-        String path = sn.getAttributeValue("match");
-        String action = sn.getAttributeValue("action");
-        String allmetadata = sn.getAttributeValue("allmetadata");
-        StringBuilder metadataFieldList = new StringBuilder();
-        if (action.equals("include"))
-        {
-          if (allmetadata == null || !allmetadata.equals("true"))
-          {
-            int j = 0;
-            while (j < sn.getChildCount())
-            {
-              SpecificationNode node = sn.getChild(j++);
-              if (node.getType().equals("metafield"))
-              {
-                String fieldName = node.getAttributeValue("value");
-                if (metadataFieldList.length() > 0)
-                  metadataFieldList.append(", ");
-                metadataFieldList.append(fieldName);
-              }
-            }
-            allmetadata = "false";
-          }
-        }
-        else
-          allmetadata = "";
-        if (seenAny == false)
-        {
-          seenAny = true;
-          out.print(
-"    <td class=\"description\"><nobr>" + Messages.getBodyString(locale,"SharePointRepository.Metadata2") + "</nobr></td>\n"+
-"    <td class=\"boxcell\">\n"+
-"      <table class=\"formtable\">\n"+
-"        <tr class=\"formheaderrow\">\n"+
-"          <td class=\"formcolumnheader\"><nobr>" + Messages.getBodyString(locale,"SharePointRepository.PathMatch") + "</nobr></td>\n"+
-"          <td class=\"formcolumnheader\"><nobr>" + Messages.getBodyString(locale,"SharePointRepository.Action") + "</nobr></td>\n"+
-"          <td class=\"formcolumnheader\"><nobr>" + Messages.getBodyString(locale,"SharePointRepository.AllMetadata") + "</nobr></td>\n"+
-"          <td class=\"formcolumnheader\"><nobr>" + Messages.getBodyString(locale,"SharePointRepository.Fields") + "</nobr></td>\n"+
-"        </tr>\n"
-          );
-        }
-        out.print(
-"        <tr class=\""+(((l % 2)==0)?"evenformrow":"oddformrow")+"\">\n"+
-"          <td class=\"formcolumncell\"><nobr>"+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(path)+"</nobr></td>\n"+
-"          <td class=\"formcolumncell\"><nobr>"+action+"</nobr></td>\n"+
-"          <td class=\"formcolumncell\"><nobr>"+allmetadata+"</nobr></td>\n"+
-"          <td class=\"formcolumncell\">"+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(metadataFieldList.toString())+"</td>\n"+
-"        </tr>\n"
-        );
-        l++;
-      }
-    }
-    if (seenAny)
-    {
-      out.print(
-"      </table>\n"+
-"    </td>\n"
-      );
-    }
-    else
-    {
-      out.print(
-"    <td colspan=\"2\" class=\"message\"><nobr>" + Messages.getBodyString(locale,"SharePointRepository.NoMetadataWillBeIncluded") + "</nobr></td>\n"
-      );
-    }
-    out.print(
-"  </tr>\n"+
-"  <tr><td class=\"separator\" colspan=\"2\"><hr/></td></tr>\n"
-    );
-    // Find whether security is on or off
-    i = 0;
-    boolean securityOn = true;
-    while (i < ds.getChildCount())
-    {
-      SpecificationNode sn = ds.getChild(i++);
-      if (sn.getType().equals("security"))
-      {
-        String securityValue = sn.getAttributeValue("value");
-        if (securityValue.equals("off"))
-          securityOn = false;
-        else if (securityValue.equals("on"))
-          securityOn = true;
-      }
-    }
-    out.print(
-"  <tr>\n"+
-"    <td class=\"description\"><nobr>" + Messages.getBodyString(locale,"SharePointRepository.Security2") + "</nobr></td>\n"+
-"    <td class=\"value\"><nobr>"+(securityOn?Messages.getBodyString(locale,"SharePointRepository.Enabled2"):Messages.getBodyString(locale,"SharePointRepository.Disabled"))+"</nobr></td>\n"+
-"  </tr>\n"+
-"\n"+
-"  <tr><td class=\"separator\" colspan=\"2\"><hr/></td></tr>\n"
-    );
-    // Go through looking for access tokens
-    seenAny = false;
-    i = 0;
-    while (i < ds.getChildCount())
-    {
-      SpecificationNode sn = ds.getChild(i++);
-      if (sn.getType().equals("access"))
-      {
-        if (seenAny == false)
-        {
-          out.print(
-"  <tr><td class=\"description\"><nobr>" + Messages.getBodyString(locale,"SharePointRepository.AccessToken") + "</nobr></td>\n"+
-"    <td class=\"value\">\n"
-          );
-          seenAny = true;
-        }
-        String token = sn.getAttributeValue("token");
-        out.print(
-"      <nobr>"+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(token)+"</nobr><br/>\n"
-        );
-      }
-    }
-
-    if (seenAny)
-    {
-      out.print(
-"    </td>\n"+
-"  </tr>\n"
-      );
-    }
-    else
-    {
-      out.print(
-"  <tr><td class=\"message\" colspan=\"2\">" + Messages.getBodyString(locale,"SharePointRepository.NoAccessTokensSpecified") + "</td></tr>\n"
-      );
-    }
-    out.print(
-"  <tr><td class=\"separator\" colspan=\"2\"><hr/></td></tr>\n"
-    );
-    // Find the path-name metadata attribute name
-    i = 0;
-    String pathNameAttribute = "";
-    while (i < ds.getChildCount())
-    {
-      SpecificationNode sn = ds.getChild(i++);
-      if (sn.getType().equals("pathnameattribute"))
-      {
-        pathNameAttribute = sn.getAttributeValue("value");
-      }
-    }
-    out.print(
-"  <tr>\n"
-    );
-    if (pathNameAttribute.length() > 0)
-    {
-      out.print(
-"    <td class=\"description\"><nobr>" + Messages.getBodyString(locale,"SharePointRepository.PathMetadataAttributeName") + "</nobr></td>\n"+
-"    <td class=\"value\"><nobr>"+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(pathNameAttribute)+"</nobr></td>\n"
-      );
-    }
-    else
-    {
-      out.print(
-"    <td class=\"message\" colspan=\"2\">" + Messages.getBodyString(locale,"SharePointRepository.NoPathNameMetadataAttributeSpecified") + "</td>\n"
-      );
-    }
-    out.print(
-"  </tr>\n"+
-"\n"+
-"  <tr><td class=\"separator\" colspan=\"2\"><hr/></td></tr>\n"+
-"\n"+
-"  <tr>\n"+
-"\n"
-    );
-    // Find the path-value mapping data
-    i = 0;
-    org.apache.manifoldcf.crawler.connectors.sharepoint.MatchMap matchMap = new org.apache.manifoldcf.crawler.connectors.sharepoint.MatchMap();
-    while (i < ds.getChildCount())
-    {
-      SpecificationNode sn = ds.getChild(i++);
-      if (sn.getType().equals("pathmap"))
-      {
-        String pathMatch = sn.getAttributeValue("match");
-        String pathReplace = sn.getAttributeValue("replace");
-        matchMap.appendMatchPair(pathMatch,pathReplace);
-      }
-    }
-    if (matchMap.getMatchCount() > 0)
-    {
-      out.print(
-"    <td class=\"description\"><nobr>" + Messages.getBodyString(locale,"SharePointRepository.PathValueMapping") + "</nobr></td>\n"+
-"    <td class=\"value\">\n"+
-"      <table class=\"displaytable\">\n"
-      );
-      i = 0;
-      while (i < matchMap.getMatchCount())
-      {
-        String matchString = matchMap.getMatchString(i);
-        String replaceString = matchMap.getReplaceString(i);
-        out.print(
-"        <tr>\n"+
-"          <td class=\"value\"><nobr>"+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(matchString)+"</nobr></td>\n"+
-"          <td class=\"value\">==></td>\n"+
-"          <td class=\"value\"><nobr>"+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(replaceString)+"</nobr></td>\n"+
-"        </tr>\n"
-        );
-        i++;
-      }
-      out.print(
-"      </table>\n"+
-"    </td>\n"
-      );
-    }
-    else
-    {
-      out.print(
-"    <td class=\"message\" colspan=\"2\">" + Messages.getBodyString(locale,"SharePointRepository.NoMappingsSpecified") + "</td>\n"
-      );
-    }
-    out.print(
-"  </tr>\n"+
-"</table>\n"
-    );
+    Map<String,Object> velocityContext = new HashMap<String,Object>();
+    
+    fillInSecurityTab(velocityContext,out,ds);
+    fillInPathsTab(velocityContext,out,ds);
+    fillInMetadataTab(velocityContext,out,ds);
+    
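+    // Template-side sketch (an assumption, not part of this patch): viewSpecification.html is
+    // expected to iterate these context entries, e.g. #foreach($rule in $RULES) ... $rule.THEPATH ... #end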
+    Messages.outputResourceWithVelocity(out,locale,"viewSpecification.html",velocityContext);
   }
 
   protected static class ExecuteMethodThread extends Thread
@@ -4968,13 +3696,23 @@
       }
     }
 
-    public Throwable getException()
+    public int finishUp()
+      throws InterruptedException, IOException, org.apache.http.HttpException
     {
-      return exception;
-    }
-
-    public int getResponse()
-    {
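+      // Wait for the background thread to complete, rethrow in this thread any exception it recorded, and otherwise return the HTTP status code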
+      join();
+      if (exception != null)
+      {
+        if (exception instanceof IOException)
+          throw (IOException)exception;
+        else if (exception instanceof Error)
+          throw (Error)exception;
+        else if (exception instanceof org.apache.http.HttpException)
+          throw (org.apache.http.HttpException)exception;
+        else if (exception instanceof RuntimeException)
+          throw (RuntimeException)exception;
+        else
+          throw new RuntimeException("Unexpected exception type thrown: "+exception.getClass().getName());
+      }
       return returnCode;
     }
   }
@@ -4987,11 +3725,11 @@
   * @param docLibrary name
   * @return list of the fields
   */
-  public Map getLibFieldList( String parentSite, String docLibrary )
+  public Map<String,String> getLibFieldList( String parentSite, String docLibrary )
     throws ServiceInterruption, ManifoldCFException
   {
     getSession();
-    return proxy.getFieldList( encodePath(parentSite), proxy.getDocLibID( encodePath(parentSite), parentSite, docLibrary) );
+    return proxy.getFieldList( encodePath(parentSite), proxy.getDocLibID( encodePath(parentSite), parentSite, docLibrary ) );
   }
 
   /**
@@ -5000,11 +3738,11 @@
   * @param docLibrary name
   * @return list of the fields
   */
-  public Map getListFieldList( String parentSite, String listName )
+  public Map<String,String> getListFieldList( String parentSite, String listName )
     throws ServiceInterruption, ManifoldCFException
   {
     getSession();
-    return proxy.getFieldList( encodePath(parentSite), proxy.getListID( encodePath(parentSite), parentSite, listName) );
+    return proxy.getFieldList( encodePath(parentSite), proxy.getListID( encodePath(parentSite), parentSite, listName ) );
   }
 
   /**
@@ -5012,7 +3750,7 @@
   * @param parentSite the unencoded parent site path to search for subsites, empty for root.
   * @return list of the sites
   */
-  public ArrayList getSites( String parentSite )
+  public List<NameValue> getSites( String parentSite )
     throws ServiceInterruption, ManifoldCFException
   {
     getSession();
@@ -5024,7 +3762,7 @@
   * @param parentSite the unencoded parent site to search for libraries, empty for root.
   * @return list of the libraries
   */
-  public ArrayList getDocLibsBySite( String parentSite )
+  public List<NameValue> getDocLibsBySite( String parentSite )
     throws ManifoldCFException, ServiceInterruption
   {
     getSession();
@@ -5036,7 +3774,7 @@
   * @param parentSite the unencoded parent site to search for lists, empty for root.
   * @return list of the lists
   */
-  public ArrayList getListsBySite( String parentSite )
+  public List<NameValue> getListsBySite( String parentSite )
     throws ManifoldCFException, ServiceInterruption
   {
     getSession();
@@ -5478,6 +4216,20 @@
     return false;
   }
 
+  /** Check if a list item attachment should be included.
+  *@param attachmentPath is the path to the attachment, including sites and list name, beneath the root site.
+  *@param documentSpecification is the document specification.
+  *@return true if the attachment should be included.
+  */
+  protected boolean checkIncludeListItemAttachment( String attachmentPath, DocumentSpecification documentSpecification )
+  {
+    if (Logging.connectors.isDebugEnabled())
+      Logging.connectors.debug( "SharePoint: Checking whether to include list item attachment '" + attachmentPath + "'" );
+
+    // There are no attachment rules, so they are always included
+    return true;
+  }
+
   /** Check if a list item should be included.
   *@param itemPath is the path to the item, including sites and list name, beneath the root site.
   *@param documentSpecification is the document specification.
diff --git a/connectors/sharepoint/connector/src/main/java/org/apache/manifoldcf/sharepoint/CommonsHTTPSender.java b/connectors/sharepoint/connector/src/main/java/org/apache/manifoldcf/sharepoint/CommonsHTTPSender.java
new file mode 100644
index 0000000..36b4f0a
--- /dev/null
+++ b/connectors/sharepoint/connector/src/main/java/org/apache/manifoldcf/sharepoint/CommonsHTTPSender.java
@@ -0,0 +1,923 @@
+/*
+* Copyright 2001-2004 The Apache Software Foundation.
+*
+* Licensed under the Apache License, Version 2.0 (the "License");
+* you may not use this file except in compliance with the License.
+* You may obtain a copy of the License at
+*
+*      http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*
+* $Id: CommonsHTTPSender.java 988245 2010-08-23 18:39:35Z kwright $
+*/
+package org.apache.manifoldcf.sharepoint;
+
+import org.apache.manifoldcf.core.common.XThreadInputStream;
+
+import org.apache.axis.AxisFault;
+import org.apache.axis.Constants;
+import org.apache.axis.Message;
+import org.apache.axis.MessageContext;
+import org.apache.axis.components.logger.LogFactory;
+import org.apache.axis.components.net.CommonsHTTPClientProperties;
+import org.apache.axis.components.net.CommonsHTTPClientPropertiesFactory;
+import org.apache.axis.components.net.TransportClientProperties;
+import org.apache.axis.components.net.TransportClientPropertiesFactory;
+import org.apache.axis.transport.http.HTTPConstants;
+import org.apache.axis.handlers.BasicHandler;
+import org.apache.axis.soap.SOAP12Constants;
+import org.apache.axis.soap.SOAPConstants;
+import org.apache.axis.utils.JavaUtils;
+import org.apache.axis.utils.Messages;
+import org.apache.axis.utils.NetworkUtils;
+
+import org.apache.http.client.HttpClient;
+import org.apache.http.client.methods.HttpRequestBase;
+import org.apache.http.client.methods.HttpGet;
+import org.apache.http.client.methods.HttpPost;
+import org.apache.http.HttpEntity;
+import org.apache.http.HttpResponse;
+import org.apache.http.Header;
+import org.apache.http.params.CoreProtocolPNames;
+import org.apache.http.params.HttpProtocolParams;
+import org.apache.http.ProtocolVersion;
+import org.apache.http.util.EntityUtils;
+import org.apache.http.message.BasicHeader;
+
+import org.apache.http.conn.ConnectTimeoutException;
+import org.apache.http.client.RedirectException;
+import org.apache.http.client.CircularRedirectException;
+import org.apache.http.NoHttpResponseException;
+import org.apache.http.HttpException;
+
+import org.apache.commons.logging.Log;
+
+import javax.xml.soap.MimeHeader;
+import javax.xml.soap.MimeHeaders;
+import javax.xml.soap.SOAPException;
+import java.io.ByteArrayOutputStream;
+import java.io.FilterInputStream;
+import java.io.InterruptedIOException;
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.OutputStream;
+import java.io.Reader;
+import java.io.InputStreamReader;
+import java.io.Writer;
+import java.io.StringWriter;
+import java.io.File;
+import java.io.FileOutputStream;
+import java.io.FileInputStream;
+import java.net.URL;
+import java.util.ArrayList;
+import java.util.Hashtable;
+import java.util.Iterator;
+import java.util.HashMap;
+import java.util.Map;
+import java.util.List;
+
+/* Class to use httpcomponents to communicate with a SOAP server.
+* I've replaced the original rather complicated class with a much simpler one that
+* relies on having an HttpClient object passed into the invoke() method.  Since
+* the object is already set up, not much needs to be done in here.
+*/
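+/* Caller-side sketch (an assumption, not part of this patch): the code that creates the Axis
+* call/stub is expected to supply a preconfigured HttpClient under HTTPCLIENT_PROPERTY, e.g.
+*   call.setProperty(CommonsHTTPSender.HTTPCLIENT_PROPERTY, httpClient);
+* so that invoke() can retrieve it from the MessageContext.
+*/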
+
+public class CommonsHTTPSender extends BasicHandler {
+
+  public static final String HTTPCLIENT_PROPERTY = "ManifoldCF_HttpClient";
+
+  /** Field log           */
+  protected static Log log =
+    LogFactory.getLog(CommonsHTTPSender.class.getName());
+
+  /** Properties */
+  protected CommonsHTTPClientProperties clientProperties;
+
+  public CommonsHTTPSender() {
+    this.clientProperties = CommonsHTTPClientPropertiesFactory.create();
+  }
+
+  /**
+  * invoke sends the request SOAP message using the HttpClient supplied in the message
+  * context, and then reads the response SOAP message back from the SOAP server
+  *
+  * @param msgContext the message context
+  *
+  * @throws AxisFault
+  */
+  public void invoke(MessageContext msgContext) throws AxisFault {
+    if (log.isDebugEnabled())
+    {
+      log.debug(Messages.getMessage("enter00",
+        "CommonsHTTPSender::invoke"));
+    }
+    
+    // Catch all exceptions and turn them into AxisFaults
+    try
+    {
+      // Get the URL
+      URL targetURL =
+        new URL(msgContext.getStrProp(MessageContext.TRANS_URL));
+
+      // Get the HttpClient
+      HttpClient httpClient = (HttpClient)msgContext.getProperty(HTTPCLIENT_PROPERTY);
+
+      boolean posting = true;
+      // If we're SOAP 1.2, allow the web method to be set from the
+      // MessageContext.
+      if (msgContext.getSOAPConstants() == SOAPConstants.SOAP12_CONSTANTS) {
+        String webMethod = msgContext.getStrProp(SOAP12Constants.PROP_WEBMETHOD);
+        if (webMethod != null) {
+          posting = webMethod.equals(HTTPConstants.HEADER_POST);
+        }
+      }
+
+      boolean http10 = false;
+      String httpVersion = msgContext.getStrProp(MessageContext.HTTP_TRANSPORT_VERSION);
+      if (httpVersion != null) {
+        if (httpVersion.equals(HTTPConstants.HEADER_PROTOCOL_V10)) {
+          http10 = true;
+        }
+        // assume 1.1
+      }
+
+      HttpRequestBase method;
+        
+      if (posting) {
+        HttpPost postMethod = new HttpPost(targetURL.toString());
+          
+        // Set to false by default; addContextInfo may override it below
+        HttpProtocolParams.setUseExpectContinue(postMethod.getParams(),false);
+
+        Message reqMessage = msgContext.getRequestMessage();
+          
+        boolean httpChunkStream = addContextInfo(postMethod, msgContext);
+
+        HttpEntity requestEntity = null;
+        requestEntity = new MessageRequestEntity(reqMessage, httpChunkStream,
+          http10 || !httpChunkStream);
+        postMethod.setEntity(requestEntity);
+        method = postMethod;
+      } else {
+        method = new HttpGet(targetURL.toString());
+      }
+        
+      if (http10)
+        HttpProtocolParams.setVersion(method.getParams(),new ProtocolVersion("HTTP",1,0));
+
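+      // Issue the HTTP request on a background thread; the finally block below aborts it if response handling fails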
+      BackgroundHTTPThread methodThread = new BackgroundHTTPThread(httpClient,method);
+      methodThread.start();
+      try
+      {
+        int returnCode = methodThread.getResponseCode();
+          
+        String contentType =
+          getHeader(methodThread, HTTPConstants.HEADER_CONTENT_TYPE);
+        String contentLocation =
+          getHeader(methodThread, HTTPConstants.HEADER_CONTENT_LOCATION);
+        String contentLength =
+          getHeader(methodThread, HTTPConstants.HEADER_CONTENT_LENGTH);
+        
+        if ((returnCode > 199) && (returnCode < 300)) {
+
+          // SOAP return is OK - so fall through
+        } else if (msgContext.getSOAPConstants() ==
+          SOAPConstants.SOAP12_CONSTANTS) {
+          // For now, if we're SOAP 1.2, fall through, since the range of
+          // valid result codes is much greater
+        } else if ((contentType != null) && !contentType.equals("text/html")
+          && ((returnCode > 499) && (returnCode < 600))) {
+
+          // SOAP Fault should be in here - so fall through
+        } else {
+          String statusMessage = methodThread.getResponseStatus();
+          AxisFault fault = new AxisFault("HTTP",
+            "(" + returnCode + ")" + statusMessage,
+            null, null);
+
+          fault.setFaultDetailString(
+            Messages.getMessage("return01",
+            "" + returnCode,
+            getResponseBodyAsString(methodThread)));
+          fault.addFaultDetail(Constants.QNAME_FAULTDETAIL_HTTPERRORCODE,
+            Integer.toString(returnCode));
+          throw fault;
+        }
+
+        String contentEncoding =
+          methodThread.getFirstHeader(HTTPConstants.HEADER_CONTENT_ENCODING);
+        if (contentEncoding != null) {
+          AxisFault fault = new AxisFault("HTTP",
+            "unsupported content-encoding of '" + contentEncoding + "' found",
+            null, null);
+          throw fault;
+        }
+
+        Map<String,List<String>> responseHeaders = methodThread.getResponseHeaders();
+
+        InputStream dataStream = methodThread.getSafeInputStream();
+
+        Message outMsg = new Message(new BackgroundInputStream(methodThread,dataStream),
+          false, contentType, contentLocation);
+          
+        // Transfer HTTP headers of HTTP message to MIME headers of SOAP message
+        MimeHeaders responseMimeHeaders = outMsg.getMimeHeaders();
+        for (String name : responseHeaders.keySet())
+        {
+          List<String> values = responseHeaders.get(name);
+          for (String value : values) {
+            responseMimeHeaders.addHeader(name,value);
+          }
+        }
+        outMsg.setMessageType(Message.RESPONSE);
+          
+        // Put the message in the message context.
+        msgContext.setResponseMessage(outMsg);
+        
+        // Pass off the method thread to the stream for closure
+        methodThread = null;
+      }
+      finally
+      {
+        if (methodThread != null)
+        {
+          methodThread.abort();
+          methodThread.finishUp();
+        }
+      }
+
+    } catch (AxisFault af) {
+      log.debug(af);
+      throw af;
+    } catch (Exception e) {
+      log.debug(e);
+      throw AxisFault.makeFault(e);
+    }
+
+    if (log.isDebugEnabled()) {
+      log.debug(Messages.getMessage("exit00",
+        "CommonsHTTPSender::invoke"));
+    }
+  }
+
+  /**
+  * Extracts info from message context.
+  *
+  * @param method the POST method to configure
+  * @param msgContext the message context
+  * @return true if HTTP chunked streaming should be used for the request
+  */
+  private static boolean addContextInfo(HttpPost method,
+    MessageContext msgContext)
+    throws AxisFault {
+
+    boolean httpChunkStream = false;
+
+    // Get SOAPAction, default to ""
+    String action = msgContext.useSOAPAction()
+      ? msgContext.getSOAPActionURI()
+      : "";
+
+    if (action == null) {
+      action = "";
+    }
+
+    Message msg = msgContext.getRequestMessage();
+
+    if (msg != null){
+
+      // First, transfer MIME headers of SOAPMessage to HTTP headers.
+      // Some of these might be overridden later.
+      MimeHeaders mimeHeaders = msg.getMimeHeaders();
+      if (mimeHeaders != null) {
+        for (Iterator i = mimeHeaders.getAllHeaders(); i.hasNext(); ) {
+          MimeHeader mimeHeader = (MimeHeader) i.next();
+          method.addHeader(mimeHeader.getName(),
+            mimeHeader.getValue());
+        }
+      }
+
+      method.setHeader(new BasicHeader(HTTPConstants.HEADER_CONTENT_TYPE,
+        msg.getContentType(msgContext.getSOAPConstants())));
+    }
+    
+    method.setHeader(new BasicHeader("Accept","*/*"));
+
+    method.setHeader(new BasicHeader(HTTPConstants.HEADER_SOAP_ACTION,
+      "\"" + action + "\""));
+    method.setHeader(new BasicHeader(HTTPConstants.HEADER_USER_AGENT, Messages.getMessage("axisUserAgent")));
+
+
+    // process user defined headers for information.
+    Hashtable userHeaderTable =
+      (Hashtable) msgContext.getProperty(HTTPConstants.REQUEST_HEADERS);
+
+    if (userHeaderTable != null) {
+      for (Iterator e = userHeaderTable.entrySet().iterator();
+        e.hasNext();) {
+        Map.Entry me = (Map.Entry) e.next();
+        Object keyObj = me.getKey();
+
+        if (null == keyObj) {
+          continue;
+        }
+        String key = keyObj.toString().trim();
+        String value = me.getValue().toString().trim();
+
+        if (key.equalsIgnoreCase(HTTPConstants.HEADER_EXPECT) &&
+          value.equalsIgnoreCase(HTTPConstants.HEADER_EXPECT_100_Continue)) {
+          HttpProtocolParams.setUseExpectContinue(method.getParams(),true);
+        } else if (key.equalsIgnoreCase(HTTPConstants.HEADER_TRANSFER_ENCODING_CHUNKED)) {
+          String val = me.getValue().toString();
+          if (null != val)  {
+            httpChunkStream = JavaUtils.isTrue(val);
+          }
+        } else {
+          method.addHeader(key, value);
+        }
+      }
+    }
+    
+    return httpChunkStream;
+  }
+
+  private static String getHeader(BackgroundHTTPThread methodThread, String headerName)
+    throws IOException, InterruptedException, HttpException {
+    String header = methodThread.getFirstHeader(headerName);
+    return (header == null) ? null : header.trim();
+  }
+
+  private static String getResponseBodyAsString(BackgroundHTTPThread methodThread)
+    throws IOException, InterruptedException, HttpException {
+    InputStream is = methodThread.getSafeInputStream();
+    if (is != null)
+    {
+      try
+      {
+        String charSet = methodThread.getCharSet();
+        if (charSet == null)
+          charSet = "utf-8";
+        char[] buffer = new char[65536];
+        Reader r = new InputStreamReader(is,charSet);
+        Writer w = new StringWriter();
+        try
+        {
+          while (true)
+          {
+            int amt = r.read(buffer);
+            if (amt == -1)
+              break;
+            w.write(buffer,0,amt);
+          }
+        }
+        finally
+        {
+          w.flush();
+        }
+        return w.toString();
+      }
+      finally
+      {
+        is.close();
+      }
+    }
+    return "";
+  }
+  
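+  /** HttpEntity implementation that streams an Axis request Message directly to the
+  * connection's output stream, using HTTP chunking when no content length is required.
+  */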
+  private static class MessageRequestEntity implements HttpEntity {
+
+    private final Message message;
+    private final boolean httpChunkStream; //Use HTTP chunking or not.
+    private final boolean contentLengthNeeded;
+
+    public MessageRequestEntity(Message message, boolean httpChunkStream, boolean contentLengthNeeded) {
+      this.message = message;
+      this.httpChunkStream = httpChunkStream;
+      this.contentLengthNeeded = contentLengthNeeded;
+    }
+
+    @Override
+    public boolean isChunked() {
+      return httpChunkStream;
+    }
+    
+    @Override
+    public void consumeContent()
+      throws IOException {
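+      // Since isStreaming() returns false, EntityUtils.consume() treats this entity as already consumed.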
+      EntityUtils.consume(this);
+    }
+    
+    @Override
+    public boolean isRepeatable() {
+      return true;
+    }
+
+    @Override
+    public boolean isStreaming() {
+      return false;
+    }
+    
+    @Override
+    public InputStream getContent()
+      throws IOException, IllegalStateException {
+      // MHL
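+      // The message content is not exposed as a stream; it is written out via writeTo() instead.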
+      return null;
+    }
+    
+    @Override
+    public void writeTo(OutputStream out)
+      throws IOException {
+      try {
+        this.message.writeTo(out);
+      } catch (SOAPException e) {
+        throw new IOException(e.getMessage(), e);
+      }
+    }
+
+    @Override
+    public long getContentLength() {
+      if (contentLengthNeeded) {
+        try {
+          return message.getContentLength();
+        } catch (Exception e) {
+        }
+      }
+      // Unknown (chunked) length
+      return -1L;
+    }
+
+    @Override
+    public Header getContentType() {
+      return null; // a separate header is added
+    }
+
+    @Override
+    public Header getContentEncoding() {
+      return null;
+    }
+  }
+
+  /** This input stream wraps a background http transaction thread, so that
+  * the thread is ended when the stream is closed.
+  */
+  private static class BackgroundInputStream extends InputStream {
+    
+    private BackgroundHTTPThread methodThread = null;
+    private InputStream xThreadInputStream = null;
+    
+    /** Construct an http transaction stream.  The stream is driven by a background
+    * thread, whose existence is tied to this class.  The sequence of activity that
+    * this class expects is as follows:
+    * (1) Construct the httpclient and request object and initialize them
+    * (2) Construct a background method thread, and start it
+    * (3) If the response calls for it, call this constructor, and put the resulting stream
+    *    into the message response
+    * (4) Otherwise, terminate the background method thread in the standard manner,
+    *    being sure NOT to construct this stream.
+    */
+    public BackgroundInputStream(BackgroundHTTPThread methodThread, InputStream xThreadInputStream)
+    {
+      this.methodThread = methodThread;
+      this.xThreadInputStream = xThreadInputStream;
+    }
+    
+    @Override
+    public int available()
+      throws IOException
+    {
+      if (xThreadInputStream != null)
+        return xThreadInputStream.available();
+      return super.available();
+    }
+    
+    @Override
+    public void close()
+      throws IOException
+    {
+      try
+      {
+        if (xThreadInputStream != null)
+        {
+          xThreadInputStream.close();
+          xThreadInputStream = null;
+        }
+      }
+      finally
+      {
+        if (methodThread != null)
+        {
+          methodThread.abort();
+          try
+          {
+            methodThread.finishUp();
+          }
+          catch (InterruptedException e)
+          {
+            throw new InterruptedIOException(e.getMessage());
+          }
+          methodThread = null;
+        }
+      }
+    }
+    
+    @Override
+    public void mark(int readlimit)
+    {
+      if (xThreadInputStream != null)
+        xThreadInputStream.mark(readlimit);
+      else
+        super.mark(readlimit);
+    }
+    
+    @Override
+    public void reset()
+      throws IOException
+    {
+      if (xThreadInputStream != null)
+        xThreadInputStream.reset();
+      else
+        super.reset();
+    }
+    
+    @Override
+    public boolean markSupported()
+    {
+      if (xThreadInputStream != null)
+        return xThreadInputStream.markSupported();
+      return super.markSupported();
+    }
+    
+    @Override
+    public long skip(long n)
+      throws IOException
+    {
+      if (xThreadInputStream != null)
+        return xThreadInputStream.skip(n);
+      return super.skip(n);
+    }
+    
+    @Override
+    public int read(byte[] b, int off, int len)
+      throws IOException
+    {
+      if (xThreadInputStream != null)
+        return xThreadInputStream.read(b,off,len);
+      return super.read(b,off,len);
+    }
+
+    @Override
+    public int read(byte[] b)
+      throws IOException
+    {
+      if (xThreadInputStream != null)
+        return xThreadInputStream.read(b);
+      return super.read(b);
+    }
+    
+    @Override
+    public int read()
+      throws IOException
+    {
+      if (xThreadInputStream != null)
+        return xThreadInputStream.read();
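+      // Stream already closed (or never supplied): signal end-of-stream.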
+      return -1;
+    }
+    
+  }
+
+  /** This thread does the actual socket communication with the server.
+  * It's set up so that it can be abandoned at shutdown time.
+  *
+  * The way it works is as follows:
+  * - it starts the transaction
+  * - it receives the response, and saves that for the calling class to inspect
+  * - it transfers the data part to an input stream provided to the calling class
+  * - it shuts the connection down
+  *
+  * If there is an error, the sequence is aborted, and an exception is recorded
+  * for the calling class to examine.
+  *
+  * The calling class basically accepts the sequence above.  It starts the
+  * thread, and tries to get a response code.  If instead an exception is seen,
+  * the exception is thrown up the stack.
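+  *
+  * A minimal usage sketch, following the same pattern as invoke() above:
+  * <pre>
+  *   BackgroundHTTPThread t = new BackgroundHTTPThread(httpClient, method);
+  *   t.start();
+  *   try {
+  *     int code = t.getResponseCode();            // blocks until a response (or recorded error) is available
+  *     InputStream body = t.getSafeInputStream(); // blocks until the body stream is ready
+  *     // ... hand the stream (and the thread) off, e.g. to a BackgroundInputStream ...
+  *     t = null;
+  *   } finally {
+  *     if (t != null) {
+  *       t.abort();
+  *       t.finishUp();                            // join the background thread
+  *     }
+  *   }
+  * </pre>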
+  */
+  protected static class BackgroundHTTPThread extends Thread
+  {
+    /** Client and method, all preconfigured */
+    protected final HttpClient httpClient;
+    protected final HttpRequestBase executeMethod;
+    
+    protected HttpResponse response = null;
+    protected Throwable responseException = null;
+    protected XThreadInputStream threadStream = null;
+    protected InputStream bodyStream = null;
+    protected String charSet = null;
+    protected boolean streamCreated = false;
+    protected Throwable streamException = null;
+    protected boolean abortThread = false;
+
+    protected Throwable shutdownException = null;
+
+    protected Throwable generalException = null;
+    
+    public BackgroundHTTPThread(HttpClient httpClient, HttpRequestBase executeMethod)
+    {
+      super();
+      setDaemon(true);
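+      // Daemon thread, so an abandoned transaction cannot keep the JVM from exiting.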
+      this.httpClient = httpClient;
+      this.executeMethod = executeMethod;
+    }
+
+    public void run()
+    {
+      try
+      {
+        try
+        {
+          // Call the execute method appropriately
+          synchronized (this)
+          {
+            if (!abortThread)
+            {
+              try
+              {
+                response = httpClient.execute(executeMethod);
+              }
+              catch (java.net.SocketTimeoutException e)
+              {
+                responseException = e;
+              }
+              catch (ConnectTimeoutException e)
+              {
+                responseException = e;
+              }
+              catch (InterruptedIOException e)
+              {
+                throw e;
+              }
+              catch (Throwable e)
+              {
+                responseException = e;
+              }
+              this.notifyAll();
+            }
+          }
+          
+          // Start the transfer of the content
+          if (responseException == null)
+          {
+            synchronized (this)
+            {
+              if (!abortThread)
+              {
+                try
+                {
+                  HttpEntity entity = response.getEntity();
+                  bodyStream = entity.getContent();
+                  if (bodyStream != null)
+                  {
+                    threadStream = new XThreadInputStream(bodyStream);
+                    charSet = EntityUtils.getContentCharSet(entity);
+                  }
+                  streamCreated = true;
+                }
+                catch (java.net.SocketTimeoutException e)
+                {
+                  streamException = e;
+                }
+                catch (ConnectTimeoutException e)
+                {
+                  streamException = e;
+                }
+                catch (InterruptedIOException e)
+                {
+                  throw e;
+                }
+                catch (Throwable e)
+                {
+                  streamException = e;
+                }
+                this.notifyAll();
+              }
+            }
+          }
+          
+          if (responseException == null && streamException == null)
+          {
+            if (threadStream != null)
+            {
+              // Stuff the content until we are done
+              threadStream.stuffQueue();
+            }
+          }
+          
+        }
+        finally
+        {
+          if (bodyStream != null)
+          {
+            try
+            {
+              bodyStream.close();
+            }
+            catch (IOException e)
+            {
+            }
+            bodyStream = null;
+          }
+          synchronized (this)
+          {
+            try
+            {
+              executeMethod.abort();
+            }
+            catch (Throwable e)
+            {
+              shutdownException = e;
+            }
+            this.notifyAll();
+          }
+        }
+      }
+      catch (Throwable e)
+      {
+        // We catch exceptions here that should ONLY be InterruptedExceptions, as a result of the thread being aborted.
+        this.generalException = e;
+      }
+    }
+
+    public int getResponseCode()
+      throws InterruptedException, IOException, HttpException
+    {
+      // Must wait until the response object is there
+      while (true)
+      {
+        synchronized (this)
+        {
+          checkException(responseException);
+          if (response != null)
+            return response.getStatusLine().getStatusCode();
+          wait();
+        }
+      }
+    }
+
+    public String getResponseStatus()
+      throws InterruptedException, IOException, HttpException
+    {
+      // Must wait until the response object is there
+      while (true)
+      {
+        synchronized (this)
+        {
+          checkException(responseException);
+          if (response != null)
+            return response.getStatusLine().toString();
+          wait();
+        }
+      }
+    }
+
+    public Map<String,List<String>> getResponseHeaders()
+      throws InterruptedException, IOException, HttpException
+    {
+      // Must wait for the response object to appear
+      while (true)
+      {
+        synchronized (this)
+        {
+          checkException(responseException);
+          if (response != null)
+          {
+            Header[] headers = response.getAllHeaders();
+            Map<String,List<String>> rval = new HashMap<String,List<String>>();
+            int i = 0;
+            while (i < headers.length)
+            {
+              Header h = headers[i++];
+              String name = h.getName();
+              String value = h.getValue();
+              List<String> values = rval.get(name);
+              if (values == null)
+              {
+                values = new ArrayList<String>();
+                rval.put(name,values);
+              }
+              values.add(value);
+            }
+            return rval;
+          }
+          wait();
+        }
+      }
+
+    }
+    
+    public String getFirstHeader(String headerName)
+      throws InterruptedException, IOException, HttpException
+    {
+      // Must wait for the response object to appear
+      while (true)
+      {
+        synchronized (this)
+        {
+          checkException(responseException);
+          if (response != null)
+          {
+            Header h = response.getFirstHeader(headerName);
+            if (h == null)
+              return null;
+            return h.getValue();
+          }
+          wait();
+        }
+      }
+    }
+
+    public InputStream getSafeInputStream()
+      throws InterruptedException, IOException, HttpException
+    {
+      // Must wait until stream is created, or until we note an exception was thrown.
+      while (true)
+      {
+        synchronized (this)
+        {
+          if (responseException != null)
+            throw new IllegalStateException("Check for response before getting stream");
+          checkException(streamException);
+          if (streamCreated)
+            return threadStream;
+          wait();
+        }
+      }
+    }
+    
+    public String getCharSet()
+      throws InterruptedException, IOException, HttpException
+    {
+      while (true)
+      {
+        synchronized (this)
+        {
+          if (responseException != null)
+            throw new IllegalStateException("Check for response before getting charset");
+          checkException(streamException);
+          if (streamCreated)
+            return charSet;
+          wait();
+        }
+      }
+    }
+    
+    public void abort()
+    {
+      // This will be called during the finally
+      // block in the case where all is well (and
+      // the stream completed) and in the case where
+      // there were exceptions.
+      synchronized (this)
+      {
+        if (streamCreated)
+        {
+          if (threadStream != null)
+            threadStream.abort();
+        }
+        abortThread = true;
+      }
+    }
+    
+    public void finishUp()
+      throws InterruptedException
+    {
+      join();
+    }
+    
+    protected synchronized void checkException(Throwable exception)
+      throws IOException, HttpException
+    {
+      if (exception != null)
+      {
+        Throwable e = exception;
+        if (e instanceof IOException)
+          throw (IOException)e;
+        else if (e instanceof HttpException)
+          throw (HttpException)e;
+        else if (e instanceof RuntimeException)
+          throw (RuntimeException)e;
+        else if (e instanceof Error)
+          throw (Error)e;
+        else
+          throw new RuntimeException("Unhandled exception of type: "+e.getClass().getName(),e);
+      }
+    }
+
+  }
+
+}
+
diff --git a/connectors/sharepoint/connector/src/main/native2ascii/org/apache/manifoldcf/authorities/authorities/sharepoint/common_en_US.properties b/connectors/sharepoint/connector/src/main/native2ascii/org/apache/manifoldcf/authorities/authorities/sharepoint/common_en_US.properties
new file mode 100644
index 0000000..8bc42ec
--- /dev/null
+++ b/connectors/sharepoint/connector/src/main/native2ascii/org/apache/manifoldcf/authorities/authorities/sharepoint/common_en_US.properties
@@ -0,0 +1,78 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+SharePointAuthority.DomainController=Domain Controller
+SharePointAuthority.Cache=Cache
+SharePointAuthority.DomainControllers=Domain Controllers:
+SharePointAuthority.DomainControllerName=Domain controller name
+SharePointAuthority.DomainSuffix=Domain suffix
+SharePointAuthority.AdministrativeUserName=Administrative user name
+SharePointAuthority.AdministrativePassword=Administrative password
+SharePointAuthority.Authentication=Authentication
+SharePointAuthority.LoginNameADAttribute=Login name AD attribute
+SharePointAuthority.CacheLifetime=Cache lifetime:
+SharePointAuthority.CacheLRUSize=Cache LRU size:
+SharePointAuthority.minutes=minutes
+SharePointAuthority.AddToEnd=Add to End
+SharePointAuthority.AddRuleToEnd=Add rule to end of list
+SharePointAuthority.Delete=Delete
+SharePointAuthority.DeleteRuleNumber=Delete rule #
+SharePointAuthority.InsertBefore=Insert Before
+SharePointAuthority.InsertBeforeRuleNumber=Insert before rule #
+SharePointAuthority.EnterADomainControllerServerName=Enter a domain controller server name
+SharePointAuthority.DomainController2=Domain Controller
+SharePointAuthority.AdministrativeUserNameCannotBeNull=Administrative user name cannot be null
+SharePointAuthority.AuthenticationCannotBeNull=Authentication cannot be null
+SharePointAuthority.CacheLifetimeCannotBeNull=Cache lifetime cannot be null
+SharePointAuthority.CacheLifetimeMustBeAnInteger=Cache lifetime must be an integer
+SharePointAuthority.CacheLRUSizeCannotBeNull=Cache LRU size cannot be null
+SharePointAuthority.CacheLRUSizeMustBeAnInteger=Cache LRU size must be an integer
+SharePointAuthority.certificate=certificate(s)
+
+SharePointAuthority.Server=Server
+SharePointAuthority.ServerSharePointVersion=Server SharePoint version:
+SharePointAuthority.ServerProtocol=Server protocol:
+SharePointAuthority.ServerName=Server name:
+SharePointAuthority.ServerPort=Server port:
+SharePointAuthority.SitePath=Site path:
+SharePointAuthority.UserName=User name:
+SharePointAuthority.Password=Password:
+SharePointAuthority.SSLCertificateList=SSL certificate list:
+SharePointAuthority.NoCertificatesPresent=No certificates present
+SharePointAuthority.Delete=Delete
+SharePointAuthority.DeleteCert=Delete cert
+SharePointAuthority.Add=Add
+SharePointAuthority.AddCert=Add cert
+SharePointAuthority.Parameters=Parameters:
+SharePointAuthority.ChooseACertificateFile=Choose a certificate file
+SharePointAuthority.PleaseSupplyAValidNumber=Please supply a valid number
+SharePointAuthority.PleaseSpecifyAnyServerPathInformation=Please specify any server path information in the site path field, not the server name field
+SharePointAuthority.SitePathMustBeginWithWCharacter=Site path must begin with a '/' character
+SharePointAuthority.SitePathCannotEndWithACharacter=Site path cannot end with a '/' character
+SharePointAuthority.AValidSharePointUserNameHasTheForm=A valid SharePoint user name has the form <domain>\\<user>
+SharePointAuthority.PleaseFillInASharePointServerName=Please fill in a SharePoint server name
+SharePointAuthority.PleaseSpecifyAnyServerPathInformationInTheSitePathField=Please specify any server path information in the site path field, not the server name field
+SharePointAuthority.PleaseSupplyASharePointPortNumber=Please supply a SharePoint port number, or none for default
+SharePointAuthority.TheConnectionRequiresAValidSharePointUserName=The connection requires a valid SharePoint user name of the form <domain>\\<user>
+SharePointAuthority.Certificate=Certificate:
+
+SharePointAuthority.AuthorizationModel=Authorization model
+SharePointAuthority.AuthorizationModelColon=Authorization model:
+SharePointAuthority.Classic=Classic
+SharePointAuthority.ClaimSpace=Claim Space
+
+
+
+
diff --git a/connectors/sharepoint/connector/src/main/native2ascii/org/apache/manifoldcf/authorities/authorities/sharepoint/common_ja_JP.properties b/connectors/sharepoint/connector/src/main/native2ascii/org/apache/manifoldcf/authorities/authorities/sharepoint/common_ja_JP.properties
new file mode 100644
index 0000000..e447dd3
--- /dev/null
+++ b/connectors/sharepoint/connector/src/main/native2ascii/org/apache/manifoldcf/authorities/authorities/sharepoint/common_ja_JP.properties
@@ -0,0 +1,74 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+SharePointAuthority.DomainController=ドメインコントローラ
+SharePointAuthority.Cache=キャッシュ
+SharePointAuthority.DomainControllers=ドメインコントローラ:
+SharePointAuthority.DomainControllerName=ドメインコントローラ名
+SharePointAuthority.DomainSuffix=Domain suffix
+SharePointAuthority.AdministrativeUserName=管理者
+SharePointAuthority.AdministrativePassword=管理者パスワード
+SharePointAuthority.Authentication=認証
+SharePointAuthority.LoginNameADAttribute=ログイン名AD属性
+SharePointAuthority.CacheLifetime=キャッシュライフタイム:
+SharePointAuthority.CacheLRUSize=キャッシュLRUサイズ:
+SharePointAuthority.minutes=分
+SharePointAuthority.AddToEnd=最後に追加する
+SharePointAuthority.AddRuleToEnd=リストの最後にルールを追加します。
+SharePointAuthority.Delete=削除する
+SharePointAuthority.DeleteRuleNumber=ルールを削除します。 #
+SharePointAuthority.InsertBefore=前に挿入します。
+SharePointAuthority.InsertBeforeRuleNumber=ルールの前に挿入 #
+SharePointAuthority.EnterADomainControllerServerName=ドメインコントローラ名を入力してください
+SharePointAuthority.DomainController2=ドメインコントローラ
+SharePointAuthority.AdministrativeUserNameCannotBeNull=管理者名を入力してください
+SharePointAuthority.AuthenticationCannotBeNull=認証情報を入力してください
+SharePointAuthority.CacheLifetimeCannotBeNull=キャッシュライフタイムを入力してください
+SharePointAuthority.CacheLifetimeMustBeAnInteger=キャッシュライフタイムには整数を入力してください
+SharePointAuthority.CacheLRUSizeCannotBeNull=キャッシュLRUサイズを入力してください
+SharePointAuthority.CacheLRUSizeMustBeAnInteger=キャッシュLRUサイズには整数を入力してください
+SharePointAuthority.certificate=証明書
+
+SharePointAuthority.Server=サーバ
+SharePointAuthority.ServerSharePointVersion=サーバSharePointバージョン:
+SharePointAuthority.ServerProtocol=サーバプロトコル:
+SharePointAuthority.ServerName=サーバ名:
+SharePointAuthority.ServerPort=サーバポート番号:
+SharePointAuthority.SitePath=サイトパス:
+SharePointAuthority.UserName=ユーザ名:
+SharePointAuthority.Password=パスワード:
+SharePointAuthority.SSLCertificateList=SSL認証一覧:
+SharePointAuthority.NoCertificatesPresent=認証がありません
+SharePointAuthority.Delete=削除
+SharePointAuthority.DeleteCert=認証の削除
+SharePointAuthority.Add=追加
+SharePointAuthority.AddCert=認証の追加
+SharePointAuthority.Parameters=引数:
+SharePointAuthority.ChooseACertificateFile=証明書ファイルを選択してください
+SharePointAuthority.PleaseSupplyAValidNumber=数字を入力してください
+SharePointAuthority.PleaseSpecifyAnyServerPathInformation=サーバ名ではなく、サイトパスにサーバパス情報を入力してください
+SharePointAuthority.SitePathMustBeginWithWCharacter=サイトパスは文字「/」から始めてください
+SharePointAuthority.SitePathCannotEndWithACharacter=サイトパスの末尾に文字「/」を使わないでください
+SharePointAuthority.AValidSharePointUserNameHasTheForm=SharePointユーザ名は<ドメイン>\\<ユーザ>\で指定してください
+SharePointAuthority.PleaseFillInASharePointServerName=SharePointサーバ名を入力してください
+SharePointAuthority.PleaseSpecifyAnyServerPathInformationInTheSitePathField=サーバ名ではなく、サイトパスにサーバパス情報を入力してください
+SharePointAuthority.PleaseSupplyASharePointPortNumber=SharePointポート番号を入力してください。デフォルト値を利用する場合は空白にしてください
+SharePointAuthority.TheConnectionRequiresAValidSharePointUserName=コネクションのSharePointユーザ名は<ドメイン>\\<ユーザ>\形式で指定してください
+SharePointAuthority.Certificate=証明証:
+
+SharePointAuthority.AuthorizationModel=Authorization model
+SharePointAuthority.AuthorizationModelColon=Authorization model:
+SharePointAuthority.Classic=Classic
+SharePointAuthority.ClaimSpace=Claim Space
diff --git a/connectors/sharepoint/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/sharepoint/common_en_US.properties b/connectors/sharepoint/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/sharepoint/common_en_US.properties
index 11a488f..eb7c493 100644
--- a/connectors/sharepoint/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/sharepoint/common_en_US.properties
+++ b/connectors/sharepoint/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/sharepoint/common_en_US.properties
@@ -13,6 +13,10 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
+SharePointRepository.AuthorityTypeColon=Authority type:
+SharePointRepository.SharePoint=SharePoint
+SharePointRepository.ActiveDirectory=Active Directory
+
 SharePointRepository.List=List
 SharePointRepository.AddList=Add List
 SharePointRepository.SelectList=Select list
@@ -20,9 +24,13 @@
 SharePointRepository.PleaseSelectAListFirst=Please select a list first
 SharePointRepository.AddListToMetadataRulePath=Add List to Metadata Rule Path
 
+SharePointRepository.AddToMappings=Add to mappings
+SharePointRepository.AddPathMapping=Add Path Mapping
+
 SharePointRepository.NoAccessTokensPresent=No access tokens present
 SharePointRepository.NoAccessTokensSpecified=No access tokens specified
 SharePointRepository.Server=Server
+SharePointRepository.AuthorityType=Authority type
 SharePointRepository.Paths=Paths
 SharePointRepository.Security=Security
 SharePointRepository.Metadata=Metadata
@@ -47,7 +55,6 @@
 SharePointRepository.InsertNewRule=Insert New Rule
 SharePointRepository.Delete=Delete
 SharePointRepository.DeleteRule=Delete rule #
-SharePointRepository.InsertNewRule=Insert New Rule
 SharePointRepository.NoDocumentsCurrentlyIncluded=No documents currently included
 SharePointRepository.AddNewRule=Add New Rule
 SharePointRepository.File=File
@@ -66,8 +73,6 @@
 SharePointRepository.Action=Action
 SharePointRepository.AllMetadata=All metadata?
 SharePointRepository.Fields=Fields
-SharePointRepository.InsertNewRule=Insert New Rule
-SharePointRepository.AddNewRule=Add New Rule
 SharePointRepository.ResetPath=Reset Path
 SharePointRepository.PathMetadata=Path metadata:
 SharePointRepository.AttributeName=Attribute name:
@@ -83,7 +88,7 @@
 SharePointRepository.include2=include
 SharePointRepository.NoMetadataWillBeIncluded=No metadata will be included
 SharePointRepository.AccessToken=Access tokens:
-SharePointRepository.PathMetadataAttributeName:=Path metadata attribute name:
+SharePointRepository.PathMetadataAttributeName=Path metadata attribute name:
 SharePointRepository.NoPathNameMetadataAttributeSpecified=No path-name metadata attribute specified
 SharePointRepository.PathValueMapping=Path-value mapping:
 SharePointRepository.NoMappingsSpecified=No mappings specified
@@ -110,7 +115,6 @@
 SharePointRepository.MatchStringMustBeValidRegularExpression=Match string must be valid regular expression
 SharePointRepository.InsertNewRuleBeforeRule=Insert new rule before rule #
 SharePointRepository.DeleteRule=Delete rule #
-SharePointRepository.InsertNewRuleBeforeRule=Insert new rule before rule #
 SharePointRepository.NewRule=New rule:
 SharePointRepository.ResetRulePath=Reset Rule Path
 SharePointRepository.RemoveFromRulePath=Remove from Rule Path
diff --git a/connectors/sharepoint/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/sharepoint/common_ja_JP.properties b/connectors/sharepoint/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/sharepoint/common_ja_JP.properties
index ea94c75..c94c98b 100644
--- a/connectors/sharepoint/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/sharepoint/common_ja_JP.properties
+++ b/connectors/sharepoint/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/sharepoint/common_ja_JP.properties
@@ -13,6 +13,10 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
+SharePointRepository.AuthorityTypeColon=Authority type:
+SharePointRepository.SharePoint=SharePoint
+SharePointRepository.ActiveDirectory=Active Directory
+
 SharePointRepository.List=一覧表
 SharePointRepository.AddList=一覧を追加します
 SharePointRepository.SelectList=一覧を選択
@@ -20,9 +24,13 @@
 SharePointRepository.PleaseSelectAListFirst=最初のリストを選択してください
 SharePointRepository.AddListToMetadataRulePath=メタデータ·ルールのパスに一覧を追加します
 
+SharePointRepository.AddToMappings=Add to mappings
+SharePointRepository.AddPathMapping=Add Path Mapping
+
 SharePointRepository.NoAccessTokensPresent=アクセストークンが存在しません
 SharePointRepository.NoAccessTokensSpecified=アクセストークンが未定義です
 SharePointRepository.Server=サーバ
+SharePointRepository.AuthorityType=Authority type
 SharePointRepository.Paths=パス
 SharePointRepository.Security=セキュリティ
 SharePointRepository.Metadata=メタデータ
diff --git a/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/authorities/authorities/sharepoint/editADConfiguration.js b/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/authorities/authorities/sharepoint/editADConfiguration.js
new file mode 100644
index 0000000..d3fd88e
--- /dev/null
+++ b/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/authorities/authorities/sharepoint/editADConfiguration.js
@@ -0,0 +1,134 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<script type="text/javascript">
+<!--
+function checkConfig()
+{
+  var i = 0;
+  var count = editconnection.dcrecord_count.value;
+  while (i < count)
+  {
+    var username = eval("editconnection.dcrecord_username_"+i+".value");
+    if (username == "")
+    {
+      alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointAuthority.AdministrativeUserNameCannotBeNull'))");
+      eval("editconnection.dcrecord_username_"+i+".focus()");
+      return false;
+    }
+    var authentication = eval("editconnection.dcrecord_authentication_"+i+".value");
+    if (authentication == "")
+    {
+      alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointAuthority.AuthenticationCannotBeNull'))");
+      eval("editconnection.dcrecord_authentication_"+i+".focus()");
+      return false;
+    }
+    i += 1;
+  }
+  return true;
+}
+
+function checkConfigForSave()
+{
+  if (editconnection.cachelifetime.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointAuthority.CacheLifetimeCannotBeNull'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointAuthority.Cache'))");
+    editconnection.cachelifetime.focus();
+    return false;
+  }
+  if (editconnection.cachelifetime.value != "" && !isInteger(editconnection.cachelifetime.value))
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointAuthority.CacheLifetimeMustBeAnInteger'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointAuthority.Cache'))");
+    editconnection.cachelifetime.focus();
+    return false;
+  }
+  if (editconnection.cachelrusize.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointAuthority.CacheLRUSizeCannotBeNull'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointAuthority.Cache'))");
+    editconnection.cachelrusize.focus();
+    return false;
+  }
+  if (editconnection.cachelrusize.value != "" && !isInteger(editconnection.cachelrusize.value))
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointAuthority.CacheLRUSizeMustBeAnInteger'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointAuthority.Cache'))");
+    editconnection.cachelrusize.focus();
+    return false;
+  }
+  return true;
+}
+
+function deleteDC(i)
+{
+  eval("editconnection.dcrecord_op_"+i+".value=\"Delete\"");
+  postFormSetAnchor("dcrecord");
+}
+
+function insertDC(i)
+{
+  if (editconnection.dcrecord_domaincontrollername.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointAuthority.EnterADomainControllerServerName'))");
+    editconnection.dcrecord_domaincontrollername.focus();
+    return;
+  }
+  if (editconnection.dcrecord_username.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointAuthority.AdministrativeUserNameCannotBeNull'))");
+    editconnection.dcrecord_username.focus();
+    return;
+  }
+  if (editconnection.dcrecord_authentication.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointAuthority.AuthenticationCannotBeNull'))");
+    editconnection.dcrecord_authentication.focus();
+    return;
+  }
+  eval("editconnection.dcrecord_op_"+i+".value=\"Insert\"");
+  postFormSetAnchor("dcrecord_"+i);
+}
+
+function addDC()
+{
+  if (editconnection.dcrecord_domaincontrollername.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointAuthority.EnterADomainControllerServerName'))");
+    editconnection.dcrecord_domaincontrollername.focus();
+    return;
+  }
+  if (editconnection.dcrecord_username.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointAuthority.AdministrativeUserNameCannotBeNull'))");
+    editconnection.dcrecord_username.focus();
+    return;
+  }
+  if (editconnection.dcrecord_authentication.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointAuthority.AuthenticationCannotBeNull'))");
+    editconnection.dcrecord_authentication.focus();
+    return;
+  }
+  editconnection.dcrecord_op.value="Add";
+  postFormSetAnchor("dcrecord");
+}
+
+//-->
+</script>
+
diff --git a/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/authorities/authorities/sharepoint/editADConfiguration_Cache.html b/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/authorities/authorities/sharepoint/editADConfiguration_Cache.html
new file mode 100644
index 0000000..384abd1
--- /dev/null
+++ b/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/authorities/authorities/sharepoint/editADConfiguration_Cache.html
@@ -0,0 +1,37 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+#if($TabName == $ResourceBundle.getString('SharePointAuthority.Cache'))
+
+<table class="displaytable">
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointAuthority.CacheLifetime'))</nobr></td>
+    <td class="value"><input type="text" size="5" name="cachelifetime" value="$Encoder.attributeEscape($CACHELIFETIME)"/> $Encoder.bodyEscape($ResourceBundle.getString('SharePointAuthority.minutes'))</td>
+  </tr>
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointAuthority.CacheLRUSize'))</nobr></td>
+    <td class="value"><input type="text" size="5" name="cachelrusize" value="$Encoder.attributeEscape($CACHELRUSIZE)"/></td>
+  </tr>
+</table>
+
+#else
+
+<input type="hidden" name="cachelifetime" value="$Encoder.attributeEscape($CACHELIFETIME)"/>
+<input type="hidden" name="cachelrusize" value="$Encoder.attributeEscape($CACHELRUSIZE)"/>
+
+#end
diff --git a/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/authorities/authorities/sharepoint/editADConfiguration_DomainController.html b/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/authorities/authorities/sharepoint/editADConfiguration_DomainController.html
new file mode 100644
index 0000000..d5da69f
--- /dev/null
+++ b/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/authorities/authorities/sharepoint/editADConfiguration_DomainController.html
@@ -0,0 +1,132 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+#if($TabName == $ResourceBundle.getString('SharePointAuthority.DomainController'))
+
+<table class="displaytable">
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+  <tr>
+    <td class="description">$Encoder.bodyEscape($ResourceBundle.getString('SharePointAuthority.DomainControllers'))</td>
+    <td class="boxcell">
+      <table class="formtable">
+        <tr class="formheaderrow">
+          <td class="formcolumnheader"></td>
+          <td class="formcolumnheader">$Encoder.bodyEscape($ResourceBundle.getString('SharePointAuthority.DomainControllerName'))</td>
+          <td class="formcolumnheader">$Encoder.bodyEscape($ResourceBundle.getString('SharePointAuthority.DomainSuffix'))</td>
+          <td class="formcolumnheader">$Encoder.bodyEscape($ResourceBundle.getString('SharePointAuthority.AdministrativeUserName'))</td>
+          <td class="formcolumnheader">$Encoder.bodyEscape($ResourceBundle.getString('SharePointAuthority.AdministrativePassword'))</td>
+          <td class="formcolumnheader">$Encoder.bodyEscape($ResourceBundle.getString('SharePointAuthority.Authentication'))</td>
+          <td class="formcolumnheader">$Encoder.bodyEscape($ResourceBundle.getString('SharePointAuthority.LoginNameADAttribute'))</td>
+        </tr>
+  #set($dccounter = 0)
+  #foreach($domaincontroller in $DOMAINCONTROLLERS)
+    #if(($dccounter % 2) == 0)
+        <tr class="evenformrow">
+    #else
+        <tr class="oddformrow">
+    #end
+          <td class="formcolumncell">
+            <a name="dcrecord_$dccounter">
+              <nobr>
+                <input type="button" value="$Encoder.attributeEscape($ResourceBundle.getString('SharePointAuthority.Delete'))"
+                alt="$Encoder.attributeEscape($ResourceBundle.getString('SharePointAuthority.DeleteRuleNumber'))$dccounter" onclick="javascript:deleteDC($dccounter);"/>
+                <input type="button" value="$Encoder.attributeEscape($ResourceBundle.getString('SharePointAuthority.InsertBefore'))"
+                alt="$Encoder.attributeEscape($ResourceBundle.getString('SharePointAuthority.InsertBeforeRuleNumber'))$dccounter" onclick="javascript:insertDC($dccounter);"/>
+              </nobr>
+            </a>
+            <input type="hidden" name="dcrecord_op_$dccounter" value="Continue"/>
+            <input type="hidden" name="dcrecord_domaincontrollername_$dccounter" value="$Encoder.attributeEscape($domaincontroller.get('DOMAINCONTROLLER'))"/>
+          </td>
+          <td class="formcolumncell"><nobr>$Encoder.bodyEscape($domaincontroller.get('DOMAINCONTROLLER'))</nobr></td>
+          <td class="formcolumncell"><nobr><input name="dcrecord_suffix_$dccounter" type="text" size="10" value="$Encoder.attributeEscape($domaincontroller.get('SUFFIX'))"/></nobr></td>
+          <td class="formcolumncell"><nobr><input name="dcrecord_username_$dccounter" type="text" size="10" value="$Encoder.attributeEscape($domaincontroller.get('USERNAME'))"/></nobr></td>
+          <td class="formcolumncell"><nobr><input name="dcrecord_password_$dccounter" type="password" size="6" value="$Encoder.attributeEscape($domaincontroller.get('PASSWORD'))"/></nobr></td>
+          <td class="formcolumncell"><nobr><input name="dcrecord_authentication_$dccounter" type="text" size="10" value="$Encoder.attributeEscape($domaincontroller.get('AUTHENTICATION'))"/></nobr></td>
+          <td class="formcolumncell">
+            <nobr>
+              <select name="dcrecord_userACLsUsername_$dccounter">
+    #if($domaincontroller.get('USERACLsUSERNAME') == 'sAMAccountName')
+                <option value="sAMAccountName" selected="true">
+    #else
+                <option value="sAMAccountName">
+    #end
+                  sAMAccountName
+                </option>
+    #if($domaincontroller.get('USERACLsUSERNAME') == 'userPrincipalName')
+                <option value="userPrincipalName" selected="true">
+    #else
+                <option value="userPrincipalName">
+    #end
+                  userPrincipalName
+                </option>
+              </select>
+            </nobr>
+          </td>
+        </tr>
+    #set($dccounter = $dccounter + 1)
+  #end
+        <tr class="formrow"><td class="formseparator" colspan="7"><hr/></td></tr>
+        <tr class="formrow">
+          <td class="formcolumncell">
+            <a name="dcrecord">
+              <input type="button" value="$Encoder.attributeEscape($ResourceBundle.getString('SharePointAuthority.AddToEnd'))"
+              alt="$Encoder.attributeEscape($ResourceBundle.getString('SharePointAuthority.AddRuleToEnd'))" onclick="javascript:addDC();"/>
+            </a>
+            <input type="hidden" name="dcrecord_count" value="$dccounter"/>
+            <input type="hidden" name="dcrecord_op" value="Continue"/>
+          </td>
+          <td class="formcolumncell"><nobr><input name="dcrecord_domaincontrollername" type="text" size="32" value=""/></nobr></td>
+          <td class="formcolumncell"><nobr><input name="dcrecord_suffix" type="text" size="10" value=""/></nobr></td>
+          <td class="formcolumncell"><nobr><input name="dcrecord_username" type="text" size="10" value=""/></nobr></td>
+          <td class="formcolumncell"><nobr><input name="dcrecord_password" type="password" size="6" value=""/></nobr></td>
+          <td class="formcolumncell"><nobr><input name="dcrecord_authentication" type="text" size="10" value="DIGEST-MD5 GSSAPI"/></nobr></td>
+          <td class="formcolumncell">
+            <nobr>
+              <select name="dcrecord_userACLsUsername">
+                <option value="sAMAccountName" selected="true">
+                  sAMAccountName
+                </option>
+                <option value="userPrincipalName">
+                  userPrincipalName
+                </option>
+              </select>
+            </nobr>
+          </td>
+        </tr>
+      </table>
+    </td>
+  </tr>
+</table>
+
+#else
+
+  #set($dccounter = 0)
+  #foreach($domaincontroller in $DOMAINCONTROLLERS)
+
+<input type="hidden" name="dcrecord_suffix_$dccounter" value="$Encoder.attributeEscape($domaincontroller.get('SUFFIX'))"/>
+<input type="hidden" name="dcrecord_domaincontrollername_$dccounter" value="$Encoder.attributeEscape($domaincontroller.get('DOMAINCONTROLLER'))"/>
+<input type="hidden" name="dcrecord_username_$dccounter" value="$Encoder.attributeEscape($domaincontroller.get('USERNAME'))"/>
+<input type="hidden" name="dcrecord_password_$dccounter" value="$Encoder.attributeEscape($domaincontroller.get('PASSWORD'))"/>
+<input type="hidden" name="dcrecord_authentication_$dccounter" value="$Encoder.attributeEscape($domaincontroller.get('AUTHENTICATION'))"/>
+<input type="hidden" name="dcrecord_userACLsUsername_$dccounter" value="$Encoder.attributeEscape($domaincontroller.get('USERACLsUSERNAME'))"/>
+
+    #set($dccounter = $dccounter + 1)
+  #end
+
+<input type="hidden" name="dcrecord_count" value="$dccounter"/>
+
+#end
diff --git a/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/authorities/authorities/sharepoint/editConfiguration.js b/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/authorities/authorities/sharepoint/editConfiguration.js
new file mode 100644
index 0000000..9fed1f9
--- /dev/null
+++ b/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/authorities/authorities/sharepoint/editConfiguration.js
@@ -0,0 +1,155 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<script type="text/javascript">
+<!--
+function checkConfig()
+{
+  if (editconnection.serverPort.value != "" && !isInteger(editconnection.serverPort.value))
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointAuthority.PleaseSupplyAValidNumber'))");
+    editconnection.serverPort.focus();
+    return false;
+  }
+  if (editconnection.serverName.value.indexOf("/") >= 0)
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointAuthority.PleaseSpecifyAnyServerPathInformation'))");
+    editconnection.serverName.focus();
+    return false;
+  }
+  var svrloc = editconnection.serverLocation.value;
+  if (svrloc != "" && svrloc.charAt(0) != "/")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointAuthority.SitePathMustBeginWithWCharacter'))");
+    editconnection.serverLocation.focus();
+    return false;
+  }
+  if (svrloc != "" && svrloc.charAt(svrloc.length - 1) == "/")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointAuthority.SitePathCannotEndWithACharacter'))");
+    editconnection.serverLocation.focus();
+    return false;
+  }
+  if (editconnection.userName.value != "" && editconnection.userName.value.indexOf("\\") <= 0)
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointAuthority.AValidSharePointUserNameHasTheForm'))");
+    editconnection.userName.focus();
+    return false;
+  }
+  return true;
+}
+
+function checkConfigForSave()
+{
+  if (editconnection.cachelifetime.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointAuthority.CacheLifetimeCannotBeNull'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointAuthority.Cache'))");
+    editconnection.cachelifetime.focus();
+    return false;
+  }
+  if (editconnection.cachelifetime.value != "" && !isInteger(editconnection.cachelifetime.value))
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointAuthority.CacheLifetimeMustBeAnInteger'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointAuthority.Cache'))");
+    editconnection.cachelifetime.focus();
+    return false;
+  }
+  if (editconnection.cachelrusize.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointAuthority.CacheLRUSizeCannotBeNull'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointAuthority.Cache'))");
+    editconnection.cachelrusize.focus();
+    return false;
+  }
+  if (editconnection.cachelrusize.value != "" && !isInteger(editconnection.cachelrusize.value))
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointAuthority.CacheLRUSizeMustBeAnInteger'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointAuthority.Cache'))");
+    editconnection.cachelrusize.focus();
+    return false;
+  }
+  if (editconnection.serverName.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointAuthority.PleaseFillInASharePointServerName'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointAuthority.Server'))");
+    editconnection.serverName.focus();
+    return false;
+  }
+  if (editconnection.serverName.value.indexOf("/") >= 0)
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointAuthority.PleaseSpecifyAnyServerPathInformationInTheSitePathField'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointAuthority.Server'))");
+    editconnection.serverName.focus();
+    return false;
+  }
+  var svrloc = editconnection.serverLocation.value;
+  if (svrloc != "" && svrloc.charAt(0) != "/")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointAuthority.SitePathMustBeginWithWCharacter'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointAuthority.Server'))");
+    editconnection.serverLocation.focus();
+    return false;
+  }
+  if (svrloc != "" && svrloc.charAt(svrloc.length - 1) == "/")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointAuthority.SitePathCannotEndWithACharacter'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointAuthority.Server'))");
+    editconnection.serverLocation.focus();
+    return false;
+  }
+  if (editconnection.serverPort.value != "" && !isInteger(editconnection.serverPort.value))
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointAuthority.PleaseSupplyASharePointPortNumber'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointAuthority.Server'))");
+    editconnection.serverPort.focus();
+    return false;
+  }
+  if (editconnection.userName.value != "" && editconnection.userName.value.indexOf("\\") <= 0)
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointAuthority.TheConnectionRequiresAValidSharePointUserName'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointAuthority.Server'))");
+    editconnection.userName.focus();
+    return false;
+  }
+  return true;
+}
+
+function ShpDeleteCertificate(aliasName)
+{
+  editconnection.shpkeystorealias.value = aliasName;
+  editconnection.configop.value = "Delete";
+  postForm();
+}
+
+function ShpAddCertificate()
+{
+  if (editconnection.shpcertificate.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointAuthority.ChooseACertificateFile'))");
+    editconnection.shpcertificate.focus();
+  }
+  else
+  {
+    editconnection.configop.value = "Add";
+    postForm();
+  }
+}
+
+//-->
+</script>
+
diff --git a/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/authorities/authorities/sharepoint/editConfiguration_Cache.html b/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/authorities/authorities/sharepoint/editConfiguration_Cache.html
new file mode 100644
index 0000000..384abd1
--- /dev/null
+++ b/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/authorities/authorities/sharepoint/editConfiguration_Cache.html
@@ -0,0 +1,37 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+#if($TabName == $ResourceBundle.getString('SharePointAuthority.Cache'))
+
+<table class="displaytable">
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointAuthority.CacheLifetime'))</nobr></td>
+    <td class="value"><input type="text" size="5" name="cachelifetime" value="$Encoder.attributeEscape($CACHELIFETIME)"/> $Encoder.bodyEscape($ResourceBundle.getString('SharePointAuthority.minutes'))</td>
+  </tr>
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointAuthority.CacheLRUSize'))</nobr></td>
+    <td class="value"><input type="text" size="5" name="cachelrusize" value="$Encoder.attributeEscape($CACHELRUSIZE)"/></td>
+  </tr>
+</table>
+
+#else
+
+<input type="hidden" name="cachelifetime" value="$Encoder.attributeEscape($CACHELIFETIME)"/>
+<input type="hidden" name="cachelrusize" value="$Encoder.attributeEscape($CACHELRUSIZE)"/>
+
+#end
diff --git a/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/authorities/authorities/sharepoint/editConfiguration_Server.html b/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/authorities/authorities/sharepoint/editConfiguration_Server.html
new file mode 100644
index 0000000..46b1279
--- /dev/null
+++ b/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/authorities/authorities/sharepoint/editConfiguration_Server.html
@@ -0,0 +1,122 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+#if($KEYSTORE)
+<input type="hidden" name="keystoredata" value="$Encoder.attributeEscape($KEYSTORE)"/>
+#end
+
+#if($TabName == $ResourceBundle.getString('SharePointAuthority.Server'))
+
+<table class="displaytable">
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointAuthority.ServerSharePointVersion'))</nobr></td>
+    <td class="value">
+      <select name="serverVersion">
+  #if($SERVERVERSION == '2.0')
+        <option value="2.0" selected="true">SharePoint Services 2.0 (2003)</option>
+  #else
+        <option value="2.0">SharePoint Services 2.0 (2003)</option>
+  #end
+  #if($SERVERVERSION == '3.0')
+        <option value="3.0" selected="true">SharePoint Services 3.0 (2007)</option>
+  #else
+        <option value="3.0">SharePoint Services 3.0 (2007)</option>
+  #end
+  #if($SERVERVERSION == '4.0')
+        <option value="4.0" selected="true">SharePoint Services 4.0 (2010)</option>
+  #else
+        <option value="4.0">SharePoint Services 4.0 (2010)</option>
+  #end
+      </select>
+    </td>
+  </tr>
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointAuthority.ServerProtocol'))</nobr></td>
+    <td class="value">
+      <select name="serverProtocol">
+  #if($SERVERPROTOCOL == 'http')
+        <option value="http" selected="true">http</option>
+  #else
+        <option value="http">http</option>
+  #end
+  #if($SERVERPROTOCOL == 'https')
+        <option value="https" selected="true">https</option>
+  #else
+        <option value="https">https</option>
+  #end
+      </select>
+    </td>
+  </tr>
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointAuthority.ServerName'))</nobr></td>
+    <td class="value"><input type="text" size="64" name="serverName" value="$Encoder.attributeEscape($SERVERNAME)"/></td>
+  </tr>
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointAuthority.ServerPort'))</nobr></td>
+    <td class="value"><input type="text" size="5" name="serverPort" value="$SERVERPORT"/></td>
+  </tr>
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointAuthority.SitePath'))</nobr></td>
+    <td class="value"><input type="text" size="64" name="serverLocation" value="$Encoder.attributeEscape($SERVERLOCATION)"/></td>
+  </tr>
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointAuthority.UserName'))</nobr></td>
+    <td class="value"><input type="text" size="32" name="userName" value="$Encoder.attributeEscape($USERNAME)"/></td>
+  </tr>
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointAuthority.Password'))</nobr></td>
+    <td class="value"><input type="password" size="32" name="password" value="$Encoder.attributeEscape($PASSWORD)"/></td>
+  </tr>
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointAuthority.SSLCertificateList'))</nobr></td>
+    <td class="value">
+      <input type="hidden" name="configop" value="Continue"/>
+      <input type="hidden" name="shpkeystorealias" value=""/>
+      <table class="displaytable">
+  #if($CERTIFICATELIST.size() == 0)
+        <tr><td class="message" colspan="2">$Encoder.bodyEscape($ResourceBundle.getString('SharePointAuthority.NoCertificatesPresent'))</td></tr>
+  #else
+    #foreach($certificate in $CERTIFICATELIST)
+        <tr>
+          <td class="description">
+            <input type="button" onclick='Javascript:ShpDeleteCertificate("$Encoder.attributeJavascriptEscape($certificate.get('ALIAS'))")' alt="$Encoder.attributeEscape($ResourceBundle.getString('SharePointAuthority.DeleteCert'))$Encoder.attributeEscape($certificate.get('ALIAS'))" value="$Encoder.attributeEscape($ResourceBundle.getString('SharePointAuthority.Delete'))"/>
+          </td>
+          <td class="value">
+            $Encoder.bodyEscape($certificate.get('DESCRIPTION'))
+          </td>
+        </tr>
+    #end
+  #end
+      </table>
+      <input type="button" onclick='Javascript:ShpAddCertificate()' alt="$Encoder.attributeEscape($ResourceBundle.getString('SharePointAuthority.AddCert'))" value="$Encoder.attributeEscape($ResourceBundle.getString('SharePointAuthority.Add'))"/>&nbsp;
+      $Encoder.bodyEscape($ResourceBundle.getString('SharePointAuthority.Certificate'))&nbsp;
+      <input name="shpcertificate" size="50" type="file"/>
+    </td>
+  </tr>
+</table>
+
+#else
+
+<input type="hidden" name="serverProtocol" value="$SERVERPROTOCOL"/>
+<input type="hidden" name="serverName" value="$Encoder.attributeEscape($SERVERNAME)"/>
+<input type="hidden" name="serverPort" value="$SERVERPORT"/>
+<input type="hidden" name="serverLocation" value="$Encoder.attributeEscape($SERVERLOCATION)"/>
+<input type="hidden" name="userName" value="$Encoder.attributeEscape($USERNAME)"/>
+<input type="hidden" name="password" value="$Encoder.attributeEscape($PASSWORD)"/>
+
+#end
diff --git a/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/authorities/authorities/sharepoint/viewADConfiguration.html b/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/authorities/authorities/sharepoint/viewADConfiguration.html
new file mode 100644
index 0000000..666c2e3
--- /dev/null
+++ b/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/authorities/authorities/sharepoint/viewADConfiguration.html
@@ -0,0 +1,62 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<table class="displaytable">
+  <tr>
+    <td class="description">$Encoder.bodyEscape($ResourceBundle.getString('SharePointAuthority.DomainControllers'))</td>
+    <td class="boxcell">
+      <table class="formtable">
+        <tr class="formheaderrow">
+          <td class="formcolumnheader">$Encoder.bodyEscape($ResourceBundle.getString('SharePointAuthority.DomainControllerName'))</td>
+          <td class="formcolumnheader">$Encoder.bodyEscape($ResourceBundle.getString('SharePointAuthority.DomainSuffix'))</td>
+          <td class="formcolumnheader">$Encoder.bodyEscape($ResourceBundle.getString('SharePointAuthority.AdministrativeUserName'))</td>
+          <td class="formcolumnheader">$Encoder.bodyEscape($ResourceBundle.getString('SharePointAuthority.AdministrativePassword'))</td>
+          <td class="formcolumnheader">$Encoder.bodyEscape($ResourceBundle.getString('SharePointAuthority.Authentication'))</td>
+          <td class="formcolumnheader">$Encoder.bodyEscape($ResourceBundle.getString('SharePointAuthority.LoginNameADAttribute'))</td>
+        </tr>
+#set($dccounter = 0)
+#foreach($domaincontroller in $DOMAINCONTROLLERS)
+  #if(($dccounter % 2) == 0)
+        <tr class="evenformrow">
+  #else
+        <tr class="oddformrow">
+  #end
+          <td class="formcolumncell"><nobr>$Encoder.bodyEscape($domaincontroller.get('DOMAINCONTROLLER'))</nobr></td>
+          <td class="formcolumncell"><nobr>$Encoder.bodyEscape($domaincontroller.get('SUFFIX'))</nobr></td>
+          <td class="formcolumncell"><nobr>$Encoder.bodyEscape($domaincontroller.get('USERNAME'))</nobr></td>
+          <td class="formcolumncell"><nobr>******</nobr></td>
+          <td class="formcolumncell"><nobr>$Encoder.bodyEscape($domaincontroller.get('AUTHENTICATION'))</nobr></td>
+          <td class="formcolumncell"><nobr>$Encoder.bodyEscape($domaincontroller.get('USERACLsUSERNAME'))</nobr></td>
+        </tr>
+  #set($dccounter = $dccounter + 1)
+#end
+      </table>
+    </td>
+  </tr>
+  
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+  
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointAuthority.CacheLifetime'))</nobr></td>
+    <td class="value"><nobr>$Encoder.bodyEscape($CACHELIFETIME)</nobr> $Encoder.bodyEscape($ResourceBundle.getString('SharePointAuthority.minutes'))</td>
+  </tr>
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointAuthority.CacheLRUSize'))</nobr></td>
+    <td class="value"><nobr>$Encoder.bodyEscape($CACHELRUSIZE)</nobr></td>
+  </tr>
+  
+</table>
diff --git a/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/authorities/authorities/sharepoint/viewConfiguration.html b/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/authorities/authorities/sharepoint/viewConfiguration.html
new file mode 100644
index 0000000..d70017a
--- /dev/null
+++ b/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/authorities/authorities/sharepoint/viewConfiguration.html
@@ -0,0 +1,114 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<table class="displaytable">
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointAuthority.ServerSharePointVersion'))</nobr></td>
+    <td class="value">
+  #if($SERVERVERSION == '2.0')
+        SharePoint Services 2.0 (2003)
+  #elseif($SERVERVERSION == '3.0')
+        SharePoint Services 3.0 (2007)
+  #elseif($SERVERVERSION == '4.0')
+        SharePoint Services 4.0 (2010)
+  #else
+        Unknown
+  #end
+    </td>
+  </tr>
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointAuthority.ServerProtocol'))</nobr></td>
+    <td class="value">
+      $SERVERPROTOCOL
+    </td>
+  </tr>
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointAuthority.ServerName'))</nobr></td>
+    <td class="value">
+      $Encoder.bodyEscape($SERVERNAME)
+    </td>
+  </tr>
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointAuthority.ServerPort'))</nobr></td>
+    <td class="value">
+      $SERVERPORT
+    </td>
+  </tr>
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointAuthority.SitePath'))</nobr></td>
+    <td class="value">
+      $Encoder.bodyEscape($SERVERLOCATION)
+    </td>
+  </tr>
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointAuthority.UserName'))</nobr></td>
+    <td class="value">
+      $Encoder.bodyEscape($USERNAME)
+    </td>
+  </tr>
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointAuthority.Password'))</nobr></td>
+    <td class="value">
+      ********
+    </td>
+  </tr>
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointAuthority.SSLCertificateList'))</nobr></td>
+    <td class="value">
+      <table class="displaytable">
+  #if($CERTIFICATELIST.size() == 0)
+        <tr><td class="message" colspan="1">$Encoder.bodyEscape($ResourceBundle.getString('SharePointAuthority.NoCertificatesPresent'))</td></tr>
+  #else
+    #foreach($certificate in $CERTIFICATELIST)
+        <tr>
+          <td class="value">
+            $Encoder.bodyEscape($certificate.get('DESCRIPTION'))
+          </td>
+        </tr>
+    #end
+  #end
+      </table>
+    </td>
+  </tr>
+
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointAuthority.AuthorizationModelColon'))</nobr></td>
+    <td class="value">
+#if($AUTHORIZATIONMODEL == 'Classic')
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointAuthority.Classic'))</nobr>
+#elseif($AUTHORIZATIONMODEL == 'ClaimSpace')
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointAuthority.ClaimSpace'))</nobr>
+#else
+      Unknown
+#end
+    </td>
+  </tr>
+
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointAuthority.CacheLifetime'))</nobr></td>
+    <td class="value"><nobr>$Encoder.bodyEscape($CACHELIFETIME)</nobr> $Encoder.bodyEscape($ResourceBundle.getString('SharePointAuthority.minutes'))</td>
+  </tr>
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointAuthority.CacheLRUSize'))</nobr></td>
+    <td class="value"><nobr>$Encoder.bodyEscape($CACHELRUSIZE)</nobr></td>
+  </tr>
+  
+</table>
diff --git a/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/sharepoint/editConfiguration.js b/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/sharepoint/editConfiguration.js
new file mode 100644
index 0000000..1c432c9
--- /dev/null
+++ b/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/sharepoint/editConfiguration.js
@@ -0,0 +1,127 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<script type="text/javascript">
+<!--
+function ShpDeleteCertificate(aliasName)
+{
+  editconnection.shpkeystorealias.value = aliasName;
+  editconnection.configop.value = "Delete";
+  postForm();
+}
+
+function ShpAddCertificate()
+{
+  if (editconnection.shpcertificate.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointRepository.ChooseACertificateFile'))");
+    editconnection.shpcertificate.focus();
+  }
+  else
+  {
+    editconnection.configop.value = "Add";
+    postForm();
+  }
+}
+
+function checkConfig()
+{
+  if (editconnection.serverPort.value != "" && !isInteger(editconnection.serverPort.value))
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointRepository.PleaseSupplyAValidNumber'))");
+    editconnection.serverPort.focus();
+    return false;
+  }
+  if (editconnection.serverName.value.indexOf("/") >= 0)
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointRepository.PleaseSpecifyAnyServerPathInformation'))");
+    editconnection.serverName.focus();
+    return false;
+  }
+  var svrloc = editconnection.serverLocation.value;
+  if (svrloc != "" && svrloc.charAt(0) != "/")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointRepository.SitePathMustBeginWithWCharacter'))");
+    editconnection.serverLocation.focus();
+    return false;
+  }
+  if (svrloc != "" && svrloc.charAt(svrloc.length - 1) == "/")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointRepository.SitePathCannotEndWithACharacter'))");
+    editconnection.serverLocation.focus();
+    return false;
+  }
+  if (editconnection.userName.value != "" && editconnection.userName.value.indexOf("\\") <= 0)
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointRepository.AValidSharePointUserNameHasTheForm'))");
+    editconnection.userName.focus();
+    return false;
+  }
+  return true;
+}
+
+function checkConfigForSave() 
+{
+  if (editconnection.serverName.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointRepository.PleaseFillInASharePointServerName'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointRepository.Server'))");
+    editconnection.serverName.focus();
+    return false;
+  }
+  if (editconnection.serverName.value.indexOf("/") >= 0)
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointRepository.PleaseSpecifyAnyServerPathInformationInTheSitePathField'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointRepository.Server'))");
+    editconnection.serverName.focus();
+    return false;
+  }
+  var svrloc = editconnection.serverLocation.value;
+  if (svrloc != "" && svrloc.charAt(0) != "/")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointRepository.SitePathMustBeginWithWCharacter'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointRepository.Server'))");
+    editconnection.serverLocation.focus();
+    return false;
+  }
+  if (svrloc != "" && svrloc.charAt(svrloc.length - 1) == "/")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointRepository.SitePathCannotEndWithACharacter'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointRepository.Server'))");
+    editconnection.serverLocation.focus();
+    return false;
+  }
+  if (editconnection.serverPort.value != "" && !isInteger(editconnection.serverPort.value))
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointRepository.PleaseSupplyASharePointPortNumber'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointRepository.Server'))");
+    editconnection.serverPort.focus();
+    return false;
+  }
+  if (editconnection.userName.value != "" && editconnection.userName.value.indexOf("\\") <= 0)
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointRepository.TheConnectionRequiresAValidSharePointUserName'))");
+    SelectTab("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointRepository.Server'))");
+    editconnection.userName.focus();
+    return false;
+  }
+  return true;
+}
+
+//-->
+</script>
+
diff --git a/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/sharepoint/editConfiguration_AuthorityType.html b/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/sharepoint/editConfiguration_AuthorityType.html
new file mode 100644
index 0000000..16c249c
--- /dev/null
+++ b/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/sharepoint/editConfiguration_AuthorityType.html
@@ -0,0 +1,43 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+#if($TabName == $ResourceBundle.getString('SharePointRepository.AuthorityType'))
+
+<table class="displaytable">
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.AuthorityTypeColon'))</nobr></td>
+    <td class="value">
+  #if($AUTHORITYTYPE == 'SharePoint')
+      <input type="radio" name="authorityType" value="SharePoint" checked="true"/><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.SharePoint'))</nobr>
+  #else
+      <input type="radio" name="authorityType" value="SharePoint"/><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.SharePoint'))</nobr>
+  #end
+  #if($AUTHORITYTYPE == 'ActiveDirectory')
+      <input type="radio" name="authorityType" value="ActiveDirectory" checked="true"/><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.ActiveDirectory'))</nobr>
+  #else
+      <input type="radio" name="authorityType" value="ActiveDirectory"/><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.ActiveDirectory'))</nobr>
+  #end
+    </td>
+  </tr>
+</table>
+
+#else
+
+<input type="hidden" name="authorityType" value="$AUTHORITYTYPE"/>
+
+#end
diff --git a/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/sharepoint/editConfiguration_Server.html b/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/sharepoint/editConfiguration_Server.html
new file mode 100644
index 0000000..e905354
--- /dev/null
+++ b/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/sharepoint/editConfiguration_Server.html
@@ -0,0 +1,122 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+#if($KEYSTORE)
+<input type="hidden" name="keystoredata" value="$Encoder.attributeEscape($KEYSTORE)"/>
+#end
+
+#if($TabName == $ResourceBundle.getString('SharePointRepository.Server'))
+
+<table class="displaytable">
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.ServerSharePointVersion'))</nobr></td>
+    <td class="value">
+      <select name="serverVersion">
+  #if($SERVERVERSION == '2.0')
+        <option value="2.0" selected="true">SharePoint Services 2.0 (2003)</option>
+  #else
+        <option value="2.0">SharePoint Services 2.0 (2003)</option>
+  #end
+  #if($SERVERVERSION == '3.0')
+        <option value="3.0" selected="true">SharePoint Services 3.0 (2007)</option>
+  #else
+        <option value="3.0">SharePoint Services 3.0 (2007)</option>
+  #end
+  #if($SERVERVERSION == '4.0')
+        <option value="4.0" selected="true">SharePoint Services 4.0 (2010)</option>
+  #else
+        <option value="4.0">SharePoint Services 4.0 (2010)</option>
+  #end
+      </select>
+    </td>
+  </tr>
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.ServerProtocol'))</nobr></td>
+    <td class="value">
+      <select name="serverProtocol">
+  #if($SERVERPROTOCOL == 'http')
+        <option value="http" selected="true">http</option>
+  #else
+        <option value="http">http</option>
+  #end
+  #if($SERVERPROTOCOL == 'https')
+        <option value="https" selected="true">https</option>
+  #else
+        <option value="https">https</option>
+  #end
+      </select>
+    </td>
+  </tr>
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.ServerName'))</nobr></td>
+    <td class="value"><input type="text" size="64" name="serverName" value="$Encoder.attributeEscape($SERVERNAME)"/></td>
+  </tr>
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.ServerPort'))</nobr></td>
+    <td class="value"><input type="text" size="5" name="serverPort" value="$SERVERPORT"/></td>
+  </tr>
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.SitePath'))</nobr></td>
+    <td class="value"><input type="text" size="64" name="serverLocation" value="$Encoder.attributeEscape($SERVERLOCATION)"/></td>
+  </tr>
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.UserName'))</nobr></td>
+    <td class="value"><input type="text" size="32" name="userName" value="$Encoder.attributeEscape($USERNAME)"/></td>
+  </tr>
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.Password'))</nobr></td>
+    <td class="value"><input type="password" size="32" name="password" value="$Encoder.attributeEscape($PASSWORD)"/></td>
+  </tr>
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.SSLCertificateList'))</nobr></td>
+    <td class="value">
+      <input type="hidden" name="configop" value="Continue"/>
+      <input type="hidden" name="shpkeystorealias" value=""/>
+      <table class="displaytable">
+  #if($CERTIFICATELIST.size() == 0)
+        <tr><td class="message" colspan="2">$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.NoCertificatesPresent'))</td></tr>
+  #else
+    #foreach($certificate in $CERTIFICATELIST)
+        <tr>
+          <td class="description">
+            <input type="button" onclick='Javascript:ShpDeleteCertificate("$Encoder.attributeJavascriptEscape($certificate.get('ALIAS'))")' alt="$Encoder.attributeEscape($ResourceBundle.getString('SharePointRepository.DeleteCert'))$Encoder.attributeEscape($certificate.get('ALIAS'))" value="$Encoder.attributeEscape($ResourceBundle.getString('SharePointRepository.Delete'))"/>
+          </td>
+          <td class="value">
+            $Encoder.bodyEscape($certificate.get('DESCRIPTION'))
+          </td>
+        </tr>
+    #end
+  #end
+      </table>
+      <input type="button" onclick='Javascript:ShpAddCertificate()' alt="$Encoder.attributeEscape($ResourceBundle.getString('SharePointRepository.AddCert'))" value="$Encoder.attributeEscape($ResourceBundle.getString('SharePointRepository.Add'))"/>&nbsp;
+      $Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.Certificate'))&nbsp;
+      <input name="shpcertificate" size="50" type="file"/>
+    </td>
+  </tr>
+</table>
+
+#else
+
+<input type="hidden" name="serverProtocol" value="$SERVERPROTOCOL"/>
+<input type="hidden" name="serverName" value="$Encoder.attributeEscape($SERVERNAME)"/>
+<input type="hidden" name="serverPort" value="$SERVERPORT"/>
+<input type="hidden" name="serverLocation" value="$Encoder.attributeEscape($SERVERLOCATION)"/>
+<input type="hidden" name="userName" value="$Encoder.attributeEscape($USERNAME)"/>
+<input type="hidden" name="password" value="$Encoder.attributeEscape($PASSWORD)"/>
+
+#end
diff --git a/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/sharepoint/editSpecification.js b/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/sharepoint/editSpecification.js
new file mode 100644
index 0000000..5d83205
--- /dev/null
+++ b/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/sharepoint/editSpecification.js
@@ -0,0 +1,198 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<script type="text/javascript">
+<!--
+
+function checkSpecification()
+{
+  // Does nothing right now.
+  return true;
+}
+
+function SpecRuleAddPath(anchorvalue)
+{
+  if (editjob.spectype.value=="")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointRepository.PleaseSelectATypeFirst'))");
+    editjob.spectype.focus();
+  }
+  else if (editjob.specflavor.value=="")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointRepository.PleaseSelectAnActionFirst'))");
+    editjob.specflavor.focus();
+  }
+  else
+    SpecOp("specop","Add",anchorvalue);
+}
+  
+function SpecPathReset(anchorvalue)
+{
+  SpecOp("specpathop","Reset",anchorvalue);
+}
+  
+function SpecPathAppendSite(anchorvalue)
+{
+  if (editjob.specsite.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointRepository.PleaseSelectASiteFirst'))");
+    editjob.specsite.focus();
+  }
+  else
+    SpecOp("specpathop","AppendSite",anchorvalue);
+}
+
+function SpecPathAppendLibrary(anchorvalue)
+{
+  if (editjob.speclibrary.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointRepository.PleaseSelectALibraryFirst'))");
+    editjob.speclibrary.focus();
+  }
+  else
+    SpecOp("specpathop","AppendLibrary",anchorvalue);
+}
+
+function SpecPathAppendList(anchorvalue)
+{
+  if (editjob.speclist.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointRepository.PleaseSelectAListFirst'))");
+    editjob.speclist.focus();
+  }
+  else
+    SpecOp("specpathop","AppendList",anchorvalue);
+}
+
+function SpecPathAppendText(anchorvalue)
+{
+  if (editjob.specmatch.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointRepository.PleaseProvideMatchTextFirst'))");
+    editjob.specmatch.focus();
+  }
+  else
+    SpecOp("specpathop","AppendText",anchorvalue);
+}
+
+function SpecPathRemove(anchorvalue)
+{
+  SpecOp("specpathop","Remove",anchorvalue);
+}
+
+function MetaRuleAddPath(anchorvalue)
+{
+  if (editjob.metaflavor.value=="")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointRepository.PleaseSelectAnActionFirst'))");
+    editjob.metaflavor.focus();
+  }
+  else
+    SpecOp("metaop","Add",anchorvalue);
+}
+
+function MetaPathReset(anchorvalue)
+{
+  SpecOp("metapathop","Reset",anchorvalue);
+}
+  
+function MetaPathAppendSite(anchorvalue)
+{
+  if (editjob.metasite.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointRepository.PleaseSelectASiteFirst'))");
+    editjob.metasite.focus();
+  }
+  else
+    SpecOp("metapathop","AppendSite",anchorvalue);
+}
+
+function MetaPathAppendLibrary(anchorvalue)
+{
+  if (editjob.metalibrary.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointRepository.PleaseSelectALibraryFirst'))");
+    editjob.metalibrary.focus();
+  }
+  else
+    SpecOp("metapathop","AppendLibrary",anchorvalue);
+}
+
+function MetaPathAppendList(anchorvalue)
+{
+  if (editjob.metalist.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointRepository.PleaseSelectAListFirst'))");
+    editjob.metalist.focus();
+  }
+  else
+    SpecOp("metapathop","AppendList",anchorvalue);
+}
+
+function MetaPathAppendText(anchorvalue)
+{
+  if (editjob.metamatch.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointRepository.PleaseProvideMatchTextFirst'))");
+    editjob.metamatch.focus();
+  }
+  else
+    SpecOp("metapathop","AppendText",anchorvalue);
+}
+
+function MetaPathRemove(anchorvalue)
+{
+  SpecOp("metapathop","Remove",anchorvalue);
+}
+
+function SpecAddAccessToken(anchorvalue)
+{
+  if (editjob.spectoken.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointRepository.AccessTokenCannotBeNull'))");
+    editjob.spectoken.focus();
+  }
+  else
+    SpecOp("accessop","Add",anchorvalue);
+}
+
+function SpecAddMapping(anchorvalue)
+{
+  if (editjob.specmatch.value == "")
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointRepository.MatchStringCannotBeEmpty'))");
+    editjob.specmatch.focus();
+    return;
+  }
+  if (!isRegularExpression(editjob.specmatch.value))
+  {
+    alert("$Encoder.bodyJavascriptEscape($ResourceBundle.getString('SharePointRepository.MatchStringMustBeValidRegularExpression'))");
+    editjob.specmatch.focus();
+    return;
+  }
+  SpecOp("specmappingop","Add",anchorvalue);
+}
+
+function SpecOp(n, opValue, anchorvalue)
+{
+  eval("editjob."+n+".value = \""+opValue+"\"");
+  postFormSetAnchor(anchorvalue);
+}
+
+//-->
+</script>
+
diff --git a/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/sharepoint/editSpecification_Metadata.html b/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/sharepoint/editSpecification_Metadata.html
new file mode 100644
index 0000000..eb232cb
--- /dev/null
+++ b/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/sharepoint/editSpecification_Metadata.html
@@ -0,0 +1,312 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+#if($TabName == $ResourceBundle.getString('SharePointRepository.Metadata'))
+
+<input type="hidden" name="specmappingcount" value="$MAPLIST.size()"/>
+<input type="hidden" name="specmappingop" value=""/>
+
+<table class="displaytable">
+<tr><td class="separator" colspan="4"><hr/></td></tr>
+<tr>
+  <td class="description" colspan="1"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.MetadataRules'))</nobr></td>
+    <td class="boxcell" colspan="3">
+      <table class="formtable">
+        <tr class="formheaderrow">
+          <td class="formcolumnheader"></td>
+          <td class="formcolumnheader"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.PathMatch'))</nobr></td>
+          <td class="formcolumnheader"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.Action'))</nobr></td>
+          <td class="formcolumnheader"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.AllMetadata'))</nobr></td>
+          <td class="formcolumnheader"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.Fields'))</nobr></td>
+        </tr>
+
+  #set($rulecounter = 0)
+  #set($rownumber = 0)
+  #foreach($metadatarule in $METADATARULES)
+
+    #if(($rownumber % 2) == 0)
+        <tr class="evenformrow">
+    #else
+        <tr class="oddformrow">
+    #end
+          <td class="formcolumncell">
+            <nobr>
+              <a name="meta_$rulecounter"/>
+              <input type="hidden" name="metaop_$rulecounter" value=""/>
+              <input type="button" value="$Encoder.attributeEscape($ResourceBundle.getString('SharePointRepository.InsertNewRule'))" onClick='Javascript:SpecOp("metaop_$rulecounter","Insert Here","meta_$rulecounter")' alt="$Encoder.attributeEscape($ResourceBundle.getString('SharePointRepository.InsertNewMetadataRuleBeforeRule'))$rulecounter"/>
+            </nobr>
+          </td>
+          <td class="formcolumncell" colspan="4"></td>
+        </tr>
+
+    #set($rownumber = $rownumber + 1)
+    
+    #if(($rownumber % 2) == 0)
+        <tr class="evenformrow">
+    #else
+        <tr class="oddformrow">
+    #end
+          <td class="formcolumncell">
+            <nobr>
+              <input type="button" value="$Encoder.attributeEscape($ResourceBundle.getString('SharePointRepository.Delete'))" onClick='Javascript:SpecOp("metaop_$rulecounter","Delete","meta_$rulecounter")' alt="$Encoder.attributeEscape($ResourceBundle.getString('SharePointRepository.DeleteMetadataRule'))$rulecounter"/>
+            </nobr>
+          </td>
+          <td class="formcolumncell">
+            <nobr>
+              <input type="hidden" name="metapath_$rulecounter" value="$Encoder.attributeEscape($metadatarule.get('THEPATH'))"/>
+              $Encoder.bodyEscape($metadatarule.get('THEPATH'))
+            </nobr>
+          </td>
+          <td class="formcolumncell">
+            <nobr>
+              <input type="hidden" name="metaflav_$rulecounter" value="$metadatarule.get('THEACTION')"/>
+              $metadatarule.get('THEACTION')
+            </nobr>
+          </td>
+          <td class="formcolumncell">
+            <nobr>
+              <input type="hidden" name="metaall_$rulecounter" value="$metadatarule.get('ALLFLAG')"/>
+              $metadatarule.get('ALLFLAG')
+            </nobr>
+          </td>
+          <td class="formcolumncell">
+    #foreach($field in $metadatarule.get('FIELDLIST'))
+              <input type="hidden" name="metafields_$rulecounter" value="$Encoder.attributeEscape($field)"/>
+    #end
+            $Encoder.bodyEscape($metadatarule.get('FIELDS'))
+          </td>
+        </tr>
+        
+    #set($rownumber = $rownumber + 1)
+    #set($rulecounter = $rulecounter + 1)
+  #end
+
+  #if($rulecounter == 0)
+        <tr class="formrow"><td class="formcolumnmessage" colspan="5">$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.NoMetadataIncluded'))</td></tr>
+  #end
+  
+  #if(($rownumber % 2) == 0)
+        <tr class="evenformrow">
+  #else
+        <tr class="oddformrow">
+  #end
+  
+          <td class="formcolumncell">
+            <nobr>
+              <a name="meta_$rulecounter"/>
+              <input type="hidden" name="metaop" value=""/>
+              <input type="hidden" name="metapathcount" value="$rulecounter"/>
+              <input type="button" value="$Encoder.attributeEscape($ResourceBundle.getString('SharePointRepository.AddNewRule'))" onClick='Javascript:MetaRuleAddPath("meta_$rulecounter")' alt="$Encoder.attributeEscape($ResourceBundle.getString('SharePointRepository.AddRule'))"/>
+            </nobr>
+          </td>
+          <td class="formcolumncell" colspan="4"></td>
+        </tr>
+        <tr class="formrow"><td colspan="5" class="formseparator"><hr/></td></tr>
+        <tr class="formrow">
+          <td class="formcolumncell">
+            <nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.NewRule'))</nobr>
+          </td>
+          <td class="formcolumncell">
+            <nobr>
+              <input type="hidden" name="metapathop" value=""/>
+              <input type="hidden" name="metapath" value="$Encoder.attributeEscape($METAPATHSOFAR)"/>
+              <input type="hidden" name="metapathstate" value="$METAPATHSTATE"/>
+  #if($METAPATHLIBRARY)
+              <input type="hidden" name="metapathlibrary" value="$Encoder.attributeEscape($METAPATHLIBRARY)"/>
+  #end
+              $Encoder.bodyEscape($METAPATHSOFAR)
+            </nobr>
+          </td>
+          <td class="formcolumncell">
+            <nobr>
+              <select name="metaflavor" size="2">
+                <option value="include" selected="true">$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.Include'))</option>
+                <option value="exclude">$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.Exclude'))</option>
+              </select>
+            </nobr>
+          </td>
+          <td class="formcolumncell">
+            <nobr>
+              <input type="checkbox" name="metaall" value="true"/>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.IncludeAllMetadata'))
+            </nobr>
+          </td>
+          <td class="formcolumncell">
+  #if($METAFIELDLIST && $METAFIELDLIST.size() > 0)
+            <nobr>
+              <select name="metafields" multiple="true" size="5">
+    #foreach($field in $METAFIELDLIST)
+                <option value="$Encoder.attributeEscape($field.getValue())">$Encoder.bodyEscape($field.getPrettyName())</option>
+    #end
+              </select>
+            </nobr>
+  #end
+          </td>
+        </tr>
+
+        <tr class="formrow"><td colspan="5" class="formseparator"><hr/></td></tr>
+
+        <tr class="formrow">
+  #if($METAMESSAGE)
+          <td class="formcolumnmessage" colspan="5">$Encoder.bodyEscape($METAMESSAGE)</td>
+  #else
+          <td class="formcolumncell">
+            <nobr>
+              <a name="metapathwidget"/>
+              <input type="button" value="$Encoder.attributeEscape($ResourceBundle.getString('SharePointRepository.ResetPath'))" onClick='Javascript:MetaPathReset("metapathwidget")' alt="$Encoder.attributeEscape($ResourceBundle.getString('SharePointRepository.ResetMetadataRulePath'))"/>
+    #if($METAPATHSOFAR.length() > 1 && ($METAPATHSTATE == 'site' || $METAPATHSTATE == 'library'))
+              <input type="button" value="-" onClick='Javascript:MetaPathRemove("metapathwidget")' alt="$Encoder.attributeEscape($ResourceBundle.getString('SharePointRepository.RemoveFromMetadataRulePath'))"/>
+    #end
+            </nobr>
+          </td>
+          <td class="formcolumncell" colspan="4">
+            <nobr>
+    #if($METAPATHSTATE == 'site' && $METACHILDSITELIST && $METACHILDSITELIST.size() > 0)
+              <input type="button" value="$Encoder.attributeEscape($ResourceBundle.getString('SharePointRepository.AddSite'))" onClick='Javascript:MetaPathAppendSite("metapathwidget")' alt="$Encoder.attributeEscape($ResourceBundle.getString('SharePointRepository.AddSiteToMetadataRulePath'))"/>
+              <select name="metasite" size="5">
+                <option value="" selected="true">$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.SelectSite'))</option>
+      #foreach($child in $METACHILDSITELIST)
+                <option value="$Encoder.attributeEscape($child.getValue())">$Encoder.bodyEscape($child.getPrettyName())</option>
+      #end
+              </select>
+    #end
+        
+    #if($METAPATHSTATE == 'site' && $METACHILDLIBLIST && $METACHILDLIBLIST.size() > 0)
+              <input type="button" value="$Encoder.attributeEscape($ResourceBundle.getString('SharePointRepository.AddLibrary'))" onClick='Javascript:MetaPathAppendLibrary("metapathwidget")' alt="$Encoder.attributeEscape($ResourceBundle.getString('SharePointRepository.AddLibraryToMetadataRulePath'))"/>
+              <select name="metalibrary" size="5">
+                <option value="" selected="true">$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.SelectLibrary'))</option>
+      #foreach($child in $METACHILDLIBLIST)
+                <option value="$Encoder.attributeEscape($child.getValue())">$Encoder.bodyEscape($child.getPrettyName())</option>
+      #end
+              </select>
+    #end
+
+    #if($METAPATHSTATE == 'site' && $METACHILDLISTLIST && $METACHILDLISTLIST.size() > 0)
+              <input type="button" value="$Encoder.attributeEscape($ResourceBundle.getString('SharePointRepository.AddList'))" onClick='Javascript:MetaPathAppendList("metapathwidget")' alt="$Encoder.attributeEscape($ResourceBundle.getString('SharePointRepository.AddListToMetadataRulePath'))"/>
+              <select name="metalist" size="5">
+                <option value="" selected="true">$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.SelectList'))</option>
+      #foreach($child in $METACHILDLISTLIST)
+                <option value="$Encoder.attributeEscape($child.getValue())">$Encoder.bodyEscape($child.getPrettyName())</option>
+      #end
+              </select>
+    #end
+
+    #if($METAPATHSTATE != 'list')
+              <input type="button" value="$Encoder.attributeEscape($ResourceBundle.getString('SharePointRepository.AddText'))" onClick='Javascript:MetaPathAppendText("metapathwidget")' alt="$Encoder.attributeEscape($ResourceBundle.getString('SharePointRepository.AddTextToMetadataRulePath'))"/>
+              <input type="text" name="metamatch" size="32" value=""/>
+    #end
+            </nobr>
+          </td>
+  #end
+        </tr>
+      </table>
+    </td>
+  </tr>
+  
+  <tr><td class="separator" colspan="4"><hr/></td></tr>
+  
+  <tr>
+    <td class="description" colspan="1"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.PathMetadata'))</nobr></td>
+    <td class="boxcell" colspan="3">
+      <table class="displaytable">
+        <tr>
+          <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.AttributeName'))</nobr></td>
+          <td class="value" colspan="3">
+            <nobr>
+              <input type="text" name="specpathnameattribute" size="20" value="$Encoder.attributeEscape($PATHNAMEATTRIBUTE)"/>
+            </nobr>
+          </td>
+        </tr>
+        
+        <tr><td class="separator" colspan="4"><hr/></td></tr>
+  
+  #set($mapcounter = 0)
+  #foreach($mapitem in $MAPLIST)
+        <tr>
+          <td class="description">
+            <input type="hidden" name="specmappingop_$mapcounter" value=""/>
+            <a name="mapping_$mapcounter">
+              <input type="button" onClick='Javascript:SpecOp("specmappingop_$mapcounter","Delete","mapping_$mapcounter")' alt="$Encoder.attributeEscape($ResourceBundle.getString('SharePointRepository.DeleteMapping'))$mapcounter" value="$Encoder.attributeEscape($ResourceBundle.getString('SharePointRepository.DeletePathMapping'))"/>
+            </a>
+          </td>
+          <td class="value">
+            <nobr>
+              <input type="hidden" name="specmatch_$mapcounter" value="$Encoder.attributeEscape($mapitem.get('MATCH'))"/>
+              $Encoder.bodyEscape($mapitem.get('MATCH'))
+            </nobr>
+          </td>
+          <td class="value">==></td>
+          <td class="value">
+            <nobr>
+              <input type="hidden" name="specreplace_$mapcounter" value="$Encoder.attributeEscape($mapitem.get('REPLACE'))"/>
+              $Encoder.bodyEscape($mapitem.get('REPLACE'))
+            </nobr>
+          </td>
+        </tr>
+    #set($mapcounter = $mapcounter + 1)
+  #end
+  
+  #if($mapcounter == 0)
+        <tr><td colspan="4" class="message">$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.NoMappingsSpecified'))</td></tr>
+  #end
+  
+        <tr><td class="lightseparator" colspan="4"><hr/></td></tr>
+
+  #set($mapcounterplusone = $mapcounter + 1)
+  
+        <tr>
+          <td class="description">
+            <a name="mapping_$mapcounter">
+              <input type="button" onClick='Javascript:SpecAddMapping("mapping_$mapcounterplusone")' alt="$Encoder.attributeEscape($ResourceBundle.getString('SharePointRepository.AddToMappings'))" value="$Encoder.attributeEscape($ResourceBundle.getString('SharePointRepository.AddPathMapping'))"/>
+            </a>
+          </td>
+          <td class="value"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.MatchRegexp'))&nbsp;<input type="text" name="specmatch" size="32" value=""/></nobr></td>
+          <td class="value">==></td>
+          <td class="value"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.ReplaceString'))&nbsp;<input type="text" name="specreplace" size="32" value=""/></nobr></td>
+        </tr>
+      </table>
+    </td>
+  </tr>
+</table>
+
+#else
+
+  #set($rulecounter = 0)
+  #foreach($metadatarule in $METADATARULES)
+
+<input type="hidden" name="metapath_$rulecounter" value="$Encoder.attributeEscape($metadatarule.get('THEPATH'))"/>
+<input type="hidden" name="metaflav_$rulecounter" value="$metadatarule.get('THEACTION')"/>
+<input type="hidden" name="metaall_$rulecounter" value="$metadatarule.get('ALLFLAG')"/>
+    #foreach($field in $metadatarule.get('FIELDLIST'))
+<input type="hidden" name="metafields_$rulecounter" value="$Encoder.attributeEscape($field)"/>
+    #end
+     
+    #set($rulecounter = $rulecounter + 1)
+  #end
+  
+<input type="hidden" name="metapathcount" value="$rulecounter"/>
+<input type="hidden" name="specpathnameattribute" value="$Encoder.attributeEscape($PATHNAMEATTRIBUTE)"/>
+  
+  #set($mapcounter = 0)
+  #foreach($mapping in $MAPLIST)
+<input type="hidden" name="specmatch_$mapcounter" value="$Encoder.attributeEscape($mapping.get('MATCH'))"/>
+<input type="hidden" name="specreplace_$mapcounter" value="$Encoder.attributeEscape($mapping.get('REPLACE'))"/>
+    #set($mapcounter = $mapcounter + 1)
+  #end
+
+<input type="hidden" name="specmappingcount" value="$mapcounter"/>
+
+#end
diff --git a/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/sharepoint/editSpecification_Paths.html b/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/sharepoint/editSpecification_Paths.html
new file mode 100644
index 0000000..71d10e0
--- /dev/null
+++ b/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/sharepoint/editSpecification_Paths.html
@@ -0,0 +1,235 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+#if($TabName == $ResourceBundle.getString('SharePointRepository.Paths'))
+
+<table class="displaytable">
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.PathRules'))</nobr></td>
+    <td class="boxcell">
+      <table class="formtable">
+        <tr class="formheaderrow">
+          <td class="formcolumnheader"></td>
+          <td class="formcolumnheader"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.PathMatch'))</nobr></td>
+          <td class="formcolumnheader"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.Type'))</nobr></td>
+          <td class="formcolumnheader"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.Action'))</nobr></td>
+        </tr>
+        
+  #set($rulecounter = 0)
+  #set($rownumber = 0)
+  #foreach($rule in $RULES)
+  
+    #if(($rownumber % 2) == 0)
+        <tr class="evenformrow">
+    #else
+        <tr class="oddformrow">
+    #end
+          <td class="formcolumncell">
+            <nobr>
+              <a name="path_$rulecounter"/>
+              <input type="hidden" name="specop_$rulecounter" value=""/>
+              <input type="button" value="$Encoder.attributeEscape($ResourceBundle.getString('SharePointRepository.InsertNewRule'))" onClick='Javascript:SpecOp("specop_$rulecounter","Insert Here","path_$rulecounter")' alt="$Encoder.attributeEscape($ResourceBundle.getString('SharePointRepository.InsertNewRuleBeforeRule'))$rulecounter"/>
+            </nobr>
+          </td>
+          <td class="formcolumncell" colspan="3"></td>
+        </tr>
+        
+    #set($rownumber = $rownumber + 1)
+    
+    #if(($rownumber % 2) == 0)
+        <tr class="evenformrow">
+    #else
+        <tr class="oddformrow">
+    #end
+          <td class="formcolumncell">
+            <nobr>
+              <input type="button" value="$Encoder.attributeEscape($ResourceBundle.getString('SharePointRepository.Delete'))" onClick='Javascript:SpecOp("specop_$rulecounter","Delete","path_$rulecounter")' alt="$Encoder.attributeEscape($ResourceBundle.getString('SharePointRepository.DeleteRule'))$rulecounter"/>
+            </nobr>
+          </td>
+          <td class="formcolumncell">
+            <nobr>
+              <input type="hidden" name="specpath_$rulecounter" value="$Encoder.attributeEscape($rule.get('THEPATH'))"/>
+              $Encoder.bodyEscape($rule.get('THEPATH'))
+            </nobr>
+          </td>
+          <td class="formcolumncell">
+            <nobr>
+              <input type="hidden" name="spectype_$rulecounter" value="$rule.get('THETYPE')"/>
+              $rule.get('THETYPE')
+            </nobr>
+          </td>
+          <td class="formcolumncell">
+            <nobr>
+              <input type="hidden" name="specflav_$rulecounter" value="$rule.get('THEACTION')"/>
+              $rule.get('THEACTION')
+            </nobr>
+          </td>
+        </tr>
+        
+    #set($rownumber = $rownumber + 1)
+    #set($rulecounter = $rulecounter + 1)
+  #end
+  
+  #if($rulecounter == 0)
+        <tr class="formrow"><td colspan="4" class="formcolumnmessage">$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.NoDocumentsCurrentlyIncluded'))</td></tr>
+  #end
+  
+  #if(($rownumber % 2) == 0)
+        <tr class="evenformrow">
+  #else
+        <tr class="oddformrow">
+  #end
+          <td class="formcolumncell">
+            <nobr>
+              <a name="path_$rulecounter"/>
+              <input type="hidden" name="specop" value=""/>
+              <input type="hidden" name="specpathcount" value="$rulecounter"/>
+              <input type="button" value="$Encoder.attributeEscape($ResourceBundle.getString('SharePointRepository.AddNewRule'))" onClick='Javascript:SpecRuleAddPath("path_$rulecounter")' alt="$Encoder.attributeEscape($ResourceBundle.getString('SharePointRepository.AddRule'))"/>
+            </nobr>
+          </td>
+          <td class="formcolumncell" colspan="3"></td>
+        </tr>
+        
+        <tr class="formrow"><td colspan="4" class="formseparator"><hr/></td></tr>
+        
+        <tr class="formrow">
+          <td class="formcolumncell">
+            <nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.NewRule'))</nobr>
+          </td>
+          <td class="formcolumncell">
+            <nobr>
+              <input type="hidden" name="specpathop" value=""/>
+              <input type="hidden" name="specpath" value="$Encoder.attributeEscape($PATHSOFAR)"/>
+              <input type="hidden" name="specpathstate" value="$PATHSTATE"/>
+  #if($PATHLIBRARY)
+              <input type="hidden" name="specpathlibrary" value="$Encoder.attributeEscape($PATHLIBRARY)"/>
+  #end
+              $Encoder.bodyEscape($PATHSOFAR)
+            </nobr>
+          </td>
+          <td class="formcolumncell">
+            <nobr>
+
+  #if($PATHSTATE == 'unknown')
+  
+    #if(!$PATHLIBRARY)
+              <select name="spectype" size="4">
+                <option value="file" selected="true">$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.File'))</option>
+                <option value="library">$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.Library'))</option>
+                <option value="list">$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.List'))</option>
+                <option value="site">$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.Site'))</option>
+              </select>
+    #else
+              <input type="hidden" name="spectype" value="file"/>
+              file
+    #end
+
+  #else
+              <input type="hidden" name="spectype" value="$Encoder.attributeEscape($PATHSTATE)"/>
+              $PATHSTATE
+  #end
+
+            </nobr>
+          </td>
+          <td class="formcolumncell">
+            <nobr>
+              <select name="specflavor" size="2">
+                <option value="include" selected="true">$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.Include'))</option>
+                <option value="exclude">$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.Exclude'))</option>
+              </select>
+            </nobr>
+          </td>
+        </tr>
+        <tr class="formrow"><td colspan="4" class="formseparator"><hr/></td></tr>
+        <tr class="formrow">
+        
+  #if($MESSAGE)
+          <td class="formcolumnmessage" colspan="4">$Encoder.bodyEscape($MESSAGE)</td>
+  #else
+  
+          <td class="formcolumncell">
+            <nobr>
+              <a name="pathwidget"/>
+              <input type="button" value="Reset Path" onClick='Javascript:SpecPathReset("pathwidget")' alt="$Encoder.attributeEscape($ResourceBundle.getString('SharePointRepository.ResetRulePath'))"/>
+    #if($PATHSOFAR.length() > 1 && ($PATHSTATE == 'site' || $PATHSTATE == 'library' || $PATHSTATE == 'list'))
+              <input type="button" value="-" onClick='Javascript:SpecPathRemove("pathwidget")' alt="$Encoder.attributeEscape($ResourceBundle.getString('SharePointRepository.RemoveFromRulePath'))"/>
+    #end
+            </nobr>
+          </td>
+          <td class="formcolumncell" colspan="3">
+            <nobr>
+    #if($PATHSTATE == 'site' && $CHILDSITELIST && $CHILDSITELIST.size() != 0)
+              <input type="button" value="$Encoder.attributeEscape($ResourceBundle.getString('SharePointRepository.AddSite'))" onClick='Javascript:SpecPathAppendSite("pathwidget")' alt="$Encoder.attributeEscape($ResourceBundle.getString('SharePointRepository.AddSiteToRulePath'))"/>
+              <select name="specsite" size="5">
+                <option value="" selected="true">-- $Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.SelectSite')) --</option>
+      #foreach($sitechild in $CHILDSITELIST)
+                <option value="$Encoder.attributeEscape($sitechild.getValue())">$Encoder.bodyEscape($sitechild.getPrettyName())</option>
+      #end
+              </select>
+    #end
+    
+    #if($PATHSTATE == 'site' && $CHILDLIBLIST && $CHILDLIBLIST.size() != 0)
+              <input type="button" value="$Encoder.attributeEscape($ResourceBundle.getString('SharePointRepository.AddLibrary'))" onClick='Javascript:SpecPathAppendLibrary("pathwidget")' alt="$Encoder.attributeEscape($ResourceBundle.getString('SharePointRepository.AddLibraryToRulePath'))"/>
+              <select name="speclibrary" size="5">
+                <option value="" selected="true">-- $Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.SelectLibrary')) --</option>
+      #foreach($libchild in $CHILDLIBLIST)
+                <option value="$Encoder.attributeEscape($libchild.getValue())">$Encoder.bodyEscape($libchild.getPrettyName())</option>
+      #end
+              </select>
+    #end
+
+    #if($PATHSTATE == 'site' && $CHILDLISTLIST && $CHILDLISTLIST.size() != 0)
+              <input type="button" value="$Encoder.attributeEscape($ResourceBundle.getString('SharePointRepository.AddList'))" onClick='Javascript:SpecPathAppendList("pathwidget")' alt="$Encoder.attributeEscape($ResourceBundle.getString('SharePointRepository.AddListToRulePath'))"/>
+              <select name="speclist" size="5">
+                <option value="" selected="true">-- $Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.SelectList')) --</option>
+      #foreach($listchild in $CHILDLISTLIST)
+                <option value="$Encoder.attributeEscape($listchild.getValue())">$Encoder.bodyEscape($listchild.getPrettyName())</option>
+      #end
+              </select>
+    #end
+        
+    #if($PATHSTATE != 'list')
+              <input type="button" value="$Encoder.attributeEscape($ResourceBundle.getString('SharePointRepository.AddText'))" onClick='Javascript:SpecPathAppendText("pathwidget")' alt="$Encoder.attributeEscape($ResourceBundle.getString('SharePointRepository.AddTextToRulePath'))"/>
+              <input type="text" name="specmatch" size="32" value=""/>
+    #end
+        
+            </nobr>
+          </td>
+  #end
+  
+        </tr>
+      </table>
+    </td>
+  </tr>
+</table>
+
+#else
+
+  #set($rulecounter = 0)
+  #foreach($rule in $RULES)
+
+<input type="hidden" name="specpath_$rulecounter" value="$Encoder.attributeEscape($rule.get('THEPATH'))"/>
+<input type="hidden" name="spectype_$rulecounter" value="$rule.get('THETYPE')"/>
+<input type="hidden" name="specflav_$rulecounter" value="$rule.get('THEACTION')"/>
+      
+    #set($rulecounter = $rulecounter + 1)
+  #end
+
+<input type="hidden" name="specpathcount" value="$rulecounter"/>
+
+#end
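
The Paths tab uses a slightly different idiom: each rule row carries a hidden specop_N that the page Javascript sets to "Delete" or "Insert Here" before submitting. A hedged, self-contained sketch of how such per-row operations are typically honored on post-back follows; the Map again stands in for the posted form, and rule-node construction is elided.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hedged sketch: rebuild the rule list, dropping rows marked "Delete" and
// splicing in the new-rule fields where an "Insert Here" op appears.
// Parameter names mirror the template; the insert semantics are an assumption.
public class RuleOpsSketch {
  public static List<String> rebuildRules(Map<String, String> form) {
    List<String> rules = new ArrayList<>();
    int count = Integer.parseInt(form.getOrDefault("specpathcount", "0"));
    for (int i = 0; i < count; i++) {
      String op = form.get("specop_" + i);
      if ("Delete".equals(op))
        continue;                                 // row removed by the user
      if ("Insert Here".equals(op) && form.get("specpath") != null)
        rules.add(form.get("specpath"));          // new rule inserted before this row
      rules.add(form.get("specpath_" + i));       // the existing rule itself
    }
    return rules;
  }
}
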
diff --git a/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/sharepoint/editSpecification_Security.html b/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/sharepoint/editSpecification_Security.html
new file mode 100644
index 0000000..694f318
--- /dev/null
+++ b/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/sharepoint/editSpecification_Security.html
@@ -0,0 +1,99 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+#if($TabName == $ResourceBundle.getString('SharePointRepository.Security'))
+
+<table class="displaytable">
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.Security2'))</nobr></td>
+    <td class="value" colspan="1">
+      <nobr>
+  #if($SECURITY == 'on')
+        <input type="radio" name="specsecurity" value="on" checked="true"/>
+  #else
+        <input type="radio" name="specsecurity" value="on"/>
+  #end
+          $Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.Enabled'))&nbsp;
+  #if($SECURITY == 'off')
+        <input type="radio" name="specsecurity" value="off" checked="true"/>
+  #else
+        <input type="radio" name="specsecurity" value="off"/>
+  #end
+          $Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.Disabled'))
+      </nobr>
+    </td>
+  </tr>
+
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+
+  #set($atcounter = 0)
+  #foreach($accesstoken in $ACCESSTOKENS)
+
+  <tr>
+    <td class="description">
+      <input type="hidden" name="accessop_$atcounter" value=""/>
+      <input type="hidden" name="spectoken_$atcounter" value="$Encoder.attributeEscape($accesstoken)"/>
+      <a name="token_$atcounter">
+        <input type="button" value="$Encoder.attributeEscape($ResourceBundle.getString('SharePointRepository.Delete'))" onClick='Javascript:SpecOp("accessop_$atcounter","Delete","token_$atcounter")' alt="$Encoder.attributeEscape($ResourceBundle.getString('SharePointRepository.DeleteToken'))$atcounter"/>
+      </a>
+    </td>
+    <td class="value">
+      <nobr>$Encoder.bodyEscape($accesstoken)</nobr>
+    </td>
+  </tr>
+  
+    #set($atcounter = $atcounter + 1)
+  #end
+  #if($atcounter == 0)
+  
+  <tr>
+    <td class="message" colspan="2">$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.NoAccessTokensPresent'))</td>
+  </tr>
+  
+  #end
+  
+  <tr><td class="lightseparator" colspan="2"><hr/></td></tr>
+  <tr>
+    <td class="description">
+      <input type="hidden" name="tokencount" value="$atcounter"/>
+      <input type="hidden" name="accessop" value=""/>
+      <a name="token_$atcounter">
+        <input type="button" value="$Encoder.attributeEscape($ResourceBundle.getString('SharePointRepository.Add'))" onClick='Javascript:SpecAddAccessToken("token_$atcounter")' alt="$Encoder.attributeEscape($ResourceBundle.getString('SharePointRepository.AddAccessToken'))"/>
+      </a>
+    </td>
+    <td class="value">
+      <input type="text" size="30" name="spectoken" value=""/>
+    </td>
+  </tr>
+</table>
+
+#else
+
+<input type="hidden" name="specsecurity" value="$SECURITY"/>
+
+  #set($atcounter = 0)
+  #foreach($accesstoken in $ACCESSTOKENS)
+  
+<input type="hidden" name="spectoken_$atcounter" value="$Encoder.attributeEscape($accesstoken)"/>
+
+    #set($atcounter = $atcounter + 1)
+  #end
+
+<input type="hidden" name="tokencount" value="$atcounter"/>
+
+#end
diff --git a/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/sharepoint/sharepoint-client-config.wsdd b/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/sharepoint/sharepoint-client-config.wsdd
deleted file mode 100644
index c66373a..0000000
--- a/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/sharepoint/sharepoint-client-config.wsdd
+++ /dev/null
@@ -1,29 +0,0 @@
-<!-- Licensed to the Apache Software Foundation (ASF) under one or more
-     contributor license agreements. See the NOTICE file distributed with
-     this work for additional information regarding copyright ownership.
-     The ASF licenses this file to You under the Apache License, Version 2.0
-     (the "License"); you may not use this file except in compliance with
-     the License. You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
-     Unless required by applicable law or agreed to in writing, software
-     distributed under the License is distributed on an "AS IS" BASIS,
-     WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-     See the License for the specific language governing permissions and
-     limitations under the License.
--->
-
-<deployment name="defaultClientConfig" 
- xmlns="http://xml.apache.org/axis/wsdd/" 
- xmlns:java="http://xml.apache.org/axis/wsdd/providers/java"> 
-  <globalConfiguration> 
-    <parameter name="disablePrettyXML" value="true"/> 
-  </globalConfiguration> 
-  <transport name="http" pivot="java:org.apache.manifoldcf.crawler.connectors.sharepoint.CommonsHTTPSender"> 
-    <parameter name="SO_TIMEOUT" locked="false">60000</parameter>
-  </transport>
-  <!-- transport name="http" pivot="java:org.apache.axis.transport.http.HTTPSender"/ --> 
-  <transport name="local" pivot="java:org.apache.axis.transport.local.LocalSender"/> 
-  <transport name="java" pivot="java:org.apache.axis.transport.java.JavaSender"/> 
-</deployment>
\ No newline at end of file
diff --git a/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/sharepoint/viewConfiguration.html b/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/sharepoint/viewConfiguration.html
new file mode 100644
index 0000000..d9c475f
--- /dev/null
+++ b/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/sharepoint/viewConfiguration.html
@@ -0,0 +1,102 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<table class="displaytable">
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.ServerSharePointVersion'))</nobr></td>
+    <td class="value">
+#if($SERVERVERSION == '2.0')
+        SharePoint Services 2.0 (2003)
+#elseif($SERVERVERSION == '3.0')
+        SharePoint Services 3.0 (2007)
+#elseif($SERVERVERSION == '4.0')
+        SharePoint Services 4.0 (2010)
+#else
+        Unknown
+#end
+    </td>
+  </tr>
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.ServerProtocol'))</nobr></td>
+    <td class="value">
+      $SERVERPROTOCOL
+    </td>
+  </tr>
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.ServerName'))</nobr></td>
+    <td class="value">
+      $Encoder.bodyEscape($SERVERNAME)
+    </td>
+  </tr>
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.ServerPort'))</nobr></td>
+    <td class="value">
+      $SERVERPORT
+    </td>
+  </tr>
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.SitePath'))</nobr></td>
+    <td class="value">
+      $Encoder.bodyEscape($SERVERLOCATION)
+    </td>
+  </tr>
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.UserName'))</nobr></td>
+    <td class="value">
+      $Encoder.bodyEscape($USERNAME)
+    </td>
+  </tr>
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.Password'))</nobr></td>
+    <td class="value">
+      ********
+    </td>
+  </tr>
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.SSLCertificateList'))</nobr></td>
+    <td class="value">
+      <table class="displaytable">
+#if($CERTIFICATELIST.size() == 0)
+        <tr><td class="message" colspan="1">$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.NoCertificatesPresent'))</td></tr>
+#else
+  #foreach($certificate in $CERTIFICATELIST)
+        <tr>
+          <td class="value">
+            $Encoder.bodyEscape($certificate.get('DESCRIPTION'))
+          </td>
+        </tr>
+  #end
+#end
+      </table>
+    </td>
+  </tr>
+  
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.AuthorityTypeColon'))</nobr></td>
+    <td class="value">
+#if($AUTHORITYTYPE == 'SharePoint')
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.SharePoint'))</nobr>
+#elseif($AUTHORITYTYPE == 'ActiveDirectory')
+      <nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.ActiveDirectory'))</nobr>
+#else
+      Unknown
+#end
+    </td>
+  </tr>
+</table>
diff --git a/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/sharepoint/viewSpecification.html b/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/sharepoint/viewSpecification.html
new file mode 100644
index 0000000..c4c2e1e
--- /dev/null
+++ b/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/crawler/connectors/sharepoint/viewSpecification.html
@@ -0,0 +1,138 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<table class="displaytable">
+  <tr>
+#if($RULES.size() != 0)
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.PathRules'))</nobr></td>
+    <td class="boxcell">
+      <table class="formtable">
+        <tr class="formheaderrow">
+          <td class="formcolumnheader"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.PathMatch'))</nobr></td>
+          <td class="formcolumnheader"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.RuleType'))</nobr></td>
+          <td class="formcolumnheader"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.Action'))</nobr></td>
+        </tr>
+  
+  #set($rownumber = 0)
+  #foreach($rule in $RULES)
+    #if(($rownumber % 2) == 0)
+        <tr class="evenformrow">
+    #else
+        <tr class="oddformrow">
+    #end
+          <td class="formcolumncell"><nobr>$Encoder.bodyEscape($rule.get('THEPATH'))</nobr></td>
+          <td class="formcolumncell"><nobr>$rule.get('THETYPE')</nobr></td>
+          <td class="formcolumncell"><nobr>$rule.get('THEACTION')</nobr></td>
+        </tr>
+    #set($rownumber = $rownumber + 1)
+  #end
+      </table>
+    </td>
+#else
+    <td colspan="2" class="message"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.NoDocumentsWillBeIncluded'))</nobr></td>
+#end
+  </tr>
+  
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+
+  <tr>
+#if($METADATARULES.size() > 0)
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.Metadata2'))</nobr></td>
+    <td class="boxcell">
+      <table class="formtable">
+        <tr class="formheaderrow">
+          <td class="formcolumnheader"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.PathMatch'))</nobr></td>
+          <td class="formcolumnheader"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.Action'))</nobr></td>
+          <td class="formcolumnheader"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.AllMetadata'))</nobr></td>
+          <td class="formcolumnheader"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.Fields'))</nobr></td>
+        </tr>
+
+  #set($rownumber = 0)
+  #foreach($metadatarule in $METADATARULES)
+    #if(($rownumber % 2) == 0)
+        <tr class="evenformrow">
+    #else
+        <tr class="oddformrow">
+    #end
+          <td class="formcolumncell"><nobr>$Encoder.bodyEscape($metadatarule.get('THEPATH'))</nobr></td>
+          <td class="formcolumncell"><nobr>$metadatarule.get('THEACTION')</nobr></td>
+          <td class="formcolumncell"><nobr>$metadatarule.get('ALLFLAG')</nobr></td>
+          <td class="formcolumncell">$Encoder.bodyEscape($metadatarule.get('FIELDS'))+"</td>
+        </tr>
+    #set($rownumber = $rownumber + 1)
+  #end
+      </table>
+    </td>
+#else
+    <td colspan="2" class="message"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.NoMetadataWillBeIncluded'))</nobr></td>
+#end
+  </tr>
+
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+
+  <tr>
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.Security2'))</nobr></td>
+    <td class="value"><nobr>$Encoder.bodyEscape($SECURITY)</nobr></td>
+  </tr>
+
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+  
+  <tr>
+#if($ACCESSTOKENS.size() > 0)
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.AccessToken'))</nobr></td>
+    <td class="value">
+  #foreach($token in $ACCESSTOKENS)
+      <nobr>$Encoder.bodyEscape($token)</nobr><br/>
+  #end
+    </td>
+  </tr>
+#else
+  <tr><td class="message" colspan="2">$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.NoAccessTokensSpecified'))</td></tr>
+#end
+
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+  
+  <tr>
+#if($PATHNAMEATTRIBUTE.length() > 0)
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.PathMetadataAttributeName'))</nobr></td>
+    <td class="value"><nobr>$Encoder.bodyEscape($PATHNAMEATTRIBUTE)</nobr></td>
+#else
+    <td class="message" colspan="2">$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.NoPathNameMetadataAttributeSpecified'))</td>
+#end
+  </tr>
+
+  <tr><td class="separator" colspan="2"><hr/></td></tr>
+
+  <tr>
+#if($MAPLIST.size() > 0)
+    <td class="description"><nobr>$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.PathValueMapping'))</nobr></td>
+    <td class="value">
+      <table class="displaytable">
+  #foreach($mapitem in $MAPLIST)
+        <tr>
+          <td class="value"><nobr>$Encoder.bodyEscape($mapitem.get('MATCH'))</nobr></td>
+          <td class="value">==></td>
+          <td class="value"><nobr>$Encoder.bodyEscape($mapitem.get('REPLACE'))</nobr></td>
+        </tr>
+  #end
+      </table>
+    </td>
+#else
+    <td class="message" colspan="2">$Encoder.bodyEscape($ResourceBundle.getString('SharePointRepository.NoMappingsSpecified'))</td>
+#end
+  </tr>
+</table>
diff --git a/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/sharepoint/sharepoint-client-config.wsdd b/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/sharepoint/sharepoint-client-config.wsdd
new file mode 100644
index 0000000..0a4c02c
--- /dev/null
+++ b/connectors/sharepoint/connector/src/main/resources/org/apache/manifoldcf/sharepoint/sharepoint-client-config.wsdd
@@ -0,0 +1,29 @@
+<!-- Licensed to the Apache Software Foundation (ASF) under one or more
+     contributor license agreements. See the NOTICE file distributed with
+     this work for additional information regarding copyright ownership.
+     The ASF licenses this file to You under the Apache License, Version 2.0
+     (the "License"); you may not use this file except in compliance with
+     the License. You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+     Unless required by applicable law or agreed to in writing, software
+     distributed under the License is distributed on an "AS IS" BASIS,
+     WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+     See the License for the specific language governing permissions and
+     limitations under the License.
+-->
+
+<deployment name="defaultClientConfig" 
+ xmlns="http://xml.apache.org/axis/wsdd/" 
+ xmlns:java="http://xml.apache.org/axis/wsdd/providers/java"> 
+  <globalConfiguration> 
+    <parameter name="disablePrettyXML" value="true"/> 
+  </globalConfiguration> 
+  <transport name="http" pivot="java:org.apache.manifoldcf.sharepoint.CommonsHTTPSender"> 
+    <parameter name="SO_TIMEOUT" locked="false">60000</parameter>
+  </transport>
+  <!-- transport name="http" pivot="java:org.apache.axis.transport.http.HTTPSender"/ --> 
+  <transport name="local" pivot="java:org.apache.axis.transport.local.LocalSender"/> 
+  <transport name="java" pivot="java:org.apache.axis.transport.java.JavaSender"/> 
+</deployment>
\ No newline at end of file
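
This deployment descriptor swaps Axis's default HTTP transport for the connector's CommonsHTTPSender and pins SO_TIMEOUT at 60 seconds. A hedged sketch of how such a .wsdd is typically loaded as an Axis client EngineConfiguration is below; it is illustrative only, not the connector's actual wiring.

import java.io.InputStream;
import org.apache.axis.EngineConfiguration;
import org.apache.axis.configuration.FileProvider;

// Hedged sketch: build an EngineConfiguration from the descriptor above so that
// calls made through a generated service locator resolve the "http" transport
// to CommonsHTTPSender with the configured socket timeout.
public class WsddLoadSketch {
  public static EngineConfiguration load() {
    InputStream wsdd = WsddLoadSketch.class.getResourceAsStream(
        "/org/apache/manifoldcf/sharepoint/sharepoint-client-config.wsdd");
    return new FileProvider(wsdd);
    // A generated locator (class name assumed from wsdl2java output) would then
    // be constructed with this configuration to route calls through the custom
    // transport.
  }
}
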
diff --git a/connectors/sharepoint/pom.xml b/connectors/sharepoint/pom.xml
index e55392a..b86d54d 100644
--- a/connectors/sharepoint/pom.xml
+++ b/connectors/sharepoint/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.manifoldcf</groupId>
     <artifactId>mcf-connectors</artifactId>
-    <version>1.2-SNAPSHOT</version>
+    <version>1.5-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
 
@@ -28,21 +28,31 @@
   <name>ManifoldCF - Connectors - SharePoint</name>
 
   <build>
-    <sourceDirectory>connector/src/main/java</sourceDirectory>
-    <testSourceDirectory>connector/src/test/java</testSourceDirectory>
+    <sourceDirectory>${basedir}/connector/src/main/java</sourceDirectory>
+    <testSourceDirectory>${basedir}/connector/src/test/java</testSourceDirectory>
     <resources>
       <resource>
-        <directory>connectors/src/main/resources</directory>
+        <directory>${basedir}/connector/src/main/resources</directory>
+        <includes>
+          <include>**/*.html</include>
+          <include>**/*.js</include>
+        </includes>
       </resource>
-    </resources>
+      <resource>
+        <directory>${basedir}/connector/src/main/native2ascii</directory>
+        <includes>
+          <include>**/*.properties</include>
+        </includes>
+      </resource>
+    </resources> 
+
     <plugins>
       <plugin>
         <groupId>org.codehaus.mojo</groupId>
         <artifactId>native2ascii-maven-plugin</artifactId>
-        <version>1.0-alpha-1</version>
+        <version>1.0-beta-1</version>
         <configuration>
-            <dest>target/classes</dest>
-            <src>connector/src/main/native2ascii</src>
+            <workDir>target/classes</workDir>
         </configuration>
         <executions>
             <execution>
@@ -52,7 +62,9 @@
                 </goals>
                 <configuration>
                     <encoding>UTF8</encoding>
-                    <includes>**/*.properties</includes>
+                    <includes>
+                      <include>**/*.properties</include>
+                    </includes>
                 </configuration>
             </execution>
         </executions>
@@ -183,7 +195,7 @@
     <dependency>
       <groupId>org.apache.httpcomponents</groupId>
       <artifactId>httpclient</artifactId>
-      <version>${httpcomponent.version}</version>
+      <version>${httpcomponent.httpclient.version}</version>
     </dependency>
     <dependency>
       <groupId>commons-logging</groupId>
diff --git a/connectors/sharepoint/wsdls/DspSts.wsdl b/connectors/sharepoint/wsdls/DspSts.wsdl
index 7f50cb3..3af0a1e 100644
--- a/connectors/sharepoint/wsdls/DspSts.wsdl
+++ b/connectors/sharepoint/wsdls/DspSts.wsdl
@@ -223,10 +223,10 @@
   </wsdl:binding>
   <wsdl:service name="StsAdapter">
     <wsdl:port name="StsAdapterSoap" binding="tns:StsAdapterSoap">
-      <soap:address location="http://www.wssdemo.com/_vti_bin/DspSts.asmx" />
+      <soap:address location="http://localhost/_vti_bin/DspSts.asmx" />
     </wsdl:port>
     <wsdl:port name="StsAdapterSoap12" binding="tns:StsAdapterSoap12">
-      <soap12:address location="http://www.wssdemo.com/_vti_bin/DspSts.asmx" />
+      <soap12:address location="http://localhost/_vti_bin/DspSts.asmx" />
     </wsdl:port>
   </wsdl:service>
 </wsdl:definitions>
diff --git a/connectors/sharepoint/wsdls/Lists.wsdl b/connectors/sharepoint/wsdls/Lists.wsdl
index 9972a19..b0f0e3b 100644
--- a/connectors/sharepoint/wsdls/Lists.wsdl
+++ b/connectors/sharepoint/wsdls/Lists.wsdl
@@ -1772,10 +1772,10 @@
   </wsdl:binding>
   <wsdl:service name="Lists">
     <wsdl:port name="ListsSoap" binding="tns:ListsSoap">
-      <soap:address location="http://www.wssdemo.com/_vti_bin/Lists.asmx" />
+      <soap:address location="http://localhost/_vti_bin/Lists.asmx" />
     </wsdl:port>
     <wsdl:port name="ListsSoap12" binding="tns:ListsSoap12">
-      <soap12:address location="http://www.wssdemo.com/_vti_bin/Lists.asmx" />
+      <soap12:address location="http://localhost/_vti_bin/Lists.asmx" />
     </wsdl:port>
   </wsdl:service>
 </wsdl:definitions>
diff --git a/connectors/sharepoint/wsdls/MCPermissions.wsdl b/connectors/sharepoint/wsdls/MCPermissions.wsdl
index 9ccdcad..5ec0c1f 100755
--- a/connectors/sharepoint/wsdls/MCPermissions.wsdl
+++ b/connectors/sharepoint/wsdls/MCPermissions.wsdl
@@ -115,10 +115,10 @@
   </wsdl:binding>
   <wsdl:service name="Permissions">
     <wsdl:port name="PermissionsSoap" binding="tns:PermissionsSoap">
-      <soap:address location="http://win-0hb0c25kl3n:33445/_vti_bin/MCPermissions.asmx" />
+      <soap:address location="http://localhost/_vti_bin/MCPermissions.asmx" />
     </wsdl:port>
     <wsdl:port name="PermissionsSoap12" binding="tns:PermissionsSoap12">
-      <soap12:address location="http://win-0hb0c25kl3n:33445/_vti_bin/MCPermissions.asmx" />
+      <soap12:address location="http://localhost/_vti_bin/MCPermissions.asmx" />
     </wsdl:port>
   </wsdl:service>
 </wsdl:definitions>
\ No newline at end of file
diff --git a/connectors/sharepoint/wsdls/Permissions.wsdl b/connectors/sharepoint/wsdls/Permissions.wsdl
index ad602aa..2dc4f8a 100644
--- a/connectors/sharepoint/wsdls/Permissions.wsdl
+++ b/connectors/sharepoint/wsdls/Permissions.wsdl
@@ -296,10 +296,10 @@
   </wsdl:binding>
   <wsdl:service name="Permissions">
     <wsdl:port name="PermissionsSoap" binding="tns:PermissionsSoap">
-      <soap:address location="http://www.wssdemo.com/_vti_bin/Permissions.asmx" />
+      <soap:address location="http://localhost/_vti_bin/Permissions.asmx" />
     </wsdl:port>
     <wsdl:port name="PermissionsSoap12" binding="tns:PermissionsSoap12">
-      <soap12:address location="http://www.wssdemo.com/_vti_bin/Permissions.asmx" />
+      <soap12:address location="http://localhost/_vti_bin/Permissions.asmx" />
     </wsdl:port>
   </wsdl:service>
 </wsdl:definitions>
diff --git a/connectors/sharepoint/wsdls/usergroup.wsdl b/connectors/sharepoint/wsdls/usergroup.wsdl
index 7228bbf..7343f34 100644
--- a/connectors/sharepoint/wsdls/usergroup.wsdl
+++ b/connectors/sharepoint/wsdls/usergroup.wsdl
@@ -1977,10 +1977,10 @@
   </wsdl:binding>
   <wsdl:service name="UserGroup">
     <wsdl:port name="UserGroupSoap" binding="tns:UserGroupSoap">
-      <soap:address location="http://www.wssdemo.com/_vti_bin/usergroup.asmx" />
+      <soap:address location="http://localhost/_vti_bin/usergroup.asmx" />
     </wsdl:port>
     <wsdl:port name="UserGroupSoap12" binding="tns:UserGroupSoap12">
-      <soap12:address location="http://www.wssdemo.com/_vti_bin/usergroup.asmx" />
+      <soap12:address location="http://localhost/_vti_bin/usergroup.asmx" />
     </wsdl:port>
   </wsdl:service>
 </wsdl:definitions>
diff --git a/connectors/sharepoint/wsdls/versions.wsdl b/connectors/sharepoint/wsdls/versions.wsdl
index 958b1b8..c4dc93b 100644
--- a/connectors/sharepoint/wsdls/versions.wsdl
+++ b/connectors/sharepoint/wsdls/versions.wsdl
@@ -224,10 +224,10 @@
   </wsdl:binding>
   <wsdl:service name="Versions">
     <wsdl:port name="VersionsSoap" binding="tns:VersionsSoap">
-      <soap:address location="http://www.wssdemo.com/_vti_bin/versions.asmx" />
+      <soap:address location="http://localhost/_vti_bin/versions.asmx" />
     </wsdl:port>
     <wsdl:port name="VersionsSoap12" binding="tns:VersionsSoap12">
-      <soap12:address location="http://www.wssdemo.com/_vti_bin/versions.asmx" />
+      <soap12:address location="http://localhost/_vti_bin/versions.asmx" />
     </wsdl:port>
   </wsdl:service>
 </wsdl:definitions>
diff --git a/connectors/sharepoint/wsdls/webs.wsdl b/connectors/sharepoint/wsdls/webs.wsdl
index 58a5e1e..a97ae69 100644
--- a/connectors/sharepoint/wsdls/webs.wsdl
+++ b/connectors/sharepoint/wsdls/webs.wsdl
@@ -1024,10 +1024,10 @@
   </wsdl:binding>
   <wsdl:service name="Webs">
     <wsdl:port name="WebsSoap" binding="tns:WebsSoap">
-      <soap:address location="http://www.wssdemo.com/_vti_bin/webs.asmx" />
+      <soap:address location="http://localhost/_vti_bin/webs.asmx" />
     </wsdl:port>
     <wsdl:port name="WebsSoap12" binding="tns:WebsSoap12">
-      <soap12:address location="http://www.wssdemo.com/_vti_bin/webs.asmx" />
+      <soap12:address location="http://localhost/_vti_bin/webs.asmx" />
     </wsdl:port>
   </wsdl:service>
 </wsdl:definitions>
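
The soap:address values in these WSDLs are compile-time placeholders only (now localhost instead of the old wssdemo.com demo host); the actual endpoint is supplied per SharePoint connection at runtime. A hedged sketch of the standard JAX-RPC way to override the endpoint follows; "port" is a hypothetical port object obtained from a generated Axis locator.

import javax.xml.rpc.Stub;

// Hedged sketch: point a generated SOAP port at the configured server instead of
// the placeholder address baked into the WSDL.  The path suffix matches the
// Lists.asmx service above; other services follow the same pattern.
public class EndpointOverrideSketch {
  public static void pointAt(Object port, String serverUrl) {
    ((Stub) port)._setProperty(Stub.ENDPOINT_ADDRESS_PROPERTY,
        serverUrl + "/_vti_bin/Lists.asmx");
  }
}
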
diff --git a/connectors/solr/connector/src/main/java/org/apache/manifoldcf/agents/output/solr/HttpPoster.java b/connectors/solr/connector/src/main/java/org/apache/manifoldcf/agents/output/solr/HttpPoster.java
index c885197..2d7306f 100644
--- a/connectors/solr/connector/src/main/java/org/apache/manifoldcf/agents/output/solr/HttpPoster.java
+++ b/connectors/solr/connector/src/main/java/org/apache/manifoldcf/agents/output/solr/HttpPoster.java
@@ -70,6 +70,7 @@
 import org.apache.solr.common.params.ModifiableSolrParams;
 import org.apache.solr.common.util.ContentStream;
 import org.apache.solr.common.SolrException;
+import org.apache.solr.client.solrj.impl.HttpClientUtil;
 
 
 /**
@@ -107,6 +108,7 @@
   private String idAttributeName;
   private String modifiedDateAttributeName;
   private String createdDateAttributeName;
+  private String indexedDateAttributeName;
   private String fileNameAttributeName;
   private String mimeTypeAttributeName;
   
@@ -131,7 +133,7 @@
     int zkClientTimeout, int zkConnectTimeout,
     String updatePath, String removePath, String statusPath,
     String allowAttributeName, String denyAttributeName, String idAttributeName,
-    String modifiedDateAttributeName, String createdDateAttributeName,
+    String modifiedDateAttributeName, String createdDateAttributeName, String indexedDateAttributeName,
     String fileNameAttributeName, String mimeTypeAttributeName,
     Long maxDocumentLength,
     String commitWithin)
@@ -149,6 +151,7 @@
     this.idAttributeName = idAttributeName;
     this.modifiedDateAttributeName = modifiedDateAttributeName;
     this.createdDateAttributeName = createdDateAttributeName;
+    this.indexedDateAttributeName = indexedDateAttributeName;
     this.fileNameAttributeName = fileNameAttributeName;
     this.mimeTypeAttributeName = mimeTypeAttributeName;
     
@@ -156,7 +159,7 @@
     
     try
     {
-      CloudSolrServer cloudSolrServer = new CloudSolrServer(zookeeperHosts);
+      CloudSolrServer cloudSolrServer = new CloudSolrServer(zookeeperHosts/*, new ModifiedLBHttpSolrServer(HttpClientUtil.createClient(null))*/);
       cloudSolrServer.setZkClientTimeout(zkClientTimeout);
       cloudSolrServer.setZkConnectTimeout(zkConnectTimeout);
       cloudSolrServer.setDefaultCollection(collection);
@@ -176,7 +179,7 @@
     String updatePath, String removePath, String statusPath,
     String realm, String userID, String password,
     String allowAttributeName, String denyAttributeName, String idAttributeName,
-    String modifiedDateAttributeName, String createdDateAttributeName,
+    String modifiedDateAttributeName, String createdDateAttributeName, String indexedDateAttributeName,
     String fileNameAttributeName, String mimeTypeAttributeName,
     IKeystoreManager keystoreManager, Long maxDocumentLength,
     String commitWithin)
@@ -194,6 +197,7 @@
     this.idAttributeName = idAttributeName;
     this.modifiedDateAttributeName = modifiedDateAttributeName;
     this.createdDateAttributeName = createdDateAttributeName;
+    this.indexedDateAttributeName = indexedDateAttributeName;
     this.fileNameAttributeName = fileNameAttributeName;
     this.mimeTypeAttributeName = mimeTypeAttributeName;
     
@@ -233,6 +237,7 @@
     // This one is essential to prevent us from reading from the content stream before necessary during auth, but
     // is incompatible with some proxies.
     params.setBooleanParameter(CoreProtocolPNames.USE_EXPECT_CONTINUE,true);
+    params.setIntParameter(CoreProtocolPNames.WAIT_FOR_CONTINUE,socketTimeout);
     params.setBooleanParameter(CoreConnectionPNames.TCP_NODELAY,true);
     params.setBooleanParameter(CoreConnectionPNames.STALE_CONNECTION_CHECK,true);
     params.setBooleanParameter(ClientPNames.ALLOW_CIRCULAR_REDIRECTS,true);
@@ -398,7 +403,11 @@
     if (code == 500)
     {
       long currentTime = System.currentTimeMillis();
-      throw new ServiceInterruption("Solr exception during "+context+" ("+e.code()+"): "+e.getMessage(),
+      
+      // Log the error
+      String message = "Solr exception during "+context+" ("+e.code()+"): "+e.getMessage();
+      Logging.ingest.warn(message,e);
+      throw new ServiceInterruption(message,
         e,
         currentTime + interruptionRetryTime,
         currentTime + 2L * 60L * 60000L,
@@ -435,20 +444,26 @@
       if (e.getMessage().toLowerCase(Locale.ROOT).indexOf("broken pipe") != -1 ||
         e.getMessage().toLowerCase(Locale.ROOT).indexOf("connection reset") != -1 ||
         e.getMessage().toLowerCase(Locale.ROOT).indexOf("target server failed to respond") != -1)
+      {
         // Treat it as a service interruption, but with a limited number of retries.
         // In that way we won't burden the user with a huge retry interval; it should
         // give up fairly quickly, and yet NOT give up if the error was merely transient
-        throw new ServiceInterruption("Server dropped connection during "+context+": "+e.getMessage(),
+        String message = "Server dropped connection during "+context+": "+e.getMessage();
+        Logging.ingest.warn(message,e);
+        throw new ServiceInterruption(message,
           e,
           currentTime + interruptionRetryTime,
           -1L,
           3,
           false);
+      }
       
       // Other socket exceptions are service interruptions - but if we keep getting them, it means 
       // that a socket timeout is probably set too low to accept this particular document.  So
       // we retry for a while, then skip the document.
-      throw new ServiceInterruption("Socket timeout exception during "+context+": "+e.getMessage(),
+      String message2 = "Socket timeout exception during "+context+": "+e.getMessage();
+      Logging.ingest.warn(message2,e);
+      throw new ServiceInterruption(message2,
         e,
         currentTime + interruptionRetryTime,
         currentTime + 20L * 60000L,
@@ -457,7 +472,9 @@
     }
     
     // Otherwise, no idea what the trouble is, so presume that retries might fix it.
-    throw new ServiceInterruption("IO exception during "+context+": "+e.getMessage(),
+    String message3 = "IO exception during "+context+": "+e.getMessage();
+    Logging.ingest.warn(message3,e);
+    throw new ServiceInterruption(message3,
       e,
       currentTime + interruptionRetryTime,
       currentTime + 2L * 60L * 60000L,
@@ -476,7 +493,7 @@
   * @throws ManifoldCFException, ServiceInterruption
   */
   public boolean indexPost(String documentURI,
-    RepositoryDocument document, Map arguments, Map sourceTargets,
+    RepositoryDocument document, Map arguments, Map<String, List<String>> sourceTargets,
     String authorityNameString, IOutputAddActivity activities)
     throws ManifoldCFException, ServiceInterruption
   {
@@ -752,7 +769,7 @@
     protected String documentURI;
     protected RepositoryDocument document;
     protected Map<String,List<String>> arguments;
-    protected Map<String,String> sourceTargets;
+    protected Map<String,List<String>> sourceTargets;
     protected String[] shareAcls;
     protected String[] shareDenyAcls;
     protected String[] acls;
@@ -768,7 +785,7 @@
     protected boolean rval = false;
 
     public IngestThread(String documentURI, RepositoryDocument document,
-      Map<String,List<String>> arguments, Map<String,String> sourceTargets,
+      Map<String,List<String>> arguments, Map<String, List<String>> sourceTargets,
       String[] shareAcls, String[] shareDenyAcls, String[] acls, String[] denyAcls, String commitWithin)
     {
       super();
@@ -820,6 +837,13 @@
               // Write value
               writeField(out,LITERAL+createdDateAttributeName,DateParser.formatISO8601Date(date));
           }
+          if (indexedDateAttributeName != null)
+          {
+            Date date = document.getIndexingDate();
+            if (date != null)
+              // Write value
+              writeField(out,LITERAL+indexedDateAttributeName,DateParser.formatISO8601Date(date));
+          }
           if (fileNameAttributeName != null)
           {
             String fileName = document.getFileName();
@@ -849,15 +873,26 @@
           while (iter.hasNext())
           {
             String fieldName = iter.next();
-            String newFieldName = sourceTargets.get(fieldName);
-            if (newFieldName == null)
-              newFieldName = fieldName;
-            if (newFieldName.length() > 0)
-            {
-              if (newFieldName.toLowerCase(Locale.ROOT).equals(idAttributeName.toLowerCase(Locale.ROOT)))
-                newFieldName = ID_METADATA;
-              String[] values = document.getFieldAsStrings(fieldName);
-              writeField(out,LITERAL+newFieldName,values);
+            List<String> mapping = sourceTargets.get(fieldName);
+            if(mapping != null) {
+              for(String newFieldName : mapping) {
+                if(newFieldName != null && !newFieldName.isEmpty()) {
+                  if (newFieldName.toLowerCase(Locale.ROOT).equals(idAttributeName.toLowerCase(Locale.ROOT))) {
+                    newFieldName = ID_METADATA;
+                  }
+                  String[] values = document.getFieldAsStrings(fieldName);
+                  writeField(out,LITERAL+newFieldName,values);
+                }
+              }
+            } else {
+              String newFieldName = fieldName;
+              if (!newFieldName.isEmpty()) {
+                if (newFieldName.toLowerCase(Locale.ROOT).equals(idAttributeName.toLowerCase(Locale.ROOT))) {
+                  newFieldName = ID_METADATA;
+                }
+                String[] values = document.getFieldAsStrings(fieldName);
+                writeField(out,LITERAL+newFieldName,values);
+              }
             }
           }
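
The HttpPoster change above widens sourceTargets from Map<String,String> to Map<String,List<String>>, so one crawled field can now be written to several Solr fields; an unmapped field keeps its own name, an empty target suppresses the field, and a target equal to the configured id attribute is folded into the id field. A hedged, self-contained sketch of that resolution logic (illustrative names, not the connector's API):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Locale;
import java.util.Map;

// Hedged sketch of the new multi-target field mapping behavior.
public class FieldMappingSketch {
  static List<String> resolveTargets(String fieldName,
                                     Map<String, List<String>> sourceTargets,
                                     String idAttributeName) {
    List<String> out = new ArrayList<>();
    List<String> mapping = sourceTargets.get(fieldName);
    List<String> candidates = (mapping != null)
        ? mapping
        : Arrays.asList(fieldName);              // unmapped: keep original name
    for (String target : candidates) {
      if (target == null || target.isEmpty())
        continue;                                // empty target: drop the field
      if (target.toLowerCase(Locale.ROOT).equals(idAttributeName.toLowerCase(Locale.ROOT)))
        target = "id";                           // stand-in for ID_METADATA
      out.add(target);
    }
    return out;
  }

  public static void main(String[] args) {
    Map<String, List<String>> map = new HashMap<>();
    map.put("title", Arrays.asList("title_s", "title_txt"));
    System.out.println(resolveTargets("title", map, "uid"));   // [title_s, title_txt]
    System.out.println(resolveTargets("author", map, "uid"));  // [author]
  }
}
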
              
diff --git a/connectors/solr/connector/src/main/java/org/apache/manifoldcf/agents/output/solr/SolrConfig.java b/connectors/solr/connector/src/main/java/org/apache/manifoldcf/agents/output/solr/SolrConfig.java
index 2c3af20..f7517b0 100644
--- a/connectors/solr/connector/src/main/java/org/apache/manifoldcf/agents/output/solr/SolrConfig.java
+++ b/connectors/solr/connector/src/main/java/org/apache/manifoldcf/agents/output/solr/SolrConfig.java
@@ -87,6 +87,8 @@
   public static final String PARAM_MODIFIEDDATEFIELD = "Solr modified date field name";
   /** Optional created date field */
   public static final String PARAM_CREATEDDATEFIELD = "Solr created date field name";
+  /** Optional indexed date field */
+  public static final String PARAM_INDEXEDDATEFIELD = "Solr indexed date field name";
   /** Optional file name field */
   public static final String PARAM_FILENAMEFIELD = "Solr filename field name";
   /** Optional mime type field */
diff --git a/connectors/solr/connector/src/main/java/org/apache/manifoldcf/agents/output/solr/SolrConnector.java b/connectors/solr/connector/src/main/java/org/apache/manifoldcf/agents/output/solr/SolrConnector.java
index a8dd20a..e85e1f3 100644
--- a/connectors/solr/connector/src/main/java/org/apache/manifoldcf/agents/output/solr/SolrConnector.java
+++ b/connectors/solr/connector/src/main/java/org/apache/manifoldcf/agents/output/solr/SolrConnector.java
@@ -110,6 +110,16 @@
     }
   }
 
+  /** This method is called to assess whether this connector instance should
+  * actually be counted as being connected.
+  *@return true if the connector instance is actually connected.
+  */
+  @Override
+  public boolean isConnected()
+  {
+    return poster != null;
+  }
+
   /** Close the connection.  Call this before discarding the connection.
   */
   @Override
@@ -160,6 +170,10 @@
       if (createdDateAttributeName == null || createdDateAttributeName.length() == 0)
         createdDateAttributeName = null;
   
+      String indexedDateAttributeName = params.getParameter(SolrConfig.PARAM_INDEXEDDATEFIELD);
+      if (indexedDateAttributeName == null || indexedDateAttributeName.length() == 0)
+        indexedDateAttributeName = null;
+
       String fileNameAttributeName = params.getParameter(SolrConfig.PARAM_FILENAMEFIELD);
       if (fileNameAttributeName == null || fileNameAttributeName.length() == 0)
         fileNameAttributeName = null;
@@ -273,7 +287,7 @@
             connectTimeout,socketTimeout,
             updatePath,removePath,statusPath,realm,userID,password,
             allowAttributeName,denyAttributeName,idAttributeName,
-            modifiedDateAttributeName,createdDateAttributeName,
+            modifiedDateAttributeName,createdDateAttributeName,indexedDateAttributeName,
             fileNameAttributeName,mimeTypeAttributeName,
             keystoreManager,maxDocumentLength,commitWithin);
           
@@ -328,7 +342,7 @@
             zkClientTimeout,zkConnectTimeout,
             updatePath,removePath,statusPath,
             allowAttributeName,denyAttributeName,idAttributeName,
-            modifiedDateAttributeName,createdDateAttributeName,
+            modifiedDateAttributeName,createdDateAttributeName,indexedDateAttributeName,
             fileNameAttributeName,mimeTypeAttributeName,
             maxDocumentLength,commitWithin);
           
@@ -420,6 +434,8 @@
   public String getOutputDescription(OutputSpecification spec)
     throws ManifoldCFException, ServiceInterruption
   {
+    getSession();
+    
     StringBuilder sb = new StringBuilder();
 
     // All the arguments need to go into this string, since they affect ingestion.
@@ -459,6 +475,7 @@
     {
       String name = sortArray[i++];
       ArrayList values = (ArrayList)args.get(name);
+      java.util.Collections.sort(values);
       int j = 0;
       while (j < values.size())
       {
@@ -473,45 +490,52 @@
     
     packList(sb,nameValues,'+');
     
-    Map fieldMap = new HashMap();
+    // Do the source/target pairs
     i = 0;
-    while (i < spec.getChildCount())
-    {
+    Map<String, List<String>> sourceTargets = new HashMap<String, List<String>>();
+    while (i < spec.getChildCount()) {
       SpecificationNode sn = spec.getChild(i++);
-      if (sn.getType().equals(SolrConfig.NODE_FIELDMAP))
-      {
+      if (sn.getType().equals(SolrConfig.NODE_FIELDMAP)) {
         String source = sn.getAttributeValue(SolrConfig.ATTRIBUTE_SOURCE);
         String target = sn.getAttributeValue(SolrConfig.ATTRIBUTE_TARGET);
-        if (target == null)
+        if (target == null) {
           target = "";
-        fieldMap.put(source,target);
+        }
+        List<String> list = (List<String>)sourceTargets.get(source);
+        if (list == null) {
+          list = new ArrayList<String>();
+          sourceTargets.put(source, list);
+        }
+        list.add(target);
       }
     }
     
-    sortArray = new String[fieldMap.size()];
+    sortArray = new String[sourceTargets.size()];
+    iter = sourceTargets.keySet().iterator();
     i = 0;
-    iter = fieldMap.keySet().iterator();
-    while (iter.hasNext())
-    {
+    while (iter.hasNext()) {
       sortArray[i++] = (String)iter.next();
     }
     java.util.Arrays.sort(sortArray);
     
-    ArrayList sourceTargets = new ArrayList();
-    
+    ArrayList sourceTargetsList = new ArrayList();
     i = 0;
-    while (i < sortArray.length)
-    {
+    while (i < sortArray.length) {
       String source = sortArray[i++];
-      String target = (String)fieldMap.get(source);
-      fixedList[0] = source;
-      fixedList[1] = target;
-      StringBuilder pairBuffer = new StringBuilder();
-      packFixedList(pairBuffer,fixedList,'=');
-      sourceTargets.add(pairBuffer.toString());
+      List<String> values = (List<String>)sourceTargets.get(source);
+      java.util.Collections.sort(values);
+      int j = 0;
+      while (j < values.size()) {
+        String target = (String)values.get(j++);
+        fixedList[0] = source;
+        fixedList[1] = target;
+        StringBuilder pairBuffer = new StringBuilder();
+        packFixedList(pairBuffer,fixedList,'=');
+        sourceTargetsList.add(pairBuffer.toString());
+      }
     }
     
-    packList(sb,sourceTargets,'+');
+    packList(sb,sourceTargetsList,'+');
 
     // Here, append things which we have no intention of unpacking.  This includes stuff that comes from
     // the configuration information, for instance.
@@ -556,6 +580,7 @@
   public boolean checkMimeTypeIndexable(String outputDescription, String mimeType)
     throws ManifoldCFException, ServiceInterruption
   {
+    getSession();
     if (includedMimeTypes != null && includedMimeTypes.get(mimeType) == null)
       return false;
     if (excludedMimeTypes != null && excludedMimeTypes.get(mimeType) != null)
@@ -572,6 +597,7 @@
   public boolean checkLengthIndexable(String outputDescription, long length)
     throws ManifoldCFException, ServiceInterruption
   {
+    getSession();
     if (maxDocumentLength != null && length > maxDocumentLength.longValue())
       return false;
     return super.checkLengthIndexable(outputDescription,length);
@@ -597,7 +623,7 @@
   {
     // Build the argument map we'll send.
     Map args = new HashMap();
-    Map sourceTargets = new HashMap();
+    Map<String, List<String>> sourceTargets = new HashMap<String, List<String>>();
     int index = 0;
     ArrayList nameValues = new ArrayList();
     index = unpackList(nameValues,outputDescription,index,'+');
@@ -623,11 +649,17 @@
     
     // Do the source/target pairs
     i = 0;
-    while (i < sts.size())
-    {
+    while (i < sts.size()) {
       String x = (String)sts.get(i++);
       unpackFixedList(fixedBuffer,x,0,'=');
-      sourceTargets.put(fixedBuffer[0],fixedBuffer[1]);
+      String source = fixedBuffer[0];
+      String target = fixedBuffer[1];
+      List<String> list = (List<String>)sourceTargets.get(source);
+      if (list == null) {
+        list = new ArrayList<String>();
+        sourceTargets.put(source, list);
+      }
+      list.add(target);
     }
 
     // Establish a session
@@ -1059,6 +1091,10 @@
     String createdDateField = parameters.getParameter(SolrConfig.PARAM_CREATEDDATEFIELD);
     if (createdDateField == null)
       createdDateField = "";
+
+    String indexedDateField = parameters.getParameter(SolrConfig.PARAM_INDEXEDDATEFIELD);
+    if (indexedDateField == null)
+      indexedDateField = "";
     
     String fileNameField = parameters.getParameter(SolrConfig.PARAM_FILENAMEFIELD);
     if (fileNameField == null)
@@ -1079,6 +1115,8 @@
     String password = parameters.getObfuscatedParameter(SolrConfig.PARAM_PASSWORD);
     if (password == null)
       password = "";
+    else
+      password = out.mapPasswordToKey(password);
     
     String commits = parameters.getParameter(SolrConfig.PARAM_COMMITS);
     if (commits == null)
@@ -1435,6 +1473,7 @@
       out.print(
 "<input type=\"hidden\" name=\"count_zookeeper\" value=\""+k+"\"/>\n"+
 "<input type=\"hidden\" name=\"znodepath\" value=\""+znodePath+"\"/>\n"+
+"<input type=\"hidden\" name=\"collection\" value=\""+collection+"\"/>\n"+
 "<input type=\"hidden\" name=\"zkclienttimeout\" value=\""+zkClientTimeout+"\"/>\n"+
 "<input type=\"hidden\" name=\"zkconnecttimeout\" value=\""+zkConnectTimeout+"\"/>\n"
       );
@@ -1502,6 +1541,12 @@
 "    </td>\n"+
 "  </tr>\n"+
 "  <tr>\n"+
+"    <td class=\"description\"><nobr>" + Messages.getBodyString(locale,"SolrConnector.IndexedDateFieldName") + "</nobr></td>\n"+
+"    <td class=\"value\">\n"+
+"      <input name=\"indexeddatefield\" type=\"text\" size=\"32\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(indexedDateField)+"\"/>\n"+
+"    </td>\n"+
+"  </tr>\n"+
+"  <tr>\n"+
 "    <td class=\"description\"><nobr>" + Messages.getBodyString(locale,"SolrConnector.FileNameFieldName") + "</nobr></td>\n"+
 "    <td class=\"value\">\n"+
 "      <input name=\"filenamefield\" type=\"text\" size=\"32\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(fileNameField)+"\"/>\n"+
@@ -1522,6 +1567,7 @@
 "<input type=\"hidden\" name=\"idfield\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(idField)+"\"/>\n"+
 "<input type=\"hidden\" name=\"modifieddatefield\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(modifiedDateField)+"\"/>\n"+
 "<input type=\"hidden\" name=\"createddatefield\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(createdDateField)+"\"/>\n"+
+"<input type=\"hidden\" name=\"indexeddatefield\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(indexedDateField)+"\"/>\n"+
 "<input type=\"hidden\" name=\"filenamefield\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(fileNameField)+"\"/>\n"+
 "<input type=\"hidden\" name=\"mimetypefield\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(mimeTypeField)+"\"/>\n"
       );
@@ -1811,6 +1857,10 @@
     if (createdDateField != null)
       parameters.setParameter(SolrConfig.PARAM_CREATEDDATEFIELD,createdDateField);
 
+    String indexedDateField = variableContext.getParameter("indexeddatefield");
+    if (indexedDateField != null)
+      parameters.setParameter(SolrConfig.PARAM_INDEXEDDATEFIELD,indexedDateField);
+
     String fileNameField = variableContext.getParameter("filenamefield");
     if (fileNameField != null)
       parameters.setParameter(SolrConfig.PARAM_FILENAMEFIELD,fileNameField);
@@ -1829,7 +1879,7 @@
 		
     String password = variableContext.getParameter("password");
     if (password != null)
-      parameters.setObfuscatedParameter(SolrConfig.PARAM_PASSWORD,password);
+      parameters.setObfuscatedParameter(SolrConfig.PARAM_PASSWORD,variableContext.mapKeyToPassword(password));
     
     String maxLength = variableContext.getParameter("maxdocumentlength");
     if (maxLength != null)
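
The password hunks above adopt the key-mapping pattern: when the form is rendered the stored password is replaced by an opaque key (out.mapPasswordToKey), and when the form is posted back the key is translated to the real value (variableContext.mapKeyToPassword) before being re-stored, so the real password does not round-trip through the HTML form. A minimal sketch of that round trip, using a hypothetical in-memory registry rather than ManifoldCF's real implementation:

    import java.util.*;

    public class PasswordKeySketch {
      // Hypothetical in-memory registry standing in for the real key/password mapping.
      private final Map<String, String> keyToPassword = new HashMap<String, String>();
      private int counter = 0;

      // Rendering side: never put the real password into the HTML form.
      public String mapPasswordToKey(String password) {
        String key = "key_" + (counter++);
        keyToPassword.put(key, password);
        return key;
      }

      // Posting side: translate the opaque key back before persisting.
      public String mapKeyToPassword(String key) {
        String password = keyToPassword.get(key);
        return (password != null) ? password : key;  // values the user typed in pass through unchanged
      }

      public static void main(String[] args) {
        PasswordKeySketch ctx = new PasswordKeySketch();
        String key = ctx.mapPasswordToKey("s3cret");
        System.out.println(ctx.mapKeyToPassword(key));  // s3cret
      }
    }
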
@@ -2218,21 +2268,7 @@
   public void outputSpecificationBody(IHTTPOutput out, Locale locale, OutputSpecification os, String tabName)
     throws ManifoldCFException, IOException
   {
-    // Prep for field mapping tab
-    HashMap fieldMap = new HashMap();
     int i = 0;
-    while (i < os.getChildCount())
-    {
-      SpecificationNode sn = os.getChild(i++);
-      if (sn.getType().equals(SolrConfig.NODE_FIELDMAP))
-      {
-        String source = sn.getAttributeValue(SolrConfig.ATTRIBUTE_SOURCE);
-        String target = sn.getAttributeValue(SolrConfig.ATTRIBUTE_TARGET);
-        if (target != null && target.length() == 0)
-          target = null;
-        fieldMap.put(source,target);
-      }
-    }
     
     // Field Mapping tab
     if (tabName.equals(Messages.getString(locale,"SolrConnector.SolrFieldMapping")))
@@ -2251,30 +2287,25 @@
 "        </tr>\n"
       );
 
-      String[] sourceFieldNames = new String[fieldMap.size()];
-      Iterator iter = fieldMap.keySet().iterator();
-      i = 0;
-      while (iter.hasNext())
-      {
-        sourceFieldNames[i++] = (String)iter.next();
-      }
-      java.util.Arrays.sort(sourceFieldNames);
-      
       int fieldCounter = 0;
       i = 0;
-      while (i < sourceFieldNames.length)
-      {
-        String source = sourceFieldNames[i++];
-        String target = (String)fieldMap.get(source);
-        String targetDisplay = target;
-        if (target == null)
-        {
-          target = "";
-          targetDisplay = "(remove)";
-        }
-        // It's prefix will be...
-        String prefix = "solr_fieldmapping_" + Integer.toString(fieldCounter);
-        out.print(
+      while (i < os.getChildCount()) {
+        SpecificationNode sn = os.getChild(i++);
+        if (sn.getType().equals(SolrConfig.NODE_FIELDMAP)) {
+          String source = sn.getAttributeValue(SolrConfig.ATTRIBUTE_SOURCE);
+          String target = sn.getAttributeValue(SolrConfig.ATTRIBUTE_TARGET);
+          if (target != null && target.length() == 0) {
+            target = null;
+          }
+          String targetDisplay = target;
+          if (target == null)
+          {
+            target = "";
+            targetDisplay = "(remove)";
+          }
+          // Its prefix will be...
+          String prefix = "solr_fieldmapping_" + Integer.toString(fieldCounter);
+          out.print(
 "        <tr class=\""+(((fieldCounter % 2)==0)?"evenformrow":"oddformrow")+"\">\n"+
 "          <td class=\"formcolumncell\">\n"+
 "            <a name=\""+prefix+"\">\n"+
@@ -2291,8 +2322,9 @@
 "            <nobr>"+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(targetDisplay)+"</nobr>\n"+
 "          </td>\n"+
 "        </tr>\n"
-        );
-        fieldCounter++;
+          );
+          fieldCounter++;
+        }
       }
       
       if (fieldCounter == 0)
@@ -2327,25 +2359,27 @@
     else
     {
       // Hiddens for field mapping
-      out.print(
-"<input type=\"hidden\" name=\"solr_fieldmapping_count\" value=\""+Integer.toString(fieldMap.size())+"\"/>\n"
-      );
-      Iterator iter = fieldMap.keySet().iterator();
+      i = 0;
       int fieldCounter = 0;
-      while (iter.hasNext())
-      {
-        String source = (String)iter.next();
-        String target = (String)fieldMap.get(source);
-        if (target == null)
-          target = "";
+      while (i < os.getChildCount()) {
+        SpecificationNode sn = os.getChild(i++);
+        if (sn.getType().equals(SolrConfig.NODE_FIELDMAP)) {
+          String source = sn.getAttributeValue(SolrConfig.ATTRIBUTE_SOURCE);
+          String target = sn.getAttributeValue(SolrConfig.ATTRIBUTE_TARGET);
+          if (target == null)
+            target = "";
         // Its prefix will be...
-        String prefix = "solr_fieldmapping_" + Integer.toString(fieldCounter);
-        out.print(
+          String prefix = "solr_fieldmapping_" + Integer.toString(fieldCounter);
+          out.print(
 "<input type=\"hidden\" name=\""+prefix+"_source\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(source)+"\"/>\n"+
 "<input type=\"hidden\" name=\""+prefix+"_target\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(target)+"\"/>\n"
-        );
-        fieldCounter++;
+          );
+          fieldCounter++;
+        }
       }
+      out.print(
+"<input type=\"hidden\" name=\"solr_fieldmapping_count\" value=\""+Integer.toString(fieldCounter)+"\"/>\n"
+      );
     }
 
   }
@@ -2424,27 +2458,6 @@
     // Prep for field mappings
     HashMap fieldMap = new HashMap();
     int i = 0;
-    while (i < os.getChildCount())
-    {
-      SpecificationNode sn = os.getChild(i++);
-      if (sn.getType().equals(SolrConfig.NODE_FIELDMAP))
-      {
-        String source = sn.getAttributeValue(SolrConfig.ATTRIBUTE_SOURCE);
-        String target = sn.getAttributeValue(SolrConfig.ATTRIBUTE_TARGET);
-        if (target != null && target.length() == 0)
-          target = null;
-        fieldMap.put(source,target);
-      }
-    }
-
-    String[] sourceFieldNames = new String[fieldMap.size()];
-    Iterator iter = fieldMap.keySet().iterator();
-    i = 0;
-    while (iter.hasNext())
-    {
-      sourceFieldNames[i++] = (String)iter.next();
-    }
-    java.util.Arrays.sort(sourceFieldNames);
 
     // Display field mappings
     out.print(
@@ -2461,17 +2474,19 @@
     );
 
     int fieldCounter = 0;
-    while (fieldCounter < sourceFieldNames.length)
-    {
-      String source = sourceFieldNames[fieldCounter++];
-      String target = (String)fieldMap.get(source);
-      String targetDisplay = target;
-      if (target == null)
-      {
-        target = "";
-        targetDisplay = "(remove)";
-      }
-      out.print(
+    i = 0;
+    while (i < os.getChildCount()) {
+      SpecificationNode sn = os.getChild(i++);
+      if (sn.getType().equals(SolrConfig.NODE_FIELDMAP)) {
+        String source = sn.getAttributeValue(SolrConfig.ATTRIBUTE_SOURCE);
+        String target = sn.getAttributeValue(SolrConfig.ATTRIBUTE_TARGET);
+        String targetDisplay = target;
+        if (target == null)
+        {
+          target = "";
+          targetDisplay = "(remove)";
+        }
+        out.print(
 "        <tr class=\""+(((fieldCounter % 2)==0)?"evenformrow":"oddformrow")+"\">\n"+
 "          <td class=\"formcolumncell\">\n"+
 "            <nobr>"+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(source)+"</nobr>\n"+
@@ -2480,8 +2495,9 @@
 "            <nobr>"+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(targetDisplay)+"</nobr>\n"+
 "          </td>\n"+
 "        </tr>\n"
-      );
-      fieldCounter++;
+        );
+        fieldCounter++;
+      }
     }
     
     if (fieldCounter == 0)
diff --git a/connectors/solr/connector/src/main/native2ascii/org/apache/manifoldcf/agents/output/solr/common_en_US.properties b/connectors/solr/connector/src/main/native2ascii/org/apache/manifoldcf/agents/output/solr/common_en_US.properties
index 3d64a3c..26cd27d 100644
--- a/connectors/solr/connector/src/main/native2ascii/org/apache/manifoldcf/agents/output/solr/common_en_US.properties
+++ b/connectors/solr/connector/src/main/native2ascii/org/apache/manifoldcf/agents/output/solr/common_en_US.properties
@@ -54,6 +54,7 @@
 SolrConnector.IDFieldName=ID field name:
 SolrConnector.ModifiedDateFieldName=Modified date field name:
 SolrConnector.CreatedDateFieldName=Created date field name:
+SolrConnector.IndexedDateFieldName=Indexed date field name:
 SolrConnector.FileNameFieldName=File name field name:
 SolrConnector.MimeTypeFieldName=Mime type field name:
 SolrConnector.MaximumDocumentLength=Maximum document length:
diff --git a/connectors/solr/connector/src/main/native2ascii/org/apache/manifoldcf/agents/output/solr/common_ja_JP.properties b/connectors/solr/connector/src/main/native2ascii/org/apache/manifoldcf/agents/output/solr/common_ja_JP.properties
index 6ef4f65..37b751a 100644
--- a/connectors/solr/connector/src/main/native2ascii/org/apache/manifoldcf/agents/output/solr/common_ja_JP.properties
+++ b/connectors/solr/connector/src/main/native2ascii/org/apache/manifoldcf/agents/output/solr/common_ja_JP.properties
@@ -54,6 +54,7 @@
 SolrConnector.IDFieldName=IDフィールド名:
 SolrConnector.ModifiedDateFieldName=更新日付フィールド名:
 SolrConnector.CreatedDateFieldName=作成日付フィールド名:
+SolrConnector.IndexedDateFieldName=Indexed date field name:
 SolrConnector.FileNameFieldName=ファイル名称フィールド名:
 SolrConnector.MimeTypeFieldName=MIMEタイプフィールド名:
 SolrConnector.MaximumDocumentLength=最大コンテンツ長:
diff --git a/connectors/solr/pom.xml b/connectors/solr/pom.xml
index df57432..db2c63c 100644
--- a/connectors/solr/pom.xml
+++ b/connectors/solr/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.manifoldcf</groupId>
     <artifactId>mcf-connectors</artifactId>
-    <version>1.2-SNAPSHOT</version>
+    <version>1.5-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
 
@@ -30,14 +30,22 @@
   <build>
     <sourceDirectory>${basedir}/connector/src/main/java</sourceDirectory>
     <testSourceDirectory>${basedir}/connector/src/test/java</testSourceDirectory>
+    <resources>
+      <resource>
+        <directory>${basedir}/connector/src/main/native2ascii</directory>
+        <includes>
+          <include>**/*.properties</include>
+        </includes>
+      </resource>
+    </resources> 
+
     <plugins>
       <plugin>
         <groupId>org.codehaus.mojo</groupId>
         <artifactId>native2ascii-maven-plugin</artifactId>
-        <version>1.0-alpha-1</version>
+        <version>1.0-beta-1</version>
         <configuration>
-            <dest>target/classes</dest>
-            <src>connector/src/main/native2ascii</src>
+            <workDir>target/classes</workDir>
         </configuration>
         <executions>
             <execution>
@@ -47,7 +55,9 @@
                 </goals>
                 <configuration>
                     <encoding>UTF8</encoding>
-                    <includes>**/*.properties</includes>
+                    <includes>
+                      <include>**/*.properties</include>
+                    </includes>
                 </configuration>
             </execution>
         </executions>
@@ -82,9 +92,14 @@
       <version>${solr.version}</version>
     </dependency>
     <dependency>
+      <groupId>org.apache.zookeeper</groupId>
+      <artifactId>zookeeper</artifactId>
+      <version>${zookeeper.version}</version>
+    </dependency>
+    <dependency>
       <groupId>org.apache.httpcomponents</groupId>
       <artifactId>httpclient</artifactId>
-      <version>${httpcomponent.version}</version>
+      <version>${httpcomponent.httpclient.version}</version>
     </dependency>
   </dependencies>
 </project>
diff --git a/connectors/webcrawler/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/webcrawler/IThrottledConnection.java b/connectors/webcrawler/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/webcrawler/IThrottledConnection.java
index d401159..8e8b65a 100644
--- a/connectors/webcrawler/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/webcrawler/IThrottledConnection.java
+++ b/connectors/webcrawler/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/webcrawler/IThrottledConnection.java
@@ -39,6 +39,12 @@
   public static final int FETCH_INTERRUPTED = -104;
   public static final int FETCH_UNKNOWN_ERROR = -999;
 
+  /** Check whether the connection has expired.
+  *@param currentTime is the current time, used to judge whether the connection has expired.
+  *@return true if the connection has expired and should be closed.
+  */
+  public boolean hasExpired(long currentTime);
+
   /** Begin the fetch process.
   * @param fetchType is a short descriptive string describing the kind of fetch being requested.  This
   *        is used solely for logging purposes.
@@ -113,8 +119,13 @@
   public void doneFetch(IVersionActivity activities)
     throws ManifoldCFException;
 
-  /** Close the connection.  Call this to end this server connection.
+  /** Close the connection.  Call this to return the connection to
+  * its pool.
   */
-  public void close()
-    throws ManifoldCFException;
+  public void close();
+  
+  /** Destroy the connection.  Call this to close the connection.
+  */
+  public void destroy();
+  
 }
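
The interface now distinguishes returning a connection to its pool (close()) from tearing it down (destroy()), and adds hasExpired() so that a pool can reap connections left idle too long. A minimal caller-side sketch of that lifecycle, against a hypothetical trimmed-down version of the interface covering only the methods touched here:

    public class ConnectionLifecycleSketch {
      // Hypothetical trimmed-down view of the revised interface.
      interface ThrottledConnection {
        boolean hasExpired(long currentTime);  // true if idle too long and due for shutdown
        void close();                          // return the connection to its pool
        void destroy();                        // really tear the connection down
      }

      static void fetchAndRelease(ThrottledConnection connection) {
        try {
          // ... beginFetch / executeFetch / doneFetch would happen here ...
        } finally {
          // Callers just hand the connection back; the pool later decides,
          // via hasExpired(), whether to destroy it.
          connection.close();
        }
      }

      public static void main(String[] args) {
        ThrottledConnection c = new ThrottledConnection() {
          public boolean hasExpired(long currentTime) { return false; }
          public void close() { System.out.println("returned to pool"); }
          public void destroy() { System.out.println("destroyed"); }
        };
        fetchAndRelease(c);
      }
    }
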
diff --git a/connectors/webcrawler/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/webcrawler/ThrottleDescription.java b/connectors/webcrawler/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/webcrawler/ThrottleDescription.java
index db7b063..c121568 100644
--- a/connectors/webcrawler/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/webcrawler/ThrottleDescription.java
+++ b/connectors/webcrawler/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/webcrawler/ThrottleDescription.java
@@ -33,13 +33,13 @@
 * any given bin value as much as possible.  For that reason I've organized this structure
 * accordingly.
 */
-public class ThrottleDescription
+public class ThrottleDescription implements IThrottleSpec
 {
   public static final String _rcsid = "@(#)$Id: ThrottleDescription.java 988245 2010-08-23 18:39:35Z kwright $";
 
   /** This is the hash that contains everything.  It's keyed by the regexp string itself.
   * Values are ThrottleItem's. */
-  protected HashMap patternHash = new HashMap();
+  protected Map<String,ThrottleItem> patternHash = new HashMap<String,ThrottleItem>();
 
   /** Constructor.  Build the description from the ConfigParams. */
   public ThrottleDescription(ConfigParams configData)
@@ -146,17 +146,15 @@
   }
 
   /** Given a bin name, find the max open connections to use for that bin.
-  *@return -1 if no limit found.
+  *@return Integer.MAX_VALUE if no limit found.
   */
+  @Override
   public int getMaxOpenConnections(String binName)
   {
     // Go through the regexps and match; for each match, find the maximum possible.
     int maxCount = -1;
-    Iterator iter = patternHash.keySet().iterator();
-    while (iter.hasNext())
+    for (ThrottleItem ti : patternHash.values())
     {
-      String binDescription = (String)iter.next();
-      ThrottleItem ti = (ThrottleItem)patternHash.get(binDescription);
       Integer limit = ti.getMaxOpenConnections();
       if (limit != null)
       {
@@ -169,22 +167,24 @@
         }
       }
     }
+    if (maxCount == -1)
+      maxCount = Integer.MAX_VALUE;
+    else if (maxCount == 0)
+      maxCount = 1;
     return maxCount;
   }
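
The added lines normalize the result before returning it: -1 (no matching limit) becomes Integer.MAX_VALUE, and a configured limit of 0 is bumped to 1 so callers are never handed a limit that permits no connections at all. A small sketch of just that normalization step, assuming maxCount was computed as above:

    public class MaxConnectionsSketch {
      // Normalize the raw result of scanning the throttle patterns.
      static int normalize(int maxCount) {
        if (maxCount == -1)
          return Integer.MAX_VALUE;  // no limit configured: effectively unlimited
        if (maxCount == 0)
          return 1;                  // never return a limit that starves every caller
        return maxCount;
      }

      public static void main(String[] args) {
        System.out.println(normalize(-1));  // 2147483647
        System.out.println(normalize(0));   // 1
        System.out.println(normalize(7));   // 7
      }
    }
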
 
   /** Look up minimum milliseconds per byte for a bin.
   *@return 0.0 if no limit found.
   */
+  @Override
   public double getMinimumMillisecondsPerByte(String binName)
   {
     // Go through the regexps and match; for each match, find the maximum possible.
     double minMilliseconds = 0.0;
     boolean seenSomething = false;
-    Iterator iter = patternHash.keySet().iterator();
-    while (iter.hasNext())
+    for (ThrottleItem ti : patternHash.values())
     {
-      String binDescription = (String)iter.next();
-      ThrottleItem ti = (ThrottleItem)patternHash.get(binDescription);
       Double limit = ti.getMinimumMillisecondsPerByte();
       if (limit != null)
       {
@@ -206,16 +206,14 @@
   /** Look up minimum milliseconds for a fetch for a bin.
   *@return 0 if no limit found.
   */
+  @Override
   public long getMinimumMillisecondsPerFetch(String binName)
   {
     // Go through the regexps and match; for each match, find the maximum possible.
     long minMilliseconds = 0L;
     boolean seenSomething = false;
-    Iterator iter = patternHash.keySet().iterator();
-    while (iter.hasNext())
+    for (ThrottleItem ti : patternHash.values())
     {
-      String binDescription = (String)iter.next();
-      ThrottleItem ti = (ThrottleItem)patternHash.get(binDescription);
       Long limit = ti.getMinimumMillisecondsPerFetch();
       if (limit != null)
       {
@@ -239,7 +237,7 @@
   protected static class ThrottleItem
   {
     /** The bin-matching pattern. */
-    protected Pattern pattern;
+    protected final Pattern pattern;
     /** The minimum milliseconds between bytes, or null if no limit. */
     protected Double minimumMillisecondsPerByte = null;
     /** The minimum milliseconds per fetch, or null if no limit */
diff --git a/connectors/webcrawler/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/webcrawler/ThrottledFetcher.java b/connectors/webcrawler/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/webcrawler/ThrottledFetcher.java
index b47cd49..e1f42da 100644
--- a/connectors/webcrawler/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/webcrawler/ThrottledFetcher.java
+++ b/connectors/webcrawler/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/webcrawler/ThrottledFetcher.java
@@ -19,7 +19,9 @@
 package org.apache.manifoldcf.crawler.connectors.webcrawler;
 
 import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.core.common.DeflateInputStream;
 import org.apache.manifoldcf.core.common.XThreadInputStream;
+import org.apache.manifoldcf.core.common.InterruptibleSocketFactory;
 import org.apache.manifoldcf.agents.interfaces.*;
 import org.apache.manifoldcf.crawler.interfaces.*;
 import org.apache.manifoldcf.crawler.system.Logging;
@@ -27,6 +29,7 @@
 import java.util.*;
 import java.io.*;
 import java.net.*;
+import java.util.zip.GZIPInputStream;
 import java.util.concurrent.TimeUnit;
 
 import org.apache.http.conn.ClientConnectionManager;
@@ -55,6 +58,7 @@
 import org.apache.http.HttpStatus;
 import org.apache.http.HttpHost;
 import org.apache.http.Header;
+import org.apache.http.HeaderElement;
 import org.apache.http.conn.params.ConnRoutePNames;
 import org.apache.http.message.BasicHeader;
 import org.apache.http.client.params.ClientPNames;
@@ -96,6 +100,12 @@
 {
   public static final String _rcsid = "@(#)$Id: ThrottledFetcher.java 989847 2010-08-26 17:52:30Z kwright $";
 
+  /** Web throttle group type */
+  protected static final String webThrottleGroupType = "_WEB_";
+  
+  /** Idle timeout */
+  protected static final long idleTimeout = 300000L;
+  
   /** This flag determines whether we record everything to the disk, as a means of doing a web snapshot */
   protected static final boolean recordEverything = false;
 
@@ -105,16 +115,13 @@
   protected static final long TIME_6HRS = 6L * 60L * 60000L;
   protected static final long TIME_1DAY = 24L * 60L * 60000L;
 
+  /** The read chunk length */
+  protected static final int READ_CHUNK_LENGTH = 4096;
 
-  /** This is the static pool of ConnectionBin's, keyed by bin name. */
-  protected static HashMap connectionBins = new HashMap();
-  /** This is the static pool of ThrottleBin's, keyed by bin name. */
-  protected static HashMap throttleBins = new HashMap();
-
-  /** This global lock protects the "distributed pool" resource, and insures that a connection
-  * can get pulled out of all the right pools and wind up in only the hands of one thread. */
-  protected static Integer poolLock = new Integer(0);
-
+  /** Connection pools.
+  * This is a static hash of the connection pools in existence.  Each connection pool represents a set of identical connections. */
+  protected final static Map<ConnectionPoolKey,ConnectionPool> connectionPools = new HashMap<ConnectionPoolKey,ConnectionPool>();
+  
   /** Current host name */
   private static String currentHost = null;
   static
@@ -132,17 +139,13 @@
     }
   }
 
-  /** The read chunk length */
-  protected static final int READ_CHUNK_LENGTH = 4096;
-
-  /** Constructor.
+  /** Constructor.  Private since we never instantiate.
   */
-  public ThrottledFetcher()
+  private ThrottledFetcher()
   {
   }
 
 
-
   /** Obtain a connection to specified protocol, server, and port.  We use the protocol because the
   * setup for some protocols is extensive (e.g. https) and hopefully would not need to be repeated if
   * we distinguish connections based on that.
@@ -158,15 +161,22 @@
   *@param connectionLimit is the maximum number of connections permitted.
   *@return an IThrottledConnection object that can be used to fetch from the port.
   */
-  public static IThrottledConnection getConnection(String protocol, String server, int port,
+  public static IThrottledConnection getConnection(IThreadContext threadContext, String throttleGroupName,
+    String protocol, String server, int port,
     PageCredentials authentication,
     IKeystoreManager trustStore,
-    ThrottleDescription throttleDescription, String[] binNames,
+    IThrottleSpec throttleDescription, String[] binNames,
     int connectionLimit,
     String proxyHost, int proxyPort, String proxyAuthDomain, String proxyAuthUsername, String proxyAuthPassword)
     throws ManifoldCFException
   {
-    // Create the https scheme for this connection
+    // Get a throttle groups handle
+    IThrottleGroups throttleGroups = ThrottleGroupsFactory.make(threadContext);
+    
+    // Create the appropriate throttle group, or update the throttle description for an existing one
+    throttleGroups.createOrUpdateThrottleGroup(webThrottleGroupType,throttleGroupName,throttleDescription);
+    
+    // Create the https scheme and trust store string for this connection
     javax.net.ssl.SSLSocketFactory baseFactory;
     String trustStoreString;
     if (trustStore != null)
@@ -180,757 +190,68 @@
       trustStoreString = null;
     }
 
-
-    ConnectionBin[] bins = new ConnectionBin[binNames.length];
-
-    // Now, start looking for a connection
-    int i = 0;
-    while (i < binNames.length)
+    // Construct a connection pool key
+    ConnectionPoolKey poolKey = new ConnectionPoolKey(protocol,server,port,authentication,
+      trustStoreString,proxyHost,proxyPort,proxyAuthDomain,proxyAuthUsername,proxyAuthPassword);
+    
+    ConnectionPool p;
+    synchronized (connectionPools)
     {
-      String binName = binNames[i];
-
-      // Find or create the bin object
-      ConnectionBin cb;
-      synchronized (connectionBins)
+      p = connectionPools.get(poolKey);
+      if (p == null)
       {
-        cb = (ConnectionBin)connectionBins.get(binName);
-        if (cb == null)
-        {
-          cb = new ConnectionBin(binName);
-          connectionBins.put(binName,cb);
-        }
-        //cb.sanityCheck();
+        // Construct a new IConnectionThrottler.
+        IConnectionThrottler connectionThrottler =
+          throttleGroups.obtainConnectionThrottler(webThrottleGroupType,throttleGroupName,binNames);
+        p = new ConnectionPool(connectionThrottler,protocol,server,port,authentication,baseFactory,
+          proxyHost,proxyPort,proxyAuthDomain,proxyAuthUsername,proxyAuthPassword);
+        connectionPools.put(poolKey,p);
       }
-      bins[i] = cb;
-      i++;
     }
-
-    ThrottledConnection connectionToReuse;
-
-    long startTime = 0L;
-    if (Logging.connectors.isDebugEnabled())
+    
+    try
     {
-      startTime = System.currentTimeMillis();
-      Logging.connectors.debug("WEB: Waiting to start getting a connection to "+protocol+"://"+server+":"+port);
+      return p.grab();
     }
-
-    synchronized (poolLock)
+    catch (InterruptedException e)
     {
-
-      // If the number of outstanding connections is greater than the global limit, close pooled connections until we are under the limit
-      long idleTimeout = 64000L;
-      while (true)
-      {
-        int openCount = 0;
-
-        // Lock up everything for a moment
-        synchronized (connectionBins)
-        {
-          // Time out connections that have been idle too long.  To do this, we need to go through
-          // all connection bins and look at the pool
-          Iterator binIter = connectionBins.keySet().iterator();
-          while (binIter.hasNext())
-          {
-            String binName = (String)binIter.next();
-            ConnectionBin cb = (ConnectionBin)connectionBins.get(binName);
-            openCount += cb.countConnections();
-          }
-        }
-
-        if (openCount < connectionLimit)
-          break;
-
-        if (idleTimeout == 0L)
-        {
-          // Can't actually conclude anything here unfortunately
-
-          // Logging.connectors.warn("Web: Exceeding connection limit!  Open count = "+Integer.toString(openCount)+"; limit = "+Integer.toString(connectionLimit));
-          break;
-        }
-        idleTimeout = idleTimeout/4L;
-
-        // Lock up everything for a moment, since otherwise we could delete something people
-        // expect to stick around.
-        synchronized (connectionBins)
-        {
-          // Time out connections that have been idle too long.  To do this, we need to go through
-          // all connection bins and look at the pool
-          Iterator binIter = connectionBins.keySet().iterator();
-          while (binIter.hasNext())
-          {
-            String binName = (String)binIter.next();
-            ConnectionBin cb = (ConnectionBin)connectionBins.get(binName);
-            cb.flushIdleConnections(idleTimeout);
-          }
-        }
-      }
-
-      try
-      {
-        // Retry until we get the connection.
-        while (true)
-        {
-          if (Logging.connectors.isDebugEnabled())
-            Logging.connectors.debug("WEB: Attempting to get connection to "+protocol+"://"+server+":"+port+" ("+new Long(System.currentTimeMillis()-startTime).toString()+" ms)");
-
-          i = 0;
-
-          connectionToReuse = null;
-
-          try
-          {
-
-            // Now, start looking for a connection
-            while (i < binNames.length)
-            {
-              String binName = binNames[i];
-              ConnectionBin cb = bins[i];
-
-              // Figure out the connection limit for this bin, based on the throttle description
-              int maxConnections = throttleDescription.getMaxOpenConnections(binName);
-
-              // If no restriction, use a very large value.
-              if (maxConnections == -1)
-                maxConnections = Integer.MAX_VALUE;
-              else if (maxConnections == 0)
-                maxConnections = 1;
-
-              // Now, do what we need to do to reserve our connection for this bin.
-              // If we can't reserve it now, we plan on undoing everything we did, so
-              // whatever we do must be reversible.  Furthermore, nothing we call here
-              // should actually wait(); that will occur if we can't get what we need out
-              // here at this level.
-
-              if (connectionToReuse != null)
-              {
-                // We have a reuse candidate already, so just make sure each remaining bin is within
-                // its limits.
-                cb.insureWithinLimits(maxConnections,connectionToReuse);
-              }
-              else
-              {
-                connectionToReuse = cb.findConnection(maxConnections,bins,protocol,server,port,authentication,trustStoreString,
-                  proxyHost,proxyPort,proxyAuthDomain,proxyAuthUsername,proxyAuthPassword);
-              }
-
-              // Increment after we successfully handled this bin
-              i++;
-            }
-
-            // That loop completed, meaning that we think we got a connection.  Now, go through all the bins and make sure there's enough time since the last
-            // fetch.  If not, we have to clean everything up and try again.
-            long currentTime = System.currentTimeMillis();
-
-            // Global lock needed to insure that fetch time is updated across all bins simultaneously
-            synchronized (connectionBins)
-            {
-              i = 0;
-              while (i < binNames.length)
-              {
-                String binName = binNames[i];
-                ConnectionBin cb = bins[i];
-                //cb.sanityCheck();
-                // Get the minimum time between fetches for this bin, based on the throttle description
-                long minMillisecondsPerFetch = throttleDescription.getMinimumMillisecondsPerFetch(binName);
-                if (cb.getLastFetchTime() + minMillisecondsPerFetch > currentTime)
-                  throw new WaitException(cb.getLastFetchTime() + minMillisecondsPerFetch - currentTime);
-                i++;
-              }
-              i = 0;
-              while (i < binNames.length)
-              {
-                ConnectionBin cb = bins[i++];
-                cb.setLastFetchTime(currentTime);
-              }
-            }
-
-          }
-          catch (Throwable e)
-          {
-            // We have to free everything and retry, because otherwise we are subject to deadlock.
-            // The only thing we have reserved is the connection, which we must free if there's a
-            // problem.
-
-            if (connectionToReuse != null)
-            {
-              // Return this connection to the pool.  That is, the pools for all the bins.
-              int k = 0;
-              while (k < binNames.length)
-              {
-                String binName = binNames[k++];
-                ConnectionBin cb;
-                synchronized (connectionBins)
-                {
-                  cb = (ConnectionBin)connectionBins.get(binName);
-                  if (cb == null)
-                  {
-                    cb = new ConnectionBin(binName);
-                    connectionBins.put(binName,cb);
-                  }
-                }
-                //cb.sanityCheck();
-                cb.addToPool(connectionToReuse);
-                //cb.sanityCheck();
-              }
-              connectionToReuse = null;
-              // We should not need to notify here because nothing has really changed from
-              // when the attempt started to get the connection.  We just undid what we did.
-            }
-
-
-            if (e instanceof Error)
-              throw (Error)e;
-            if (e instanceof ManifoldCFException)
-              throw (ManifoldCFException)e;
-
-            if (e instanceof WaitException)
-            {
-              // Wait because we need a certain amount of time after a previous fetch.
-              WaitException we = (WaitException)e;
-              long waitAmount = we.getWaitAmount();
-              if (Logging.connectors.isDebugEnabled())
-                Logging.connectors.debug("WEB: Waiting "+new Long(waitAmount).toString()+" ms before starting fetch on "+protocol+"://"+server+":"+port);
-              // Really don't want to sleep inside the pool lock!
-              // The easiest thing to do instead is to use a timed wait.  There is no reason why we need
-              // to wake before the wait time is exceeded - but it's harmless, and the alternative is to
-              // do more reorganization than probably is wise.
-              poolLock.wait(waitAmount);
-              continue;
-            }
-
-            if (e instanceof PoolException)
-            {
-
-              if (Logging.connectors.isDebugEnabled())
-                Logging.connectors.debug("WEB: Going into wait for connection to "+protocol+"://"+server+":"+port+" ("+new Long(System.currentTimeMillis()-startTime).toString()+" ms)");
-
-              // Now, wait for something external to change.  The only thing that can help us is if
-              // some other thread frees a connection.
-              poolLock.wait();
-              // Go back around and try again.
-              continue;
-            }
-
-            throw new ManifoldCFException("Unexpected exception encountered: "+e.getMessage(),e);
-          }
-
-          if (Logging.connectors.isDebugEnabled())
-            Logging.connectors.debug("WEB: Successfully got connection to "+protocol+"://"+server+":"+port+" ("+new Long(System.currentTimeMillis()-startTime).toString()+" ms)");
-
-          // If we have a connection located, activate it.
-          if (connectionToReuse == null)
-            connectionToReuse = new ThrottledConnection(protocol,server,port,authentication,baseFactory,trustStoreString,bins,
-              proxyHost,proxyPort,proxyAuthDomain,proxyAuthUsername,proxyAuthPassword);
-          connectionToReuse.setup(throttleDescription);
-          return connectionToReuse;
-        }
-      }
-      catch (InterruptedException e)
-      {
-        throw new ManifoldCFException(e.getMessage(),ManifoldCFException.INTERRUPTED);
-      }
+      throw new ManifoldCFException(e.getMessage(),ManifoldCFException.INTERRUPTED);
     }
   }
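
getConnection() now keys a static map of ConnectionPool objects by everything that makes two connections interchangeable (protocol, server, port, credentials, trust store, proxy settings) and creates the pool on first use under a lock. A minimal sketch of that get-or-create pattern, with a hypothetical string key and pool type standing in for ConnectionPoolKey and the real ConnectionPool:

    import java.util.*;

    public class PoolLookupSketch {
      // Hypothetical pool; the real one wraps an IConnectionThrottler and hands out connections.
      static class ConnectionPool {
        final String key;
        ConnectionPool(String key) { this.key = key; }
      }

      // Static registry of pools, guarded by its own monitor as in the patch.
      private static final Map<String, ConnectionPool> pools = new HashMap<String, ConnectionPool>();

      public static ConnectionPool getPool(String protocol, String server, int port) {
        // A plain string key stands in for the patch's ConnectionPoolKey value object.
        String key = protocol + "://" + server + ":" + port;
        synchronized (pools) {
          ConnectionPool p = pools.get(key);
          if (p == null) {
            p = new ConnectionPool(key);  // first caller for this identity creates the pool
            pools.put(key, p);
          }
          return p;
        }
      }

      public static void main(String[] args) {
        System.out.println(getPool("http", "example.com", 80) == getPool("http", "example.com", 80));  // true
      }
    }
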
 
-
   /** Flush connections that have timed out from inactivity. */
-  public static void flushIdleConnections()
+  public static void flushIdleConnections(IThreadContext threadContext)
     throws ManifoldCFException
   {
-    synchronized (poolLock)
+    // Go through outstanding connection pools and clean them up.
+    synchronized (connectionPools)
     {
-      // Lock up everything for a moment, since otherwise we could delete something people
-      // expect to stick around.
-      synchronized (connectionBins)
+      for (ConnectionPool pool : connectionPools.values())
       {
-        // Time out connections that have been idle too long.  To do this, we need to go through
-        // all connection bins and look at the pool
-        Iterator binIter = connectionBins.keySet().iterator();
-        while (binIter.hasNext())
-        {
-          String binName = (String)binIter.next();
-          ConnectionBin cb = (ConnectionBin)connectionBins.get(binName);
-          if (cb.flushIdleConnections(60000L))
-          {
-            // Bin is no longer doing anything; get rid of it.
-            // I've determined this is safe - inUseConnections is designed to prevent any active connection from getting
-            // whacked.
-            // Oops.  Hang results again when I enabled this, so out it goes again.
-            //connectionBins.remove(binName);
-            //binIter = connectionBins.keySet().iterator();
-          }
-        }
+        pool.flushIdleConnections();
       }
     }
   }
 
-  /** Connection pool for a bin.
-  * An instance of this class tracks the connections that are pooled and that are in use for a specific bin.
-  */
-  protected static class ConnectionBin
-  {
-    /** This is the bin name which this connection pool belongs to */
-    protected String binName;
-    /** This is the number of connections in this bin that are signed out and presumably in use */
-    protected int inUseConnections = 0;
-    /** This is the last time a fetch was done on this bin */
-    protected long lastFetchTime = 0L;
-    /** This object is what we synchronize on when we are waiting on a connection to free up for this
-    * bin.  This is a separate object, because we also want to protect the integrity of the
-    * ConnectionBin object itself, for which we'll use the ConnectionBin's synchronizer. */
-    protected Integer connectionWait = new Integer(0);
-    /** This map contains ThrottledConnection objects that are in the pool, and are not in use. */
-    protected HashMap freePool = new HashMap();
-
-    /** Constructor. */
-    public ConnectionBin(String binName)
-    {
-      this.binName = binName;
-    }
-
-    /** Get the bin name. */
-    public String getBinName()
-    {
-      return binName;
-    }
-
-    /** Note the creation of an active connection that belongs to this bin.  The slots all must
-    * have been reserved prior to the connection being created.
-    */
-    public synchronized void noteConnectionCreation()
-    {
-      inUseConnections++;
-    }
-
-    /** Note the destruction of an active connection that belongs to this bin.
-    */
-    public synchronized void noteConnectionDestruction()
-    {
-      inUseConnections--;
-    }
-
-
-    /** Activate a connection that should be in the pool.
-    * Removes the connection from the pool.
-    */
-    public synchronized void takeFromPool(ThrottledConnection tc)
-    {
-      // Remove this connection from the pool list
-      freePool.remove(tc);
-      inUseConnections++;
-    }
-
-    /** Put a connection into the pool.
-    */
-    public synchronized void addToPool(ThrottledConnection tc)
-    {
-      // Add this connection to the pool list
-      freePool.put(tc,tc);
-      inUseConnections--;
-    }
-
-    /** Verify that this bin is within limits.
-    */
-    public synchronized void insureWithinLimits(int maxConnections, ThrottledConnection existingConnection)
-      throws PoolException
-    {
-      //sanityCheck();
-
-      // See if the connection is in fact within the pool; if so, we just presume the limits are fine as they are.
-      // This is necessary because if the connection that's being checked for is freed, then we wreck the data structures.
-      if (existsInPool(existingConnection))
-        return;
-
-      while (maxConnections > 0 && inUseConnections + freePool.size() > maxConnections)
-      {
-        //sanityCheck();
-
-        // If there are any pool connections, free them one at a time
-        ThrottledConnection freeMe = getPoolConnection();
-        if (freeMe != null)
-        {
-          // It's okay to call activate since we guarantee that only one thread is trying to grab
-          // a connection at a time.
-          freeMe.activate();
-          freeMe.destroy();
-          continue;
-        }
-
-        // Instead of waiting, throw a pool exception, so that we can wait and retry at the next level up.
-        throw new PoolException("Waiting for a connection");
-      }
-    }
-
-    /** This method is called only when there is no existing connection yet identified that can be used
-    * for contacting the server and port specified.  This method returns a connection if a matching one can be found;
-    * otherwise it returns null.
-    * If a matching connection is found, it is activated before it is returned.  That removes the connection from all
-    * pools in which it lives.
-    */
-    public synchronized ThrottledConnection findConnection(int maxConnections,
-      ConnectionBin[] binNames, String protocol, String server, int port,
-      PageCredentials authentication, String trustStoreString,
-      String proxyHost, int proxyPort, String proxyAuthDomain, String proxyAuthUsername, String proxyAuthPassword)
-      throws PoolException
-    {
-      //sanityCheck();
-
-      // First, wait until there's no excess.
-      while (maxConnections > 0 && inUseConnections + freePool.size() > maxConnections)
-      {
-        //sanityCheck();
-        // If there are any pool connections, free them one at a time
-        ThrottledConnection freeMe = getPoolConnection();
-        if (freeMe != null)
-        {
-          // It's okay to call activate since we guarantee that only one thread is trying to grab
-          // a connection at a time.
-          freeMe.activate();
-          freeMe.destroy();
-          continue;
-        }
-
-        // Instead of waiting, throw a pool exception, so that we can wait and retry at the next level up.
-        throw new PoolException("Waiting for a connection");
-
-      }
-
-      // Wait until there's a free one
-      if (maxConnections > 0 && inUseConnections > maxConnections-1)
-      {
-        // Instead of waiting, throw a pool exception, so that we can wait and retry at the next level up.
-        throw new PoolException("Waiting for a connection");
-      }
-
-      // A null return means that there is no existing pooled connection that matches, and the  caller is free to create a new connection
-      ThrottledConnection rval = getPoolConnection();
-      if (rval == null)
-        return null;
-
-      // It's okay to call activate since we guarantee that only one thread is trying to grab
-      // a connection at a time.
-      rval.activate();
-      //sanityCheck();
-
-      if (!rval.matches(binNames,protocol,server,port,authentication,trustStoreString,
-        proxyHost,proxyPort,proxyAuthDomain,proxyAuthUsername,proxyAuthPassword))
-      {
-        // Destroy old connection.  That should free up space for a new creation.
-        rval.destroy();
-        // Return null to indicate that we can create a new connection now
-        return null;
-      }
-
-      // Existing entry matched.  Activate and return it.
-      return rval;
-    }
-
-    /** Note a new time for connection fetch for this pool.
-    *@param currentTime is the time the fetch was started.
-    */
-    public synchronized void setLastFetchTime(long currentTime)
-    {
-      if (currentTime > lastFetchTime)
-        lastFetchTime = currentTime;
-    }
-
-    /** Get the last fetch time.
-    *@return the time.
-    */
-    public synchronized long getLastFetchTime()
-    {
-      return lastFetchTime;
-    }
-
-    /** Count connections that are in use.
-    *@return connections that are in use.
-    */
-    public synchronized int countConnections()
-    {
-      return freePool.size() + inUseConnections;
-    }
-
-    /** Flush any idle connections.
-    *@return true if the connection bin is now, in fact, empty.
-    */
-    public synchronized boolean flushIdleConnections(long idleTimeout)
-    {
-      //sanityCheck();
-
-      // We have to time out the pool connections.  When there are no pool connections
-      // left, AND the in-use counts are zero, we can delete the whole thing.
-      Iterator iter = freePool.keySet().iterator();
-      while (iter.hasNext())
-      {
-        ThrottledConnection tc = (ThrottledConnection)iter.next();
-        if (tc.flushIdleConnections(idleTimeout))
-        {
-          // Can delete this connection, since it timed out.
-          tc.activate();
-          tc.destroy();
-          iter = freePool.keySet().iterator();
-        }
-      }
-
-      //sanityCheck();
-
-      return (freePool.size() == 0 && inUseConnections == 0);
-    }
-
-    /** Grab a connection from the current pool.  This does not remove the connection from the pool;
-    * it just sets it up so that later methods can do that.
-    */
-    protected ThrottledConnection getPoolConnection()
-    {
-      if (freePool.size() == 0)
-        return null;
-      Iterator iter = freePool.keySet().iterator();
-      ThrottledConnection rval = (ThrottledConnection)iter.next();
-      return rval;
-    }
-
-    /** Check if a connection exists in the pool already.
-    */
-    protected boolean existsInPool(ThrottledConnection tc)
-    {
-      return freePool.get(tc) != null;
-    }
-
-    public synchronized void sanityCheck()
-    {
-      // Make sure all the connections in the current pool in fact have a reference to this bin.
-      Iterator iter = freePool.keySet().iterator();
-      while (iter.hasNext())
-      {
-        ThrottledConnection tc = (ThrottledConnection)iter.next();
-        tc.mustHaveReference(this);
-      }
-    }
-
-  }
-
-  /** Throttles for a bin.
-  * An instance of this class keeps track of the information needed to bandwidth throttle access
-  * to a url belonging to a specific bin.
-  *
-  * In order to calculate
-  * the effective "burst" fetches per second and bytes per second, we need to have some idea what the window is.
-  * For example, a long hiatus from fetching could cause overuse of the server when fetching resumes, if the
-  * window length is too long.
-  *
-  * One solution to this problem would be to keep a list of the individual fetches as records.  Then, we could
-  * "expire" a fetch by discarding the old record.  However, this is quite memory consumptive for all but the
-  * smallest intervals.
-  *
-  * Another, better, solution is to hook into the start and end of individual fetches.  These will, presumably, occur
-  * at the fastest possible rate without long pauses spent doing something else.  The only complication is that
-  * fetches may well overlap, so we need to "reference count" the fetches to know when to reset the counters.
-  * For "fetches per second", we can simply make sure we "schedule" the next fetch at an appropriate time, rather
-  * than keep records around.  The overall rate may therefore be somewhat less than the specified rate, but that's perfectly
-  * acceptable.
-  *
-  * Some notes on the algorithms used to limit server bandwidth impact
-  * ==================================================================
-  *
-  * In a single connection case, the algorithm we'd want to use works like this.  On the first chunk of a series,
-  * the total length of time and the number of bytes are recorded.  Then, prior to each subsequent chunk, a calculation
-  * is done which attempts to hit the bandwidth target by the end of the chunk read, using the rate of the first chunk
-  * access as a way of estimating how long it will take to fetch those next n bytes.
-  *
-  * For a multi-connection case, which this is, it's harder to either come up with a good maximum bandwidth estimate,
-  * and harder still to "hit the target", because simultaneous fetches will intrude.  The strategy is therefore:
-  *
-  * 1) The first chunk of any series should proceed without interference from other connections to the same server.
-  *    The goal here is to get a decent quality estimate without any possibility of overwhelming the server.
-  *
-  * 2) The bandwidth of the first chunk is treated as the "maximum bandwidth per connection".  That is, if other
-  *    connections are going on, we can presume that each connection will use at most the bandwidth that the first fetch
-  *    took.  Thus, by generating end-time estimates based on this number, we are actually being conservative and
-  *    using less server bandwidth.
-  *
-  * 3) For chunks that have started but not finished, we keep track of their size and estimated elapsed time in order to schedule when
-  *    new chunks from other connections can start.
-  *
-  */
-  protected static class ThrottleBin
-  {
-    /** This is the bin name which this throttle belongs to. */
-    protected String binName;
-    /** This is the reference count for this bin (which records active references) */
-    protected int refCount = 0;
-    /** The inverse rate estimate of the first fetch, in ms/byte */
-    protected double rateEstimate = 0.0;
-    /** Flag indicating whether a rate estimate is needed */
-    protected boolean estimateValid = false;
-    /** Flag indicating whether rate estimation is in progress yet */
-    protected boolean estimateInProgress = false;
-    /** The start time of this series */
-    protected long seriesStartTime = -1L;
-    /** Total actual bytes read in this series; this includes fetches in progress */
-    protected long totalBytesRead = -1L;
-
-    /** This object is used to gate access while the first chunk is being read */
-    protected Integer firstChunkLock = new Integer(0);
-
-    /** Constructor. */
-    public ThrottleBin(String binName)
-    {
-      this.binName = binName;
-    }
-
-    /** Get the bin name. */
-    public String getBinName()
-    {
-      return binName;
-    }
-
-    /** Note the start of a fetch operation for a bin.  Call this method just before the actual stream access begins.
-    * May wait until schedule allows.
-    */
-    public void beginFetch()
-      throws InterruptedException
-    {
-      synchronized (this)
-      {
-        if (refCount == 0)
-        {
-          // Now, reset bandwidth throttling counters
-          estimateValid = false;
-          rateEstimate = 0.0;
-          totalBytesRead = 0L;
-          estimateInProgress = false;
-          seriesStartTime = -1L;
-        }
-        refCount++;
-      }
-
-    }
-
-    /** Note the start of an individual byte read of a specified size.  Call this method just before the
-    * read request takes place.  Performs the necessary delay prior to reading specified number of bytes from the server.
-    */
-    public void beginRead(int byteCount, double minimumMillisecondsPerBytePerServer)
-      throws InterruptedException
-    {
-      long currentTime = System.currentTimeMillis();
-
-      synchronized (firstChunkLock)
-      {
-        while (estimateInProgress)
-          firstChunkLock.wait();
-        if (estimateValid == false)
-        {
-          seriesStartTime = currentTime;
-          estimateInProgress = true;
-          // Add these bytes to the estimated total
-          synchronized (this)
-          {
-            totalBytesRead += (long)byteCount;
-          }
-          // Exit early; this thread isn't going to do any waiting
-          return;
-        }
-      }
-
-      long waitTime = 0L;
-      synchronized (this)
-      {
-        // Add these bytes to the estimated total
-        totalBytesRead += (long)byteCount;
-
-        // Estimate the time this read will take, and wait accordingly
-        long estimatedTime = (long)(rateEstimate * (double)byteCount);
-
-        // Figure out how long the total byte count should take, to meet the constraint
-        long desiredEndTime = seriesStartTime + (long)(((double)totalBytesRead) * minimumMillisecondsPerBytePerServer);
-
-        // The wait time is the different between our desired end time, minus the estimated time to read the data, and the
-        // current time.  But it can't be negative.
-        waitTime = (desiredEndTime - estimatedTime) - currentTime;
-      }
-
-      if (waitTime > 0L)
-      {
-        if (Logging.connectors.isDebugEnabled())
-          Logging.connectors.debug("WEB: Performing a read wait on bin '"+binName+"' of "+
-          new Long(waitTime).toString()+" ms.");
-        ManifoldCF.sleep(waitTime);
-      }
-
-    }
-
-    /** Note the end of an individual read from the server.  Call this just after an individual read completes.
-    * Pass the actual number of bytes read to the method.
-    */
-    public void endRead(int originalCount, int actualCount)
-    {
-      long currentTime = System.currentTimeMillis();
-
-      synchronized (this)
-      {
-        totalBytesRead = totalBytesRead + (long)actualCount - (long)originalCount;
-      }
-
-      // Only one thread should get here if it's the first chunk, but we synchronize to be sure
-      synchronized (firstChunkLock)
-      {
-        if (estimateInProgress)
-        {
-          if (actualCount == 0)
-            // Didn't actually get any bytes, so use 0.0
-            rateEstimate = 0.0;
-          else
-            rateEstimate = ((double)(currentTime - seriesStartTime))/(double)actualCount;
-          estimateValid = true;
-          estimateInProgress = false;
-          firstChunkLock.notifyAll();
-        }
-      }
-    }
-
-    /** Note the end of a fetch operation.  Call this method just after the fetch completes.
-    */
-    public boolean endFetch()
-    {
-      synchronized (this)
-      {
-        refCount--;
-        return (refCount == 0);
-      }
-
-    }
-
-  }
-
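
The removed ThrottleBin class (its role is taken over by the shared throttle-group framework referenced via IThrottleGroups/IFetchThrottler above) delayed each read so that a series of fetches stayed within a minimum-milliseconds-per-byte budget: the wait is the time remaining until the series is allowed to have transferred the bytes counted so far, minus the time the read itself is expected to take based on the first-chunk rate estimate. A worked sketch of that computation, with hypothetical numbers:

    public class ThrottleWaitSketch {
      // Per the removed beginRead() logic:
      //   waitTime = (seriesStartTime + totalBytesRead * minMsPerByte) - estimatedReadTime - now
      static long computeWait(long seriesStartTime, long totalBytesRead,
                              double minMillisecondsPerByte, double rateEstimateMsPerByte,
                              int nextReadBytes, long now) {
        long estimatedReadTime = (long)(rateEstimateMsPerByte * nextReadBytes);
        long desiredEndTime = seriesStartTime + (long)(totalBytesRead * minMillisecondsPerByte);
        long waitTime = (desiredEndTime - estimatedReadTime) - now;
        return Math.max(waitTime, 0L);  // never wait a negative amount
      }

      public static void main(String[] args) {
        // 0.01 ms/byte budget; 8192 bytes already counted plus a 4096-byte read about to start.
        long wait = computeWait(0L, 8192L + 4096L, 0.01, 0.002, 4096, 20L);
        System.out.println(wait + " ms");  // (122 - 8) - 20 = 94 ms
      }
    }
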
   /** Throttled connections.  Each instance of a connection describes the bins to which it belongs,
   * along with the actual open connection itself, and the last time the connection was used. */
   protected static class ThrottledConnection implements IThrottledConnection
   {
-    /** The connection has resolved pointers to the ConnectionBin structures that manage pool
-    * maximums.  These are ONLY valid when the connection is actually in the pool. */
-    protected ConnectionBin[] connectionBinArray;
-    /** The connection has resolved pointers to the ThrottleBin structures that help manage
-    * bandwidth throttling. */
-    protected ThrottleBin[] throttleBinArray;
-    /** These are the bandwidth limits, per bin */
-    protected double[] minMillisecondsPerByte;
-    /** Is the connection considered "active"? */
-    protected boolean isActive;
-    /** If not active, this is when it went inactive */
-    protected long inactiveTime = 0L;
-
+    /** Connection pool */
+    protected final ConnectionPool myPool;
+    /** Fetch throttler */
+    protected final IFetchThrottler fetchThrottler;
     /** Protocol */
-    protected String protocol;
+    protected final String protocol;
     /** Server */
-    protected String server;
+    protected final String server;
     /** Port */
-    protected int port;
+    protected final int port;
     /** Authentication */
-    protected PageCredentials authentication;
-    /** Trust store */
-    protected IKeystoreManager trustStore;
-    /** Trust store string */
-    protected String trustStoreString;
+    protected final PageCredentials authentication;
+
+    /** This is when the connection will expire.  Only valid if connection is in the pool. */
+    protected long expireTime = -1L;
 
     /** The http connection manager.  The pool is of size 1.  */
     protected PoolingClientConnectionManager connManager = null;
@@ -973,10 +294,13 @@
 
     /** Constructor.  Create a connection with a specific server and port, and
     * register it as active against all bins. */
-    public ThrottledConnection(String protocol, String server, int port, PageCredentials authentication,
-      javax.net.ssl.SSLSocketFactory httpsSocketFactory, String trustStoreString, ConnectionBin[] connectionBins,
+    public ThrottledConnection(ConnectionPool myPool, IFetchThrottler fetchThrottler,
+      String protocol, String server, int port, PageCredentials authentication,
+      javax.net.ssl.SSLSocketFactory httpsSocketFactory,
       String proxyHost, int proxyPort, String proxyAuthDomain, String proxyAuthUsername, String proxyAuthPassword)
     {
+      this.myPool = myPool;
+      this.fetchThrottler = fetchThrottler;
       this.proxyHost = proxyHost;
       this.proxyPort = proxyPort;
       this.proxyAuthDomain = proxyAuthDomain;
@@ -987,168 +311,21 @@
       this.port = port;
       this.authentication = authentication;
       this.httpsSocketFactory = httpsSocketFactory;
-      this.trustStoreString = trustStoreString;
-      this.connectionBinArray = connectionBins;
-      this.throttleBinArray = new ThrottleBin[connectionBins.length];
-      this.minMillisecondsPerByte = new double[connectionBins.length];
-      this.isActive = true;
-      int i = 0;
-      while (i < connectionBins.length)
-      {
-        connectionBins[i].noteConnectionCreation();
-        // We don't keep throttle bin references around, since these are transient
-        throttleBinArray[i] = null;
-        minMillisecondsPerByte[i] = 0.0;
-        i++;
-      }
-
-
     }
 
-    public void mustHaveReference(ConnectionBin cb)
-    {
-      int i = 0;
-      while (i < connectionBinArray.length)
-      {
-        if (cb == connectionBinArray[i])
-          return;
-        i++;
-      }
-      String msg = "Connection bin "+cb.toString()+" owns connection "+this.toString()+" for "+protocol+server+":"+port+
-        " but there is no back reference!";
-      Logging.connectors.error(msg);
-      System.out.println(msg);
-      new Exception(msg).printStackTrace();
-      System.exit(3);
-      //throw new RuntimeException(msg);
-    }
-
-    /** See if this instances matches a given server and port. */
-    public boolean matches(ConnectionBin[] bins, String protocol, String server, int port, PageCredentials authentication,
-      String trustStoreString, String proxyHost, int proxyPort, String proxyAuthDomain, String proxyAuthUsername, String proxyAuthPassword)
-    {
-      if (this.trustStoreString == null || trustStoreString == null)
-      {
-        if (this.trustStoreString != trustStoreString)
-          return false;
-      }
-      else
-      {
-        if (!this.trustStoreString.equals(trustStoreString))
-          return false;
-      }
-
-      if (this.authentication == null || authentication == null)
-      {
-        if (this.authentication != authentication)
-          return false;
-      }
-      else
-      {
-        if (!this.authentication.equals(authentication))
-          return false;
-      }
-
-      if (this.proxyHost == null || proxyHost == null)
-      {
-        if (this.proxyHost != proxyHost)
-          return false;
-      }
-      else
-      {
-        if (!this.proxyHost.equals(proxyHost))
-          return false;
-        if (this.proxyAuthDomain == null || proxyAuthDomain == null)
-        {
-          if (this.proxyAuthDomain != proxyAuthDomain)
-            return false;
-        }
-        else
-        {
-          if (!this.proxyAuthDomain.equals(proxyAuthDomain))
-            return false;
-        }
-        if (this.proxyAuthUsername == null || proxyAuthUsername == null)
-        {
-          if (this.proxyAuthUsername != proxyAuthUsername)
-            return false;
-        }
-        else
-        {
-          if (!this.proxyAuthUsername.equals(proxyAuthUsername))
-            return false;
-        }
-        if (this.proxyAuthPassword == null || proxyAuthPassword == null)
-        {
-          if (this.proxyAuthPassword != proxyAuthPassword)
-            return false;
-        }
-        else
-        {
-          if (!this.proxyAuthPassword.equals(proxyAuthPassword))
-            return false;
-        }
-      }
-      
-      if (this.proxyPort != proxyPort)
-        return false;
-      
-      
-      if (this.connectionBinArray.length != bins.length || !this.protocol.equals(protocol) || !this.server.equals(server) || this.port != port)
-        return false;
-      
-      int i = 0;
-      while (i < bins.length)
-      {
-        if (connectionBinArray[i] != bins[i])
-          return false;
-        i++;
-      }
-      return true;
-    }
-
-    /** Activate the connection. */
-    public void activate()
-    {
-      isActive = true;
-      int i = 0;
-      while (i < connectionBinArray.length)
-      {
-        connectionBinArray[i++].takeFromPool(this);
-      }
-    }
-
-    /** Set up the connection.  This allows us to feed all bins the correct bandwidth limit info.
+    /** Check whether the connection has expired.
+    *@param currentTime is the current time to use to judge if a connection has expired.
+    *@return true if the connection has expired, and should be closed.
     */
-    public void setup(ThrottleDescription description)
+    @Override
+    public boolean hasExpired(long currentTime)
     {
-      // Go through all bins, and set up the current limits.
-      int i = 0;
-      while (i < connectionBinArray.length)
-      {
-        String binName = connectionBinArray[i].getBinName();
-        minMillisecondsPerByte[i] = description.getMinimumMillisecondsPerByte(binName);
-        i++;
-      }
-    }
-
-    /** Do periodic bookkeeping.
-    *@return true if the connection is no longer valid, and can be removed. */
-    public boolean flushIdleConnections(long idleTimeout)
-    {
-      if (isActive)
-        return false;
-
       if (connManager != null)
       {
         connManager.closeIdleConnections(idleTimeout, TimeUnit.MILLISECONDS);
         connManager.closeExpiredConnections();
-        // Need to determine if there's a valid connection in the connection manager still, or if it is empty.
-        //return connManager.getConnectionsInPool() == 0;
-        return true;
       }
-      else
-        return true;
+      return (currentTime > expireTime);
     }
 
     /** Log the fetch of a number of bytes, from within a stream. */
@@ -1157,37 +334,10 @@
       fetchCounter += (long)count;
     }
 
-    /** Begin a read operation, from within a stream */
-    public void beginRead(int len)
-      throws InterruptedException
-    {
-      // Consult with throttle bins
-      int i = 0;
-      while (i < throttleBinArray.length)
-      {
-        throttleBinArray[i].beginRead(len,minMillisecondsPerByte[i]);
-        i++;
-      }
-    }
-
-    /** End a read operation, from within a stream */
-    public void endRead(int origLen, int actualAmt)
-    {
-      // Consult with throttle bins
-      int i = 0;
-      while (i < throttleBinArray.length)
-      {
-        throttleBinArray[i].endRead(origLen,actualAmt);
-        i++;
-      }
-    }
-
     /** Destroy the connection forever */
-    protected void destroy()
+    @Override
+    public void destroy()
     {
-      if (isActive == false)
-        throw new RuntimeException("Trying to destroy an inactive connection");
-
       // Kill the actual connection object.
       if (connManager != null)
       {
@@ -1195,13 +345,6 @@
         connManager = null;
       }
 
-      // Call all the bins this belongs to, and decrement the in-use count.
-      int i = 0;
-      while (i < connectionBinArray.length)
-      {
-        ConnectionBin cb = connectionBinArray[i++];
-        cb.noteConnectionDestruction();
-      }
     }
 
 
@@ -1213,30 +356,12 @@
     public void beginFetch(String fetchType)
       throws ManifoldCFException
     {
+      this.fetchType = fetchType;
+      this.fetchCounter = 0L;
       try
       {
-        this.fetchType = fetchType;
-        this.fetchCounter = 0L;
-        // Find or create the needed throttle bins
-        int i = 0;
-        while (i < throttleBinArray.length)
-        {
-          // Access the bins as we need them, and drop them when ref count goes to zero
-          String binName = connectionBinArray[i].getBinName();
-          ThrottleBin tb;
-          synchronized (throttleBins)
-          {
-            tb = (ThrottleBin)throttleBins.get(binName);
-            if (tb == null)
-            {
-              tb = new ThrottleBin(binName);
-              throttleBins.put(binName,tb);
-            }
-            tb.beginFetch();
-          }
-          throttleBinArray[i] = tb;
-          i++;
-        }
+        if (fetchThrottler.obtainFetchDocumentPermission() == false)
+          throw new IllegalStateException("Unexpected return value from obtainFetchDocumentPermission()");
       }
       catch (InterruptedException e)
       {
@@ -1507,7 +632,8 @@
       fetchMethod.setHeader(new BasicHeader("User-Agent",userAgent));
       fetchMethod.setHeader(new BasicHeader("From",from));
       fetchMethod.setHeader(new BasicHeader("Accept","*/*"));
-        
+      fetchMethod.setHeader(new BasicHeader("Accept-Encoding","gzip,deflate"));
+
       // Use a custom cookie store
       CookieStore cookieStore = new OurBasicCookieStore();
       // If we have any cookies to set, set them.
@@ -1531,7 +657,7 @@
       //httpClient.setCookieStore(cookieStore);
       
       // Create the thread
-      methodThread = new ExecuteMethodThread(this, httpClient, fetchMethod, cookieStore);
+      methodThread = new ExecuteMethodThread(this, fetchThrottler, httpClient, fetchMethod, cookieStore);
       try
       {
         methodThread.start();
@@ -1824,17 +950,6 @@
           methodThread.abort();
 
         long endTime = System.currentTimeMillis();
-        int i = 0;
-        while (i < throttleBinArray.length)
-        {
-          synchronized (throttleBins)
-          {
-            if (throttleBinArray[i].endFetch())
-              throttleBins.remove(throttleBinArray[i].getBinName());
-          }
-          throttleBinArray[i] = null;
-          i++;
-        }
 
         activities.recordActivity(new Long(startFetchTime),WebcrawlerConnector.ACTIVITY_FETCH,
           new Long(fetchCounter),myUrl,Integer.toString(statusCode),(throwable==null)?null:throwable.getMessage(),null);
@@ -1876,48 +991,13 @@
 
     }
 
-    /** Close the connection.  Call this to end this server connection.
+    /** Close the connection.  Call this to return the connection to its pool.
     */
     @Override
     public void close()
-      throws ManifoldCFException
     {
-      synchronized (poolLock)
-      {
-        // Verify that all the connections that exist are in fact sane
-        synchronized (connectionBins)
-        {
-          Iterator iter = connectionBins.keySet().iterator();
-          while (iter.hasNext())
-          {
-            String connectionName = (String)iter.next();
-            ConnectionBin cb = (ConnectionBin)connectionBins.get(connectionName);
-            //cb.sanityCheck();
-          }
-        }
-
-        // Leave the connection alive, but mark it as inactive, and return it to the appropriate pools.
-        isActive = false;
-        inactiveTime = System.currentTimeMillis();
-        int i = 0;
-        while (i < connectionBinArray.length)
-        {
-          connectionBinArray[i++].addToPool(this);
-        }
-        // Verify that all the connections that exist are in fact sane
-        synchronized (connectionBins)
-        {
-          Iterator iter = connectionBins.keySet().iterator();
-          while (iter.hasNext())
-          {
-            String connectionName = (String)iter.next();
-            ConnectionBin cb = (ConnectionBin)connectionBins.get(connectionName);
-            //cb.sanityCheck();
-          }
-        }
-        // Wake up everything waiting on the pool lock
-        poolLock.notifyAll();
-      }
+      expireTime = System.currentTimeMillis() + idleTimeout;
+      myPool.release(this);
     }
     
     protected void handleHTTPException(HttpException e, String activity)
@@ -1983,17 +1063,18 @@
   */
   protected static class ThrottledInputstream extends InputStream
   {
-    /** Stream throttling parameters */
-    protected double minimumMillisecondsPerBytePerServer;
+    /** Stream throttler */
+    protected final IStreamThrottler streamThrottler;
     /** The throttled connection we belong to */
-    protected ThrottledConnection throttledConnection;
+    protected final ThrottledConnection throttledConnection;
     /** The stream we are wrapping. */
-    protected InputStream inputStream;
+    protected final InputStream inputStream;
 
     /** Constructor.
     */
-    public ThrottledInputstream(ThrottledConnection connection, InputStream is)
+    public ThrottledInputstream(IStreamThrottler streamThrottler, ThrottledConnection connection, InputStream is)
     {
+      this.streamThrottler = streamThrottler;
       this.throttledConnection = connection;
       this.inputStream = is;
     }
@@ -2008,7 +1089,7 @@
       int count = read(byteArray,0,1);
       if (count == -1)
         return count;
-      return (int)byteArray[0];
+      return ((int)byteArray[0]) & 0xff;
     }
 
     /** Read lots of bytes.
@@ -2061,7 +1142,8 @@
     {
       try
       {
-        throttledConnection.beginRead(len);
+        if (streamThrottler.obtainReadPermission(len) == false)
+          throw new IllegalStateException("Unexpected result calling obtainReadPermission()");
         int amt = 0;
         try
         {
@@ -2071,10 +1153,10 @@
         finally
         {
           if (amt == -1)
-            throttledConnection.endRead(len,0);
+            streamThrottler.releaseReadPermission(len,0);
           else
           {
-            throttledConnection.endRead(len,amt);
+            streamThrottler.releaseReadPermission(len,amt);
             throttledConnection.logFetchCount(amt);
           }
         }
@@ -2161,6 +1243,10 @@
       {
         Logging.connectors.debug("IO Exception trying to close connection: "+e.getMessage(),e);
       }
+      finally
+      {
+        streamThrottler.closeStream();
+      }
     }
 
   }
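
The stream above follows a simple permission handshake: ask the throttler for permission to read a proposed number of bytes, perform the read, then report back how many bytes actually arrived so the bandwidth accounting stays accurate, and finally tell the throttler when the stream goes away.  A minimal self-contained sketch of that pattern, using a toy throttler interface rather than ManifoldCF's IStreamThrottler, might look like this:

import java.io.IOException;
import java.io.InputStream;

// Toy throttler interface; a stand-in for IStreamThrottler, not the real thing.
interface SimpleStreamThrottler
{
  boolean obtainReadPermission(int proposedBytes) throws InterruptedException;
  void releaseReadPermission(int proposedBytes, int actualBytes);
  void closeStream();
}

// Wrapper stream that brackets every read with the permission handshake.
class SimpleThrottledStream extends InputStream
{
  private final SimpleStreamThrottler throttler;
  private final InputStream wrapped;

  SimpleThrottledStream(SimpleStreamThrottler throttler, InputStream wrapped)
  {
    this.throttler = throttler;
    this.wrapped = wrapped;
  }

  @Override
  public int read(byte[] b, int off, int len)
    throws IOException
  {
    try
    {
      // Wait for permission before touching the wire.
      if (!throttler.obtainReadPermission(len))
        throw new IllegalStateException("Throttler refused read permission");
      int amt = -1;
      try
      {
        amt = wrapped.read(b, off, len);
        return amt;
      }
      finally
      {
        // Report what actually happened; end-of-stream counts as zero bytes.
        throttler.releaseReadPermission(len, (amt == -1) ? 0 : amt);
      }
    }
    catch (InterruptedException e)
    {
      Thread.currentThread().interrupt();
      throw new IOException("Interrupted while waiting for read permission", e);
    }
  }

  @Override
  public int read()
    throws IOException
  {
    byte[] one = new byte[1];
    int count = read(one, 0, 1);
    return (count == -1) ? -1 : (one[0] & 0xff);
  }

  @Override
  public void close()
    throws IOException
  {
    try
    {
      wrapped.close();
    }
    finally
    {
      // Always tell the throttler the stream is done, even if close() failed.
      throttler.closeStream();
    }
  }
}
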
@@ -2191,178 +1277,6 @@
     }
   }
 
-  /** SSL Socket factory which wraps another socket factory but allows timeout on socket
-  * creation.
-  */
-  protected static class InterruptibleSocketFactory extends javax.net.ssl.SSLSocketFactory
-  {
-    protected final javax.net.ssl.SSLSocketFactory wrappedFactory;
-    protected final long connectTimeoutMilliseconds;
-    
-    public InterruptibleSocketFactory(javax.net.ssl.SSLSocketFactory wrappedFactory, long connectTimeoutMilliseconds)
-    {
-      this.wrappedFactory = wrappedFactory;
-      this.connectTimeoutMilliseconds = connectTimeoutMilliseconds;
-    }
-
-    @Override
-    public Socket createSocket()
-      throws IOException
-    {
-      // Socket isn't open
-      return wrappedFactory.createSocket();
-    }
-    
-    @Override
-    public Socket createSocket(String host, int port)
-      throws IOException, UnknownHostException
-    {
-      return fireOffThread(InetAddress.getByName(host),port,null,-1);
-    }
-
-    @Override
-    public Socket createSocket(InetAddress host, int port)
-      throws IOException
-    {
-      return fireOffThread(host,port,null,-1);
-    }
-    
-    @Override
-    public Socket createSocket(String host, int port, InetAddress localHost, int localPort)
-      throws IOException, UnknownHostException
-    {
-      return fireOffThread(InetAddress.getByName(host),port,localHost,localPort);
-    }
-    
-    @Override
-    public Socket createSocket(InetAddress address, int port, InetAddress localAddress, int localPort)
-      throws IOException
-    {
-      return fireOffThread(address,port,localAddress,localPort);
-    }
-    
-    @Override
-    public Socket createSocket(Socket s, String host, int port, boolean autoClose)
-      throws IOException
-    {
-      // Socket's already open
-      return wrappedFactory.createSocket(s,host,port,autoClose);
-    }
-    
-    @Override
-    public String[] getDefaultCipherSuites()
-    {
-      return wrappedFactory.getDefaultCipherSuites();
-    }
-    
-    @Override
-    public String[] getSupportedCipherSuites()
-    {
-      return wrappedFactory.getSupportedCipherSuites();
-    }
-    
-    protected Socket fireOffThread(InetAddress address, int port, InetAddress localHost, int localPort)
-      throws IOException
-    {
-      SocketCreateThread thread = new SocketCreateThread(wrappedFactory,address,port,localHost,localPort);
-      thread.start();
-      try
-      {
-        // Wait for thread to complete for only a certain amount of time!
-        thread.join(connectTimeoutMilliseconds);
-        // If join() times out, then the thread is going to still be alive.
-        if (thread.isAlive())
-        {
-          // Kill the thread - not that this will necessarily work, but we need to try
-          thread.interrupt();
-          throw new ConnectTimeoutException("Secure connection timed out");
-        }
-        // The thread terminated.  Throw an error if there is one, otherwise return the result.
-        Throwable t = thread.getException();
-        if (t != null)
-        {
-          if (t instanceof java.net.SocketTimeoutException)
-            throw (java.net.SocketTimeoutException)t;
-          else if (t instanceof ConnectTimeoutException)
-            throw (ConnectTimeoutException)t;
-          else if (t instanceof InterruptedIOException)
-            throw (InterruptedIOException)t;
-          else if (t instanceof IOException)
-            throw (IOException)t;
-          else if (t instanceof Error)
-            throw (Error)t;
-          else if (t instanceof RuntimeException)
-            throw (RuntimeException)t;
-          throw new Error("Received an unexpected exception: "+t.getMessage(),t);
-        }
-        return thread.getResult();
-      }
-      catch (InterruptedException e)
-      {
-        throw new InterruptedIOException("Interrupted: "+e.getMessage());
-      }
-
-    }
-    
-  }
-  
-  /** Create a secure socket in a thread, so that we can "give up" after a while if the socket fails to connect.
-  */
-  protected static class SocketCreateThread extends Thread
-  {
-    // Socket factory
-    protected javax.net.ssl.SSLSocketFactory socketFactory;
-    protected InetAddress host;
-    protected int port;
-    protected InetAddress clientHost;
-    protected int clientPort;
-
-    // The return socket
-    protected Socket rval = null;
-    // The return error
-    protected Throwable throwable = null;
-
-    /** Create the thread */
-    public SocketCreateThread(javax.net.ssl.SSLSocketFactory socketFactory,
-      InetAddress host,
-      int port,
-      InetAddress clientHost,
-      int clientPort)
-    {
-      this.socketFactory = socketFactory;
-      this.host = host;
-      this.port = port;
-      this.clientHost = clientHost;
-      this.clientPort = clientPort;
-      setDaemon(true);
-    }
-
-    public void run()
-    {
-      try
-      {
-        if (clientHost == null)
-          rval = socketFactory.createSocket(host,port);
-        else
-          rval = socketFactory.createSocket(host,port,clientHost,clientPort);
-      }
-      catch (Throwable e)
-      {
-        throwable = e;
-      }
-    }
-
-    public Throwable getException()
-    {
-      return throwable;
-    }
-
-    public Socket getResult()
-    {
-      return rval;
-    }
-  }
-
   /** Class to override browser compatibility to make it not check cookie paths.  See CONNECTORS-97.
   */
   protected static class LaxBrowserCompatSpec extends BrowserCompatSpec
@@ -2405,6 +1319,8 @@
   {
     /** The connection */
     protected final ThrottledConnection theConnection;
+    /** The fetch throttler */
+    protected final IFetchThrottler fetchThrottler;
     /** Client and method, all preconfigured */
     protected final AbstractHttpClient httpClient;
     protected final HttpRequestBase executeMethod;
@@ -2415,6 +1331,7 @@
     protected LoginCookies cookies = null;
     protected Throwable cookieException = null;
     protected XThreadInputStream threadStream = null;
+    protected InputStream bodyStream = null;
     protected boolean streamCreated = false;
     protected Throwable streamException = null;
     protected boolean abortThread = false;
@@ -2423,12 +1340,13 @@
 
     protected Throwable generalException = null;
     
-    public ExecuteMethodThread(ThrottledConnection theConnection,
+    public ExecuteMethodThread(ThrottledConnection theConnection, IFetchThrottler fetchThrottler,
       AbstractHttpClient httpClient, HttpRequestBase executeMethod, CookieStore cookieStore)
     {
       super();
       setDaemon(true);
       this.theConnection = theConnection;
+      this.fetchThrottler = fetchThrottler;
       this.httpClient = httpClient;
       this.executeMethod = executeMethod;
       this.cookieStore = cookieStore;
@@ -2500,10 +1418,36 @@
               {
                 try
                 {
-                  InputStream bodyStream = response.getEntity().getContent();
+                  boolean gzip = false;
+                  boolean deflate = false;
+                  Header ceheader = response.getEntity().getContentEncoding();
+                  if (ceheader != null)
+                  {
+                    HeaderElement[] codecs = ceheader.getElements();
+                    for (int i = 0; i < codecs.length; i++)
+                    {
+                      if (codecs[i].getName().equalsIgnoreCase("gzip"))
+                      {
+                        // GZIP
+                        gzip = true;
+                        break;
+                      }
+                      else if (codecs[i].getName().equalsIgnoreCase("deflate"))
+                      {
+                        // Deflate
+                        deflate = true;
+                        break;
+                      }
+                    }
+                  }
+                  bodyStream = response.getEntity().getContent();
                   if (bodyStream != null)
                   {
-                    bodyStream = new ThrottledInputstream(theConnection,bodyStream);
+                    bodyStream = new ThrottledInputstream(fetchThrottler.createFetchStream(),theConnection,bodyStream);
+                    if (gzip)
+                      bodyStream = new GZIPInputStream(bodyStream);
+                    else if (deflate)
+                      bodyStream = new DeflateInputStream(bodyStream);
                     threadStream = new XThreadInputStream(bodyStream);
                   }
                   streamCreated = true;
@@ -2541,6 +1485,17 @@
         }
         finally
         {
+          if (bodyStream != null)
+          {
+            try
+            {
+              bodyStream.close();
+            }
+            catch (IOException e)
+            {
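+              // Best-effort close; any I/O error at this point is deliberately ignored.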
+            }
+            bodyStream = null;
+          }
           synchronized (this)
           {
             try
@@ -2810,4 +1765,254 @@
 
   }
 
+  /** Connection pool key */
+  protected static class ConnectionPoolKey
+  {
+    protected final String protocol;
+    protected final String server;
+    protected final int port;
+    protected final PageCredentials authentication;
+    protected final String trustStoreString;
+    protected final String proxyHost;
+    protected final int proxyPort;
+    protected final String proxyAuthDomain;
+    protected final String proxyAuthUsername;
+    protected final String proxyAuthPassword;
+    
+    public ConnectionPoolKey(String protocol,
+      String server, int port, PageCredentials authentication,
+      String trustStoreString, String proxyHost, int proxyPort,
+      String proxyAuthDomain, String proxyAuthUsername, String proxyAuthPassword)
+    {
+      this.protocol = protocol;
+      this.server = server;
+      this.port = port;
+      this.authentication = authentication;
+      this.trustStoreString = trustStoreString;
+      this.proxyHost = proxyHost;
+      this.proxyPort = proxyPort;
+      this.proxyAuthDomain = proxyAuthDomain;
+      this.proxyAuthUsername = proxyAuthUsername;
+      this.proxyAuthPassword = proxyAuthPassword;
+    }
+    
+    public int hashCode()
+    {
+      return protocol.hashCode() +
+        server.hashCode() +
+        (port * 31) +
+        ((authentication==null)?0:authentication.hashCode()) +
+        ((trustStoreString==null)?0:trustStoreString.hashCode()) +
+        ((proxyHost==null)?0:proxyHost.hashCode()) +
+        (proxyPort * 29) +
+        ((proxyAuthDomain==null)?0:proxyAuthDomain.hashCode()) +
+        ((proxyAuthUsername==null)?0:proxyAuthUsername.hashCode()) +
+        ((proxyAuthPassword==null)?0:proxyAuthPassword.hashCode());
+    }
+    
+    public boolean equals(Object o)
+    {
+      if (!(o instanceof ConnectionPoolKey))
+        return false;
+      ConnectionPoolKey other = (ConnectionPoolKey)o;
+      if (!protocol.equals(other.protocol) ||
+        !server.equals(other.server) ||
+        port != other.port)
+        return false;
+      if (authentication == null || other.authentication == null)
+      {
+        if (authentication != other.authentication)
+          return false;
+      }
+      else
+      {
+        if (!authentication.equals(other.authentication))
+          return false;
+      }
+      if (trustStoreString == null || other.trustStoreString == null)
+      {
+        if (trustStoreString != other.trustStoreString)
+          return false;
+      }
+      else
+      {
+        if (!trustStoreString.equals(other.trustStoreString))
+          return false;
+      }
+      if (proxyHost == null || other.proxyHost == null)
+      {
+        if (proxyHost != other.proxyHost)
+          return false;
+      }
+      else
+      {
+        if (!proxyHost.equals(other.proxyHost))
+          return false;
+      }
+      if (proxyPort != other.proxyPort)
+        return false;
+      if (proxyAuthDomain == null || other.proxyAuthDomain == null)
+      {
+        if (proxyAuthDomain != other.proxyAuthDomain)
+          return false;
+      }
+      else
+      {
+        if (!proxyAuthDomain.equals(other.proxyAuthDomain))
+          return false;
+      }
+      if (proxyAuthUsername == null || other.proxyAuthUsername == null)
+      {
+        if (proxyAuthUsername != other.proxyAuthUsername)
+          return false;
+      }
+      else
+      {
+        if (!proxyAuthUsername.equals(other.proxyAuthUsername))
+          return false;
+      }
+      if (proxyAuthPassword == null || other.proxyAuthPassword == null)
+      {
+        if (proxyAuthPassword != other.proxyAuthPassword)
+          return false;
+      }
+      else
+      {
+        if (!proxyAuthPassword.equals(other.proxyAuthPassword))
+          return false;
+      }
+      return true;
+    }
+  }
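
The equals() above repeats the same null-tolerant comparison for every optional field.  Purely for illustration (this is not the project's code, and it assumes Java 7's java.util.Objects is available), the same semantics can be written more compactly; note that Objects.hash satisfies the equals/hashCode contract but produces different numeric values than the hand-rolled sum above:

// Illustrative alternative, as it would read inside ConnectionPoolKey.
@Override
public boolean equals(Object o)
{
  if (!(o instanceof ConnectionPoolKey))
    return false;
  ConnectionPoolKey other = (ConnectionPoolKey)o;
  return protocol.equals(other.protocol) &&
    server.equals(other.server) &&
    port == other.port &&
    java.util.Objects.equals(authentication, other.authentication) &&
    java.util.Objects.equals(trustStoreString, other.trustStoreString) &&
    java.util.Objects.equals(proxyHost, other.proxyHost) &&
    proxyPort == other.proxyPort &&
    java.util.Objects.equals(proxyAuthDomain, other.proxyAuthDomain) &&
    java.util.Objects.equals(proxyAuthUsername, other.proxyAuthUsername) &&
    java.util.Objects.equals(proxyAuthPassword, other.proxyAuthPassword);
}

@Override
public int hashCode()
{
  return java.util.Objects.hash(protocol, server, port, authentication, trustStoreString,
    proxyHost, proxyPort, proxyAuthDomain, proxyAuthUsername, proxyAuthPassword);
}
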
+  
+  /** Each connection pool has identical connections we can draw on.
+  */
+  protected static class ConnectionPool
+  {
+    /** Throttler */
+    protected final IConnectionThrottler connectionThrottler;
+    
+    // If we need to create a connection, these are what we use
+    
+    protected final String protocol;
+    protected final String server;
+    protected final int port;
+    protected final PageCredentials authentication;
+    protected final javax.net.ssl.SSLSocketFactory baseFactory;
+    protected final String proxyHost;
+    protected final int proxyPort;
+    protected final String proxyAuthDomain;
+    protected final String proxyAuthUsername;
+    protected final String proxyAuthPassword;
+
+    /** The actual pool of connections */
+    protected final List<IThrottledConnection> connections = new ArrayList<IThrottledConnection>();
+    
+    public ConnectionPool(IConnectionThrottler connectionThrottler,
+      String protocol,
+      String server, int port, PageCredentials authentication,
+      javax.net.ssl.SSLSocketFactory baseFactory,
+      String proxyHost, int proxyPort,
+      String proxyAuthDomain, String proxyAuthUsername, String proxyAuthPassword)
+    {
+      this.connectionThrottler = connectionThrottler;
+      
+      this.protocol = protocol;
+      this.server = server;
+      this.port = port;
+      this.authentication = authentication;
+      this.baseFactory = baseFactory;
+      this.proxyHost = proxyHost;
+      this.proxyPort = proxyPort;
+      this.proxyAuthDomain = proxyAuthDomain;
+      this.proxyAuthUsername = proxyAuthUsername;
+      this.proxyAuthPassword = proxyAuthPassword;
+    }
+    
+    public IThrottledConnection grab()
+      throws InterruptedException
+    {
+      // Wait for a connection
+      int result = connectionThrottler.waitConnectionAvailable();
+      if (result == IConnectionThrottler.CONNECTION_FROM_POOL)
+      {
+        // We are guaranteed to have a connection in the pool, unless there's a coding error.
+        synchronized (connections)
+        {
+          return connections.remove(connections.size()-1);
+        }
+      }
+      else if (result == IConnectionThrottler.CONNECTION_FROM_CREATION)
+      {
+        return new ThrottledConnection(this,connectionThrottler.getNewConnectionFetchThrottler(),
+          protocol,server,port,authentication,baseFactory,
+          proxyHost,proxyPort,
+          proxyAuthDomain,proxyAuthUsername,proxyAuthPassword);
+      }
+      else
+        throw new IllegalStateException("Unexpected return value from waitConnectionAvailable(): "+result);
+    }
+    
+    public void release(IThrottledConnection connection)
+    {
+      if (connectionThrottler.noteReturnedConnection())
+      {
+        // Destroy this connection
+        connection.destroy();
+        connectionThrottler.noteConnectionDestroyed();
+      }
+      else
+      {
+        // Return to pool
+        synchronized (connections)
+        {
+          connections.add(connection);
+        }
+        connectionThrottler.noteConnectionReturnedToPool();
+      }
+    }
+    
+    public void flushIdleConnections()
+    {
+      long currentTime = System.currentTimeMillis();
+      // First, remove connections that are over the quota
+      while (connectionThrottler.checkDestroyPooledConnection())
+      {
+        // Destroy the oldest ones first
+        IThrottledConnection connection;
+        synchronized (connections)
+        {
+          connection = connections.remove(0);
+        }
+        connection.destroy();
+        connectionThrottler.noteConnectionDestroyed();
+      }
+      // Now, get rid of expired connections
+      while (true)
+      {
+        boolean expired;
+        synchronized (connections)
+        {
+          expired = connections.size() > 0 && connections.get(0).hasExpired(currentTime);
+        }
+        if (!expired)
+          break;
+        // We found an expired connection!  Now tell the throttler that, and see if it agrees.
+        if (connectionThrottler.checkExpireConnection())
+        {
+          // Remove a connection from the pool, and destroy it.
+          // It's not guaranteed to be an expired one, but we expect that to be rare.
+          IThrottledConnection connection;
+          synchronized (connections)
+          {
+            connection = connections.remove(0);
+          }
+          connection.destroy();
+          connectionThrottler.noteConnectionDestroyed();
+        }
+        else
+          break;
+      }
+    }
+    
+  }
 }
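
ConnectionPool.grab() blocks on the connection throttler until either a pooled connection is available or the caller is allowed to create a new one, and release() hands the connection back (or destroys it when the throttler reports the pool is over quota).  The same shape, stripped of throttling policy, can be sketched with a plain Semaphore standing in for IConnectionThrottler; this is an illustrative simplification (fixed capacity, no expiration or destruction), not ManifoldCF's implementation:

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.Semaphore;

// Simplified bounded pool: a Semaphore stands in for the connection throttler.
class SimplePool<T>
{
  interface Factory<V> { V create(); }

  private final Semaphore permits;                    // caps the number of live objects
  private final Deque<T> idle = new ArrayDeque<T>();  // objects waiting for reuse
  private final Factory<T> factory;

  SimplePool(int maxLive, Factory<T> factory)
  {
    this.permits = new Semaphore(maxLive);
    this.factory = factory;
  }

  /** Equivalent of grab(): wait for capacity, then reuse an idle object or create one. */
  T grab()
    throws InterruptedException
  {
    permits.acquire();
    synchronized (idle)
    {
      if (!idle.isEmpty())
        return idle.removeLast();
    }
    return factory.create();
  }

  /** Equivalent of release(): return the object to the idle list and free its permit. */
  void release(T object)
  {
    synchronized (idle)
    {
      idle.addLast(object);
    }
    permits.release();
  }
}

A caller brackets each fetch with grab()/release(), which is exactly what ThrottledConnection.close() does above by handing itself back to its owning pool.
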
diff --git a/connectors/webcrawler/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/webcrawler/WebcrawlerConnector.java b/connectors/webcrawler/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/webcrawler/WebcrawlerConnector.java
index 6b70bfb..64c78d9 100644
--- a/connectors/webcrawler/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/webcrawler/WebcrawlerConnector.java
+++ b/connectors/webcrawler/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/webcrawler/WebcrawlerConnector.java
@@ -160,6 +160,8 @@
   protected int connectionTimeoutMilliseconds = 60000;
   /** Socket timeout, milliseconds */
   protected int socketTimeoutMilliseconds = 300000;
+  /** Throttle group name */
+  protected String throttleGroupName = null;
 
   // Canonicalization enabling/disabling.  Eventually this will probably need to be by regular expression.
 
@@ -354,6 +356,9 @@
     {
       String x;
 
+      // Either set this from the connection name, or just have one.  Right now, we have one.
+      throttleGroupName = "";
+      
       String emailAddress = params.getParameter(WebcrawlerConfig.PARAMETER_EMAIL);
       if (emailAddress == null)
         throw new ManifoldCFException("Missing email address");
@@ -406,7 +411,7 @@
   public void poll()
     throws ManifoldCFException
   {
-    ThrottledFetcher.flushIdleConnections();
+    ThrottledFetcher.flushIdleConnections(currentContext);
   }
 
   /** Check status of connection.
@@ -425,6 +430,7 @@
   public void disconnect()
     throws ManifoldCFException
   {
+    throttleGroupName = null;
     throttleDescription = null;
     credentialsDescription = null;
     trustsDescription = null;
@@ -711,7 +717,9 @@
 
                   // Prepare to perform the fetch, and decide what to do with the document.
                   //
-                  IThrottledConnection connection = ThrottledFetcher.getConnection(protocol,ipAddress,port,
+                  IThrottledConnection connection = ThrottledFetcher.getConnection(currentContext,
+                    throttleGroupName,
+                    protocol,ipAddress,port,
                     credential,trustStore,throttleDescription,binNames,connectionLimit,
                     proxyHost,proxyPort,proxyAuthDomain,proxyAuthUsername,proxyAuthPassword);
                   try
@@ -1343,6 +1351,17 @@
 
           RepositoryDocument rd = new RepositoryDocument();
 
+          // Set the file name
+          String fileName = "";
+          try {
+            fileName = documentIdentifiertoFileName(documentIdentifier);
+          } catch (URISyntaxException e1) {
+            fileName = "";
+          }
+          if (fileName.length() > 0){
+            rd.setFileName(fileName);
+          }
+          
           // Set the content type
           rd.setMimeType(cache.getContentType(documentIdentifier));
           
@@ -1860,6 +1879,8 @@
     String proxyAuthPassword = parameters.getObfuscatedParameter(WebcrawlerConfig.PARAMETER_PROXYAUTHPASSWORD);
     if (proxyAuthPassword == null)
       proxyAuthPassword = "";
+    else
+      proxyAuthPassword = out.mapPasswordToKey(proxyAuthPassword);
 
     // Proxy tab
     if (tabName.equals(Messages.getString(locale,"WebcrawlerConnector.Proxy")))
@@ -2212,7 +2233,7 @@
             if (domain == null)
               domain = "";
             String userName = cn.getAttributeValue(WebcrawlerConfig.ATTR_USERNAME);
-            String password = org.apache.manifoldcf.crawler.system.ManifoldCF.deobfuscate(cn.getAttributeValue(WebcrawlerConfig.ATTR_PASSWORD));
+            String password = out.mapPasswordToKey(ManifoldCF.deobfuscate(cn.getAttributeValue(WebcrawlerConfig.ATTR_PASSWORD)));
                                         
             // Its prefix will be...
             String prefix = "acredential_" + Integer.toString(accessCounter);
@@ -2395,6 +2416,8 @@
                       String password = paramNode.getAttributeValue(WebcrawlerConfig.ATTR_PASSWORD);
                       if (password == null)
                         password = "";
+                      else
+                        password = out.mapPasswordToKey(ManifoldCF.deobfuscate(password));
                       String authParamPrefix = authpagePrefix + "_" + paramCounter;
                       out.print(
 "                    <tr class=\""+(((paramCounter % 2)==0)?"evenformrow":"oddformrow")+"\">\n"+
@@ -2411,7 +2434,7 @@
 "                        <nobr><input type=\"text\" size=\"15\" name=\""+authParamPrefix+"_value"+"\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(value)+"\"/></nobr>\n"+
 "                      </td>\n"+
 "                      <td class=\"formcolumncell\">\n"+
-"                        <nobr><input type=\"password\" size=\"15\" name=\""+authParamPrefix+"_password"+"\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(org.apache.manifoldcf.crawler.system.ManifoldCF.deobfuscate(password))+"\"/></nobr>\n"+
+"                        <nobr><input type=\"password\" size=\"15\" name=\""+authParamPrefix+"_password"+"\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(password)+"\"/></nobr>\n"+
 "                      </td>\n"+
 "                    </tr>\n"
                       );
@@ -2542,7 +2565,7 @@
             if (domain == null)
               domain = "";
             String userName = cn.getAttributeValue(WebcrawlerConfig.ATTR_USERNAME);
-            String password = org.apache.manifoldcf.crawler.system.ManifoldCF.deobfuscate(cn.getAttributeValue(WebcrawlerConfig.ATTR_PASSWORD));
+            String password = out.mapPasswordToKey(ManifoldCF.deobfuscate(cn.getAttributeValue(WebcrawlerConfig.ATTR_PASSWORD)));
 
             // Its prefix will be...
             String prefix = "acredential_" + Integer.toString(accessCounter);
@@ -2620,11 +2643,13 @@
                       String password = paramNode.getAttributeValue(WebcrawlerConfig.ATTR_PASSWORD);
                       if (password == null)
                         password = "";
+                      else
+                        password = out.mapPasswordToKey(ManifoldCF.deobfuscate(password));
                       String authParamPrefix = authpagePrefix + "_" + paramCounter;
                       out.print(
 "<input type=\"hidden\" name=\""+authParamPrefix+"_param"+"\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(param)+"\"/>\n"+
 "<input type=\"hidden\" name=\""+authParamPrefix+"_value"+"\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(value)+"\"/>\n"+
-"<input type=\"hidden\" name=\""+authParamPrefix+"_password"+"\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(org.apache.manifoldcf.crawler.system.ManifoldCF.deobfuscate(password))+"\"/>\n"
+"<input type=\"hidden\" name=\""+authParamPrefix+"_password"+"\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(password)+"\"/>\n"
                       );
                       paramCounter++;
                     }
@@ -2847,7 +2872,7 @@
       parameters.setParameter(WebcrawlerConfig.PARAMETER_PROXYAUTHUSERNAME,proxyAuthUsername);
     String proxyAuthPassword = variableContext.getParameter("proxyauthpassword");
     if (proxyAuthPassword != null)
-      parameters.setObfuscatedParameter(WebcrawlerConfig.PARAMETER_PROXYAUTHPASSWORD,proxyAuthPassword);
+      parameters.setObfuscatedParameter(WebcrawlerConfig.PARAMETER_PROXYAUTHPASSWORD,variableContext.mapKeyToPassword(proxyAuthPassword));
 
     String x = variableContext.getParameter("bandwidth_count");
     if (x != null && x.length() > 0)
@@ -2972,7 +2997,7 @@
           node.setAttribute(WebcrawlerConfig.ATTR_DOMAIN,domain);
           node.setAttribute(WebcrawlerConfig.ATTR_USERNAME,userName);
           node.setAttribute(WebcrawlerConfig.ATTR_PASSWORD,
-            org.apache.manifoldcf.crawler.system.ManifoldCF.obfuscate(password));
+            ManifoldCF.obfuscate(variableContext.mapKeyToPassword(password)));
           parameters.addChild(parameters.getChildCount(),node);
         }
         i++;
@@ -2990,8 +3015,7 @@
         node.setAttribute(WebcrawlerConfig.ATTR_TYPE,type);
         node.setAttribute(WebcrawlerConfig.ATTR_DOMAIN,domain);
         node.setAttribute(WebcrawlerConfig.ATTR_USERNAME,userName);
-        node.setAttribute(WebcrawlerConfig.ATTR_PASSWORD,
-          org.apache.manifoldcf.crawler.system.ManifoldCF.obfuscate(password));
+        node.setAttribute(WebcrawlerConfig.ATTR_PASSWORD,ManifoldCF.obfuscate(password));
         parameters.addChild(parameters.getChildCount(),node);
       }
     }
@@ -3063,7 +3087,7 @@
                     if (value != null && value.length() > 0)
                       paramNode.setAttribute(WebcrawlerConfig.ATTR_VALUE,value);
                     if (password != null && password.length() > 0)
-                      paramNode.setAttribute(WebcrawlerConfig.ATTR_PASSWORD,org.apache.manifoldcf.crawler.system.ManifoldCF.obfuscate(password));
+                      paramNode.setAttribute(WebcrawlerConfig.ATTR_PASSWORD,ManifoldCF.obfuscate(variableContext.mapKeyToPassword(password)));
                     authPageNode.addChild(authPageNode.getChildCount(),paramNode);
                   }
                   z++;
@@ -3081,7 +3105,7 @@
                   if (value != null && value.length() > 0)
                     paramNode.setAttribute(WebcrawlerConfig.ATTR_VALUE,value);
                   if (password != null && password.length() > 0)
-                    paramNode.setAttribute(WebcrawlerConfig.ATTR_PASSWORD,org.apache.manifoldcf.crawler.system.ManifoldCF.obfuscate(password));
+                    paramNode.setAttribute(WebcrawlerConfig.ATTR_PASSWORD,ManifoldCF.obfuscate(password));
                   authPageNode.addChild(authPageNode.getChildCount(),paramNode);
                 }
               }
@@ -5110,7 +5134,8 @@
       // We've successfully obtained a lock on reading robots for this server!  Now, guarantee that we'll free it, by instantiating a try/finally
       try
       {
-        IThrottledConnection connection = ThrottledFetcher.getConnection(protocol,hostIPAddress,port,credential,
+        IThrottledConnection connection = ThrottledFetcher.getConnection(currentContext,throttleGroupName,
+          protocol,hostIPAddress,port,credential,
           trustStore,throttleDescription,binNames,connectionLimit,
           proxyHost,proxyPort,proxyAuthDomain,proxyAuthUsername,proxyAuthPassword);
         try
@@ -5604,7 +5629,10 @@
     if (interestingMimeTypeMap.get(contentType) != null)
       return true;
     
-    return activities.checkMimeTypeIndexable(contentType);
+    boolean rval = activities.checkMimeTypeIndexable(contentType);
+    if (rval == false && Logging.connectors.isDebugEnabled())
+      Logging.connectors.debug("Web: For document '"+documentIdentifier+"', not fetching because output connector does not want mimetype '"+contentType+"'");
+    return rval;
   }
   
   /** Code to check if an already-fetched document should be ingested.
@@ -5616,13 +5644,25 @@
       return false;
 
     if (activities.checkLengthIndexable(cache.getDataLength(documentIdentifier)) == false)
+    {
+      if (Logging.connectors.isDebugEnabled())
+        Logging.connectors.debug("Web: For document '"+documentIdentifier+"', not indexing because output connector thinks length "+cache.getDataLength(documentIdentifier)+" is too long");
       return false;
-
+    }
+    
     if (activities.checkURLIndexable(documentIdentifier) == false)
+    {
+      if (Logging.connectors.isDebugEnabled())
+        Logging.connectors.debug("Web: For document '"+documentIdentifier+"', not indexing because output connector does not want URL");
       return false;
+    }
 
     if (filter.isDocumentIndexable(documentIdentifier) == false)
+    {
+      if (Logging.connectors.isDebugEnabled())
+        Logging.connectors.debug("Web: For document '"+documentIdentifier+"', not indexing because document does not match web job constraints");
       return false;
+    }
     
     // Check if it's a recognized content type
     String contentType = cache.getContentType(documentIdentifier);
@@ -5645,7 +5685,48 @@
       contentType = contentType.substring(0,pos);
     contentType = contentType.trim();
 
-    return activities.checkMimeTypeIndexable(contentType);
+    boolean rval = activities.checkMimeTypeIndexable(contentType);
+    if (rval == false && Logging.connectors.isDebugEnabled())
+      Logging.connectors.debug("Web: For document '"+documentIdentifier+"', not indexing because output connector does not want mime type '"+contentType+"'");
+    return rval;
+  }
+
+  /** Convert a document identifier (a URL) to the file name to use for the document.
+   * @param documentIdentifier is the document identifier (the document's URL).
+   * @return the derived file name: the last path segment, or "index.html" for paths
+   *  ending in "/", with the raw query string appended when present; empty if no
+   *  file name can be derived.
+   * @throws URISyntaxException if the document identifier cannot be parsed as a URI.
+   */
+  protected String documentIdentifiertoFileName(String documentIdentifier) 
+    throws URISyntaxException
+  {
+    StringBuffer path = new StringBuffer();
+    URI uri = new URI(documentIdentifier);
+
+    if (uri.getRawPath() != null) {
+      if (uri.getRawPath().equals("")) {
+        path.append("");
+      } else if (uri.getRawPath().equals("/")) {
+        path.append("index.html");
+      } else if (uri.getRawPath().length() != 0) {
+        if (uri.getRawPath().endsWith("/")) {
+          path.append("index.html");
+        } else {
+          String[] names = uri.getRawPath().split("/"); 
+          path.append(names[names.length - 1]);
+        } 
+      }
+    }
+
+    if (path.length() > 0) {
+      if (uri.getRawQuery() != null) {
+        path.append("?");
+        path.append(uri.getRawQuery());
+      }
+    }
+
+    return path.toString();
   }
 
   /** Find a redirection URI, if it exists */
@@ -5720,11 +5801,17 @@
   {
     ProcessActivityRedirectionHandler redirectHandler = new ProcessActivityRedirectionHandler(documentIdentifier,activities,filter);
     handleRedirects(documentIdentifier,redirectHandler);
+    if (Logging.connectors.isDebugEnabled() && redirectHandler.shouldIndex() == false)
+      Logging.connectors.debug("Web: Not indexing document '"+documentIdentifier+"' because of redirection");
     // For html, we don't want any actions, because we don't do form submission.
     ProcessActivityHTMLHandler htmlHandler = new ProcessActivityHTMLHandler(documentIdentifier,activities,filter);
     handleHTML(documentIdentifier,htmlHandler);
+    if (Logging.connectors.isDebugEnabled() && htmlHandler.shouldIndex() == false)
+      Logging.connectors.debug("Web: Not indexing document '"+documentIdentifier+"' because of HTML robots or content tags prohibiting indexing");
     ProcessActivityXMLHandler xmlHandler = new ProcessActivityXMLHandler(documentIdentifier,activities,filter);
     handleXML(documentIdentifier,xmlHandler);
+    if (Logging.connectors.isDebugEnabled() && xmlHandler.shouldIndex() == false)
+      Logging.connectors.debug("Web: Not indexing document '"+documentIdentifier+"' because of XML robots or content tags prohibiting indexing");
     // May add more later for other extraction tasks.
     return htmlHandler.shouldIndex() && redirectHandler.shouldIndex() && xmlHandler.shouldIndex();
   }
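
The file-name rule introduced above takes the last path segment of the document URL, substitutes "index.html" for "/" and other trailing-slash paths, and appends any raw query string.  The following standalone sketch mirrors that logic for illustration only (it reimplements the rule rather than calling the connector's protected method):

import java.net.URI;
import java.net.URISyntaxException;

public class FileNameRuleDemo
{
  /** Mirror of the connector's rule: last path segment, "index.html" for
  * directory-style paths, with the raw query string appended when present. */
  static String fileNameFor(String url)
    throws URISyntaxException
  {
    URI uri = new URI(url);
    StringBuilder path = new StringBuilder();
    String rawPath = uri.getRawPath();
    if (rawPath != null && rawPath.length() > 0)
    {
      if (rawPath.endsWith("/"))
        path.append("index.html");
      else
      {
        String[] names = rawPath.split("/");
        path.append(names[names.length - 1]);
      }
    }
    if (path.length() > 0 && uri.getRawQuery() != null)
      path.append("?").append(uri.getRawQuery());
    return path.toString();
  }

  public static void main(String[] args)
    throws URISyntaxException
  {
    System.out.println(fileNameFor("http://example.com/"));             // index.html
    System.out.println(fileNameFor("http://example.com/docs/"));        // index.html
    System.out.println(fileNameFor("http://example.com/a/b.html"));     // b.html
    System.out.println(fileNameFor("http://example.com/a/b.html?x=1")); // b.html?x=1
  }
}
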
diff --git a/connectors/webcrawler/pom.xml b/connectors/webcrawler/pom.xml
index 826682f..9e823a5 100644
--- a/connectors/webcrawler/pom.xml
+++ b/connectors/webcrawler/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.manifoldcf</groupId>
     <artifactId>mcf-connectors</artifactId>
-    <version>1.2-SNAPSHOT</version>
+    <version>1.5-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
 
@@ -30,14 +30,22 @@
   <build>
     <sourceDirectory>${basedir}/connector/src/main/java</sourceDirectory>
     <testSourceDirectory>${basedir}/connector/src/test/java</testSourceDirectory>
+    <resources>
+      <resource>
+        <directory>${basedir}/connector/src/main/native2ascii</directory>
+        <includes>
+          <include>**/*.properties</include>
+        </includes>
+      </resource>
+    </resources> 
+
     <plugins>
       <plugin>
         <groupId>org.codehaus.mojo</groupId>
         <artifactId>native2ascii-maven-plugin</artifactId>
-        <version>1.0-alpha-1</version>
+        <version>1.0-beta-1</version>
         <configuration>
-            <dest>target/classes</dest>
-            <src>connector/src/main/native2ascii</src>
+            <workDir>target/classes</workDir>
         </configuration>
         <executions>
             <execution>
@@ -47,7 +55,9 @@
                 </goals>
                 <configuration>
                     <encoding>UTF8</encoding>
-                    <includes>**/*.properties</includes>
+                    <includes>
+                      <include>**/*.properties</include>
+                    </includes>
                 </configuration>
             </execution>
         </executions>
@@ -91,7 +101,7 @@
     <dependency>
       <groupId>org.apache.httpcomponents</groupId>
       <artifactId>httpclient</artifactId>
-      <version>${httpcomponent.version}</version>
+      <version>${httpcomponent.httpclient.version}</version>
     </dependency>
     <dependency>
       <groupId>commons-logging</groupId>
diff --git a/connectors/wiki/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/wiki/PageBuffer.java b/connectors/wiki/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/wiki/PageBuffer.java
deleted file mode 100644
index 935b98d..0000000
--- a/connectors/wiki/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/wiki/PageBuffer.java
+++ /dev/null
@@ -1,87 +0,0 @@
-/* $Id$ */
-
-/**
-* Licensed to the Apache Software Foundation (ASF) under one or more
-* contributor license agreements. See the NOTICE file distributed with
-* this work for additional information regarding copyright ownership.
-* The ASF licenses this file to You under the Apache License, Version 2.0
-* (the "License"); you may not use this file except in compliance with
-* the License. You may obtain a copy of the License at
-*
-* http://www.apache.org/licenses/LICENSE-2.0
-*
-* Unless required by applicable law or agreed to in writing, software
-* distributed under the License is distributed on an "AS IS" BASIS,
-* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-* See the License for the specific language governing permissions and
-* limitations under the License.
-*/
-package org.apache.manifoldcf.crawler.connectors.wiki;
-
-import java.util.*;
-
-/** Thread-safe class that functions as a limited-size buffer of pageIDs */
-public class PageBuffer
-{
-  protected static int MAX_SIZE = 1024;
-  
-  protected List<String> buffer = new ArrayList<String>(MAX_SIZE);
-  
-  protected boolean complete = false;
-  protected boolean abandoned = false;
-  
-  /** Constructor */
-  public PageBuffer()
-  {
-  }
-  
-  /** Add a page id to the buffer, and block if the buffer is full */
-  public synchronized void add(String pageID)
-    throws InterruptedException
-  {
-    while (buffer.size() == MAX_SIZE && !abandoned)
-      wait();
-    if (abandoned)
-      return;
-    buffer.add(pageID);
-    // Notify threads that are waiting on there being stuff in the queue
-    notifyAll();
-  }
-  
-  /** Signal that the buffer should be abandoned */
-  public synchronized void abandon()
-  {
-    abandoned = true;
-    // Notify waiting threads
-    notifyAll();
-  }
-  
-  /** Signal that the operation is complete, and that no more pageID's
-  * will be added.
-  */
-  public synchronized void signalDone()
-  {
-    complete = true;
-    // Notify threads that are waiting for stuff to appear, because it won't
-    notifyAll();
-  }
-  
-  /** Pull an id off the buffer, and wait if there's more to come.
-  * Returns null if the operation is complete.
-  */
-  public synchronized String fetch()
-    throws InterruptedException
-  {
-    while (buffer.size() == 0 && !complete)
-      wait();
-    if (buffer.size() == 0)
-      return null;
-    boolean isBufferFull = (buffer.size() == MAX_SIZE);
-    String rval = buffer.remove(buffer.size()-1);
-    // Notify those threads waiting on buffer being not completely full to wake
-    if (isBufferFull)
-      notifyAll();
-    return rval;
-  }
-  
-}
diff --git a/connectors/wiki/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/wiki/WikiConfig.java b/connectors/wiki/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/wiki/WikiConfig.java
index 43c8775..8993c3d 100644
--- a/connectors/wiki/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/wiki/WikiConfig.java
+++ b/connectors/wiki/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/wiki/WikiConfig.java
@@ -44,6 +44,18 @@
   public static final String PARAM_PASSWORD = "serverpass";
   public static final String PARAM_DOMAIN = "serverdomain";
 
+  // Access credentials
+  public static final String PARAM_ACCESSREALM = "accessrealm";
+  public static final String PARAM_ACCESSUSER = "accessuser";
+  public static final String PARAM_ACCESSPASSWORD = "accesspassword";
+  
+  // Proxy info
+  public static final String PARAM_PROXYHOST = "Proxy host";
+  public static final String PARAM_PROXYPORT = "Proxy port";
+  public static final String PARAM_PROXYDOMAIN = "Proxy domain";
+  public static final String PARAM_PROXYUSERNAME = "Proxy username";
+  public static final String PARAM_PROXYPASSWORD = "Proxy password";
+
   // Document specification
 
   /** Namespace and title prefix */
diff --git a/connectors/wiki/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/wiki/WikiConnector.java b/connectors/wiki/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/wiki/WikiConnector.java
index 4560e90..9d18873 100644
--- a/connectors/wiki/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/wiki/WikiConnector.java
+++ b/connectors/wiki/connector/src/main/java/org/apache/manifoldcf/crawler/connectors/wiki/WikiConnector.java
@@ -32,6 +32,10 @@
 import org.apache.manifoldcf.agents.common.XMLStringContext;
 import org.apache.manifoldcf.agents.common.XMLFileContext;
 
+import org.apache.http.auth.AuthScope;
+import org.apache.http.auth.Credentials;
+import org.apache.http.auth.UsernamePasswordCredentials;
+import org.apache.http.auth.NTCredentials;
 import org.apache.http.conn.ClientConnectionManager;
 import org.apache.http.client.HttpClient;
 import org.apache.http.impl.conn.PoolingClientConnectionManager;
@@ -40,12 +44,14 @@
 import org.apache.http.client.methods.HttpGet;
 import org.apache.http.client.methods.HttpPost;
 import org.apache.http.HttpResponse;
+import org.apache.http.HttpHost;
+import org.apache.http.HttpEntity;
+import org.apache.http.NameValuePair;
 import org.apache.http.params.BasicHttpParams;
 import org.apache.http.params.HttpParams;
 import org.apache.http.params.CoreConnectionPNames;
-import org.apache.http.HttpEntity;
+import org.apache.http.params.CoreProtocolPNames;
 import org.apache.http.client.entity.UrlEncodedFormEntity;
-import org.apache.http.NameValuePair;
 import org.apache.http.message.BasicNameValuePair;
 import org.apache.http.protocol.HTTP;
 import org.apache.http.util.EntityUtils;
@@ -53,6 +59,10 @@
 import org.apache.http.client.params.ClientPNames;
 import org.apache.http.client.HttpRequestRetryHandler;
 import org.apache.http.protocol.HttpContext;
+import org.apache.http.conn.params.ConnRoutePNames;
+import org.apache.http.conn.scheme.Scheme;
+import org.apache.http.conn.ssl.SSLSocketFactory;
+import org.apache.http.conn.ssl.AllowAllHostnameVerifier;
 
 import org.apache.http.conn.ConnectTimeoutException;
 import org.apache.http.client.CircularRedirectException;
@@ -99,15 +109,45 @@
   /** The user-agent for this connector instance */
   protected String userAgent = null;
 
+  // Server login parameters
   protected String serverLogin = null;
   protected String serverPass = null;
   protected String serverDomain = null;
   
+  // Basic auth parameters
+  protected String accessRealm = null;
+  protected String accessUser = null;
+  protected String accessPassword = null;
+  
+  // Proxy parameters
+  protected String proxyHost = null;
+  protected String proxyPort = null;
+  protected String proxyDomain = null;
+  protected String proxyUsername = null;
+  protected String proxyPassword = null;
+  
   /** Connection management */
   protected ClientConnectionManager connectionManager = null;
 
   protected HttpClient httpClient = null;
   
+  // Current host name
+  private static String currentHost = null;
+  static
+  {
+    // Find the current host name
+    try
+    {
+      java.net.InetAddress addr = java.net.InetAddress.getLocalHost();
+
+      // Get hostname
+      currentHost = addr.getHostName();
+    }
+    catch (java.net.UnknownHostException e)
+    {
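+      // If the local host name cannot be determined, currentHost simply remains null.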
+    }
+  }
+
   /** Constructor.
   */
   public WikiConnector()
@@ -139,10 +179,20 @@
   public void connect(ConfigParams configParameters)
   {
     super.connect(configParameters);
+
     server = params.getParameter(WikiConfig.PARAM_SERVER);
     serverLogin = params.getParameter(WikiConfig.PARAM_LOGIN);
     serverPass = params.getObfuscatedParameter(WikiConfig.PARAM_PASSWORD);
     serverDomain = params.getParameter(WikiConfig.PARAM_DOMAIN);
+    accessRealm = params.getParameter(WikiConfig.PARAM_ACCESSREALM);
+    accessUser = params.getParameter(WikiConfig.PARAM_ACCESSUSER);
+    accessPassword = params.getObfuscatedParameter(WikiConfig.PARAM_ACCESSPASSWORD);
+
+    proxyHost = params.getParameter(WikiConfig.PARAM_PROXYHOST);
+    proxyPort = params.getParameter(WikiConfig.PARAM_PROXYPORT);
+    proxyDomain = params.getParameter(WikiConfig.PARAM_PROXYDOMAIN);
+    proxyUsername = params.getParameter(WikiConfig.PARAM_PROXYUSERNAME);
+    proxyPassword = params.getObfuscatedParameter(WikiConfig.PARAM_PROXYPASSWORD);
   }
 
   protected void getSession()
@@ -168,16 +218,28 @@
       
       baseURL = protocol + "://" + server + ((portString!=null)?":" + portString:"") + path + "/api.php?format=xml&";
 
+      int socketTimeout = 900000;
+      int connectionTimeout = 300000;
+
+      javax.net.ssl.SSLSocketFactory httpsSocketFactory = KeystoreManagerFactory.getTrustingSecureSocketFactory();
+      SSLSocketFactory myFactory = new SSLSocketFactory(new InterruptibleSocketFactory(httpsSocketFactory,connectionTimeout),
+        new AllowAllHostnameVerifier());
+      Scheme myHttpsProtocol = new Scheme("https", 443, myFactory);
+
       // Set up connection manager
       PoolingClientConnectionManager localConnectionManager = new PoolingClientConnectionManager();
       localConnectionManager.setMaxTotal(1);
       connectionManager = localConnectionManager;
+      // Set up protocol registry
+      connectionManager.getSchemeRegistry().register(myHttpsProtocol);
 
       BasicHttpParams params = new BasicHttpParams();
+      params.setBooleanParameter(CoreProtocolPNames.USE_EXPECT_CONTINUE,true);
+      params.setIntParameter(CoreProtocolPNames.WAIT_FOR_CONTINUE,socketTimeout);
       params.setBooleanParameter(CoreConnectionPNames.TCP_NODELAY,true);
       params.setBooleanParameter(CoreConnectionPNames.STALE_CONNECTION_CHECK,false);
-      params.setIntParameter(CoreConnectionPNames.SO_TIMEOUT,900000);
-      params.setIntParameter(CoreConnectionPNames.CONNECTION_TIMEOUT,300000);
+      params.setIntParameter(CoreConnectionPNames.SO_TIMEOUT,socketTimeout);
+      params.setIntParameter(CoreConnectionPNames.CONNECTION_TIMEOUT,connectionTimeout);
       params.setBooleanParameter(ClientPNames.ALLOW_CIRCULAR_REDIRECTS,true);
       DefaultHttpClient localHttpClient = new DefaultHttpClient(connectionManager,params);
       // No retries
@@ -193,6 +255,52 @@
        
         });
 
+      if (accessUser != null && accessUser.length() > 0 && accessPassword != null)
+      {
+        Credentials credentials = new UsernamePasswordCredentials(accessUser, accessPassword);
+        if (accessRealm != null && accessRealm.length() > 0)
+          localHttpClient.getCredentialsProvider().setCredentials(new AuthScope(AuthScope.ANY_HOST, AuthScope.ANY_PORT, accessRealm), credentials);
+        else
+          localHttpClient.getCredentialsProvider().setCredentials(AuthScope.ANY, credentials);
+      }
+
+      // If there's a proxy, set that too.
+      if (proxyHost != null && proxyHost.length() > 0)
+      {
+
+        int proxyPortInt;
+        if (proxyPort != null && proxyPort.length() > 0)
+        {
+          try
+          {
+            proxyPortInt = Integer.parseInt(proxyPort);
+          }
+          catch (NumberFormatException e)
+          {
+            throw new ManifoldCFException("Bad number: "+e.getMessage(),e);
+          }
+        }
+        else
+          proxyPortInt = 8080;
+
+        // Configure proxy authentication
+        if (proxyUsername != null && proxyUsername.length() > 0)
+        {
+          if (proxyPassword == null)
+            proxyPassword = "";
+          if (proxyDomain == null)
+            proxyDomain = "";
+
+          localHttpClient.getCredentialsProvider().setCredentials(
+            new AuthScope(proxyHost, proxyPortInt),
+            new NTCredentials(proxyUsername, proxyPassword, currentHost, proxyDomain));
+        }
+
+        HttpHost proxy = new HttpHost(proxyHost, proxyPortInt);
+
+        localHttpClient.getParams().setParameter(ConnRoutePNames.DEFAULT_PROXY, proxy);
+      }
+
       httpClient = localHttpClient;
       
       loginToAPI();
@@ -678,6 +786,14 @@
     serverLogin = null;
     serverPass = null;
     serverDomain = null;
+    accessUser = null;
+    accessPassword = null;
+    accessRealm = null;
+    proxyHost = null;
+    proxyPort = null;
+    proxyDomain = null;
+    proxyUsername = null;
+    proxyPassword = null;
     baseURL = null;
     userAgent = null;
 
@@ -865,6 +981,7 @@
   {
     tabsArray.add(Messages.getString(locale,"WikiConnector.Server"));
     tabsArray.add(Messages.getString(locale,"WikiConnector.Email"));
+    tabsArray.add(Messages.getString(locale,"WikiConnector.Proxy"));
 
     out.print(
 "<script type=\"text/javascript\">\n"+
@@ -889,6 +1006,12 @@
 "    editconnection.serverpath.focus();\n"+
 "    return false;\n"+
 "  }\n"+
+"  if (editconnection.proxyport.value != \"\" && !isInteger(editconnection.proxyport.value))\n"+
+"  {\n"+
+"    alert(\""+Messages.getBodyJavascriptString(locale,"WikiConnector.ProxyPortMustBeAValidInteger")+"\");\n"+
+"    editconnection.proxyport.focus();\n"+
+"    return false;\n"+
+"  }\n"+
 "  return true;\n"+
 "}\n"+
 "\n"+
@@ -922,6 +1045,13 @@
 "    editconnection.serverpath.focus();\n"+
 "    return false;\n"+
 "  }\n"+
+"  if (editconnection.proxyport.value != \"\" && !isInteger(editconnection.proxyport.value))\n"+
+"  {\n"+
+"    alert(\""+Messages.getBodyJavascriptString(locale,"WikiConnector.ProxyPortMustBeAValidInteger")+"\");\n"+
+"    SelectTab(\""+Messages.getBodyJavascriptString(locale,"WikiConnector.Proxy")+"\");\n"+
+"    editconnection.proxyport.focus();\n"+
+"    return false;\n"+
+"  }\n"+
 "  return true;\n"+
 "}\n"+
 "\n"+
@@ -963,6 +1093,8 @@
     if (path == null)
       path = "/w";
 
+    // Server login parameters
+
     String login = parameters.getParameter(WikiConfig.PARAM_LOGIN);
     if (login == null) {
       login = "";
@@ -970,11 +1102,94 @@
     String pass = parameters.getObfuscatedParameter(WikiConfig.PARAM_PASSWORD);
     if (pass == null) {
       pass = "";
+    } else {
+      pass = out.mapPasswordToKey(pass);
     }
     String domain = parameters.getParameter(WikiConfig.PARAM_DOMAIN);
     if (domain == null) {
       domain = "";
     }
+
+    // Basic auth parameters
+    
+    String accessRealm = parameters.getParameter(WikiConfig.PARAM_ACCESSREALM);
+    if (accessRealm == null)
+      accessRealm = "";
+    
+    String accessUser = parameters.getParameter(WikiConfig.PARAM_ACCESSUSER);
+    if (accessUser == null)
+      accessUser = "";
+    
+    String accessPassword = parameters.getObfuscatedParameter(WikiConfig.PARAM_ACCESSPASSWORD);
+    if (accessPassword == null)
+      accessPassword = "";
+    else
+      accessPassword = out.mapPasswordToKey(accessPassword);
+
+    // Proxy parameters
+    
+    String proxyHost = parameters.getParameter(WikiConfig.PARAM_PROXYHOST);
+    if (proxyHost == null)
+      proxyHost = "";
+    
+    String proxyPort = parameters.getParameter(WikiConfig.PARAM_PROXYPORT);
+    if (proxyPort == null)
+      proxyPort = "";
+    
+    String proxyDomain = parameters.getParameter(WikiConfig.PARAM_PROXYDOMAIN);
+    if (proxyDomain == null)
+      proxyDomain = "";
+    
+    String proxyUsername = parameters.getParameter(WikiConfig.PARAM_PROXYUSERNAME);
+    if (proxyUsername == null)
+      proxyUsername = "";
+    
+    String proxyPassword = parameters.getObfuscatedParameter(WikiConfig.PARAM_PROXYPASSWORD);
+    if (proxyPassword == null)
+      proxyPassword = "";
+    else
+      proxyPassword = out.mapPasswordToKey(proxyPassword);
+
+    // Proxy tab
+    if (tabName.equals(Messages.getString(locale,"WikiConnector.Proxy")))
+    {
+      out.print(
+"<table class=\"displaytable\">\n"+
+"  <tr><td class=\"separator\" colspan=\"2\"><hr/></td></tr>\n"+
+"  <tr>\n"+
+"    <td class=\"description\"><nobr>" + Messages.getBodyString(locale,"WikiConnector.ProxyHostColon") + "</nobr></td>\n"+
+"    <td class=\"value\"><input type=\"text\" size=\"32\" name=\"proxyhost\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(proxyHost)+"\"/></td>\n"+
+"  </tr>\n"+
+"  <tr>\n"+
+"    <td class=\"description\"><nobr>" + Messages.getBodyString(locale,"WikiConnector.ProxyPortColon") + "</nobr></td>\n"+
+"    <td class=\"value\"><input type=\"text\" size=\"5\" name=\"proxyport\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(proxyPort)+"\"/></td>\n"+
+"  </tr>\n"+
+"  <tr><td class=\"separator\" colspan=\"2\"><hr/></td></tr>\n"+
+"  <tr>\n"+
+"    <td class=\"description\"><nobr>" + Messages.getBodyString(locale,"WikiConnector.ProxyDomainColon") + "</nobr></td>\n"+
+"    <td class=\"value\"><input type=\"text\" size=\"32\" name=\"proxydomain\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(proxyDomain)+"\"/></td>\n"+
+"  </tr>\n"+
+"  <tr>\n"+
+"    <td class=\"description\"><nobr>" + Messages.getBodyString(locale,"WikiConnector.ProxyUsernameColon") + "</nobr></td>\n"+
+"    <td class=\"value\"><input type=\"text\" size=\"16\" name=\"proxyusername\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(proxyUsername)+"\"/></td>\n"+
+"  </tr>\n"+
+"  <tr>\n"+
+"    <td class=\"description\"><nobr>" + Messages.getBodyString(locale,"WikiConnector.ProxyPasswordColon") + "</nobr></td>\n"+
+"    <td class=\"value\"><input type=\"password\" size=\"16\" name=\"proxypassword\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(proxyPassword)+"\"/></td>\n"+
+"  </tr>\n"+
+"</table>\n"
+      );
+    }
+    else
+    {
+      out.print(
+"<input type=\"hidden\" name=\"proxyhost\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(proxyHost)+"\"/>\n"+
+"<input type=\"hidden\" name=\"proxyport\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(proxyPort)+"\"/>\n"+
+"<input type=\"hidden\" name=\"proxydomain\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(proxyDomain)+"\"/>\n"+
+"<input type=\"hidden\" name=\"proxyusername\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(proxyUsername)+"\"/>\n"+
+"<input type=\"hidden\" name=\"proxypassword\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(proxyPassword)+"\"/>\n"
+      );
+    }
     
     // Email tab
     if (tabName.equals(Messages.getString(locale,"WikiConnector.Email")))
@@ -1026,6 +1241,7 @@
 "      <input name=\"serverpath\" type=\"text\" size=\"16\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(path)+"\"/>\n"+
 "    </td>\n"+
 "  </tr>\n"+
+"  <tr><td class=\"separator\" colspan=\"2\"><hr/></td></tr>\n"+
 "  <tr>\n"+
 "    <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "WikiConnector.ServerLogin") + "</nobr></td>\n"+
 "    <td class=\"value\">\n"+
@@ -1044,6 +1260,25 @@
 "      <input name=\"serverdomain\" type=\"text\" size=\"16\" value=\"" + org.apache.manifoldcf.ui.util.Encoder.attributeEscape(domain) + "\"/>\n"+
 "    </td>\n"+
 "  </tr>\n"+
+"  <tr><td class=\"separator\" colspan=\"2\"><hr/></td></tr>\n"+
+"  <tr>\n"+
+"    <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "WikiConnector.AccessUser") + "</nobr></td>\n"+
+"    <td class=\"value\">\n"+
+"      <input name=\"accessuser\" type=\"text\" size=\"16\" value=\"" + org.apache.manifoldcf.ui.util.Encoder.attributeEscape(accessUser) + "\"/>\n"+
+"    </td>\n"+
+"  </tr>\n"+
+"  <tr>\n"+
+"    <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "WikiConnector.AccessPassword") + "</nobr></td>\n"+
+"    <td class=\"value\">\n"+
+"      <input name=\"accesspassword\" type=\"password\" size=\"16\" value=\"" + org.apache.manifoldcf.ui.util.Encoder.attributeEscape(accessPassword) + "\"/>\n"+
+"    </td>\n"+
+"  </tr>\n"+
+"  <tr>\n"+
+"    <td class=\"description\"><nobr>" + Messages.getBodyString(locale, "WikiConnector.AccessRealm") + "</nobr></td>\n"+
+"    <td class=\"value\">\n"+
+"      <input name=\"accessrealm\" type=\"text\" size=\"16\" value=\"" + org.apache.manifoldcf.ui.util.Encoder.attributeEscape(accessRealm) + "\"/>\n"+
+"    </td>\n"+
+"  </tr>\n"+
 "</table>\n"
       );
     }
@@ -1057,7 +1292,10 @@
 "<input type=\"hidden\" name=\"serverpath\" value=\""+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(path)+"\"/>\n"+
 "<input type=\"hidden\" name=\"serverlogin\" value=\"" + org.apache.manifoldcf.ui.util.Encoder.attributeEscape(login) + "\"/>\n"+
 "<input type=\"hidden\" name=\"serverpass\" value=\"" + org.apache.manifoldcf.ui.util.Encoder.attributeEscape(pass) + "\"/>\n"+
-"<input type=\"hidden\" name=\"serverdomain\" value=\"" + org.apache.manifoldcf.ui.util.Encoder.attributeEscape(domain) + "\"/>\n"
+"<input type=\"hidden\" name=\"serverdomain\" value=\"" + org.apache.manifoldcf.ui.util.Encoder.attributeEscape(domain) + "\"/>\n"+
+"<input type=\"hidden\" name=\"accessuser\" value=\"" + org.apache.manifoldcf.ui.util.Encoder.attributeEscape(accessUser) + "\"/>\n"+
+"<input type=\"hidden\" name=\"accesspassword\" value=\"" + org.apache.manifoldcf.ui.util.Encoder.attributeEscape(accessPassword) + "\"/>\n"+
+"<input type=\"hidden\" name=\"accessrealm\" value=\"" + org.apache.manifoldcf.ui.util.Encoder.attributeEscape(accessRealm) + "\"/>\n"
       );
     }
 
@@ -1104,7 +1342,7 @@
 
     String pass = variableContext.getParameter("serverpass");
     if (pass != null) {
-      parameters.setObfuscatedParameter(WikiConfig.PARAM_PASSWORD, pass);
+      parameters.setObfuscatedParameter(WikiConfig.PARAM_PASSWORD, variableContext.mapKeyToPassword(pass));
     }
 
     String domain = variableContext.getParameter("serverdomain");
@@ -1112,6 +1350,46 @@
       parameters.setParameter(WikiConfig.PARAM_DOMAIN, domain);
     }
 
+    String accessUser = variableContext.getParameter("accessuser");
+    if (accessUser != null) {
+      parameters.setParameter(WikiConfig.PARAM_ACCESSUSER, accessUser);
+    }
+
+    String accessPassword = variableContext.getParameter("accesspassword");
+    if (accessPassword != null) {
+      parameters.setObfuscatedParameter(WikiConfig.PARAM_ACCESSPASSWORD, variableContext.mapKeyToPassword(accessPassword));
+    }
+
+    String accessRealm = variableContext.getParameter("accessrealm");
+    if (accessRealm != null) {
+      parameters.setParameter(WikiConfig.PARAM_ACCESSREALM, accessRealm);
+    }
+
+    String proxyHost = variableContext.getParameter("proxyhost");
+    if (proxyHost != null) {
+      parameters.setParameter(WikiConfig.PARAM_PROXYHOST, proxyHost);
+    }
+    
+    String proxyPort = variableContext.getParameter("proxyport");
+    if (proxyPort != null) {
+      parameters.setParameter(WikiConfig.PARAM_PROXYPORT, proxyPort);
+    }
+
+    String proxyDomain = variableContext.getParameter("proxydomain");
+    if (proxyDomain != null) {
+      parameters.setParameter(WikiConfig.PARAM_PROXYDOMAIN, proxyDomain);
+    }
+    
+    String proxyUsername = variableContext.getParameter("proxyusername");
+    if (proxyUsername != null) {
+      parameters.setParameter(WikiConfig.PARAM_PROXYUSERNAME, proxyUsername);
+    }
+
+    String proxyPassword = variableContext.getParameter("proxypassword");
+    if (proxyPassword != null) {
+      parameters.setObfuscatedParameter(WikiConfig.PARAM_PROXYPASSWORD, variableContext.mapKeyToPassword(proxyPassword));
+    }
+
     return null;
   }
   
@@ -2061,7 +2339,7 @@
       try
       {
 	HttpRequestBase executeMethod = getInitializedGetMethod(getListPagesURL(startPageTitle,namespace,prefix));
-        PageBuffer pageBuffer = new PageBuffer();
+        XThreadStringBuffer pageBuffer = new XThreadStringBuffer();
         ExecuteListPagesThread t = new ExecuteListPagesThread(httpClient,executeMethod,pageBuffer,startPageTitle);
         try
         {
@@ -2191,12 +2469,12 @@
     protected HttpClient client;
     protected HttpRequestBase executeMethod;
     protected Throwable exception = null;
-    protected PageBuffer pageBuffer;
+    protected XThreadStringBuffer pageBuffer;
     protected String lastPageTitle = null;
     protected String startPageTitle;
     protected boolean loginNeeded = false;
 
-    public ExecuteListPagesThread(HttpClient client, HttpRequestBase executeMethod, PageBuffer pageBuffer, String startPageTitle)
+    public ExecuteListPagesThread(HttpClient client, HttpRequestBase executeMethod, XThreadStringBuffer pageBuffer, String startPageTitle)
     {
       super();
       setDaemon(true);
@@ -2277,7 +2555,7 @@
   *   </query-continue>
   * </api>
   */
-  protected static boolean parseListPagesResponse(InputStream is, PageBuffer buffer, String startPageTitle, ReturnString lastTitle)
+  protected static boolean parseListPagesResponse(InputStream is, XThreadStringBuffer buffer, String startPageTitle, ReturnString lastTitle)
     throws ManifoldCFException, ServiceInterruption
   {
     // Parse the document.  This will cause various things to occur, within the instantiated XMLContext class.
@@ -2309,11 +2587,11 @@
   protected static class WikiListPagesAPIContext extends SingleLevelContext
   {
     protected String lastTitle = null;
-    protected PageBuffer buffer;
+    protected XThreadStringBuffer buffer;
     protected String startPageTitle;
     protected boolean loginNeeded = false;
     
-    public WikiListPagesAPIContext(XMLStream theStream, PageBuffer buffer, String startPageTitle)
+    public WikiListPagesAPIContext(XMLStream theStream, XThreadStringBuffer buffer, String startPageTitle)
     {
       super(theStream,"api");
       this.buffer = buffer;
@@ -2350,11 +2628,11 @@
   protected static class WikiListPagesQueryContext extends SingleLevelErrorContext
   {
     protected String lastTitle = null;
-    protected PageBuffer buffer;
+    protected XThreadStringBuffer buffer;
     protected String startPageTitle;
     
     public WikiListPagesQueryContext(XMLStream theStream, String namespaceURI, String localName, String qName, Attributes atts,
-      PageBuffer buffer, String startPageTitle)
+      XThreadStringBuffer buffer, String startPageTitle)
     {
       super(theStream,namespaceURI,localName,qName,atts,"query");
       this.buffer = buffer;
@@ -2385,11 +2663,11 @@
   protected static class WikiListPagesAllPagesContext extends SingleLevelContext
   {
     protected String lastTitle = null;
-    protected PageBuffer buffer;
+    protected XThreadStringBuffer buffer;
     protected String startPageTitle;
     
     public WikiListPagesAllPagesContext(XMLStream theStream, String namespaceURI, String localName, String qName, Attributes atts,
-      PageBuffer buffer, String startPageTitle)
+      XThreadStringBuffer buffer, String startPageTitle)
     {
       super(theStream,namespaceURI,localName,qName,atts,"allpages");
       this.buffer = buffer;
@@ -2422,11 +2700,11 @@
   protected static class WikiListPagesPContext extends BaseProcessingContext
   {
     protected String lastTitle = null;
-    protected PageBuffer buffer;
+    protected XThreadStringBuffer buffer;
     protected String startPageTitle;
     
     public WikiListPagesPContext(XMLStream theStream, String namespaceURI, String localName, String qName, Attributes atts,
-      PageBuffer buffer, String startPageTitle)
+      XThreadStringBuffer buffer, String startPageTitle)
     {
       super(theStream,namespaceURI,localName,qName,atts);
       this.buffer = buffer;
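
Note: the getSession() changes above follow the stock HttpClient 4.x pattern for basic auth plus an authenticating proxy. A minimal, self-contained sketch of that pattern is below; the host, port, realm, and credential values are placeholders, and only calls already used in the hunk above appear.

import org.apache.http.HttpHost;
import org.apache.http.auth.AuthScope;
import org.apache.http.auth.NTCredentials;
import org.apache.http.auth.UsernamePasswordCredentials;
import org.apache.http.conn.params.ConnRoutePNames;
import org.apache.http.impl.client.DefaultHttpClient;

public class ProxyAuthSketch
{
  public static DefaultHttpClient buildClient()
  {
    DefaultHttpClient client = new DefaultHttpClient();

    // Basic auth for the wiki itself; scope to a realm only when one is configured.
    client.getCredentialsProvider().setCredentials(
      new AuthScope(AuthScope.ANY_HOST, AuthScope.ANY_PORT, "example-realm"),
      new UsernamePasswordCredentials("wikiuser", "wikipassword"));

    // NTLM-capable credentials for an authenticating proxy, scoped to the proxy host/port.
    client.getCredentialsProvider().setCredentials(
      new AuthScope("proxy.example.com", 8080),
      new NTCredentials("proxyuser", "proxypassword", "client-host", "EXAMPLEDOMAIN"));

    // Route all requests through the proxy.
    client.getParams().setParameter(ConnRoutePNames.DEFAULT_PROXY,
      new HttpHost("proxy.example.com", 8080));

    return client;
  }
}
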
diff --git a/connectors/wiki/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/wiki/common_en_US.properties b/connectors/wiki/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/wiki/common_en_US.properties
index 4a19ebb..f4ef681 100644
--- a/connectors/wiki/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/wiki/common_en_US.properties
+++ b/connectors/wiki/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/wiki/common_en_US.properties
@@ -14,6 +14,9 @@
 # limitations under the License.
 
 WikiConnector.Server=Server
+WikiConnector.Email=Email
+WikiConnector.Proxy=Proxy
+
 WikiConnector.Protocol=Protocol:
 WikiConnector.ServerName=Server name:
 WikiConnector.Port=Port:
@@ -23,7 +26,6 @@
 WikiConnector.ServerDomain=API domain:
 WikiConnector.NamespaceAndTitles=Namespace and Titles
 WikiConnector.NamespaceAndTitles2=Namespaces and titles:
-WikiConnector.Email=Email
 WikiConnector.Namespace=Namespace
 WikiConnector.TitlePrefix=Title prefix
 WikiConnector.Security=Security
@@ -47,4 +49,14 @@
 WikiConnector.Parameters=Parameters:
 WikiConnector.certificates= certificate(s)
 WikiConnector.DeleteNamespaceTitle=Delete namespace/title #
+WikiConnector.AccessUser=Basic auth user name:
+WikiConnector.AccessPassword=Basic auth password:
+WikiConnector.AccessRealm=Basic auth realm:
 
+WikiConnector.ProxyHostColon=Proxy host:
+WikiConnector.ProxyPortColon=Proxy port:
+WikiConnector.ProxyDomainColon=Proxy domain:
+WikiConnector.ProxyUsernameColon=Proxy user name:
+WikiConnector.ProxyPasswordColon=Proxy password:
+
+WikiConnector.ProxyPortMustBeAValidInteger=Proxy port must be a valid integer
diff --git a/connectors/wiki/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/wiki/common_ja_JP.properties b/connectors/wiki/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/wiki/common_ja_JP.properties
index 0395160..733897f 100644
--- a/connectors/wiki/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/wiki/common_ja_JP.properties
+++ b/connectors/wiki/connector/src/main/native2ascii/org/apache/manifoldcf/crawler/connectors/wiki/common_ja_JP.properties
@@ -14,6 +14,9 @@
 # limitations under the License.
 
 WikiConnector.Server=サーバ
+WikiConnector.Email=メール
+WikiConnector.Proxy=Proxy
+
 WikiConnector.Protocol=プロトコル:
 WikiConnector.ServerName=サーバ名:
 WikiConnector.Port=ポート:
@@ -23,7 +26,6 @@
 WikiConnector.ServerDomain=API domain:
 WikiConnector.NamespaceAndTitles=名前空間と題名
 WikiConnector.NamespaceAndTitles2=名前空間と題名:
-WikiConnector.Email=メール
 WikiConnector.Namespace=名前空間
 WikiConnector.TitlePrefix=題名接先頭辞
 WikiConnector.Security=Security
@@ -47,4 +49,14 @@
 WikiConnector.Parameters=引数:
 WikiConnector.certificates= 証明証
 WikiConnector.DeleteNamespaceTitle=名前空間/題名を削除: #
+WikiConnector.AccessUser=Basic auth user name:
+WikiConnector.AccessPassword=Basic auth password:
+WikiConnector.AccessRealm=Basic auth realm:
 
+WikiConnector.ProxyHostColon=Proxy host:
+WikiConnector.ProxyPortColon=Proxy port:
+WikiConnector.ProxyDomainColon=Proxy domain:
+WikiConnector.ProxyUsernameColon=Proxy user name:
+WikiConnector.ProxyPasswordColon=Proxy password:
+
+WikiConnector.ProxyPortMustBeAValidInteger=Proxy port must be a valid integer
diff --git a/connectors/wiki/pom.xml b/connectors/wiki/pom.xml
index 3c3abf3..c2da9db 100644
--- a/connectors/wiki/pom.xml
+++ b/connectors/wiki/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.manifoldcf</groupId>
     <artifactId>mcf-connectors</artifactId>
-    <version>1.2-SNAPSHOT</version>
+    <version>1.5-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
 
@@ -30,14 +30,22 @@
   <build>
     <sourceDirectory>${basedir}/connector/src/main/java</sourceDirectory>
     <testSourceDirectory>${basedir}/connector/src/test/java</testSourceDirectory>
+    <resources>
+      <resource>
+        <directory>${basedir}/connector/src/main/native2ascii</directory>
+        <includes>
+          <include>**/*.properties</include>
+        </includes>
+      </resource>
+    </resources> 
+
     <plugins>
       <plugin>
         <groupId>org.codehaus.mojo</groupId>
         <artifactId>native2ascii-maven-plugin</artifactId>
-        <version>1.0-alpha-1</version>
+        <version>1.0-beta-1</version>
         <configuration>
-            <dest>target/classes</dest>
-            <src>connector/src/main/native2ascii</src>
+            <workDir>target/classes</workDir>
         </configuration>
         <executions>
             <execution>
@@ -47,7 +55,9 @@
                 </goals>
                 <configuration>
                     <encoding>UTF8</encoding>
-                    <includes>**/*.properties</includes>
+                    <includes>
+                      <include>**/*.properties</include>
+                    </includes>
                 </configuration>
             </execution>
         </executions>
@@ -91,7 +101,7 @@
     <dependency>
       <groupId>org.apache.httpcomponents</groupId>
       <artifactId>httpclient</artifactId>
-      <version>${httpcomponent.version}</version>
+      <version>${httpcomponent.httpclient.version}</version>
     </dependency>
     <dependency>
       <groupId>commons-logging</groupId>
diff --git a/dist-license/DEPENDENCIES.txt b/dist-license/DEPENDENCIES.txt
index 315d3b9..c7f8c50 100644
--- a/dist-license/DEPENDENCIES.txt
+++ b/dist-license/DEPENDENCIES.txt
@@ -1,6 +1,6 @@
 ManifoldCF requires
 ------------------
-* JRE 1.6 or above
+* JRE 1.7 or above
 * Many other libraries, available from the ManifoldCF XXX-lib distribution
 
 For running ManifoldCF:
diff --git a/dist-license/LICENSE.txt b/dist-license/LICENSE.txt
index dff8488..6a5e290 100644
--- a/dist-license/LICENSE.txt
+++ b/dist-license/LICENSE.txt
@@ -293,6 +293,30 @@
 This product includes a jstl-impl-1.2.jar.
 License: Common Development and Distribution License (CDDL) v1.0 (https://glassfish.dev.java.net/public/CDDLv1.0.html)
 
+This product includes a dropbox-client-1.5.3.jar.
+License: MIT license (http://opensource.org/licenses/MIT).
+
+This product includes a json-simple-1.1.jar.
+License: Apache 2 (http://www.apache.org/licenses/LICENSE-2.0.txt)
+
+This product includes a jackson-core-2.1.3.jar.
+License: Dual license; we choose to distribute under Apache 2 (http://www.apache.org/licenses/LICENSE-2.0.txt)
+
+This product includes a google-api-client-1.14.1-beta.jar.
+License: Apache 2 (http://www.apache.org/licenses/LICENSE-2.0.txt)
+
+This product includes a google-oauth-client-1.14.1-beta.jar.
+License: Apache 2 (http://www.apache.org/licenses/LICENSE-2.0.txt)
+
+This product includes a google-api-services-drive-v2-rev64-1.14.1-beta.jar.
+License: Apache 2 (http://www.apache.org/licenses/LICENSE-2.0.txt)
+
+This product includes a google-http-client-1.14.1-beta.jar.
+License: Apache 2 (http://www.apache.org/licenses/LICENSE-2.0.txt)
+
+This product includes a google-http-client-jackson2-1.14.1-beta.jar.
+License: Apache 2 (http://www.apache.org/licenses/LICENSE-2.0.txt)
+
 This product may include pdf files that embed IPA-licensed fonts.
 License: IPA Font License Agreement v1.0 (http://ossipedia.ipa.go.jp/ipafont/index.html#LicenseEng)
 
diff --git a/dist-license/README.txt b/dist-license/README.txt
index 951a537..770f488 100644
--- a/dist-license/README.txt
+++ b/dist-license/README.txt
@@ -28,11 +28,11 @@
 
 Since you downloaded the binary version of the package, you can just run it.
 
-1. Download and install the Java SE 6 JDK (Java Development Kit), or greater,
+1. Download and install the Java SE 7 JDK (Java Development Kit), or greater,
    from http://java.sun.com.  You will need the JDK installed, and the
    %JAVA_HOME%/bin directory included on your command path.  To test this,
    issue a "java -version" command from your shell and verify that the Java
-   version is 1.6 or greater.
+   version is 1.7 or greater.
    
 2. In your shell, change to the single-process example directory, "example".
 
diff --git a/framework/agents/pom.xml b/framework/agents/pom.xml
index 4250d36..1d8f17a 100644
--- a/framework/agents/pom.xml
+++ b/framework/agents/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.manifoldcf</groupId>
     <artifactId>mcf-framework</artifactId>
-    <version>1.2-SNAPSHOT</version>
+    <version>1.5-SNAPSHOT</version>
     <relativePath>../pom.xml</relativePath>
   </parent>
   <modelVersion>4.0.0</modelVersion>
diff --git a/framework/agents/src/main/java/org/apache/manifoldcf/agents/AgentRun.java b/framework/agents/src/main/java/org/apache/manifoldcf/agents/AgentRun.java
index c7f0e4b..4c22945 100644
--- a/framework/agents/src/main/java/org/apache/manifoldcf/agents/AgentRun.java
+++ b/framework/agents/src/main/java/org/apache/manifoldcf/agents/AgentRun.java
@@ -28,8 +28,7 @@
 {
   public static final String _rcsid = "@(#)$Id: AgentRun.java 988245 2010-08-23 18:39:35Z kwright $";
 
-  public static final String agentInUseSignal = "_AGENTINUSE_";
-  public static final String agentShutdownSignal = "_AGENTRUN_";
+  public static final String agentServiceType = "AGENT";
   
   public AgentRun()
   {
@@ -37,41 +36,32 @@
 
   protected void doExecute(IThreadContext tc) throws ManifoldCFException
   {
+    // Note well:
+    // As part of CONNECTORS-781, multiple agents processes are now permitted, provided
+    // that a truly global lock manager implementation is used.  This implementation thus does the
+    // following:
+    // (1) Register the agent, and begin its execution
+    // (2) Periodically check for any new registered IAgent implementations
+    // (3) Await a shutdown signal
+    // (4) If the shutdown signal is seen, exit the active block
+    // (5) Trap JVM exit to be sure we exit the active block no matter what
+    //   (This last step requires the ability to exit active blocks from different ILockManager instances)
+    //
+    // Note well that the agents shutdown signal is NEVER modified by this code; it will be set/cleared by
+    // AgentStop only, and AgentStop will wait until all services become inactive before exiting.
+    String processID = ManifoldCF.getProcessID();
     ILockManager lockManager = LockManagerFactory.make(tc);
-    // Agent already in use?
-    if (lockManager.checkGlobalFlag(agentInUseSignal))
-    {
-      System.err.println("Agent already in use");
-      System.exit(1);
-    }
-    
-    ManifoldCF.addShutdownHook(new AgentRunShutdownRunner());
-    
-    // Set the agents in use signal.
-    lockManager.setGlobalFlag(agentInUseSignal);    
+    lockManager.registerServiceBeginServiceActivity(agentServiceType, processID, null);
     try
     {
-      // Clear the agents shutdown signal.
-      lockManager.clearGlobalFlag(agentShutdownSignal);
+      // Register a shutdown hook to make sure we signal that the main agents process is going inactive.
+      ManifoldCF.addShutdownHook(new AgentRunShutdownRunner(processID));
+      
       Logging.root.info("Running...");
-      while (true)
-      {
-        // Any shutdown signal yet?
-        if (lockManager.checkGlobalFlag(agentShutdownSignal))
-          break;
-
-        // Start whatever agents need to be started
-        ManifoldCF.startAgents(tc);
-
-        try
-        {
-          ManifoldCF.sleep(5000);
-        }
-        catch (InterruptedException e)
-        {
-          break;
-        }
-      }
+      // Register the shutdown hook first so that an explicit stopAgents() call is not required
+      AgentsDaemon ad = new AgentsDaemon(processID);
+      ad.registerAgentsShutdownHook(tc);
+      ad.runAgents(tc);
       Logging.root.info("Shutting down...");
     }
     catch (ManifoldCFException e)
@@ -81,7 +71,9 @@
     }
     finally
     {
-      lockManager.clearGlobalFlag(agentInUseSignal);
+      // Exit service
+      // This is a courtesy; some lock managers (e.g. ZooKeeper) manage to do this anyway
+      lockManager.endServiceActivity(agentServiceType, processID);
     }
   }
 
@@ -111,16 +103,24 @@
   
   protected static class AgentRunShutdownRunner implements IShutdownHook
   {
-    public AgentRunShutdownRunner()
+    protected final String processID;
+    
+    public AgentRunShutdownRunner(String processID)
     {
+      this.processID = processID;
     }
     
-    public void doCleanup()
+    @Override
+    public void doCleanup(IThreadContext tc)
       throws ManifoldCFException
     {
-      IThreadContext tc = ThreadContextFactory.make();
       ILockManager lockManager = LockManagerFactory.make(tc);
-      lockManager.clearGlobalFlag(agentInUseSignal);
+      // We can blast the active flag off here; we may have already exited though and an exception will
+      // therefore be thrown.
+      if (lockManager.checkServiceActive(agentServiceType, processID))
+      {
+        lockManager.endServiceActivity(agentServiceType, processID);
+      }
     }
     
   }
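
Note: the comment block in doExecute() above describes the new service-activity protocol. A compressed, hedged sketch of the register/work/end-activity lifecycle is below, using only the ILockManager calls that appear in this patch; the service type, process ID, sketch class name, and import packages are assumptions, and the work loop is elided.

import org.apache.manifoldcf.core.interfaces.ILockManager;
import org.apache.manifoldcf.core.interfaces.IThreadContext;
import org.apache.manifoldcf.core.interfaces.LockManagerFactory;
import org.apache.manifoldcf.core.interfaces.ManifoldCFException;

public class ServiceActivitySketch
{
  public static void run(IThreadContext tc, String processID)
    throws ManifoldCFException
  {
    ILockManager lockManager = LockManagerFactory.make(tc);
    // (1) Announce this process as an active service of its type.
    lockManager.registerServiceBeginServiceActivity("AGENT", processID, null);
    try
    {
      // (2)-(4) Perform the service's work until a shutdown signal is seen.
      // ... work loop elided ...
    }
    finally
    {
      // (5) Always mark the service inactive, even on abnormal exit, so a
      // synchronous stop can observe zero active services and complete.
      if (lockManager.checkServiceActive("AGENT", processID))
        lockManager.endServiceActivity("AGENT", processID);
    }
  }
}
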
diff --git a/framework/agents/src/main/java/org/apache/manifoldcf/agents/AgentStop.java b/framework/agents/src/main/java/org/apache/manifoldcf/agents/AgentStop.java
index 28677fa..f2e4b88 100644
--- a/framework/agents/src/main/java/org/apache/manifoldcf/agents/AgentStop.java
+++ b/framework/agents/src/main/java/org/apache/manifoldcf/agents/AgentStop.java
@@ -34,9 +34,35 @@
 
   protected void doExecute(IThreadContext tc) throws ManifoldCFException
   {
+    // As part of the work for CONNECTORS-781, this method is now synchronous.
+    // We assert the shutdown signal, and then wait until all active services have shut down.
     ILockManager lockManager = LockManagerFactory.make(tc);
-    lockManager.setGlobalFlag(AgentRun.agentShutdownSignal);
-    Logging.root.info("Shutdown signal sent");
+    AgentsDaemon.assertAgentsShutdownSignal(tc);
+    try
+    {
+      Logging.root.info("Shutdown signal sent");
+      while (true)
+      {
+        // Check to see if services are down yet
+        int count = lockManager.countActiveServices(AgentRun.agentServiceType);
+        if (count == 0)
+          break;
+        try
+        {
+          ManifoldCF.sleep(1000L);
+        }
+        catch (InterruptedException e)
+        {
+          throw new ManifoldCFException(e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+        }
+      }
+      Logging.root.info("All agents shut down");
+    }
+    finally
+    {
+      // Clear shutdown signal
+      AgentsDaemon.clearAgentsShutdownSignal(tc);
+    }
   }
 
 
diff --git a/framework/agents/src/main/java/org/apache/manifoldcf/agents/BaseAgentsInitializationCommand.java b/framework/agents/src/main/java/org/apache/manifoldcf/agents/BaseAgentsInitializationCommand.java
index 6e27c6c..3d91956 100644
--- a/framework/agents/src/main/java/org/apache/manifoldcf/agents/BaseAgentsInitializationCommand.java
+++ b/framework/agents/src/main/java/org/apache/manifoldcf/agents/BaseAgentsInitializationCommand.java
@@ -32,8 +32,8 @@
 {
   public void execute() throws ManifoldCFException
   {
-    ManifoldCF.initializeEnvironment();
     IThreadContext tc = ThreadContextFactory.make();
+    ManifoldCF.initializeEnvironment(tc);
     doExecute(tc);
   }
 
diff --git a/framework/agents/src/main/java/org/apache/manifoldcf/agents/DefineOutputConnection.java b/framework/agents/src/main/java/org/apache/manifoldcf/agents/DefineOutputConnection.java
index 680c3fa..d68c484 100644
--- a/framework/agents/src/main/java/org/apache/manifoldcf/agents/DefineOutputConnection.java
+++ b/framework/agents/src/main/java/org/apache/manifoldcf/agents/DefineOutputConnection.java
@@ -51,8 +51,8 @@
 
                 try
                 {
-                        ManifoldCF.initializeEnvironment();
                         IThreadContext tc = ThreadContextFactory.make();
+                        ManifoldCF.initializeEnvironment(tc);
                         IOutputConnectionManager mgr = OutputConnectionManagerFactory.make(tc);
                         IOutputConnection conn = mgr.create();
                         conn.setName(connectionName);
diff --git a/framework/agents/src/main/java/org/apache/manifoldcf/agents/DeleteOutputConnection.java b/framework/agents/src/main/java/org/apache/manifoldcf/agents/DeleteOutputConnection.java
index 41fd70c..e70c6ff 100644
--- a/framework/agents/src/main/java/org/apache/manifoldcf/agents/DeleteOutputConnection.java
+++ b/framework/agents/src/main/java/org/apache/manifoldcf/agents/DeleteOutputConnection.java
@@ -46,8 +46,8 @@
                 String connectionName = args[0];
                 try
                 {
-                        ManifoldCF.initializeEnvironment();
                         IThreadContext tc = ThreadContextFactory.make();
+                        ManifoldCF.initializeEnvironment(tc);
                         IOutputConnectionManager mgr = OutputConnectionManagerFactory.make(tc);
                         mgr.delete(connectionName);
 
diff --git a/framework/agents/src/main/java/org/apache/manifoldcf/agents/SynchronizeAll.java b/framework/agents/src/main/java/org/apache/manifoldcf/agents/SynchronizeAll.java
index f0be090..8264e3a 100644
--- a/framework/agents/src/main/java/org/apache/manifoldcf/agents/SynchronizeAll.java
+++ b/framework/agents/src/main/java/org/apache/manifoldcf/agents/SynchronizeAll.java
@@ -43,7 +43,7 @@
       String classname = classnames[i++];
       try
       {
-        AgentFactory.make(tc,classname);
+        AgentFactory.make(classname);
       }
       catch (ManifoldCFException e)
       {
diff --git a/framework/agents/src/main/java/org/apache/manifoldcf/agents/TransactionalAgentsInitializationCommand.java b/framework/agents/src/main/java/org/apache/manifoldcf/agents/TransactionalAgentsInitializationCommand.java
index 20f5b3c..b09c541 100644
--- a/framework/agents/src/main/java/org/apache/manifoldcf/agents/TransactionalAgentsInitializationCommand.java
+++ b/framework/agents/src/main/java/org/apache/manifoldcf/agents/TransactionalAgentsInitializationCommand.java
@@ -30,8 +30,8 @@
 {
   public void execute() throws ManifoldCFException
   {
-    ManifoldCF.initializeEnvironment();
     IThreadContext tc = ThreadContextFactory.make();
+    ManifoldCF.initializeEnvironment(tc);
     IDBInterface database = DBInterfaceFactory.make(tc,
       org.apache.manifoldcf.agents.system.ManifoldCF.getMasterDatabaseName(),
       org.apache.manifoldcf.agents.system.ManifoldCF.getMasterDatabaseUsername(),
diff --git a/framework/agents/src/main/java/org/apache/manifoldcf/agents/agentmanager/AgentManager.java b/framework/agents/src/main/java/org/apache/manifoldcf/agents/agentmanager/AgentManager.java
index fdad36d..8c97b78 100644
--- a/framework/agents/src/main/java/org/apache/manifoldcf/agents/agentmanager/AgentManager.java
+++ b/framework/agents/src/main/java/org/apache/manifoldcf/agents/agentmanager/AgentManager.java
@@ -98,8 +98,8 @@
       {
         IResultRow row = set.getRow(i++);
         String className = row.getValue(classNameField).toString();
-        IAgent agent = AgentFactory.make(threadContext,className);
-        agent.deinstall();
+        IAgent agent = AgentFactory.make(className);
+        agent.deinstall(threadContext);
       }
       performDrop(null);
     }
@@ -142,8 +142,8 @@
         performInsert(map,null);
       }
       // In any case, call the install/upgrade method
-      IAgent agent = AgentFactory.make(threadContext,className);
-      agent.install();
+      IAgent agent = AgentFactory.make(className);
+      agent.install(threadContext);
     }
     catch (ManifoldCFException e)
     {
@@ -173,8 +173,8 @@
     try
     {
       // First, deregister agent
-      IAgent agent = AgentFactory.make(threadContext,className);
-      agent.deinstall();
+      IAgent agent = AgentFactory.make(className);
+      agent.deinstall(threadContext);
 
       // Remove from table
       removeAgent(className);
diff --git a/framework/agents/src/main/java/org/apache/manifoldcf/agents/incrementalingest/IncrementalIngester.java b/framework/agents/src/main/java/org/apache/manifoldcf/agents/incrementalingest/IncrementalIngester.java
index 01d7410..f7f12cd 100644
--- a/framework/agents/src/main/java/org/apache/manifoldcf/agents/incrementalingest/IncrementalIngester.java
+++ b/framework/agents/src/main/java/org/apache/manifoldcf/agents/incrementalingest/IncrementalIngester.java
@@ -79,7 +79,9 @@
   protected ILockManager lockManager;
   // Output connection manager
   protected IOutputConnectionManager connectionManager;
-
+  // Output connector pool manager
+  protected IOutputConnectorPool outputConnectorPool;
+  
   /** Constructor.
   */
   public IncrementalIngester(IThreadContext threadContext, IDBInterface database)
@@ -89,10 +91,12 @@
     this.threadContext = threadContext;
     lockManager = LockManagerFactory.make(threadContext);
     connectionManager = OutputConnectionManagerFactory.make(threadContext);
+    outputConnectorPool = OutputConnectorPoolFactory.make(threadContext);
   }
 
   /** Install the incremental ingestion manager.
   */
+  @Override
   public void install()
     throws ManifoldCFException
   {
@@ -177,6 +181,7 @@
 
   /** Uninstall the incremental ingestion manager.
   */
+  @Override
   public void deinstall()
     throws ManifoldCFException
   {
@@ -203,7 +208,7 @@
     throws ManifoldCFException, ServiceInterruption
   {
     IOutputConnection connection = connectionManager.load(outputConnectionName);
-    IOutputConnector connector = OutputConnectorFactory.grab(threadContext,connection.getClassName(),connection.getConfigParams(),connection.getMaxConnections());
+    IOutputConnector connector = outputConnectorPool.grab(connection);
     if (connector == null)
       // The connector is not installed; treat this as a service interruption.
       throw new ServiceInterruption("Output connector not installed",0L);
@@ -213,7 +218,7 @@
     }
     finally
     {
-      OutputConnectorFactory.release(connector);
+      outputConnectorPool.release(connection,connector);
     }
   }
 
@@ -228,7 +233,7 @@
     throws ManifoldCFException, ServiceInterruption
   {
     IOutputConnection connection = connectionManager.load(outputConnectionName);
-    IOutputConnector connector = OutputConnectorFactory.grab(threadContext,connection.getClassName(),connection.getConfigParams(),connection.getMaxConnections());
+    IOutputConnector connector = outputConnectorPool.grab(connection);
     if (connector == null)
       // The connector is not installed; treat this as a service interruption.
       throw new ServiceInterruption("Output connector not installed",0L);
@@ -238,7 +243,7 @@
     }
     finally
     {
-      OutputConnectorFactory.release(connector);
+      outputConnectorPool.release(connection,connector);
     }
   }
 
@@ -254,7 +259,7 @@
     throws ManifoldCFException, ServiceInterruption
   {
     IOutputConnection connection = connectionManager.load(outputConnectionName);
-    IOutputConnector connector = OutputConnectorFactory.grab(threadContext,connection.getClassName(),connection.getConfigParams(),connection.getMaxConnections());
+    IOutputConnector connector = outputConnectorPool.grab(connection);
     if (connector == null)
       // The connector is not installed; treat this as a service interruption.
       throw new ServiceInterruption("Output connector not installed",0L);
@@ -264,7 +269,7 @@
     }
     finally
     {
-      OutputConnectorFactory.release(connector);
+      outputConnectorPool.release(connection,connector);
     }
   }
 
@@ -280,7 +285,7 @@
     throws ManifoldCFException, ServiceInterruption
   {
     IOutputConnection connection = connectionManager.load(outputConnectionName);
-    IOutputConnector connector = OutputConnectorFactory.grab(threadContext,connection.getClassName(),connection.getConfigParams(),connection.getMaxConnections());
+    IOutputConnector connector = outputConnectorPool.grab(connection);
     if (connector == null)
       // The connector is not installed; treat this as a service interruption.
       throw new ServiceInterruption("Output connector not installed",0L);
@@ -290,7 +295,7 @@
     }
     finally
     {
-      OutputConnectorFactory.release(connector);
+      outputConnectorPool.release(connection,connector);
     }
   }
 
@@ -304,7 +309,7 @@
     throws ManifoldCFException, ServiceInterruption
   {
     IOutputConnection connection = connectionManager.load(outputConnectionName);
-    IOutputConnector connector = OutputConnectorFactory.grab(threadContext,connection.getClassName(),connection.getConfigParams(),connection.getMaxConnections());
+    IOutputConnector connector = outputConnectorPool.grab(connection);
     if (connector == null)
       // The connector is not installed; treat this as a service interruption.
       throw new ServiceInterruption("Output connector not installed",0L);
@@ -314,7 +319,7 @@
     }
     finally
     {
-      OutputConnectorFactory.release(connector);
+      outputConnectorPool.release(connection,connector);
     }
 
   }
@@ -407,6 +412,7 @@
   *@param activities is an object providing a set of methods that the implementer can use to perform the operation.
   *@return true if the ingest was ok, false if the ingest is illegal (and should not be repeated).
   */
+  @Override
   public boolean documentIngest(String outputConnectionName,
     String identifierClass, String identifierHash,
     String documentVersion,
@@ -670,6 +676,7 @@
   *@param identifierHash is the hashed document identifier.
   *@param checkTime is the time at which the check took place, in milliseconds since epoch.
   */
+  @Override
   public void documentCheck(String outputConnectionName,
     String identifierClass, String identifierHash,
     long checkTime)
@@ -1367,6 +1374,7 @@
   * they are checked.
   *@param outputConnectionName is the name of the output connection associated with this action.
   */
+  @Override
   public void resetOutputConnection(String outputConnectionName)
     throws ManifoldCFException
   {
@@ -1380,6 +1388,21 @@
     performUpdate(map,"WHERE "+query,list,null);
   }
 
+  /** Remove all knowledge of an output index from the system.  This is appropriate
+  * when the output index no longer exists and you wish to delete the associated job.
+  *@param outputConnectionName is the name of the output connection associated with this action.
+  */
+  @Override
+  public void removeOutputConnection(String outputConnectionName)
+    throws ManifoldCFException
+  {
+    ArrayList list = new ArrayList();
+    String query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(outputConnNameField,outputConnectionName)});
+      
+    performDelete("WHERE "+query,list,null);
+  }
+  
   /** Note the ingestion of a document, or the "update" of a document.
   *@param outputConnectionName is the name of the output connection.
   *@param docKey is the key string describing the document.
@@ -1647,7 +1670,9 @@
     IOutputAddActivity activities)
     throws ManifoldCFException, ServiceInterruption
   {
-    IOutputConnector connector = OutputConnectorFactory.grab(threadContext,connection.getClassName(),connection.getConfigParams(),connection.getMaxConnections());
+    // Set indexing date
+    document.setIndexingDate(new Date());
+    IOutputConnector connector = outputConnectorPool.grab(connection);
     if (connector == null)
       // The connector is not installed; treat this as a service interruption.
       throw new ServiceInterruption("Output connector not installed",0L);
@@ -1657,7 +1682,7 @@
     }
     finally
     {
-      OutputConnectorFactory.release(connector);
+      outputConnectorPool.release(connection,connector);
     }
   }
 
@@ -1666,7 +1691,7 @@
   protected void removeDocument(IOutputConnection connection, String documentURI, String outputDescription, IOutputRemoveActivity activities)
     throws ManifoldCFException, ServiceInterruption
   {
-    IOutputConnector connector = OutputConnectorFactory.grab(threadContext,connection.getClassName(),connection.getConfigParams(),connection.getMaxConnections());
+    IOutputConnector connector = outputConnectorPool.grab(connection);
     if (connector == null)
       // The connector is not installed; treat this as a service interruption.
       throw new ServiceInterruption("Output connector not installed",0L);
@@ -1676,7 +1701,7 @@
     }
     finally
     {
-      OutputConnectorFactory.release(connector);
+      outputConnectorPool.release(connection,connector);
     }
   }
 
diff --git a/framework/agents/src/main/java/org/apache/manifoldcf/agents/interfaces/AgentFactory.java b/framework/agents/src/main/java/org/apache/manifoldcf/agents/interfaces/AgentFactory.java
index f6cb09c..b34454b 100644
--- a/framework/agents/src/main/java/org/apache/manifoldcf/agents/interfaces/AgentFactory.java
+++ b/framework/agents/src/main/java/org/apache/manifoldcf/agents/interfaces/AgentFactory.java
@@ -27,8 +27,6 @@
 {
   public static final String _rcsid = "@(#)$Id: AgentFactory.java 988245 2010-08-23 18:39:35Z kwright $";
 
-  protected static final String agentIdentifier = "_Agent_";
-
   private AgentFactory()
   {
   }
@@ -38,73 +36,65 @@
   *@param className is the agent class name.
   *@return the agent.
   */
-  public static IAgent make(IThreadContext tc, String className)
+  public static IAgent make(String className)
     throws ManifoldCFException
   {
-    String agentName = agentIdentifier+className;
-    Object o = tc.get(agentName);
-    if (o == null || !(o instanceof IAgent))
+    try
     {
-      try
-      {
-        Class theClass = Class.forName(className);
-        Class[] argumentClasses = new Class[1];
-        argumentClasses[0] = IThreadContext.class;
-        // Look for a constructor
-        Constructor c = theClass.getConstructor(argumentClasses);
-        Object[] arguments = new Object[1];
-        arguments[0] = tc;
-        o = c.newInstance(arguments);
-        if (!(o instanceof IAgent))
-          throw new ManifoldCFException("Class '"+className+"' does not implement IAgent.");
-        tc.save(agentName,o);
-      }
-      catch (InvocationTargetException e)
-      {
-        Throwable z = e.getTargetException();
-        if (z instanceof Error)
-          throw (Error)z;
-        else
-          throw (ManifoldCFException)z;
-      }
-      catch (ClassNotFoundException e)
-      {
-        throw new ManifoldCFException("No class implementing IAgent called '"+
-          className+"'.",
-          e);
-      }
-      catch (NoSuchMethodException e)
-      {
-        throw new ManifoldCFException("No appropriate constructor for IAgent implementation '"+
-          className+"'.  Need xxx(ConfigParams).",
-          e);
-      }
-      catch (SecurityException e)
-      {
-        throw new ManifoldCFException("Protected constructor for IAgent implementation '"+className+"'",
-          e);
-      }
-      catch (IllegalAccessException e)
-      {
-        throw new ManifoldCFException("Unavailable constructor for IAgent implementation '"+className+"'",
-          e);
-      }
-      catch (IllegalArgumentException e)
-      {
-        throw new ManifoldCFException("Shouldn't happen!!!",e);
-      }
-      catch (InstantiationException e)
-      {
-        throw new ManifoldCFException("InstantiationException for IAgent implementation '"+className+"'",
-          e);
-      }
-      catch (ExceptionInInitializerError e)
-      {
-        throw new ManifoldCFException("ExceptionInInitializerError for IAgent implementation '"+className+"'",
-          e);
-      }
+      Class theClass = Class.forName(className);
+      Class[] argumentClasses = new Class[0];
+      // Look for a constructor
+      Constructor c = theClass.getConstructor(argumentClasses);
+      Object[] arguments = new Object[0];
+      Object o = c.newInstance(arguments);
+      if (!(o instanceof IAgent))
+        throw new ManifoldCFException("Class '"+className+"' does not implement IAgent.");
+      return (IAgent)o;
     }
-    return (IAgent)o;
+    catch (InvocationTargetException e)
+    {
+      Throwable z = e.getTargetException();
+      if (z instanceof Error)
+        throw (Error)z;
+      else
+        throw (ManifoldCFException)z;
+    }
+    catch (ClassNotFoundException e)
+    {
+      throw new ManifoldCFException("No class implementing IAgent called '"+
+        className+"'.",
+        e);
+    }
+    catch (NoSuchMethodException e)
+    {
+      throw new ManifoldCFException("No appropriate constructor for IAgent implementation '"+
+        className+"'.  Need xxx().",
+        e);
+    }
+    catch (SecurityException e)
+    {
+      throw new ManifoldCFException("Protected constructor for IAgent implementation '"+className+"'",
+        e);
+    }
+    catch (IllegalAccessException e)
+    {
+      throw new ManifoldCFException("Unavailable constructor for IAgent implementation '"+className+"'",
+        e);
+    }
+    catch (IllegalArgumentException e)
+    {
+      throw new ManifoldCFException("Shouldn't happen!!!",e);
+    }
+    catch (InstantiationException e)
+    {
+      throw new ManifoldCFException("InstantiationException for IAgent implementation '"+className+"'",
+        e);
+    }
+    catch (ExceptionInInitializerError e)
+    {
+      throw new ManifoldCFException("ExceptionInInitializerError for IAgent implementation '"+className+"'",
+        e);
+    }
   }
 
 }
diff --git a/framework/agents/src/main/java/org/apache/manifoldcf/agents/interfaces/AgentManagerFactory.java b/framework/agents/src/main/java/org/apache/manifoldcf/agents/interfaces/AgentManagerFactory.java
index 668facc..b257032 100644
--- a/framework/agents/src/main/java/org/apache/manifoldcf/agents/interfaces/AgentManagerFactory.java
+++ b/framework/agents/src/main/java/org/apache/manifoldcf/agents/interfaces/AgentManagerFactory.java
@@ -67,7 +67,7 @@
     int i = 0;
     while (i < theAgents.length)
     {
-      if (theAgents[i++].isOutputConnectionInUse(connName))
+      if (theAgents[i++].isOutputConnectionInUse(threadContext, connName))
         return true;
     }
     return false;
@@ -86,7 +86,7 @@
     int i = 0;
     while (i < theAgents.length)
     {
-      theAgents[i++].noteOutputConnectorDeregistration(connectionNames);
+      theAgents[i++].noteOutputConnectorDeregistration(threadContext, connectionNames);
     }
   }
 
@@ -104,7 +104,7 @@
     int i = 0;
     while (i < theAgents.length)
     {
-      theAgents[i++].noteOutputConnectorRegistration(connectionNames);
+      theAgents[i++].noteOutputConnectorRegistration(threadContext, connectionNames);
     }
   }
 
@@ -121,7 +121,7 @@
     int i = 0;
     while (i < theAgents.length)
     {
-      theAgents[i++].noteOutputConnectionChange(connectionName);
+      theAgents[i++].noteOutputConnectionChange(threadContext, connectionName);
     }
   }
 
@@ -138,7 +138,7 @@
     int i = 0;
     while (i < rval.length)
     {
-      rval[i] = AgentFactory.make(threadContext,allAgents[i]);
+      rval[i] = AgentFactory.make(allAgents[i]);
       i++;
     }
     return rval;
diff --git a/framework/agents/src/main/java/org/apache/manifoldcf/agents/interfaces/IAgent.java b/framework/agents/src/main/java/org/apache/manifoldcf/agents/interfaces/IAgent.java
index 8b67799..90e5956 100644
--- a/framework/agents/src/main/java/org/apache/manifoldcf/agents/interfaces/IAgent.java
+++ b/framework/agents/src/main/java/org/apache/manifoldcf/agents/interfaces/IAgent.java
@@ -24,57 +24,103 @@
 * start-up time; they run independently until the JVM is shut down.
 * All agent classes are expected to support the following constructor:
 *
-* xxx(IThreadContext tc) throws ManifoldCFException
+* xxx() throws ManifoldCFException
 *
+* Agent classes are furthermore expected to be usable across threads, but not necessarily thread-safe,
+* in that a given IAgent instance is meant to be used by only one thread at a time.  It is
+* also safe to keep stateful data pertaining to the running state of the system in the
+* IAgent instance object.  That is, the instance of IAgent used to start the agent will be
+* the same one that stopAgent() is called on.
 */
 public interface IAgent
 {
   public static final String _rcsid = "@(#)$Id: IAgent.java 988245 2010-08-23 18:39:35Z kwright $";
 
+  /** Initialize agent environment.
+  * This is called before any of the other operations are called, and is meant to ensure that
+  * the environment is properly initialized.
+  */
+  public void initialize(IThreadContext threadContext)
+    throws ManifoldCFException;
+  
+  /** Tear down agent environment.
+  * This is called after all the other operations are completed, and is meant to allow
+  * environment resources to be freed.
+  */
+  public void cleanUp(IThreadContext threadContext)
+    throws ManifoldCFException;
+  
   /** Install agent.  This usually installs the agent's database tables etc.
   */
-  public void install()
+  public void install(IThreadContext threadContext)
     throws ManifoldCFException;
 
   /** Uninstall agent.  This must clean up everything the agent is responsible for.
   */
-  public void deinstall()
+  public void deinstall(IThreadContext threadContext)
+    throws ManifoldCFException;
+
+  /** Called ONLY when no other active services of this kind are running.  Meant to be
+  * used after the cluster has been down for an indeterminate period of time.
+  */
+  public void clusterInit(IThreadContext threadContext)
+    throws ManifoldCFException;
+    
+  /** Cleanup after ALL agents processes.
+  * Call this method to clean up dangling persistent state when a cluster is just starting
+  * to come up.  This method CANNOT be called when there are any active agents
+  * processes at all.
+  *@param currentProcessID is the current process ID.
+  */
+  public void cleanUpAllAgentData(IThreadContext threadContext, String currentProcessID)
+    throws ManifoldCFException;
+  
+  /** Cleanup after agents process.
+  * Call this method to clean up dangling persistent state after an agent has been stopped.
+  * This method CANNOT be called when the agent is active, but it can
+  * be called at any time and by any process in order to guarantee that a terminated
+  * agent does not block other agents from completing their tasks.
+  *@param currentProcessID is the current process ID.
+  *@param cleanupProcessID is the process ID of the agent to clean up after.
+  */
+  public void cleanUpAgentData(IThreadContext threadContext, String currentProcessID, String cleanupProcessID)
     throws ManifoldCFException;
 
   /** Start the agent.  This method should spin up the agent threads, and
   * then return.
+  *@param processID is the process ID to start up an agent for.
   */
-  public void startAgent()
+  public void startAgent(IThreadContext threadContext, String processID)
     throws ManifoldCFException;
 
   /** Stop the agent.  This should shut down the agent threads.
   */
-  public void stopAgent()
+  public void stopAgent(IThreadContext threadContext)
     throws ManifoldCFException;
 
   /** Request permission from agent to delete an output connection.
   *@param connName is the name of the output connection.
   *@return true if the connection is in use, false otherwise.
   */
-  public boolean isOutputConnectionInUse(String connName)
+  public boolean isOutputConnectionInUse(IThreadContext threadContext, String connName)
     throws ManifoldCFException;
 
   /** Note the deregistration of a set of output connections.
   *@param connectionNames are the names of the connections being deregistered.
   */
-  public void noteOutputConnectorDeregistration(String[] connectionNames)
+  public void noteOutputConnectorDeregistration(IThreadContext threadContext, String[] connectionNames)
     throws ManifoldCFException;
 
   /** Note the registration of a set of output connections.
   *@param connectionNames are the names of the connections being registered.
   */
-  public void noteOutputConnectorRegistration(String[] connectionNames)
+  public void noteOutputConnectorRegistration(IThreadContext threadContext, String[] connectionNames)
     throws ManifoldCFException;
 
   /** Note a change in configuration for an output connection.
   *@param connectionName is the name of the connection being changed.
   */
-  public void noteOutputConnectionChange(String connectionName)
+  public void noteOutputConnectionChange(IThreadContext threadContext, String connectionName)
     throws ManifoldCFException;
   
 }
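
For orientation, a minimal and hypothetical sketch of the revised IAgent lifecycle, in which the
thread context is passed per call instead of being held by the instance.  Only the IAgent methods
declared above are taken as given; the agent class name and process ID are illustrative.

    import org.apache.manifoldcf.core.interfaces.*;
    import org.apache.manifoldcf.agents.interfaces.*;

    public class AgentLifecycleSketch
    {
      public static void main(String[] args)
        throws ManifoldCFException
      {
        IThreadContext tc = ThreadContextFactory.make();
        // The class name is illustrative; real callers obtain it from IAgentManager.getAllAgents().
        IAgent agent = AgentFactory.make("org.example.MyAgent");
        agent.initialize(tc);
        try
        {
          agent.startAgent(tc, "A");   // "A" is an example process ID
          // ... agent threads run here until shutdown is requested ...
          agent.stopAgent(tc);
        }
        finally
        {
          agent.cleanUp(tc);
        }
      }
    }
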
diff --git a/framework/agents/src/main/java/org/apache/manifoldcf/agents/interfaces/IIncrementalIngester.java b/framework/agents/src/main/java/org/apache/manifoldcf/agents/interfaces/IIncrementalIngester.java
index b883b19..d101bc1 100644
--- a/framework/agents/src/main/java/org/apache/manifoldcf/agents/interfaces/IIncrementalIngester.java
+++ b/framework/agents/src/main/java/org/apache/manifoldcf/agents/interfaces/IIncrementalIngester.java
@@ -297,4 +297,11 @@
   public void resetOutputConnection(String outputConnectionName)
     throws ManifoldCFException;
     
+  /** Remove all knowledge of an output index from the system.  This is appropriate
+  * when the output index no longer exists and you wish to delete the associated job.
+  *@param outputConnectionName is the name of the output connection associated with this action.
+  */
+  public void removeOutputConnection(String outputConnectionName)
+    throws ManifoldCFException;
+
 }
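
A one-method sketch of the intended use of the new removeOutputConnection() call, assuming an
IIncrementalIngester handle has already been obtained; only the method declared above is relied upon.

    import org.apache.manifoldcf.core.interfaces.ManifoldCFException;
    import org.apache.manifoldcf.agents.interfaces.IIncrementalIngester;

    public class RemoveOutputConnectionSketch
    {
      /** Forget all ingestion status for an index that is gone for good, so the associated job can be deleted. */
      public static void forgetIndex(IIncrementalIngester ingester, String outputConnectionName)
        throws ManifoldCFException
      {
        ingester.removeOutputConnection(outputConnectionName);
      }
    }
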
diff --git a/framework/agents/src/main/java/org/apache/manifoldcf/agents/interfaces/IOutputConnectorPool.java b/framework/agents/src/main/java/org/apache/manifoldcf/agents/interfaces/IOutputConnectorPool.java
new file mode 100644
index 0000000..f68dc1b
--- /dev/null
+++ b/framework/agents/src/main/java/org/apache/manifoldcf/agents/interfaces/IOutputConnectorPool.java
@@ -0,0 +1,81 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.agents.interfaces;
+
+import org.apache.manifoldcf.core.interfaces.*;
+
+import java.util.*;
+import java.io.*;
+
+/** An object implementing this interface functions as a pool of output connectors.
+* Coordination and allocation among cluster members are managed within.
+* These objects are thread-local, so do not share them among threads.
+*/
+public interface IOutputConnectorPool
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  /** Get multiple output connectors, all at once.  Do this in a particular order
+  * so that any connector exhaustion will not cause a deadlock.
+  *@param orderingKeys are the keys which determine in what order the connectors are obtained.
+  *@param outputConnections are the connections to use to build the connector instances.
+  */
+  public IOutputConnector[] grabMultiple(String[] orderingKeys, IOutputConnection[] outputConnections)
+    throws ManifoldCFException;
+
+  /** Get an output connector.
+  * The connector is specified by an output connection object.
+  *@param outputConnection is the output connection to base the connector instance on.
+  */
+  public IOutputConnector grab(IOutputConnection outputConnection)
+    throws ManifoldCFException;
+
+  /** Release multiple output connectors.
+  *@param connections are the connections describing the instances to release.
+  *@param connectors are the connector instances to release.
+  */
+  public void releaseMultiple(IOutputConnection[] connections, IOutputConnector[] connectors)
+    throws ManifoldCFException;
+
+  /** Release an output connector.
+  *@param connection is the connection describing the instance to release.
+  *@param connector is the connector to release.
+  */
+  public void release(IOutputConnection connection, IOutputConnector connector)
+    throws ManifoldCFException;
+
+  /** Idle notification for inactive output connector handles.
+  * This method polls all inactive handles.
+  */
+  public void pollAllConnectors()
+    throws ManifoldCFException;
+
+  /** Flush only those connector handles that are currently unused.
+  */
+  public void flushUnusedConnectors()
+    throws ManifoldCFException;
+
+  /** Clean up all open output connector handles.
+  * This method is called when the connector pool needs to be flushed,
+  * to free resources.
+  */
+  public void closeAllConnectors()
+    throws ManifoldCFException;
+
+}
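
A minimal sketch of how the new pool interface might be used, assuming the output connection object
has already been loaded; only the IOutputConnectorPool methods declared above and the
OutputConnectorPoolFactory.make() call introduced later in this change are used.

    import org.apache.manifoldcf.core.interfaces.*;
    import org.apache.manifoldcf.agents.interfaces.*;

    public class OutputConnectorPoolSketch
    {
      /** Grab a pooled connector for an already-loaded output connection, and always release it. */
      public static void withConnector(IThreadContext tc, IOutputConnection connection)
        throws ManifoldCFException
      {
        IOutputConnectorPool pool = OutputConnectorPoolFactory.make(tc);
        IOutputConnector connector = pool.grab(connection);
        try
        {
          // ... use the connector here; it may be null if the connector class is not installed ...
        }
        finally
        {
          pool.release(connection, connector);
        }
      }
    }
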
diff --git a/framework/agents/src/main/java/org/apache/manifoldcf/agents/interfaces/OutputConnectorFactory.java b/framework/agents/src/main/java/org/apache/manifoldcf/agents/interfaces/OutputConnectorFactory.java
index ecd54a8..a20386d 100644
--- a/framework/agents/src/main/java/org/apache/manifoldcf/agents/interfaces/OutputConnectorFactory.java
+++ b/framework/agents/src/main/java/org/apache/manifoldcf/agents/interfaces/OutputConnectorFactory.java
@@ -27,28 +27,47 @@
 
 /** This is the factory class for IOutputConnector objects.
 */
-public class OutputConnectorFactory
+public class OutputConnectorFactory extends ConnectorFactory<IOutputConnector>
 {
   public static final String _rcsid = "@(#)$Id: OutputConnectorFactory.java 988245 2010-08-23 18:39:35Z kwright $";
 
-  // Pool hash table.
-  // Keyed by PoolKey; value is Pool
-  protected static Map poolHash = new HashMap();
+  // Static factory
+  protected final static OutputConnectorFactory thisFactory = new OutputConnectorFactory();
 
-  // private static HashMap checkedOutConnectors = new HashMap();
-
-  private OutputConnectorFactory()
+  protected OutputConnectorFactory()
   {
   }
 
+  @Override
+  protected boolean isInstalled(IThreadContext tc, String className)
+    throws ManifoldCFException
+  {
+    IOutputConnectorManager connMgr = OutputConnectorManagerFactory.make(tc);
+    return connMgr.isInstalled(className);
+  }
+  
+  /** Get the activities supported by this connector.
+  *@param className is the class name.
+  *@return the list of activities.
+  */
+  public String[] getThisActivitiesList(IThreadContext threadContext, String className)
+    throws ManifoldCFException
+  {
+    IOutputConnector connector = getThisConnector(threadContext, className);
+    if (connector == null)
+      return null;
+    String[] values = connector.getActivitiesList();
+    java.util.Arrays.sort(values);
+    return values;
+  }
+
   /** Install connector.
   *@param className is the class name.
   */
   public static void install(IThreadContext threadContext, String className)
     throws ManifoldCFException
   {
-    IOutputConnector connector = getConnectorNoCheck(className);
-    connector.install(threadContext);
+    thisFactory.installThis(threadContext,className);
   }
 
   /** Uninstall connector.
@@ -57,8 +76,7 @@
   public static void deinstall(IThreadContext threadContext, String className)
     throws ManifoldCFException
   {
-    IOutputConnector connector = getConnectorNoCheck(className);
-    connector.deinstall(threadContext);
+    thisFactory.deinstallThis(threadContext,className);
   }
 
   /** Get the activities supported by this connector.
@@ -68,12 +86,7 @@
   public static String[] getActivitiesList(IThreadContext threadContext, String className)
     throws ManifoldCFException
   {
-    IOutputConnector connector = getConnector(threadContext, className);
-    if (connector == null)
-      return null;
-    String[] values = connector.getActivitiesList();
-    java.util.Arrays.sort(values);
-    return values;
+    return thisFactory.getThisActivitiesList(threadContext,className);
   }
 
   /** Output the configuration header section.
@@ -82,10 +95,7 @@
     IHTTPOutput out, Locale locale, ConfigParams parameters, ArrayList tabsArray)
     throws ManifoldCFException, IOException
   {
-    IOutputConnector connector = getConnector(threadContext, className);
-    if (connector == null)
-      return;
-    connector.outputConfigurationHeader(threadContext,out,locale,parameters,tabsArray);
+    thisFactory.outputThisConfigurationHeader(threadContext,className,out,locale,parameters,tabsArray);
   }
 
   /** Output the configuration body section.
@@ -94,10 +104,7 @@
     IHTTPOutput out, Locale locale, ConfigParams parameters, String tabName)
     throws ManifoldCFException, IOException
   {
-    IOutputConnector connector = getConnector(threadContext, className);
-    if (connector == null)
-      return;
-    connector.outputConfigurationBody(threadContext,out,locale,parameters,tabName);
+    thisFactory.outputThisConfigurationBody(threadContext,className,out,locale,parameters,tabName);
   }
 
   /** Process configuration post data for a connector.
@@ -106,10 +113,7 @@
     IPostParameters variableContext, Locale locale, ConfigParams configParams)
     throws ManifoldCFException
   {
-    IOutputConnector connector = getConnector(threadContext, className);
-    if (connector == null)
-      return null;
-    return connector.processConfigurationPost(threadContext,variableContext,locale,configParams);
+    return thisFactory.processThisConfigurationPost(threadContext,className,variableContext,locale,configParams);
   }
   
   /** View connector configuration.
@@ -118,11 +122,7 @@
     IHTTPOutput out, Locale locale, ConfigParams configParams)
     throws ManifoldCFException, IOException
   {
-    IOutputConnector connector = getConnector(threadContext, className);
-    // We want to be able to view connections even if they have unregistered connectors.
-    if (connector == null)
-      return;
-    connector.viewConfiguration(threadContext,out,locale,configParams);
+    thisFactory.viewThisConfiguration(threadContext,className,out,locale,configParams);
   }
 
   /** Get an output connector instance, without checking for installed connector.
@@ -132,567 +132,7 @@
   public static IOutputConnector getConnectorNoCheck(String className)
     throws ManifoldCFException
   {
-    try
-    {
-      Class theClass = ManifoldCF.findClass(className);
-      Class[] argumentClasses = new Class[0];
-      // Look for a constructor
-      Constructor c = theClass.getConstructor(argumentClasses);
-      Object[] arguments = new Object[0];
-      Object o = c.newInstance(arguments);
-      if (!(o instanceof IOutputConnector))
-        throw new ManifoldCFException("Class '"+className+"' does not implement IOutputConnector.");
-      return (IOutputConnector)o;
-    }
-    catch (InvocationTargetException e)
-    {
-      Throwable z = e.getTargetException();
-      if (z instanceof Error)
-        throw (Error)z;
-      else if (z instanceof RuntimeException)
-        throw (RuntimeException)z;
-      else
-        throw (ManifoldCFException)z;
-    }
-    catch (ClassNotFoundException e)
-    {
-      throw new ManifoldCFException("No output connector class '"+className+"' was found.",
-        e);
-    }
-    catch (NoSuchMethodException e)
-    {
-      throw new ManifoldCFException("No appropriate constructor for IOutputConnector implementation '"+
-        className+"'.  Need xxx(ConfigParams).",
-        e);
-    }
-    catch (SecurityException e)
-    {
-      throw new ManifoldCFException("Protected constructor for IOutputConnector implementation '"+className+"'",
-        e);
-    }
-    catch (IllegalAccessException e)
-    {
-      throw new ManifoldCFException("Unavailable constructor for IOutputConnector implementation '"+className+"'",
-        e);
-    }
-    catch (IllegalArgumentException e)
-    {
-      throw new ManifoldCFException("Shouldn't happen!!!",e);
-    }
-    catch (InstantiationException e)
-    {
-      throw new ManifoldCFException("InstantiationException for IOutputConnector implementation '"+className+"'",
-        e);
-    }
-    catch (ExceptionInInitializerError e)
-    {
-      throw new ManifoldCFException("ExceptionInInitializerError for IOutputConnector implementation '"+className+"'",
-        e);
-    }
-
-  }
-
-  /** Get an output connector instance.
-  *@param className is the class name.
-  *@return the instance.
-  */
-  protected static IOutputConnector getConnector(IThreadContext threadContext, String className)
-    throws ManifoldCFException
-  {
-    IOutputConnectorManager connMgr = OutputConnectorManagerFactory.make(threadContext);
-    if (connMgr.isInstalled(className) == false)
-      return null;
-
-    try
-    {
-      Class theClass = ManifoldCF.findClass(className);
-      Class[] argumentClasses = new Class[0];
-      // Look for a constructor
-      Constructor c = theClass.getConstructor(argumentClasses);
-      Object[] arguments = new Object[0];
-      Object o = c.newInstance(arguments);
-      if (!(o instanceof IOutputConnector))
-        throw new ManifoldCFException("Class '"+className+"' does not implement IOutputConnector.");
-      return (IOutputConnector)o;
-    }
-    catch (InvocationTargetException e)
-    {
-      Throwable z = e.getTargetException();
-      if (z instanceof Error)
-        throw (Error)z;
-      else if (z instanceof RuntimeException)
-        throw (RuntimeException)z;
-      else
-        throw (ManifoldCFException)z;
-    }
-    catch (ClassNotFoundException e)
-    {
-      // This MAY mean that an existing connector has been uninstalled; check out this possibility!
-      // We return null because that is the signal that we cannot get a connector instance for that reason.
-      if (connMgr.isInstalled(className) == false)
-        return null;
-
-      throw new ManifoldCFException("No output connector class '"+className+"' was found.",
-        e);
-    }
-    catch (NoSuchMethodException e)
-    {
-      throw new ManifoldCFException("No appropriate constructor for IOutputConnector implementation '"+
-        className+"'.  Need xxx(ConfigParams).",
-        e);
-    }
-    catch (SecurityException e)
-    {
-      throw new ManifoldCFException("Protected constructor for IOutputConnector implementation '"+className+"'",
-        e);
-    }
-    catch (IllegalAccessException e)
-    {
-      throw new ManifoldCFException("Unavailable constructor for IOutputConnector implementation '"+className+"'",
-        e);
-    }
-    catch (IllegalArgumentException e)
-    {
-      throw new ManifoldCFException("Shouldn't happen!!!",e);
-    }
-    catch (InstantiationException e)
-    {
-      throw new ManifoldCFException("InstantiationException for IOutputConnector implementation '"+className+"'",
-        e);
-    }
-    catch (ExceptionInInitializerError e)
-    {
-      throw new ManifoldCFException("ExceptionInInitializerError for IOutputConnector implementation '"+className+"'",
-        e);
-    }
-
-  }
-
-  /** Get multiple output connectors, all at once.  Do this in a particular order
-  * so that any connector exhaustion will not cause a deadlock.
-  */
-  public static IOutputConnector[] grabMultiple(IThreadContext threadContext,
-    String[] orderingKeys, String[] classNames, ConfigParams[] configInfos, int[] maxPoolSizes)
-    throws ManifoldCFException
-  {
-    IOutputConnector[] rval = new IOutputConnector[classNames.length];
-    HashMap orderMap = new HashMap();
-    int i = 0;
-    while (i < orderingKeys.length)
-    {
-      if (orderMap.get(orderingKeys[i]) != null)
-        throw new ManifoldCFException("Found duplicate order key");
-      orderMap.put(orderingKeys[i],new Integer(i));
-      i++;
-    }
-    java.util.Arrays.sort(orderingKeys);
-    i = 0;
-    while (i < orderingKeys.length)
-    {
-      String orderingKey = orderingKeys[i];
-      int index = ((Integer)orderMap.get(orderingKey)).intValue();
-      String className = classNames[index];
-      ConfigParams cp = configInfos[index];
-      int maxPoolSize = maxPoolSizes[index];
-      try
-      {
-        IOutputConnector connector = grab(threadContext,className,cp,maxPoolSize);
-        rval[index] = connector;
-      }
-      catch (Throwable e)
-      {
-        while (i > 0)
-        {
-          i--;
-          orderingKey = orderingKeys[i];
-          index = ((Integer)orderMap.get(orderingKey)).intValue();
-          try
-          {
-            release(rval[index]);
-          }
-          catch (ManifoldCFException e2)
-          {
-          }
-        }
-        if (e instanceof ManifoldCFException)
-          throw (ManifoldCFException)e;
-        else if (e instanceof RuntimeException)
-          throw (RuntimeException)e;
-        throw (Error)e;
-      }
-      i++;
-    }
-    return rval;
-  }
-
-  /** Get an output connector.
-  * The connector is specified by its class and its parameters.
-  *@param threadContext is the current thread context.
-  *@param className is the name of the class to get a connector for.
-  *@param configInfo are the name/value pairs constituting configuration info
-  * for this class.
-  */
-  public static IOutputConnector grab(IThreadContext threadContext,
-    String className, ConfigParams configInfo, int maxPoolSize)
-    throws ManifoldCFException
-  {
-    // We want to get handles off the pool and use them.  But the
-    // handles we fetch have to have the right config information.
-
-    // Use the classname and config info to build a pool key.  This
-    // key will be discarded if we actually have to save a key persistently,
-    // since we avoid copying the configInfo unnecessarily.
-    PoolKey pk = new PoolKey(className,configInfo);
-    Pool p;
-    synchronized (poolHash)
-    {
-      p = (Pool)poolHash.get(pk);
-      if (p == null)
-      {
-        pk = new PoolKey(className,configInfo.duplicate());
-        p = new Pool(pk,maxPoolSize);
-        poolHash.put(pk,p);
-      }
-    }
-
-    IOutputConnector rval = p.getConnector(threadContext);
-
-    return rval;
-
-  }
-
-  /** Release multiple output connectors.
-  */
-  public static void releaseMultiple(IOutputConnector[] connectors)
-    throws ManifoldCFException
-  {
-    int i = 0;
-    ManifoldCFException currentException = null;
-    while (i < connectors.length)
-    {
-      IOutputConnector c = connectors[i++];
-      try
-      {
-        release(c);
-      }
-      catch (ManifoldCFException e)
-      {
-        if (currentException == null)
-          currentException = e;
-      }
-    }
-    if (currentException != null)
-      throw currentException;
-  }
-
-  /** Release an output connector.
-  *@param connector is the connector to release.
-  */
-  public static void release(IOutputConnector connector)
-    throws ManifoldCFException
-  {
-    // If the connector is null, skip the release, because we never really got the connector in the first place.
-    if (connector == null)
-      return;
-
-    // Figure out which pool this goes on, and put it there
-    PoolKey pk = new PoolKey(connector.getClass().getName(),connector.getConfiguration());
-    Pool p;
-    synchronized (poolHash)
-    {
-      p = (Pool)poolHash.get(pk);
-    }
-
-    p.releaseConnector(connector);
-
-    // synchronized (checkedOutConnectors)
-    // {
-    //      checkedOutConnectors.remove(connector.toString());
-    // }
-
-  }
-
-  /** Idle notification for inactive output connector handles.
-  * This method polls all inactive handles.
-  */
-  public static void pollAllConnectors(IThreadContext threadContext)
-    throws ManifoldCFException
-  {
-    // System.out.println("Pool stats:");
-
-    // Go through the whole pool and notify everyone
-    synchronized (poolHash)
-    {
-      Iterator iter = poolHash.values().iterator();
-      while (iter.hasNext())
-      {
-        Pool p = (Pool)iter.next();
-        p.pollAll(threadContext);
-      }
-    }
-
-    // System.out.println("About to check if any output connector instances have been abandoned...");
-    // checkConnectors(System.currentTimeMillis());
-  }
-
-  /** Clean up all open output connector handles.
-  * This method is called when the connector pool needs to be flushed,
-  * to free resources.
-  *@param threadContext is the local thread context.
-  */
-  public static void closeAllConnectors(IThreadContext threadContext)
-    throws ManifoldCFException
-  {
-    // Go through the whole pool and clean it out
-    synchronized (poolHash)
-    {
-      Iterator iter = poolHash.values().iterator();
-      while (iter.hasNext())
-      {
-        Pool p = (Pool)iter.next();
-        p.releaseAll(threadContext);
-      }
-    }
-  }
-
-  /** This is an immutable pool key class, which describes a pool in terms of two independent keys.
-  */
-  public static class PoolKey
-  {
-    protected String className;
-    protected ConfigParams configInfo;
-
-    /** Constructor.
-    */
-    public PoolKey(String className, Map configInfo)
-    {
-      this.className = className;
-      this.configInfo = new ConfigParams(configInfo);
-    }
-
-    public PoolKey(String className, ConfigParams configInfo)
-    {
-      this.className = className;
-      this.configInfo = configInfo;
-    }
-
-    /** Get the class name.
-    *@return the class name.
-    */
-    public String getClassName()
-    {
-      return className;
-    }
-
-    /** Get the config info.
-    *@return the params
-    */
-    public ConfigParams getParams()
-    {
-      return configInfo;
-    }
-
-    /** Hash code.
-    */
-    public int hashCode()
-    {
-      return className.hashCode() + configInfo.hashCode();
-    }
-
-    /** Equals operator.
-    */
-    public boolean equals(Object o)
-    {
-      if (!(o instanceof PoolKey))
-        return false;
-
-      PoolKey pk = (PoolKey)o;
-      return pk.className.equals(className) && pk.configInfo.equals(configInfo);
-    }
-
-  }
-
-  /** This class represents a value in the pool hash, which corresponds to a given key.
-  */
-  public static class Pool
-  {
-    protected ArrayList stack = new ArrayList();
-    protected PoolKey key;
-    protected int numFree;
-
-    /** Constructor
-    */
-    public Pool(PoolKey pk, int maxCount)
-    {
-      key = pk;
-      numFree = maxCount;
-    }
-
-    /** Grab an output connector.
-    * If none exists, construct it using the information in the pool key.
-    *@return the connector, or null if no connector could be connected.
-    */
-    public synchronized IOutputConnector getConnector(IThreadContext threadContext)
-      throws ManifoldCFException
-    {
-      while (numFree == 0)
-      {
-        try
-        {
-          wait();
-        }
-        catch (InterruptedException e)
-        {
-          throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
-        }
-      }
-
-      if (stack.size() == 0)
-      {
-        String className = key.getClassName();
-        ConfigParams configParams = key.getParams();
-
-        IOutputConnectorManager connMgr = OutputConnectorManagerFactory.make(threadContext);
-        if (connMgr.isInstalled(className) == false)
-          return null;
-
-        try
-        {
-          Class theClass = ManifoldCF.findClass(className);
-          Class[] argumentClasses = new Class[0];
-          // Look for a constructor
-          Constructor c = theClass.getConstructor(argumentClasses);
-          Object[] arguments = new Object[0];
-          Object o = c.newInstance(arguments);
-          if (!(o instanceof IOutputConnector))
-            throw new ManifoldCFException("Class '"+className+"' does not implement IOutputConnector.");
-          IOutputConnector newrc = (IOutputConnector)o;
-          newrc.connect(configParams);
-          stack.add(newrc);
-        }
-        catch (InvocationTargetException e)
-        {
-          Throwable z = e.getTargetException();
-          if (z instanceof Error)
-            throw (Error)z;
-          else if (z instanceof RuntimeException)
-            throw (RuntimeException)z;
-          else
-            throw (ManifoldCFException)z;
-        }
-        catch (ClassNotFoundException e)
-        {
-          // If we see this exception, it COULD mean that the connector was uninstalled, and we happened to get here
-          // after that occurred.
-          // We return null because that is the signal that we cannot get a connector instance for that reason.
-          if (connMgr.isInstalled(className) == false)
-            return null;
-
-          throw new ManifoldCFException("No output connector class '"+className+"' was found.",
-            e);
-        }
-        catch (NoSuchMethodException e)
-        {
-          throw new ManifoldCFException("No appropriate constructor for IOutputConnector implementation '"+
-            className+"'.  Need xxx(ConfigParams).",
-            e);
-        }
-        catch (SecurityException e)
-        {
-          throw new ManifoldCFException("Protected constructor for IOutputConnector implementation '"+className+"'",
-            e);
-        }
-        catch (IllegalAccessException e)
-        {
-          throw new ManifoldCFException("Unavailable constructor for IOutputConnector implementation '"+className+"'",
-            e);
-        }
-        catch (IllegalArgumentException e)
-        {
-          throw new ManifoldCFException("Shouldn't happen!!!",e);
-        }
-        catch (InstantiationException e)
-        {
-          throw new ManifoldCFException("InstantiationException for IOutputConnector implementation '"+className+"'",
-            e);
-        }
-        catch (ExceptionInInitializerError e)
-        {
-          throw new ManifoldCFException("ExceptionInInitializerError for IOutputConnector implementation '"+className+"'",
-            e);
-        }
-      }
-      
-      // Since thread context set can fail, do that before we remove it from the pool.
-      IOutputConnector rc = (IOutputConnector)stack.get(stack.size()-1);
-      rc.setThreadContext(threadContext);
-      stack.remove(stack.size()-1);
-      numFree--;
-      
-      return rc;
-    }
-
-    /** Release an output connector to the pool.
-    *@param connector is the connector.
-    */
-    public synchronized void releaseConnector(IOutputConnector connector)
-      throws ManifoldCFException
-    {
-      if (connector == null)
-        return;
-
-      // Make sure connector knows it's released
-      connector.clearThreadContext();
-      // Append
-      stack.add(connector);
-      numFree++;
-      notifyAll();
-    }
-
-    /** Notify all free connectors.
-    */
-    public synchronized void pollAll(IThreadContext threadContext)
-      throws ManifoldCFException
-    {
-      int i = 0;
-      while (i < stack.size())
-      {
-        IConnector rc = (IConnector)stack.get(i++);
-        // Notify
-        rc.setThreadContext(threadContext);
-        try
-        {
-          rc.poll();
-        }
-        finally
-        {
-          rc.clearThreadContext();
-        }
-      }
-    }
-
-    /** Release all free connectors.
-    */
-    public synchronized void releaseAll(IThreadContext threadContext)
-      throws ManifoldCFException
-    {
-      while (stack.size() > 0)
-      {
-        // Disconnect
-        IConnector rc = (IConnector)stack.get(stack.size()-1);
-        rc.setThreadContext(threadContext);
-        try
-        {
-          rc.disconnect();
-          stack.remove(stack.size()-1);
-        }
-        finally
-        {
-          rc.clearThreadContext();
-        }
-      }
-    }
-
+    return thisFactory.getThisConnectorNoCheck(className);
   }
 
 }
diff --git a/framework/agents/src/main/java/org/apache/manifoldcf/agents/interfaces/OutputConnectorPoolFactory.java b/framework/agents/src/main/java/org/apache/manifoldcf/agents/interfaces/OutputConnectorPoolFactory.java
new file mode 100644
index 0000000..7c26f2d
--- /dev/null
+++ b/framework/agents/src/main/java/org/apache/manifoldcf/agents/interfaces/OutputConnectorPoolFactory.java
@@ -0,0 +1,55 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.agents.interfaces;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.system.ManifoldCF;
+
+import java.util.*;
+
+/** Output connector pool manager factory.
+*/
+public class OutputConnectorPoolFactory
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  // name to use in thread context pool of objects
+  private final static String objectName = "_OutputConnectorPoolMgr_";
+
+  private OutputConnectorPoolFactory()
+  {
+  }
+
+  /** Make an output connector pool handle.
+  *@param tc is the thread context.
+  *@return the handle.
+  */
+  public static IOutputConnectorPool make(IThreadContext tc)
+    throws ManifoldCFException
+  {
+    Object o = tc.get(objectName);
+    if (o == null || !(o instanceof IOutputConnectorPool))
+    {
+      o = new org.apache.manifoldcf.agents.outputconnectorpool.OutputConnectorPool(tc);
+      tc.save(objectName,o);
+    }
+    return (IOutputConnectorPool)o;
+  }
+
+}
diff --git a/framework/agents/src/main/java/org/apache/manifoldcf/agents/interfaces/RepositoryDocument.java b/framework/agents/src/main/java/org/apache/manifoldcf/agents/interfaces/RepositoryDocument.java
index 112e6ab..49bc593 100644
--- a/framework/agents/src/main/java/org/apache/manifoldcf/agents/interfaces/RepositoryDocument.java
+++ b/framework/agents/src/main/java/org/apache/manifoldcf/agents/interfaces/RepositoryDocument.java
@@ -50,6 +50,7 @@
   protected String contentMimeType = "application/octet-stream";
   protected Date createdDate = null;
   protected Date modifiedDate = null;
+  protected Date indexingDate = null;
   
   /** Constructor.
   */
@@ -88,6 +89,22 @@
   {
     return modifiedDate;
   }
+
+  /** Set the document's indexing date.  Use null to indicate that the date is unknown.
+  *@param date is the date.
+  */
+  public void setIndexingDate(Date date)
+  {
+    indexingDate = date;
+  }
+  
+  /** Get the document's indexing date.  Returns null if the date is unknown.
+  *@return the date.
+  */
+  public Date getIndexingDate()
+  {
+    return indexingDate;
+  }
   
   /** Set the document's mime type.
   *@param mimeType is the mime type.
@@ -343,7 +360,7 @@
     throws ManifoldCFException
   {
     if (fieldData == null)
-      addField(fieldName,(String)null);
+      addField(fieldName,(String[])null);
     else
       addField(fieldName,new String[]{fieldData});
   }
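
A small sketch of the new indexing-date accessors on RepositoryDocument; the helper name is
hypothetical, and only setIndexingDate()/getIndexingDate() as declared above are assumed.

    import java.util.Date;
    import org.apache.manifoldcf.agents.interfaces.RepositoryDocument;

    public class IndexingDateSketch
    {
      /** Stamp a document with the moment it is handed off for indexing. */
      public static void stampIndexingTime(RepositoryDocument rd)
      {
        rd.setIndexingDate(new Date());     // null would mean "indexing date unknown"
        Date when = rd.getIndexingDate();   // readable later, e.g. by an output connector mapping it to an index field
      }
    }
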
diff --git a/framework/agents/src/main/java/org/apache/manifoldcf/agents/output/BaseOutputConnector.java b/framework/agents/src/main/java/org/apache/manifoldcf/agents/output/BaseOutputConnector.java
index 7bbed9b..2feabed 100644
--- a/framework/agents/src/main/java/org/apache/manifoldcf/agents/output/BaseOutputConnector.java
+++ b/framework/agents/src/main/java/org/apache/manifoldcf/agents/output/BaseOutputConnector.java
@@ -49,6 +49,7 @@
   /** Return the list of activities that this connector supports (i.e. writes into the log).
   *@return the list.
   */
+  @Override
   public String[] getActivitiesList()
   {
     return new String[0];
@@ -61,6 +62,7 @@
   *@param command is the command, which is taken directly from the API request.
   *@return true if the resource is found, false if not.  In either case, output may be filled in.
   */
+  @Override
   public boolean requestInfo(Configuration output, String command)
     throws ManifoldCFException
   {
@@ -72,6 +74,7 @@
   * is a good time to synchronize things.  It is called whenever a job is either completed or aborted.
   *@param activities is the handle to an object that the implementer of an output connector may use to perform operations, such as logging processing activity.
   */
+  @Override
   public void noteJobComplete(IOutputNotifyActivity activities)
     throws ManifoldCFException, ServiceInterruption
   {
@@ -84,6 +87,7 @@
   *@param mimeType is the mime type of the document.
   *@return true if the mime type is indexable by this connector.
   */
+  @Override
   public boolean checkMimeTypeIndexable(String outputDescription, String mimeType)
     throws ManifoldCFException, ServiceInterruption
   {
@@ -108,6 +112,7 @@
   *@param localFile is the local file to check.
   *@return true if the file is indexable.
   */
+  @Override
   public boolean checkDocumentIndexable(String outputDescription, File localFile)
     throws ManifoldCFException, ServiceInterruption
   {
@@ -132,6 +137,7 @@
   *@param length is the length of the document.
   *@return true if the file is indexable.
   */
+  @Override
   public boolean checkLengthIndexable(String outputDescription, long length)
     throws ManifoldCFException, ServiceInterruption
   {
@@ -144,6 +150,7 @@
   *@param url is the URL of the document.
   *@return true if the file is indexable.
   */
+  @Override
   public boolean checkURLIndexable(String outputDescription, String url)
     throws ManifoldCFException, ServiceInterruption
   {
@@ -161,6 +168,7 @@
   *@return a string, of unlimited length, which uniquely describes output configuration and specification in such a way that if two such strings are equal,
   * the document will not need to be sent again to the output data store.
   */
+  @Override
   public String getOutputDescription(OutputSpecification spec)
     throws ManifoldCFException, ServiceInterruption
   {
@@ -182,6 +190,7 @@
   *@param activities is the handle to an object that the implementer of an output connector may use to perform operations, such as logging processing activity.
   *@return the document status (accepted or permanently rejected).
   */
+  @Override
   public int addOrReplaceDocument(String documentURI, String outputDescription, RepositoryDocument document, String authorityNameString, IOutputAddActivity activities)
     throws ManifoldCFException, ServiceInterruption
   {
@@ -195,6 +204,7 @@
   *@param outputDescription is the last description string that was constructed for this document by the getOutputDescription() method above.
   *@param activities is the handle to an object that the implementer of an output connector may use to perform operations, such as logging processing activity.
   */
+  @Override
   public void removeDocument(String documentURI, String outputDescription, IOutputRemoveActivity activities)
     throws ManifoldCFException, ServiceInterruption
   {
@@ -217,6 +227,7 @@
   *@param os is the current output specification for this job.
   *@param tabsArray is an array of tab names.  Add to this array any tab names that are specific to the connector.
   */
+  @Override
   public void outputSpecificationHeader(IHTTPOutput out, Locale locale, OutputSpecification os, List<String> tabsArray)
     throws ManifoldCFException, IOException
   {
@@ -256,6 +267,7 @@
   *@param os is the current output specification for this job.
   *@param tabName is the current tab name.
   */
+  @Override
   public void outputSpecificationBody(IHTTPOutput out, Locale locale, OutputSpecification os, String tabName)
     throws ManifoldCFException, IOException
   {
@@ -284,6 +296,7 @@
   *@param os is the current output specification for this job.
   *@return null if all is well, or a string error message if there is an error that should prevent saving of the job (and cause a redirection to an error page).
   */
+  @Override
   public String processSpecificationPost(IPostParameters variableContext, Locale locale, OutputSpecification os)
     throws ManifoldCFException
   {
@@ -311,6 +324,7 @@
   *@param locale is the preferred local of the output.
   *@param os is the current output specification for this job.
   */
+  @Override
   public void viewSpecification(IHTTPOutput out, Locale locale, OutputSpecification os)
     throws ManifoldCFException, IOException
   {
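
With the base class now marking its extension points with @Override, a derived connector overrides
only what it needs.  A bare-bones, hypothetical sketch; the class name, activity name, and size
policy are illustrative only.

    import org.apache.manifoldcf.core.interfaces.*;
    import org.apache.manifoldcf.agents.interfaces.*;
    import org.apache.manifoldcf.agents.output.BaseOutputConnector;

    public class ExampleOutputConnector extends BaseOutputConnector
    {
      protected final static String INGEST_ACTIVITY = "document ingest";

      @Override
      public String[] getActivitiesList()
      {
        return new String[]{INGEST_ACTIVITY};
      }

      @Override
      public boolean checkLengthIndexable(String outputDescription, long length)
        throws ManifoldCFException, ServiceInterruption
      {
        // Example policy: skip anything over 10 MB.
        return length <= 10L * 1024L * 1024L;
      }
    }
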
diff --git a/framework/agents/src/main/java/org/apache/manifoldcf/agents/outputconnectorpool/OutputConnectorPool.java b/framework/agents/src/main/java/org/apache/manifoldcf/agents/outputconnectorpool/OutputConnectorPool.java
new file mode 100644
index 0000000..7d3e039
--- /dev/null
+++ b/framework/agents/src/main/java/org/apache/manifoldcf/agents/outputconnectorpool/OutputConnectorPool.java
@@ -0,0 +1,176 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.agents.outputconnectorpool;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+
+import java.util.*;
+import java.io.*;
+
+/** An implementation of IOutputConnectorPool.
+* Coordination and allocation among cluster members are managed within.
+* These objects are thread-local, so do not share them among threads.
+*/
+public class OutputConnectorPool implements IOutputConnectorPool
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  /** Local connector pool */
+  protected final static LocalPool localPool = new LocalPool();
+
+  /** Thread context */
+  protected final IThreadContext threadContext;
+  
+  /** Constructor */
+  public OutputConnectorPool(IThreadContext threadContext)
+    throws ManifoldCFException
+  {
+    this.threadContext = threadContext;
+  }
+  
+  /** Get multiple output connectors, all at once.  Do this in a particular order
+  * so that any connector exhaustion will not cause a deadlock.
+  *@param orderingKeys are the keys which determine in what order the connectors are obtained.
+  *@param outputConnections are the connections to use to build the connector instances.
+  */
+  @Override
+  public IOutputConnector[] grabMultiple(String[] orderingKeys, IOutputConnection[] outputConnections)
+    throws ManifoldCFException
+  {
+    // For now, use the OutputConnectorFactory method.  This will require us to extract info
+    // from each output connection, however.
+    String[] connectionNames = new String[outputConnections.length];
+    String[] classNames = new String[outputConnections.length];
+    ConfigParams[] configInfos = new ConfigParams[outputConnections.length];
+    int[] maxPoolSizes = new int[outputConnections.length];
+    
+    for (int i = 0; i < outputConnections.length; i++)
+    {
+      connectionNames[i] = outputConnections[i].getName();
+      classNames[i] = outputConnections[i].getClassName();
+      configInfos[i] = outputConnections[i].getConfigParams();
+      maxPoolSizes[i] = outputConnections[i].getMaxConnections();
+    }
+    return localPool.grabMultiple(threadContext,
+      orderingKeys, connectionNames, classNames, configInfos, maxPoolSizes);
+  }
+
+  /** Get an output connector.
+  * The connector is specified by an output connection object.
+  *@param outputConnection is the output connection to base the connector instance on.
+  */
+  @Override
+  public IOutputConnector grab(IOutputConnection outputConnection)
+    throws ManifoldCFException
+  {
+    return localPool.grab(threadContext, outputConnection.getName(), outputConnection.getClassName(),
+      outputConnection.getConfigParams(), outputConnection.getMaxConnections());
+  }
+
+  /** Release multiple output connectors.
+  *@param connections are the connections describing the instances to release.
+  *@param connectors are the connector instances to release.
+  */
+  @Override
+  public void releaseMultiple(IOutputConnection[] connections, IOutputConnector[] connectors)
+    throws ManifoldCFException
+  {
+    String[] connectionNames = new String[connections.length];
+    for (int i = 0; i < connections.length; i++)
+    {
+      connectionNames[i] = connections[i].getName();
+    }
+    localPool.releaseMultiple(threadContext, connectionNames, connectors);
+  }
+
+  /** Release an output connector.
+  *@param connection is the connection describing the instance to release.
+  *@param connector is the connector to release.
+  */
+  @Override
+  public void release(IOutputConnection connection, IOutputConnector connector)
+    throws ManifoldCFException
+  {
+    localPool.release(threadContext,connection.getName(),connector);
+  }
+
+  /** Idle notification for inactive output connector handles.
+  * This method polls all inactive handles.
+  */
+  @Override
+  public void pollAllConnectors()
+    throws ManifoldCFException
+  {
+    localPool.pollAllConnectors(threadContext);
+  }
+
+  /** Flush only those connector handles that are currently unused.
+  */
+  @Override
+  public void flushUnusedConnectors()
+    throws ManifoldCFException
+  {
+    localPool.flushUnusedConnectors(threadContext);
+  }
+
+  /** Clean up all open output connector handles.
+  * This method is called when the connector pool needs to be flushed,
+  * to free resources.
+  */
+  @Override
+  public void closeAllConnectors()
+    throws ManifoldCFException
+  {
+    localPool.closeAllConnectors(threadContext);
+  }
+
+  /** Actual static output connector pool */
+  protected static class LocalPool extends org.apache.manifoldcf.core.connectorpool.ConnectorPool<IOutputConnector>
+  {
+    public LocalPool()
+    {
+      super("_OUTPUTCONNECTORPOOL_");
+    }
+    
+    @Override
+    protected boolean isInstalled(IThreadContext tc, String className)
+      throws ManifoldCFException
+    {
+      IOutputConnectorManager connectorManager = OutputConnectorManagerFactory.make(tc);
+      return connectorManager.isInstalled(className);
+    }
+
+    @Override
+    protected boolean isConnectionNameValid(IThreadContext tc, String connectionName)
+      throws ManifoldCFException
+    {
+      IOutputConnectionManager connectionManager = OutputConnectionManagerFactory.make(tc);
+      return connectionManager.load(connectionName) != null;
+    }
+
+    public IOutputConnector[] grabMultiple(IThreadContext tc, String[] orderingKeys, String[] connectionNames, String[] classNames, ConfigParams[] configInfos, int[] maxPoolSizes)
+      throws ManifoldCFException
+    {
+      return grabMultiple(tc,IOutputConnector.class,orderingKeys,connectionNames,classNames,configInfos,maxPoolSizes);
+    }
+
+  }
+  
+}
diff --git a/framework/agents/src/main/java/org/apache/manifoldcf/agents/system/AgentsDaemon.java b/framework/agents/src/main/java/org/apache/manifoldcf/agents/system/AgentsDaemon.java
new file mode 100644
index 0000000..b7111d6
--- /dev/null
+++ b/framework/agents/src/main/java/org/apache/manifoldcf/agents/system/AgentsDaemon.java
@@ -0,0 +1,391 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.agents.system;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+import java.io.*;
+import java.util.*;
+
+public class AgentsDaemon
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  /** Agent shutdown signal name */
+  public static final String agentShutdownSignal = "_AGENTRUN_";
+  /** Agent service name prefix (followed by agent class name) */
+  public static final String agentServicePrefix = "AGENT_";
+
+  /** The agents thread, which starts and stops agents daemons to keep them consistent with the database, and
+  * also takes on process cleanup where necessary. */
+  protected AgentsThread agentsThread = null;
+
+  /** The idle cleanup thread. */
+  protected IdleCleanupThread idleCleanupThread = null;
+  
+  /** Process ID for this agents daemon. */
+  protected final String processID;
+  
+  /** This is the place we keep track of the agents we've started. */
+  protected final Map<String,IAgent> runningHash = new HashMap<String,IAgent>();
+  
+  // There are a number of different ways of running the agents framework.
+  // (1) Repeatedly call checkAgents(), and when all done make sure to call stopAgents().
+  // (2) Call registerAgentsShutdownHook(), then repeatedly run checkAgents().  Agent shutdown happens on JVM exit.
+  // (3) Call runAgents(), which will wait for someone else to call assertAgentsShutdownSignal().  Before exit, stopAgents() must be called.
+  // (4) Call registerAgentsShutdownHook(), then call runAgents(), which will wait for someone else to call assertAgentsShutdownSignal().  Shutdown happens on JVM exit.
+  
+  /** Create an agents daemon object.
+  *@param processID is the process ID of this agents daemon.  Process ID's must be unique
+  * for all agents daemons.
+  */
+  public AgentsDaemon(String processID)
+  {
+    this.processID = processID;
+  }
+  
+  /** Assert shutdown signal for the current agents daemon.
+  */
+  public static void assertAgentsShutdownSignal(IThreadContext threadContext)
+    throws ManifoldCFException
+  {
+    ILockManager lockManager = LockManagerFactory.make(threadContext);
+    lockManager.setGlobalFlag(agentShutdownSignal);
+  }
+  
+  /** Clear shutdown signal for the current agents daemon.
+  */
+  public static void clearAgentsShutdownSignal(IThreadContext threadContext)
+    throws ManifoldCFException
+  {
+    ILockManager lockManager = LockManagerFactory.make(threadContext);
+    lockManager.clearGlobalFlag(agentShutdownSignal);
+  }
+
+
+  /** Register agents shutdown hook.
+  * Call this ONCE before calling startAgents or checkAgents the first time, if you want automatic cleanup of agents on JVM stop.
+  */
+  public void registerAgentsShutdownHook(IThreadContext threadContext)
+    throws ManifoldCFException
+  {
+    // Create the shutdown hook for agents.  All activity will be keyed off of runningHash, so it is safe to do this under all conditions.
+    org.apache.manifoldcf.core.system.ManifoldCF.addShutdownHook(new AgentsShutdownHook());
+  }
+  
+  /** Run agents process.
+  * This method will not return until a shutdown signal is sent.
+  */
+  public void runAgents(IThreadContext threadContext)
+    throws ManifoldCFException
+  {
+    ILockManager lockManager = LockManagerFactory.make(threadContext);
+
+    // Don't come up at all if shutdown signal in force
+    if (lockManager.checkGlobalFlag(agentShutdownSignal))
+      return;
+
+    // Create and start agents thread.
+    startAgents(threadContext);
+    
+    while (true)
+    {
+      // Any shutdown signal yet?
+      if (lockManager.checkGlobalFlag(agentShutdownSignal))
+        break;
+          
+      try
+      {
+        ManifoldCF.sleep(5000L);
+      }
+      catch (InterruptedException e)
+      {
+        break;
+      }
+    }
+    
+  }
+
+  /** Start agents thread for this agents daemon object.
+  */
+  public void startAgents(IThreadContext threadContext)
+    throws ManifoldCFException
+  {
+    // Create idle cleanup thread.
+    idleCleanupThread = new IdleCleanupThread(processID);
+    agentsThread = new AgentsThread();
+    // Create and start agents thread.
+    idleCleanupThread.start();
+    agentsThread.start();
+  }
+  
+  /** Stop all started agents running under this agents daemon.
+  */
+  public void stopAgents(IThreadContext threadContext)
+    throws ManifoldCFException
+  {
+    // Shut down agents background thread.
+    while (agentsThread != null || idleCleanupThread != null)
+    {
+      if (agentsThread != null)
+        agentsThread.interrupt();
+      if (idleCleanupThread != null)
+        idleCleanupThread.interrupt();
+      
+      if (agentsThread != null && !agentsThread.isAlive())
+        agentsThread = null;
+      if (idleCleanupThread != null && !idleCleanupThread.isAlive())
+        idleCleanupThread = null;
+    }
+    
+    // Shut down running agents services directly.
+    ILockManager lockManager = LockManagerFactory.make(threadContext);
+    synchronized (runningHash)
+    {
+      // This is supposedly safe; iterator remove is used
+      Iterator<String> iter = runningHash.keySet().iterator();
+      while (iter.hasNext())
+      {
+        String className = iter.next();
+        IAgent agent = runningHash.get(className);
+        // Stop it
+        agent.stopAgent(threadContext);
+        lockManager.endServiceActivity(getAgentsClassServiceType(className), processID);
+        iter.remove();
+        agent.cleanUp(threadContext);
+      }
+    }
+    // Done.
+    OutputConnectorPoolFactory.make(threadContext).flushUnusedConnectors();
+  }
+
+  protected static String getAgentsClassServiceType(String agentClassName)
+  {
+    return agentServicePrefix + agentClassName;
+  }
+  
+  /** Agents thread.  This runs in background until interrupted, at which point
+  * it shuts down.  Its responsibilities include cleaning up after dead processes,
+  * as well as starting newly-registered agent processes, and terminating ones that disappear.
+  */
+  protected class AgentsThread extends Thread
+  {
+    public AgentsThread()
+    {
+      super();
+      setName("Agents thread");
+      setDaemon(true);
+    }
+    
+    public void run()
+    {
+      try
+      {
+        IThreadContext threadContext = ThreadContextFactory.make();
+        while (true)
+        {
+          try
+          {
+            if (Thread.currentThread().isInterrupted())
+              throw new ManifoldCFException("Interrupted",ManifoldCFException.INTERRUPTED);
+
+            checkAgents(threadContext);
+            ManifoldCF.sleep(5000L);
+          }
+          catch (InterruptedException e)
+          {
+            break;
+          }
+          catch (ManifoldCFException e)
+          {
+            if (e.getErrorCode() == ManifoldCFException.INTERRUPTED)
+              break;
+            Logging.agents.error("Exception tossed: "+e.getMessage(),e);
+          }
+          catch (OutOfMemoryError e)
+          {
+            System.err.println("Agents process ran out of memory - shutting down");
+            e.printStackTrace(System.err);
+            System.exit(-200);
+          }
+          catch (Throwable e)
+          {
+            Logging.agents.fatal("Error tossed: "+e.getMessage(),e);
+          }
+        }
+      }
+      catch (Throwable e)
+      {
+        // Severe error on initialization
+        System.err.println("Agents process could not start - shutting down");
+        Logging.agents.fatal("AgentThread initialization error tossed: "+e.getMessage(),e);
+        System.exit(-300);
+      }
+    }
+  }
+
+  /** Start all not-running agents.
+  *@param threadContext is the thread context.
+  */
+  protected void checkAgents(IThreadContext threadContext)
+    throws ManifoldCFException
+  {
+    ILockManager lockManager = LockManagerFactory.make(threadContext);
+    // Get agent manager
+    IAgentManager manager = AgentManagerFactory.make(threadContext);
+    synchronized (runningHash)
+    {
+      String[] classes = manager.getAllAgents();
+      Set<String> currentAgentClasses = new HashSet<String>();
+
+      int i = 0;
+      while (i < classes.length)
+      {
+        String className = classes[i++];
+        if (runningHash.get(className) == null)
+        {
+          // Start this agent
+          IAgent agent = AgentFactory.make(className);
+          agent.initialize(threadContext);
+          try
+          {
+            // Throw a lock, so that cleanup processes and startup processes don't collide.
+            String serviceType = getAgentsClassServiceType(className);
+            lockManager.registerServiceBeginServiceActivity(serviceType, processID, new CleanupAgent(threadContext, agent, processID));
+            // There is a potential race condition where the agent has been started but hasn't yet appeared in runningHash.
+            // But having runningHash be the synchronizer for this activity will prevent any problems.
+            agent.startAgent(threadContext, processID);
+            // Successful!
+            runningHash.put(className,agent);
+          }
+          catch (ManifoldCFException e)
+          {
+            if (e.getErrorCode() != ManifoldCFException.INTERRUPTED)
+              agent.cleanUp(threadContext);
+            throw e;
+          }
+        }
+        currentAgentClasses.add(className);
+      }
+
+      // Go through running hash and look for agents processes that have left
+      Iterator<String> runningAgentsIterator = runningHash.keySet().iterator();
+      while (runningAgentsIterator.hasNext())
+      {
+        String runningAgentClass = runningAgentsIterator.next();
+        if (!currentAgentClasses.contains(runningAgentClass))
+        {
+          // Shut down this one agent.
+          IAgent agent = runningHash.get(runningAgentClass);
+          // Stop it
+          agent.stopAgent(threadContext);
+          lockManager.endServiceActivity(getAgentsClassServiceType(runningAgentClass), processID);
+          runningAgentsIterator.remove();
+          agent.cleanUp(threadContext);
+        }
+      }
+    }
+
+    synchronized (runningHash)
+    {
+      // For every class we're supposed to be running, find registered but no-longer-active instances and clean
+      // up after them.
+      for (String agentsClass : runningHash.keySet())
+      {
+        IAgent agent = runningHash.get(agentsClass);
+        IServiceCleanup cleanup = new CleanupAgent(threadContext, agent, processID);
+        String agentsClassServiceType = getAgentsClassServiceType(agentsClass);
+        while (!lockManager.cleanupInactiveService(agentsClassServiceType, cleanup))
+        {
+          // Loop until no more inactive services
+        }
+      }
+    }
+    
+  }
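+
+  /* Summary of the service-activity handshake used above, per agent class:
+  *  - startup:  registerServiceBeginServiceActivity(serviceType, processID, cleanup), then agent.startAgent()
+  *  - shutdown: agent.stopAgent(), then endServiceActivity(serviceType, processID)
+  *  - recovery: cleanupInactiveService(serviceType, cleanup) is called repeatedly to clean up after cluster
+  *    members that registered activity but died before ending it.
+  */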
+
+  /** Agent cleanup class.  This provides functionality to clean up after agents processes
+  * that have gone away, or initialize an entire cluster.
+  */
+  protected static class CleanupAgent implements IServiceCleanup
+  {
+    protected final IAgent agent;
+    protected final IThreadContext threadContext;
+    protected final String processID;
+
+    public CleanupAgent(IThreadContext threadContext, IAgent agent, String processID)
+    {
+      this.agent = agent;
+      this.threadContext = threadContext;
+      this.processID = processID;
+    }
+    
+    /** Clean up after the specified service.  This method will block any startup of the specified
+    * service for as long as it runs.
+    *@param serviceName is the name of the service.
+    */
+    @Override
+    public void cleanUpService(String serviceName)
+      throws ManifoldCFException
+    {
+      agent.cleanUpAgentData(threadContext, processID, serviceName);
+    }
+
+    /** Clean up after ALL services of the type on the cluster.
+    */
+    @Override
+    public void cleanUpAllServices()
+      throws ManifoldCFException
+    {
+      agent.cleanUpAllAgentData(threadContext, processID);
+    }
+    
+    /** Perform cluster initialization - that is, whatever is needed presuming that the
+    * cluster has been down for an indeterminate period of time, but is otherwise in a clean
+    * state.
+    */
+    @Override
+    public void clusterInit()
+      throws ManifoldCFException
+    {
+      agent.clusterInit(threadContext);
+    }
+
+  }
+  
+  /** Agents shutdown hook class */
+  protected class AgentsShutdownHook implements IShutdownHook
+  {
+
+    public AgentsShutdownHook()
+    {
+    }
+    
+    @Override
+    public void doCleanup(IThreadContext threadContext)
+      throws ManifoldCFException
+    {
+      // Shutting down in this way must prevent startup from taking place.
+      stopAgents(threadContext);
+    }
+    
+  }
+  
+}
+
diff --git a/framework/agents/src/main/java/org/apache/manifoldcf/agents/system/IdleCleanupThread.java b/framework/agents/src/main/java/org/apache/manifoldcf/agents/system/IdleCleanupThread.java
new file mode 100644
index 0000000..531c3d0
--- /dev/null
+++ b/framework/agents/src/main/java/org/apache/manifoldcf/agents/system/IdleCleanupThread.java
@@ -0,0 +1,159 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.agents.system;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+import java.util.*;
+
+/** This thread periodically calls the cleanup method in all connected output connectors.  The ostensible purpose
+* is to allow the connectors to shut down idle connections, etc.
+*/
+public class IdleCleanupThread extends Thread
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  // Local data
+  /** Process ID */
+  protected final String processID;
+
+  /** Constructor.
+  */
+  public IdleCleanupThread(String processID)
+    throws ManifoldCFException
+  {
+    super();
+    this.processID = processID;
+    setName("Idle cleanup thread");
+    setDaemon(true);
+  }
+
+  public void run()
+  {
+    Logging.agents.debug("Start up idle cleanup thread");
+    try
+    {
+      // Create a thread context object.
+      IThreadContext threadContext = ThreadContextFactory.make();
+      // Get the cache handle.
+      ICacheManager cacheManager = CacheManagerFactory.make(threadContext);
+      // Get the output connector pool handle
+      IOutputConnectorPool outputConnectorPool = OutputConnectorPoolFactory.make(threadContext);
+      // Throttler subsystem
+      IThrottleGroups throttleGroups = ThrottleGroupsFactory.make(threadContext);
+      
+      /* For HSQLDB debugging...
+      IDBInterface database = DBInterfaceFactory.make(threadContext,
+        ManifoldCF.getMasterDatabaseName(),
+        ManifoldCF.getMasterDatabaseUsername(),
+        ManifoldCF.getMasterDatabasePassword());
+      */
+      
+      // Loop
+      while (true)
+      {
+        // Do another try/catch around everything in the loop
+        try
+        {
+          /*
+          System.out.println("+++++++++");
+          IResultSet results = database.performQuery("SELECT * FROM information_schema.system_sessions",null,null,null);
+          for (int i = 0; i < results.getRowCount(); i++)
+          {
+            IResultRow row = results.getRow(i);
+            Iterator<String> iter = row.getColumns();
+            while (iter.hasNext())
+            {
+              String columnName = iter.next();
+              System.out.println(columnName+": "+row.getValue(columnName).toString());
+            }
+            System.out.println("--------");
+          }
+          System.out.println("++++++++++");
+          */
+          
+          // Do the cleanup
+          outputConnectorPool.pollAllConnectors();
+          // Poll connection bins
+          throttleGroups.poll();
+          // Expire objects
+          cacheManager.expireObjects(System.currentTimeMillis());
+          
+          // Sleep for the retry interval.
+          ManifoldCF.sleep(5000L);
+        }
+        catch (ManifoldCFException e)
+        {
+          if (e.getErrorCode() == ManifoldCFException.INTERRUPTED)
+            break;
+
+          if (e.getErrorCode() == ManifoldCFException.DATABASE_CONNECTION_ERROR)
+          {
+            Logging.agents.error("Idle cleanup thread aborting and restarting due to database connection reset: "+e.getMessage(),e);
+            try
+            {
+              // Give the database a chance to catch up/wake up
+              ManifoldCF.sleep(10000L);
+            }
+            catch (InterruptedException se)
+            {
+              break;
+            }
+            continue;
+          }
+
+          // Log it, but keep the thread alive
+          Logging.agents.error("Exception tossed: "+e.getMessage(),e);
+
+          if (e.getErrorCode() == ManifoldCFException.SETUP_ERROR)
+          {
+            // Shut the whole system down!
+            System.exit(1);
+          }
+
+        }
+        catch (InterruptedException e)
+        {
+          // We're supposed to quit
+          break;
+        }
+        catch (OutOfMemoryError e)
+        {
+          System.err.println("agents process ran out of memory - shutting down");
+          e.printStackTrace(System.err);
+          System.exit(-200);
+        }
+        catch (Throwable e)
+        {
+          // A more severe error - but stay alive
+          Logging.agents.fatal("Error tossed: "+e.getMessage(),e);
+        }
+      }
+    }
+    catch (Throwable e)
+    {
+      // Severe error on initialization
+      System.err.println("agents process could not start - shutting down");
+      Logging.agents.fatal("IdleCleanupThread initialization error tossed: "+e.getMessage(),e);
+      System.exit(-300);
+    }
+
+  }
+
+}
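+
+/* Illustrative startup (assumed; the agents-process wiring lives elsewhere):
+*
+*   IdleCleanupThread idleCleanupThread = new IdleCleanupThread(processID);
+*   idleCleanupThread.start();
+*
+* The thread is a daemon and exits when interrupted, so shutdown only requires interrupt() and join().
+*/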
diff --git a/framework/agents/src/main/java/org/apache/manifoldcf/agents/system/ManifoldCF.java b/framework/agents/src/main/java/org/apache/manifoldcf/agents/system/ManifoldCF.java
index 7fbf0b0..2d1bf65 100644
--- a/framework/agents/src/main/java/org/apache/manifoldcf/agents/system/ManifoldCF.java
+++ b/framework/agents/src/main/java/org/apache/manifoldcf/agents/system/ManifoldCF.java
@@ -27,40 +27,37 @@
 {
   public static final String _rcsid = "@(#)$Id: ManifoldCF.java 988245 2010-08-23 18:39:35Z kwright $";
 
+  public static final String agentShutdownSignal = "_AGENTRUN_";
+
   // Agents initialized flag
   protected static boolean agentsInitialized = false;
   
-  /** This is the place we keep track of the agents we've started. */
-  protected static HashMap runningHash = new HashMap();
-  /** This flag prevents startAgents() from starting anything once stopAgents() has been called. */
-  protected static boolean stopAgentsRun = false;
-  
   /** Initialize environment.
   */
-  public static void initializeEnvironment()
+  public static void initializeEnvironment(IThreadContext threadContext)
     throws ManifoldCFException
   {
     synchronized (initializeFlagLock)
     {
       // Do core initialization
-      org.apache.manifoldcf.core.system.ManifoldCF.initializeEnvironment();
+      org.apache.manifoldcf.core.system.ManifoldCF.initializeEnvironment(threadContext);
       // Local initialization
-      org.apache.manifoldcf.agents.system.ManifoldCF.localInitialize();
+      org.apache.manifoldcf.agents.system.ManifoldCF.localInitialize(threadContext);
     }
   }
 
   /** Clean up environment.
   */
-  public static void cleanUpEnvironment()
+  public static void cleanUpEnvironment(IThreadContext threadContext)
   {
     synchronized (initializeFlagLock)
     {
-      org.apache.manifoldcf.agents.system.ManifoldCF.localCleanup();
-      org.apache.manifoldcf.core.system.ManifoldCF.cleanUpEnvironment();
+      org.apache.manifoldcf.agents.system.ManifoldCF.localCleanup(threadContext);
+      org.apache.manifoldcf.core.system.ManifoldCF.cleanUpEnvironment(threadContext);
     }
   }
   
-  public static void localInitialize()
+  public static void localInitialize(IThreadContext threadContext)
     throws ManifoldCFException
   {
     synchronized (initializeFlagLock)
@@ -68,31 +65,34 @@
       if (agentsInitialized)
         return;
 
-      // Create the shutdown hook for agents.  All activity will be keyed off of runningHash, so it is safe to do this under all conditions.
-      org.apache.manifoldcf.core.system.ManifoldCF.addShutdownHook(new AgentsShutdownHook());
-      
       // Initialize the local loggers
       Logging.initializeLoggers();
-      Logging.setLogLevels();
+      Logging.setLogLevels(threadContext);
       agentsInitialized = true;
     }
   }
 
-  public static void localCleanup()
+  public static void localCleanup(IThreadContext threadContext)
   {
+    // Close all pools
+    try
+    {
+      OutputConnectorPoolFactory.make(threadContext).closeAllConnectors();
+    }
+    catch (ManifoldCFException e)
+    {
+      if (Logging.agents != null)
+        Logging.agents.warn("Exception shutting down output connector pool: "+e.getMessage(),e);
+    }
   }
   
   /** Reset the environment.
   */
-  public static void resetEnvironment()
+  public static void resetEnvironment(IThreadContext threadContext)
   {
     synchronized (initializeFlagLock)
     {
-      org.apache.manifoldcf.core.system.ManifoldCF.resetEnvironment();
-      synchronized (runningHash)
-      {
-        stopAgentsRun = false;
-      }
+      org.apache.manifoldcf.core.system.ManifoldCF.resetEnvironment(threadContext);
     }
   }
 
@@ -129,73 +129,6 @@
     mgr.deinstall();
   }
 
-  /** Start all not-running agents.
-  *@param threadContext is the thread context.
-  */
-  public static void startAgents(IThreadContext threadContext)
-    throws ManifoldCFException
-  {
-    // Get agent manager
-    IAgentManager manager = AgentManagerFactory.make(threadContext);
-    ManifoldCFException problem = null;
-    synchronized (runningHash)
-    {
-      // DO NOT permit this method to do anything if stopAgents() has ever been called for this JVM! 
-      // (If it has, it means that the JVM is trying to shut down.)
-      if (stopAgentsRun)
-        return;
-      String[] classes = manager.getAllAgents();
-      int i = 0;
-      while (i < classes.length)
-      {
-        String className = classes[i++];
-        if (runningHash.get(className) == null)
-        {
-          // Start this agent
-          IAgent agent = AgentFactory.make(threadContext,className);
-          try
-          {
-            // There is a potential race condition where the agent has been started but hasn't yet appeared in runningHash.
-            // But having runningHash be the synchronizer for this activity will prevent any problems.
-            // There is ANOTHER potential race condition, however, that can occur if the process is shut down just before startAgents() is called.
-            // We avoid that problem by means of a flag, which prevents startAgents() from doing anything once stopAgents() has been called.
-            agent.startAgent();
-            // Successful!
-            runningHash.put(className,agent);
-          }
-          catch (ManifoldCFException e)
-          {
-            problem = e;
-          }
-        }
-      }
-    }
-    if (problem != null)
-      throw problem;
-    // Done.
-  }
-
-  /** Stop all started agents.
-  */
-  public static void stopAgents(IThreadContext threadContext)
-    throws ManifoldCFException
-  {
-    synchronized (runningHash)
-    {
-      HashMap iterHash = (HashMap)runningHash.clone();
-      Iterator iter = iterHash.keySet().iterator();
-      while (iter.hasNext())
-      {
-        String className = (String)iter.next();
-        IAgent agent = (IAgent)runningHash.get(className);
-        // Stop it
-        agent.stopAgent();
-        runningHash.remove(className);
-      }
-    }
-    // Done.
-  }
-  
   /** Signal output connection needs redoing.
   * This is called when something external changed on an output connection, and
   * therefore all associated documents must be reindexed.
@@ -212,27 +145,22 @@
     // resulting from this signal that find themselves "unchanged".
     AgentManagerFactory.noteOutputConnectionChange(threadContext,connectionName);
   }
-  
-  /** Agents shutdown hook class */
-  protected static class AgentsShutdownHook implements IShutdownHook
+
+  /** Signal output connection has been deleted.
+  * This is called when the target of an output connection has been removed,
+  * and therefore all associated documents have already been removed as well.
+  *@param threadContext is the thread context.
+  *@param connectionName is the connection name.
+  */
+  public static void signalOutputConnectionRemoved(IThreadContext threadContext, String connectionName)
+    throws ManifoldCFException
   {
-    
-    public AgentsShutdownHook()
-    {
-    }
-    
-    public void doCleanup()
-      throws ManifoldCFException
-    {
-      // Shutting down in this way must prevent startup from taking place.
-      synchronized (runningHash)
-      {
-        stopAgentsRun = true;
-      }
-      IThreadContext tc = ThreadContextFactory.make();
-      stopAgents(tc);
-    }
-    
+    // Blow away the incremental ingestion table first
+    IIncrementalIngester ingester = IncrementalIngesterFactory.make(threadContext);
+    ingester.removeOutputConnection(connectionName);
+    // Now, signal to all agents that the output connection configuration has changed.  Do this second, so that there cannot be documents
+    // resulting from this signal that find themselves "unchanged".
+    AgentManagerFactory.noteOutputConnectionChange(threadContext,connectionName);
   }
   
   // Helper methods for API support.  These are made public so connectors can use them to implement the executeCommand method.
diff --git a/framework/agents/src/test/java/org/apache/manifoldcf/agents/tests/BaseDerby.java b/framework/agents/src/test/java/org/apache/manifoldcf/agents/tests/BaseDerby.java
index fb07b24..9986e5e 100644
--- a/framework/agents/src/test/java/org/apache/manifoldcf/agents/tests/BaseDerby.java
+++ b/framework/agents/src/test/java/org/apache/manifoldcf/agents/tests/BaseDerby.java
@@ -111,13 +111,13 @@
     throws Exception
   {
     super.initializeSystem();
-    org.apache.manifoldcf.agents.system.ManifoldCF.localInitialize();
+    org.apache.manifoldcf.agents.system.ManifoldCF.localInitialize(ThreadContextFactory.make());
   }
   
   protected void cleanupSystem()
     throws Exception
   {
-    org.apache.manifoldcf.agents.system.ManifoldCF.localCleanup();
+    org.apache.manifoldcf.agents.system.ManifoldCF.localCleanup(ThreadContextFactory.make());
     super.cleanupSystem();
   }
 
diff --git a/framework/agents/src/test/java/org/apache/manifoldcf/agents/tests/BaseHSQLDB.java b/framework/agents/src/test/java/org/apache/manifoldcf/agents/tests/BaseHSQLDB.java
index 31811ae..beee355 100644
--- a/framework/agents/src/test/java/org/apache/manifoldcf/agents/tests/BaseHSQLDB.java
+++ b/framework/agents/src/test/java/org/apache/manifoldcf/agents/tests/BaseHSQLDB.java
@@ -111,13 +111,13 @@
     throws Exception
   {
     super.initializeSystem();
-    org.apache.manifoldcf.agents.system.ManifoldCF.localInitialize();
+    org.apache.manifoldcf.agents.system.ManifoldCF.localInitialize(ThreadContextFactory.make());
   }
   
   protected void cleanupSystem()
     throws Exception
   {
-    org.apache.manifoldcf.agents.system.ManifoldCF.localCleanup();
+    org.apache.manifoldcf.agents.system.ManifoldCF.localCleanup(ThreadContextFactory.make());
     super.cleanupSystem();
   }
 
diff --git a/framework/agents/src/test/java/org/apache/manifoldcf/agents/tests/BaseHSQLDBext.java b/framework/agents/src/test/java/org/apache/manifoldcf/agents/tests/BaseHSQLDBext.java
index ea1e3b9..a2d25b0 100644
--- a/framework/agents/src/test/java/org/apache/manifoldcf/agents/tests/BaseHSQLDBext.java
+++ b/framework/agents/src/test/java/org/apache/manifoldcf/agents/tests/BaseHSQLDBext.java
@@ -114,13 +114,13 @@
     throws Exception
   {
     super.initializeSystem();
-    org.apache.manifoldcf.agents.system.ManifoldCF.localInitialize();
+    org.apache.manifoldcf.agents.system.ManifoldCF.localInitialize(ThreadContextFactory.make());
   }
   
   protected void cleanupSystem()
     throws Exception
   {
-    org.apache.manifoldcf.agents.system.ManifoldCF.localCleanup();
+    org.apache.manifoldcf.agents.system.ManifoldCF.localCleanup(ThreadContextFactory.make());
     super.cleanupSystem();
   }
   
diff --git a/framework/agents/src/test/java/org/apache/manifoldcf/agents/tests/BaseMySQL.java b/framework/agents/src/test/java/org/apache/manifoldcf/agents/tests/BaseMySQL.java
index e7d1e6e..cfb2afc 100644
--- a/framework/agents/src/test/java/org/apache/manifoldcf/agents/tests/BaseMySQL.java
+++ b/framework/agents/src/test/java/org/apache/manifoldcf/agents/tests/BaseMySQL.java
@@ -111,13 +111,13 @@
     throws Exception
   {
     super.initializeSystem();
-    org.apache.manifoldcf.agents.system.ManifoldCF.localInitialize();
+    org.apache.manifoldcf.agents.system.ManifoldCF.localInitialize(ThreadContextFactory.make());
   }
   
   protected void cleanupSystem()
     throws Exception
   {
-    org.apache.manifoldcf.agents.system.ManifoldCF.localCleanup();
+    org.apache.manifoldcf.agents.system.ManifoldCF.localCleanup(ThreadContextFactory.make());
     super.cleanupSystem();
   }
 
diff --git a/framework/agents/src/test/java/org/apache/manifoldcf/agents/tests/BasePostgresql.java b/framework/agents/src/test/java/org/apache/manifoldcf/agents/tests/BasePostgresql.java
index 4e8f174..019e6fc 100644
--- a/framework/agents/src/test/java/org/apache/manifoldcf/agents/tests/BasePostgresql.java
+++ b/framework/agents/src/test/java/org/apache/manifoldcf/agents/tests/BasePostgresql.java
@@ -111,13 +111,13 @@
     throws Exception
   {
     super.initializeSystem();
-    org.apache.manifoldcf.agents.system.ManifoldCF.localInitialize();
+    org.apache.manifoldcf.agents.system.ManifoldCF.localInitialize(ThreadContextFactory.make());
   }
   
   protected void cleanupSystem()
     throws Exception
   {
-    org.apache.manifoldcf.agents.system.ManifoldCF.localCleanup();
+    org.apache.manifoldcf.agents.system.ManifoldCF.localCleanup(ThreadContextFactory.make());
     super.cleanupSystem();
   }
 
diff --git a/framework/api-service/pom.xml b/framework/api-service/pom.xml
index d164475..99c305a 100644
--- a/framework/api-service/pom.xml
+++ b/framework/api-service/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.manifoldcf</groupId>
     <artifactId>mcf-framework</artifactId>
-    <version>1.2-SNAPSHOT</version>
+    <version>1.5-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
 
@@ -87,7 +87,7 @@
     <dependency>
       <groupId>org.apache.httpcomponents</groupId>
       <artifactId>httpclient</artifactId>
-      <version>${httpcomponent.version}</version>
+      <version>${httpcomponent.httpclient.version}</version>
       <scope>runtime</scope>
     </dependency>
     <dependency>
diff --git a/framework/api-service/src/main/java/org/apache/manifoldcf/apiservice/IdleCleanupThread.java b/framework/api-service/src/main/java/org/apache/manifoldcf/apiservice/IdleCleanupThread.java
new file mode 100644
index 0000000..ad99a7e
--- /dev/null
+++ b/framework/api-service/src/main/java/org/apache/manifoldcf/apiservice/IdleCleanupThread.java
@@ -0,0 +1,141 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.apiservice;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+import org.apache.manifoldcf.crawler.interfaces.*;
+import org.apache.manifoldcf.authorities.interfaces.*;
+import org.apache.manifoldcf.core.system.Logging;
+import org.apache.manifoldcf.core.system.ManifoldCF;
+import java.util.*;
+import java.lang.reflect.*;
+
+/** This thread periodically calls the cleanup method in all connected connectors (repository, output,
+* authority, and mapping).  The ostensible purpose is to allow the connectors to shut down idle connections, etc.
+*/
+public class IdleCleanupThread extends Thread
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  /** Constructor.
+  */
+  public IdleCleanupThread()
+    throws ManifoldCFException
+  {
+    super();
+    setName("Idle cleanup thread");
+    setDaemon(true);
+  }
+
+  public void run()
+  {
+    Logging.root.debug("Start up idle cleanup thread");
+    try
+    {
+      // Create a thread context object.
+      IThreadContext threadContext = ThreadContextFactory.make();
+      // Get the cache handle.
+      ICacheManager cacheManager = CacheManagerFactory.make(threadContext);
+      
+      IRepositoryConnectorPool repositoryConnectorPool = RepositoryConnectorPoolFactory.make(threadContext);
+      IOutputConnectorPool outputConnectorPool = OutputConnectorPoolFactory.make(threadContext);
+      IAuthorityConnectorPool authorityConnectorPool = AuthorityConnectorPoolFactory.make(threadContext);
+      IMappingConnectorPool mappingConnectorPool = MappingConnectorPoolFactory.make(threadContext);
+      
+      IThrottleGroups throttleGroups = ThrottleGroupsFactory.make(threadContext);
+
+      // Loop
+      while (true)
+      {
+        // Do another try/catch around everything in the loop
+        try
+        {
+          // Do the cleanup
+          repositoryConnectorPool.pollAllConnectors();
+          outputConnectorPool.pollAllConnectors();
+          authorityConnectorPool.pollAllConnectors();
+          mappingConnectorPool.pollAllConnectors();
+          
+          throttleGroups.poll();
+          
+          cacheManager.expireObjects(System.currentTimeMillis());
+          
+          // Sleep for the retry interval.
+          ManifoldCF.sleep(5000L);
+        }
+        catch (ManifoldCFException e)
+        {
+          if (e.getErrorCode() == ManifoldCFException.INTERRUPTED)
+            break;
+
+          if (e.getErrorCode() == ManifoldCFException.DATABASE_CONNECTION_ERROR)
+          {
+            Logging.root.error("Idle cleanup thread aborting and restarting due to database connection reset: "+e.getMessage(),e);
+            try
+            {
+              // Give the database a chance to catch up/wake up
+              ManifoldCF.sleep(10000L);
+            }
+            catch (InterruptedException se)
+            {
+              break;
+            }
+            continue;
+          }
+
+          // Log it, but keep the thread alive
+          Logging.root.error("Exception tossed: "+e.getMessage(),e);
+
+          if (e.getErrorCode() == ManifoldCFException.SETUP_ERROR)
+          {
+            // Shut the whole system down!
+            System.exit(1);
+          }
+
+        }
+        catch (InterruptedException e)
+        {
+          // We're supposed to quit
+          break;
+        }
+        catch (OutOfMemoryError e)
+        {
+          System.err.println("API service ran out of memory - shutting down");
+          e.printStackTrace(System.err);
+          System.exit(-200);
+        }
+        catch (Throwable e)
+        {
+          // A more severe error - but stay alive
+          Logging.root.fatal("Error tossed: "+e.getMessage(),e);
+        }
+      }
+    }
+    catch (Throwable e)
+    {
+      // Severe error on initialization
+      System.err.println("API service could not start - shutting down");
+      Logging.root.fatal("IdleCleanupThread initialization error tossed: "+e.getMessage(),e);
+      System.exit(-300);
+    }
+
+  }
+
+}
diff --git a/framework/api-service/src/main/java/org/apache/manifoldcf/apiservice/ServletListener.java b/framework/api-service/src/main/java/org/apache/manifoldcf/apiservice/ServletListener.java
index 6d0eb11..cfe250e 100644
--- a/framework/api-service/src/main/java/org/apache/manifoldcf/apiservice/ServletListener.java
+++ b/framework/api-service/src/main/java/org/apache/manifoldcf/apiservice/ServletListener.java
@@ -29,11 +29,16 @@
 {
   public static final String _rcsid = "@(#)$Id$";
 
+  protected IdleCleanupThread idleCleanupThread = null;
+  
   public void contextInitialized(ServletContextEvent sce)
   {
     try
     {
-      ManifoldCF.initializeEnvironment();
+      IThreadContext threadContext = ThreadContextFactory.make();
+      ManifoldCF.initializeEnvironment(threadContext);
+      idleCleanupThread = new IdleCleanupThread();
+      idleCleanupThread.start();
     }
     catch (ManifoldCFException e)
     {
@@ -43,7 +48,21 @@
   
   public void contextDestroyed(ServletContextEvent sce)
   {
-    ManifoldCF.cleanUpEnvironment();
+    try
+    {
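+      // Spin, re-interrupting the idle cleanup thread, until it has actually exited.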
+      while (true)
+      {
+        if (idleCleanupThread == null)
+          break;
+        idleCleanupThread.interrupt();
+        if (!idleCleanupThread.isAlive())
+          idleCleanupThread = null;
+      }
+    }
+    finally
+    {
+      ManifoldCF.cleanUpEnvironment(ThreadContextFactory.make());
+    }
   }
 
 }
diff --git a/framework/api-servlet/pom.xml b/framework/api-servlet/pom.xml
index 3187742..9fcbebe 100644
--- a/framework/api-servlet/pom.xml
+++ b/framework/api-servlet/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.manifoldcf</groupId>
     <artifactId>mcf-framework</artifactId>
-    <version>1.2-SNAPSHOT</version>
+    <version>1.5-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
 
diff --git a/framework/api-servlet/src/main/java/org/apache/manifoldcf/apiservlet/APIServlet.java b/framework/api-servlet/src/main/java/org/apache/manifoldcf/apiservlet/APIServlet.java
index c088ea9..c960ed1 100644
--- a/framework/api-servlet/src/main/java/org/apache/manifoldcf/apiservlet/APIServlet.java
+++ b/framework/api-servlet/src/main/java/org/apache/manifoldcf/apiservlet/APIServlet.java
@@ -588,7 +588,7 @@
     String[] terms = queryString.split("&");
     for (String term : terms)
     {
-      int index = queryString.indexOf("=");
+      int index = term.indexOf("=");
       if (index == -1)
         addValue(rval,URLDecoder.decode(term,"utf-8"),"");
       else
diff --git a/framework/authority-service/pom.xml b/framework/authority-service/pom.xml
index 8213169..1234c2b 100644
--- a/framework/authority-service/pom.xml
+++ b/framework/authority-service/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.manifoldcf</groupId>
     <artifactId>mcf-framework</artifactId>
-    <version>1.2-SNAPSHOT</version>
+    <version>1.5-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
 
@@ -87,7 +87,7 @@
     <dependency>
       <groupId>org.apache.httpcomponents</groupId>
       <artifactId>httpclient</artifactId>
-      <version>${httpcomponent.version}</version>
+      <version>${httpcomponent.httpclient.version}</version>
       <scope>runtime</scope>
     </dependency>
     <dependency>
diff --git a/framework/authority-service/src/main/java/org/apache/manifoldcf/authorityservice/ServletListener.java b/framework/authority-service/src/main/java/org/apache/manifoldcf/authorityservice/ServletListener.java
index 7cc2409..1f3ffb2 100644
--- a/framework/authority-service/src/main/java/org/apache/manifoldcf/authorityservice/ServletListener.java
+++ b/framework/authority-service/src/main/java/org/apache/manifoldcf/authorityservice/ServletListener.java
@@ -33,7 +33,7 @@
   {
     try
     {
-      ManifoldCF.initializeEnvironment();
+      ManifoldCF.initializeEnvironment(ThreadContextFactory.make());
     }
     catch (ManifoldCFException e)
     {
@@ -43,7 +43,7 @@
   
   public void contextDestroyed(ServletContextEvent sce)
   {
-    ManifoldCF.cleanUpEnvironment();
+    ManifoldCF.cleanUpEnvironment(ThreadContextFactory.make());
   }
 
 }
diff --git a/framework/authority-servlet/pom.xml b/framework/authority-servlet/pom.xml
index 2efaf84..13aac41 100644
--- a/framework/authority-servlet/pom.xml
+++ b/framework/authority-servlet/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.manifoldcf</groupId>
     <artifactId>mcf-framework</artifactId>
-    <version>1.2-SNAPSHOT</version>
+    <version>1.5-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
 
diff --git a/framework/authority-servlet/src/main/java/org/apache/manifoldcf/authorityservlet/UserACLServlet.java b/framework/authority-servlet/src/main/java/org/apache/manifoldcf/authorityservlet/UserACLServlet.java
index d648130..02d4a9d 100644
--- a/framework/authority-servlet/src/main/java/org/apache/manifoldcf/authorityservlet/UserACLServlet.java
+++ b/framework/authority-servlet/src/main/java/org/apache/manifoldcf/authorityservlet/UserACLServlet.java
@@ -24,6 +24,7 @@
 import org.apache.manifoldcf.authorities.system.Logging;
 import org.apache.manifoldcf.authorities.system.RequestQueue;
 import org.apache.manifoldcf.authorities.system.AuthRequest;
+import org.apache.manifoldcf.authorities.system.MappingRequest;
 
 import java.io.*;
 import java.util.*;
@@ -105,13 +106,38 @@
 
       Logging.authorityService.debug("Received request");
 
+      Map<String,String> domainMap = new HashMap<String,String>();
+
+      // Legacy mode: single user name with optional domain
       String userID = request.getParameter("username");
-      if (userID == null)
+      if (userID != null)
+      {
+        String domain = request.getParameter("domain");
+        if (domain == null)
+          domain = "";
+        domainMap.put(domain,userID);
+      }
+      
+      // Now, go through enumerated username/domain pairs
+      int q = 0;
+      while (true)
+      {
+        String enumUserName = request.getParameter("username_"+q);
+        if (enumUserName == null)
+          break;
+        String enumDomain = request.getParameter("domain_"+q);
+        if (enumDomain == null)
+          enumDomain = "";
+        domainMap.put(enumDomain,enumUserName);
+        q++;
+      }
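+      // For example (illustrative values only), a request such as
+      //   ...?username=jsmith&domain=SALES&username_0=jsmith&domain_0=ENGINEERING
+      // produces domainMap = { "SALES" -> "jsmith", "ENGINEERING" -> "jsmith" }.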
+      
+      if (domainMap.size() == 0)
       {
         response.sendError(response.SC_BAD_REQUEST);
         return;
       }
-
+      
       boolean idneeded = false;
       boolean aclneeded = true;
 
@@ -134,61 +160,195 @@
 
       if (Logging.authorityService.isDebugEnabled())
       {
-        Logging.authorityService.debug("Received authority request for user '"+userID+"'");
+        StringBuilder sb2 = new StringBuilder("[");
+        boolean first = true;
+        for (String domain : domainMap.keySet())
+        {
+          if (first)
+            first = false;
+          else
+            sb2.append(",");
+          sb2.append("'").append(domain).append("':'").append(domainMap.get(domain)).append("'");
+        }
+        sb2.append("]");
+        Logging.authorityService.debug("Received authority request for domain:user set "+sb2.toString());
       }
 
-      RequestQueue queue = ManifoldCF.getRequestQueue();
+      RequestQueue<MappingRequest> mappingQueue = ManifoldCF.getMappingRequestQueue();
+      if (mappingQueue == null)
+      {
+        // System wasn't started; return unauthorized
+        throw new ManifoldCFException("System improperly initialized");
+      }
+
+      RequestQueue<AuthRequest> queue = ManifoldCF.getRequestQueue();
       if (queue == null)
       {
         // System wasn't started; return unauthorized
         throw new ManifoldCFException("System improperly initialized");
       }
 
+      
       IThreadContext itc = ThreadContextFactory.make();
+      
+      IMappingConnectionManager mappingConnManager = MappingConnectionManagerFactory.make(itc);
       IAuthorityConnectionManager authConnManager = AuthorityConnectionManagerFactory.make(itc);
 
-      IAuthorityConnection[] connections = authConnManager.getAllConnections();
-      int i = 0;
+      // Get all mapping connections; we may not need them all but we do need to be able to look them all up
+      IMappingConnection[] mappingConnections = mappingConnManager.getAllConnections();
+      
+      // One thread per connection, which is responsible for starting the mapping process when it is ready.
+      List<MappingOrderThread> mappingThreads = new ArrayList<MappingOrderThread>();
+      // One thread per authority, which is responsible for starting the auth request when it is ready.
+      List<AuthOrderThread> authThreads = new ArrayList<AuthOrderThread>();
 
-      AuthRequest[] requests = new AuthRequest[connections.length];
+      Map<MapperDescription,MappingRequest> mappingRequests = new HashMap<MapperDescription,MappingRequest>();
+      Map<String,AuthRequest> authRequests = new HashMap<String,AuthRequest>();
 
-      // Queue up all the requests
-      while (i < connections.length)
+      Map<String,IMappingConnection> mappingConnMap = new HashMap<String,IMappingConnection>();
+      
+      // Fill in mappingConnMap, since we need to be able to find connections given connection names
+      for (IMappingConnection c : mappingConnections)
       {
-        IAuthorityConnection ac = connections[i];
-
-        String identifyingString = ac.getDescription();
-        if (identifyingString == null || identifyingString.length() == 0)
-          identifyingString = ac.getName();
-
-        AuthRequest ar = new AuthRequest(userID,ac.getClassName(),identifyingString,ac.getConfigParams(),ac.getMaxConnections());
-        queue.addRequest(ar);
-
-        requests[i++] = ar;
+        mappingConnMap.put(c.getName(),c);
       }
 
-      // Now, work through the returning answers.
-      i = 0;
+      // Set of connections we need to fire off
+      Set<MapperDescription> activeConnections = new HashSet<MapperDescription>();
 
-      // Ask all the registered authorities for their ACLs, and merge the final list together.
+      // We do the minimal set of mapping requests and authority requests.  Since it is the authority tokens we
+      // are looking for, we start there: we build the authority requests first, and then the mapping requests
+      // that support them.
+      // Create auth requests
+      for (String authDomain : domainMap.keySet())
+      {
+        IAuthorityConnection[] connections = authConnManager.getDomainConnections(authDomain);
+        for (int i = 0; i < connections.length; i++)
+        {
+          IAuthorityConnection thisConnection = connections[i];
+          String identifyingString = thisConnection.getDescription();
+          if (identifyingString == null || identifyingString.length() == 0)
+            identifyingString = thisConnection.getName();
+          
+          // Create a request
+          AuthRequest ar = new AuthRequest(thisConnection,identifyingString);
+          authRequests.put(thisConnection.getName(), ar);
+          
+          // We create an auth thread if there are prerequisites to meet.
+          // Otherwise, we just fire off the request
+          String domainUserID = domainMap.get(authDomain);
+          if (thisConnection.getPrerequisiteMapping() == null)
+          {
+            ar.setUserID(domainUserID);
+            queue.addRequest(ar);
+          }
+          else
+          {
+            MapperDescription md = new MapperDescription(thisConnection.getPrerequisiteMapping(),authDomain);
+            AuthOrderThread thread = new AuthOrderThread(identifyingString,
+              ar, md,
+              queue, mappingRequests);
+            authThreads.add(thread);
+            // The same mapper can be used for multiple domains, although this is likely to be uncommon.  Nevertheless,
+            // mapper invocations need to be kept separate per domain to avoid cross-domain interference.
+            activeConnections.add(md);
+          }
+        }
+      }
+
+      // Create mapping requests
+      while (!activeConnections.isEmpty())
+      {
+        Iterator<MapperDescription> connectionIter = activeConnections.iterator();
+        MapperDescription mapperDesc = connectionIter.next();
+        String connectionName = mapperDesc.mapperName;
+        String authDomain = mapperDesc.authDomain;
+        IMappingConnection thisConnection = mappingConnMap.get(connectionName);
+        String identifyingString = thisConnection.getDescription();
+        if (identifyingString == null || identifyingString.length() == 0)
+          identifyingString = connectionName;
+
+        // Create a request
+        MappingRequest mr = new MappingRequest(thisConnection,identifyingString);
+        mappingRequests.put(mapperDesc, mr);
+
+        // Either start up a thread, or just fire it off immediately.
+        if (thisConnection.getPrerequisiteMapping() == null)
+        {
+          mr.setUserID(domainMap.get(authDomain));
+          mappingQueue.addRequest(mr);
+        }
+        else
+        {
+          //System.out.println("Mapper: prerequisite found: '"+thisConnection.getPrerequisiteMapping()+"'");
+          MapperDescription p = new MapperDescription(thisConnection.getPrerequisiteMapping(),authDomain);
+          MappingOrderThread thread = new MappingOrderThread(identifyingString,
+            mr, p, mappingQueue, mappingRequests);
+          mappingThreads.add(thread);
+          if (mappingRequests.get(p) == null)
+            activeConnections.add(p);
+        }
+        activeConnections.remove(mapperDesc);
+      }
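+      // For example (hypothetical configuration): if authority A has prerequisite mapper M1, and M1 has
+      // prerequisite mapper M2, the loops above create requests for A, M1, and M2.  M2 has no prerequisite,
+      // so it is queued immediately; the order threads then queue M1 once M2 completes, and A once M1 completes.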
+      
+      // Start threads.  We have to wait until all the requests have been
+      // at least created before we do this.
+      for (MappingOrderThread thread : mappingThreads)
+      {
+        thread.start();
+      }
+      for (AuthOrderThread thread : authThreads)
+      {
+        thread.start();
+      }
+      
+      // Wait for the threads to finish up.  This will guarantee that all entities have run to completion.
+      for (MappingOrderThread thread : mappingThreads)
+      {
+        thread.finishUp();
+      }
+      for (AuthOrderThread thread : authThreads)
+      {
+        thread.finishUp();
+      }
+      
+      // This is probably unnecessary, but we do it anyway just to adhere to the contract
+      for (MappingRequest mr : mappingRequests.values())
+      {
+        mr.waitForComplete();
+      }
+      
+      // Handle all exceptions thrown during mapping.  In general this just means logging them, because
+      // the downstream authorities will presumably not find what they are looking for and error out that way.
+      for (MappingRequest mr : mappingRequests.values())
+      {
+        Throwable exception = mr.getAnswerException();
+        if (exception != null)
+        {
+          Logging.authorityService.warn("Mapping exception logged from "+mr.getIdentifyingString()+": "+exception.getMessage()+"; mapper aborted", exception);
+        }
+      }
+      
+      // Now, work through the returning answers.
+
+      // Ask all the interrogated authorities for their ACLs, and merge the final list together.
       StringBuilder sb = new StringBuilder();
       // Set response mime type
       response.setContentType("text/plain; charset=ISO8859-1");
       ServletOutputStream out = response.getOutputStream();
       try
       {
-        while (i < connections.length)
+        for (String connectionName : authRequests.keySet())
         {
-          IAuthorityConnection ac = connections[i];
-          AuthRequest ar = requests[i++];
+          AuthRequest ar = authRequests.get(connectionName);
 
           if (Logging.authorityService.isDebugEnabled())
-            Logging.authorityService.debug("Waiting for answer from connector class '"+ac.getClassName()+"' for user '"+userID+"'");
+            Logging.authorityService.debug("Waiting for answer from authority connection "+ar.getIdentifyingString()+" for user '"+ar.getUserID()+"'");
 
           ar.waitForComplete();
 
           if (Logging.authorityService.isDebugEnabled())
-            Logging.authorityService.debug("Received answer from connector class '"+ac.getClassName()+"' for user '"+userID+"'");
+            Logging.authorityService.debug("Received answer from authority connection "+ar.getIdentifyingString()+" for user '"+ar.getUserID()+"'");
 
           Throwable exception = ar.getAnswerException();
           AuthorizationResponse reply = ar.getAnswerResponse();
@@ -203,21 +363,28 @@
             return;
           }
 
-          if (reply.getResponseStatus() == AuthorizationResponse.RESPONSE_UNREACHABLE)
+          // A null reply means the same as USERNOTFOUND; it occurs because a user mapping failed somewhere.
+          if (reply == null)
           {
-            Logging.authorityService.warn("Authority '"+ar.getIdentifyingString()+"' is unreachable for user '"+userID+"'");
+            if (Logging.authorityService.isDebugEnabled())
+              Logging.authorityService.debug("User '"+ar.getUserID()+"' mapping failed for authority '"+ar.getIdentifyingString()+"'");
+            sb.append(USERNOTFOUND_VALUE).append(java.net.URLEncoder.encode(ar.getIdentifyingString(),"UTF-8")).append("\n");
+          }
+          else if (reply.getResponseStatus() == AuthorizationResponse.RESPONSE_UNREACHABLE)
+          {
+            Logging.authorityService.warn("Authority '"+ar.getIdentifyingString()+"' is unreachable for user '"+ar.getUserID()+"'");
             sb.append(UNREACHABLE_VALUE).append(java.net.URLEncoder.encode(ar.getIdentifyingString(),"UTF-8")).append("\n");
           }
           else if (reply.getResponseStatus() == AuthorizationResponse.RESPONSE_USERUNAUTHORIZED)
           {
             if (Logging.authorityService.isDebugEnabled())
-              Logging.authorityService.debug("Authority '"+ar.getIdentifyingString()+"' does not authorize user '"+userID+"'");
+              Logging.authorityService.debug("Authority '"+ar.getIdentifyingString()+"' does not authorize user '"+ar.getUserID()+"'");
             sb.append(UNAUTHORIZED_VALUE).append(java.net.URLEncoder.encode(ar.getIdentifyingString(),"UTF-8")).append("\n");
           }
           else if (reply.getResponseStatus() == AuthorizationResponse.RESPONSE_USERNOTFOUND)
           {
             if (Logging.authorityService.isDebugEnabled())
-              Logging.authorityService.debug("User '"+userID+"' unknown to authority '"+ar.getIdentifyingString()+"'");
+              Logging.authorityService.debug("User '"+ar.getUserID()+"' unknown to authority '"+ar.getIdentifyingString()+"'");
             sb.append(USERNOTFOUND_VALUE).append(java.net.URLEncoder.encode(ar.getIdentifyingString(),"UTF-8")).append("\n");
           }
           else
@@ -232,14 +399,15 @@
               while (j < acl.length)
               {
                 if (Logging.authorityService.isDebugEnabled())
-                  Logging.authorityService.debug("  User '"+userID+"' has Acl = '"+acl[j]+"' from authority '"+ar.getIdentifyingString()+"'");
-                sb.append(TOKEN_PREFIX).append(java.net.URLEncoder.encode(ac.getName(),"UTF-8")).append(":").append(java.net.URLEncoder.encode(acl[j++],"UTF-8")).append("\n");
+                  Logging.authorityService.debug("  User '"+ar.getUserID()+"' has Acl = '"+acl[j]+"' from authority '"+ar.getIdentifyingString()+"'");
+                sb.append(TOKEN_PREFIX).append(java.net.URLEncoder.encode(connectionName,"UTF-8")).append(":").append(java.net.URLEncoder.encode(acl[j++],"UTF-8")).append("\n");
               }
             }
           }
         }
 
-        if (idneeded)
+        // Maintained for backwards compatibility only; no practical use that I can determine here
+        if (idneeded && userID != null)
           sb.append(ID_PREFIX).append(java.net.URLEncoder.encode(userID,"UTF-8")).append("\n");
 
         byte[] responseValue = sb.toString().getBytes("ISO8859-1");
@@ -254,7 +422,20 @@
       }
 
       if (Logging.authorityService.isDebugEnabled())
-        Logging.authorityService.debug("Done with request for '"+userID+"'");
+      {
+        StringBuilder sb2 = new StringBuilder("[");
+        boolean first = true;
+        for (String domain : domainMap.keySet())
+        {
+          if (first)
+            first = false;
+          else
+            sb2.append(",");
+          sb2.append("'").append(domain).append("':'").append(domainMap.get(domain)).append("'");
+        }
+        sb2.append("]");
+        Logging.authorityService.debug("Done with request for domain:user set "+sb2.toString());
+      }
     }
     catch (InterruptedException e)
     {
@@ -272,4 +453,158 @@
     }
   }
 
+  /** This class represents a tuple of (mapper_name, auth_domain).
+  */
+  protected static class MapperDescription
+  {
+    public final String mapperName;
+    public final String authDomain;
+    
+    public MapperDescription(String mapperName, String authDomain)
+    {
+      this.mapperName = mapperName;
+      this.authDomain = authDomain;
+    }
+    
+    public int hashCode()
+    {
+      return mapperName.hashCode() + authDomain.hashCode();
+    }
+    
+    public boolean equals(Object o)
+    {
+      if (!(o instanceof MapperDescription))
+        return false;
+      MapperDescription other = (MapperDescription)o;
+      return this.mapperName.equals(other.mapperName) &&
+        this.authDomain.equals(other.authDomain);
+    }
+  }
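+  // MapperDescription instances are used as HashMap/HashSet keys above, hence the equals()/hashCode() overrides.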
+  
+  /** This thread is responsible for making sure that the constraints for a given mapping connection
+  * are met, and then when they are, firing off a MappingRequest.  One of these threads is spun up
+  * for every IMappingConnection being handled.
+  * NOTE WELL: The number of threads this might require is worrisome.  It is essentially
+  * <number_of_app_server_threads> * <number_of_mappers>.  I will try later to see if I can find
+  * a way of limiting this to sane numbers.
+  */
+  protected static class MappingOrderThread extends Thread
+  {
+    protected final MappingRequest request;
+    protected final MapperDescription prerequisite;
+    protected final Map<MapperDescription,MappingRequest> requests;
+    protected final RequestQueue<MappingRequest> mappingRequestQueue;
+
+    protected Throwable exception = null;
+    
+    public MappingOrderThread(
+      String identifyingString,
+      MappingRequest request,
+      MapperDescription prerequisite,
+      RequestQueue<MappingRequest> mappingRequestQueue,
+      Map<MapperDescription, MappingRequest> requests)
+    {
+      super();
+      this.request = request;
+      this.prerequisite = prerequisite;
+      this.mappingRequestQueue = mappingRequestQueue;
+      this.requests = requests;
+      setName("Constraint matcher for mapper '"+identifyingString+"'");
+      setDaemon(true);
+    }
+    
+    public void run()
+    {
+      try
+      {
+        MappingRequest mappingRequest = requests.get(prerequisite);
+        mappingRequest.waitForComplete();
+        // Constraints are met.  Fire off the request.
+        request.setUserID(mappingRequest.getAnswerResponse());
+        mappingRequestQueue.addRequest(request);
+      }
+      catch (Throwable e)
+      {
+        exception = e;
+      }
+    }
+
+    public void finishUp()
+      throws InterruptedException
+    {
+      join();
+      if (exception != null)
+      {
+        if (exception instanceof Error)
+          throw (Error)exception;
+        else if (exception instanceof RuntimeException)
+          throw (RuntimeException)exception;
+      }
+    }
+    
+  }
+
+  /** This thread is responsible for making sure that the constraints for a given authority connection
+  * are met, and then when they are, firing off an AuthRequest.  One of these threads is spun up
+  * for every IAuthorityConnection being handled.
+  * NOTE WELL: The number of threads this might require is worrisome.  It is essentially
+  * <number_of_app_server_threads> * <number_of_authorities>.  I will try later to see if I can find
+  * a way of limiting this to sane numbers.
+  */
+  protected static class AuthOrderThread extends Thread
+  {
+    protected final AuthRequest request;
+    protected final MapperDescription prerequisite;
+    protected final Map<MapperDescription,MappingRequest> mappingRequests;
+    protected final RequestQueue<AuthRequest> authRequestQueue;
+    
+    protected Throwable exception = null;
+    
+    public AuthOrderThread(
+      String identifyingString,
+      AuthRequest request,
+      MapperDescription prerequisite,
+      RequestQueue<AuthRequest> authRequestQueue,
+      Map<MapperDescription, MappingRequest> mappingRequests)
+    {
+      super();
+      this.request = request;
+      this.prerequisite = prerequisite;
+      this.authRequestQueue = authRequestQueue;
+      this.mappingRequests = mappingRequests;
+      setName("Constraint matcher for authority '"+identifyingString+"'");
+      setDaemon(true);
+    }
+    
+    public void run()
+    {
+      try
+      {
+        MappingRequest mappingRequest = mappingRequests.get(prerequisite);
+        mappingRequest.waitForComplete();
+        // Constraints are met.  Fire off the request.  User may be null if mapper failed!!
+        request.setUserID(mappingRequest.getAnswerResponse());
+        authRequestQueue.addRequest(request);
+      }
+      catch (Throwable e)
+      {
+        exception = e;
+      }
+    }
+
+    public void finishUp()
+      throws InterruptedException
+    {
+      join();
+      if (exception != null)
+      {
+        if (exception instanceof Error)
+          throw (Error)exception;
+        else if (exception instanceof RuntimeException)
+          throw (RuntimeException)exception;
+      }
+    }
+    
+  }
+  
 }
diff --git a/framework/build.xml b/framework/build.xml
index f02776c..262d99c 100644
--- a/framework/build.xml
+++ b/framework/build.xml
@@ -32,6 +32,7 @@
     
     <path id="framework-classpath">
         <fileset dir="../lib">
+            <include name="zookeeper*.jar"/>
             <include name="json*.jar"/>
             <include name="commons-codec*.jar"/>
             <include name="commons-collections*.jar"/>
@@ -57,6 +58,7 @@
             <include name="xercesImpl*.jar"/>
             <include name="xml-apis*.jar"/>
             <include name="velocity*.jar"/>
+            <include name="mail*.jar"/>
         </fileset>
         <fileset dir="../lib">
             <include name="postgresql*.jar"/>
@@ -88,7 +90,7 @@
     
     <target name="compile-core">
         <mkdir dir="build/core/classes"/>
-        <javac srcdir="core/src/main/java" destdir="build/core/classes" target="1.6" source="1.6" debug="true" debuglevel="lines,vars,source">
+        <javac srcdir="core/src/main/java" destdir="build/core/classes" target="1.6" source="1.6" debug="true" encoding="utf-8" debuglevel="lines,vars,source">
             <classpath>
                 <path refid="framework-classpath"/>
             </classpath>
@@ -97,7 +99,7 @@
 
     <target name="compile-ui-core" depends="compile-core">
         <mkdir dir="build/ui-core/classes"/>
-        <javac srcdir="ui-core/src/main/java" destdir="build/ui-core/classes" target="1.6" source="1.6" debug="true" debuglevel="lines,vars,source">
+        <javac srcdir="ui-core/src/main/java" destdir="build/ui-core/classes" target="1.6" source="1.6" debug="true" encoding="utf-8" debuglevel="lines,vars,source">
             <classpath>
                 <path refid="framework-classpath"/>
                 <pathelement location="build/core/classes"/>
@@ -108,7 +110,7 @@
 
     <target name="compile-agents" depends="compile-core">
         <mkdir dir="build/agents/classes"/>
-        <javac srcdir="agents/src/main/java" destdir="build/agents/classes" target="1.6" source="1.6" debug="true" debuglevel="lines,vars,source">
+        <javac srcdir="agents/src/main/java" destdir="build/agents/classes" target="1.6" source="1.6" debug="true" encoding="utf-8" debuglevel="lines,vars,source">
             <classpath>
                 <path refid="framework-classpath"/>
                 <pathelement location="build/core/classes"/>
@@ -118,7 +120,7 @@
 
     <target name="compile-pull-agent" depends="compile-core,compile-agents">
         <mkdir dir="build/pull-agent/classes"/>
-        <javac srcdir="pull-agent/src/main/java" destdir="build/pull-agent/classes" target="1.6" source="1.6" debug="true" debuglevel="lines,vars,source">
+        <javac srcdir="pull-agent/src/main/java" destdir="build/pull-agent/classes" target="1.6" source="1.6" debug="true" encoding="utf-8" debuglevel="lines,vars,source">
             <classpath>
                 <path refid="framework-classpath"/>
                 <pathelement location="build/core/classes"/>
@@ -129,7 +131,7 @@
 
     <target name="compile-jetty-runner" depends="compile-core,compile-agents">
         <mkdir dir="build/jetty-runner/classes"/>
-        <javac srcdir="jetty-runner/src/main/java" destdir="build/jetty-runner/classes" target="1.6" source="1.6" debug="true" debuglevel="lines,vars,source">
+        <javac srcdir="jetty-runner/src/main/java" destdir="build/jetty-runner/classes" target="1.6" source="1.6" debug="true" encoding="utf-8" debuglevel="lines,vars,source">
             <classpath>
                 <path refid="framework-classpath"/>
                 <pathelement location="build/core/classes"/>
@@ -141,7 +143,7 @@
 
     <target name="compile-script-engine" depends="compile-core">
         <mkdir dir="build/script-engine/classes"/>
-        <javac srcdir="script-engine/src/main/java" destdir="build/script-engine/classes" target="1.6" source="1.6" debug="true" debuglevel="lines,vars,source">
+        <javac srcdir="script-engine/src/main/java" destdir="build/script-engine/classes" target="1.6" source="1.6" debug="true" encoding="utf-8" debuglevel="lines,vars,source">
             <classpath>
                 <path refid="framework-classpath"/>
                 <pathelement location="build/core/classes"/>
@@ -151,7 +153,7 @@
 
     <target name="compile-authority-servlet" depends="compile-core,compile-agents,compile-pull-agent">
         <mkdir dir="build/authority-servlet/classes"/>
-        <javac srcdir="authority-servlet/src/main/java" destdir="build/authority-servlet/classes" target="1.6" source="1.6" debug="true" debuglevel="lines,vars,source">
+        <javac srcdir="authority-servlet/src/main/java" destdir="build/authority-servlet/classes" target="1.6" source="1.6" debug="true" encoding="utf-8" debuglevel="lines,vars,source">
             <classpath>
                 <path refid="framework-classpath"/>
                 <pathelement location="build/core/classes"/>
@@ -163,7 +165,7 @@
 
     <target name="compile-authority-service" depends="compile-core,compile-agents,compile-pull-agent,compile-authority-servlet">
         <mkdir dir="build/authority-service/classes"/>
-        <javac srcdir="authority-service/src/main/java" destdir="build/authority-service/classes" target="1.6" source="1.6" debug="true" debuglevel="lines,vars,source">
+        <javac srcdir="authority-service/src/main/java" destdir="build/authority-service/classes" target="1.6" source="1.6" debug="true" encoding="utf-8" debuglevel="lines,vars,source">
             <classpath>
                 <path refid="framework-classpath"/>
                 <pathelement location="build/core/classes"/>
@@ -176,7 +178,7 @@
 
     <target name="compile-api-servlet" depends="compile-core,compile-ui-core,compile-agents,compile-pull-agent">
         <mkdir dir="build/api-servlet/classes"/>
-        <javac srcdir="api-servlet/src/main/java" destdir="build/api-servlet/classes" target="1.6" source="1.6" debug="true" debuglevel="lines,vars,source">
+        <javac srcdir="api-servlet/src/main/java" destdir="build/api-servlet/classes" target="1.6" source="1.6" debug="true" encoding="utf-8" debuglevel="lines,vars,source">
             <classpath>
                 <path refid="framework-classpath"/>
                 <pathelement location="build/core/classes"/>
@@ -189,7 +191,7 @@
 
     <target name="compile-api-service" depends="compile-core,compile-ui-core,compile-agents,compile-pull-agent,compile-api-servlet">
         <mkdir dir="build/api-service/classes"/>
-        <javac srcdir="api-service/src/main/java" destdir="build/api-service/classes" target="1.6" source="1.6" debug="true" debuglevel="lines,vars,source">
+        <javac srcdir="api-service/src/main/java" destdir="build/api-service/classes" target="1.6" source="1.6" debug="true" encoding="utf-8" debuglevel="lines,vars,source">
             <classpath>
                 <path refid="framework-classpath"/>
                 <pathelement location="build/core/classes"/>
@@ -203,7 +205,7 @@
 
     <target name="compile-combined-service" depends="compile-core,compile-ui-core,compile-agents,compile-pull-agent,compile-api-servlet,compile-authority-servlet">
         <mkdir dir="build/combined-service/classes"/>
-        <javac srcdir="combined-service/src/main/java" destdir="build/combined-service/classes" target="1.6" source="1.6" debug="true" debuglevel="lines,vars,source">
+        <javac srcdir="combined-service/src/main/java" destdir="build/combined-service/classes" target="1.6" source="1.6" encoding="utf-8" debug="true" debuglevel="lines,vars,source">
             <classpath>
                 <path refid="framework-classpath"/>
                 <pathelement location="build/core/classes"/>
@@ -218,7 +220,7 @@
 
     <target name="compile-crawler-ui" depends="compile-core,compile-ui-core,compile-agents,compile-pull-agent">
         <mkdir dir="build/crawler-ui/classes"/>
-        <javac srcdir="crawler-ui/src/main/java" destdir="build/crawler-ui/classes" target="1.6" source="1.6" debug="true" debuglevel="lines,vars,source">
+        <javac srcdir="crawler-ui/src/main/java" destdir="build/crawler-ui/classes" target="1.6" source="1.6" debug="true" encoding="utf-8" debuglevel="lines,vars,source">
             <classpath>
                 <path refid="framework-classpath"/>
                 <pathelement location="build/core/classes"/>
@@ -244,7 +246,7 @@
         <jasper2 validateXml="false" uriroot="crawler-ui/src/main/webapp" webXmlFragment="build/crawler-ui/web-generated.xml" outputDir="build/crawler-ui/java" /> 
         <!-- Compile java classes -->
         <mkdir dir="build/crawler-ui/classes"/>
-        <javac srcdir="build/crawler-ui/java" destdir="build/crawler-ui/classes" target="1.6" source="1.6" debug="true" debuglevel="lines,vars,source">
+        <javac srcdir="build/crawler-ui/java" destdir="build/crawler-ui/classes" target="1.6" source="1.6" debug="true" encoding="utf-8" debuglevel="lines,vars,source">
             <classpath id="classpath">
                 <pathelement location="${java.home}/../lib/tools.jar"/>
                 <path refid="framework-classpath"/>
@@ -308,6 +310,7 @@
         <mkdir dir="build/webapp/authority-service/WEB-INF/lib"/>
         <copy todir="build/webapp/authority-service/WEB-INF/lib">
             <fileset dir="../lib">
+                <include name="zookeeper*.jar"/>
                 <include name="json*.jar"/>
                 <include name="commons-codec*.jar"/>
                 <include name="commons-collections*.jar"/>
@@ -324,6 +327,7 @@
                 <include name="xercesImpl*.jar"/>
                 <include name="xml-apis*.jar"/>
                 <include name="slf4j*.jar"/>
+                <include name="mail*.jar"/>
             </fileset>
             <fileset dir="../lib">
                 <include name="postgresql*.jar"/>
@@ -346,6 +350,7 @@
         <mkdir dir="build/webapp/authority-service-proprietary/WEB-INF/lib"/>
         <copy todir="build/webapp/authority-service-proprietary/WEB-INF/lib">
             <fileset dir="../lib">
+                <include name="zookeeper*.jar"/>
                 <include name="json*.jar"/>
                 <include name="commons-codec*.jar"/>
                 <include name="commons-collections*.jar"/>
@@ -362,6 +367,7 @@
                 <include name="xercesImpl*.jar"/>
                 <include name="xml-apis*.jar"/>
                 <include name="slf4j*.jar"/>
+                <include name="mail*.jar"/>
             </fileset>
             <fileset dir="../lib">
                 <include name="postgresql*.jar"/>
@@ -389,6 +395,7 @@
         <mkdir dir="build/webapp/api-service/WEB-INF/lib"/>
         <copy todir="build/webapp/api-service/WEB-INF/lib">
             <fileset dir="../lib">
+                <include name="zookeeper*.jar"/>
                 <include name="json*.jar"/>
                 <include name="commons-codec*.jar"/>
                 <include name="commons-collections*.jar"/>
@@ -405,6 +412,7 @@
                 <include name="xercesImpl*.jar"/>
                 <include name="xml-apis*.jar"/>
                 <include name="slf4j*.jar"/>
+                <include name="mail*.jar"/>
             </fileset>
             <fileset dir="../lib">
                 <include name="postgresql*.jar"/>
@@ -428,6 +436,7 @@
         <mkdir dir="build/webapp/api-service-proprietary/WEB-INF/lib"/>
         <copy todir="build/webapp/api-service-proprietary/WEB-INF/lib">
             <fileset dir="../lib">
+                <include name="zookeeper*.jar"/>
                 <include name="json*.jar"/>
                 <include name="commons-codec*.jar"/>
                 <include name="commons-collections*.jar"/>
@@ -444,6 +453,7 @@
                 <include name="xercesImpl*.jar"/>
                 <include name="xml-apis*.jar"/>
                 <include name="slf4j*.jar"/>
+                <include name="mail*.jar"/>
             </fileset>
             <fileset dir="../lib">
                 <include name="postgresql*.jar"/>
@@ -473,6 +483,7 @@
         <copy todir="build/webapp/crawler-ui/WEB-INF/lib">
             <fileset dir="../lib">
                 <include name="jstl*.jar"/>
+                <include name="zookeeper*.jar"/>
                 <include name="json*.jar"/>
                 <include name="commons-codec*.jar"/>
                 <include name="commons-collections*.jar"/>
@@ -490,6 +501,7 @@
                 <include name="xml-apis*.jar"/>
                 <include name="velocity*.jar"/>
                 <include name="slf4j*.jar"/>
+                <include name="mail*.jar"/>
             </fileset>
             <fileset dir="../lib">
                 <include name="postgresql*.jar"/>
@@ -520,6 +532,7 @@
         <copy todir="build/webapp/crawler-ui-proprietary/WEB-INF/lib">
             <fileset dir="../lib">
                 <include name="jstl*.jar"/>
+                <include name="zookeeper*.jar"/>
                 <include name="json*.jar"/>
                 <include name="commons-codec*.jar"/>
                 <include name="commons-collections*.jar"/>
@@ -537,6 +550,7 @@
                 <include name="xml-apis*.jar"/>
                 <include name="velocity*.jar"/>
                 <include name="slf4j*.jar"/>
+                <include name="mail*.jar"/>
             </fileset>
             <fileset dir="../lib">
                 <include name="postgresql*.jar"/>
@@ -574,6 +588,7 @@
         <copy todir="build/webapp/combined-service/WEB-INF/lib">
             <fileset dir="../lib">
                 <include name="jstl*.jar"/>
+                <include name="zookeeper*.jar"/>
                 <include name="json*.jar"/>
                 <include name="commons-codec*.jar"/>
                 <include name="commons-collections*.jar"/>
@@ -591,6 +606,7 @@
                 <include name="xml-apis*.jar"/>
                 <include name="velocity*.jar"/>
                 <include name="slf4j*.jar"/>
+                <include name="mail*.jar"/>
             </fileset>
             <fileset dir="../lib">
                 <include name="postgresql*.jar"/>
@@ -623,6 +639,7 @@
         <copy todir="build/webapp/combined-service-proprietary/WEB-INF/lib">
             <fileset dir="../lib">
                 <include name="jstl*.jar"/>
+                <include name="zookeeper*.jar"/>
                 <include name="json*.jar"/>
                 <include name="commons-codec*.jar"/>
                 <include name="commons-collections*.jar"/>
@@ -640,6 +657,7 @@
                 <include name="xml-apis*.jar"/>
                 <include name="velocity*.jar"/>
                 <include name="slf4j*.jar"/>
+                <include name="mail*.jar"/>
             </fileset>
             <fileset dir="../lib">
                 <include name="postgresql*.jar"/>
@@ -736,10 +754,100 @@
         <copy file="example-common/README.txt" todir="dist/connector-lib-proprietary"/>
     </target>
     
-    <target name="multi-processes" depends="jar-core,jar-agents,jar-pull-agent">
-        <mkdir dir="dist/multiprocess-example/processes/lib"/>
-        <copy todir="dist/multiprocess-example/processes/lib">
+    <target name="multi-processes-file" depends="jar-core,jar-agents,jar-pull-agent">
+        <mkdir dir="dist/multiprocess-file-example/processes/lib"/>
+        <copy todir="dist/multiprocess-file-example/processes/lib">
             <fileset dir="../lib">
+                <include name="zookeeper*.jar"/>
+                <include name="json*.jar"/>
+                <include name="commons-codec*.jar"/>
+                <include name="commons-collections*.jar"/>
+                <include name="commons-el*.jar"/>
+                <include name="commons-fileupload*.jar"/>
+                <include name="httpcore*.jar"/>
+                <include name="httpclient*.jar"/>
+                <include name="commons-io*.jar"/>
+                <include name="commons-lang*.jar"/>
+                <include name="commons-logging*.jar"/>
+                <include name="log4j*.jar"/>
+                <include name="serializer*.jar"/>
+                <include name="servlet-api*.jar"/>
+                <include name="xalan*.jar"/>
+                <include name="xercesImpl*.jar"/>
+                <include name="xml-apis*.jar"/>
+                <include name="velocity*.jar"/>
+                <include name="slf4j*.jar"/>
+                <include name="mail*.jar"/>
+            </fileset>
+            <fileset dir="../lib">
+                <include name="postgresql*.jar"/>
+                <include name="derby*.jar"/>
+                <include name="hsqldb*.jar"/>
+            </fileset>
+            <fileset dir="build/jar">
+                <include name="mcf-core.jar"/>
+                <include name="mcf-agents.jar"/>
+                <include name="mcf-pull-agent.jar"/>
+            </fileset>
+        </copy>
+        <copy todir="dist/multiprocess-file-example/processes">
+            <fileset dir="scripts"/>
+        </copy>
+        <mkdir dir="dist/multiprocess-file-example/syncharea"/>
+    </target>
+
+    <target name="multi-processes-file-proprietary" depends="jar-core,jar-agents,jar-pull-agent">
+        <mkdir dir="dist/multiprocess-file-example-proprietary/processes/lib"/>
+        <copy todir="dist/multiprocess-file-example-proprietary/processes/lib">
+            <fileset dir="../lib">
+                <include name="zookeeper*.jar"/>
+                <include name="json*.jar"/>
+                <include name="commons-codec*.jar"/>
+                <include name="commons-collections*.jar"/>
+                <include name="commons-el*.jar"/>
+                <include name="commons-fileupload*.jar"/>
+                <include name="httpcore*.jar"/>
+                <include name="httpclient*.jar"/>
+                <include name="commons-io*.jar"/>
+                <include name="commons-lang*.jar"/>
+                <include name="commons-logging*.jar"/>
+                <include name="log4j*.jar"/>
+                <include name="serializer*.jar"/>
+                <include name="servlet-api*.jar"/>
+                <include name="xalan*.jar"/>
+                <include name="xercesImpl*.jar"/>
+                <include name="xml-apis*.jar"/>
+                <include name="velocity*.jar"/>
+                <include name="slf4j*.jar"/>
+                <include name="mail*.jar"/>
+            </fileset>
+            <fileset dir="../lib">
+                <include name="postgresql*.jar"/>
+                <include name="derby*.jar"/>
+                <include name="hsqldb*.jar"/>
+            </fileset>
+            <fileset dir="../lib-proprietary">
+                <include name="mysql*.jar"/>
+                <include name="ojdbc*.jar"/>
+                <include name="jtds*.jar"/>
+            </fileset>
+            <fileset dir="build/jar">
+                <include name="mcf-core.jar"/>
+                <include name="mcf-agents.jar"/>
+                <include name="mcf-pull-agent.jar"/>
+            </fileset>
+        </copy>
+        <copy todir="dist/multiprocess-file-example-proprietary/processes">
+            <fileset dir="scripts"/>
+        </copy>
+        <mkdir dir="dist/multiprocess-file-example-proprietary/syncharea"/>
+    </target>
+
+    <target name="multi-processes-zk" depends="jar-core,jar-agents,jar-pull-agent">
+        <mkdir dir="dist/multiprocess-zk-example/processes/lib"/>
+        <copy todir="dist/multiprocess-zk-example/processes/lib">
+            <fileset dir="../lib">
+                <include name="zookeeper*.jar"/>
                 <include name="json*.jar"/>
                 <include name="commons-codec*.jar"/>
                 <include name="commons-collections*.jar"/>
@@ -770,16 +878,17 @@
                 <include name="mcf-pull-agent.jar"/>
             </fileset>
         </copy>
-        <copy todir="dist/multiprocess-example/processes">
+        <copy todir="dist/multiprocess-zk-example/processes">
             <fileset dir="scripts"/>
         </copy>
-        <mkdir dir="dist/multiprocess-example/syncharea"/>
+        <mkdir dir="dist/multiprocess-zk-example/zookeeper"/>
     </target>
 
-    <target name="multi-processes-proprietary" depends="jar-core,jar-agents,jar-pull-agent">
-        <mkdir dir="dist/multiprocess-example-proprietary/processes/lib"/>
-        <copy todir="dist/multiprocess-example-proprietary/processes/lib">
+    <target name="multi-processes-zk-proprietary" depends="jar-core,jar-agents,jar-pull-agent">
+        <mkdir dir="dist/multiprocess-zk-example-proprietary/processes/lib"/>
+        <copy todir="dist/multiprocess-zk-example-proprietary/processes/lib">
             <fileset dir="../lib">
+                <include name="zookeeper*.jar"/>
                 <include name="json*.jar"/>
                 <include name="commons-codec*.jar"/>
                 <include name="commons-collections*.jar"/>
@@ -815,27 +924,32 @@
                 <include name="mcf-pull-agent.jar"/>
             </fileset>
         </copy>
-        <copy todir="dist/multiprocess-example-proprietary/processes">
+        <copy todir="dist/multiprocess-zk-example-proprietary/processes">
             <fileset dir="scripts"/>
         </copy>
-        <mkdir dir="dist/multiprocess-example-proprietary/syncharea"/>
+        <mkdir dir="dist/multiprocess-zk-example-proprietary/zookeeper"/>
     </target>
 
-    <target name="multi-process-example" depends="jar-core,jar-ui-core,jar-agents,jar-pull-agent,jar-jetty-runner,multi-processes">
-        <mkdir dir="dist/multiprocess-example"/>
-        <copy todir="dist/multiprocess-example">
+    <target name="multi-process-file-example" depends="jar-core,jar-ui-core,jar-agents,jar-pull-agent,jar-jetty-runner,multi-processes-file">
+        <mkdir dir="dist/multiprocess-file-example"/>
+        <copy todir="dist/multiprocess-file-example">
             <fileset dir="example-multiprocess-common">
                 <include name="logging.ini"/>
                 <include name="*.sh"/>
                 <include name="*.bat"/>
             </fileset>
-            <fileset dir="example-multiprocess">
+            <fileset dir="example-multiprocess-file-common">
+                <include name="*.sh"/>
+                <include name="*.bat"/>
+            </fileset>
+            <fileset dir="example-multiprocess-file">
                 <include name="properties.xml"/>
             </fileset>
         </copy>
-        <mkdir dir="dist/multiprocess-example/lib"/>
-        <copy todir="dist/multiprocess-example/lib">
+        <mkdir dir="dist/multiprocess-file-example/lib"/>
+        <copy todir="dist/multiprocess-file-example/lib">
             <fileset dir="../lib">
+                <include name="zookeeper*.jar"/>
                 <include name="json*.jar"/>
                 <include name="commons-logging*.jar"/>
                 <include name="log4j*.jar"/>
@@ -857,25 +971,30 @@
                 <include name="jsp-api*.jar"/>
             </fileset>
         </copy>
-        <mkdir dir="dist/multiprocess-example/logs"/>
-        <chmod dir="dist/multiprocess-example" perm="a+x" includes="**/*.sh"/>
+        <mkdir dir="dist/multiprocess-file-example/logs"/>
+        <chmod dir="dist/multiprocess-file-example" perm="a+x" includes="**/*.sh"/>
     </target>
   
-      <target name="multi-process-example-proprietary" depends="jar-core,jar-ui-core,jar-agents,jar-pull-agent,jar-jetty-runner,multi-processes-proprietary">
-        <mkdir dir="dist/multiprocess-example-proprietary"/>
-        <copy todir="dist/multiprocess-example-proprietary">
+      <target name="multi-process-file-example-proprietary" depends="jar-core,jar-ui-core,jar-agents,jar-pull-agent,jar-jetty-runner,multi-processes-file-proprietary">
+        <mkdir dir="dist/multiprocess-file-example-proprietary"/>
+        <copy todir="dist/multiprocess-file-example-proprietary">
             <fileset dir="example-multiprocess-common">
                 <include name="logging.ini"/>
                 <include name="*.sh"/>
                 <include name="*.bat"/>
             </fileset>
-            <fileset dir="example-multiprocess-proprietary">
+            <fileset dir="example-multiprocess-file-common">
+                <include name="*.sh"/>
+                <include name="*.bat"/>
+            </fileset>
+            <fileset dir="example-multiprocess-file-proprietary">
                 <include name="properties.xml"/>
             </fileset>
         </copy>
-        <mkdir dir="dist/multiprocess-example-proprietary/lib"/>
-        <copy todir="dist/multiprocess-example-proprietary/lib">
+        <mkdir dir="dist/multiprocess-file-example-proprietary/lib"/>
+        <copy todir="dist/multiprocess-file-example-proprietary/lib">
             <fileset dir="../lib">
+                <include name="zookeeper*.jar"/>
                 <include name="json*.jar"/>
                 <include name="commons-logging*.jar"/>
                 <include name="log4j*.jar"/>
@@ -897,8 +1016,102 @@
                 <include name="jsp-api*.jar"/>
             </fileset>
         </copy>
-        <mkdir dir="dist/multiprocess-example-proprietary/logs"/>
-        <chmod dir="dist/multiprocess-example-proprietary" perm="a+x" includes="**/*.sh"/>
+        <mkdir dir="dist/multiprocess-file-example-proprietary/logs"/>
+        <chmod dir="dist/multiprocess-file-example-proprietary" perm="a+x" includes="**/*.sh"/>
+    </target>
+
+    <target name="multi-process-zk-example" depends="jar-core,jar-ui-core,jar-agents,jar-pull-agent,jar-jetty-runner,multi-processes-zk">
+        <mkdir dir="dist/multiprocess-zk-example"/>
+        <copy todir="dist/multiprocess-zk-example">
+            <fileset dir="example-multiprocess-common">
+                <include name="logging.ini"/>
+                <include name="*.sh"/>
+                <include name="*.bat"/>
+            </fileset>
+            <fileset dir="example-multiprocess-zk-common">
+                <include name="*.sh"/>
+                <include name="*.bat"/>
+                <include name="*.cfg"/>
+                <include name="*.xml"/>
+            </fileset>
+            <fileset dir="example-multiprocess-zk">
+                <include name="properties.xml"/>
+            </fileset>
+        </copy>
+        <mkdir dir="dist/multiprocess-zk-example/lib"/>
+        <copy todir="dist/multiprocess-zk-example/lib">
+            <fileset dir="../lib">
+                <include name="zookeeper*.jar"/>
+                <include name="json*.jar"/>
+                <include name="commons-logging*.jar"/>
+                <include name="log4j*.jar"/>
+            </fileset>
+            <fileset dir="build/jar">
+                <include name="mcf-core.jar"/>
+                <include name="mcf-ui-core.jar"/>
+                <include name="mcf-agents.jar"/>
+                <include name="mcf-pull-agent.jar"/>
+                <include name="mcf-jetty-runner.jar"/>
+            </fileset>
+            <fileset dir="../lib">
+                <include name="jetty*.jar"/>
+                <include name="slf4j*.jar"/>
+                <include name="servlet-api*.jar"/>
+                <include name="ecj*.jar"/>
+                <include name="jasper*.jar"/>
+                <include name="juli*.jar"/>
+                <include name="jsp-api*.jar"/>
+            </fileset>
+        </copy>
+        <mkdir dir="dist/multiprocess-zk-example/logs"/>
+        <chmod dir="dist/multiprocess-zk-example" perm="a+x" includes="**/*.sh"/>
+    </target>
+  
+      <target name="multi-process-zk-example-proprietary" depends="jar-core,jar-ui-core,jar-agents,jar-pull-agent,jar-jetty-runner,multi-processes-zk-proprietary">
+        <mkdir dir="dist/multiprocess-zk-example-proprietary"/>
+        <copy todir="dist/multiprocess-zk-example-proprietary">
+            <fileset dir="example-multiprocess-common">
+                <include name="logging.ini"/>
+                <include name="*.sh"/>
+                <include name="*.bat"/>
+            </fileset>
+            <fileset dir="example-multiprocess-zk-common">
+                <include name="*.sh"/>
+                <include name="*.bat"/>
+                <include name="*.cfg"/>
+                <include name="*.xml"/>
+            </fileset>
+            <fileset dir="example-multiprocess-zk-proprietary">
+                <include name="properties.xml"/>
+            </fileset>
+        </copy>
+        <mkdir dir="dist/multiprocess-zk-example-proprietary/lib"/>
+        <copy todir="dist/multiprocess-zk-example-proprietary/lib">
+            <fileset dir="../lib">
+                <include name="zookeeper*.jar"/>
+                <include name="json*.jar"/>
+                <include name="commons-logging*.jar"/>
+                <include name="log4j*.jar"/>
+            </fileset>
+            <fileset dir="build/jar">
+                <include name="mcf-core.jar"/>
+                <include name="mcf-ui-core.jar"/>
+                <include name="mcf-agents.jar"/>
+                <include name="mcf-pull-agent.jar"/>
+                <include name="mcf-jetty-runner.jar"/>
+            </fileset>
+            <fileset dir="../lib">
+                <include name="jetty*.jar"/>
+                <include name="slf4j*.jar"/>
+                <include name="servlet-api*.jar"/>
+                <include name="ecj*.jar"/>
+                <include name="jasper*.jar"/>
+                <include name="juli*.jar"/>
+                <include name="jsp-api*.jar"/>
+            </fileset>
+        </copy>
+        <mkdir dir="dist/multiprocess-zk-example-proprietary/logs"/>
+        <chmod dir="dist/multiprocess-zk-example-proprietary" perm="a+x" includes="**/*.sh"/>
     </target>
 
     <target name="script-engine" depends="jar-script-engine,jar-core">
@@ -932,6 +1145,7 @@
         <mkdir dir="dist/example/lib"/>
         <copy todir="dist/example/lib">
             <fileset dir="../lib">
+                <include name="zookeeper*.jar"/>
                 <include name="json*.jar"/>
                 <include name="commons-codec*.jar"/>
                 <include name="commons-collections*.jar"/>
@@ -956,6 +1170,7 @@
                 <include name="xercesImpl*.jar"/>
                 <include name="xml-apis*.jar"/>
                 <include name="velocity*.jar"/>
+                <include name="mail*.jar"/>
             </fileset>
             <fileset dir="../lib">
                 <include name="postgresql*.jar"/>
@@ -1044,7 +1259,8 @@
         <property name="manifest-cp-59" value="${manifest-cp-58} lib/slf4j-simple.jar"/>
         <property name="manifest-cp-60" value="${manifest-cp-59} lib/httpcore.jar"/>
         <property name="manifest-cp-61" value="${manifest-cp-60} lib/httpclient.jar"/>
-        <property name="manifest-cp" value="${manifest-cp-61}"/>
+        <property name="manifest-cp-62" value="${manifest-cp-61} lib/mail.jar"/>
+        <property name="manifest-cp" value="${manifest-cp-62}"/>
         <mkdir dir="build/example"/>
         <manifest file="build/example/manifest">
             <attribute name="Main-Class" value="org.apache.manifoldcf.jettyrunner.ManifoldCFJettyRunner"/>
@@ -1059,6 +1275,7 @@
         <mkdir dir="dist/example-proprietary/lib"/>
         <copy todir="dist/example-proprietary/lib">
             <fileset dir="../lib">
+                <include name="zookeeper*.jar"/>
                 <include name="json*.jar"/>
                 <include name="commons-codec*.jar"/>
                 <include name="commons-collections*.jar"/>
@@ -1083,6 +1300,7 @@
                 <include name="xercesImpl*.jar"/>
                 <include name="xml-apis*.jar"/>
                 <include name="velocity*.jar"/>
+                <include name="mail*.jar"/>
             </fileset>
             <fileset dir="../lib">
                 <include name="postgresql*.jar"/>
@@ -1179,7 +1397,8 @@
         <property name="manifest-cp-proprietary-61" value="${manifest-cp-proprietary-60} lib/slf4j-simple.jar"/>
         <property name="manifest-cp-proprietary-62" value="${manifest-cp-proprietary-61} lib/httpcore.jar"/>
         <property name="manifest-cp-proprietary-63" value="${manifest-cp-proprietary-62} lib/httpclient.jar"/>
-        <property name="manifest-cp-proprietary" value="${manifest-cp-proprietary-63}"/>
+        <property name="manifest-cp-proprietary-64" value="${manifest-cp-proprietary-63} lib/mail.jar"/>
+        <property name="manifest-cp-proprietary" value="${manifest-cp-proprietary-64}"/>
         <mkdir dir="build/example-proprietary"/>
         <manifest file="build/example-proprietary/manifest">
             <attribute name="Main-Class" value="org.apache.manifoldcf.jettyrunner.ManifoldCFJettyRunner"/>
@@ -1191,7 +1410,7 @@
 
     <target name="compile-core-tests" depends="compile-core">
         <mkdir dir="build/core-tests/classes"/>
-        <javac srcdir="core/src/test/java" destdir="build/core-tests/classes" target="1.6" source="1.6" debug="true" debuglevel="lines,vars,source">
+        <javac srcdir="core/src/test/java" destdir="build/core-tests/classes" target="1.6" source="1.6" debug="true" encoding="utf-8" debuglevel="lines,vars,source">
             <classpath>
                 <path refid="framework-classpath"/>
                 <pathelement location="build/core/classes"/>
@@ -1201,7 +1420,7 @@
 
     <target name="compile-agents-tests" depends="compile-core-tests,compile-agents">
         <mkdir dir="build/agents-tests/classes"/>
-        <javac srcdir="agents/src/test/java" destdir="build/agents-tests/classes" target="1.6" source="1.6" debug="true" debuglevel="lines,vars,source">
+        <javac srcdir="agents/src/test/java" destdir="build/agents-tests/classes" target="1.6" source="1.6" debug="true" encoding="utf-8" debuglevel="lines,vars,source">
             <classpath>
                 <path refid="framework-classpath"/>
                 <pathelement location="build/core/classes"/>
@@ -1213,7 +1432,7 @@
 
     <target name="compile-pull-agent-tests" depends="compile-agents-tests,compile-pull-agent">
         <mkdir dir="build/pull-agent-tests/classes"/>
-        <javac srcdir="pull-agent/src/test/java" destdir="build/pull-agent-tests/classes" target="1.6" source="1.6" debug="true" debuglevel="lines,vars,source">
+        <javac srcdir="pull-agent/src/test/java" destdir="build/pull-agent-tests/classes" target="1.6" source="1.6" debug="true" encoding="utf-8" debuglevel="lines,vars,source">
             <classpath>
                 <path refid="framework-classpath"/>
                 <pathelement location="build/core/classes"/>
@@ -1227,7 +1446,7 @@
 
     <target name="compile-script-engine-tests" depends="compile-core,compile-script-engine">
         <mkdir dir="build/script-engine-tests/classes"/>
-        <javac srcdir="script-engine/src/test/java" destdir="build/script-engine-tests/classes" target="1.6" source="1.6" debug="true" debuglevel="lines,vars,source">
+        <javac srcdir="script-engine/src/test/java" destdir="build/script-engine-tests/classes" target="1.6" source="1.6" debug="true" encoding="utf-8" debuglevel="lines,vars,source">
             <classpath>
                 <path refid="framework-classpath"/>
                 <pathelement location="build/core/classes"/>
@@ -1276,7 +1495,29 @@
             <formatter type="brief" usefile="false"/>
 
             <test name="org.apache.manifoldcf.core.common.DateTest" todir="test-output"/>
-            
+            <test name="org.apache.manifoldcf.core.fuzzyml.TestFuzzyML" todir="test-output"/>
+            <test name="org.apache.manifoldcf.core.lockmanager.TestZooKeeperLocks" todir="test-output"/>
+            <test name="org.apache.manifoldcf.core.throttler.TestThrottler" todir="test-output"/>
+
+        </junit>
+    </target>
+
+    <target name="run-pull-agent-tests" depends="compile-pull-agent,compile-pull-agent-tests">
+        <mkdir dir="test-output"/>
+        <junit fork="true" maxmemory="128m" dir="test-output" outputtoformatters="true" showoutput="true" haltonfailure="true">
+            <classpath>
+                <path refid="framework-classpath"/>
+                <pathelement location="build/core/classes"/>
+                <pathelement location="build/core-tests/classes"/>
+                <pathelement location="build/agents/classes"/>
+                <pathelement location="build/agents-tests/classes"/>
+                <pathelement location="build/pull-agent/classes"/>
+                <pathelement location="build/pull-agent-tests/classes"/>
+            </classpath>
+            <formatter type="brief" usefile="false"/>
+
+            <test name="org.apache.manifoldcf.crawler.tests.SchedulerHSQLDBTest" todir="test-output"/>
+
         </junit>
     </target>
 
@@ -1313,7 +1554,7 @@
         </junit>
     </target>
 
-    <target name="run-tests" depends="compile-tests,run-core-tests,run-script-engine-tests"/>
+    <target name="run-tests" depends="compile-tests,run-core-tests,run-pull-agent-tests,run-script-engine-tests"/>
 
     <target name="run-tests-derby" depends="compile-tests">
         <mkdir dir="test-derby-output"/>
@@ -1440,7 +1681,7 @@
         </java>
     </target>
     
-    <target name="build" depends="multi-process-example,multi-process-example-proprietary,single-process-example,single-process-example-proprietary,example-common,script-engine"/>
+    <target name="build" depends="multi-process-zk-example,multi-process-zk-example-proprietary,multi-process-file-example,multi-process-file-example-proprietary,single-process-example,single-process-example-proprietary,example-common,script-engine"/>
     
     <target name="all" depends="build,doc,build-tests,run-tests,run-tests-derby,run-tests-HSQLDB,run-tests-HSQLDBext"/>
     
diff --git a/framework/combined-service/pom.xml b/framework/combined-service/pom.xml
index 2c894e8..9ec8f38 100644
--- a/framework/combined-service/pom.xml
+++ b/framework/combined-service/pom.xml
@@ -20,7 +20,7 @@
  <parent>
    <groupId>org.apache.manifoldcf</groupId>
    <artifactId>mcf-framework</artifactId>
-    <version>1.2-SNAPSHOT</version>
+    <version>1.5-SNAPSHOT</version>
  </parent>
  <modelVersion>4.0.0</modelVersion>
 
@@ -110,7 +110,7 @@
    <dependency>
      <groupId>org.apache.httpcomponents</groupId>
      <artifactId>httpclient</artifactId>
-      <version>${httpcomponent.version}</version>
+      <version>${httpcomponent.httpclient.version}</version>
      <scope>runtime</scope>
    </dependency>
    <dependency>
@@ -187,35 +187,36 @@
  </dependencies>
 
  <build>
-      <plugins>
-        <plugin>
-          <groupId>org.apache.maven.plugins</groupId>
-          <artifactId>maven-war-plugin</artifactId>
-          <configuration>
-            <overlays>
-              <overlay>
-                <groupId>org.apache.manifoldcf</groupId>
-                <artifactId>mcf-crawler-ui</artifactId>
-                <type>war</type>
-                <includes>
-                  <include>*.jsp</include>
-                  <include>*.css</include>
-                  <include>*.png</include>
-                </includes>
-                <targetPath>/</targetPath>
-              </overlay>
-              <overlay>
-                <groupId>org.apache.manifoldcf</groupId>
-                <artifactId>mcf-crawler-ui</artifactId>
-                <type>war</type>
-                <includes>
-                  <include>WEB-INF/jsp/*</include>
-                </includes>
-                <targetPath>/</targetPath>
-              </overlay>
-            </overlays>
-          </configuration>
-        </plugin>
-      </plugins>
-    </build>
+    <plugins>
+      <plugin>
+        <groupId>org.apache.maven.plugins</groupId>
+        <artifactId>maven-war-plugin</artifactId>
+        <version>2.3</version>
+        <configuration>
+          <overlays>
+            <overlay>
+              <groupId>org.apache.manifoldcf</groupId>
+              <artifactId>mcf-crawler-ui</artifactId>
+              <type>war</type>
+              <includes>
+                <include>*.jsp</include>
+                <include>*.css</include>
+                <include>*.png</include>
+              </includes>
+              <targetPath>/</targetPath>
+            </overlay>
+            <overlay>
+              <groupId>org.apache.manifoldcf</groupId>
+              <artifactId>mcf-crawler-ui</artifactId>
+              <type>war</type>
+              <includes>
+                <include>WEB-INF/jsp/*</include>
+              </includes>
+              <targetPath>/</targetPath>
+            </overlay>
+          </overlays>
+        </configuration>
+      </plugin>
+    </plugins>
+  </build>
 </project>
\ No newline at end of file
diff --git a/framework/combined-service/src/main/java/org/apache/manifoldcf/combinedservice/IdleCleanupThread.java b/framework/combined-service/src/main/java/org/apache/manifoldcf/combinedservice/IdleCleanupThread.java
new file mode 100644
index 0000000..8e1d50d
--- /dev/null
+++ b/framework/combined-service/src/main/java/org/apache/manifoldcf/combinedservice/IdleCleanupThread.java
@@ -0,0 +1,141 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.combinedservice;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+import org.apache.manifoldcf.crawler.interfaces.*;
+import org.apache.manifoldcf.authorities.interfaces.*;
+import org.apache.manifoldcf.core.system.Logging;
+import org.apache.manifoldcf.core.system.ManifoldCF;
+import java.util.*;
+import java.lang.reflect.*;
+
+/** This thread periodically calls the cleanup method in all connected repository connectors.  The ostensible purpose
+* is to allow the connectors to shut down idle connections etc.
+*/
+public class IdleCleanupThread extends Thread
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  /** Constructor.
+  */
+  public IdleCleanupThread()
+    throws ManifoldCFException
+  {
+    super();
+    setName("Idle cleanup thread");
+    setDaemon(true);
+  }
+
+  public void run()
+  {
+    Logging.root.debug("Start up idle cleanup thread");
+    try
+    {
+      // Create a thread context object.
+      IThreadContext threadContext = ThreadContextFactory.make();
+      // Get the cache handle.
+      ICacheManager cacheManager = CacheManagerFactory.make(threadContext);
+      
+      IRepositoryConnectorPool repositoryConnectorPool = RepositoryConnectorPoolFactory.make(threadContext);
+      IOutputConnectorPool outputConnectorPool = OutputConnectorPoolFactory.make(threadContext);
+      IAuthorityConnectorPool authorityConnectorPool = AuthorityConnectorPoolFactory.make(threadContext);
+      IMappingConnectorPool mappingConnectorPool = MappingConnectorPoolFactory.make(threadContext);
+      
+      IThrottleGroups throttleGroups = ThrottleGroupsFactory.make(threadContext);
+
+      // Loop
+      while (true)
+      {
+        // Do another try/catch around everything in the loop
+        try
+        {
+          // Do the cleanup
+          repositoryConnectorPool.pollAllConnectors();
+          outputConnectorPool.pollAllConnectors();
+          authorityConnectorPool.pollAllConnectors();
+          mappingConnectorPool.pollAllConnectors();
+          
+          throttleGroups.poll();
+          
+          cacheManager.expireObjects(System.currentTimeMillis());
+          
+          // Sleep for the retry interval.
+          ManifoldCF.sleep(5000L);
+        }
+        catch (ManifoldCFException e)
+        {
+          if (e.getErrorCode() == ManifoldCFException.INTERRUPTED)
+            break;
+
+          if (e.getErrorCode() == ManifoldCFException.DATABASE_CONNECTION_ERROR)
+          {
+            Logging.root.error("Idle cleanup thread aborting and restarting due to database connection reset: "+e.getMessage(),e);
+            try
+            {
+              // Give the database a chance to catch up/wake up
+              ManifoldCF.sleep(10000L);
+            }
+            catch (InterruptedException se)
+            {
+              break;
+            }
+            continue;
+          }
+
+          // Log it, but keep the thread alive
+          Logging.root.error("Exception tossed: "+e.getMessage(),e);
+
+          if (e.getErrorCode() == ManifoldCFException.SETUP_ERROR)
+          {
+            // Shut the whole system down!
+            System.exit(1);
+          }
+
+        }
+        catch (InterruptedException e)
+        {
+          // We're supposed to quit
+          break;
+        }
+        catch (OutOfMemoryError e)
+        {
+          System.err.println("Combined service ran out of memory - shutting down");
+          e.printStackTrace(System.err);
+          System.exit(-200);
+        }
+        catch (Throwable e)
+        {
+          // A more severe error - but stay alive
+          Logging.root.fatal("Error tossed: "+e.getMessage(),e);
+        }
+      }
+    }
+    catch (Throwable e)
+    {
+      // Severe error on initialization
+      System.err.println("Combined service could not start - shutting down");
+      Logging.root.fatal("IdleCleanupThread initialization error tossed: "+e.getMessage(),e);
+      System.exit(-300);
+    }
+
+  }
+
+}
diff --git a/framework/combined-service/src/main/java/org/apache/manifoldcf/combinedservice/ServletListener.java b/framework/combined-service/src/main/java/org/apache/manifoldcf/combinedservice/ServletListener.java
index 13a0050..b091414 100644
--- a/framework/combined-service/src/main/java/org/apache/manifoldcf/combinedservice/ServletListener.java
+++ b/framework/combined-service/src/main/java/org/apache/manifoldcf/combinedservice/ServletListener.java
@@ -20,6 +20,7 @@
 
 import org.apache.manifoldcf.core.interfaces.*;
 import org.apache.manifoldcf.crawler.system.ManifoldCF;
+import org.apache.manifoldcf.agents.system.AgentsDaemon;
 import javax.servlet.*;
 
 /** This class furnishes a servlet shutdown hook for ManifoldCF.  It should be referenced in the
@@ -29,23 +30,29 @@
 {
   public static final String _rcsid = "@(#)$Id$";
 
-  public static final String agentShutdownSignal = org.apache.manifoldcf.agents.AgentRun.agentShutdownSignal;
-  private Thread jobsThread = null;
+  protected static AgentsThread agentsThread = null;
+  protected IdleCleanupThread idleCleanupThread = null;
 
   public void contextInitialized(ServletContextEvent sce)
   {
     try
     {
-      ManifoldCF.initializeEnvironment();
-
       IThreadContext tc = ThreadContextFactory.make();
+      ManifoldCF.initializeEnvironment(tc);
 
       ManifoldCF.createSystemDatabase(tc);
       ManifoldCF.installTables(tc);
       ManifoldCF.registerThisAgent(tc);
       ManifoldCF.reregisterAllConnectors(tc);
 
-      ManifoldCF.startAgents(tc);
+      // This is for the UI and API components
+      idleCleanupThread = new IdleCleanupThread();
+      idleCleanupThread.start();
+
+      // This is for the agents process
+      agentsThread = new AgentsThread(ManifoldCF.getProcessID());
+      agentsThread.start();
+
     }
     catch (ManifoldCFException e)
     {
@@ -55,16 +62,87 @@
   
   public void contextDestroyed(ServletContextEvent sce)
   {
+    IThreadContext tc = ThreadContextFactory.make();
     try
     {
-      IThreadContext tc = ThreadContextFactory.make();
-      ManifoldCF.stopAgents(tc);
+      if (agentsThread != null)
+      {
+        AgentsDaemon.assertAgentsShutdownSignal(tc);
+        agentsThread.finishUp();
+        agentsThread = null;
+        AgentsDaemon.clearAgentsShutdownSignal(tc);
+      }
+      
+      while (true)
+      {
+        if (idleCleanupThread == null)
+          break;
+        idleCleanupThread.interrupt();
+        if (!idleCleanupThread.isAlive())
+          idleCleanupThread = null;
+      }
+    }
+    catch (InterruptedException e)
+    {
     }
     catch (ManifoldCFException e)
     {
-      throw new RuntimeException("Cannot shutdown servlet cleanly; "+e.getMessage(),e);
+      if (e.getErrorCode() != ManifoldCFException.INTERRUPTED)
+        throw new RuntimeException("Cannot shutdown servlet cleanly; "+e.getMessage(),e);
     }
-    ManifoldCF.cleanUpEnvironment();
+    ManifoldCF.cleanUpEnvironment(tc);
   }
 
+  protected static class AgentsThread extends Thread
+  {
+    
+    protected final String processID;
+    
+    protected Throwable exception = null;
+
+    public AgentsThread(String processID)
+    {
+      setName("Agents");
+      this.processID = processID;
+    }
+    
+    public void run()
+    {
+      IThreadContext tc = ThreadContextFactory.make();
+      try
+      {
+        AgentsDaemon.clearAgentsShutdownSignal(tc);
+        AgentsDaemon ad = new AgentsDaemon(processID);
+        try
+        {
+          ad.runAgents(tc);
+        }
+        finally
+        {
+          ad.stopAgents(tc);
+        }
+      }
+      catch (Throwable e)
+      {
+        exception = e;
+      }
+    }
+    
+    public void finishUp()
+      throws ManifoldCFException, InterruptedException
+    {
+      join();
+      if (exception != null)
+      {
+        if (exception instanceof RuntimeException)
+          throw (RuntimeException)exception;
+        if (exception instanceof Error)
+          throw (Error)exception;
+        if (exception instanceof ManifoldCFException)
+          throw (ManifoldCFException)exception;
+        throw new RuntimeException("Unknown exception type thrown: "+exception.getClass().getName()+": "+exception.getMessage(),exception);
+      }
+    }
+  }
+  
 }
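
The AgentsThread above records any Throwable thrown inside run() and rethrows it from finishUp() after join(), so a failure in the background agents daemon surfaces in the servlet thread that shuts it down. The following is a minimal standalone sketch of that pattern, not part of this patch; the Worker and FinishUpDemo names are made up for illustration.

import java.lang.RuntimeException;

public class FinishUpDemo
{
  static class Worker extends Thread
  {
    // Any failure from run() is recorded here and surfaced later by finishUp().
    private volatile Throwable exception = null;

    public void run()
    {
      try
      {
        // ... real work would go here; simulate a failure for the demo ...
        throw new IllegalStateException("simulated failure");
      }
      catch (Throwable e)
      {
        exception = e;
      }
    }

    public void finishUp() throws InterruptedException
    {
      join();
      if (exception != null)
      {
        if (exception instanceof RuntimeException)
          throw (RuntimeException)exception;
        if (exception instanceof Error)
          throw (Error)exception;
        throw new RuntimeException("Unknown exception type thrown: " + exception.getMessage(), exception);
      }
    }
  }

  public static void main(String[] args) throws InterruptedException
  {
    Worker w = new Worker();
    w.start();
    try
    {
      w.finishUp();
    }
    catch (RuntimeException e)
    {
      // The worker's failure is now visible to the caller.
      System.out.println("Worker failed: " + e.getMessage());
    }
  }
}
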
diff --git a/framework/core/pom.xml b/framework/core/pom.xml
index b368bab..985e2f5 100644
--- a/framework/core/pom.xml
+++ b/framework/core/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.manifoldcf</groupId>
     <artifactId>mcf-framework</artifactId>
-    <version>1.2-SNAPSHOT</version>
+    <version>1.5-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
 
@@ -55,6 +55,11 @@
   </build>
 
   <dependencies>
+      <dependency>
+      <groupId>org.apache.httpcomponents</groupId>
+      <artifactId>httpclient</artifactId>
+      <version>${httpcomponent.httpclient.version}</version>
+    </dependency>
     <dependency>
       <groupId>javax.servlet</groupId>
       <artifactId>servlet-api</artifactId>
@@ -82,5 +87,28 @@
       <artifactId>velocity</artifactId>
       <version>${velocity.version}</version>
     </dependency>
+    <dependency>
+      <groupId>org.apache.zookeeper</groupId>
+      <artifactId>zookeeper</artifactId>
+      <version>${zookeeper.version}</version>
+    </dependency>
+    <dependency>
+      <groupId>postgresql</groupId>
+      <artifactId>postgresql</artifactId>
+      <version>${postgresql.version}</version>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.hsqldb</groupId>
+      <artifactId>hsqldb</artifactId>
+      <version>${hsqldb.version}</version>
+      <scope>test</scope>
+    </dependency>
+    <dependency>
+      <groupId>org.apache.derby</groupId>
+      <artifactId>derby</artifactId>
+      <version>${derby.version}</version>
+      <scope>test</scope>
+    </dependency>
   </dependencies>
 </project>
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/DBInitializationCommand.java b/framework/core/src/main/java/org/apache/manifoldcf/core/DBInitializationCommand.java
index eaea556..e60da39 100644
--- a/framework/core/src/main/java/org/apache/manifoldcf/core/DBInitializationCommand.java
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/DBInitializationCommand.java
@@ -49,8 +49,8 @@
 
   public void execute() throws ManifoldCFException
   {
-    ManifoldCF.initializeEnvironment();
     IThreadContext tc = ThreadContextFactory.make();
+    ManifoldCF.initializeEnvironment(tc);
     doExecute(tc);
   }
 
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/LockClean.java b/framework/core/src/main/java/org/apache/manifoldcf/core/LockClean.java
index 6501521..be16ace 100644
--- a/framework/core/src/main/java/org/apache/manifoldcf/core/LockClean.java
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/LockClean.java
@@ -38,15 +38,14 @@
    */
   public void execute() throws ManifoldCFException
   {
-    ManifoldCF.initializeEnvironment();
-    String synchDir = ManifoldCF.getFileProperty(org.apache.manifoldcf.core.lockmanager.LockManager.synchDirectoryProperty).toString();
+    ManifoldCF.initializeEnvironment(ThreadContextFactory.make());
+    File synchDir = org.apache.manifoldcf.core.lockmanager.FileLockManager.getSynchDirectoryProperty();
     if (synchDir != null)
     {
       // Recursively clean up the contents of the synch directory. But don't remove the directory itself
-      File dir = new File(synchDir);
-      if (dir.isDirectory())
+      if (synchDir.isDirectory())
       {
-        removeContentsOfDirectory(dir);
+        removeContentsOfDirectory(synchDir);
       }
     }
     Logging.root.info("Synchronization storage cleaned up");
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/common/DateParser.java b/framework/core/src/main/java/org/apache/manifoldcf/core/common/DateParser.java
index daa4ec9..269ea1c 100644
--- a/framework/core/src/main/java/org/apache/manifoldcf/core/common/DateParser.java
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/common/DateParser.java
@@ -32,19 +32,41 @@
   {
     if (isoDateValue == null)
       return null;
-    // There are a number of variations on the basic format.
-    // We'll look for key characters to help is determine which is which.
-    StringBuilder isoFormatString = new StringBuilder("yy");
-    if (isoDateValue.length() > 2 && isoDateValue.charAt(2) != '-')
-      isoFormatString.append("yy");
-    isoFormatString.append("-MM-dd'T'HH:mm:ss");
-    if (isoDateValue.indexOf(".") != -1)
-      isoFormatString.append(".SSS");
-    if (isoDateValue.endsWith("Z"))
-      isoFormatString.append("'Z'");
+    
+    boolean isMicrosoft = (isoDateValue.indexOf("T") == -1);
+    
+    String formatString;
+    if (isMicrosoft)
+    {
+      formatString = "yyyy-MM-dd' 'HH:mm:ss";
+    }
     else
-      isoFormatString.append("Z");      // RFC 822 time, including general time zones
-    java.text.DateFormat iso8601Format = new java.text.SimpleDateFormat(isoFormatString.toString());
+    {
+      // There are a number of variations on the basic format.
+      // We'll look for key characters to help us determine which is which.
+      StringBuilder isoFormatString = new StringBuilder("yy");
+      if (isoDateValue.length() > 2 && isoDateValue.charAt(2) != '-')
+        isoFormatString.append("yy");
+      isoFormatString.append("-MM-dd'T'HH:mm:ss");
+      if (isoDateValue.indexOf(".") != -1)
+        isoFormatString.append(".SSS");
+      if (isoDateValue.endsWith("Z"))
+        isoFormatString.append("'Z'");
+      else
+      {
+        // We need to be able to parse either "-08:00" or "-0800".  The 'Z' specifier only handles
+        // -0800, unfortunately - see CONNECTORS-700.  So we have to do some hackery to remove the colon.
+        int colonIndex = isoDateValue.lastIndexOf(":");
+        int dashIndex = isoDateValue.lastIndexOf("-");
+        int plusIndex = isoDateValue.lastIndexOf("+");
+        if (colonIndex != -1 &&
+          ((dashIndex != -1 && colonIndex == dashIndex+3 && isNumeral(isoDateValue,dashIndex-1)) || (plusIndex != -1 && colonIndex == plusIndex+3 && isNumeral(isoDateValue,plusIndex-1))))
+          isoDateValue = isoDateValue.substring(0,colonIndex) + isoDateValue.substring(colonIndex+1);
+        isoFormatString.append("Z");      // RFC 822 time, including general time zones
+      }
+      formatString = isoFormatString.toString();
+    }
+    java.text.DateFormat iso8601Format = new java.text.SimpleDateFormat(formatString);
     try
     {
       return iso8601Format.parse(isoDateValue);
@@ -55,6 +77,11 @@
     }
   }
   
+  protected static boolean isNumeral(String value, int index)
+  {
+    return index >= 0 && value.charAt(index) >= '0' && value.charAt(index) <= '9';
+  }
+  
   /** Format ISO8601 date.
   */
   public static String formatISO8601Date(Date dateValue)
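
The hunk above works around SimpleDateFormat's RFC 822 'Z' specifier accepting offsets like "-0800" but not "-08:00" (see CONNECTORS-700) by stripping the colon before parsing, and it falls back to a "yyyy-MM-dd HH:mm:ss" pattern when no 'T' separator is present. Below is a minimal standalone sketch of the colon normalization only, assuming nothing from ManifoldCF; the class name Iso8601OffsetDemo is hypothetical.

import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class Iso8601OffsetDemo
{
  // Strip the colon from a trailing "+HH:MM" / "-HH:MM" offset, if present,
  // so that SimpleDateFormat's "Z" pattern can parse it.
  public static Date parse(String value) throws ParseException
  {
    int colon = value.lastIndexOf(':');
    if (colon == value.length() - 3)
    {
      char sign = value.charAt(colon - 3);
      if (sign == '+' || sign == '-')
        value = value.substring(0, colon) + value.substring(colon + 1);
    }
    return new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ssZ").parse(value);
  }

  public static void main(String[] args) throws ParseException
  {
    // Both offset spellings now parse to the same instant.
    System.out.println(parse("2013-12-30T08:15:00-08:00"));
    System.out.println(parse("2013-12-30T08:15:00-0800"));
  }
}
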
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/common/DeflateInputStream.java b/framework/core/src/main/java/org/apache/manifoldcf/core/common/DeflateInputStream.java
new file mode 100644
index 0000000..1aad5d0
--- /dev/null
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/common/DeflateInputStream.java
@@ -0,0 +1,222 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.core.common;
+
+import java.io.IOException;
+import java.io.InputStream;
+import java.io.PushbackInputStream;
+import java.util.zip.DataFormatException;
+import java.util.zip.Inflater;
+import java.util.zip.InflaterInputStream;
+
+/** Deflate input stream.  This class takes logic from HttpComponents HttpClient 4.2.x that
+* really should have been exposed as an independent input stream wrapper, and does it the right
+* way.  I will also open an HttpClient ticket so that this code can be pushed upstream eventually.
+*/
+public class DeflateInputStream extends InputStream
+{
+  private InputStream sourceStream;
+
+  public DeflateInputStream(final InputStream wrapped)
+    throws IOException
+  {
+    /*
+      * A zlib stream will have a header.
+      *
+      * CMF | FLG [| DICTID ] | ...compressed data | ADLER32 |
+      *
+      * * CMF is one byte.
+      *
+      * * FLG is one byte.
+      *
+      * * DICTID is four bytes, and only present if FLG.FDICT is set.
+      *
+      * Sniff the content. Does it look like a zlib stream, with a CMF, etc? c.f. RFC1950,
+      * section 2.2. http://tools.ietf.org/html/rfc1950#page-4
+      *
+      * We need to see if it looks like a proper zlib stream, or whether it is just a deflate
+      * stream. RFC2616 calls zlib streams deflate. Confusing, isn't it? That's why some servers
+      * implement deflate Content-Encoding using deflate streams, rather than zlib streams.
+      *
+      * We could start looking at the bytes, but to be honest, someone else has already read
+      * the RFCs and implemented that for us. So we'll just use the JDK libraries and exception
+      * handling to do this. If that proves slow, then we could potentially change this to check
+      * the first byte - does it look like a CMF? What about the second byte - does it look like
+      * a FLG, etc.
+      */
+
+    /* We read a small buffer to sniff the content. */
+    final byte[] peeked = new byte[6];
+
+    final PushbackInputStream pushback = new PushbackInputStream(wrapped, peeked.length);
+
+    final int headerLength = pushback.read(peeked);
+
+    if (headerLength == -1) {
+      throw new IOException("Unable to read the response");
+    }
+
+    /* We try to read the first uncompressed byte. */
+    final byte[] dummy = new byte[1];
+
+    final Inflater inf = new Inflater();
+
+    try {
+      int n;
+      while ((n = inf.inflate(dummy)) == 0) {
+        if (inf.finished()) {
+
+          /* Not expecting this, so fail loudly. */
+          throw new IOException("Unable to read the response");
+        }
+
+        if (inf.needsDictionary()) {
+
+          /* Need dictionary - then it must be zlib stream with DICTID part? */
+          break;
+        }
+
+        if (inf.needsInput()) {
+          inf.setInput(peeked);
+        }
+      }
+
+      if (n == -1) {
+        throw new IOException("Unable to read the response");
+      }
+
+      /*
+        * We read something without a problem, so it's a valid zlib stream. Just need to reset
+        * and return an unused InputStream now.
+        */
+      pushback.unread(peeked, 0, headerLength);
+      sourceStream = new DeflateStream(pushback, new Inflater());
+    } catch (final DataFormatException e) {
+
+      /* Presume that it's an RFC1951 deflate stream rather than RFC1950 zlib stream and try
+        * again. */
+      pushback.unread(peeked, 0, headerLength);
+      sourceStream = new DeflateStream(pushback, new Inflater(true));
+    } finally {
+      inf.end();
+    }
+
+  }
+
+  /** Read a byte.
+  */
+  @Override
+  public int read()
+    throws IOException
+  {
+    return sourceStream.read();
+  }
+    
+  /** Read lots of bytes.
+  */
+  @Override
+  public int read(byte[] b)
+    throws IOException
+  {
+    return sourceStream.read(b);
+  }
+
+  /** Read lots of specific bytes.
+  */
+  @Override
+  public int read(byte[] b, int off, int len)
+    throws IOException
+  {
+    return sourceStream.read(b,off,len);
+  }
+  
+  /** Skip
+  */
+  @Override
+  public long skip(long n)
+    throws IOException
+  {
+    return sourceStream.skip(n);
+  }
+
+  /** Get available.
+  */
+  @Override
+  public int available()
+    throws IOException
+  {
+    return sourceStream.available();
+  }
+
+  /** Mark.
+  */
+  @Override
+  public void mark(int readLimit)
+  {
+    sourceStream.mark(readLimit);
+  }
+
+  /** Reset.
+  */
+  @Override
+  public void reset()
+    throws IOException
+  {
+    sourceStream.reset();
+  }
+
+  /** Check if mark is supported.
+  */
+  @Override
+  public boolean markSupported()
+  {
+    return sourceStream.markSupported();
+  }
+
+  /** Close.
+  */
+  @Override
+  public void close()
+    throws IOException
+  {
+    sourceStream.close();
+  }
+
+  static class DeflateStream extends InflaterInputStream {
+
+    private boolean closed = false;
+
+    public DeflateStream(final InputStream in, final Inflater inflater) {
+      super(in, inflater);
+    }
+
+    @Override
+    public void close() throws IOException {
+      if (closed) {
+        return;
+      }
+      closed = true;
+      inf.end();
+      super.close();
+    }
+
+  }
+
+}
+
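A minimal usage sketch for the class above, assuming an already-opened "deflate"-encoded
content stream (the decompress() helper and its names are illustrative, not part of this patch):

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;
    import java.io.InputStream;

    import org.apache.manifoldcf.core.common.DeflateInputStream;

    public class DeflateExample
    {
      /** Decompress a "deflate"-encoded body, whether the server sent a raw RFC1951
      * stream or an RFC1950 zlib stream; DeflateInputStream sniffs the header and
      * picks the appropriate Inflater mode automatically.
      */
      public static byte[] decompress(InputStream encodedStream)
        throws IOException
      {
        InputStream decoded = new DeflateInputStream(encodedStream);
        try
        {
          ByteArrayOutputStream result = new ByteArrayOutputStream();
          byte[] chunk = new byte[4096];
          int amt;
          while ((amt = decoded.read(chunk)) != -1)
            result.write(chunk, 0, amt);
          return result.toByteArray();
        }
        finally
        {
          decoded.close();
        }
      }
    }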
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/common/InterruptibleSocketFactory.java b/framework/core/src/main/java/org/apache/manifoldcf/core/common/InterruptibleSocketFactory.java
new file mode 100644
index 0000000..4b39035
--- /dev/null
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/common/InterruptibleSocketFactory.java
@@ -0,0 +1,196 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.core.common;
+
+import java.io.*;
+import java.net.*;
+
+import org.apache.http.conn.ConnectTimeoutException;
+
+/** SSL Socket factory which wraps another socket factory but allows timeout on socket
+* creation.
+*/
+public class InterruptibleSocketFactory extends javax.net.ssl.SSLSocketFactory
+{
+  protected final javax.net.ssl.SSLSocketFactory wrappedFactory;
+  protected final long connectTimeoutMilliseconds;
+    
+  public InterruptibleSocketFactory(javax.net.ssl.SSLSocketFactory wrappedFactory, long connectTimeoutMilliseconds)
+  {
+    this.wrappedFactory = wrappedFactory;
+    this.connectTimeoutMilliseconds = connectTimeoutMilliseconds;
+  }
+
+  @Override
+  public Socket createSocket()
+    throws IOException
+  {
+    // Socket isn't open
+    return wrappedFactory.createSocket();
+  }
+    
+  @Override
+  public Socket createSocket(String host, int port)
+    throws IOException, UnknownHostException
+  {
+    return fireOffThread(InetAddress.getByName(host),port,null,-1);
+  }
+
+  @Override
+  public Socket createSocket(InetAddress host, int port)
+    throws IOException
+  {
+    return fireOffThread(host,port,null,-1);
+  }
+    
+  @Override
+  public Socket createSocket(String host, int port, InetAddress localHost, int localPort)
+    throws IOException, UnknownHostException
+  {
+    return fireOffThread(InetAddress.getByName(host),port,localHost,localPort);
+  }
+    
+  @Override
+  public Socket createSocket(InetAddress address, int port, InetAddress localAddress, int localPort)
+    throws IOException
+  {
+    return fireOffThread(address,port,localAddress,localPort);
+  }
+    
+  @Override
+  public Socket createSocket(Socket s, String host, int port, boolean autoClose)
+    throws IOException
+  {
+    // Socket's already open
+    return wrappedFactory.createSocket(s,host,port,autoClose);
+  }
+    
+  @Override
+  public String[] getDefaultCipherSuites()
+  {
+    return wrappedFactory.getDefaultCipherSuites();
+  }
+    
+  @Override
+  public String[] getSupportedCipherSuites()
+  {
+    return wrappedFactory.getSupportedCipherSuites();
+  }
+    
+  protected Socket fireOffThread(InetAddress address, int port, InetAddress localHost, int localPort)
+    throws IOException
+  {
+    SocketCreateThread thread = new SocketCreateThread(wrappedFactory,address,port,localHost,localPort);
+    thread.start();
+    try
+    {
+      // Wait for thread to complete for only a certain amount of time!
+      thread.join(connectTimeoutMilliseconds);
+      // If join() times out, then the thread is going to still be alive.
+      if (thread.isAlive())
+      {
+        // Kill the thread - not that this will necessarily work, but we need to try
+        thread.interrupt();
+        throw new ConnectTimeoutException("Secure connection timed out");
+      }
+      // The thread terminated.  Throw an error if there is one, otherwise return the result.
+      Throwable t = thread.getException();
+      if (t != null)
+      {
+        if (t instanceof java.net.SocketTimeoutException)
+          throw (java.net.SocketTimeoutException)t;
+        else if (t instanceof ConnectTimeoutException)
+          throw (ConnectTimeoutException)t;
+        else if (t instanceof InterruptedIOException)
+          throw (InterruptedIOException)t;
+        else if (t instanceof IOException)
+          throw (IOException)t;
+        else if (t instanceof Error)
+          throw (Error)t;
+        else if (t instanceof RuntimeException)
+          throw (RuntimeException)t;
+        throw new Error("Received an unexpected exception: "+t.getMessage(),t);
+      }
+      return thread.getResult();
+    }
+    catch (InterruptedException e)
+    {
+      throw new InterruptedIOException("Interrupted: "+e.getMessage());
+    }
+
+  }
+
+  /** Create a secure socket in a thread, so that we can "give up" after a while if the socket fails to connect.
+  */
+  protected static class SocketCreateThread extends Thread
+  {
+    // Socket factory
+    protected javax.net.ssl.SSLSocketFactory socketFactory;
+    protected InetAddress host;
+    protected int port;
+    protected InetAddress clientHost;
+    protected int clientPort;
+
+    // The return socket
+    protected Socket rval = null;
+    // The return error
+    protected Throwable throwable = null;
+
+    /** Create the thread */
+    public SocketCreateThread(javax.net.ssl.SSLSocketFactory socketFactory,
+      InetAddress host,
+      int port,
+      InetAddress clientHost,
+      int clientPort)
+    {
+      this.socketFactory = socketFactory;
+      this.host = host;
+      this.port = port;
+      this.clientHost = clientHost;
+      this.clientPort = clientPort;
+      setDaemon(true);
+    }
+
+    public void run()
+    {
+      try
+      {
+        if (clientHost == null)
+          rval = socketFactory.createSocket(host,port);
+        else
+          rval = socketFactory.createSocket(host,port,clientHost,clientPort);
+      }
+      catch (Throwable e)
+      {
+        throwable = e;
+      }
+    }
+
+    public Throwable getException()
+    {
+      return throwable;
+    }
+
+    public Socket getResult()
+    {
+      return rval;
+    }
+  }
+
+}
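A minimal sketch of how this factory can wrap the JVM's default SSL socket factory to
impose a connect timeout (the 60-second value and class name below are illustrative):

    import java.io.IOException;
    import java.net.Socket;
    import javax.net.ssl.SSLSocketFactory;

    import org.apache.manifoldcf.core.common.InterruptibleSocketFactory;

    public class TimeoutSocketExample
    {
      public static Socket openWithTimeout(String host, int port)
        throws IOException
      {
        // Wrap the default factory; socket creation that hangs for more than 60
        // seconds is abandoned with a ConnectTimeoutException.
        SSLSocketFactory wrapped = (SSLSocketFactory)SSLSocketFactory.getDefault();
        InterruptibleSocketFactory factory = new InterruptibleSocketFactory(wrapped, 60000L);
        return factory.createSocket(host, port);
      }
    }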
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/common/XThreadInputStream.java b/framework/core/src/main/java/org/apache/manifoldcf/core/common/XThreadInputStream.java
index 4786e84..5178d6c 100644
--- a/framework/core/src/main/java/org/apache/manifoldcf/core/common/XThreadInputStream.java
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/common/XThreadInputStream.java
@@ -26,20 +26,27 @@
 */
 public class XThreadInputStream extends InputStream
 {
-  private byte[] buffer = new byte[65536];
+  private final byte[] buffer = new byte[65536];
   private int startPoint = 0;
   private int byteCount = 0;
   private boolean streamEnd = false;
   private IOException failureException = null;
-  private InputStream sourceStream;
   private boolean abort = false;
-  
-  /** Constructor */
+
+  private final InputStream sourceStream;
+
+  /** Constructor, from a given input stream. */
   public XThreadInputStream(InputStream sourceStream)
   {
     this.sourceStream = sourceStream;
   }
   
+  /** Constructor, from another source. */
+  public XThreadInputStream()
+  {
+    this.sourceStream = null;
+  }
+  
   /** Call this method to abort the stuffQueue() method.
   */
   public void abort()
@@ -51,8 +58,63 @@
     }
   }
   
+  /** This method is called from the helper thread side, to stuff bytes onto
+  * the queue when there is no input stream.
+  * It exits only when interrupted or done.
+  */
+  public void stuffQueue(byte[] byteBuffer, int offset, int amount)
+    throws InterruptedException
+  {
+    while (amount > 0)
+    {
+      int maxToRead;
+      int readStartPoint;
+      synchronized (this)
+      {
+        if (abort || streamEnd)
+          return;
+        // Calculate amount to read
+        maxToRead = buffer.length - byteCount;
+        if (maxToRead == 0)
+        {
+          wait();
+          continue;
+        }
+        readStartPoint = (startPoint + byteCount) & (buffer.length-1);
+      }
+      if (readStartPoint + maxToRead >= buffer.length)
+        maxToRead = buffer.length - readStartPoint;
+      // Now, copy to buffer
+      int amt;
+      if (amount > maxToRead)
+        amt = maxToRead;
+      else
+        amt = amount;
+      System.arraycopy(byteBuffer,offset,buffer,readStartPoint,amt);
+      offset += amt;
+      amount -= amt;
+      synchronized (this)
+      {
+        byteCount += amt;
+        notifyAll();
+      }
+    }
+  }
+  
+  /** Call this method when there is no more data to write.
+  */
+  public void doneStuffingQueue()
+  {
+    synchronized (this)
+    {
+      streamEnd = true;
+      notifyAll();
+    }
+  }
+  
   /** This method is called from the helper thread side, to keep the queue
-  * stuffed.  It exits when the stream is empty, or when interrupted.
+  * stuffed from the input stream.
+  * It exits when the stream is empty, or when interrupted.
   */
   public void stuffQueue()
     throws IOException, InterruptedException
@@ -232,7 +294,7 @@
   public void close()
     throws IOException
   {
-    // MHL
+    // Do nothing; stream close is handled by the caller on the stuffer side
   }
 
 }
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/common/XThreadOutputStream.java b/framework/core/src/main/java/org/apache/manifoldcf/core/common/XThreadOutputStream.java
new file mode 100644
index 0000000..7da2306
--- /dev/null
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/common/XThreadOutputStream.java
@@ -0,0 +1,75 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.core.common;
+
+import java.io.*;
+
+/** Output stream, which writes to XThreadInputStream.
+* Use this when an API method needs to write to an output stream, but
+* you want an input stream in the other thread receiving the data.
+*/
+public class XThreadOutputStream extends OutputStream {
+
+  protected final XThreadInputStream inputStream;
+    
+  byte[] byteBuffer = new byte[1];
+
+  public XThreadOutputStream(XThreadInputStream inputStream) {
+    this.inputStream = inputStream;
+  }
+  
+  @Override
+  public void write(byte[] buffer)
+    throws IOException {
+    try {
+      inputStream.stuffQueue(buffer,0,buffer.length);
+    } catch (InterruptedException e) {
+      throw new InterruptedIOException(e.getMessage());
+    }
+  }
+
+  @Override
+  public void write(int c)
+    throws IOException {
+    byteBuffer[0] = (byte)c;
+    try {
+      inputStream.stuffQueue(byteBuffer,0,1);
+    } catch (InterruptedException e) {
+      throw new InterruptedIOException(e.getMessage());
+    }
+  }
+
+  @Override
+  public void write(byte[] buffer, int pos, int amt)
+    throws IOException {
+    try {
+      inputStream.stuffQueue(buffer,pos,amt);
+    } catch (InterruptedException e) {
+      throw new InterruptedIOException(e.getMessage());
+    }
+  }
+    
+  @Override
+  public void close()
+    throws IOException {
+    inputStream.doneStuffingQueue();
+    super.close();
+  }
+}
+
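A sketch of the cross-thread pattern these two classes support: the producer thread
writes through an XThreadOutputStream, while the consumer thread reads the paired
XThreadInputStream constructed without a source stream (the thread wiring below is
illustrative):

    import java.io.IOException;

    import org.apache.manifoldcf.core.common.XThreadInputStream;
    import org.apache.manifoldcf.core.common.XThreadOutputStream;

    public class XThreadStreamExample
    {
      public static void main(String[] args)
        throws IOException, InterruptedException
      {
        // No source stream: bytes arrive via stuffQueue()/XThreadOutputStream instead.
        final XThreadInputStream inputStream = new XThreadInputStream();
        Thread producer = new Thread()
        {
          public void run()
          {
            try
            {
              XThreadOutputStream out = new XThreadOutputStream(inputStream);
              try
              {
                out.write("hello from the producer side".getBytes("UTF-8"));
              }
              finally
              {
                out.close();    // signals doneStuffingQueue()
              }
            }
            catch (IOException e)
            {
              // Illustrative only; a real caller would capture and rethrow this.
            }
          }
        };
        producer.start();
        // Consumer side: read until the producer closes its output stream.
        byte[] chunk = new byte[256];
        int amt;
        while ((amt = inputStream.read(chunk)) != -1)
          System.out.write(chunk, 0, amt);
        System.out.flush();
        producer.join();
      }
    }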
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/common/XThreadStringBuffer.java b/framework/core/src/main/java/org/apache/manifoldcf/core/common/XThreadStringBuffer.java
new file mode 100644
index 0000000..ce0f146
--- /dev/null
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/common/XThreadStringBuffer.java
@@ -0,0 +1,89 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.core.common;
+
+import java.util.*;
+
+/** Thread-safe class that functions as a limited-size buffer of strings */
+public class XThreadStringBuffer
+{
+  protected static int MAX_SIZE = 1024;
+  
+  protected List<String> buffer = new ArrayList<String>(MAX_SIZE);
+  
+  protected boolean complete = false;
+  protected boolean abandoned = false;
+  
+  /** Constructor */
+  public XThreadStringBuffer()
+  {
+  }
+  
+  /** Add a string to the buffer, and block if the buffer is full */
+  public synchronized void add(String string)
+    throws InterruptedException
+  {
+    while (buffer.size() == MAX_SIZE && !abandoned)
+      wait();
+    if (abandoned)
+      return;
+    buffer.add(string);
+    // Notify threads that are waiting on there being stuff in the queue
+    notifyAll();
+  }
+  
+  /** Signal that the buffer should be abandoned.
+  * Called by the receiving thread! */
+  public synchronized void abandon()
+  {
+    abandoned = true;
+    // Notify waiting threads
+    notifyAll();
+  }
+  
+  /** Signal that the operation is complete, and that no more strings
+  * will be added.  Called by the sending thread!
+  */
+  public synchronized void signalDone()
+  {
+    complete = true;
+    // Notify threads that are waiting for stuff to appear, because it won't
+    notifyAll();
+  }
+  
+  /** Pull a string off the buffer, waiting if the buffer is empty but not yet complete.
+  * Called by the receiving thread!
+  * Returns null if the operation is complete.
+  */
+  public synchronized String fetch()
+    throws InterruptedException
+  {
+    while (buffer.size() == 0 && !complete)
+      wait();
+    if (buffer.size() == 0)
+      return null;
+    boolean isBufferFull = (buffer.size() == MAX_SIZE);
+    String rval = buffer.remove(buffer.size()-1);
+    // Notify any threads waiting for the buffer to stop being full
+    if (isBufferFull)
+      notifyAll();
+    return rval;
+  }
+  
+}
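A short usage sketch for the buffer above, with one thread producing ids and another
consuming them (the id strings and thread wiring are illustrative):

    import org.apache.manifoldcf.core.common.XThreadStringBuffer;

    public class StringBufferExample
    {
      public static void main(String[] args)
        throws InterruptedException
      {
        final XThreadStringBuffer idBuffer = new XThreadStringBuffer();
        Thread producer = new Thread()
        {
          public void run()
          {
            try
            {
              for (int i = 0; i < 10; i++)
                idBuffer.add("document-" + i);
            }
            catch (InterruptedException e)
            {
              // Consumer abandoned the buffer, or we were interrupted; just exit.
            }
            finally
            {
              idBuffer.signalDone();
            }
          }
        };
        producer.start();
        // fetch() blocks until something arrives, and returns null once signalDone()
        // has been called and the buffer has drained.
        String id;
        while ((id = idBuffer.fetch()) != null)
          System.out.println(id);
        producer.join();
      }
    }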
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/connector/BaseConnector.java b/framework/core/src/main/java/org/apache/manifoldcf/core/connector/BaseConnector.java
index b2f7cea..e49b56e 100644
--- a/framework/core/src/main/java/org/apache/manifoldcf/core/connector/BaseConnector.java
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/connector/BaseConnector.java
@@ -39,6 +39,7 @@
   * It is called when the connector is registered.
   *@param threadContext is the current thread context.
   */
+  @Override
   public void install(IThreadContext threadContext)
     throws ManifoldCFException
   {
@@ -50,6 +51,7 @@
   * It is called when the connector is deregistered.
   *@param threadContext is the current thread context.
   */
+  @Override
   public void deinstall(IThreadContext threadContext)
     throws ManifoldCFException
   {
@@ -59,6 +61,7 @@
   /** Connect.  The configuration parameters are included.
   *@param configParams are the configuration parameters for this connection.
   */
+  @Override
   public void connect(ConfigParams configParams)
   {
     params = configParams;
@@ -70,6 +73,7 @@
   /** Test the connection.  Returns a string describing the connection integrity.
   *@return the connection's status as a displayable string.
   */
+  @Override
   public String check()
     throws ManifoldCFException
   {
@@ -80,14 +84,27 @@
   /** This method is periodically called for all connectors that are connected but not
   * in active use.
   */
+  @Override
   public void poll()
     throws ManifoldCFException
   {
     // Base version does nothing
   }
 
+  /** This method is called to assess whether this connector instance should
+  * actually be counted as being connected.
+  *@return true if the connector instance is actually connected.
+  */
+  @Override
+  public boolean isConnected()
+  {
+    // Consider it connected.
+    return true;
+  }
+
   /** Close the connection.  Call this before discarding the repository connector.
   */
+  @Override
   public void disconnect()
     throws ManifoldCFException
   {
@@ -97,6 +114,7 @@
   /** Clear out any state information specific to a given thread.
   * This method is called when this object is returned to the connection pool.
   */
+  @Override
   public void clearThreadContext()
   {
     currentContext = null;
@@ -105,6 +123,7 @@
   /** Attach to a new thread.
   *@param threadContext is the new thread context.
   */
+  @Override
   public void setThreadContext(IThreadContext threadContext)
     throws ManifoldCFException
   {
@@ -114,6 +133,7 @@
   /** Get configuration information.
   *@return the configuration information for this class.
   */
+  @Override
   public ConfigParams getConfiguration()
   {
     return params;
@@ -128,6 +148,7 @@
   *@param parameters are the configuration parameters, as they currently exist, for this connection being configured.
   *@param tabsArray is an array of tab names.  Add to this array any tab names that are specific to the connector.
   */
+  @Override
   public void outputConfigurationHeader(IThreadContext threadContext, IHTTPOutput out, Locale locale, ConfigParams parameters, List<String> tabsArray)
     throws ManifoldCFException, IOException
   {
@@ -169,6 +190,7 @@
   *@param parameters are the configuration parameters, as they currently exist, for this connection being configured.
   *@param tabName is the current tab name.
   */
+  @Override
   public void outputConfigurationBody(IThreadContext threadContext, IHTTPOutput out, Locale locale, ConfigParams parameters, String tabName)
     throws ManifoldCFException, IOException
   {
@@ -199,6 +221,7 @@
   *@param parameters are the configuration parameters, as they currently exist, for this connection being configured.
   *@return null if all is well, or a string error message if there is an error that should prevent saving of the connection (and cause a redirection to an error page).
   */
+  @Override
   public String processConfigurationPost(IThreadContext threadContext, IPostParameters variableContext, Locale locale, ConfigParams parameters)
     throws ManifoldCFException
   {
@@ -228,6 +251,7 @@
   *@param locale is the locale that the output should use.
   *@param parameters are the configuration parameters, as they currently exist, for this connection being configured.
   */
+  @Override
   public void viewConfiguration(IThreadContext threadContext, IHTTPOutput out, Locale locale, ConfigParams parameters)
     throws ManifoldCFException, IOException
   {
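The new isConnected() hook above lets the pool distinguish instances that merely sit in
the pool from those holding live back-end sessions.  A hypothetical override in a
derived connector might look like this (ExampleConnector and its session field are
illustrative):

    import org.apache.manifoldcf.core.connector.BaseConnector;
    import org.apache.manifoldcf.core.interfaces.ManifoldCFException;

    public class ExampleConnector extends BaseConnector
    {
      // Hypothetical session handle, established lazily on first use.
      protected Object session = null;

      @Override
      public boolean isConnected()
      {
        // Only instances with a live session count against the cluster-wide
        // "in use" tally computed during polling.
        return session != null;
      }

      @Override
      public void disconnect()
        throws ManifoldCFException
      {
        session = null;
        super.disconnect();
      }
    }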
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/connectorpool/ConnectorPool.java b/framework/core/src/main/java/org/apache/manifoldcf/core/connectorpool/ConnectorPool.java
new file mode 100644
index 0000000..8ec825a
--- /dev/null
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/connectorpool/ConnectorPool.java
@@ -0,0 +1,815 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.core.connectorpool;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.core.system.ManifoldCF;
+
+import java.util.*;
+import java.io.*;
+import java.lang.reflect.*;
+
+/** This is the base factory class for all ConnectorPool objects.
+*/
+public abstract class ConnectorPool<T extends IConnector>
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  // How global connector allocation works:
+  // (1) There is a lock-manager "service" associated with this connector pool.  This allows us to clean
+  // up after local pools that have died without being released.  There's one anonymous service instance per local pool,
+  // and thus one service instance per JVM.
+  // (2) Each local pool knows how many connector instances of each type (keyed by connection name) there
+  // are.
+  // (3) Each local pool/connector instance type has a local authorization count.  This is the amount it's
+  // allowed to actually keep.  If the pool has more connectors of a type than the local authorization count permits,
+  // then every connector release operation will destroy the released connector until the local authorization count
+  // is met.
+  // (4) Each local pool/connector instance type needs a global variable describing how many CURRENT instances
+  // the local pool has allocated.  This is a transient value which should automatically go to zero if the service becomes inactive.
+  // The lock manager has primitives now that allow data to be set this way.  We will use the connection name as the
+  // "data type" name - only in the local pool will we pay any attention to config info and class name, and flush those handles
+  // that get returned that have the wrong info attached.
+
+  /** Target calc lock prefix */
+  protected final static String targetCalcLockPrefix = "_POOLTARGET_";
+  
+  /** Service type prefix */
+  protected final String serviceTypePrefix;
+
+  /** Pool hash table. Keyed by connection name; value is Pool */
+  protected final Map<String,Pool> poolHash = new HashMap<String,Pool>();
+
+  /** Random number */
+  protected final static Random randomNumberGenerator = new Random();
+  
+  protected ConnectorPool(String serviceTypePrefix)
+  {
+    this.serviceTypePrefix = serviceTypePrefix;
+  }
+
+  // Protected methods
+  
+  /** Override this method to hook into a connector manager.
+  */
+  protected abstract boolean isInstalled(IThreadContext tc, String className)
+    throws ManifoldCFException;
+  
+  /** Override this method to check if a connection name is still valid.
+  */
+  protected abstract boolean isConnectionNameValid(IThreadContext tc, String connectionName)
+    throws ManifoldCFException;
+  
+  /** Get a connector instance.
+  *@param className is the class name.
+  *@return the instance.
+  */
+  protected T createConnectorInstance(IThreadContext threadContext, String className)
+    throws ManifoldCFException
+  {
+    if (!isInstalled(threadContext,className))
+      return null;
+
+    try
+    {
+      Class theClass = ManifoldCF.findClass(className);
+      Class[] argumentClasses = new Class[0];
+      // Look for a constructor
+      Constructor c = theClass.getConstructor(argumentClasses);
+      Object[] arguments = new Object[0];
+      Object o = c.newInstance(arguments);
+      try
+      {
+        return (T)o;
+      }
+      catch (ClassCastException e)
+      {
+        throw new ManifoldCFException("Class '"+className+"' does not implement IConnector.");
+      }
+    }
+    catch (InvocationTargetException e)
+    {
+      Throwable z = e.getTargetException();
+      if (z instanceof Error)
+        throw (Error)z;
+      else if (z instanceof RuntimeException)
+        throw (RuntimeException)z;
+      else if (z instanceof ManifoldCFException)
+        throw (ManifoldCFException)z;
+      else
+        throw new RuntimeException("Unknown exception type: "+z.getClass().getName()+": "+z.getMessage(),z);
+    }
+    catch (ClassNotFoundException e)
+    {
+      // Equivalent to the connector not being installed
+      return null;
+      //throw new ManifoldCFException("No connector class '"+className+"' was found.",e);
+    }
+    catch (NoSuchMethodException e)
+    {
+      throw new ManifoldCFException("No appropriate constructor for IConnector implementation '"+
+        className+"'.  Need a no-argument constructor.",
+        e);
+    }
+    catch (SecurityException e)
+    {
+      throw new ManifoldCFException("Protected constructor for IConnector implementation '"+className+"'",
+        e);
+    }
+    catch (IllegalAccessException e)
+    {
+      throw new ManifoldCFException("Unavailable constructor for IConnector implementation '"+className+"'",
+        e);
+    }
+    catch (IllegalArgumentException e)
+    {
+      throw new ManifoldCFException("Shouldn't happen!!!",e);
+    }
+    catch (InstantiationException e)
+    {
+      throw new ManifoldCFException("InstantiationException for IConnector implementation '"+className+"'",
+        e);
+    }
+    catch (ExceptionInInitializerError e)
+    {
+      throw new ManifoldCFException("ExceptionInInitializerError for IConnector implementation '"+className+"'",
+        e);
+    }
+
+  }
+
+  /** Get multiple connectors, all at once.  Do this in a particular order
+  * so that any connector exhaustion will not cause a deadlock.
+  */
+  public T[] grabMultiple(IThreadContext threadContext, Class<T> clazz,
+    String[] orderingKeys, String[] connectionNames,
+    String[] classNames, ConfigParams[] configInfos, int[] maxPoolSizes)
+    throws ManifoldCFException
+  {
+    T[] rval = (T[])Array.newInstance(clazz,classNames.length);
+    Map<String,Integer> orderMap = new HashMap<String,Integer>();
+    for (int i = 0; i < orderingKeys.length; i++)
+    {
+      if (orderMap.get(orderingKeys[i]) != null)
+        throw new ManifoldCFException("Found duplicate order key");
+      orderMap.put(orderingKeys[i],new Integer(i));
+    }
+    java.util.Arrays.sort(orderingKeys);
+    for (int i = 0; i < orderingKeys.length; i++)
+    {
+      String orderingKey = orderingKeys[i];
+      int index = orderMap.get(orderingKey).intValue();
+      String connectionName = connectionNames[index];
+      String className = classNames[index];
+      ConfigParams cp = configInfos[index];
+      int maxPoolSize = maxPoolSizes[index];
+      try
+      {
+        T connector = grab(threadContext,connectionName,className,cp,maxPoolSize);
+        rval[index] = connector;
+      }
+      catch (Throwable e)
+      {
+        while (i > 0)
+        {
+          i--;
+          orderingKey = orderingKeys[i];
+          index = orderMap.get(orderingKey).intValue();
+          try
+          {
+            release(threadContext,connectionNames[index],rval[index]);
+          }
+          catch (ManifoldCFException e2)
+          {
+          }
+        }
+        if (e instanceof ManifoldCFException)
+          throw (ManifoldCFException)e;
+        else if (e instanceof RuntimeException)
+          throw (RuntimeException)e;
+        else if (e instanceof Error)
+          throw (Error)e;
+        else
+          throw new RuntimeException("Unexpected exception type: "+e.getClass().getName()+": "+e.getMessage(),e);
+      }
+    }
+    return rval;
+  }
+
+  /** Get a connector.
+  * The connector is specified by its connection name, class, and parameters.  If the
+  * class and parameters corresponding to a connection name change, then this code
+  * will destroy any old connector instance that does not correspond, and create a new
+  * one using the new class and parameters.
+  *@param threadContext is the current thread context.
+  *@param connectionName is the name of the connection.  This functions as a pool key.
+  *@param className is the name of the class to get a connector for.
+  *@param configInfo are the name/value pairs constituting configuration info
+  * for this class.
+  */
+  public T grab(IThreadContext threadContext, String connectionName,
+    String className, ConfigParams configInfo, int maxPoolSize)
+    throws ManifoldCFException
+  {
+    // We want to get handles off the pool and use them.  But the
+    // handles we fetch have to have the right config information.
+
+    // Loop until we successfully get a connector.  This is necessary because the
+    // pool may vanish because it has been closed.
+    while (true)
+    {
+      Pool p;
+      synchronized (poolHash)
+      {
+        p = poolHash.get(connectionName);
+        if (p == null)
+        {
+          p = new Pool(threadContext, maxPoolSize, connectionName);
+          poolHash.put(connectionName,p);
+          // Do an initial poll right away, so we don't have to wait 5 seconds to 
+          // get a connector instance unless they're already all in use.
+          p.pollAll(threadContext);
+        }
+        else
+        {
+          p.updateMaximumPoolSize(threadContext, maxPoolSize);
+        }
+      }
+
+      T rval = p.getConnector(threadContext,className,configInfo);
+      if (rval != null)
+        return rval;
+    }
+
+  }
+
+  /** Release multiple output connectors.
+  */
+  public void releaseMultiple(IThreadContext threadContext, String[] connectionNames, T[] connectors)
+    throws ManifoldCFException
+  {
+    ManifoldCFException currentException = null;
+    for (int i = 0; i < connectors.length; i++)
+    {
+      String connectionName = connectionNames[i];
+      T c = connectors[i];
+      try
+      {
+        release(threadContext,connectionName,c);
+      }
+      catch (ManifoldCFException e)
+      {
+        if (currentException == null)
+          currentException = e;
+      }
+    }
+    if (currentException != null)
+      throw currentException;
+  }
+
+  /** Release an output connector.
+  *@param connectionName is the connection name.
+  *@param connector is the connector to release.
+  */
+  public void release(IThreadContext threadContext, String connectionName, T connector)
+    throws ManifoldCFException
+  {
+    // If the connector is null, skip the release, because we never really got the connector in the first place.
+    if (connector == null)
+      return;
+
+    // Figure out which pool this goes on, and put it there
+    Pool p;
+    synchronized (poolHash)
+    {
+      p = poolHash.get(connectionName);
+    }
+
+    if (p != null)
+      p.releaseConnector(threadContext, connector);
+    else
+    {
+      // Destroy the connector instance, since the pool is gone and that means we're shutting down
+      connector.setThreadContext(threadContext);
+      try
+      {
+        connector.disconnect();
+      }
+      finally
+      {
+        connector.clearThreadContext();
+      }
+    }
+  }
+
+  /** Idle notification for inactive output connector handles.
+  * This method polls all inactive handles.
+  */
+  public void pollAllConnectors(IThreadContext threadContext)
+    throws ManifoldCFException
+  {
+    // System.out.println("Pool stats:");
+
+    // Go through the whole pool and notify everyone
+    synchronized (poolHash)
+    {
+      Iterator<String> iter = poolHash.keySet().iterator();
+      while (iter.hasNext())
+      {
+        String connectionName = iter.next();
+        Pool p = poolHash.get(connectionName);
+        if (isConnectionNameValid(threadContext,connectionName))
+          p.pollAll(threadContext);
+        else
+        {
+          p.releaseAll(threadContext);
+          iter.remove();
+        }
+      }
+    }
+
+  }
+
+  /** Flush only those connector handles that are currently unused.
+  */
+  public void flushUnusedConnectors(IThreadContext threadContext)
+    throws ManifoldCFException
+  {
+    // Go through the whole pool and clean it out
+    synchronized (poolHash)
+    {
+      Iterator<Pool> iter = poolHash.values().iterator();
+      while (iter.hasNext())
+      {
+        Pool p = iter.next();
+        p.flushUnused(threadContext);
+      }
+    }
+  }
+
+  /** Clean up all open output connector handles.
+  * This method is called when the connector pool needs to be flushed,
+  * to free resources.
+  *@param threadContext is the local thread context.
+  */
+  public void closeAllConnectors(IThreadContext threadContext)
+    throws ManifoldCFException
+  {
+    // Go through the whole pool and clean it out
+    synchronized (poolHash)
+    {
+      Iterator<Pool> iter = poolHash.values().iterator();
+      while (iter.hasNext())
+      {
+        Pool p = iter.next();
+        p.releaseAll(threadContext);
+        iter.remove();
+      }
+    }
+  }
+
+  // Protected methods and classes
+  
+  protected String buildServiceTypeName(String connectionName)
+  {
+    return serviceTypePrefix + connectionName;
+  }
+  
+  protected String buildTargetCalcLockName(String connectionName)
+  {
+    return targetCalcLockPrefix + serviceTypePrefix + connectionName;
+  }
+  
+  /** This class represents a value in the pool hash, which corresponds to a given key.
+  */
+  protected class Pool
+  {
+    /** Whether this pool is alive */
+    protected boolean isAlive = true;
+    /** The global maximum for this pool */
+    protected int globalMax;
+    /** Service type name */
+    protected final String serviceTypeName;
+    /** The (anonymous) service name */
+    protected final String serviceName;
+    /** The target calculation lock name */
+    protected final String targetCalcLockName;
+    /** Place where we keep unused connector instances */
+    protected final List<T> stack = new ArrayList<T>();
+    /** The number of local instances we can currently pass out to requesting threads.  Initially zero until pool is apportioned */
+    protected int numFree = 0;
+    /** The number of instances we are allowed to hand out locally, at this time */
+    protected int localMax = 0;
+    /** The number of instances that are actually connected and in use, as of the last poll */
+    protected int localInUse = 0;
+    
+    /** Constructor
+    */
+    public Pool(IThreadContext threadContext, int maxCount, String connectionName)
+      throws ManifoldCFException
+    {
+      this.globalMax = maxCount;
+      this.targetCalcLockName = buildTargetCalcLockName(connectionName);
+      this.serviceTypeName = buildServiceTypeName(connectionName);
+      // Now, register and activate service anonymously, and record the service name we get.
+      ILockManager lockManager = LockManagerFactory.make(threadContext);
+      this.serviceName = lockManager.registerServiceBeginServiceActivity(serviceTypeName, null, null);
+    }
+
+    /** Update the maximum pool size.
+    *@param maxPoolSize is the new global maximum pool size.
+    */
+    public synchronized void updateMaximumPoolSize(IThreadContext threadContext, int maxPoolSize)
+      throws ManifoldCFException
+    {
+      // This updates the maximum global size that the pool uses.
+      globalMax = maxPoolSize;
+      // We do nothing else at this time; we rely on polling to reapportion the pool.
+    }
+
+    
+    /** Grab a connector.
+    * If none exists, construct it using the information in the pool key.
+    *@return the connector, or null if no connector could be connected.
+    */
+    public synchronized T getConnector(IThreadContext threadContext, String className, ConfigParams configParams)
+      throws ManifoldCFException
+    {
+      // numFree represents the number of available connector instances that have not been given out at this moment.
+      // So it's the max minus the pool count minus the number in use.
+      while (isAlive && numFree <= 0)
+      {
+        try
+        {
+          wait();
+        }
+        catch (InterruptedException e)
+        {
+          throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+        }
+      }
+      if (!isAlive)
+        return null;
+      
+      // We decrement numFree when we hand out a connector instance; we increment numFree when we
+      // throw away a connector instance from the pool.
+      while (true)
+      {
+        if (stack.size() == 0)
+        {
+          T newrc = createConnectorInstance(threadContext,className);
+          newrc.connect(configParams);
+          stack.add(newrc);
+        }
+        
+        // Pop an instance off the pool; its thread context is set below, and that set can fail.
+        T rc = stack.remove(stack.size()-1);
+        // Set the thread context.  This can throw an exception!!  We need to be sure our bookkeeping
+        // is resilient against that possibility.  Losing a connector instance that was just sitting
+        // in the pool does NOT affect numFree, so no change needed here; we just can't disconnect the
+        // connector instance if this fails.
+        rc.setThreadContext(threadContext);
+        // Verify that the connector is in fact compatible
+        if (!(rc.getClass().getName().equals(className) && rc.getConfiguration().equals(configParams)))
+        {
+          // Looks like parameters have changed, so discard old instance.
+          try
+          {
+            rc.disconnect();
+          }
+          finally
+          {
+            rc.clearThreadContext();
+          }
+          continue;
+        }
+        // About to return a connector instance; decrement numFree accordingly.
+        numFree--;
+        return rc;
+      }
+    }
+
+    /** Release a connector to the pool.
+    *@param connector is the connector.
+    */
+    public synchronized void releaseConnector(IThreadContext threadContext, T connector)
+      throws ManifoldCFException
+    {
+      if (connector == null)
+        return;
+
+      // Make sure connector knows it's released
+      connector.clearThreadContext();
+      // Return it to the pool, and note that it is no longer in use.
+      stack.add(connector);
+      numFree++;
+      // Determine if we need to free some connectors.  If the number
+      // of allocated connectors exceeds the target, we unload some
+      // off the stack.
+      // The question is whether the stack has too many connector instances
+      // on it.  Obviously, if stack.size() > max, it does - but remember
+      // that the number of outstanding connectors is max - numFree.
+      // So, we have an excess if stack.size() > max - (max-numFree).
+      // Simplifying: excess is when stack.size() > numFree.
+      while (stack.size() > 0 && stack.size() > numFree)
+      {
+        // Try to find a connector instance that is not actually connected.
+        // These are likely to be at the front of the queue, since those are the
+        // oldest.
+        int j;
+        for (j = 0; j < stack.size(); j++)
+        {
+          if (!stack.get(j).isConnected())
+            break;
+        }
+        T rc;
+        if (j == stack.size())
+          rc = stack.remove(stack.size()-1);
+        else
+          rc = stack.remove(j);
+        rc.setThreadContext(threadContext);
+        try
+        {
+          rc.disconnect();
+        }
+        finally
+        {
+          rc.clearThreadContext();
+        }
+      }
+
+      notifyAll();
+    }
+
+    /** Notify all free connectors.
+    */
+    public synchronized void pollAll(IThreadContext threadContext)
+      throws ManifoldCFException
+    {
+      // The meat of the cross-cluster apportionment algorithm goes here!
+      // Two global numbers each service posts: "in-use" and "target".  At no time does a service *ever* post either a "target"
+      // that, together with all other active service targets, is in excess of the max.  Also, at no time does a service post
+      // a target that, when added to the other "in-use" values, exceeds the max.  If the "in-use" values everywhere else
+      // already equal or exceed the max, then the target will be zero.
+      // The target quota is calculated as follows:
+      // (1) Target is summed, excluding ours.  This is GlobalTarget.
+      // (2) In-use is summed, excluding ours.  This is GlobalInUse.
+      // (3) Our MaximumTarget is computed, which is Maximum - GlobalTarget or Maximum - GlobalInUse, whichever is
+      //     smaller, but never less than zero.
+      // (4) Our FairTarget is computed.  The FairTarget divides the Maximum by the number of services, and adds
+      //     1 randomly based on the remainder.
+      // (5) We compute OptimalTarget as follows: We start with the current local target.  If the current local target
+      //    exceeds the current local in-use count, we adjust OptimalTarget downward by one.  Otherwise we increase it
+      //    by an increment proportional to the global maximum, for a fast ramp-up.
+      // (6) Finally, we compute Target by taking the minimum of MaximumTarget, FairTarget, and OptimalTarget.
+
+      ILockManager lockManager = LockManagerFactory.make(threadContext);
+      lockManager.enterWriteLock(targetCalcLockName);
+      try
+      {
+        // Compute MaximumTarget
+        SumClass sumClass = new SumClass(serviceName);
+        lockManager.scanServiceData(serviceTypeName, sumClass);
+        //System.out.println("numServices = "+sumClass.getNumServices()+"; globalTarget = "+sumClass.getGlobalTarget()+"; globalInUse = "+sumClass.getGlobalInUse());
+        
+        int numServices = sumClass.getNumServices();
+        if (numServices == 0)
+          return;
+        int globalTarget = sumClass.getGlobalTarget();
+        int globalInUse = sumClass.getGlobalInUse();
+        int maximumTarget = globalMax - globalTarget;
+        if (maximumTarget > globalMax - globalInUse)
+          maximumTarget = globalMax - globalInUse;
+        if (maximumTarget < 0)
+          maximumTarget = 0;
+        
+        // Compute FairTarget
+        int fairTarget = globalMax / numServices;
+        int remainder = globalMax % numServices;
+        // Randomly choose whether we get an addition to the FairTarget
+        if (randomNumberGenerator.nextInt(numServices) < remainder)
+          fairTarget++;
+        
+        // Compute OptimalTarget (and poll connectors while we are at it)
+        int localInUse = localMax - numFree;      // These are the connectors that have been handed out
+        for (T rc : stack)
+        {
+          // Notify
+          rc.setThreadContext(threadContext);
+          try
+          {
+            rc.poll();
+            if (rc.isConnected())
+              localInUse++;       // Count every pooled connector that is still connected
+          }
+          finally
+          {
+            rc.clearThreadContext();
+          }
+        }
+        int optimalTarget = localMax;
+        if (localMax > localInUse)
+          optimalTarget--;
+        else
+        {
+          // We want a fast ramp up, so make this proportional to globalMax
+          int increment = globalMax >> 2;
+          if (increment == 0)
+            increment = 1;
+          optimalTarget += increment;
+        }
+        
+        //System.out.println(serviceTypeName+":maxTarget = "+maximumTarget+"; fairTarget = "+fairTarget+"; optimalTarget = "+optimalTarget);
+
+        // Now compute actual target
+        int target = maximumTarget;
+        if (target > fairTarget)
+          target = fairTarget;
+        if (target > optimalTarget)
+          target = optimalTarget;
+        
+        //System.out.println(serviceTypeName+":Picking target="+target+"; localInUse="+localInUse);
+        // Write these values to the service data variables.
+        // NOTE that there is a race condition here; the target value depends on all the calculations above being accurate, and not changing out from under us.
+        // So, that's why we have a write lock around the pool calculations.
+        
+        lockManager.updateServiceData(serviceTypeName, serviceName, pack(target, localInUse));
+        
+        // Now, update our localMax
+        if (target == localMax)
+          return;
+        //System.out.println(serviceTypeName+":Updating target: "+target);
+        // Compute the number of instances in use locally
+        localInUse = localMax - numFree;
+        localMax = target;
+        // numFree may turn out to be negative here!!  That's okay; we'll just free released connectors
+        // until we enter positive territory again.
+        numFree = localMax - localInUse;
+        notifyAll();
+      }
+      finally
+      {
+        lockManager.leaveWriteLock(targetCalcLockName);
+      }
+      
+      // Finally, free pooled instances in excess of target
+      while (stack.size() > 0 && stack.size() > numFree)
+      {
+        // Try to find a connector instance that is not actually connected.
+        // These are likely to be at the front of the queue, since those are the
+        // oldest.
+        int j;
+        for (j = 0; j < stack.size(); j++)
+        {
+          if (!stack.get(j).isConnected())
+            break;
+        }
+        T rc;
+        if (j == stack.size())
+          rc = stack.remove(stack.size()-1);
+        else
+          rc = stack.remove(j);
+        rc.setThreadContext(threadContext);
+        try
+        {
+          rc.disconnect();
+        }
+        finally
+        {
+          rc.clearThreadContext();
+        }
+      }
+
+    }
+
+    /** Flush unused connectors.
+    */
+    public synchronized void flushUnused(IThreadContext threadContext)
+      throws ManifoldCFException
+    {
+      while (stack.size() > 0)
+      {
+        // Disconnect
+        T rc = stack.remove(stack.size()-1);
+        rc.setThreadContext(threadContext);
+        try
+        {
+          rc.disconnect();
+        }
+        finally
+        {
+          rc.clearThreadContext();
+        }
+      }
+    }
+
+    /** Release all free connectors.
+    */
+    public synchronized void releaseAll(IThreadContext threadContext)
+      throws ManifoldCFException
+    {
+      flushUnused(threadContext);
+      
+      // End service activity
+      isAlive = false;
+      notifyAll();
+      ILockManager lockManager = LockManagerFactory.make(threadContext);
+      lockManager.endServiceActivity(serviceTypeName, serviceName);
+    }
+
+  }
+
+  protected static class SumClass implements IServiceDataAcceptor
+  {
+    protected final String serviceName;
+    protected int numServices = 0;
+    protected int globalTargetTally = 0;
+    protected int globalInUseTally = 0;
+    
+    public SumClass(String serviceName)
+    {
+      this.serviceName = serviceName;
+    }
+    
+    @Override
+    public boolean acceptServiceData(String serviceName, byte[] serviceData)
+      throws ManifoldCFException
+    {
+      numServices++;
+
+      if (!serviceName.equals(this.serviceName))
+      {
+        globalTargetTally += unpackTarget(serviceData);
+        globalInUseTally += unpackInUse(serviceData);
+      }
+      return false;
+    }
+
+    public int getNumServices()
+    {
+      return numServices;
+    }
+    
+    public int getGlobalTarget()
+    {
+      return globalTargetTally;
+    }
+    
+    public int getGlobalInUse()
+    {
+      return globalInUseTally;
+    }
+    
+  }
+  
+  protected static int unpackTarget(byte[] data)
+  {
+    if (data == null || data.length != 8)
+      return 0;
+    return (((int)data[0]) & 0xff) +
+      ((((int)data[1]) << 8) & 0xff00) +
+      ((((int)data[2]) << 16) & 0xff0000) +
+      ((((int)data[3]) << 24) & 0xff000000);
+  }
+
+  protected static int unpackInUse(byte[] data)
+  {
+    if (data == null || data.length != 8)
+      return 0;
+    return (((int)data[4]) & 0xff) +
+      ((((int)data[5]) << 8) & 0xff00) +
+      ((((int)data[6]) << 16) & 0xff0000) +
+      ((((int)data[7]) << 24) & 0xff000000);
+  }
+
+  protected static byte[] pack(int target, int inUse)
+  {
+    byte[] rval = new byte[8];
+    rval[0] = (byte)(target & 0xff);
+    rval[1] = (byte)((target >> 8) & 0xff);
+    rval[2] = (byte)((target >> 16) & 0xff);
+    rval[3] = (byte)((target >> 24) & 0xff);
+    rval[4] = (byte)(inUse & 0xff);
+    rval[5] = (byte)((inUse >> 8) & 0xff);
+    rval[6] = (byte)((inUse >> 16) & 0xff);
+    rval[7] = (byte)((inUse >> 24) & 0xff);
+    return rval;
+  }
+  
+}
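To make the apportionment arithmetic above concrete, here is a standalone worked example
that mirrors the pollAll() target computation with illustrative numbers (ignoring the
random +1 that may be added to FairTarget for the remainder):

    public class TargetCalcExample
    {
      public static void main(String[] args)
      {
        int globalMax = 10;     // configured maximum for the connection
        int numServices = 3;    // active cluster members
        int globalTarget = 4;   // sum of the other services' posted targets
        int globalInUse = 2;    // sum of the other services' posted in-use counts
        int localMax = 2;       // our current local target
        int localInUse = 2;     // our instances handed out or still connected

        // MaximumTarget: never exceed what the rest of the cluster leaves available.
        int maximumTarget = Math.max(0, Math.min(globalMax - globalTarget, globalMax - globalInUse));
        // FairTarget: an even share of the global maximum.
        int fairTarget = globalMax / numServices;
        // OptimalTarget: shrink by one if we have slack, otherwise ramp up quickly.
        int optimalTarget = (localMax > localInUse)
          ? localMax - 1
          : localMax + Math.max(1, globalMax >> 2);

        int target = Math.min(maximumTarget, Math.min(fairTarget, optimalTarget));
        System.out.println("target = " + target);   // prints "target = 3" with these numbers
      }
    }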
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/database/ConnectionFactory.java b/framework/core/src/main/java/org/apache/manifoldcf/core/database/ConnectionFactory.java
index f84187c..d49b75f 100644
--- a/framework/core/src/main/java/org/apache/manifoldcf/core/database/ConnectionFactory.java
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/database/ConnectionFactory.java
@@ -35,9 +35,6 @@
 {
   public static final String _rcsid = "@(#)$Id: ConnectionFactory.java 988245 2010-08-23 18:39:35Z kwright $";
 
-  // This default is designed to avoid strange errors with people using postgresql out of the box, where the maximum connection count is set to 100.
-  private static final int defaultMaxDBConnections = 50;
-  private static final int defaultTimeoutValue = 86400;
 
   private static HashMap checkedOutConnections = new HashMap();
 
@@ -47,7 +44,8 @@
   {
   }
 
-  public static WrappedConnection getConnection(String jdbcUrl, String jdbcDriver, String database, String userName, String password)
+  public static WrappedConnection getConnection(String jdbcUrl, String jdbcDriver, String database, String userName, String password,
+    int maxDBConnections, boolean debug)
     throws ManifoldCFException
   {
     // Make sure database driver is registered
@@ -59,35 +57,16 @@
     {
       throw new ManifoldCFException("Unable to load database driver: "+e.getMessage(),e,ManifoldCFException.SETUP_ERROR);
     }
-
-    ConnectionPoolManager cpm = poolManager.createPoolManager();
+    
+    ConnectionPoolManager cpm = poolManager.createPoolManager(debug);
     
     try
     {
       // Hope for a connection now
       WrappedConnection rval;
-      ConnectionPool cp = null;
-      try
-      {
-        cp = cpm.getPool(database);
-      }
-      catch (Exception e)
-      {
-      }
+      ConnectionPool cp = cpm.getPool(database);
       if (cp == null)
       {
-        String handleMax = ManifoldCF.getProperty(ManifoldCF.databaseHandleMaxcountProperty);
-        int maxDBConnections = defaultMaxDBConnections;
-        if (handleMax != null && handleMax.length() > 0)
-          maxDBConnections = Integer.parseInt(handleMax);
-        //String timeoutValueString = ManifoldCF.getProperty(ManifoldCF.databaseHandleTimeoutProperty);
-        //int timeoutValue = defaultTimeoutValue;
-        //if (timeoutValueString != null && timeoutValueString.length() > 0)
-        //  timeoutValue = Integer.parseInt(timeoutValueString);
-
-        // Logging.db.debug("adding pool alias [" + database + "]");
-        // I had to up the timeout from one hour to 3 due to the webconnector keeping some connections open a very long time...
-	//System.out.println("jdbcUrl = '"+jdbcUrl+"', userName='"+userName+"', password='"+password+"'");
         cpm.addAlias(database, jdbcDriver, jdbcUrl,
           userName, password,
           maxDBConnections, 300000L);
@@ -95,9 +74,25 @@
       }
       return getConnectionWithRetries(cp);
     }
-    catch (Exception e)
+    catch (InterruptedException e)
     {
-      throw new ManifoldCFException("Error getting connection: "+e.getMessage(),e,ManifoldCFException.DATABASE_ERROR);
+      throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+    }
+    catch (SQLException e)
+    {
+      throw new ManifoldCFException("Error getting connection: "+e.getMessage(),e,ManifoldCFException.DATABASE_CONNECTION_ERROR);
+    }
+    catch (ClassNotFoundException e)
+    {
+      throw new ManifoldCFException("Fatal error getting connection: "+e.getMessage(),e,ManifoldCFException.SETUP_ERROR);
+    }
+    catch (InstantiationException e)
+    {
+      throw new ManifoldCFException("Fatal error getting connection: "+e.getMessage(),e,ManifoldCFException.SETUP_ERROR);
+    }
+    catch (IllegalAccessException e)
+    {
+      throw new ManifoldCFException("Fatal error getting connection: "+e.getMessage(),e,ManifoldCFException.SETUP_ERROR);
     }
   }
 
@@ -114,7 +109,7 @@
   }
 
   protected static WrappedConnection getConnectionWithRetries(ConnectionPool cp)
-    throws SQLException, ManifoldCFException
+    throws SQLException, InterruptedException
   {
     // If we have a problem, we will wait a grand total of 30 seconds
     int retryCount = 3;
@@ -131,19 +126,8 @@
         // Eat the exception and try again
         retryCount--;
       }
-      catch (InterruptedException e)
-      {
-        throw new ManifoldCFException("Interrupted",ManifoldCFException.INTERRUPTED);
-      }
-      try
-      {
-        // Ten seconds is a long time
-        ManifoldCF.sleep(10000L);
-      }
-      catch (InterruptedException e)
-      {
-        throw new ManifoldCFException("Interrupted",ManifoldCFException.INTERRUPTED);
-      }
+      // Ten seconds is a long time
+      ManifoldCF.sleep(10000L);
     }
 
   }
@@ -173,19 +157,19 @@
   {
     private Integer poolExistenceLock = new Integer(0);
     private ConnectionPoolManager _pool = null;
-
+    
     private PoolManager()
     {
     }
 
-    public ConnectionPoolManager createPoolManager()
+    public ConnectionPoolManager createPoolManager(boolean debug)
       throws ManifoldCFException
     {
       synchronized (poolExistenceLock)
       {
         if (_pool != null)
           return _pool;
-        _pool = new ConnectionPoolManager(100);
+        _pool = new ConnectionPoolManager(100, debug);
         return _pool;
       }
     }
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/database/DBInterfaceDerby.java b/framework/core/src/main/java/org/apache/manifoldcf/core/database/DBInterfaceDerby.java
index ac48e6c..9f0097b 100644
--- a/framework/core/src/main/java/org/apache/manifoldcf/core/database/DBInterfaceDerby.java
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/database/DBInterfaceDerby.java
@@ -91,6 +91,7 @@
   protected static String getFullDatabasePath(String databaseName)
     throws ManifoldCFException
   {
+    // Derby is local file based so it cannot currently be used in zookeeper mode
     File path = ManifoldCF.getFileProperty(databasePathProperty);
     if (path == null)
       throw new ManifoldCFException("Derby database requires '"+databasePathProperty+"' property, containing a relative path");
@@ -1245,7 +1246,7 @@
       if (threshold == null)
       {
         // Look for this parameter; if we don't find it, use a default value.
-        reindexThreshold = ManifoldCF.getIntProperty("org.apache.manifold.db.derby.reindex."+tableName,250000);
+        reindexThreshold = lockManager.getSharedConfiguration().getIntProperty("org.apache.manifoldcf.db.derby.reindex."+tableName,250000);
         reindexThresholds.put(tableName,new Integer(reindexThreshold));
       }
       else
@@ -1302,7 +1303,7 @@
       if (threshold == null)
       {
         // Look for this parameter; if we don't find it, use a default value.
-        analyzeThreshold = ManifoldCF.getIntProperty("org.apache.manifold.db.derby.analyze."+tableName,5000);
+        analyzeThreshold = lockManager.getSharedConfiguration().getIntProperty("org.apache.manifoldcf.db.derby.analyze."+tableName,5000);
         analyzeThresholds.put(tableName,new Integer(analyzeThreshold));
       }
       else
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/database/DBInterfaceHSQLDB.java b/framework/core/src/main/java/org/apache/manifoldcf/core/database/DBInterfaceHSQLDB.java
index f81805a..47375ae 100644
--- a/framework/core/src/main/java/org/apache/manifoldcf/core/database/DBInterfaceHSQLDB.java
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/database/DBInterfaceHSQLDB.java
@@ -58,9 +58,9 @@
   public DBInterfaceHSQLDB(IThreadContext tc, String databaseName, String userName, String password)
     throws ManifoldCFException
   {
-    super(tc,getJDBCString(databaseName),_driver,getDatabaseString(databaseName),userName,password);
+    super(tc,getJDBCString(tc,databaseName),_driver,getDatabaseString(tc,databaseName),userName,password);
     cacheKey = CacheKeyFactory.makeDatabaseKey(this.databaseName);
-    this.isRemote = ManifoldCF.getProperty(databaseProtocolProperty) != null;
+    this.isRemote = LockManagerFactory.getProperty(tc,databaseProtocolProperty) != null;
     this.userName = userName;
     this.password = password;
     if (this.isRemote)
@@ -69,34 +69,34 @@
       schemaNameForQueries = "PUBLIC";
   }
 
-  protected static String getJDBCString(String databaseName)
+  protected static String getJDBCString(IThreadContext tc, String databaseName)
     throws ManifoldCFException
   {
     // For local, we use the database name as the name of the database files.
     // For remote, we connect to an instance specified by a different property, and use the database name as the schema name.
-    String protocol = ManifoldCF.getProperty(databaseProtocolProperty);
+    String protocol = LockManagerFactory.getProperty(tc,databaseProtocolProperty);
     if (protocol == null)
       return _localUrl+getFullDatabasePath(databaseName);
     
     // Remote instance.  Build the URL.
     if (legalProtocolValues.get(protocol) == null)
       throw new ManifoldCFException("The value of the '"+databaseProtocolProperty+"' property was illegal; try hsql, http, or https");
-    String server = ManifoldCF.getProperty(databaseServerProperty);
+    String server = LockManagerFactory.getProperty(tc,databaseServerProperty);
     if (server == null)
       throw new ManifoldCFException("HSQLDB remote mode requires '"+databaseServerProperty+"' property, containing a server name or IP address");
-    String port = ManifoldCF.getProperty(databasePortProperty);
+    String port = LockManagerFactory.getProperty(tc,databasePortProperty);
     if (port != null && port.length() > 0)
       server += ":"+port;
-    String instanceName = ManifoldCF.getProperty(databaseInstanceProperty);
+    String instanceName = LockManagerFactory.getProperty(tc,databaseInstanceProperty);
     if (instanceName != null && instanceName.length() > 0)
       server += "/" + instanceName;
     return _remoteUrl + protocol + "://" + server;
   }
   
-  protected static String getDatabaseString(String databaseName)
+  protected static String getDatabaseString(IThreadContext tc, String databaseName)
     throws ManifoldCFException
   {
-    String protocol = ManifoldCF.getProperty(databaseProtocolProperty);
+    String protocol = LockManagerFactory.getProperty(tc,databaseProtocolProperty);
     if (protocol == null)
       return getFullDatabasePath(databaseName);
     return databaseName;
@@ -142,6 +142,7 @@
   public void closeDatabase()
     throws ManifoldCFException
   {
+    //System.out.println("Close database called");
     if (!isRemote)
     {
       try
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/database/DBInterfaceMySQL.java b/framework/core/src/main/java/org/apache/manifoldcf/core/database/DBInterfaceMySQL.java
index 0062951..fbd6b33 100644
--- a/framework/core/src/main/java/org/apache/manifoldcf/core/database/DBInterfaceMySQL.java
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/database/DBInterfaceMySQL.java
@@ -62,14 +62,15 @@
   public DBInterfaceMySQL(IThreadContext tc, String databaseName, String userName, String password)
     throws ManifoldCFException
   {
-    super(tc,getJdbcUrl(databaseName),_driver,databaseName,userName,password);
+    super(tc,getJdbcUrl(tc,databaseName),_driver,databaseName,userName,password);
     cacheKey = CacheKeyFactory.makeDatabaseKey(this.databaseName);
     lockManager = LockManagerFactory.make(tc);
   }
 
-  private static String getJdbcUrl(String theDatabaseName)
+  private static String getJdbcUrl(IThreadContext tc, String theDatabaseName)
+    throws ManifoldCFException
   {
-    String server =  ManifoldCF.getProperty(mysqlServerProperty);
+    String server =  LockManagerFactory.getProperty(tc,mysqlServerProperty);
     if (server == null || server.length() == 0)
       server = "localhost";
     return "jdbc:mysql://"+server+"/"+theDatabaseName+"?useUnicode=true&characterEncoding=utf8";
@@ -615,7 +616,7 @@
     throws ManifoldCFException
   {
     // Get the client property
-    String client =  ManifoldCF.getProperty(mysqlClientProperty);
+    String client =  lockManager.getSharedConfiguration().getProperty(mysqlClientProperty);
     if (client == null || client.length() == 0)
       client = "localhost";
 
@@ -1278,7 +1279,7 @@
       if (threshold == null)
       {
         // Look for this parameter; if we don't find it, use a default value.
-        analyzeThreshold = ManifoldCF.getIntProperty("org.apache.manifold.db.mysql.analyze."+tableName,10000);
+        analyzeThreshold = lockManager.getSharedConfiguration().getIntProperty("org.apache.manifoldcf.db.mysql.analyze."+tableName,10000);
         analyzeThresholds.put(tableName,new Integer(analyzeThreshold));
       }
       else
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/database/DBInterfacePostgreSQL.java b/framework/core/src/main/java/org/apache/manifoldcf/core/database/DBInterfacePostgreSQL.java
index edc24d6..e50dbf4 100644
--- a/framework/core/src/main/java/org/apache/manifoldcf/core/database/DBInterfacePostgreSQL.java
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/database/DBInterfacePostgreSQL.java
@@ -38,10 +38,10 @@
   private static final String _driver = "org.postgresql.Driver";
 
   /** A lock manager handle. */
-  protected ILockManager lockManager;
+  protected final ILockManager lockManager;
   
   // Database cache key
-  protected String cacheKey;
+  protected final String cacheKey;
 	
   // Postgresql serializable transactions are broken in that transactions that occur within them do not in fact work properly.
   // So, once we enter the serializable realm, STOP any additional transactions from doing anything at all.
@@ -78,17 +78,18 @@
   public DBInterfacePostgreSQL(IThreadContext tc, String databaseName, String userName, String password)
     throws ManifoldCFException
   {
-    super(tc,getJdbcUrl(databaseName),_driver,databaseName,userName,password);
+    super(tc,getJdbcUrl(tc,databaseName),_driver,databaseName,userName,password);
     cacheKey = CacheKeyFactory.makeDatabaseKey(this.databaseName);
     lockManager = LockManagerFactory.make(tc);
   }
   
-  private static String getJdbcUrl(final String databaseName)
+  private static String getJdbcUrl(final IThreadContext tc, final String databaseName)
+    throws ManifoldCFException
   {
     String jdbcUrl = _defaultUrl + databaseName;
-    final String hostname = ManifoldCF.getProperty(postgresqlHostnameProperty);
-    final String ssl = ManifoldCF.getProperty(postgresqlSslProperty);
-    final String port = ManifoldCF.getProperty(postgresqlPortProperty);
+    final String hostname = LockManagerFactory.getProperty(tc,postgresqlHostnameProperty);
+    final String ssl = LockManagerFactory.getProperty(tc,postgresqlSslProperty);
+    final String port = LockManagerFactory.getProperty(tc,postgresqlPortProperty);
     if (hostname != null && hostname.length() > 0)
     {
       jdbcUrl = "jdbc:postgresql://" + hostname;
@@ -108,6 +109,7 @@
   /** Initialize.  This method is called once per JVM instance, in order to set up
   * database communication.
   */
+  @Override
   public void openDatabase()
     throws ManifoldCFException
   {
@@ -117,6 +119,7 @@
   /** Uninitialize.  This method is called during JVM shutdown, in order to close
   * all database communication.
   */
+  @Override
   public void closeDatabase()
     throws ManifoldCFException
   {
@@ -126,6 +129,7 @@
   /** Get the database general cache key.
   *@return the general cache key for the database.
   */
+  @Override
   public String getDatabaseCacheKey()
   {
     return cacheKey;
@@ -137,6 +141,7 @@
   * invalidated.
   *@param parameterMap is the map of column name/values to write.
   */
+  @Override
   public void performInsert(String tableName, Map<String,Object> parameterMap, StringSet invalidateKeys)
     throws ManifoldCFException
   {
@@ -190,6 +195,7 @@
   *@param whereClause is the where clause describing the match (including the WHERE), or null if none.
   *@param whereParameters are the parameters that come with the where clause, if any.
   */
+  @Override
   public void performUpdate(String tableName, Map<String,Object> parameterMap, String whereClause,
     List whereParameters, StringSet invalidateKeys)
     throws ManifoldCFException
@@ -256,6 +262,7 @@
   *@param whereClause is the where clause describing the match (including the WHERE), or null if none.
   *@param whereParameters are the parameters that come with the where clause, if any.
   */
+  @Override
   public void performDelete(String tableName, String whereClause, List whereParameters, StringSet invalidateKeys)
     throws ManifoldCFException
   {
@@ -282,6 +289,7 @@
   * layer.
   *@param invalidateKeys are the cache keys that should be invalidated, if any.
   */
+  @Override
   public void performCreate(String tableName, Map<String,ColumnDescription> columnMap, StringSet invalidateKeys)
     throws ManifoldCFException
   {
@@ -341,6 +349,7 @@
   *@param columnDeleteList is the list of column names to delete.
   *@param invalidateKeys are the cache keys that should be invalidated, if any.
   */
+  @Override
   public void performAlter(String tableName, Map<String,ColumnDescription> columnMap,
     Map<String,ColumnDescription> columnModifyMap, List<String> columnDeleteList,
     StringSet invalidateKeys)
@@ -435,6 +444,7 @@
   *@param columnList is the list of columns that need to be included
   * in the index, in order.
   */
+  @Override
   public void addTableIndex(String tableName, boolean unique, List<String> columnList)
     throws ManifoldCFException
   {
@@ -453,6 +463,7 @@
   *@param indexName is the optional name of the table index.  If null, a name will be chosen automatically.
   *@param description is the index description.
   */
+  @Override
   public void performAddIndex(String indexName, String tableName, IndexDescription description)
     throws ManifoldCFException
   {
@@ -489,6 +500,7 @@
   *@param indexName is the name of the index to remove.
   *@param tableName is the table the index belongs to.
   */
+  @Override
   public void performRemoveIndex(String indexName, String tableName)
     throws ManifoldCFException
   {
@@ -500,6 +512,7 @@
   *@param tableName is the name of the table to drop.
   *@param invalidateKeys are the cache keys that should be invalidated, if any.
   */
+  @Override
   public void performDrop(String tableName, StringSet invalidateKeys)
     throws ManifoldCFException
   {
@@ -511,6 +524,7 @@
   *@param adminPassword is the admin password.
   *@param invalidateKeys are the cache keys that should be invalidated, if any.
   */
+  @Override
   public void createUserAndDatabase(String adminUserName, String adminPassword, StringSet invalidateKeys)
     throws ManifoldCFException
   {
@@ -565,6 +579,7 @@
   *@param adminPassword is the admin password.
   *@param invalidateKeys are the cache keys that should be invalidated, if any.
   */
+  @Override
   public void dropUserAndDatabase(String adminUserName, String adminPassword, StringSet invalidateKeys)
     throws ManifoldCFException
   {
@@ -623,6 +638,7 @@
   *@param params are the parameterized values, if needed.
   *@param invalidateKeys are the cache keys to invalidate.
   */
+  @Override
   public void performModification(String query, List params, StringSet invalidateKeys)
     throws ManifoldCFException
   {
@@ -643,6 +659,7 @@
   *@return a map of column names and ColumnDescription objects, describing the schema, or null if the
   * table doesn't exist.
   */
+  @Override
   public Map<String,ColumnDescription> getTableSchema(String tableName, StringSet cacheKeys, String queryClass)
     throws ManifoldCFException
   {
@@ -687,6 +704,7 @@
   *@param queryClass is the name of the query class, or null.
   *@return a map of index names and IndexDescription objects, describing the indexes.
   */
+  @Override
   public Map<String,IndexDescription> getTableIndexes(String tableName, StringSet cacheKeys, String queryClass)
     throws ManifoldCFException
   {
@@ -770,6 +788,7 @@
   *@param queryClass is the name of the query class, or null.
   *@return the set of tables.
   */
+  @Override
   public StringSet getAllTables(StringSet cacheKeys, String queryClass)
     throws ManifoldCFException
   {
@@ -795,6 +814,7 @@
   * or null if no LRU behavior desired.
   *@return a resultset.
   */
+  @Override
   public IResultSet performQuery(String query, List params, StringSet cacheKeys, String queryClass)
     throws ManifoldCFException
   {
@@ -818,6 +838,7 @@
   *@param returnLimit is a description of how to limit the return result, or null if no limit.
   *@return a resultset.
   */
+  @Override
   public IResultSet performQuery(String query, List params, StringSet cacheKeys, String queryClass,
     int maxResults, ILimitChecker returnLimit)
     throws ManifoldCFException
@@ -843,6 +864,7 @@
   *@param returnLimit is a description of how to limit the return result, or null if no limit.
   *@return a resultset.
   */
+  @Override
   public IResultSet performQuery(String query, List params, StringSet cacheKeys, String queryClass,
     int maxResults, ResultSpecification resultSpec, ILimitChecker returnLimit)
     throws ManifoldCFException
@@ -863,6 +885,7 @@
   *@param value is the value to be cast.
   *@return the query chunk needed.
   */
+  @Override
   public String constructDoubleCastClause(String value)
   {
     return "CAST("+value+" AS DOUBLE PRECISION)";
@@ -874,6 +897,7 @@
   *@param column is the column string to be counted.
   *@return the query chunk needed.
   */
+  @Override
   public String constructCountClause(String column)
   {
     return "COUNT("+column+")";
@@ -886,6 +910,7 @@
   *@param caseInsensitive is true of the regular expression match is to be case insensitive.
   *@return the query chunk needed, not padded with spaces on either side.
   */
+  @Override
   public String constructRegexpClause(String column, String regularExpression, boolean caseInsensitive)
   {
     return column + "~" + (caseInsensitive?"*":"") + regularExpression;
@@ -899,6 +924,7 @@
   *@param caseInsensitive is true if the regular expression match is to be case insensitive.
   *@return the expression chunk needed, not padded with spaces on either side.
   */
+  @Override
   public String constructSubstringClause(String column, String regularExpression, boolean caseInsensitive)
   {
     StringBuilder sb = new StringBuilder();
@@ -923,6 +949,7 @@
   *@param afterOrderBy is true if this offset/limit comes after an ORDER BY.
   *@return the proper clause, with no padding spaces on either side.
   */
+  @Override
   public String constructOffsetLimitClause(int offset, int limit, boolean afterOrderBy)
   {
     StringBuilder sb = new StringBuilder();
@@ -951,6 +978,7 @@
   *@param otherFields are the rest of the fields to return, keyed by the AS name, value being the base query column value, e.g. "value AS key"
   *@return a revised query that performs the necessary DISTINCT ON operation.  The list outputParameters will also be appropriately filled in.
   */
+  @Override
   public String constructDistinctOnClause(List outputParameters, String baseQuery, List baseParameters,
     String[] distinctFields, String[] orderFields, boolean[] orderFieldsAscending, Map<String,String> otherFields)
   {
@@ -1014,6 +1042,7 @@
   * to drop.
   *@return the maximum number of IN clause members.
   */
+  @Override
   public int getMaxInClause()
   {
     return 100;
@@ -1024,6 +1053,7 @@
   * to drop.
   *@return the maximum number of OR clause members.
   */
+  @Override
   public int getMaxOrClause()
   {
     return 25;
@@ -1033,6 +1063,7 @@
   * that can reasonably be expected to complete in an acceptable time.
   *@return the maximum number of rows.
   */
+  @Override
   public int getWindowedReportMaxRows()
   {
     return 5000;
@@ -1046,6 +1077,7 @@
   * calls to executeQuery().  After this should be a catch for every exception type, including Error, which should call the
   * signalRollback() method, and rethrow the exception.  Then, after that a finally{} block which calls endTransaction().
   */
+  @Override
   public void beginTransaction()
     throws ManifoldCFException
   {
@@ -1061,6 +1093,7 @@
   * signalRollback() method, and rethrow the exception.  Then, after that a finally{} block which calls endTransaction().
   *@param transactionType is the kind of transaction desired.
   */
+  @Override
   public void beginTransaction(int transactionType)
     throws ManifoldCFException
   {
@@ -1106,6 +1139,7 @@
 
   /** Signal that a rollback should occur on the next endTransaction().
   */
+  @Override
   public void signalRollback()
   {
     if (serializableDepth == 0)
@@ -1115,6 +1149,7 @@
   /** End a database transaction, either performing a commit or a rollback (depending on whether
   * signalRollback() was called within the transaction).
   */
+  @Override
   public void endTransaction()
     throws ManifoldCFException
   {
@@ -1143,6 +1178,7 @@
   }
 
   /** Abstract method to start a transaction */
+  @Override
   protected void startATransaction()
     throws ManifoldCFException
   {
@@ -1157,6 +1193,7 @@
   }
 
   /** Abstract method to commit a transaction */
+  @Override
   protected void commitCurrentTransaction()
     throws ManifoldCFException
   {
@@ -1172,6 +1209,7 @@
   }
   
   /** Abstract method to roll back a transaction */
+  @Override
   protected void rollbackCurrentTransaction()
     throws ManifoldCFException
   {
@@ -1186,20 +1224,38 @@
   }
   
   /** Abstract method for explaining a query */
+  @Override
   protected void explainQuery(String query, List params)
     throws ManifoldCFException
   {
-    IResultSet x = executeUncachedQuery("EXPLAIN "+query,params,true,
+    // We really can't retry at this level; it's not clear what the transaction nesting is etc.
+    // So if the EXPLAIN fails due to deadlock, we just give up.
+    IResultSet x;
+    String queryType = "EXPLAIN ";
+    if ("SELECT".equalsIgnoreCase(query.substring(0,6)))
+      queryType += "ANALYZE ";
+    x = executeUncachedQuery(queryType+query,params,true,
       -1,null,null);
-    int k = 0;
-    while (k < x.getRowCount())
+    for (int k = 0; k < x.getRowCount(); k++)
     {
-      IResultRow row = x.getRow(k++);
+      IResultRow row = x.getRow(k);
       Iterator<String> iter = row.getColumns();
       String colName = (String)iter.next();
       Logging.db.warn(" Plan: "+row.getValue(colName).toString());
     }
     Logging.db.warn("");
+
+    if (query.indexOf("jobqueue") != -1)
+    {
+      // Dump jobqueue stats
+      x = executeUncachedQuery("select n_distinct, most_common_vals, most_common_freqs from pg_stats where tablename='jobqueue' and attname='status'",null,true,-1,null,null);
+      for (int k = 0; k < x.getRowCount(); k++)
+      {
+        IResultRow row = x.getRow(k);
+        Logging.db.warn(" Stats: n_distinct="+row.getValue("n_distinct").toString()+" most_common_vals="+row.getValue("most_common_vals").toString()+" most_common_freqs="+row.getValue("most_common_freqs").toString());
+      }
+      Logging.db.warn("");
+    }
   }
 
   
@@ -1365,7 +1421,7 @@
       if (threshold == null)
       {
         // Look for this parameter; if we don't find it, use a default value.
-        reindexThreshold = ManifoldCF.getIntProperty("org.apache.manifold.db.postgres.reindex."+tableName,250000);
+        reindexThreshold = lockManager.getSharedConfiguration().getIntProperty("org.apache.manifoldcf.db.postgres.reindex."+tableName,250000);
         reindexThresholds.put(tableName,new Integer(reindexThreshold));
       }
       else
@@ -1410,7 +1466,7 @@
       lockManager.leaveWriteCriticalSection(tableStatisticsLock);
     }
     
-      // Analysis.
+    // Analysis.
     // Here we count tuple addition.
     eventCount = modifyCount + insertCount;
     tableStatisticsLock = statslockAnalyzePrefix+tableName;
@@ -1422,7 +1478,7 @@
       if (threshold == null)
       {
         // Look for this parameter; if we don't find it, use a default value.
-        analyzeThreshold = ManifoldCF.getIntProperty("org.apache.manifold.db.postgres.analyze."+tableName,5000);
+        analyzeThreshold = lockManager.getSharedConfiguration().getIntProperty("org.apache.manifoldcf.db.postgres.analyze."+tableName,2000);
         analyzeThresholds.put(tableName,new Integer(analyzeThreshold));
       }
       else
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/database/Database.java b/framework/core/src/main/java/org/apache/manifoldcf/core/database/Database.java
index cf25a5c..ab673bf 100644
--- a/framework/core/src/main/java/org/apache/manifoldcf/core/database/Database.java
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/database/Database.java
@@ -38,11 +38,11 @@
 {
   public static final String _rcsid = "@(#)$Id: Database.java 988245 2010-08-23 18:39:35Z kwright $";
 
-  protected ICacheManager cacheManager;
-  protected IThreadContext context;
-  protected String jdbcUrl;
-  protected String jdbcDriverClass;
-  protected String databaseName;
+  protected final ICacheManager cacheManager;
+  protected final IThreadContext context;
+  protected final String jdbcUrl;
+  protected final String jdbcDriverClass;
+  protected final String databaseName;
   protected String userName;
   protected String password;
   protected TransactionHandle th = null;
@@ -52,7 +52,9 @@
   protected int delayedTransactionDepth = 0;
   protected Map<String,Modifications> modificationsSet = new HashMap<String,Modifications>();
 
-  protected long maxQueryTime;
+  protected final long maxQueryTime;
+  protected final boolean debug;
+  protected final int maxDBConnections;
   
   protected static Random random = new Random();
 
@@ -68,7 +70,10 @@
     this.userName = userName;
     this.password = password;
     
-    this.maxQueryTime = ((long)ManifoldCF.getIntProperty(ManifoldCF.databaseQueryMaxTimeProperty,60)) * 1000L;
+    this.maxQueryTime = ((long)LockManagerFactory.getIntProperty(context, ManifoldCF.databaseQueryMaxTimeProperty,60)) * 1000L;
+    this.debug = LockManagerFactory.getBooleanProperty(context, ManifoldCF.databaseConnectionTrackingProperty, false);
+    this.maxDBConnections = LockManagerFactory.getIntProperty(context, ManifoldCF.databaseHandleMaxcountProperty, 50);
+
     this.cacheManager = CacheManagerFactory.make(context);
   }
 
@@ -247,7 +252,8 @@
     // Get a semipermanent connection
     if (connection == null)
     {
-      connection = ConnectionFactory.getConnection(jdbcUrl,jdbcDriverClass,databaseName,userName,password);
+      connection = ConnectionFactory.getConnection(jdbcUrl,jdbcDriverClass,databaseName,userName,password,
+        maxDBConnections,debug);
       try
       {
         // Initialize the connection (for HSQLDB)
@@ -682,13 +688,26 @@
       }
     }
 
-    public Throwable getException()
+    public IResultSet finishUp()
+      throws ManifoldCFException, InterruptedException
     {
-      return exception;
-    }
-
-    public IResultSet getResponse()
-    {
+      join();
+      Throwable thr = exception;
+      if (thr != null)
+      {
+        if (thr instanceof ManifoldCFException)
+        {
+          // Nest the exceptions so there is a hope we actually see the context, while preserving the kind of error it is
+          ManifoldCFException me = (ManifoldCFException)thr;
+          throw new ManifoldCFException("Database exception: "+me.getMessage(),me.getCause(),me.getErrorCode());
+        }
+        else if (thr instanceof Error)
+          throw (Error)thr;
+        else if (thr instanceof RuntimeException)
+          throw (RuntimeException)thr;
+        else
+          throw new RuntimeException("Unknown exception: "+thr.getClass().getName()+": "+thr.getMessage(),thr);
+      }
       return rval;
     }
   }
@@ -706,24 +725,22 @@
     try
     {
       t.start();
-      t.join();
-      Throwable thr = t.getException();
-      if (thr != null)
-      {
-        if (thr instanceof ManifoldCFException)
-        {
-          // Nest the exceptions so there is a hope we actually see the context, while preserving the kind of error it is
-          ManifoldCFException me = (ManifoldCFException)thr;
-          throw new ManifoldCFException("Database exception: "+me.getMessage(),me.getCause(),me.getErrorCode());
-        }
-        else
-          throw (Error)thr;
-      }
-      return t.getResponse();
+      return t.finishUp();
     }
     catch (InterruptedException e)
     {
+      // Try to kill the background thread - but we can't wait for it...
       t.interrupt();
+      // VERY IMPORTANT: Try to close the connection, so nothing is left dangling.  The connection will be abandoned anyhow.
+      try
+      {
+        if (!connection.getAutoCommit())
+          connection.rollback();
+        connection.close();
+      }
+      catch (Exception e2)
+      {
+      }
       // We need the caller to abandon any connections left around, so rethrow in a way that forces them to process the event properly.
       throw new ManifoldCFException(e.getMessage(),e,ManifoldCFException.INTERRUPTED);
     }
@@ -755,7 +772,8 @@
     else
     {
       // Grab a connection
-      WrappedConnection tempConnection = ConnectionFactory.getConnection(jdbcUrl,jdbcDriverClass,databaseName,userName,password);
+      WrappedConnection tempConnection = ConnectionFactory.getConnection(jdbcUrl,jdbcDriverClass,databaseName,userName,password,
+        maxDBConnections,debug);
       try
       {
         // Initialize the connection (for HSQLDB)
@@ -986,10 +1004,8 @@
                 {
                   String columnName = (String)iter.next();
                   Object colValue = m.getValue(columnName);
-                  if (colValue instanceof BinaryInput)
-                    ((BinaryInput)colValue).discard();
-                  else if (colValue instanceof CharacterInput)
-                    ((CharacterInput)colValue).discard();
+                  if (colValue instanceof PersistentDatabaseObject)
+                    ((PersistentDatabaseObject)colValue).discard();
                 }
               }
             }
@@ -1014,10 +1030,8 @@
         {
           String colName = (String)iter.next();
           Object o = row.getValue(colName);
-          if (o instanceof BinaryInput)
-            ((BinaryInput)o).discard();
-          else if (o instanceof CharacterInput)
-            ((CharacterInput)o).discard();
+          if (o instanceof PersistentDatabaseObject)
+            ((PersistentDatabaseObject)o).discard();
         }
       }
       if (e instanceof ManifoldCFException)
@@ -1096,18 +1110,11 @@
   {
     if (data != null)
     {
-      for (int i = 0; i < data.size(); i++)
+      for (Object x : data)
       {
-        // If the input type is a string, then set it as such.
-        // Otherwise, if it's an input stream, we make a blob out of it.
-        Object x = data.get(i);
-        if (x instanceof BinaryInput)
+        if (x instanceof PersistentDatabaseObject)
         {
-          ((BinaryInput)x).doneWithStream();
-        }
-        else if (x instanceof CharacterInput)
-        {
-          ((CharacterInput)x).doneWithStream();
+          ((PersistentDatabaseObject)x).doneWithStream();
         }
       }
     }
@@ -1355,10 +1362,8 @@
     }
     catch (Throwable e)
     {
-      if (result instanceof CharacterInput)
-        ((CharacterInput)result).discard();
-      else if (result instanceof BinaryInput)
-        ((BinaryInput)result).discard();
+      if (result instanceof PersistentDatabaseObject)
+        ((PersistentDatabaseObject)result).discard();
       if (e instanceof ManifoldCFException)
         throw (ManifoldCFException)e;
       if (e instanceof RuntimeException)
@@ -1450,7 +1455,11 @@
           }
           catch (ManifoldCFException e)
           {
-            Logging.db.error("Explain failed with error "+e.getMessage(),e);
+            // We need to know if explain generated a TRANSACTION_ABORT.  If so we have to rethrow it.
+            if (e.getErrorCode() == e.DATABASE_TRANSACTION_ABORT || e.getErrorCode() == e.INTERRUPTED)
+              throw e;
+            // Eat the exception
+            Logging.db.warn("Explain failed with error "+e.getMessage(),e);
           }
 
         }
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/extmimemap/ExtensionMimeMap.java b/framework/core/src/main/java/org/apache/manifoldcf/core/extmimemap/ExtensionMimeMap.java
index 945d254..1715934 100644
--- a/framework/core/src/main/java/org/apache/manifoldcf/core/extmimemap/ExtensionMimeMap.java
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/extmimemap/ExtensionMimeMap.java
@@ -29,14 +29,27 @@
   protected final static Map<String,String> mimeMap;
   static {
     mimeMap = new HashMap<String,String>();
-    mimeMap.put("txt","text/plain");
-    mimeMap.put("pdf","application/pdf");
-    mimeMap.put("doc","application/msword");
-    mimeMap.put("docx","application/vnd.openxmlformats-officedocument.wordprocessingml.document");
-    mimeMap.put("ppt","application/vnd.ms-powerpoint");
-    mimeMap.put("pptx","application/vnd.openxmlformats-officedocument.presentationml.presentation");
-    mimeMap.put("xls","application/vnd.ms-excel");
-    mimeMap.put("xlsx","application/vnd.openxmlformats-officedocument.spreadsheetml.sheet");
+    mimeMap.put("xml", "text/xml");
+    mimeMap.put("csv", "text/csv");
+    mimeMap.put("json", "application/json");
+    mimeMap.put("pdf", "application/pdf");
+    mimeMap.put("rtf", "text/rtf");
+    mimeMap.put("html", "text/html");
+    mimeMap.put("htm", "text/html");
+    mimeMap.put("doc", "application/msword");
+    mimeMap.put("docx", "application/vnd.openxmlformats-officedocument.wordprocessingml.document");
+    mimeMap.put("ppt", "application/vnd.ms-powerpoint");
+    mimeMap.put("pptx", "application/vnd.openxmlformats-officedocument.presentationml.presentation");
+    mimeMap.put("xls", "application/vnd.ms-excel");
+    mimeMap.put("xlsx", "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet");
+    mimeMap.put("odt", "application/vnd.oasis.opendocument.text");
+    mimeMap.put("ott", "application/vnd.oasis.opendocument.text");
+    mimeMap.put("odp", "application/vnd.oasis.opendocument.presentation");
+    mimeMap.put("otp", "application/vnd.oasis.opendocument.presentation");
+    mimeMap.put("ods", "application/vnd.oasis.opendocument.spreadsheet");
+    mimeMap.put("ots", "application/vnd.oasis.opendocument.spreadsheet");
+    mimeMap.put("txt", "text/plain");
+    mimeMap.put("log", "text/plain");
   }
 
   /** Map extension to mime type */
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/fuzzyml/TagParseState.java b/framework/core/src/main/java/org/apache/manifoldcf/core/fuzzyml/TagParseState.java
index a3b8766..070f730 100644
--- a/framework/core/src/main/java/org/apache/manifoldcf/core/fuzzyml/TagParseState.java
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/fuzzyml/TagParseState.java
@@ -238,6 +238,7 @@
         currentState = TAGPARSESTATE_SAWSECONDRIGHTBRACKET;
       else
       {
+        currentState = TAGPARSESTATE_IN_CDATA_BODY;
         if (noteEscapedCharacter(']'))
           return true;
         if (noteEscapedCharacter(thisChar))
@@ -248,8 +249,15 @@
     case TAGPARSESTATE_SAWSECONDRIGHTBRACKET:
       if (thisChar == '>')
         currentState = TAGPARSESTATE_NORMAL;
+      else if (thisChar == ']')
+      {
+        // currentState unchanged; emit the first bracket
+        if (noteEscapedCharacter(']'))
+          return true;
+      }
       else
       {
+        currentState = TAGPARSESTATE_IN_CDATA_BODY;
         if (noteEscapedCharacter(']'))
           return true;
         if (noteEscapedCharacter(']'))
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/fuzzyml/XMLFuzzyHierarchicalParseState.java b/framework/core/src/main/java/org/apache/manifoldcf/core/fuzzyml/XMLFuzzyHierarchicalParseState.java
index 51df3f5..beee0ca 100644
--- a/framework/core/src/main/java/org/apache/manifoldcf/core/fuzzyml/XMLFuzzyHierarchicalParseState.java
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/fuzzyml/XMLFuzzyHierarchicalParseState.java
@@ -83,7 +83,11 @@
     throws ManifoldCFException
   {
     // This sets currentContext == null as a side effect, unless an error occurs during cleanup!!
-    currentContext.cleanup();
+    if (currentContext != null)
+    {
+      currentContext.cleanup();
+      currentContext = null;
+    }
   }
 
   /** Map version of the noteTag method.
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/BinaryInput.java b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/BinaryInput.java
index f2e4dd6..ceef1ef 100644
--- a/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/BinaryInput.java
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/BinaryInput.java
@@ -25,7 +25,7 @@
 * There are no implied semantics in this class around managing the stream itself.
 * These semantics must be handled by a derived class.
 */
-public abstract class BinaryInput
+public abstract class BinaryInput extends PersistentDatabaseObject
 {
   public static final String _rcsid = "@(#)$Id: BinaryInput.java 988245 2010-08-23 18:39:35Z kwright $";
 
@@ -59,6 +59,7 @@
   }
 
   /** Close the stream we passed to JDBC */
+  @Override
   public void doneWithStream()
     throws ManifoldCFException
   {
@@ -70,6 +71,7 @@
   public abstract BinaryInput transfer();
 
   /** Discard the object */
+  @Override
   public void discard()
     throws ManifoldCFException
   {
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/CharacterInput.java b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/CharacterInput.java
index 5b73616..6d6564c 100644
--- a/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/CharacterInput.java
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/CharacterInput.java
@@ -25,7 +25,7 @@
 * There are no implied semantics in this class around managing the stream itself.
 * These semantics must be handled by a derived class.
 */
-public abstract class CharacterInput
+public abstract class CharacterInput extends PersistentDatabaseObject
 {
   public static final String _rcsid = "@(#)$Id: CharacterInput.java 988245 2010-08-23 18:39:35Z kwright $";
 
@@ -49,6 +49,7 @@
     return stream;
   }
 
+  @Override
   public void doneWithStream()
     throws ManifoldCFException
   {
@@ -76,10 +77,15 @@
   public abstract InputStream getUtf8Stream()
     throws ManifoldCFException;
 
+  /** Get binary UTF8 stream length directly */
+  public abstract long getUtf8StreamLength()
+    throws ManifoldCFException;
+
   /** Transfer to a new object; this causes the current object to become "already discarded" */
   public abstract CharacterInput transfer();
 
   /** Discard this object permanently */
+  @Override
   public void discard()
     throws ManifoldCFException
   {
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/ConnectorFactory.java b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/ConnectorFactory.java
new file mode 100644
index 0000000..5207193
--- /dev/null
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/ConnectorFactory.java
@@ -0,0 +1,208 @@
+/* $Id: OutputConnectorFactory.java 988245 2010-08-23 18:39:35Z kwright $ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.core.interfaces;
+
+import org.apache.manifoldcf.core.system.ManifoldCF;
+
+import java.util.*;
+import java.io.*;
+import java.lang.reflect.*;
+
+/** This is the base factory class for all IConnector objects.
+*/
+public abstract class ConnectorFactory<T extends IConnector>
+{
+  public static final String _rcsid = "@(#)$Id: OutputConnectorFactory.java 988245 2010-08-23 18:39:35Z kwright $";
+
+  protected ConnectorFactory()
+  {
+  }
+
+  /** Override this method to hook into a connector manager.
+  */
+  protected abstract boolean isInstalled(IThreadContext tc, String className)
+    throws ManifoldCFException;
+  
+  /** Install connector.
+  *@param className is the class name.
+  */
+  protected void installThis(IThreadContext threadContext, String className)
+    throws ManifoldCFException
+  {
+    T connector = getThisConnectorNoCheck(className);
+    connector.install(threadContext);
+  }
+
+  /** Uninstall connector.
+  *@param className is the class name.
+  */
+  protected void deinstallThis(IThreadContext threadContext, String className)
+    throws ManifoldCFException
+  {
+    T connector = getThisConnectorNoCheck(className);
+    connector.deinstall(threadContext);
+  }
+
+  /** Output the configuration header section.
+  */
+  protected void outputThisConfigurationHeader(IThreadContext threadContext, String className,
+    IHTTPOutput out, Locale locale, ConfigParams parameters, ArrayList tabsArray)
+    throws ManifoldCFException, IOException
+  {
+    T connector = getThisConnector(threadContext, className);
+    if (connector == null)
+      return;
+    connector.outputConfigurationHeader(threadContext,out,locale,parameters,tabsArray);
+  }
+
+  /** Output the configuration body section.
+  */
+  protected void outputThisConfigurationBody(IThreadContext threadContext, String className,
+    IHTTPOutput out, Locale locale, ConfigParams parameters, String tabName)
+    throws ManifoldCFException, IOException
+  {
+    T connector = getThisConnector(threadContext, className);
+    if (connector == null)
+      return;
+    connector.outputConfigurationBody(threadContext,out,locale,parameters,tabName);
+  }
+
+  /** Process configuration post data for a connector.
+  */
+  protected String processThisConfigurationPost(IThreadContext threadContext, String className,
+    IPostParameters variableContext, Locale locale, ConfigParams configParams)
+    throws ManifoldCFException
+  {
+    T connector = getThisConnector(threadContext, className);
+    if (connector == null)
+      return null;
+    return connector.processConfigurationPost(threadContext,variableContext,locale,configParams);
+  }
+  
+  /** View connector configuration.
+  */
+  protected void viewThisConfiguration(IThreadContext threadContext, String className,
+    IHTTPOutput out, Locale locale, ConfigParams configParams)
+    throws ManifoldCFException, IOException
+  {
+    T connector = getThisConnector(threadContext, className);
+    // We want to be able to view connections even if they have unregistered connectors.
+    if (connector == null)
+      return;
+    connector.viewConfiguration(threadContext,out,locale,configParams);
+  }
+
+  /** Get a connector instance, without checking for installed connector.
+  *@param className is the class name.
+  *@return the instance.
+  */
+  protected T getThisConnectorNoCheck(String className)
+    throws ManifoldCFException
+  {
+    T rval = getThisConnectorRaw(className);
+    if (rval == null)
+      throw new ManifoldCFException("No connector class '"+className+"' was found.");
+    return rval;
+  }
+
+  /** Get a connector instance.
+  *@param className is the class name.
+  *@return the instance.
+  */
+  protected T getThisConnector(IThreadContext threadContext, String className)
+    throws ManifoldCFException
+  {
+    if (!isInstalled(threadContext,className))
+      return null;
+
+    return getThisConnectorRaw(className);
+  }
+  
+  /** Instantiate a connector, but return null if the class is not found.
+  */
+  protected T getThisConnectorRaw(String className)
+    throws ManifoldCFException
+  {
+    try
+    {
+      Class theClass = ManifoldCF.findClass(className);
+      Class[] argumentClasses = new Class[0];
+      // Look for a constructor
+      Constructor c = theClass.getConstructor(argumentClasses);
+      Object[] arguments = new Object[0];
+      Object o = c.newInstance(arguments);
+      try
+      {
+        return (T)o;
+      }
+      catch (ClassCastException e)
+      {
+        throw new ManifoldCFException("Class '"+className+"' does not implement IConnector.");
+      }
+    }
+    catch (InvocationTargetException e)
+    {
+      Throwable z = e.getTargetException();
+      if (z instanceof Error)
+        throw (Error)z;
+      else if (z instanceof RuntimeException)
+        throw (RuntimeException)z;
+      else if (z instanceof ManifoldCFException)
+        throw (ManifoldCFException)z;
+      else
+        throw new RuntimeException("Unknown exception type: "+z.getClass().getName()+": "+z.getMessage(),z);
+    }
+    catch (ClassNotFoundException e)
+    {
+      return null;
+    }
+    catch (NoSuchMethodException e)
+    {
+      throw new ManifoldCFException("No appropriate constructor for IConnector implementation '"+
+        className+"'.  Need xxx().",
+        e);
+    }
+    catch (SecurityException e)
+    {
+      throw new ManifoldCFException("Protected constructor for IConnector implementation '"+className+"'",
+        e);
+    }
+    catch (IllegalAccessException e)
+    {
+      throw new ManifoldCFException("Unavailable constructor for IConnector implementation '"+className+"'",
+        e);
+    }
+    catch (IllegalArgumentException e)
+    {
+      throw new ManifoldCFException("Shouldn't happen!!!",e);
+    }
+    catch (InstantiationException e)
+    {
+      throw new ManifoldCFException("InstantiationException for IConnector implementation '"+className+"'",
+        e);
+    }
+    catch (ExceptionInInitializerError e)
+    {
+      throw new ManifoldCFException("ExceptionInInitializerError for IConnector implementation '"+className+"'",
+        e);
+    }
+
+  }
+
+}
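
The factory above is abstract; a concrete factory supplies isInstalled() and typically exposes static wrappers around the protected helpers. Below is a minimal hypothetical subclass sketch; the class name, the registry check, and the static accessor are illustrative assumptions, not part of this patch:

  public class ExampleConnectorFactory extends ConnectorFactory<IConnector>
  {
    protected final static ExampleConnectorFactory thisFactory = new ExampleConnectorFactory();

    private ExampleConnectorFactory()
    {
    }

    @Override
    protected boolean isInstalled(IThreadContext tc, String className)
      throws ManifoldCFException
    {
      // Assumption: consult whatever connector registration store this factory serves.
      return true;
    }

    /** Static convenience wrapper around the protected no-check helper. */
    public static IConnector getConnectorNoCheck(String className)
      throws ManifoldCFException
    {
      return thisFactory.getThisConnectorNoCheck(className);
    }
  }
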
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/DBInterfaceFactory.java b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/DBInterfaceFactory.java
index 2928db4..13ccce4 100644
--- a/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/DBInterfaceFactory.java
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/DBInterfaceFactory.java
@@ -40,9 +40,8 @@
     Object x = context.get(dbName);
     if (x == null || !(x instanceof IDBInterface))
     {
-      String implementationClass = ManifoldCF.getProperty(ManifoldCF.databaseImplementation);
-      if (implementationClass == null)
-        implementationClass = "org.apache.manifoldcf.core.database.DBInterfacePostgreSQL";
+      String implementationClass = LockManagerFactory.getStringProperty(context, ManifoldCF.databaseImplementation,
+        "org.apache.manifoldcf.core.database.DBInterfacePostgreSQL");
       try
       {
         Class c = Class.forName(implementationClass);
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/IConnectionThrottler.java b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/IConnectionThrottler.java
new file mode 100644
index 0000000..1d76f00
--- /dev/null
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/IConnectionThrottler.java
@@ -0,0 +1,112 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.core.interfaces;
+
+import java.util.*;
+
+/** An IConnectionThrottler object is not thread-local.  It gates connection
+* creation and pool management.
+* The underlying model is a pool of connections.  A connection gets pulled off the pool and
+* used to perform a fetch.  If there are insufficient connections in the pool, and there is
+* sufficient capacity to create a new connection, a connection will be created instead.
+* When the fetch is done, the connection is returned, and then there is a decision whether
+* or not to put the connection back into the pool, or to destroy it.  Finally, the pool is
+* periodically evaluated, and connections may be destroyed if either they have expired,
+* or the allocated connections are still over capacity.
+*
+* This object does not in itself contain a connection pool - but it is intended to assist
+* in the management of that pool.  Specifically, it tracks connections that are in the
+* pool, and connections that are handed out for use, and performs ALL the waiting needed
+* due to the pool being empty and/or the number of active connections being at or over
+* the quota.
+*/
+public interface IConnectionThrottler
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  // For grabbing a connection for use
+  
+  /** Get the connection from the pool */
+  public final static int CONNECTION_FROM_POOL = 0;
+  /** Create a connection */
+  public final static int CONNECTION_FROM_CREATION = 1;
+  /** Pool shutting down */
+  public final static int CONNECTION_FROM_NOWHERE = -1;
+  
+  /** Get permission to grab a connection for use.  If this object believes there is a connection
+  * available in the pool, it will update its pool size variable and return.  If not, this method
+  * evaluates whether a new connection should be created.  If neither condition is true, it
+  * waits until a connection is available.
+  *@return whether to take the connection from the pool, or create one, or whether the
+  * throttler is being shut down.
+  */
+  public int waitConnectionAvailable()
+    throws InterruptedException;
+  
+  /** For a new connection, obtain the fetch throttler to use for the connection.
+  * If the result from waitConnectionAvailable() is CONNECTION_FROM_CREATION,
+  * the calling code is expected to create a connection using the result of this method.
+  *@return the fetch throttler for a new connection.
+  */
+  public IFetchThrottler getNewConnectionFetchThrottler();
+  
+  /** This method indicates whether a formerly in-use connection should be placed back
+  * in the pool or destroyed.
+  *@return true if the connection should not be put into the pool but should instead
+  *  simply be destroyed.  If true is returned, the caller MUST call noteConnectionDestroyed()
+  *  after the connection is destroyed in order for the bookkeeping to work.  If false
+  *  is returned, the caller MUST call noteConnectionReturnedToPool() after the connection
+  *  is returned to the pool.
+  */
+  public boolean noteReturnedConnection();
+  
+  /** This method calculates whether a connection should be taken from the pool and destroyed
+  * in order to meet quota requirements.  If this method returns
+  * true, you MUST remove a connection from the pool, and you MUST call
+  * noteConnectionDestroyed() afterwards.
+  *@return true if a pooled connection should be destroyed.  If true is returned, the
+  * caller MUST call noteConnectionDestroyed() (below) in order for the bookkeeping to work.
+  */
+  public boolean checkDestroyPooledConnection();
+  
+  /** Connection expiration is tricky, because even though a connection may be identified as
+  * being expired, at the very same moment it could be handed out in another thread.  So there
+  * is a natural race condition present.
+  * The way the connection throttler deals with that is to allow the caller to reserve a connection
+  * for expiration.  This must be called BEFORE the actual identified connection is removed from the
+  * connection pool.  If the value returned by this method is "true", then a connection MUST be removed
+  * from the pool and destroyed, whether or not the identified connection is actually still available for
+  * destruction or not.
+  *@return true if a connection from the pool can be expired.  If true is returned, noteConnectionDestruction()
+  *  MUST be called once the connection has actually been destroyed.
+  */
+  public boolean checkExpireConnection();
+  
+  /** Note that a connection has been returned to the pool.  Call this method after a connection has been
+  * placed back into the pool and is available for use.
+  */
+  public void noteConnectionReturnedToPool();
+  
+  /** Note that a connection has been destroyed.  Call this method ONLY after noteReturnedConnection()
+  * or checkDestroyPooledConnection() returns true, AND the connection has been already
+  * destroyed.
+  */
+  public void noteConnectionDestroyed();
+  
+}
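
A minimal sketch of the calling sequence the interface above describes, assuming a hypothetical MyConnection class and a caller-owned pool (neither is defined by this patch); only methods and constants declared by IConnectionThrottler are used:

  MyConnection grabConnection(IConnectionThrottler throttler, List<MyConnection> myPool)
    throws InterruptedException
  {
    int decision = throttler.waitConnectionAvailable();
    if (decision == IConnectionThrottler.CONNECTION_FROM_NOWHERE)
      return null;      // Throttler is shutting down; no connection will be handed out
    if (decision == IConnectionThrottler.CONNECTION_FROM_CREATION)
      // Create a brand-new connection, handing it the per-connection fetch throttler
      return new MyConnection(throttler.getNewConnectionFetchThrottler());
    // CONNECTION_FROM_POOL: take an existing connection out of the caller's pool
    return myPool.remove(0);
  }

  void releaseConnection(IConnectionThrottler throttler, List<MyConnection> myPool, MyConnection connection)
  {
    if (throttler.noteReturnedConnection())
    {
      // Over quota: destroy the connection, then complete the bookkeeping
      connection.destroy();
      throttler.noteConnectionDestroyed();
    }
    else
    {
      // Return the connection for reuse, then complete the bookkeeping
      myPool.add(connection);
      throttler.noteConnectionReturnedToPool();
    }
  }
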
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/IConnector.java b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/IConnector.java
index 3042c40..d82c8e9 100644
--- a/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/IConnector.java
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/IConnector.java
@@ -64,6 +64,12 @@
   public void poll()
     throws ManifoldCFException;
 
+  /** This method is called to assess whether this connector instance should
+  * actually be counted as being connected.
+  *@return true if the connector instance is actually connected.
+  */
+  public boolean isConnected();
+  
   /** Close the connection.  Call this before discarding the repository connector.
   */
   public void disconnect()
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/IFetchThrottler.java b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/IFetchThrottler.java
new file mode 100644
index 0000000..91d51f4
--- /dev/null
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/IFetchThrottler.java
@@ -0,0 +1,46 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.core.interfaces;
+
+/** An IFetchThrottler object is meant to be used as part of a fetch cycle.  It is not
+* thread-local, and does not require access to a thread context.  It therefore does not
+* throw ManifoldCFExceptions, and is thus suitable for use in background threads, etc.
+* These objects are typically created by IConnectionThrottler objects - they are not meant
+* to be created directly.
+*/
+public interface IFetchThrottler
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  /** Get permission to fetch a document.  This grants permission to start
+  * fetching a single document, within the connection that has already been
+  * granted permission that created this object.
+  *@return false if the throttler is being shut down.
+  */
+  public boolean obtainFetchDocumentPermission()
+    throws InterruptedException;
+  
+  /** Open a fetch stream.  When done (or aborting), call
+  * IStreamThrottler.closeStream() to note the completion of the document
+  * fetch activity.
+  *@return the stream throttler to use to throttle the actual data access.
+  */
+  public IStreamThrottler createFetchStream();
+
+}
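
A short hypothetical sketch of a single throttled fetch using the interface above; IStreamThrottler.closeStream() is taken from the javadoc of createFetchStream(), and the actual document read is assumed to happen elsewhere:

  boolean fetchOneDocument(IFetchThrottler fetchThrottler)
    throws InterruptedException
  {
    if (!fetchThrottler.obtainFetchDocumentPermission())
      return false;     // Throttler is shutting down
    IStreamThrottler streamThrottler = fetchThrottler.createFetchStream();
    try
    {
      // ... open and read the document's data, throttled via streamThrottler ...
      return true;
    }
    finally
    {
      // Always note completion of the fetch activity, even when aborting
      streamThrottler.closeStream();
    }
  }
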
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/IHTTPOutput.java b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/IHTTPOutput.java
index 28d5023..3577ea4 100644
--- a/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/IHTTPOutput.java
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/IHTTPOutput.java
@@ -21,90 +21,11 @@
 import java.io.*;
 
 /** This interface abstracts from the output character stream used to construct
-* HTML output for a web interface.
+* HTML output for a web interface.  More broadly, it provides the services that all
+* connectors will need in order to provide UI components.
 */
-public interface IHTTPOutput
+public interface IHTTPOutput extends IHTTPOutputActivity, IPasswordMapperActivity
 {
   public static final String _rcsid = "@(#)$Id: IHTTPOutput.java 988245 2010-08-23 18:39:35Z kwright $";
 
-  /** Flush the stream */
-  public void flush()
-    throws IOException;
-  
-  /** Write a newline */
-  public void newLine()
-    throws IOException;
-  
-  /** Write a boolean */
-  public void print(boolean b)
-    throws IOException;
-  
-  /** Write a char */
-  public void print(char c)
-    throws IOException;
-  
-  /** Write an array of chars */
-  public void print(char[] c)
-    throws IOException;
-  
-  /** Write a double */
-  public void print(double d)
-    throws IOException;
-  
-  /** Write a float */
-  public void print(float f)
-    throws IOException;
-  
-  /** Write an int */
-  public void print(int i)
-    throws IOException;
-  
-  /** Write a long */
-  public void print(long l)
-    throws IOException;
-  
-  /** Write an object */
-  public void print(Object o)
-    throws IOException;
-  
-  /** Write a string */
-  public void print(String s)
-    throws IOException;
-  
-  /** Write a boolean */
-  public void println(boolean b)
-    throws IOException;
-  
-  /** Write a char */
-  public void println(char c)
-    throws IOException;
-  
-  /** Write an array of chars */
-  public void println(char[] c)
-    throws IOException;
-  
-  /** Write a double */
-  public void println(double d)
-    throws IOException;
-  
-  /** Write a float */
-  public void println(float f)
-    throws IOException;
-  
-  /** Write an int */
-  public void println(int i)
-    throws IOException;
-  
-  /** Write a long */
-  public void println(long l)
-    throws IOException;
-  
-  /** Write an object */
-  public void println(Object o)
-    throws IOException;
-  
-  /** Write a string */
-  public void println(String s)
-    throws IOException;
-
 }
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/IHTTPOutputActivity.java b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/IHTTPOutputActivity.java
new file mode 100644
index 0000000..a6e1e2f
--- /dev/null
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/IHTTPOutputActivity.java
@@ -0,0 +1,110 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.core.interfaces;
+
+import java.io.*;
+
+/** This interface abstracts from the output character stream used to construct
+* HTML output for a web interface.
+*/
+public interface IHTTPOutputActivity
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  /** Flush the stream */
+  public void flush()
+    throws IOException;
+  
+  /** Write a newline */
+  public void newLine()
+    throws IOException;
+  
+  /** Write a boolean */
+  public void print(boolean b)
+    throws IOException;
+  
+  /** Write a char */
+  public void print(char c)
+    throws IOException;
+  
+  /** Write an array of chars */
+  public void print(char[] c)
+    throws IOException;
+  
+  /** Write a double */
+  public void print(double d)
+    throws IOException;
+  
+  /** Write a float */
+  public void print(float f)
+    throws IOException;
+  
+  /** Write an int */
+  public void print(int i)
+    throws IOException;
+  
+  /** Write a long */
+  public void print(long l)
+    throws IOException;
+  
+  /** Write an object */
+  public void print(Object o)
+    throws IOException;
+  
+  /** Write a string */
+  public void print(String s)
+    throws IOException;
+  
+  /** Write a boolean */
+  public void println(boolean b)
+    throws IOException;
+  
+  /** Write a char */
+  public void println(char c)
+    throws IOException;
+  
+  /** Write an array of chars */
+  public void println(char[] c)
+    throws IOException;
+  
+  /** Write a double */
+  public void println(double d)
+    throws IOException;
+  
+  /** Write a float */
+  public void println(float f)
+    throws IOException;
+  
+  /** Write an int */
+  public void println(int i)
+    throws IOException;
+  
+  /** Write a long */
+  public void println(long l)
+    throws IOException;
+  
+  /** Write an object */
+  public void println(Object o)
+    throws IOException;
+  
+  /** Write a string */
+  public void println(String s)
+    throws IOException;
+
+}
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/ILockManager.java b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/ILockManager.java
index 79f284e..37ccc83 100644
--- a/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/ILockManager.java
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/ILockManager.java
@@ -19,13 +19,140 @@
 package org.apache.manifoldcf.core.interfaces;
 
 
-/** The lock manager manages locks across all threads and JVMs and cluster members.  It also
-* manages shared data, which is not necessarily atomic and should be protected by locks.
+/** The lock manager manages locks and shared data across all threads and JVMs and cluster members.  It also
+* manages transient shared data, which is not necessarily atomic and should be protected by locks.
 */
 public interface ILockManager
 {
   public static final String _rcsid = "@(#)$Id: ILockManager.java 988245 2010-08-23 18:39:35Z kwright $";
 
+  // Node synchronization
+  
+  // The node synchronization model involves keeping track of active agents entities, so that other entities
+  // can perform any necessary cleanup if one of the agents processes goes away unexpectedly.  There is a
+  // registration primitive (which can fail if the same guid is used as is already registered and active), a
+  // shutdown primitive (which makes a process id go inactive), and various inspection primitives.
+  
+  /** Register a service and begin service activity.
+  * This atomic operation creates a permanent registration entry for a service.
+  * If the permanent registration entry already exists, this method will not create it or
+  * treat it as an error.  This operation also enters the "active" zone for the service.  The "active" zone will remain in force until it is
+  * canceled, or until the process is interrupted.  Ideally, the corresponding endServiceActivity method will be
+  * called when the service shuts down.  Some ILockManager implementations require that this take place for
+  * proper management.
+  * If the transient registration already exists, it is treated as an error and an exception will be thrown.
+  * If registration will succeed, then this method may call an appropriate IServiceCleanup method to clean up either the
+  * current service, or all services on the cluster.
+  *@param serviceType is the type of service.
+  *@param serviceName is the name of the service to register.  If null is passed, a transient unique service name will be
+  *    created, and will be returned to the caller.
+  *@param cleanup is called to clean up either the current service, or all services of this type, if no other active service exists.
+  *    May be null.  Local service cleanup is never called if the serviceName argument is null.
+  *@return the actual service name.
+  */
+  public String registerServiceBeginServiceActivity(String serviceType, String serviceName,
+    IServiceCleanup cleanup)
+    throws ManifoldCFException;
+  
+  /** Register a service and begin service activity.
+  * This atomic operation creates a permanent registration entry for a service.
+  * If the permanent registration entry already exists, this method will not create it or
+  * treat it as an error.  This operation also enters the "active" zone for the service.  The "active" zone will remain in force until it is
+  * canceled, or until the process is interrupted.  Ideally, the corresponding endServiceActivity method will be
+  * called when the service shuts down.  Some ILockManager implementations require that this take place for
+  * proper management.
+  * If the transient registration already exists, it is treated as an error and an exception will be thrown.
+  * If registration will succeed, then this method may call an appropriate IServiceCleanup method to clean up either the
+  * current service, or all services on the cluster.
+  *@param serviceType is the type of service.
+  *@param serviceName is the name of the service to register.  If null is passed, a transient unique service name will be
+  *    created, and will be returned to the caller.
+  *@param initialData is the initial service data for this service.
+  *@param cleanup is called to clean up either the current service, or all services of this type, if no other active service exists.
+  *    May be null.  Local service cleanup is never called if the serviceName argument is null.
+  *@return the actual service name.
+  */
+  public String registerServiceBeginServiceActivity(String serviceType, String serviceName,
+    byte[] initialData, IServiceCleanup cleanup)
+    throws ManifoldCFException;
+
+  /** Set service data for a service.
+  * This updates the service's transient data (or deletes it).  If the service is not active, an exception is thrown.
+  *@param serviceType is the type of service.
+  *@param serviceName is the name of the service.
+  *@param serviceData is the data to update to (may be null).
+  */
+  public void updateServiceData(String serviceType, String serviceName, byte[] serviceData)
+    throws ManifoldCFException;
+
+  /** Retrieve service data for a service.
+  *@param serviceType is the type of service.
+  *@param serviceName is the name of the service.
+  *@return the service's transient data.
+  */
+  public byte[] retrieveServiceData(String serviceType, String serviceName)
+    throws ManifoldCFException;
+  
+  /** Scan service data for a service type.  Only active service data will be considered.
+  *@param serviceType is the type of service.
+  *@param dataAcceptor is the object that will be notified of each item of data for each service name found.
+  */
+  public void scanServiceData(String serviceType, IServiceDataAcceptor dataAcceptor)
+    throws ManifoldCFException;
+
+  /** Count all active services of a given type.
+  *@param serviceType is the service type.
+  *@return the count.
+  */
+  public int countActiveServices(String serviceType)
+    throws ManifoldCFException;
+  
+  /** Clean up any inactive services found.
+  * Calling this method will invoke cleanup of one inactive service at a time.
+  * If there are no inactive services around, then false will be returned.
+  * Note that this method will block whatever service it finds from starting up
+  * for the time the cleanup is proceeding.  At the end of the cleanup, if
+  * successful, the service will be atomically unregistered.
+  *@param serviceType is the service type.
+  *@param cleanup is the object to call to clean up an inactive service.
+  *@return true if there were no cleanup operations necessary.
+  */
+  public boolean cleanupInactiveService(String serviceType, IServiceCleanup cleanup)
+    throws ManifoldCFException;
+
+  /** End service activity.
+  * This operation exits the "active" zone for the service.  This must take place using the same ILockManager
+  * object that was used to registerServiceBeginServiceActivity() - which implies that it is the same thread.
+  *@param serviceType is the type of service.
+  *@param serviceName is the name of the service to exit.
+  */
+  public void endServiceActivity(String serviceType, String serviceName)
+    throws ManifoldCFException;
+    
+  /** Check whether a service is active or not.
+  * This operation returns true if the specified service is considered active at the moment.  Once a service
+  * is not active anymore, it can only return to activity by calling beginServiceActivity() once more.
+  *@param serviceType is the type of service.
+  *@param serviceName is the name of the service to check on.
+  *@return true if the service is considered active.
+  */
+  public boolean checkServiceActive(String serviceType, String serviceName)
+    throws ManifoldCFException;
+
+  // Configuration
+  
+  /** Get the current shared configuration.  This configuration is available in common among all nodes,
+  * and thus must not be used to look up configuration data that is specific to any one node.
+  *@return the globally-shared configuration information.
+  */
+  public ManifoldCFConfiguration getSharedConfiguration()
+    throws ManifoldCFException;
+
+  // Flags
+  
   /** Raise a flag.  Use this method to assert a condition, or send a global signal.  The flag will be reset when the
   * entire system is restarted.
   *@param flagName is the name of the flag to set.
@@ -46,6 +173,8 @@
   public boolean checkGlobalFlag(String flagName)
     throws ManifoldCFException;
 
+  // Shared data
+  
   /** Read data from a shared data resource.  Use this method to read any existing data, or get a null back if there is no such resource.
   * Note well that this is not necessarily an atomic operation, and it must thus be protected by a lock.
   *@param resourceName is the global name of the resource.
@@ -62,6 +191,8 @@
   public void writeData(String resourceName, byte[] data)
     throws ManifoldCFException;
 
+  // Locks
+  
   /** Wait for a time before retrying a lock.  Use this method to wait
   * after a LockException has been thrown.  (If this is not done, the application
   * will wind up busy waiting.)
@@ -190,6 +321,8 @@
   public void clearLocks()
     throws ManifoldCFException;
 
+  // Critical sections
+  
   /** Enter a named, read critical section (NOT a lock).  Critical sections never cross JVM boundaries.
   * Critical section names do not collide with lock names; they have a distinct namespace.
   *@param sectionKey is the name of the section to enter.  Only one thread can be in any given named
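As a hedged sketch of the registration/activity cycle the service methods above describe, the following assumes an ILockManager obtained from LockManagerFactory; the "AGENT" service type string and the surrounding class are illustrative, not defined by this change:

    import org.apache.manifoldcf.core.interfaces.*;

    public class ServiceLifecycleSketch
    {
      public static void run(IThreadContext threadContext)
        throws ManifoldCFException
      {
        ILockManager lockManager = LockManagerFactory.make(threadContext);
        // Register and enter the "active" zone; passing null requests a transient unique name.
        String serviceName =
          lockManager.registerServiceBeginServiceActivity("AGENT", null, null);
        try
        {
          // ... do the service's work; peers can observe us while we are active ...
          boolean stillActive = lockManager.checkServiceActive("AGENT", serviceName);
        }
        finally
        {
          // Leave the "active" zone so no cluster cleanup is needed for this instance.
          lockManager.endServiceActivity("AGENT", serviceName);
        }
      }
    }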
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/IParameterActivity.java b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/IParameterActivity.java
new file mode 100644
index 0000000..737b618
--- /dev/null
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/IParameterActivity.java
@@ -0,0 +1,66 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.core.interfaces;
+
+import java.util.*;
+
+/** This interface represents parameters that get posted during UI interaction.
+*/
+public interface IParameterActivity
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  /** Read an array of parameter values.
+  *@param name is the parameter name.
+  *@return the array of values, or null if it doesn't exist.
+  */
+  public String[] getParameterValues(String name);
+  
+  /** Get single parameter value.
+  *@param name is the parameter name.
+  *@return the value, or null if it doesn't exist.
+  */
+  public String getParameter(String name);
+  
+  /** Get a file parameter, as a binary input stream.
+  *@param name is the parameter name.
+  *@return the value, or null if it doesn't exist.
+  */
+  public BinaryInput getBinaryStream(String name)
+    throws ManifoldCFException;
+  
+  /** Get file parameter, as a byte array.
+  *@param name is the parameter name.
+  *@return the binary parameter as an array of bytes.
+  */
+  public byte[] getBinaryBytes(String name);
+  
+  /** Set a parameter value.
+  *@param name is the parameter name.
+  *@param value is the desired value.
+  */
+  public void setParameter(String name, String value);
+  
+  /** Set an array of parameter values.
+  *@param name is the parameter name.
+  *@param values is the array of desired values.
+  */
+  public void setParameterValues(String name, String[] values);
+
+}
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/IPasswordMapperActivity.java b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/IPasswordMapperActivity.java
new file mode 100644
index 0000000..685ebec
--- /dev/null
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/IPasswordMapperActivity.java
@@ -0,0 +1,53 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.core.interfaces;
+
+import java.io.*;
+
+/** This interface abstracts from password mapping activity, available for
+* all connector-provided UI components.
+* Passwords should not appear in any data sent from the crawler UI to the browser.  The
+* following methods are provided to assist the connector UI components in this task.
+* A connector coder should use these services as follows:
+* - When the password would ordinarily be put into a form element as the current password,
+*    instead use mapPasswordToKey() to create a key and put that in instead.
+* - When the "password" is posted, and the post is processed, use mapKeyToPassword() to
+*    restore the correct password.
+*/
+public interface IPasswordMapperActivity
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  /** Map a password to a unique key.
+  * This method works within a specific given browser session to replace an existing password with
+  * a key which can be used to look up the password at a later time.
+  *@param password is the password.
+  *@return the key.
+  */
+  public String mapPasswordToKey(String password);
+  
+  /** Convert a key, created by mapPasswordToKey, back to the original password, within
+  * the lifetime of the browser session.  If the provided key is not an actual key, the
+  * value is assumed to be a new password value.
+  *@param key is the key.
+  *@return the password.
+  */
+  public String mapKeyToPassword(String key);
+  
+}
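A short hedged sketch of the pattern described above, assuming the IHTTPOutput and IPostParameters objects normally passed to connector UI methods; the form field name and helper class are illustrative:

    import org.apache.manifoldcf.core.interfaces.*;
    import java.io.IOException;

    public class PasswordMappingSketch
    {
      // When emitting the form, put a mapped key in place of the real password.
      public static void outputField(IHTTPOutput out, String existingPassword)
        throws IOException
      {
        String key = out.mapPasswordToKey(existingPassword);
        out.print("<input type=\"password\" name=\"serverpassword\" value=\"" + key + "\"/>");
      }

      // When processing the post, map the key (or a freshly typed password) back.
      public static String readField(IPostParameters variableContext)
      {
        String posted = variableContext.getParameter("serverpassword");
        return (posted == null) ? null : variableContext.mapKeyToPassword(posted);
      }
    }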
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/IPostParameters.java b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/IPostParameters.java
index f228cb5..0093191 100644
--- a/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/IPostParameters.java
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/IPostParameters.java
@@ -20,47 +20,11 @@
 
 import java.util.*;
 
-/** This interface represents parameters that get posted during UI interaction.
+/** This interface represents resources that are made available to connector methods
+* during UI post operations.
 */
-public interface IPostParameters
+public interface IPostParameters extends IParameterActivity, IPasswordMapperActivity
 {
   public static final String _rcsid = "@(#)$Id: IPostParameters.java 988245 2010-08-23 18:39:35Z kwright $";
 
-  /** Read an array of parameter values.
-  *@param name is the parameter name.
-  *@return the array of values, or null if it doesn't exist.
-  */
-  public String[] getParameterValues(String name);
-  
-  /** Get single parameter value.
-  *@param name is the parameter name.
-  *@return the value, or null if it doesn't exist.
-  */
-  public String getParameter(String name);
-  
-  /** Get a file parameter, as a binary input stream.
-  *@param name is the parameter name.
-  *@return the value, or null if it doesn't exist.
-  */
-  public BinaryInput getBinaryStream(String name)
-    throws ManifoldCFException;
-  
-  /** Get file parameter, as a byte array.
-  *@param name is the parameter name.
-  *@return the binary parameter as an array of bytes.
-  */
-  public byte[] getBinaryBytes(String name);
-  
-  /** Set a parameter value.
-  *@param name is the parameter name.
-  *@param value is the desired value.
-  */
-  public void setParameter(String name, String value);
-  
-  /** Set an array of parameter values.
-  *@param name is the parameter name.
-  *@param values is the array of desired values.
-  */
-  public void setParameterValues(String name, String[] values);
-
 }
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/IServiceCleanup.java b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/IServiceCleanup.java
new file mode 100644
index 0000000..a06c085
--- /dev/null
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/IServiceCleanup.java
@@ -0,0 +1,50 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.core.interfaces;
+
+
+/** The IServiceCleanup interface describes functionality needed to clean up after
+* a service that has ended, as determined by an ILockManager instance.  It is always
+* throttled in a manner where only one thread in
+* the entire cluster will be cleaning up after any specific service.
+*/
+public interface IServiceCleanup
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  /** Clean up after the specified service.  This method will block any startup of the specified
+  * service for as long as it runs.
+  *@param serviceName is the name of the service.
+  */
+  public void cleanUpService(String serviceName)
+    throws ManifoldCFException;
+
+  /** Clean up after ALL services of the type on the cluster.
+  */
+  public void cleanUpAllServices()
+    throws ManifoldCFException;
+  
+  /** Perform cluster initialization - that is, whatever is needed presuming that the
+  * cluster has been down for an indeterminate period of time, but is otherwise in a clean
+  * state.
+  */
+  public void clusterInit()
+    throws ManifoldCFException;
+
+}
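For illustration, a skeletal IServiceCleanup implementation might look like the following; the class name and the comments describing what would be cleaned up are hypothetical:

    import org.apache.manifoldcf.core.interfaces.*;

    public class ExampleServiceCleanup implements IServiceCleanup
    {
      @Override
      public void cleanUpService(String serviceName)
        throws ManifoldCFException
      {
        // Release resources still attributed to the named, now-inactive service instance,
        // e.g. return its checked-out pooled handles to the shared pool.
      }

      @Override
      public void cleanUpAllServices()
        throws ManifoldCFException
      {
        // No other active service of this type exists; reset shared state for the whole type.
      }

      @Override
      public void clusterInit()
        throws ManifoldCFException
      {
        // The cluster has been down; bring shared state back to a clean baseline.
      }
    }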
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/IServiceDataAcceptor.java b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/IServiceDataAcceptor.java
new file mode 100644
index 0000000..16e077b
--- /dev/null
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/IServiceDataAcceptor.java
@@ -0,0 +1,37 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.core.interfaces;
+
+
+/** The IServiceDataAcceptor interface describes functionality needed to
+* tally service data values across all active services of a type.
+*/
+public interface IServiceDataAcceptor
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  /** Accept service data.
+  *@param serviceName is the name of the service that owns the data.
+  *@param serviceData is the actual data that is owned.
+  *@return true to abort the scan.
+  */
+  public boolean acceptServiceData(String serviceName, byte[] serviceData)
+    throws ManifoldCFException;
+
+}
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/IShutdownHook.java b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/IShutdownHook.java
index 7b4d2c9..4a58505 100644
--- a/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/IShutdownHook.java
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/IShutdownHook.java
@@ -23,7 +23,7 @@
 {
   /** Do the requisite cleanup.
   */
-  public void doCleanup()
+  public void doCleanup(IThreadContext threadContext)
     throws ManifoldCFException;
 }
 
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/IStreamThrottler.java b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/IStreamThrottler.java
new file mode 100644
index 0000000..b8e1759
--- /dev/null
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/IStreamThrottler.java
@@ -0,0 +1,50 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.core.interfaces;
+
+/** An IStreamThrottler object is meant to be embedded in an InputStream.  It is not
+* thread-local, and does not require access to a thread context; it therefore does not
+* throw ManifoldCFExceptions, and is suitable for use in background threads, etc.
+* These objects are typically created by IFetchThrottler objects - they are not meant
+* to be created directly.
+*/
+public interface IStreamThrottler
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  /** Obtain permission to read a block of bytes.  This method may wait until it is OK to proceed.
+  * The throttle group, bin names, etc are already known
+  * to this specific interface object, so it is unnecessary to include them here.
+  *@param byteCount is the number of bytes to get permissions to read.
+  *@return true if the wait took place as planned, or false if the system is being shut down.
+  */
+  public boolean obtainReadPermission(int byteCount)
+    throws InterruptedException;
+    
+  /** Note the completion of the read of a block of bytes.  Call this after
+  * obtainReadPermission() was successfully called, and bytes were successfully read.
+  *@param origByteCount is the originally requested number of bytes to get permissions to read.
+  *@param actualByteCount is the number of bytes actually read.
+  */
+  public void releaseReadPermission(int origByteCount, int actualByteCount);
+  
+  /** Note the stream being closed.
+  */
+  public void closeStream();
+}
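A hedged sketch of how an IStreamThrottler might be embedded in an InputStream, as the comment above suggests; this wrapper class is illustrative and is not the framework's own implementation (single-byte read() is omitted for brevity):

    import org.apache.manifoldcf.core.interfaces.IStreamThrottler;
    import java.io.*;

    public class ThrottledInputStreamSketch extends FilterInputStream
    {
      protected final IStreamThrottler throttler;

      public ThrottledInputStreamSketch(IStreamThrottler throttler, InputStream in)
      {
        super(in);
        this.throttler = throttler;
      }

      @Override
      public int read(byte[] b, int off, int len) throws IOException
      {
        try
        {
          // Obtain permission for the block before reading it.
          if (!throttler.obtainReadPermission(len))
            throw new IOException("Stream throttler is shutting down");
          int amt = in.read(b, off, len);
          // Report what was actually read so any unused allowance is accounted for.
          throttler.releaseReadPermission(len, (amt == -1) ? 0 : amt);
          return amt;
        }
        catch (InterruptedException e)
        {
          Thread.currentThread().interrupt();
          throw new InterruptedIOException(e.getMessage());
        }
      }

      @Override
      public void close() throws IOException
      {
        try
        {
          super.close();
        }
        finally
        {
          // Note the end of the fetch activity.
          throttler.closeStream();
        }
      }
    }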
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/IThrottleGroups.java b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/IThrottleGroups.java
new file mode 100644
index 0000000..0e5df5d
--- /dev/null
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/IThrottleGroups.java
@@ -0,0 +1,86 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.core.interfaces;
+
+import java.util.*;
+
+/** An IThrottleGroups object is thread-local and creates a virtual pool
+* of connections to resources whose access needs to be throttled in number, 
+* rate of use, and byte rate.
+*/
+public interface IThrottleGroups
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  /** Get all existing throttle groups for a throttle group type.
+  * The throttle group type typically describes a connector class, while the throttle group represents
+  * a namespace of bin names specific to that connector class.
+  *@param throttleGroupType is the throttle group type.
+  *@return the set of throttle groups for that group type.
+  */
+  public Set<String> getThrottleGroups(String throttleGroupType)
+    throws ManifoldCFException;
+  
+  /** Remove a throttle group.
+  *@param throttleGroupType is the throttle group type.
+  *@param throttleGroup is the throttle group.
+  */
+  public void removeThrottleGroup(String throttleGroupType, String throttleGroup)
+    throws ManifoldCFException;
+  
+  /** Create or update a throttle group.
+  *@param throttleGroupType is the throttle group type.
+  *@param throttleGroup is the throttle group.
+  *@param throttleSpec is the desired throttle specification object.
+  */
+  public void createOrUpdateThrottleGroup(String throttleGroupType, String throttleGroup, IThrottleSpec throttleSpec)
+    throws ManifoldCFException;
+
+  /** Construct connection throttler for connections with specific bin names.  This object is meant to be embedded with a connection
+  * pool of similar objects, and used to gate the creation of new connections in that pool.
+  *@param throttleGroupType is the throttle group type.
+  *@param throttleGroup is the throttle group.
+  *@param binNames are the connection type bin names.
+  *@return the connection throttling object, or null if the pool is being shut down.
+  */
+  public IConnectionThrottler obtainConnectionThrottler(String throttleGroupType, String throttleGroup, String[] binNames)
+    throws ManifoldCFException;
+  
+  /** Poll periodically, to update cluster-wide statistics and allocation.
+  *@param throttleGroupType is the throttle group type to update.
+  */
+  public void poll(String throttleGroupType)
+    throws ManifoldCFException;
+
+  /** Poll periodically, to update ALL cluster-wide statistics and allocation.
+  */
+  public void poll()
+    throws ManifoldCFException;
+
+  /** Free all unused resources.
+  */
+  public void freeUnusedResources()
+    throws ManifoldCFException;
+  
+  /** Shut down throttler permanently.
+  */
+  public void destroy()
+    throws ManifoldCFException;
+
+}
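A hedged sketch of the intended call sequence, assuming a throttle group type named after a connector class, an IThrottleSpec supplied by the caller, and the ThrottleGroupsFactory defined later in this change; the "WEB" type, connection name, and bin name below are illustrative:

    import org.apache.manifoldcf.core.interfaces.*;

    public class ThrottleGroupsSketch
    {
      public static IConnectionThrottler setUp(IThreadContext threadContext, IThrottleSpec spec)
        throws ManifoldCFException
      {
        IThrottleGroups throttleGroups = ThrottleGroupsFactory.make(threadContext);
        // Declare (or update) the throttle group for this connector class and connection.
        throttleGroups.createOrUpdateThrottleGroup("WEB", "MyWebConnection", spec);
        // Obtain the throttler that gates creation of new connections for these bins.
        return throttleGroups.obtainConnectionThrottler("WEB", "MyWebConnection",
          new String[]{"example.com"});
      }
    }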
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/IThrottleSpec.java b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/IThrottleSpec.java
new file mode 100644
index 0000000..3b8cfcd
--- /dev/null
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/IThrottleSpec.java
@@ -0,0 +1,44 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.core.interfaces;
+
+
+/** An IThrottleSpec object describes what throttling criteria to apply
+* per bin.
+*/
+public interface IThrottleSpec
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  /** Given a bin name, find the max open connections to use for that bin.
+  *@return Integer.MAX_VALUE if no limit found.
+  */
+  public int getMaxOpenConnections(String binName);
+
+  /** Look up minimum milliseconds per byte for a bin.
+  *@return 0.0 if no limit found.
+  */
+  public double getMinimumMillisecondsPerByte(String binName);
+
+  /** Look up minimum milliseconds for a fetch for a bin.
+  *@return 0 if no limit found.
+  */
+  public long getMinimumMillisecondsPerFetch(String binName);
+
+}
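As an illustration only, a minimal IThrottleSpec that applies one uniform limit to every bin could look as follows; the framework's real implementation is not part of this file:

    import org.apache.manifoldcf.core.interfaces.IThrottleSpec;

    public class UniformThrottleSpec implements IThrottleSpec
    {
      protected final int maxConnections;
      protected final double minMillisecondsPerByte;
      protected final long minMillisecondsPerFetch;

      public UniformThrottleSpec(int maxConnections, double minMillisecondsPerByte,
        long minMillisecondsPerFetch)
      {
        this.maxConnections = maxConnections;
        this.minMillisecondsPerByte = minMillisecondsPerByte;
        this.minMillisecondsPerFetch = minMillisecondsPerFetch;
      }

      @Override
      public int getMaxOpenConnections(String binName)
      {
        // Same ceiling for every bin, regardless of bin name.
        return maxConnections;
      }

      @Override
      public double getMinimumMillisecondsPerByte(String binName)
      {
        return minMillisecondsPerByte;
      }

      @Override
      public long getMinimumMillisecondsPerFetch(String binName)
      {
        return minMillisecondsPerFetch;
      }
    }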
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/LockManagerFactory.java b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/LockManagerFactory.java
index ab194dc..5954135 100644
--- a/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/LockManagerFactory.java
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/LockManagerFactory.java
@@ -40,9 +40,8 @@
     Object x = context.get(lockManager);
     if (x == null || !(x instanceof ILockManager))
     {
-      String implementationClass = ManifoldCF.getProperty(ManifoldCF.lockManagerImplementation);
-      if (implementationClass == null)
-        implementationClass = "org.apache.manifoldcf.core.lockmanager.LockManager";
+      String implementationClass = ManifoldCF.getStringProperty(ManifoldCF.lockManagerImplementation,
+        "org.apache.manifoldcf.core.lockmanager.LockManager");
       try
       {
         Class c = Class.forName(implementationClass);
@@ -75,5 +74,41 @@
     return (ILockManager)x;
   }
 
+  public static String getProperty(IThreadContext tc, String s)
+    throws ManifoldCFException
+  {
+    return make(tc).getSharedConfiguration().getProperty(s);
+  }
+  
+  public static String getStringProperty(IThreadContext tc, String s, String defaultValue)
+    throws ManifoldCFException
+  {
+    return make(tc).getSharedConfiguration().getStringProperty(s, defaultValue);
+  }
+  
+  public static int getIntProperty(IThreadContext tc, String s, int defaultValue)
+    throws ManifoldCFException
+  {
+    return make(tc).getSharedConfiguration().getIntProperty(s, defaultValue);
+  }
+
+  public static long getLongProperty(IThreadContext tc, String s, long defaultValue)
+    throws ManifoldCFException
+  {
+    return make(tc).getSharedConfiguration().getLongProperty(s, defaultValue);
+  }
+  
+  public static double getDoubleProperty(IThreadContext tc, String s, double defaultValue)
+    throws ManifoldCFException
+  {
+    return make(tc).getSharedConfiguration().getDoubleProperty(s, defaultValue);
+  }
+  
+  public static boolean getBooleanProperty(IThreadContext tc, String s, boolean defaultValue)
+    throws ManifoldCFException
+  {
+    return make(tc).getSharedConfiguration().getBooleanProperty(s, defaultValue);
+  }
+  
 }
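These static helpers read from the globally-shared configuration; a small hedged usage sketch follows, where the property name and default value are illustrative:

    import org.apache.manifoldcf.core.interfaces.*;

    public class SharedPropertySketch
    {
      public static int readHandleCount(IThreadContext threadContext)
        throws ManifoldCFException
      {
        // Falls back to the default when the property is absent from the shared configuration.
        return LockManagerFactory.getIntProperty(threadContext,
          "org.apache.manifoldcf.example.handlecount", 10);
      }
    }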
 
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/ManifoldCFConfiguration.java b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/ManifoldCFConfiguration.java
new file mode 100644
index 0000000..2edcb88
--- /dev/null
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/ManifoldCFConfiguration.java
@@ -0,0 +1,181 @@
+/* $Id: ManifoldCFConfiguration.java 988245 2010-08-23 18:39:35Z kwright $ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.core.interfaces;
+
+import java.util.*;
+import java.io.*;
+
+/** This class represents the configuration data read from the main ManifoldCF configuration
+* XML file.
+*/
+public class ManifoldCFConfiguration extends Configuration
+{
+  public static final String _rcsid = "@(#)$Id: ManifoldCFConfiguration.java 988245 2010-08-23 18:39:35Z kwright $";
+
+  // Configuration XML node names and attribute names
+  public static final String NODE_PROPERTY = "property";
+  public static final String ATTRIBUTE_NAME = "name";
+  public static final String ATTRIBUTE_VALUE = "value";
+
+  protected final Map<String,String> localProperties = new HashMap<String,String>();
+
+  /** Constructor.
+  */
+  public ManifoldCFConfiguration()
+  {
+    super("configuration");
+  }
+
+  /** Construct from XML.
+  *@param xmlStream is the input XML stream.
+  */
+  public ManifoldCFConfiguration(InputStream xmlStream)
+    throws ManifoldCFException
+  {
+    super("configuration");
+    fromXML(xmlStream);
+    parseProperties();
+  }
+
+  public String getProperty(String s)
+  {
+    return localProperties.get(s);
+  }
+  
+  /** Read a (string) property, either from the system properties, or from the local configuration file.
+  *@param s is the property name.
+  *@param defaultValue is the default value for the property.
+  *@return the property value, as a string.
+  */
+  public String getStringProperty(String s, String defaultValue)
+  {
+    String rval = getProperty(s);
+    if (rval == null)
+      rval = defaultValue;
+    return rval;
+  }
+
+  /** Read a boolean property
+  */
+  public boolean getBooleanProperty(String s, boolean defaultValue)
+    throws ManifoldCFException
+  {
+    String value = getProperty(s);
+    if (value == null)
+      return defaultValue;
+    if (value.equals("true") || value.equals("yes"))
+      return true;
+    if (value.equals("false") || value.equals("no"))
+      return false;
+    throw new ManifoldCFException("Illegal property value for boolean property '"+s+"': '"+value+"'");
+  }
+  
+  /** Read an integer property, either from the system properties, or from the local configuration file.
+  */
+  public int getIntProperty(String s, int defaultValue)
+    throws ManifoldCFException
+  {
+    String value = getProperty(s);
+    if (value == null)
+      return defaultValue;
+    try
+    {
+      return Integer.parseInt(value);
+    }
+    catch (NumberFormatException e)
+    {
+      throw new ManifoldCFException("Illegal property value for integer property '"+s+"': '"+value+"': "+e.getMessage(),e,ManifoldCFException.SETUP_ERROR);
+    }
+  }
+
+  /** Read a long property, either from the system properties, or from the local configuration file.
+  */
+  public long getLongProperty(String s, long defaultValue)
+    throws ManifoldCFException
+  {
+    String value = getProperty(s);
+    if (value == null)
+      return defaultValue;
+    try
+    {
+      return Long.parseLong(value);
+    }
+    catch (NumberFormatException e)
+    {
+      throw new ManifoldCFException("Illegal property value for long property '"+s+"': '"+value+"': "+e.getMessage(),e,ManifoldCFException.SETUP_ERROR);
+    }
+  }
+
+  /** Read a double property, either from the system properties, or from the local configuration file.
+  */
+  public double getDoubleProperty(String s, double defaultValue)
+    throws ManifoldCFException
+  {
+    String value = getProperty(s);
+    if (value == null)
+      return defaultValue;
+    try
+    {
+      return Double.parseDouble(value);
+    }
+    catch (NumberFormatException e)
+    {
+      throw new ManifoldCFException("Illegal property value for double property '"+s+"': '"+value+"': "+e.getMessage(),e,ManifoldCFException.SETUP_ERROR);
+    }
+  }
+
+  protected void parseProperties()
+    throws ManifoldCFException
+  {
+    // For convenience, post-process all "property" nodes so that we have a semblance of the earlier name/value pairs available, by default.
+    // e.g. <property name= value=/>
+    localProperties.clear();
+    for (int i = 0; i < getChildCount(); i++)
+    {
+      ConfigurationNode cn = findChild(i);
+      if (cn.getType().equals(NODE_PROPERTY))
+      {
+        String name = cn.getAttributeValue(ATTRIBUTE_NAME);
+        String value = cn.getAttributeValue(ATTRIBUTE_VALUE);
+        if (name == null)
+          throw new ManifoldCFException("Node type '"+NODE_PROPERTY+"' requires a '"+ATTRIBUTE_NAME+"' attribute");
+        localProperties.put(name,value);
+      }
+    }
+  }
+  
+  /** Read from an input stream.
+  */
+  @Override
+  public void fromXML(InputStream is)
+    throws ManifoldCFException
+  {
+    super.fromXML(is);
+    parseProperties();
+  }
+  
+  /** Create a new object of the appropriate class.
+  */
+  @Override
+  protected Configuration createNew()
+  {
+    return new ManifoldCFConfiguration();
+  }
+  
+}
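A hedged sketch of the property XML this class parses and of the typed accessors above; the property names in the sample XML are illustrative:

    import org.apache.manifoldcf.core.interfaces.*;
    import java.io.*;
    import java.nio.charset.StandardCharsets;

    public class ConfigurationSketch
    {
      public static void main(String[] args)
        throws ManifoldCFException
      {
        String xml =
          "<configuration>\n" +
          "  <property name=\"org.apache.manifoldcf.example.flag\" value=\"true\"/>\n" +
          "  <property name=\"org.apache.manifoldcf.example.count\" value=\"25\"/>\n" +
          "</configuration>\n";
        // Parse the <property name=... value=.../> nodes into name/value pairs.
        ManifoldCFConfiguration config = new ManifoldCFConfiguration(
          new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        boolean flag = config.getBooleanProperty("org.apache.manifoldcf.example.flag", false);
        int count = config.getIntProperty("org.apache.manifoldcf.example.count", 0);
        System.out.println(flag + " " + count);
      }
    }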
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/NullCharacterInput.java b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/NullCharacterInput.java
index e433acf..93625bd 100644
--- a/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/NullCharacterInput.java
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/NullCharacterInput.java
@@ -70,6 +70,14 @@
     return new ByteArrayInputStream(new byte[]{});
   }
 
+  /** Get binary UTF8 stream length directly */
+  @Override
+  public long getUtf8StreamLength()
+    throws ManifoldCFException
+  {
+    return 0L;
+  }
+
   /** Transfer to a new object; this causes the current object to become "already discarded" */
   @Override
   public CharacterInput transfer()
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/PersistentDatabaseObject.java b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/PersistentDatabaseObject.java
new file mode 100644
index 0000000..ac66386
--- /dev/null
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/PersistentDatabaseObject.java
@@ -0,0 +1,44 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.core.interfaces;
+
+/** Objects derived from this class can function as database parameters or as results.  In
+* both cases, they must be managed specially because they are potentially backed by disk files,
+* and the data within is treated as a stream (of something) rather than a scalar piece of data.
+*/
+public abstract class PersistentDatabaseObject
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  /** Construct from nothing.
+  */
+  public PersistentDatabaseObject()
+  {
+  }
+
+  /** Close any open streams, but do NOT remove the backing object.
+  * Thus the stream can be reopened in the future. */
+  public abstract void doneWithStream()
+    throws ManifoldCFException;
+  
+  /** Discard this object permanently */
+  public abstract void discard()
+    throws ManifoldCFException;
+
+}
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/TempFileCharacterInput.java b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/TempFileCharacterInput.java
index 9bc7ead..7d94588 100644
--- a/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/TempFileCharacterInput.java
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/TempFileCharacterInput.java
@@ -37,13 +37,24 @@
 
   protected final static int CHUNK_SIZE = 65536;
 
-  /** Construct from a length-delimited reader.
+  /** Construct from a non-length-delimited reader.
   *@param is is a reader to transfer from, to the end of the data.  This will, as a side effect, also calculate the character length
   *          and hash value for the data.
   */
   public TempFileCharacterInput(Reader is)
     throws ManifoldCFException
   {
+    this(is,-1L);
+  }
+
+  /** Construct from a length-delimited reader.
+  *@param is is a reader to transfer from, to the end of the data.  This will, as a side effect, also calculate the character length
+  *          and hash value for the data.
+  *@param length is the length limit to transfer, or -1 if no limit
+  */
+  public TempFileCharacterInput(Reader is, long length)
+    throws ManifoldCFException
+  {
     super();
     try
     {
@@ -68,7 +79,13 @@
           long totalMoved = 0;
           while (true)
           {
-            int moveAmount = CHUNK_SIZE;
+            int moveAmount;
+            if (length == -1L || length-totalMoved > CHUNK_SIZE)
+              moveAmount = CHUNK_SIZE;
+            else
+              moveAmount = (int)(length-totalMoved);
+            if (moveAmount == 0)
+              break;
             // Read character data in 64K chunks
             int readsize = is.read(buffer,0,moveAmount);
             if (readsize == -1)
@@ -152,6 +169,16 @@
     return null;
   }
 
+  /** Get binary UTF8 stream length directly */
+  @Override
+  public long getUtf8StreamLength()
+    throws ManifoldCFException
+  {
+    if (file != null)
+      return file.length();
+    return 0L;
+  }
+
   @Override
   protected void openStream()
     throws ManifoldCFException
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/ThrottleGroupsFactory.java b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/ThrottleGroupsFactory.java
new file mode 100644
index 0000000..96c4dc2
--- /dev/null
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/interfaces/ThrottleGroupsFactory.java
@@ -0,0 +1,50 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.core.interfaces;
+
+/** Thread-local IThrottleGroups factory.
+*/
+public class ThrottleGroupsFactory
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  // name to use in thread context pool of objects
+  private final static String objectName = "_ThrottleGroups_";
+
+  private ThrottleGroupsFactory()
+  {
+  }
+
+  /** Make a connection throttle handle.
+  *@param tc is the thread context.
+  *@return the handle.
+  */
+  public static IThrottleGroups make(IThreadContext tc)
+    throws ManifoldCFException
+  {
+    Object o = tc.get(objectName);
+    if (o == null || !(o instanceof IThrottleGroups))
+    {
+      o = new org.apache.manifoldcf.core.throttler.ThrottleGroups(tc);
+      tc.save(objectName,o);
+    }
+    return (IThrottleGroups)o;
+  }
+
+}
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/jdbcpool/ConnectionPoolManager.java b/framework/core/src/main/java/org/apache/manifoldcf/core/jdbcpool/ConnectionPoolManager.java
index b830363..3e27b4e 100644
--- a/framework/core/src/main/java/org/apache/manifoldcf/core/jdbcpool/ConnectionPoolManager.java
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/jdbcpool/ConnectionPoolManager.java
@@ -24,6 +24,7 @@
 import java.util.concurrent.atomic.AtomicBoolean;
 
 import org.apache.manifoldcf.core.interfaces.ManifoldCFException;
+import org.apache.manifoldcf.core.interfaces.LockManagerFactory;
 import org.apache.manifoldcf.core.system.ManifoldCF;
 
 /** An instance of this class manages a number of (independent) connection pools.
@@ -37,10 +38,10 @@
   protected volatile AtomicBoolean shuttingDown = new AtomicBoolean(false);
   protected final boolean debug;
   
-  public ConnectionPoolManager(int count)
+  public ConnectionPoolManager(int count, boolean debug)
     throws ManifoldCFException
   {
-    debug = ManifoldCF.getBooleanProperty(ManifoldCF.databaseConnectionTrackingProperty, false);
+    this.debug = debug;
     poolMap = new HashMap<String,ConnectionPool>(count);
     connectionCloserThread = new ConnectionCloserThread();
     connectionCloserThread.start();
@@ -67,6 +68,7 @@
   
   public void shutdown()
   {
+    //System.out.println("JDBC POOL SHUTDOWN CALLED");
     shuttingDown.set(true);
     while (connectionCloserThread.isAlive())
     {
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/lockmanager/BaseLockManager.java b/framework/core/src/main/java/org/apache/manifoldcf/core/lockmanager/BaseLockManager.java
new file mode 100644
index 0000000..bfece74
--- /dev/null
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/lockmanager/BaseLockManager.java
@@ -0,0 +1,1943 @@
+/* $Id: LockManager.java 988245 2010-08-23 18:39:35Z kwright $ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.core.lockmanager;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.core.system.Logging;
+import org.apache.manifoldcf.core.system.ManifoldCF;
+import java.util.*;
+import java.io.*;
+
+/** A lock manager manages locks and shared information across all threads and JVMs
+* and cluster members.  There should be no more than ONE instance of this class per thread!!!
+* The factory should enforce this.
+* This is the base lock manager class.  Its implementation works solely within one JVM,
+* which makes it ideal for single-process work.  Classes that handle multiple JVMs and thus
+* need cross-JVM synchronization are thus expected to extend this class and override pertinent
+* methods.
+*/
+public class BaseLockManager implements ILockManager
+{
+  public static final String _rcsid = "@(#)$Id: LockManager.java 988245 2010-08-23 18:39:35Z kwright $";
+
+  // These are the lock/section types, in order of escalation
+  protected final static int TYPE_READ = 1;
+  protected final static int TYPE_WRITENONEX = 2;
+  protected final static int TYPE_WRITE = 3;
+
+  // These are for locks which putatively cross JVM boundaries.
+  // In this implementation, they are strictly local, and are distinct from sections
+  // just because of the namespace issues.
+  protected final LocalLockPool localLocks = new LocalLockPool();
+  protected final static LockPool myLocks = new LockPool(new LockObjectFactory());
+
+  // These are for critical sections (which do not cross JVM boundaries)
+  protected final LocalLockPool localSections = new LocalLockPool();
+  protected final static LockPool mySections = new LockPool(new LockObjectFactory());
+
+  /** Global flag information.  This is used only when all of ManifoldCF is run within one process. */
+  protected final static Map<String,Boolean> globalFlags = new HashMap<String,Boolean>();
+
+  /** Global resource data.  Used only when ManifoldCF is run entirely out of one process. */
+  protected final static Map<String,byte[]> globalData = new HashMap<String,byte[]>();
+  
+  public BaseLockManager()
+    throws ManifoldCFException
+  {
+  }
+
+  // Node synchronization
+  
+  // The node synchronization model involves keeping track of active agents entities, so that other entities
+  // can perform any necessary cleanup if one of the agents processes goes away unexpectedly.  There is a
+  // registration primitive (which can fail if the same guid is used as is already registered and active), a
+  // shutdown primitive (which makes a process id go inactive), and various inspection primitives.
+  
+  // This implementation of the node infrastructure uses other primitives implemented by the lock
+  // manager for the implementation.  Specifically, instead of synchronizers, we use a write lock
+  // to prevent conflicts, and we use flags to determine whether a service is active or not.  The
+  // tricky thing, though, is the global registry - which must be able to list its contents.  To achieve
+  // that, we use data with a counter scheme; if the data is not found, it's presumed we are at the
+  // end of the list.
+  //
+  // By building on other primitives in this way, the same implementation will suffice for many derived
+  // lockmanager implementations - although ZooKeeper will want a native form.
+
+  /** The service-type global write lock to control sync, followed by the service type */
+  protected final static String serviceTypeLockPrefix = "_SERVICELOCK_";
+  /** A data name prefix, followed by the service type, and then followed by "_" and the instance number */
+  protected final static String serviceListPrefix = "_SERVICELIST_";
+  /** A flag prefix, followed by the service type, and then followed by "_" and the service name */
+  protected final static String servicePrefix = "_SERVICE_";
+  /** A flag prefix, followed by the service type, and then followed by "_" and the service name */
+  protected final static String activePrefix = "_ACTIVE_";
+  /** A data name prefix, followed by the service type, and then followed by "_" and the service name and "_" and the datatype */
+  protected final static String serviceDataPrefix = "_SERVICEDATA_";
+  /** Anonymous service name prefix, to be followed by an integer */
+  protected final static String anonymousServiceNamePrefix = "_ANON_";
+  /** Anonymous global variable name prefix, to be followed by the service type */
+  protected final static String anonymousServiceTypeCounter = "_SERVICECOUNTER_";
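+
+  // For illustration only (the service type "AGENT" and service name "Agent1" below are hypothetical),
+  // the names composed from these prefixes look like this:
+  //   type-wide write lock:  "_SERVICELOCK_AGENT"
+  //   registration flag:     "_SERVICE_AGENT_Agent1"
+  //   active flag:           "_ACTIVE_AGENT_Agent1"
+  //   list entry #0:         "_SERVICELIST_AGENT_0"
+  //   service data:          "_SERVICEDATA_AGENT_Agent1"
+  //   anonymous name #0:     "_ANON_0", counted via "_SERVICECOUNTER_AGENT"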
+  
+  
+  /** Register a service and begin service activity.
+  * This atomic operation creates a permanent registration entry for a service.
+  * If the permanent registration entry already exists, this method will not create it or
+  * treat it as an error.  This operation also enters the "active" zone for the service.  The "active" zone will remain in force until it is
+  * canceled, or until the process is interrupted.  Ideally, the corresponding endServiceActivity method will be
+  * called when the service shuts down.  Some ILockManager implementations require that this take place for
+  * proper management.
+  * If the transient registration already exists, it is treated as an error and an exception will be thrown.
+  * If registration will succeed, then this method may call an appropriate IServiceCleanup method to clean up either the
+  * current service, or all services on the cluster.
+  *@param serviceType is the type of service.
+  *@param serviceName is the name of the service to register.  If null is passed, a transient unique service name will be
+  *    created, and will be returned to the caller.
+  *@param cleanup is called to clean up either the current service, or all services of this type, if no other active service exists.
+  *    May be null.  Local service cleanup is never called if the serviceName argument is null.
+  *@return the actual service name.
+  */
+  @Override
+  public String registerServiceBeginServiceActivity(String serviceType, String serviceName,
+    IServiceCleanup cleanup)
+    throws ManifoldCFException
+  {
+    return registerServiceBeginServiceActivity(serviceType, serviceName, null, cleanup);
+  }
+
+  /** Register a service and begin service activity.
+  * This atomic operation creates a permanent registration entry for a service.
+  * If the permanent registration entry already exists, this method will not create it or
+  * treat it as an error.  This operation also enters the "active" zone for the service.  The "active" zone will remain in force until it is
+  * canceled, or until the process is interrupted.  Ideally, the corresponding endServiceActivity method will be
+  * called when the service shuts down.  Some ILockManager implementations require that this take place for
+  * proper management.
+  * If the transient registration already exists, it is treated as an error and an exception will be thrown.
+  * If registration will succeed, then this method may call an appropriate IServiceCleanup method to clean up either the
+  * current service, or all services on the cluster.
+  *@param serviceType is the type of service.
+  *@param serviceName is the name of the service to register.  If null is passed, a transient unique service name will be
+  *    created, and will be returned to the caller.
+  *@param initialData is the initial service data for this service.
+  *@param cleanup is called to clean up either the current service, or all services of this type, if no other active service exists.
+  *    May be null.  Local service cleanup is never called if the serviceName argument is null.
+  *@return the actual service name.
+  */
+  @Override
+  public String registerServiceBeginServiceActivity(String serviceType, String serviceName,
+    byte[] initialData, IServiceCleanup cleanup)
+    throws ManifoldCFException
+  {
+    String serviceTypeLockName = buildServiceTypeLockName(serviceType);
+    enterWriteLock(serviceTypeLockName);
+    try
+    {
+      if (serviceName == null)
+        serviceName = constructUniqueServiceName(serviceType);
+      
+      // First, do an active check
+      String serviceActiveFlag = makeActiveServiceFlagName(serviceType, serviceName);
+      if (checkGlobalFlag(serviceActiveFlag))
+        throw new ManifoldCFException("Service '"+serviceName+"' of type '"+serviceType+"' is already active");
+      
+      // First, see where we stand.
+      // We need to find out whether (a) our service is already registered; (b) how many registered services there are;
+      // (c) whether there are other active services.  But no changes will be made at this time.
+      boolean foundService = false;
+      boolean foundActiveService = false;
+      String resourceName;
+      int i = 0;
+      while (true)
+      {
+        resourceName = buildServiceListEntry(serviceType, i);
+        String x = readServiceName(resourceName);
+        if (x == null)
+          break;
+        if (x.equals(serviceName))
+          foundService = true;
+        else if (checkGlobalFlag(makeActiveServiceFlagName(serviceType, x)))
+          foundActiveService = true;
+        i++;
+      }
+
+      // Call the appropriate cleanup.  This will depend on what's actually registered, and what's active.
+      // If there were no services registered at all when we started, then no cleanup is needed, just cluster init.
+      // If this fails, we must revert to having our service not be registered and not be active.
+      boolean unregisterAll = false;
+      if (cleanup != null)
+      {
+        if (i == 0)
+        {
+          // If we could count on locks never being cleaned up, clusterInit()
+          // would be sufficient here.  But then there's no way to recover from
+          // a lock clean.
+          cleanup.cleanUpAllServices();
+          cleanup.clusterInit();
+        }
+        else if (foundService && foundActiveService)
+          cleanup.cleanUpService(serviceName);
+        else if (!foundActiveService)
+        {
+          cleanup.cleanUpAllServices();
+          cleanup.clusterInit();
+          unregisterAll = true;
+        }
+      }
+      
+      if (unregisterAll)
+      {
+        // Unregister all (since we did a global cleanup)
+        int k = i;
+        while (k > 0)
+        {
+          k--;
+          resourceName = buildServiceListEntry(serviceType, k);
+          String x = readServiceName(resourceName);
+          clearGlobalFlag(makeRegisteredServiceFlagName(serviceType, x));
+          writeServiceName(resourceName, null);
+        }
+        foundService = false;
+      }
+
+      // Now, register (if needed)
+      if (!foundService)
+      {
+        writeServiceName(resourceName, serviceName);
+        try
+        {
+          setGlobalFlag(makeRegisteredServiceFlagName(serviceType, serviceName));
+        }
+        catch (Throwable e)
+        {
+          writeServiceName(resourceName, null);
+          if (e instanceof Error)
+            throw (Error)e;
+          if (e instanceof RuntimeException)
+            throw (RuntimeException)e;
+          if (e instanceof ManifoldCFException)
+            throw (ManifoldCFException)e;
+          else
+            throw new RuntimeException("Unknown exception of type: "+e.getClass().getName()+": "+e.getMessage(),e);
+        }
+      }
+
+      // Last, set the appropriate active flag
+      setGlobalFlag(serviceActiveFlag);
+      writeServiceData(serviceType, serviceName, initialData);
+
+      return serviceName;
+    }
+    finally
+    {
+      leaveWriteLock(serviceTypeLockName);
+    }
+  }
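+
+  // A minimal usage sketch (illustrative only; the service type "AGENT" is hypothetical, and
+  // lockManager stands for an ILockManager instance obtained from the usual factory):
+  //
+  //   String serviceName = lockManager.registerServiceBeginServiceActivity("AGENT", null, null);
+  //   try
+  //   {
+  //     // ... service work; updateServiceData()/retrieveServiceData() may be called while active ...
+  //   }
+  //   finally
+  //   {
+  //     lockManager.endServiceActivity("AGENT", serviceName);
+  //   }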
+  
+  /** Set service data for a service.
+  * This updates the service's transient data (or deletes it).  If the service is not active, an exception is thrown.
+  *@param serviceType is the type of service.
+  *@param serviceName is the name of the service.
+  *@param serviceData is the data to update to (may be null).
+  */
+  @Override
+  public void updateServiceData(String serviceType, String serviceName, byte[] serviceData)
+    throws ManifoldCFException
+  {
+    String serviceTypeLockName = buildServiceTypeLockName(serviceType);
+    enterWriteLock(serviceTypeLockName);
+    try
+    {
+      String serviceActiveFlag = makeActiveServiceFlagName(serviceType, serviceName);
+      if (!checkGlobalFlag(serviceActiveFlag))
+        throw new ManifoldCFException("Service '"+serviceName+"' of type '"+serviceType+"' is not active");
+      writeServiceData(serviceType, serviceName, serviceData);
+    }
+    finally
+    {
+      leaveWriteLock(serviceTypeLockName);
+    }
+  }
+
+  /** Retrieve service data for a service.
+  *@param serviceType is the type of service.
+  *@param serviceName is the name of the service.
+  *@return the service's transient data.
+  */
+  @Override
+  public byte[] retrieveServiceData(String serviceType, String serviceName)
+    throws ManifoldCFException
+  {
+    String serviceTypeLockName = buildServiceTypeLockName(serviceType);
+    enterReadLock(serviceTypeLockName);
+    try
+    {
+      String serviceActiveFlag = makeActiveServiceFlagName(serviceType, serviceName);
+      if (!checkGlobalFlag(serviceActiveFlag))
+        return null;
+      byte[] rval = readServiceData(serviceType, serviceName);
+      if (rval == null)
+        rval = new byte[0];
+      return rval;
+    }
+    finally
+    {
+      leaveReadLock(serviceTypeLockName);
+    }
+  }
+
+  /** Scan service data for a service type.  Only active service data will be considered.
+  *@param serviceType is the type of service.
+  *@param dataAcceptor is the object that will be notified of each item of data for each service name found.
+  */
+  @Override
+  public void scanServiceData(String serviceType, IServiceDataAcceptor dataAcceptor)
+    throws ManifoldCFException
+  {
+    String serviceTypeLockName = buildServiceTypeLockName(serviceType);
+    enterReadLock(serviceTypeLockName);
+    try
+    {
+      int i = 0;
+      while (true)
+      {
+        String resourceName = buildServiceListEntry(serviceType, i);
+        String x = readServiceName(resourceName);
+        if (x == null)
+          break;
+        if (checkGlobalFlag(makeActiveServiceFlagName(serviceType, x)))
+        {
+          byte[] serviceData = readServiceData(serviceType, x);
+          if (dataAcceptor.acceptServiceData(x, serviceData))
+            break;
+        }
+        i++;
+      }
+    }
+    finally
+    {
+      leaveReadLock(serviceTypeLockName);
+    }
+  }
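+
+  // Illustrative sketch of a data acceptor for the scan above (the anonymous-class form and the
+  // service type "AGENT" are assumptions for this sketch, not taken from the code above):
+  //
+  //   IServiceDataAcceptor acceptor = new IServiceDataAcceptor()
+  //   {
+  //     public boolean acceptServiceData(String serviceName, byte[] serviceData)
+  //       throws ManifoldCFException
+  //     {
+  //       // Inspect serviceData for this active service here...
+  //       return false;   // false = keep scanning; returning true ends the scan early
+  //     }
+  //   };
+  //   lockManager.scanServiceData("AGENT", acceptor);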
+
+  /** Count all active services of a given type.
+  *@param serviceType is the service type.
+  *@return the count.
+  */
+  @Override
+  public int countActiveServices(String serviceType)
+    throws ManifoldCFException
+  {
+    String serviceTypeLockName = buildServiceTypeLockName(serviceType);
+    enterReadLock(serviceTypeLockName);
+    try
+    {
+      int count = 0;
+      int i = 0;
+      while (true)
+      {
+        String resourceName = buildServiceListEntry(serviceType, i);
+        String x = readServiceName(resourceName);
+        if (x == null)
+          break;
+        if (checkGlobalFlag(makeActiveServiceFlagName(serviceType, x)))
+          count++;
+        i++;
+      }
+      return count;
+    }
+    finally
+    {
+      leaveReadLock(serviceTypeLockName);
+    }
+  }
+
+  /** Clean up any inactive services found.
+  * Calling this method will invoke cleanup of one inactive service at a time.
+  * If there are no inactive services around, then true will be returned.
+  * Note that this method will block whatever service it finds from starting up
+  * while the cleanup proceeds.  At the end of the cleanup, if
+  * successful, the service will be atomically unregistered.
+  *@param serviceType is the service type.
+  *@param cleanup is the object to call to clean up an inactive service.
+  *@return true if there were no cleanup operations necessary.
+  */
+  @Override
+  public boolean cleanupInactiveService(String serviceType, IServiceCleanup cleanup)
+    throws ManifoldCFException
+  {
+    String serviceTypeLockName = buildServiceTypeLockName(serviceType);
+    enterWriteLock(serviceTypeLockName);
+    try
+    {
+      // We find ONE service that is registered but inactive, and clean up after that one.
+      // Presumably the caller will lather, rinse, and repeat.
+      String serviceName;
+      String resourceName;
+      int i = 0;
+      while (true)
+      {
+        resourceName = buildServiceListEntry(serviceType, i);
+        serviceName = readServiceName(resourceName);
+        if (serviceName == null)
+          return true;
+        if (!checkGlobalFlag(makeActiveServiceFlagName(serviceType, serviceName)))
+          break;
+        i++;
+      }
+      
+      // Found one, in serviceName, at position i
+      // Ideally, we should signal at this point that we're cleaning up after it, and then leave
+      // the exclusive lock, so that other activity can take place.  MHL
+      cleanup.cleanUpService(serviceName);
+      
+      // Clean up the registration
+      String serviceRegisteredFlag = makeRegisteredServiceFlagName(serviceType, serviceName);
+      
+      // Find the end of the list
+      int k = i + 1;
+      String lastResourceName = null;
+      String lastServiceName = null;
+      while (true)
+      {
+        String rName = buildServiceListEntry(serviceType, k);
+        String x = readServiceName(rName);
+        if (x == null)
+          break;
+        lastResourceName = rName;
+        lastServiceName = x;
+        k++;
+      }
+
+      // Rearrange the registration
+      clearGlobalFlag(serviceRegisteredFlag);
+      // Move the last list entry into the vacated slot, or just clear the slot if it was already the last one
+      if (lastServiceName != null)
+        writeServiceName(resourceName, lastServiceName);
+      writeServiceName((lastResourceName != null)?lastResourceName:resourceName, null);
+      return false;
+    }
+    finally
+    {
+      leaveWriteLock(serviceTypeLockName);
+    }
+  }
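+
+  // The caller is expected to invoke this repeatedly until it reports nothing left to do; an
+  // illustrative loop (the service type "AGENT" and the cleanup object are hypothetical):
+  //
+  //   while (!lockManager.cleanupInactiveService("AGENT", cleanup))
+  //   {
+  //     // Each iteration cleans up and unregisters one inactive "AGENT" service.
+  //   }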
+
+  /** End service activity.
+  * This operation exits the "active" zone for the service.  This must take place using the same ILockManager
+  * object that was used to registerServiceBeginServiceActivity() - which implies that it is the same thread.
+  *@param serviceType is the type of service.
+  *@param serviceName is the name of the service to exit.
+  */
+  @Override
+  public void endServiceActivity(String serviceType, String serviceName)
+    throws ManifoldCFException
+  {
+    String serviceTypeLockName = buildServiceTypeLockName(serviceType);
+    enterWriteLock(serviceTypeLockName);
+    try
+    {
+      String serviceActiveFlag = makeActiveServiceFlagName(serviceType, serviceName);
+      if (!checkGlobalFlag(serviceActiveFlag))
+        throw new ManifoldCFException("Service '"+serviceName+"' of type '"+serviceType+" is not active");
+      deleteServiceData(serviceType, serviceName);
+      clearGlobalFlag(serviceActiveFlag);
+    }
+    finally
+    {
+      leaveWriteLock(serviceTypeLockName);
+    }
+  }
+    
+  /** Check whether a service is active or not.
+  * This operation returns true if the specified service is considered active at the moment.  Once a service
+  * is not active anymore, it can only return to activity by calling registerServiceBeginServiceActivity() once more.
+  *@param serviceType is the type of service.
+  *@param serviceName is the name of the service to check on.
+  *@return true if the service is considered active.
+  */
+  @Override
+  public boolean checkServiceActive(String serviceType, String serviceName)
+    throws ManifoldCFException
+  {
+    String serviceTypeLockName = buildServiceTypeLockName(serviceType);
+    enterReadLock(serviceTypeLockName);
+    try
+    {
+      return checkGlobalFlag(makeActiveServiceFlagName(serviceType, serviceName));
+    }
+    finally
+    {
+      leaveReadLock(serviceTypeLockName);
+    }
+  }
+
+  /** Construct a unique service name given the service type.
+  */
+  protected String constructUniqueServiceName(String serviceType)
+    throws ManifoldCFException
+  {
+    String serviceCounterName = makeServiceCounterName(serviceType);
+    int serviceUID = readServiceCounter(serviceCounterName);
+    writeServiceCounter(serviceCounterName,serviceUID+1);
+    return anonymousServiceNamePrefix + serviceUID;
+  }
+  
+  /** Make the service counter name for a service type.
+  */
+  protected static String makeServiceCounterName(String serviceType)
+  {
+    return anonymousServiceTypeCounter + serviceType;
+  }
+  
+  /** Read service counter.
+  */
+  protected int readServiceCounter(String serviceCounterName)
+    throws ManifoldCFException
+  {
+    byte[] serviceCounterData = readData(serviceCounterName);
+    if (serviceCounterData == null || serviceCounterData.length != 4)
+      return 0;
+    return (((int)serviceCounterData[0]) & 0xff) +
+      ((((int)serviceCounterData[1]) << 8) & 0xff00) +
+      ((((int)serviceCounterData[2]) << 16) & 0xff0000) +
+      ((((int)serviceCounterData[3]) << 24) & 0xff000000);
+  }
+  
+  /** Write service counter.
+  */
+  protected void writeServiceCounter(String serviceCounterName, int counter)
+    throws ManifoldCFException
+  {
+    byte[] serviceCounterData = new byte[4];
+    serviceCounterData[0] = (byte)(counter & 0xff);
+    serviceCounterData[1] = (byte)((counter >> 8) & 0xff);
+    serviceCounterData[2] = (byte)((counter >> 16) & 0xff);
+    serviceCounterData[3] = (byte)((counter >> 24) & 0xff);
+    writeData(serviceCounterName,serviceCounterData);
+  }
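+
+  // The counter is stored in little-endian order; for example, a counter value of 0x01020304 is
+  // written as the bytes { 0x04, 0x03, 0x02, 0x01 } and decoded back to 0x01020304 by
+  // readServiceCounter() above.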
+  
+  protected void writeServiceData(String serviceType, String serviceName, byte[] serviceData)
+    throws ManifoldCFException
+  {
+    writeData(makeServiceDataName(serviceType, serviceName), serviceData);
+  }
+  
+  protected byte[] readServiceData(String serviceType, String serviceName)
+    throws ManifoldCFException
+  {
+    return readData(makeServiceDataName(serviceType, serviceName));
+  }
+  
+  protected void deleteServiceData(String serviceType, String serviceName)
+    throws ManifoldCFException
+  {
+    writeServiceData(serviceType, serviceName, null);
+  }
+  
+  protected static String makeServiceDataName(String serviceType, String serviceName)
+  {
+    return serviceDataPrefix + serviceType + "_" + serviceName;
+  }
+  
+  protected static String makeActiveServiceFlagName(String serviceType, String serviceName)
+  {
+    return activePrefix + serviceType + "_" + serviceName;
+  }
+  
+  protected static String makeRegisteredServiceFlagName(String serviceType, String serviceName)
+  {
+    return servicePrefix + serviceType + "_" + serviceName;
+  }
+
+  protected String readServiceName(String resourceName)
+    throws ManifoldCFException
+  {
+    byte[] bytes = readData(resourceName);
+    if (bytes == null)
+      return null;
+    try
+    {
+      return new String(bytes, "utf-8");
+    }
+    catch (UnsupportedEncodingException e)
+    {
+      throw new RuntimeException("Unsupported encoding: "+e.getMessage(),e);
+    }
+  }
+  
+  protected void writeServiceName(String resourceName, String serviceName)
+    throws ManifoldCFException
+  {
+    try
+    {
+      writeData(resourceName, (serviceName==null)?null:serviceName.getBytes("utf-8"));
+    }
+    catch (UnsupportedEncodingException e)
+    {
+      throw new RuntimeException("Unsupported encoding: "+e.getMessage(),e);
+    }
+  }
+  
+  protected static String buildServiceListEntry(String serviceType, int i)
+  {
+    return serviceListPrefix + serviceType + "_" + i;
+  }
+
+  protected static String buildServiceTypeLockName(String serviceType)
+  {
+    return serviceTypeLockPrefix + serviceType;
+  }
+  
+  /** Get the current shared configuration.  This configuration is available in common among all nodes,
+  * and thus must not be accessed through here for the purpose of finding configuration data that is specific to
+  * any one node.
+  *@return the globally-shared configuration information.
+  */
+  @Override
+  public ManifoldCFConfiguration getSharedConfiguration()
+    throws ManifoldCFException
+  {
+    // Local implementation vectors through to system property file, which is shared in this case
+    return ManifoldCF.getConfiguration();
+  }
+
+  /** Raise a flag.  Use this method to assert a condition, or send a global signal.  The flag will be reset when the
+  * entire system is restarted.
+  *@param flagName is the name of the flag to set.
+  */
+  @Override
+  public void setGlobalFlag(String flagName)
+    throws ManifoldCFException
+  {
+    // Keep local flag information in memory
+    synchronized (globalFlags)
+    {
+      globalFlags.put(flagName,Boolean.TRUE);
+    }
+  }
+
+  /** Clear a flag.  Use this method to clear a condition, or retract a global signal.
+  *@param flagName is the name of the flag to clear.
+  */
+  @Override
+  public void clearGlobalFlag(String flagName)
+    throws ManifoldCFException
+  {
+    // Keep flag information in memory
+    synchronized (globalFlags)
+    {
+      globalFlags.remove(flagName);
+    }
+  }
+  
+  /** Check the condition of a specified flag.
+  *@param flagName is the name of the flag to check.
+  *@return true if the flag is set, false otherwise.
+  */
+  @Override
+  public boolean checkGlobalFlag(String flagName)
+    throws ManifoldCFException
+  {
+    // Keep flag information in memory
+    synchronized (globalFlags)
+    {
+      return globalFlags.get(flagName) != null;
+    }
+  }
+
+  /** Read data from a shared data resource.  Use this method to read any existing data, or get a null back if there is no such resource.
+  * Note well that this is not necessarily an atomic operation, and it must thus be protected by a lock.
+  *@param resourceName is the global name of the resource.
+  *@return a byte array containing the data, or null.
+  */
+  @Override
+  public byte[] readData(String resourceName)
+    throws ManifoldCFException
+  {
+    // Keep resource data local
+    synchronized (globalData)
+    {
+      return globalData.get(resourceName);
+    }
+  }
+  
+  /** Write data to a shared data resource.  Use this method to write a body of data into a shared resource.
+  * Note well that this is not necessarily an atomic operation, and it must thus be protected by a lock.
+  *@param resourceName is the global name of the resource.
+  *@param data is the byte array containing the data.  Pass null if you want to delete the resource completely.
+  */
+  @Override
+  public void writeData(String resourceName, byte[] data)
+    throws ManifoldCFException
+  {
+    // Keep resource data local
+    synchronized (globalData)
+    {
+      if (data == null)
+        globalData.remove(resourceName);
+      else
+        globalData.put(resourceName,data);
+    }
+  }
+
+  /** Wait for a time before retrying a lock.
+  */
+  @Override
+  public final void timedWait(int time)
+    throws ManifoldCFException
+  {
+
+    if (Logging.lock.isDebugEnabled())
+    {
+      Logging.lock.debug("Waiting for time "+Integer.toString(time));
+    }
+
+    try
+    {
+      ManifoldCF.sleep(time);
+    }
+    catch (InterruptedException e)
+    {
+      throw new ManifoldCFException("Interrupted",e,ManifoldCFException.INTERRUPTED);
+    }
+  }
+
+  /** Override this method to change the nature of global locks.
+  */
+  protected LockPool getGlobalLockPool()
+  {
+    return myLocks;
+  }
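+
+  // A minimal sketch of how a cross-JVM subclass might plug in its own pool (the class and
+  // factory names here are hypothetical, not part of this framework):
+  //
+  //   public class MyCrossJVMLockManager extends BaseLockManager
+  //   {
+  //     protected final static LockPool crossJVMLocks = new LockPool(new MyCrossJVMLockObjectFactory());
+  //
+  //     public MyCrossJVMLockManager() throws ManifoldCFException {}
+  //
+  //     @Override
+  //     protected LockPool getGlobalLockPool()
+  //     {
+  //       return crossJVMLocks;
+  //     }
+  //   }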
+  
+  /** Enter a non-exclusive write-locked area (blocking out all readers, but letting in other "writers").
+  * This kind of lock is designed to be used in conjunction with read locks.  It is used typically in
+  * a situation where the read lock represents a query and the non-exclusive write lock represents a modification
+  * to an individual item that might affect the query, but where multiple modifications do not individually
+  * interfere with one another (use of another, standard, write lock per item can guarantee this).
+  */
+  @Override
+  public final void enterNonExWriteLock(String lockKey)
+    throws ManifoldCFException
+  {
+    enterNonExWrite(lockKey, "lock", localLocks, getGlobalLockPool());
+  }
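+
+  // A sketch of the intended pattern (lock names are illustrative): a query holds the read lock
+  // while each modification holds the non-ex write lock plus a per-item write lock:
+  //
+  //   // Query side
+  //   lockManager.enterReadLock("TABLE-X");
+  //   try { /* run the query */ } finally { lockManager.leaveReadLock("TABLE-X"); }
+  //
+  //   // Modification side
+  //   lockManager.enterNonExWriteLock("TABLE-X");
+  //   try
+  //   {
+  //     lockManager.enterWriteLock("TABLE-X-ITEM-7");
+  //     try { /* modify item 7 */ } finally { lockManager.leaveWriteLock("TABLE-X-ITEM-7"); }
+  //   }
+  //   finally { lockManager.leaveNonExWriteLock("TABLE-X"); }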
+  
+  @Override
+  public final void enterNonExWriteLockNoWait(String lockKey)
+    throws ManifoldCFException, LockException
+  {
+    enterNonExWriteNoWait(lockKey, "lock", localLocks, getGlobalLockPool());
+  }
+  
+  /** Leave a non-exclusive write lock.
+  */
+  @Override
+  public final void leaveNonExWriteLock(String lockKey)
+    throws ManifoldCFException
+  {
+    leaveNonExWrite(lockKey, "lock", localLocks, getGlobalLockPool());
+  }
+  
+  /** Enter a write locked area (i.e., block out both readers and other writers)
+  * NOTE: Can't enter until all readers have left.
+  */
+  @Override
+  public final void enterWriteLock(String lockKey)
+    throws ManifoldCFException
+  {
+    enterWrite(lockKey, "lock", localLocks, getGlobalLockPool());
+  }
+  
+  @Override
+  public final void enterWriteLockNoWait(String lockKey)
+    throws ManifoldCFException, LockException
+  {
+    enterWriteNoWait(lockKey, "lock", localLocks, getGlobalLockPool());
+  }
+  
+  @Override
+  public final void leaveWriteLock(String lockKey)
+    throws ManifoldCFException
+  {
+    leaveWrite(lockKey, "lock", localLocks, getGlobalLockPool());
+  }
+  
+  /** Enter a read-only locked area (i.e., block ONLY if there's a writer)
+  */
+  @Override
+  public final void enterReadLock(String lockKey)
+    throws ManifoldCFException
+  {
+    enterRead(lockKey, "lock", localLocks, getGlobalLockPool());
+  }
+  
+  @Override
+  public final void enterReadLockNoWait(String lockKey)
+    throws ManifoldCFException, LockException
+  {
+    enterReadNoWait(lockKey, "lock", localLocks, getGlobalLockPool());
+  }
+  
+  @Override
+  public final void leaveReadLock(String lockKey)
+    throws ManifoldCFException
+  {
+    leaveRead(lockKey, "lock", localLocks, getGlobalLockPool());
+  }
+  
+  /** Enter multiple locks
+  */
+  @Override
+  public final void enterLocks(String[] readLocks, String[] nonExWriteLocks, String[] writeLocks)
+    throws ManifoldCFException
+  {
+    enter(readLocks, nonExWriteLocks, writeLocks, "lock", localLocks, getGlobalLockPool());
+  }
+
+  @Override
+  public final void enterLocksNoWait(String[] readLocks, String[] nonExWriteLocks, String[] writeLocks)
+    throws ManifoldCFException, LockException
+  {
+    enterNoWait(readLocks, nonExWriteLocks, writeLocks, "lock", localLocks, getGlobalLockPool());
+  }
+
+  /** Leave multiple locks
+  */
+  @Override
+  public final void leaveLocks(String[] readLocks, String[] writeNonExLocks, String[] writeLocks)
+    throws ManifoldCFException
+  {
+    leave(readLocks, writeNonExLocks, writeLocks, "lock", localLocks, getGlobalLockPool());
+  }
+  
+  @Override
+  public final void clearLocks()
+    throws ManifoldCFException
+  {
+    clear("lock", localLocks, getGlobalLockPool());
+  }
+  
+  /** Enter a named, read critical section (NOT a lock).  Critical sections never cross JVM boundaries.
+  * Critical section names do not collide with lock names; they have a distinct namespace.
+  *@param sectionKey is the name of the section to enter.  Only one thread can be in any given named
+  * section at a time.
+  */
+  @Override
+  public final void enterReadCriticalSection(String sectionKey)
+    throws ManifoldCFException
+  {
+    enterRead(sectionKey, "critical section", localSections, mySections);
+  }
+
+  /** Leave a named, read critical section (NOT a lock).  Critical sections never cross JVM boundaries.
+  * Critical section names do not collide with lock names; they have a distinct namespace.
+  *@param sectionKey is the name of the section to leave.  Only one thread can be in any given named
+  * section at a time.
+  */
+  @Override
+  public final void leaveReadCriticalSection(String sectionKey)
+    throws ManifoldCFException
+  {
+    leaveRead(sectionKey, "critical section", localSections, mySections);
+  }
+
+  /** Enter a named, non-exclusive write critical section (NOT a lock).  Critical sections never cross JVM boundaries.
+  * Critical section names do not collide with lock names; they have a distinct namespace.
+  *@param sectionKey is the name of the section to enter.  Only one thread can be in any given named
+  * section at a time.
+  */
+  @Override
+  public final void enterNonExWriteCriticalSection(String sectionKey)
+    throws ManifoldCFException
+  {
+    enterNonExWrite(sectionKey, "critical section", localSections, mySections);
+  }
+
+  /** Leave a named, non-exclusive write critical section (NOT a lock).  Critical sections never cross JVM boundaries.
+  * Critical section names do not collide with lock names; they have a distinct namespace.
+  *@param sectionKey is the name of the section to leave.  Only one thread can be in any given named
+  * section at a time.
+  */
+  @Override
+  public final void leaveNonExWriteCriticalSection(String sectionKey)
+    throws ManifoldCFException
+  {
+    leaveNonExWrite(sectionKey, "critical section", localSections, mySections);
+  }
+  
+  /** Enter a named, exclusive critical section (NOT a lock).  Critical sections never cross JVM boundaries.
+  * Critical section names should be distinct from all lock names.
+  *@param sectionKey is the name of the section to enter.  Only one thread can be in any given named
+  * section at a time.
+  */
+  @Override
+  public final void enterWriteCriticalSection(String sectionKey)
+    throws ManifoldCFException
+  {
+    enterWrite(sectionKey, "critical section", localSections, mySections);
+  }
+  
+  /** Leave a named, exclusive critical section (NOT a lock).  Critical sections never cross JVM boundaries.
+  * Critical section names should be distinct from all lock names.
+  *@param sectionKey is the name of the section to leave.  Only one thread can be in any given named
+  * section at a time.
+  */
+  @Override
+  public final void leaveWriteCriticalSection(String sectionKey)
+    throws ManifoldCFException
+  {
+    leaveWrite(sectionKey, "critical section", localSections, mySections);
+  }
+
+  /** Enter multiple critical sections simultaneously.
+  *@param readSectionKeys is an array of read section descriptors, or null if there are no read sections desired.
+  *@param nonExSectionKeys is an array of non-ex write section descriptors, or null if none desired.
+  *@param writeSectionKeys is an array of write section descriptors, or null if there are none desired.
+  */
+  @Override
+  public final void enterCriticalSections(String[] readSectionKeys, String[] nonExSectionKeys, String[] writeSectionKeys)
+    throws ManifoldCFException
+  {
+    enter(readSectionKeys, nonExSectionKeys, writeSectionKeys, "critical section", localSections, mySections);
+  }
+
+  /** Leave multiple critical sections simultaneously.
+  *@param readSectionKeys is an array of read section descriptors, or null if there are no read sections desired.
+  *@param nonExSectionKeys is an array of non-ex write section descriptors, or null if none desired.
+  *@param writeSectionKeys is an array of write section descriptors, or null if there are none desired.
+  */
+  @Override
+  public final void leaveCriticalSections(String[] readSectionKeys, String[] nonExSectionKeys, String[] writeSectionKeys)
+    throws ManifoldCFException
+  {
+    leave(readSectionKeys, nonExSectionKeys, writeSectionKeys, "critical section", localSections, mySections);
+  }
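+
+  // A minimal critical-section sketch (the section name is illustrative); sections never cross
+  // JVM boundaries, so this only serializes threads within the current process:
+  //
+  //   lockManager.enterWriteCriticalSection("CACHE-REBUILD");
+  //   try
+  //   {
+  //     // ... work that only one thread in this JVM may do at a time ...
+  //   }
+  //   finally
+  //   {
+  //     lockManager.leaveWriteCriticalSection("CACHE-REBUILD");
+  //   }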
+
+
+  // Protected methods
+
+  protected static void enterNonExWrite(String lockKey, String description, LocalLockPool localLocks, LockPool crossLocks)
+    throws ManifoldCFException
+  {
+    if (Logging.lock.isDebugEnabled())
+      Logging.lock.debug("Entering non-ex write "+description+" '"+lockKey+"'");
+
+    LocalLock ll = localLocks.getLocalLock(lockKey);
+
+    // See if we already own a write lock for the object
+    // If we do, there is no reason to change the status of the global lock we own.
+    if (ll.hasNonExWriteLock() || ll.hasWriteLock())
+    {
+      ll.incrementNonExWriteLocks();
+      if (Logging.lock.isDebugEnabled())
+        Logging.lock.debug(" Successfully obtained "+description+"!");
+      return;
+    }
+
+    // Check for illegalities
+    if (ll.hasReadLock())
+    {
+      throw new ManifoldCFException("Illegal "+description+" sequence: NonExWrite "+description+" can't be within read "+description,ManifoldCFException.GENERAL_ERROR);
+    }
+
+    // We don't own a local non-ex write lock.  Get one.  The global lock will need
+    // to know if we already have a read lock.
+    while (true)
+    {
+      LockObject lo = crossLocks.getObject(lockKey);
+      try
+      {
+        lo.enterNonExWriteLock();
+        break;
+      }
+      catch (InterruptedException e)
+      {
+        throw new ManifoldCFException("Interrupted",e,ManifoldCFException.INTERRUPTED);
+      }
+      catch (ExpiredObjectException e)
+      {
+        // Try again to get a valid object
+      }
+    }
+    ll.incrementNonExWriteLocks();
+    if (Logging.lock.isDebugEnabled())
+      Logging.lock.debug(" Successfully obtained "+description+"!");
+  }
+
+  protected static void enterNonExWriteNoWait(String lockKey, String description, LocalLockPool localLocks, LockPool crossLocks)
+    throws ManifoldCFException, LockException
+  {
+    if (Logging.lock.isDebugEnabled())
+      Logging.lock.debug("Entering non-ex write "+description+" no wait '"+lockKey+"'");
+
+    LocalLock ll = localLocks.getLocalLock(lockKey);
+
+
+    // See if we already own a write lock for the object
+    // If we do, there is no reason to change the status of the global lock we own.
+    if (ll.hasNonExWriteLock() || ll.hasWriteLock())
+    {
+      ll.incrementNonExWriteLocks();
+      if (Logging.lock.isDebugEnabled())
+        Logging.lock.debug(" Successfully obtained "+description+"!");
+      return;
+    }
+
+    // Check for illegalities
+    if (ll.hasReadLock())
+    {
+      throw new ManifoldCFException("Illegal "+description+" sequence: NonExWrite "+description+" can't be within read "+description,ManifoldCFException.GENERAL_ERROR);
+    }
+
+    // We don't own a local non-ex write lock.  Get one.  The global lock will need
+    // to know if we already have a read lock.
+    while (true)
+    {
+      LockObject lo = crossLocks.getObject(lockKey);
+      try
+      {
+        synchronized (lo)
+        {
+          lo.enterNonExWriteLockNoWait();
+          break;
+        }
+      }
+      catch (LocalLockException e)
+      {
+
+        if (Logging.lock.isDebugEnabled())
+          Logging.lock.debug(" Could not non-ex write "+description+" '"+lockKey+"', lock exception");
+
+        // Throw LockException instead
+        throw new LockException(e.getMessage());
+      }
+      catch (InterruptedException e)
+      {
+        throw new ManifoldCFException("Interrupted",e,ManifoldCFException.INTERRUPTED);
+      }
+      catch (ExpiredObjectException e)
+      {
+        // Try again to get a valid object
+      }
+    }
+    ll.incrementNonExWriteLocks();
+    if (Logging.lock.isDebugEnabled())
+      Logging.lock.debug(" Successfully obtained "+description+"!");
+  }
+
+  protected static void leaveNonExWrite(String lockKey, String description, LocalLockPool localLocks, LockPool crossLocks)
+    throws ManifoldCFException
+  {
+    if (Logging.lock.isDebugEnabled())
+      Logging.lock.debug("Leaving non-ex write "+description+" '"+lockKey+"'");
+
+    LocalLock ll = localLocks.getLocalLock(lockKey);
+
+    ll.decrementNonExWriteLocks();
+    // See if we no longer have a write lock for the object.
+    // If we retain the stronger exclusive lock, we still do not need to
+    // change the status of the global lock.
+    if (!(ll.hasNonExWriteLock() || ll.hasWriteLock()))
+    {
+      while (true)
+      {
+        LockObject lo = crossLocks.getObject(lockKey);
+        try
+        {
+          lo.leaveNonExWriteLock();
+          break;
+        }
+        catch (InterruptedException e)
+        {
+          // try one more time
+          try
+          {
+            lo.leaveNonExWriteLock();
+            throw new ManifoldCFException("Interrupted",e,ManifoldCFException.INTERRUPTED);
+          }
+          catch (InterruptedException e2)
+          {
+            ll.incrementNonExWriteLocks();
+            throw new ManifoldCFException("Interrupted",e2,ManifoldCFException.INTERRUPTED);
+          }
+          catch (ExpiredObjectException e2)
+          {
+            ll.incrementNonExWriteLocks();
+            throw new ManifoldCFException("Interrupted",e,ManifoldCFException.INTERRUPTED);
+          }
+        }
+        catch (ExpiredObjectException e)
+        {
+          // Try again to get a valid object
+        }
+      }
+
+      localLocks.releaseLocalLock(lockKey);
+    }
+  }
+
+  protected static void enterWrite(String lockKey, String description, LocalLockPool localLocks, LockPool crossLocks)
+    throws ManifoldCFException
+  {
+    if (Logging.lock.isDebugEnabled())
+      Logging.lock.debug("Entering write "+description+" '"+lockKey+"'");
+
+    LocalLock ll = localLocks.getLocalLock(lockKey);
+
+
+    // See if we already own the write lock for the object
+    if (ll.hasWriteLock())
+    {
+      ll.incrementWriteLocks();
+      if (Logging.lock.isDebugEnabled())
+        Logging.lock.debug(" Successfully obtained "+description+"!");
+      return;
+    }
+
+    // Check for illegalities
+    if (ll.hasReadLock() || ll.hasNonExWriteLock())
+    {
+      throw new ManifoldCFException("Illegal "+description+" sequence: Write "+description+" can't be within read "+description+" or non-ex write "+description,ManifoldCFException.GENERAL_ERROR);
+    }
+
+    // We don't own a local write lock.  Get one.  The global lock will need
+    // to know if we already have a non-exclusive lock or a read lock, which we don't because
+    // it's illegal.
+    while (true)
+    {
+      LockObject lo = crossLocks.getObject(lockKey);
+      try
+      {
+        lo.enterWriteLock();
+        break;
+      }
+      catch (InterruptedException e)
+      {
+        throw new ManifoldCFException("Interrupted",e,ManifoldCFException.INTERRUPTED);
+      }
+      catch (ExpiredObjectException e)
+      {
+        // Try again
+      }
+    }
+    ll.incrementWriteLocks();
+    if (Logging.lock.isDebugEnabled())
+      Logging.lock.debug(" Successfully obtained "+description+"!");
+  }
+
+  protected static void enterWriteNoWait(String lockKey, String description, LocalLockPool localLocks, LockPool crossLocks)
+    throws ManifoldCFException, LockException
+  {
+    if (Logging.lock.isDebugEnabled())
+      Logging.lock.debug("Entering write "+description+" no wait '"+lockKey+"'");
+
+    LocalLock ll = localLocks.getLocalLock(lockKey);
+
+
+    // See if we already own the write lock for the object
+    if (ll.hasWriteLock())
+    {
+      ll.incrementWriteLocks();
+      if (Logging.lock.isDebugEnabled())
+        Logging.lock.debug(" Successfully obtained "+description+"!");
+      return;
+    }
+
+    // Check for illegalities
+    if (ll.hasReadLock() || ll.hasNonExWriteLock())
+    {
+      throw new ManifoldCFException("Illegal "+description+" sequence: Write "+description+" can't be within read "+description+" or non-ex write "+description,ManifoldCFException.GENERAL_ERROR);
+    }
+
+    // We don't own a local write lock.  Get one.  The global lock will need
+    // to know if we already have a non-exclusive lock or a read lock, which we don't because
+    // it's illegal.
+    while (true)
+    {
+      LockObject lo = crossLocks.getObject(lockKey);
+      try
+      {
+        synchronized (lo)
+        {
+          lo.enterWriteLockNoWait();
+          break;
+        }
+      }
+      catch (LocalLockException e)
+      {
+
+        if (Logging.lock.isDebugEnabled())
+        {
+          Logging.lock.debug(" Could not write "+description+" '"+lockKey+"', lock exception");
+        }
+
+        throw new LockException(e.getMessage());
+      }
+      catch (InterruptedException e)
+      {
+        throw new ManifoldCFException("Interrupted",e,ManifoldCFException.INTERRUPTED);
+      }
+      catch (ExpiredObjectException e)
+      {
+        // Try again
+      }
+    }
+
+    ll.incrementWriteLocks();
+    if (Logging.lock.isDebugEnabled())
+      Logging.lock.debug(" Successfully obtained "+description+"!");
+  }
+
+  protected static void leaveWrite(String lockKey, String description, LocalLockPool localLocks, LockPool crossLocks)
+    throws ManifoldCFException
+  {
+    if (Logging.lock.isDebugEnabled())
+      Logging.lock.debug("Leaving write "+description+" '"+lockKey+"'");
+
+    LocalLock ll = localLocks.getLocalLock(lockKey);
+
+    ll.decrementWriteLocks();
+    if (!ll.hasWriteLock())
+    {
+      while (true)
+      {
+        LockObject lo = crossLocks.getObject(lockKey);
+        try
+        {
+          lo.leaveWriteLock();
+          break;
+        }
+        catch (InterruptedException e)
+        {
+          // try one more time
+          try
+          {
+            lo.leaveWriteLock();
+            throw new ManifoldCFException("Interrupted",e,ManifoldCFException.INTERRUPTED);
+          }
+          catch (InterruptedException e2)
+          {
+            ll.incrementWriteLocks();
+            throw new ManifoldCFException("Interrupted",e2,ManifoldCFException.INTERRUPTED);
+          }
+          catch (ExpiredObjectException e2)
+          {
+            ll.incrementWriteLocks();
+            throw new ManifoldCFException("Interrupted",e,ManifoldCFException.INTERRUPTED);
+          }
+        }
+        catch (ExpiredObjectException e)
+        {
+          // Try again
+        }
+      }
+
+      localLocks.releaseLocalLock(lockKey);
+    }
+  }
+
+  protected static void enterRead(String lockKey, String description, LocalLockPool localLocks, LockPool crossLocks)
+    throws ManifoldCFException
+  {
+    if (Logging.lock.isDebugEnabled())
+      Logging.lock.debug("Entering read "+description+" '"+lockKey+"'");
+
+
+    LocalLock ll = localLocks.getLocalLock(lockKey);
+
+    // See if we already own the read lock for the object.
+    // Write locks or non-ex writelocks count as well (they're stronger).
+    if (ll.hasReadLock() || ll.hasNonExWriteLock() || ll.hasWriteLock())
+    {
+      ll.incrementReadLocks();
+      if (Logging.lock.isDebugEnabled())
+        Logging.lock.debug(" Successfully obtained "+description+"!");
+      return;
+    }
+
+    // We don't own a local read lock.  Get one.
+    while (true)
+    {
+      LockObject lo = crossLocks.getObject(lockKey);
+      try
+      {
+        lo.enterReadLock();
+        break;
+      }
+      catch (InterruptedException e)
+      {
+        throw new ManifoldCFException("Interrupted",e,ManifoldCFException.INTERRUPTED);
+      }
+      catch (ExpiredObjectException e)
+      {
+        // Try again
+      }
+    }
+    ll.incrementReadLocks();
+    if (Logging.lock.isDebugEnabled())
+      Logging.lock.debug(" Successfully obtained "+description+"!");
+  }
+
+  protected static void enterReadNoWait(String lockKey, String description, LocalLockPool localLocks, LockPool crossLocks)
+    throws ManifoldCFException, LockException
+  {
+    if (Logging.lock.isDebugEnabled())
+      Logging.lock.debug("Entering read "+description+" no wait '"+lockKey+"'");
+
+    LocalLock ll = localLocks.getLocalLock(lockKey);
+
+    // See if we already own the read lock for the object.
+    // Write locks or non-ex writelocks count as well (they're stronger).
+    if (ll.hasReadLock() || ll.hasNonExWriteLock() || ll.hasWriteLock())
+    {
+      ll.incrementReadLocks();
+      if (Logging.lock.isDebugEnabled())
+        Logging.lock.debug(" Successfully obtained "+description+"!");
+      return;
+    }
+
+    // We don't own a local read lock.  Get one.
+    while (true)
+    {
+      LockObject lo = crossLocks.getObject(lockKey);
+      try
+      {
+        synchronized (lo)
+        {
+          lo.enterReadLockNoWait();
+          break;
+        }
+      }
+      catch (LocalLockException e)
+      {
+
+        if (Logging.lock.isDebugEnabled())
+          Logging.lock.debug(" Could not read "+description+" '"+lockKey+"', lock exception");
+
+        throw new LockException(e.getMessage());
+      }
+      catch (InterruptedException e)
+      {
+        throw new ManifoldCFException("Interrupted",e,ManifoldCFException.INTERRUPTED);
+      }
+      catch (ExpiredObjectException e)
+      {
+        // Try again
+      }
+    }
+
+    ll.incrementReadLocks();
+    if (Logging.lock.isDebugEnabled())
+      Logging.lock.debug(" Successfully obtained "+description+"!");
+  }
+
+  protected static void leaveRead(String lockKey, String description, LocalLockPool localLocks, LockPool crossLocks)
+    throws ManifoldCFException
+  {
+    if (Logging.lock.isDebugEnabled())
+      Logging.lock.debug("Leaving read "+description+" '"+lockKey+"'");
+
+    LocalLock ll = localLocks.getLocalLock(lockKey);
+
+    ll.decrementReadLocks();
+    if (!(ll.hasReadLock() || ll.hasNonExWriteLock() || ll.hasWriteLock()))
+    {
+      while (true)
+      {
+        LockObject lo = crossLocks.getObject(lockKey);
+        try
+        {
+          lo.leaveReadLock();
+          break;
+        }
+        catch (InterruptedException e)
+        {
+          // Try one more time
+          try
+          {
+            lo.leaveReadLock();
+            throw new ManifoldCFException("Interrupted",e,ManifoldCFException.INTERRUPTED);
+          }
+          catch (InterruptedException e2)
+          {
+            ll.incrementReadLocks();
+            throw new ManifoldCFException("Interrupted",e2,ManifoldCFException.INTERRUPTED);
+          }
+          catch (ExpiredObjectException e2)
+          {
+            ll.incrementReadLocks();
+            throw new ManifoldCFException("Interrupted",e,ManifoldCFException.INTERRUPTED);
+          }
+        }
+        catch (ExpiredObjectException e)
+        {
+          // Try again
+        }
+      }
+      localLocks.releaseLocalLock(lockKey);
+    }
+  }
+
+  protected static void clear(String description, LocalLockPool localLocks, LockPool crossLocks)
+    throws ManifoldCFException
+  {
+    if (Logging.lock.isDebugEnabled())
+      Logging.lock.debug("Clearing all "+description+"s");
+
+    for (String keyValue : localLocks.keySet())
+    {
+      LocalLock ll = localLocks.getLocalLock(keyValue);
+      while (ll.hasWriteLock())
+        leaveWrite(keyValue, description, localLocks, crossLocks);
+      while (ll.hasNonExWriteLock())
+        leaveNonExWrite(keyValue, description, localLocks, crossLocks);
+      while (ll.hasReadLock())
+        leaveRead(keyValue, description, localLocks, crossLocks);
+    }
+  }
+
+  protected static void enter(String[] readLocks, String[] nonExWriteLocks, String[] writeLocks, String description, LocalLockPool localLocks, LockPool crossLocks)
+    throws ManifoldCFException
+  {
+    if (Logging.lock.isDebugEnabled())
+    {
+      Logging.lock.debug("Entering multiple "+description+"s:");
+      int i;
+      if (readLocks != null)
+      {
+        i = 0;
+        while (i < readLocks.length)
+        {
+          Logging.lock.debug(" Read "+description+" '"+readLocks[i++]+"'");
+        }
+      }
+      if (nonExWriteLocks != null)
+      {
+        i = 0;
+        while (i < nonExWriteLocks.length)
+        {
+          Logging.lock.debug(" Non-ex write "+description+" '"+nonExWriteLocks[i++]+"'");
+        }
+      }
+      if (writeLocks != null)
+      {
+        i = 0;
+        while (i < writeLocks.length)
+        {
+          Logging.lock.debug(" Write "+description+" '"+writeLocks[i++]+"'");
+        }
+      }
+    }
+
+
+    // Sort the locks.  This improves the chances of making it through the locking process without
+    // contention!
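+    // For example (keys illustrative): if one caller asked for locks {"B","A"} and another for
+    // {"A","B"} in request order, each could acquire its first lock and wait forever for the
+    // other's; sorting both requests into {"A","B"} order makes that deadlock impossible.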
+    LockDescription lds[] = getSortedUniqueLocks(readLocks,nonExWriteLocks,writeLocks);
+    int locksProcessed = 0;
+    try
+    {
+      while (locksProcessed < lds.length)
+      {
+        LockDescription ld = lds[locksProcessed];
+        int lockType = ld.getType();
+        String lockKey = ld.getKey();
+        LocalLock ll;
+        switch (lockType)
+        {
+        case TYPE_WRITE:
+          ll = localLocks.getLocalLock(lockKey);
+          // Check for illegalities
+          if ((ll.hasReadLock() || ll.hasNonExWriteLock()) && !ll.hasWriteLock())
+          {
+            throw new ManifoldCFException("Illegal "+description+" sequence: Write "+description+" can't be within read "+description+" or non-ex write "+description,ManifoldCFException.GENERAL_ERROR);
+          }
+
+          // See if we already own the write lock for the object
+          if (!ll.hasWriteLock())
+          {
+            // We don't own a local write lock.  Get one.
+            while (true)
+            {
+              LockObject lo = crossLocks.getObject(lockKey);
+              try
+              {
+                lo.enterWriteLock();
+                break;
+              }
+              catch (ExpiredObjectException e)
+              {
+                // Try again
+              }
+            }
+          }
+          ll.incrementWriteLocks();
+          break;
+        case TYPE_WRITENONEX:
+          ll = localLocks.getLocalLock(lockKey);
+          // Check for illegalities
+          if (ll.hasReadLock() && !(ll.hasNonExWriteLock() || ll.hasWriteLock()))
+          {
+            throw new ManifoldCFException("Illegal "+description+" sequence: NonExWrite "+description+" can't be within read "+description,ManifoldCFException.GENERAL_ERROR);
+          }
+
+          // See if we already own the write lock for the object
+          if (!(ll.hasNonExWriteLock() || ll.hasWriteLock()))
+          {
+            // We don't own a local write lock.  Get one.
+            while (true)
+            {
+              LockObject lo = crossLocks.getObject(lockKey);
+              try
+              {
+                lo.enterNonExWriteLock();
+                break;
+              }
+              catch (ExpiredObjectException e)
+              {
+                // Try again
+              }
+            }
+          }
+          ll.incrementNonExWriteLocks();
+          break;
+        case TYPE_READ:
+          ll = localLocks.getLocalLock(lockKey);
+          if (!(ll.hasReadLock() || ll.hasNonExWriteLock() || ll.hasWriteLock()))
+          {
+            // We don't own a local read lock.  Get one.
+            while (true)
+            {
+              LockObject lo = crossLocks.getObject(lockKey);
+              try
+              {
+                lo.enterReadLock();
+                break;
+              }
+              catch (ExpiredObjectException e)
+              {
+                // Try again
+              }
+            }
+          }
+          ll.incrementReadLocks();
+          break;
+        }
+        locksProcessed++;
+      }
+      // Got all; we are done!
+      Logging.lock.debug(" Successfully obtained multiple "+description+"s!");
+      return;
+    }
+    catch (Throwable ex)
+    {
+      // No matter what, undo the locks we've taken
+      ManifoldCFException ae = null;
+      int errno = 0;
+
+      while (--locksProcessed >= 0)
+      {
+        LockDescription ld = lds[locksProcessed];
+        int lockType = ld.getType();
+        String lockKey = ld.getKey();
+        try
+        {
+          switch (lockType)
+          {
+          case TYPE_READ:
+            leaveRead(lockKey,description,localLocks,crossLocks);
+            break;
+          case TYPE_WRITENONEX:
+            leaveNonExWrite(lockKey,description,localLocks,crossLocks);
+            break;
+          case TYPE_WRITE:
+            leaveWrite(lockKey,description,localLocks,crossLocks);
+            break;
+          }
+        }
+        catch (ManifoldCFException e)
+        {
+          ae = e;
+        }
+      }
+
+      if (ae != null)
+      {
+        throw ae;
+      }
+      if (ex instanceof ManifoldCFException)
+      {
+        throw (ManifoldCFException)ex;
+      }
+      if (ex instanceof InterruptedException)
+      {
+        // It's InterruptedException
+        throw new ManifoldCFException("Interrupted",ex,ManifoldCFException.INTERRUPTED);
+      }
+      if (!(ex instanceof Error))
+      {
+        throw new Error("Unexpected exception",ex);
+      }
+      throw (Error)ex;
+    }
+  }
+
+  protected static void enterNoWait(String[] readLocks, String[] nonExWriteLocks, String[] writeLocks, String description, LocalLockPool localLocks, LockPool crossLocks)
+    throws ManifoldCFException, LockException
+  {
+    if (Logging.lock.isDebugEnabled())
+    {
+      Logging.lock.debug("Entering multiple "+description+"s no wait:");
+      int i;
+      if (readLocks != null)
+      {
+        i = 0;
+        while (i < readLocks.length)
+        {
+          Logging.lock.debug(" Read "+description+" '"+readLocks[i++]+"'");
+        }
+      }
+      if (nonExWriteLocks != null)
+      {
+        i = 0;
+        while (i < nonExWriteLocks.length)
+        {
+          Logging.lock.debug(" Non-ex write "+description+" '"+nonExWriteLocks[i++]+"'");
+        }
+      }
+      if (writeLocks != null)
+      {
+        i = 0;
+        while (i < writeLocks.length)
+        {
+          Logging.lock.debug(" Write "+description+" '"+writeLocks[i++]+"'");
+        }
+      }
+    }
+
+
+    // Sort the locks.  This improves the chances of making it through the locking process without
+    // contention!
+    LockDescription lds[] = getSortedUniqueLocks(readLocks,nonExWriteLocks,writeLocks);
+    int locksProcessed = 0;
+    try
+    {
+      while (locksProcessed < lds.length)
+      {
+        LockDescription ld = lds[locksProcessed];
+        int lockType = ld.getType();
+        String lockKey = ld.getKey();
+        LocalLock ll;
+        switch (lockType)
+        {
+        case TYPE_WRITE:
+          ll = localLocks.getLocalLock(lockKey);
+          // Check for illegalities
+          if ((ll.hasReadLock() || ll.hasNonExWriteLock()) && !ll.hasWriteLock())
+          {
+            throw new ManifoldCFException("Illegal "+description+" sequence: Write "+description+" can't be within read "+description+" or non-ex write "+description,ManifoldCFException.GENERAL_ERROR);
+          }
+
+          // See if we already own the write lock for the object
+          if (!ll.hasWriteLock())
+          {
+            // We don't own a local write lock.  Get one.
+            while (true)
+            {
+              LockObject lo = crossLocks.getObject(lockKey);
+              synchronized (lo)
+              {
+                try
+                {
+                  lo.enterWriteLockNoWait();
+                  break;
+                }
+                catch (ExpiredObjectException e)
+                {
+                  // Try again
+                }
+              }
+            }
+          }
+          ll.incrementWriteLocks();
+          break;
+        case TYPE_WRITENONEX:
+          ll = localLocks.getLocalLock(lockKey);
+          // Check for illegalities
+          if (ll.hasReadLock() && !(ll.hasNonExWriteLock() || ll.hasWriteLock()))
+          {
+            throw new ManifoldCFException("Illegal "+description+" sequence: NonExWrite "+description+" can't be within read "+description,ManifoldCFException.GENERAL_ERROR);
+          }
+
+          // See if we already own the write lock for the object
+          if (!(ll.hasNonExWriteLock() || ll.hasWriteLock()))
+          {
+            // We don't own a local write lock.  Get one.
+            while (true)
+            {
+              LockObject lo = crossLocks.getObject(lockKey);
+              synchronized (lo)
+              {
+                try
+                {
+                  lo.enterNonExWriteLockNoWait();
+                  break;
+                }
+                catch (ExpiredObjectException e)
+                {
+                  // Try again
+                }
+              }
+            }
+          }
+          ll.incrementNonExWriteLocks();
+          break;
+        case TYPE_READ:
+          ll = localLocks.getLocalLock(lockKey);
+          if (!(ll.hasReadLock() || ll.hasNonExWriteLock() || ll.hasWriteLock()))
+          {
+            // We don't own a local read lock.  Get one.
+            while (true)
+            {
+              LockObject lo = crossLocks.getObject(lockKey);
+              synchronized (lo)
+              {
+                try
+                {
+                  lo.enterReadLockNoWait();
+                  break;
+                }
+                catch (ExpiredObjectException e)
+                {
+                  // Try again
+                }
+              }
+            }
+          }
+          ll.incrementReadLocks();
+          break;
+        }
+        locksProcessed++;
+      }
+      // Got all; we are done!
+      if (Logging.lock.isDebugEnabled())
+        Logging.lock.debug(" Successfully obtained multiple "+description+"s!");
+      return;
+    }
+    catch (Throwable ex)
+    {
+      // No matter what, undo the locks we've taken
+      ManifoldCFException ae = null;
+      int errno = 0;
+
+      while (--locksProcessed >= 0)
+      {
+        LockDescription ld = lds[locksProcessed];
+        int lockType = ld.getType();
+        String lockKey = ld.getKey();
+        try
+        {
+          switch (lockType)
+          {
+          case TYPE_READ:
+            leaveRead(lockKey,description,localLocks,crossLocks);
+            break;
+          case TYPE_WRITENONEX:
+            leaveNonExWrite(lockKey,description,localLocks,crossLocks);
+            break;
+          case TYPE_WRITE:
+            leaveWrite(lockKey,description,localLocks,crossLocks);
+            break;
+          }
+        }
+        catch (ManifoldCFException e)
+        {
+          ae = e;
+        }
+      }
+
+      if (ae != null)
+      {
+        throw ae;
+      }
+      if (ex instanceof ManifoldCFException)
+      {
+        throw (ManifoldCFException)ex;
+      }
+      if (ex instanceof LockException || ex instanceof LocalLockException)
+      {
+        Logging.lock.debug(" Couldn't get "+description+"; throwing LockException");
+        // It's either LockException or LocalLockException
+        throw new LockException(ex.getMessage());
+      }
+      if (ex instanceof InterruptedException)
+      {
+        throw new ManifoldCFException("Interrupted",ex,ManifoldCFException.INTERRUPTED);
+      }
+      if (!(ex instanceof Error))
+      {
+        throw new Error("Unexpected exception",ex);
+      }
+      throw (Error)ex;
+
+    }
+
+  }
+
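+  /** Release a set of locks previously obtained together.  Locks are released in reverse
+  * sorted order; if a release fails, the remaining locks are still released and the last
+  * exception encountered is rethrown.
+  */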
+  protected static void leave(String[] readLocks, String[] writeNonExLocks, String[] writeLocks, String description, LocalLockPool localLocks, LockPool crossLocks)
+    throws ManifoldCFException
+  {
+    LockDescription[] lds = getSortedUniqueLocks(readLocks,writeNonExLocks,writeLocks);
+    // Free them all... one at a time is fine
+    ManifoldCFException ae = null;
+    int i = lds.length;
+    while (--i >= 0)
+    {
+      LockDescription ld = lds[i];
+      String lockKey = ld.getKey();
+      int lockType = ld.getType();
+      try
+      {
+        switch (lockType)
+        {
+        case TYPE_READ:
+          leaveRead(lockKey,description,localLocks,crossLocks);
+          break;
+        case TYPE_WRITENONEX:
+          leaveNonExWrite(lockKey,description,localLocks,crossLocks);
+          break;
+        case TYPE_WRITE:
+          leaveWrite(lockKey,description,localLocks,crossLocks);
+          break;
+        }
+      }
+      catch (ManifoldCFException e)
+      {
+        ae = e;
+      }
+    }
+
+    if (ae != null)
+    {
+      throw ae;
+    }
+  }
+
+  /** Process inbound locks into a sorted vector of most-restrictive unique locks
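+  * Duplicate keys are collapsed to the most restrictive lock type requested (write over
+  * non-ex write over read), and the result is sorted by key so that callers acquiring
+  * overlapping lock sets do so in a consistent order.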
+  */
+  protected static LockDescription[] getSortedUniqueLocks(String[] readLocks, String[] writeNonExLocks,
+    String[] writeLocks)
+  {
+    // First build a unique hash of lock descriptions
+    Map<String,LockDescription> ht = new HashMap<String,LockDescription>();
+    int i;
+    if (readLocks != null)
+    {
+      i = 0;
+      while (i < readLocks.length)
+      {
+        String key = readLocks[i++];
+        LockDescription ld = ht.get(key);
+        if (ld == null)
+        {
+          ld = new LockDescription(TYPE_READ,key);
+          ht.put(key,ld);
+        }
+        else
+          ld.set(TYPE_READ);
+      }
+    }
+    if (writeNonExLocks != null)
+    {
+      i = 0;
+      while (i < writeNonExLocks.length)
+      {
+        String key = writeNonExLocks[i++];
+        LockDescription ld = ht.get(key);
+        if (ld == null)
+        {
+          ld = new LockDescription(TYPE_WRITENONEX,key);
+          ht.put(key,ld);
+        }
+        else
+          ld.set(TYPE_WRITENONEX);
+      }
+    }
+    if (writeLocks != null)
+    {
+      i = 0;
+      while (i < writeLocks.length)
+      {
+        String key = writeLocks[i++];
+        LockDescription ld = ht.get(key);
+        if (ld == null)
+        {
+          ld = new LockDescription(TYPE_WRITE,key);
+          ht.put(key,ld);
+        }
+        else
+          ld.set(TYPE_WRITE);
+      }
+    }
+
+    // Now, sort by key name
+    LockDescription[] rval = new LockDescription[ht.size()];
+    String[] sortarray = new String[ht.size()];
+    i = 0;
+    for (String key : ht.keySet())
+    {
+      sortarray[i++] = key;
+    }
+    java.util.Arrays.sort(sortarray);
+    i = 0;
+    for (String key : sortarray)
+    {
+      rval[i++] = ht.get(key);
+    }
+    return rval;
+  }
+
+  protected static class LockDescription
+  {
+    protected int lockType;
+    protected String lockKey;
+
+    public LockDescription(int lockType, String lockKey)
+    {
+      this.lockType = lockType;
+      this.lockKey = lockKey;
+    }
+
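+    /** Upgrade to a more restrictive lock type; requests for a less restrictive type are ignored. */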
+    public void set(int lockType)
+    {
+      if (lockType > this.lockType)
+        this.lockType = lockType;
+    }
+
+    public int getType()
+    {
+      return lockType;
+    }
+
+    public String getKey()
+    {
+      return lockKey;
+    }
+  }
+
+
+}
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/lockmanager/FileLockManager.java b/framework/core/src/main/java/org/apache/manifoldcf/core/lockmanager/FileLockManager.java
new file mode 100644
index 0000000..f165218
--- /dev/null
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/lockmanager/FileLockManager.java
@@ -0,0 +1,273 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.core.lockmanager;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.core.system.Logging;
+import org.apache.manifoldcf.core.system.ManifoldCF;
+import java.util.*;
+import java.io.*;
+
+/** This is the file-based lock manager.
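+* Cross-process locks, flags, and shared data are kept as small files under the configured
+* synch directory; see FileLockObject for the locking protocol.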
+*/
+public class FileLockManager extends BaseLockManager
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  /** Synchronization directory property - local to this implementation of ILockManager */
+  public static final String synchDirectoryProperty = "org.apache.manifoldcf.synchdirectory";
+
+  // These are for file-based locks (which cross JVM boundaries)
+  protected final static Integer lockPoolInitialization = new Integer(0);
+  protected static LockPool myFileLocks = null;
+
+  // This is the directory used for cross-JVM synchronization, or null if off
+  protected File synchDirectory = null;
+
+  public FileLockManager(File synchDirectory)
+    throws ManifoldCFException
+  {
+    this.synchDirectory = synchDirectory;
+    if (synchDirectory == null)
+      throw new ManifoldCFException("Synch directory cannot be null");
+    if (!synchDirectory.isDirectory())
+      throw new ManifoldCFException("Synch directory must point to an existing, writeable directory!",ManifoldCFException.SETUP_ERROR);
+    synchronized(lockPoolInitialization)
+    {
+      if (myFileLocks == null)
+      {
+        myFileLocks = new LockPool(new FileLockObjectFactory(synchDirectory));
+      }
+    }
+  }
+  
+  public FileLockManager()
+    throws ManifoldCFException
+  {
+    this(getSynchDirectoryProperty());
+  }
+
+  /** Get the synch directory property. */
+  public static File getSynchDirectoryProperty()
+    throws ManifoldCFException
+  {
+    return ManifoldCF.getFileProperty(synchDirectoryProperty);
+  }
+  
+  /** Calculate the name of a flag resource.
+  *@param flagName is the name of the flag.
+  *@return the name for the flag resource.
+  */
+  protected static String getFlagResourceName(String flagName)
+  {
+    return "flag-"+flagName;
+  }
+    
+  /** Raise a flag.  Use this method to assert a condition, or send a global signal.  The flag will be reset when the
+  * entire system is restarted.
+  *@param flagName is the name of the flag to set.
+  */
+  @Override
+  public void setGlobalFlag(String flagName)
+    throws ManifoldCFException
+  {
+    String resourceName = getFlagResourceName(flagName);
+    String path = makeFilePath(resourceName);
+    (new File(path)).mkdirs();
+    File f = new File(path,ManifoldCF.safeFileName(resourceName));
+    try
+    {
+      f.createNewFile();
+    }
+    catch (InterruptedIOException e)
+    {
+      throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+    }
+    catch (IOException e)
+    {
+      throw new ManifoldCFException(e.getMessage(),e);
+    }
+  }
+
+  /** Clear a flag.  Use this method to clear a condition, or retract a global signal.
+  *@param flagName is the name of the flag to clear.
+  */
+  @Override
+  public void clearGlobalFlag(String flagName)
+    throws ManifoldCFException
+  {
+    String resourceName = getFlagResourceName(flagName);
+    File f = new File(makeFilePath(resourceName),ManifoldCF.safeFileName(resourceName));
+    f.delete();
+  }
+  
+  /** Check the condition of a specified flag.
+  *@param flagName is the name of the flag to check.
+  *@return true if the flag is set, false otherwise.
+  */
+  @Override
+  public boolean checkGlobalFlag(String flagName)
+    throws ManifoldCFException
+  {
+    String resourceName = getFlagResourceName(flagName);
+    File f = new File(makeFilePath(resourceName),ManifoldCF.safeFileName(resourceName));
+    return f.exists();
+  }
+
+  /** Read data from a shared data resource.  Use this method to read any existing data, or get a null back if there is no such resource.
+  * Note well that this is not necessarily an atomic operation, and it must thus be protected by a lock.
+  *@param resourceName is the global name of the resource.
+  *@return a byte array containing the data, or null.
+  */
+  @Override
+  public byte[] readData(String resourceName)
+    throws ManifoldCFException
+  {
+    File f = new File(makeFilePath(resourceName),ManifoldCF.safeFileName(resourceName));
+    try
+    {
+      InputStream is = new FileInputStream(f);
+      try
+      {
+        ByteArrayBuffer bab = new ByteArrayBuffer();
+        while (true)
+        {
+          int x = is.read();
+          if (x == -1)
+            break;
+          bab.add((byte)x);
+        }
+        return bab.toArray();
+      }
+      finally
+      {
+        is.close();
+      }
+    }
+    catch (FileNotFoundException e)
+    {
+      return null;
+    }
+    catch (InterruptedIOException e)
+    {
+      throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+    }
+    catch (IOException e)
+    {
+      throw new ManifoldCFException("IO exception: "+e.getMessage(),e);
+    }
+  }
+  
+  /** Write data to a shared data resource.  Use this method to write a body of data into a shared resource.
+  * Note well that this is not necessarily an atomic operation, and it must thus be protected by a lock.
+  *@param resourceName is the global name of the resource.
+  *@param data is the byte array containing the data.  Pass null if you want to delete the resource completely.
+  */
+  @Override
+  public void writeData(String resourceName, byte[] data)
+    throws ManifoldCFException
+  {
+    try
+    {
+      String path = makeFilePath(resourceName);
+      // Make sure the directory exists
+      (new File(path)).mkdirs();
+      File f = new File(path,ManifoldCF.safeFileName(resourceName));
+      if (data == null)
+      {
+        f.delete();
+        return;
+      }
+      FileOutputStream os = new FileOutputStream(f);
+      try
+      {
+        os.write(data,0,data.length);
+      }
+      finally
+      {
+        os.close();
+      }
+    }
+    catch (InterruptedIOException e)
+    {
+      throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+    }
+    catch (IOException e)
+    {
+      throw new ManifoldCFException("IO exception: "+e.getMessage(),e);
+    }
+  }
+
+  /** Override this method to change the nature of global locks.
+  */
+  @Override
+  protected LockPool getGlobalLockPool()
+  {
+    return myFileLocks;
+  }
+
+  /** Create a file path given a key name.
+  *@param key is the key name.
+  *@return the file path.
+  */
+  protected String makeFilePath(String key)
+  {
+    int hashcode = key.hashCode();
+    int outerDirNumber = (hashcode & (1023));
+    int innerDirNumber = ((hashcode >> 10) & (1023));
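+    // The key's hash code is split into two 10-bit fields (up to 1024 x 1024 subdirectories),
+    // spreading flag/data resource files across the synch directory so no single directory grows too large.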
+    String fullDir = synchDirectory.toString();
+    if (fullDir.length() == 0 || !fullDir.endsWith("/"))
+      fullDir = fullDir + "/";
+    fullDir = fullDir + Integer.toString(outerDirNumber)+"/"+Integer.toString(innerDirNumber);
+    return fullDir;
+  }
+
+  protected static final int BASE_SIZE = 128;
+  
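+  /** Minimal growable byte buffer used by readData(); doubles its capacity when it fills. */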
+  protected static class ByteArrayBuffer
+  {
+    protected byte[] buffer;
+    protected int length;
+    
+    public ByteArrayBuffer()
+    {
+      buffer = new byte[BASE_SIZE];
+      length = 0;
+    }
+    
+    public void add(byte b)
+    {
+      if (length == buffer.length)
+      {
+        byte[] oldbuffer = buffer;
+        buffer = new byte[length * 2];
+        System.arraycopy(oldbuffer,0,buffer,0,length);
+      }
+      buffer[length++] = b;
+    }
+    
+    public byte[] toArray()
+    {
+      byte[] rval = new byte[length];
+      System.arraycopy(buffer,0,rval,0,length);
+      return rval;
+    }
+  }
+
+}
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/lockmanager/FileLockObject.java b/framework/core/src/main/java/org/apache/manifoldcf/core/lockmanager/FileLockObject.java
new file mode 100644
index 0000000..73ff12f
--- /dev/null
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/lockmanager/FileLockObject.java
@@ -0,0 +1,480 @@
+/* $Id: LockObject.java 988245 2010-08-23 18:39:35Z kwright $ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.core.lockmanager;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.core.system.ManifoldCF;
+import org.apache.manifoldcf.core.system.Logging;
+import java.io.*;
+
+/** One instance of this object exists for each lock on each JVM!
+* This is the file-system version of the lock.
+*/
+public class FileLockObject extends LockObject
+{
+  public static final String _rcsid = "@(#)$Id: LockObject.java 988245 2010-08-23 18:39:35Z kwright $";
+
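+  // Lock status encoding stored in the status file:
+  //   0   = unlocked
+  //   >0  = number of read locks held
+  //   -1  = exclusively write-locked (STATUS_WRITELOCKED)
+  //   <-1 = non-ex write locks held (n writers stored as -(n+1))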
+  private final static int STATUS_WRITELOCKED = -1;
+
+  private File lockDirectoryName = null;
+  private File lockFileName = null;
+  private boolean isSync;                 // True if we need to synchronize across JVMs
+
+  private final static String DOTLOCK = ".lock";
+  private final static String DOTFILE = ".file";
+  private final static String SLASH = "/";
+
+
+  public FileLockObject(LockPool lockPool, Object lockKey, File synchDir)
+  {
+    super(lockPool,lockKey);
+    this.isSync = (synchDir != null);
+    if (isSync)
+    {
+      // Hash the filename
+      int hashcode = lockKey.hashCode();
+      int outerDirNumber = (hashcode & (1023));
+      int innerDirNumber = ((hashcode >> 10) & (1023));
+      String fullDir = synchDir.toString();
+      if (fullDir.length() == 0 || !fullDir.endsWith(SLASH))
+        fullDir = fullDir + SLASH;
+      fullDir = fullDir + Integer.toString(outerDirNumber)+SLASH+Integer.toString(innerDirNumber);
+      (new File(fullDir)).mkdirs();
+      String filename = createFileName(lockKey);
+
+      lockDirectoryName = new File(fullDir,filename+DOTLOCK);
+      lockFileName = new File(fullDir,filename+DOTFILE);
+    }
+  }
+
+  private static String createFileName(Object lockKey)
+  {
+    return "lock-"+ManifoldCF.safeFileName(lockKey.toString());
+  }
+
+  @Override
+  protected void obtainGlobalWriteLockNoWait()
+    throws ManifoldCFException, LockException, InterruptedException
+  {
+    if (isSync)
+    {
+      grabFileLock();
+      try
+      {
+        int status = readFile();
+        if (status != 0)
+        {
+          throw new LockException(LOCKEDANOTHERJVM);
+        }
+        writeFile(STATUS_WRITELOCKED);
+      }
+      finally
+      {
+        releaseFileLock();
+      }
+    }
+  }
+
+  @Override
+  protected void clearGlobalWriteLock()
+    throws ManifoldCFException, LockException, InterruptedException
+  {
+    if (isSync)
+    {
+      grabFileLock();
+      try
+      {
+        writeFile(0);
+      }
+      finally
+      {
+        releaseFileLock();
+      }
+    }
+  }
+
+  @Override
+  protected void obtainGlobalNonExWriteLockNoWait()
+    throws ManifoldCFException, LockException, InterruptedException
+  {
+    // Attempt to obtain a global write lock
+    if (isSync)
+    {
+      grabFileLock();
+      try
+      {
+        int status = readFile();
+        // Readers (status > 0) or an exclusive writer (status == STATUS_WRITELOCKED) block us;
+        // a free lock (0) or existing non-ex writers (status < STATUS_WRITELOCKED) do not.
+        if (status > 0 || status == STATUS_WRITELOCKED)
+        {
+          throw new LockException(LOCKEDANOTHERJVM);
+        }
+        if (status == 0)
+          status = STATUS_WRITELOCKED;
+        writeFile(status-1);
+      }
+      finally
+      {
+        releaseFileLock();
+      }
+    }
+  }
+
+  @Override
+  protected void clearGlobalNonExWriteLock()
+    throws ManifoldCFException, LockException, InterruptedException
+  {
+    if (isSync)
+    {
+      grabFileLock();
+      try
+      {
+        int status = readFile();
+        if (status >= STATUS_WRITELOCKED)
+          throw new RuntimeException("JVM error: File lock is not in expected state for object "+this.toString());
+        status++;
+        if (status == STATUS_WRITELOCKED)
+          status = 0;
+        writeFile(status);
+      }
+      finally
+      {
+        releaseFileLock();
+      }
+    }
+  }
+
+  @Override
+  protected void obtainGlobalReadLockNoWait()
+    throws ManifoldCFException, LockException, InterruptedException
+  {
+    // Attempt to obtain a global read lock
+    if (isSync)
+    {
+      grabFileLock();
+      try
+      {
+        int status = readFile();
+        if (status <= STATUS_WRITELOCKED)
+        {
+          throw new LockException(LOCKEDANOTHERJVM);
+        }
+        status++;
+        writeFile(status);
+      }
+      finally
+      {
+        releaseFileLock();
+      }
+    }
+  }
+
+  @Override
+  protected void clearGlobalReadLock()
+    throws ManifoldCFException, LockException, InterruptedException
+  {
+    if (isSync)
+    {
+      grabFileLock();
+      try
+      {
+        int status = readFile();
+        // System.out.println(" Read status = "+Integer.toString(status));
+        if (status == 0)
+          throw new RuntimeException("JVM error: File lock is not in expected state for object "+this.toString());
+        status--;
+        writeFile(status);
+        // System.out.println(" Wrote status = "+Integer.toString(status));
+      }
+      finally
+      {
+        releaseFileLock();
+      }
+    }
+  }
+
+  private final static String FILELOCKED = "File locked";
+
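+  /** Acquire the short-lived cross-process mutex that guards the status file, by atomically
+  * creating the ".lock" marker file.  If another process currently holds it, a LockException
+  * is thrown rather than blocking; IO errors are logged and retried.
+  */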
+  private synchronized void grabFileLock()
+    throws LockException, InterruptedException
+  {
+    while (true)
+    {
+      // Try to create the lock file
+      try
+      {
+        if (lockDirectoryName.createNewFile() == false)
+          throw new LockException(FILELOCKED);
+        break;
+      }
+      catch (InterruptedIOException e)
+      {
+        throw new InterruptedException("Interrupted IO: "+e.getMessage());
+      }
+      catch (IOException e)
+      {
+        // Log this if possible
+        try
+        {
+          Logging.lock.warn("Attempt to set file lock '"+lockDirectoryName.toString()+"' failed: "+e.getMessage(),e);
+        }
+        catch (Throwable e2)
+        {
+          e.printStackTrace();
+        }
+        // Windows sometimes throws an exception when the lock file can't be created; sleep and retry
+        ManifoldCF.sleep(100);
+        continue;
+      }
+    }
+  }
+
+  private synchronized void releaseFileLock()
+    throws InterruptedException
+  {
+    Throwable ie = null;
+    while (true)
+    {
+      try
+      {
+        if (lockDirectoryName.delete())
+          break;
+        try
+        {
+          Logging.lock.fatal("Failure deleting file lock '"+lockDirectoryName.toString()+"'");
+        }
+        catch (Throwable e2)
+        {
+          System.out.println("Failure deleting file lock '"+lockDirectoryName.toString()+"'");
+        }
+        // Fail hard
+        System.exit(-100);
+      }
+      catch (Error e)
+      {
+        // An error - must try again to delete
+        // Attempting to log this to the log may not work due to disk being full, but try anyway.
+        String message = "Error deleting file lock '"+lockDirectoryName.toString()+"': "+e.getMessage();
+        try
+        {
+          Logging.lock.error(message,e);
+        }
+        catch (Throwable e2)
+        {
+          // Ok, we failed, send it to standard out
+          System.out.println(message);
+          e.printStackTrace();
+        }
+        ie = e;
+        ManifoldCF.sleep(100);
+        continue;
+      }
+      catch (RuntimeException e)
+      {
+        // A runtime exception - try again to delete
+        // Attempting to log this to the log may not work due to disk being full, but try anyway.
+        String message = "Error deleting file lock '"+lockDirectoryName.toString()+"': "+e.getMessage();
+        try
+        {
+          Logging.lock.error(message,e);
+        }
+        catch (Throwable e2)
+        {
+          // Ok, we failed, send it to standard out
+          System.out.println(message);
+          e.printStackTrace();
+        }
+        ie = e;
+        ManifoldCF.sleep(100);
+        continue;
+      }
+    }
+
+    // Succeeded finally - but we need to rethrow any exceptions we got
+    if (ie != null)
+    {
+      if (ie instanceof InterruptedException)
+        throw (InterruptedException)ie;
+      if (ie instanceof Error)
+        throw (Error)ie;
+      if (ie instanceof RuntimeException)
+        throw (RuntimeException)ie;
+    }
+
+  }
+
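+  /** Read the current lock status from the status file.  A missing or unreadable file is
+  * treated as status 0 (unlocked).
+  */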
+  private synchronized int readFile()
+    throws InterruptedException
+  {
+    try
+    {
+      FileReader fr = new FileReader(lockFileName);
+      try
+      {
+        BufferedReader x = new BufferedReader(fr);
+        try
+        {
+          StringBuilder sb = new StringBuilder();
+          while (true)
+          {
+            int rval = x.read();
+            if (rval == -1)
+              break;
+            sb.append((char)rval);
+          }
+          try
+          {
+            return Integer.parseInt(sb.toString());
+          }
+          catch (NumberFormatException e)
+          {
+            // We should never be in a situation where we can't parse a number we have supposedly written.
+            // If it does happen, convert it to an IOException so the caller can log it and recover.
+            throw new IOException("Lock number read was not valid: "+e.getMessage());
+          }
+        }
+        finally
+        {
+          x.close();
+        }
+      }
+      catch (InterruptedIOException e)
+      {
+        throw new InterruptedException("Interrupted IO: "+e.getMessage());
+      }
+      catch (IOException e)
+      {
+        String message = "Could not read from lock file: '"+lockFileName.toString()+"'";
+        try
+        {
+          Logging.lock.error(message,e);
+        }
+        catch (Throwable e2)
+        {
+          System.out.println(message);
+          e.printStackTrace();
+        }
+        // Don't fail hard or there is no way to recover
+        throw e;
+      }
+      finally
+      {
+        fr.close();
+      }
+    }
+    catch (InterruptedIOException e)
+    {
+      throw new InterruptedException("Interrupted IO: "+e.getMessage());
+    }
+    catch (IOException e)
+    {
+      return 0;
+    }
+
+  }
+
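+  /** Write the lock status to the status file; a value of 0 deletes the file.  A hard write
+  * failure (e.g. disk full) shuts the process down, since locks could otherwise be left dangling.
+  */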
+  private synchronized void writeFile(int value)
+    throws InterruptedException
+  {
+    try
+    {
+      if (value == 0)
+      {
+        if (lockFileName.delete() == false)
+          throw new IOException("Could not delete file '"+lockFileName.toString()+"'");
+      }
+      else
+      {
+        FileWriter fw = new FileWriter(lockFileName);
+        try
+        {
+          BufferedWriter x = new BufferedWriter(fw);
+          try
+          {
+            x.write(Integer.toString(value));
+          }
+          finally
+          {
+            x.close();
+          }
+        }
+        finally
+        {
+          fw.close();
+        }
+      }
+    }
+    catch (Error e)
+    {
+      // Couldn't write for some reason!  Write to BOTH stdout and the log, since we
+      // can't be sure we will succeed at the latter.
+      String message = "Couldn't write to lock file; hard error occurred.  Shutting down process; locks may be left dangling.  You must cleanup before restarting.";
+      try
+      {
+        Logging.lock.error(message,e);
+      }
+      catch (Throwable e2)
+      {
+        System.out.println(message);
+        e.printStackTrace();
+      }
+      System.exit(-100);
+    }
+    catch (RuntimeException e)
+    {
+      // Couldn't write for some reason!  Write to BOTH stdout and the log, since we
+      // can't be sure we will succeed at the latter.
+      String message = "Couldn't write to lock file; JVM error.  Shutting down process; locks may be left dangling.  You must cleanup before restarting.";
+      try
+      {
+        Logging.lock.error(message,e);
+      }
+      catch (Throwable e2)
+      {
+        System.out.println(message);
+        e.printStackTrace();
+      }
+      System.exit(-100);
+    }
+    catch (InterruptedIOException e)
+    {
+      throw new InterruptedException("Interrupted IO: "+e.getMessage());
+    }
+    catch (IOException e)
+    {
+      // Couldn't write for some reason!  Write to BOTH stdout and the log, since we
+      // can't be sure we will succeed at the latter.
+      String message = "Couldn't write to lock file; disk may be full.  Shutting down process; locks may be left dangling.  You must cleanup before restarting.";
+      try
+      {
+        Logging.lock.error(message,e);
+      }
+      catch (Throwable e2)
+      {
+        System.out.println(message);
+        e.printStackTrace();
+      }
+      System.exit(-100);
+      // Hard failure is called for
+      // throw new Error("Lock management system failure",e);
+    }
+  }
+
+
+}
+
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/lockmanager/FileLockObjectFactory.java b/framework/core/src/main/java/org/apache/manifoldcf/core/lockmanager/FileLockObjectFactory.java
new file mode 100644
index 0000000..879f07e
--- /dev/null
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/lockmanager/FileLockObjectFactory.java
@@ -0,0 +1,45 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.core.lockmanager;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.core.system.ManifoldCF;
+import org.apache.manifoldcf.core.system.Logging;
+import java.io.*;
+
+/** Base factory for file lock objects.
+*/
+public class FileLockObjectFactory extends LockObjectFactory
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  protected final File synchDir;
+  
+  public FileLockObjectFactory(File synchDir)
+  {
+    this.synchDir = synchDir;
+  }
+  
+  @Override
+  public LockObject newLockObject(LockPool lockPool, Object lockKey)
+  {
+    return new FileLockObject(lockPool, lockKey, synchDir);
+  }
+}
+
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/lockmanager/LocalLock.java b/framework/core/src/main/java/org/apache/manifoldcf/core/lockmanager/LocalLock.java
new file mode 100644
index 0000000..fe4ba65
--- /dev/null
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/lockmanager/LocalLock.java
@@ -0,0 +1,81 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.core.lockmanager;
+
+import java.util.*;
+import java.io.*;
+
+/** This class describes a local lock, which can have various nested levels
+* of depth.
+*/
+public class LocalLock
+{
+  private int readCount = 0;
+  private int writeCount = 0;
+  private int nonExWriteCount = 0;
+
+  public LocalLock()
+  {
+  }
+
+  public boolean hasWriteLock()
+  {
+    return (writeCount > 0);
+  }
+
+  public boolean hasReadLock()
+  {
+    return (readCount > 0);
+  }
+
+  public boolean hasNonExWriteLock()
+  {
+    return (nonExWriteCount > 0);
+  }
+
+  public void incrementReadLocks()
+  {
+    readCount++;
+  }
+
+  public void incrementNonExWriteLocks()
+  {
+    nonExWriteCount++;
+  }
+  
+  public void incrementWriteLocks()
+  {
+    writeCount++;
+  }
+
+  public void decrementReadLocks()
+  {
+    readCount--;
+  }
+  
+  public void decrementNonExWriteLocks()
+  {
+    nonExWriteCount--;
+  }
+
+  public void decrementWriteLocks()
+  {
+    writeCount--;
+  }
+}
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/lockmanager/LocalLockPool.java b/framework/core/src/main/java/org/apache/manifoldcf/core/lockmanager/LocalLockPool.java
new file mode 100644
index 0000000..3e8263d
--- /dev/null
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/lockmanager/LocalLockPool.java
@@ -0,0 +1,57 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.core.lockmanager;
+
+import java.util.*;
+import java.io.*;
+
+/** Pool of local locks, designed to gate access within a single thread.
+* Since it is within a single thread, synchronization is not necessary.
+*/
+public class LocalLockPool
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  protected final Map<String,LocalLock> localLocks = new HashMap<String,LocalLock>();
+  
+  public LocalLockPool()
+  {
+  }
+  
+  public LocalLock getLocalLock(String lockKey)
+  {
+    LocalLock ll = localLocks.get(lockKey);
+    if (ll == null)
+    {
+      ll = new LocalLock();
+      localLocks.put(lockKey,ll);
+    }
+    return ll;
+  }
+
+  public void releaseLocalLock(String lockKey)
+  {
+    localLocks.remove(lockKey);
+  }
+
+  public Set<String> keySet()
+  {
+    return localLocks.keySet();
+  }
+}
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/lockmanager/LockManager.java b/framework/core/src/main/java/org/apache/manifoldcf/core/lockmanager/LockManager.java
index e79a9f9..0ec7b7c 100644
--- a/framework/core/src/main/java/org/apache/manifoldcf/core/lockmanager/LockManager.java
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/lockmanager/LockManager.java
@@ -31,184 +31,215 @@
 {
   public static final String _rcsid = "@(#)$Id: LockManager.java 988245 2010-08-23 18:39:35Z kwright $";
 
-  /** Synchronization directory property - local to this implementation of ILockManager */
-  public static final String synchDirectoryProperty = "org.apache.manifoldcf.synchdirectory";
-
-  // These are the lock/section types, in order of escalation
-  protected final static int TYPE_READ = 1;
-  protected final static int TYPE_WRITENONEX = 2;
-  protected final static int TYPE_WRITE = 3;
-
-  // These are for locks (which cross JVM boundaries)
-  protected HashMap localLocks = new HashMap();
-  protected static LockPool myLocks = new LockPool();
-
-  // These are for critical sections (which do not cross JVM boundaries)
-  protected HashMap localSections = new HashMap();
-  protected static LockPool mySections = new LockPool();
-
-  // This is the directory used for cross-JVM synchronization, or null if off
-  protected File synchDirectory = null;
+  /** Backing lock manager */
+  protected final ILockManager lockManager;
 
   public LockManager()
     throws ManifoldCFException
   {
-    synchDirectory = ManifoldCF.getFileProperty(synchDirectoryProperty);
+    File synchDirectory = FileLockManager.getSynchDirectoryProperty();
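+    // If a synch directory is configured, use file-based (cross-process) locking;
+    // otherwise fall back to the base, in-process lock manager.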
     if (synchDirectory != null)
-    {
-      if (!synchDirectory.isDirectory())
-        throw new ManifoldCFException("Property "+synchDirectoryProperty+" must point to an existing, writeable directory!",ManifoldCFException.SETUP_ERROR);
-    }
+      lockManager = new FileLockManager(synchDirectory);
+    else
+      lockManager = new BaseLockManager();
   }
 
-  /** Calculate the name of a flag resource.
-  *@param flagName is the name of the flag.
-  *@return the name for the flag resource.
+  /** Register a service and begin service activity.
+  * This atomic operation creates a permanent registration entry for a service.
+  * If the permanent registration entry already exists, this method will not create it or
+  * treat it as an error.  This operation also enters the "active" zone for the service.  The "active" zone will remain in force until it is
+  * canceled, or until the process is interrupted.  Ideally, the corresponding endServiceActivity method will be
+  * called when the service shuts down.  Some ILockManager implementations require that this take place for
+  * proper management.
+  * If the transient registration already exists, it is treated as an error and an exception will be thrown.
+  * If registration succeeds, this method may call an appropriate IServiceCleanup method to clean up either the
+  *    current service, or all services on the cluster.
+  *@param serviceType is the type of service.
+  *@param serviceName is the name of the service to register.  If null is passed, a transient unique service name will be
+  *    created, and will be returned to the caller.
+  *@param cleanup is called to clean up either the current service, or all services of this type, if no other active service exists.
+  *    May be null.  Local service cleanup is never called if the serviceName argument is null.
+  *@return the actual service name.
   */
-  protected static String getFlagResourceName(String flagName)
+  @Override
+  public String registerServiceBeginServiceActivity(String serviceType, String serviceName,
+    IServiceCleanup cleanup)
+    throws ManifoldCFException
   {
-    return "flag-"+flagName;
+    return lockManager.registerServiceBeginServiceActivity(serviceType, serviceName, cleanup);
   }
   
-  /** Global flag information.  This is used only when all of ManifoldCF is run within one process. */
-  protected static HashMap globalFlags = new HashMap();
+  /** Register a service and begin service activity.
+  * This atomic operation creates a permanent registration entry for a service.
+  * If the permanent registration entry already exists, this method will not create it or
+  * treat it as an error.  This operation also enters the "active" zone for the service.  The "active" zone will remain in force until it is
+  * canceled, or until the process is interrupted.  Ideally, the corresponding endServiceActivity method will be
+  * called when the service shuts down.  Some ILockManager implementations require that this take place for
+  * proper management.
+  * If the transient registration already exists, it is treated as an error and an exception will be thrown.
+  * If registration succeeds, this method may call an appropriate IServiceCleanup method to clean up either the
+  *    current service, or all services on the cluster.
+  *@param serviceType is the type of service.
+  *@param serviceName is the name of the service to register.  If null is passed, a transient unique service name will be
+  *    created, and will be returned to the caller.
+  *@param initialData is the initial service data for this service.
+  *@param cleanup is called to clean up either the current service, or all services of this type, if no other active service exists.
+  *    May be null.  Local service cleanup is never called if the serviceName argument is null.
+  *@return the actual service name.
+  */
+  @Override
+  public String registerServiceBeginServiceActivity(String serviceType, String serviceName,
+    byte[] initialData, IServiceCleanup cleanup)
+    throws ManifoldCFException
+  {
+    return lockManager.registerServiceBeginServiceActivity(serviceType, serviceName, initialData, cleanup);
+  }
+
+  /** Set service data for a service.
+  *@param serviceType is the type of service.
+  *@param serviceName is the name of the service.
+  *@param serviceData is the data to update to (may be null).
+  * This updates the service's transient data (or deletes it).  If the service is not active, an exception is thrown.
+  */
+  @Override
+  public void updateServiceData(String serviceType, String serviceName, byte[] serviceData)
+    throws ManifoldCFException
+  {
+    lockManager.updateServiceData(serviceType, serviceName, serviceData);
+  }
+
+  /** Retrieve service data for a service.
+  *@param serviceType is the type of service.
+  *@param serviceName is the name of the service.
+  *@return the service's transient data.
+  */
+  @Override
+  public byte[] retrieveServiceData(String serviceType, String serviceName)
+    throws ManifoldCFException
+  {
+    return lockManager.retrieveServiceData(serviceType, serviceName);
+  }
+
+  /** Scan service data for a service type.  Only active service data will be considered.
+  *@param serviceType is the type of service.
+  *@param dataAcceptor is the object that will be notified of each item of data for each service name found.
+  */
+  @Override
+  public void scanServiceData(String serviceType, IServiceDataAcceptor dataAcceptor)
+    throws ManifoldCFException
+  {
+    lockManager.scanServiceData(serviceType, dataAcceptor);
+  }
+
+  /** Clean up any inactive services found.
+  * Calling this method will invoke cleanup of one inactive service at a time.
+  * If there are no inactive services around, then false will be returned.
+  * Note that this method will block whatever service it finds from starting up
+  * for the time the cleanup is proceeding.  At the end of the cleanup, if
+  * successful, the service will be atomically unregistered.
+  *@param serviceType is the service type.
+  *@param cleanup is the object to call to clean up an inactive service.
+  *@return true if there were no cleanup operations necessary.
+  */
+  @Override
+  public boolean cleanupInactiveService(String serviceType, IServiceCleanup cleanup)
+    throws ManifoldCFException
+  {
+    return lockManager.cleanupInactiveService(serviceType, cleanup);
+  }
   
+  /** Count all active services of a given type.
+  *@param serviceType is the service type.
+  *@return the count.
+  */
+  @Override
+  public int countActiveServices(String serviceType)
+    throws ManifoldCFException
+  {
+    return lockManager.countActiveServices(serviceType);
+  }
+
+  /** End service activity.
+  * This operation exits the "active" zone for the service.  This must take place using the same ILockManager
+  * object that was used to call registerServiceBeginServiceActivity() - which implies that it is the same thread.
+  *@param serviceType is the type of service.
+  *@param serviceName is the name of the service to exit.
+  */
+  @Override
+  public void endServiceActivity(String serviceType, String serviceName)
+    throws ManifoldCFException
+  {
+    lockManager.endServiceActivity(serviceType, serviceName);
+  }
+    
+  /** Check whether a service is active or not.
+  * This operation returns true if the specified service is considered active at the moment.  Once a service
+  * is not active anymore, it can only return to activity by calling beginServiceActivity() once more.
+  *@param serviceType is the type of service.
+  *@param serviceName is the name of the service to check on.
+  *@return true if the service is considered active.
+  */
+  @Override
+  public boolean checkServiceActive(String serviceType, String serviceName)
+    throws ManifoldCFException
+  {
+    return lockManager.checkServiceActive(serviceType, serviceName);
+  }
+
+  /** Get the current shared configuration.  This configuration is available in common among all nodes,
+  * and thus must not be used to look up configuration data that is specific to any one
+  * node.
+  *@return the globally-shared configuration information.
+  */
+  @Override
+  public ManifoldCFConfiguration getSharedConfiguration()
+    throws ManifoldCFException
+  {
+    return lockManager.getSharedConfiguration();
+  }
+
   /** Raise a flag.  Use this method to assert a condition, or send a global signal.  The flag will be reset when the
   * entire system is restarted.
   *@param flagName is the name of the flag to set.
   */
+  @Override
   public void setGlobalFlag(String flagName)
     throws ManifoldCFException
   {
-    if (synchDirectory == null)
-    {
-      // Keep local flag information in memory
-      synchronized (globalFlags)
-      {
-        globalFlags.put(flagName,new Boolean(true));
-      }
-    }
-    else
-    {
-      String resourceName = getFlagResourceName(flagName);
-      String path = makeFilePath(resourceName);
-      (new File(path)).mkdirs();
-      File f = new File(path,ManifoldCF.safeFileName(resourceName));
-      try
-      {
-        f.createNewFile();
-      }
-      catch (InterruptedIOException e)
-      {
-        throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
-      }
-      catch (IOException e)
-      {
-        throw new ManifoldCFException(e.getMessage(),e);
-      }
-    }
+    lockManager.setGlobalFlag(flagName);
   }
 
   /** Clear a flag.  Use this method to clear a condition, or retract a global signal.
   *@param flagName is the name of the flag to clear.
   */
+  @Override
   public void clearGlobalFlag(String flagName)
     throws ManifoldCFException
   {
-    if (synchDirectory == null)
-    {
-      // Keep flag information in memory
-      synchronized (globalFlags)
-      {
-        globalFlags.remove(flagName);
-      }
-    }
-    else
-    {
-      String resourceName = getFlagResourceName(flagName);
-      File f = new File(makeFilePath(resourceName),ManifoldCF.safeFileName(resourceName));
-      f.delete();
-    }
+    lockManager.clearGlobalFlag(flagName);
   }
   
   /** Check the condition of a specified flag.
   *@param flagName is the name of the flag to check.
   *@return true if the flag is set, false otherwise.
   */
+  @Override
   public boolean checkGlobalFlag(String flagName)
     throws ManifoldCFException
   {
-    if (synchDirectory == null)
-    {
-      // Keep flag information in memory
-      synchronized (globalFlags)
-      {
-        return globalFlags.get(flagName) != null;
-      }
-    }
-    else
-    {
-      String resourceName = getFlagResourceName(flagName);
-      File f = new File(makeFilePath(resourceName),ManifoldCF.safeFileName(resourceName));
-      return f.exists();
-    }
+    return lockManager.checkGlobalFlag(flagName);
   }
 
-  /** Global resource data.  Used only when ManifoldCF is run entirely out of one process. */
-  protected static HashMap globalData = new HashMap();
-  
   /** Read data from a shared data resource.  Use this method to read any existing data, or get a null back if there is no such resource.
   * Note well that this is not necessarily an atomic operation, and it must thus be protected by a lock.
   *@param resourceName is the global name of the resource.
   *@return a byte array containing the data, or null.
   */
+  @Override
   public byte[] readData(String resourceName)
     throws ManifoldCFException
   {
-    if (synchDirectory == null)
-    {
-      // Keep resource data local
-      synchronized (globalData)
-      {
-        return (byte[])globalData.get(resourceName);
-      }
-    }
-    else
-    {
-      File f = new File(makeFilePath(resourceName),ManifoldCF.safeFileName(resourceName));
-      try
-      {
-        InputStream is = new FileInputStream(f);
-        try
-        {
-          ByteArrayBuffer bab = new ByteArrayBuffer();
-          while (true)
-          {
-            int x = is.read();
-            if (x == -1)
-              break;
-            bab.add((byte)x);
-          }
-          return bab.toArray();
-        }
-        finally
-        {
-          is.close();
-        }
-      }
-      catch (FileNotFoundException e)
-      {
-        return null;
-      }
-      catch (InterruptedIOException e)
-      {
-        throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
-      }
-      catch (IOException e)
-      {
-        throw new ManifoldCFException("IO exception: "+e.getMessage(),e);
-      }
-    }
+    return lockManager.readData(resourceName);
   }
   
   /** Write data to a shared data resource.  Use this method to write a body of data into a shared resource.
@@ -216,73 +247,20 @@
   *@param resourceName is the global name of the resource.
   *@param data is the byte array containing the data.  Pass null if you want to delete the resource completely.
   */
+  @Override
   public void writeData(String resourceName, byte[] data)
     throws ManifoldCFException
   {
-    if (synchDirectory == null)
-    {
-      // Keep resource data local
-      synchronized (globalData)
-      {
-        if (data == null)
-          globalData.remove(resourceName);
-        else
-          globalData.put(resourceName,data);
-      }
-    }
-    else
-    {
-      try
-      {
-        String path = makeFilePath(resourceName);
-        // Make sure the directory exists
-        (new File(path)).mkdirs();
-        File f = new File(path,ManifoldCF.safeFileName(resourceName));
-        if (data == null)
-        {
-          f.delete();
-          return;
-        }
-        FileOutputStream os = new FileOutputStream(f);
-        try
-        {
-          os.write(data,0,data.length);
-        }
-        finally
-        {
-          os.close();
-        }
-      }
-      catch (InterruptedIOException e)
-      {
-        throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
-      }
-      catch (IOException e)
-      {
-        throw new ManifoldCFException("IO exception: "+e.getMessage(),e);
-      }
-    }
+    lockManager.writeData(resourceName,data);
   }
 
   /** Wait for a time before retrying a lock.
   */
+  @Override
   public void timedWait(int time)
     throws ManifoldCFException
   {
-
-    if (Logging.lock.isDebugEnabled())
-    {
-      Logging.lock.debug("Waiting for time "+Integer.toString(time));
-    }
-
-    try
-    {
-      ManifoldCF.sleep(time);
-    }
-    catch (InterruptedException e)
-    {
-      throw new ManifoldCFException("Interrupted",e,ManifoldCFException.INTERRUPTED);
-    }
+    lockManager.timedWait(time);
   }
 
   /** Enter a non-exclusive write-locked area (blocking out all readers, but letting in other "writers").
@@ -291,948 +269,106 @@
   * to an individual item that might affect the query, but where multiple modifications do not individually
   * interfere with one another (use of another, standard, write lock per item can guarantee this).
   */
+  @Override
   public void enterNonExWriteLock(String lockKey)
     throws ManifoldCFException
   {
-
-    if (Logging.lock.isDebugEnabled())
-    {
-      Logging.lock.debug("Entering non-ex write lock '"+lockKey+"'");
-    }
-
-    LocalLock ll = getLocalLock(lockKey);
-
-
-    // See if we already own a write lock for the object
-    // If we do, there is no reason to change the status of the global lock we own.
-    if (ll.hasNonExWriteLock() || ll.hasWriteLock())
-    {
-      ll.incrementNonExWriteLocks();
-      Logging.lock.debug(" Successfully obtained lock!");
-      return;
-    }
-
-    // Check for illegalities
-    if (ll.hasReadLock())
-    {
-      throw new ManifoldCFException("Illegal lock sequence: NonExWrite lock can't be within read lock",ManifoldCFException.GENERAL_ERROR);
-    }
-
-    // We don't own a local non-ex write lock.  Get one.  The global lock will need
-    // to know if we already have a a read lock.
-    while (true)
-    {
-      LockObject lo = myLocks.getObject(lockKey,synchDirectory);
-      try
-      {
-        lo.enterNonExWriteLock();
-        break;
-      }
-      catch (InterruptedException e)
-      {
-        throw new ManifoldCFException("Interrupted",e,ManifoldCFException.INTERRUPTED);
-      }
-      catch (ExpiredObjectException e)
-      {
-        // Try again to get a valid object
-      }
-    }
-    ll.incrementNonExWriteLocks();
-    Logging.lock.debug(" Successfully obtained lock!");
+    lockManager.enterNonExWriteLock(lockKey);
   }
 
+  @Override
   public void enterNonExWriteLockNoWait(String lockKey)
     throws ManifoldCFException, LockException
   {
-
-    if (Logging.lock.isDebugEnabled())
-    {
-      Logging.lock.debug("Entering non-ex write lock no wait '"+lockKey+"'");
-    }
-
-    LocalLock ll = getLocalLock(lockKey);
-
-
-    // See if we already own a write lock for the object
-    // If we do, there is no reason to change the status of the global lock we own.
-    if (ll.hasNonExWriteLock() || ll.hasWriteLock())
-    {
-      ll.incrementNonExWriteLocks();
-      Logging.lock.debug(" Successfully obtained lock!");
-      return;
-    }
-
-    // Check for illegalities
-    if (ll.hasReadLock())
-    {
-      throw new ManifoldCFException("Illegal lock sequence: NonExWrite lock can't be within read lock",ManifoldCFException.GENERAL_ERROR);
-    }
-
-    // We don't own a local non-ex write lock.  Get one.  The global lock will need
-    // to know if we already have a a read lock.
-    while (true)
-    {
-      LockObject lo = myLocks.getObject(lockKey,synchDirectory);
-      try
-      {
-        synchronized (lo)
-        {
-          lo.enterNonExWriteLockNoWait();
-          break;
-        }
-      }
-      catch (LocalLockException e)
-      {
-
-        if (Logging.lock.isDebugEnabled())
-        {
-          Logging.lock.debug(" Could not non-ex write lock '"+lockKey+"', lock exception");
-        }
-
-        // Throw LockException instead
-        throw new LockException(e.getMessage());
-      }
-      catch (InterruptedException e)
-      {
-        throw new ManifoldCFException("Interrupted",e,ManifoldCFException.INTERRUPTED);
-      }
-      catch (ExpiredObjectException e)
-      {
-        // Try again to get a valid object
-      }
-    }
-    ll.incrementNonExWriteLocks();
-    Logging.lock.debug(" Successfully obtained lock!");
+    lockManager.enterNonExWriteLockNoWait(lockKey);
   }
 
   /** Leave a non-exclusive write lock.
   */
+  @Override
   public void leaveNonExWriteLock(String lockKey)
     throws ManifoldCFException
   {
-
-    if (Logging.lock.isDebugEnabled())
-    {
-      Logging.lock.debug("Leaving non-ex write lock '"+lockKey+"'");
-    }
-
-    LocalLock ll = getLocalLock(lockKey);
-
-    ll.decrementNonExWriteLocks();
-    // See if we no longer have a write lock for the object.
-    // If we retain the stronger exclusive lock, we still do not need to
-    // change the status of the global lock.
-    if (!(ll.hasNonExWriteLock() || ll.hasWriteLock()))
-    {
-      while (true)
-      {
-        LockObject lo = myLocks.getObject(lockKey,synchDirectory);
-        try
-        {
-          lo.leaveNonExWriteLock();
-          break;
-        }
-        catch (InterruptedException e)
-        {
-          // try one more time
-          try
-          {
-            lo.leaveNonExWriteLock();
-            throw new ManifoldCFException("Interrupted",e,ManifoldCFException.INTERRUPTED);
-          }
-          catch (InterruptedException e2)
-          {
-            ll.incrementNonExWriteLocks();
-            throw new ManifoldCFException("Interrupted",e2,ManifoldCFException.INTERRUPTED);
-          }
-          catch (ExpiredObjectException e2)
-          {
-            ll.incrementNonExWriteLocks();
-            throw new ManifoldCFException("Interrupted",e,ManifoldCFException.INTERRUPTED);
-          }
-        }
-        catch (ExpiredObjectException e)
-        {
-          // Try again to get a valid object
-        }
-      }
-
-      releaseLocalLock(lockKey);
-    }
+    lockManager.leaveNonExWriteLock(lockKey);
   }
 
   /** Enter a write locked area (i.e., block out both readers and other writers)
   * NOTE: Can't enter until all readers have left.
   */
+  @Override
   public void enterWriteLock(String lockKey)
     throws ManifoldCFException
   {
-
-    if (Logging.lock.isDebugEnabled())
-    {
-      Logging.lock.debug("Entering write lock '"+lockKey+"'");
-    }
-
-    LocalLock ll = getLocalLock(lockKey);
-
-
-    // See if we already own the write lock for the object
-    if (ll.hasWriteLock())
-    {
-      ll.incrementWriteLocks();
-      Logging.lock.debug(" Successfully obtained lock!");
-      return;
-    }
-
-    // Check for illegalities
-    if (ll.hasReadLock() || ll.hasNonExWriteLock())
-    {
-      throw new ManifoldCFException("Illegal lock sequence: Write lock can't be within read lock or non-ex write lock",ManifoldCFException.GENERAL_ERROR);
-    }
-
-    // We don't own a local write lock.  Get one.  The global lock will need
-    // to know if we already have a non-exclusive lock or a read lock, which we don't because
-    // it's illegal.
-    while (true)
-    {
-      LockObject lo = myLocks.getObject(lockKey,synchDirectory);
-      try
-      {
-        lo.enterWriteLock();
-        break;
-      }
-      catch (InterruptedException e)
-      {
-        throw new ManifoldCFException("Interrupted",e,ManifoldCFException.INTERRUPTED);
-      }
-      catch (ExpiredObjectException e)
-      {
-        // Try again
-      }
-    }
-    ll.incrementWriteLocks();
-    Logging.lock.debug(" Successfully obtained lock!");
+    lockManager.enterWriteLock(lockKey);
   }
 
+  @Override
   public void enterWriteLockNoWait(String lockKey)
     throws ManifoldCFException, LockException
   {
-
-    if (Logging.lock.isDebugEnabled())
-    {
-      Logging.lock.debug("Entering write lock no wait '"+lockKey+"'");
-    }
-
-    LocalLock ll = getLocalLock(lockKey);
-
-
-    // See if we already own the write lock for the object
-    if (ll.hasWriteLock())
-    {
-      ll.incrementWriteLocks();
-      Logging.lock.debug(" Successfully obtained lock!");
-      return;
-    }
-
-    // Check for illegalities
-    if (ll.hasReadLock() || ll.hasNonExWriteLock())
-    {
-      throw new ManifoldCFException("Illegal lock sequence: Write lock can't be within read lock or non-ex write lock",ManifoldCFException.GENERAL_ERROR);
-    }
-
-    // We don't own a local write lock.  Get one.  The global lock will need
-    // to know if we already have a non-exclusive lock or a read lock, which we don't because
-    // it's illegal.
-    while (true)
-    {
-      LockObject lo = myLocks.getObject(lockKey,synchDirectory);
-      try
-      {
-        synchronized (lo)
-        {
-          lo.enterWriteLockNoWait();
-          break;
-        }
-      }
-      catch (LocalLockException e)
-      {
-
-        if (Logging.lock.isDebugEnabled())
-        {
-          Logging.lock.debug(" Could not write lock '"+lockKey+"', lock exception");
-        }
-
-        throw new LockException(e.getMessage());
-      }
-      catch (InterruptedException e)
-      {
-        throw new ManifoldCFException("Interrupted",e,ManifoldCFException.INTERRUPTED);
-      }
-      catch (ExpiredObjectException e)
-      {
-        // Try again
-      }
-    }
-
-    ll.incrementWriteLocks();
-    Logging.lock.debug(" Successfully obtained lock!");
+    lockManager.enterWriteLockNoWait(lockKey);
   }
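
The *NoWait variants above report contention by throwing LockException instead of blocking.  A short usage sketch of that pattern follows; ILockManager is assumed to be the interface these @Override methods implement, and "resource-key" plus the helper name are illustrative only:

  // Illustrative only: fail fast if the write lock is already held elsewhere.
  boolean tryExclusiveWork(ILockManager lockManager)
    throws ManifoldCFException
  {
    try
    {
      lockManager.enterWriteLockNoWait("resource-key");
    }
    catch (LockException e)
    {
      // Another thread or process holds the lock; the caller can retry later.
      return false;
    }
    try
    {
      // ... exclusive work goes here ...
      return true;
    }
    finally
    {
      lockManager.leaveWriteLock("resource-key");
    }
  }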
 
+  @Override
   public void leaveWriteLock(String lockKey)
     throws ManifoldCFException
   {
-
-    if (Logging.lock.isDebugEnabled())
-    {
-      Logging.lock.debug("Leaving write lock '"+lockKey+"'");
-    }
-
-    LocalLock ll = getLocalLock(lockKey);
-
-    ll.decrementWriteLocks();
-    if (!ll.hasWriteLock())
-    {
-      while (true)
-      {
-        LockObject lo = myLocks.getObject(lockKey,synchDirectory);
-        try
-        {
-          lo.leaveWriteLock();
-          break;
-        }
-        catch (InterruptedException e)
-        {
-          // try one more time
-          try
-          {
-            lo.leaveWriteLock();
-            throw new ManifoldCFException("Interrupted",e,ManifoldCFException.INTERRUPTED);
-          }
-          catch (InterruptedException e2)
-          {
-            ll.incrementWriteLocks();
-            throw new ManifoldCFException("Interrupted",e2,ManifoldCFException.INTERRUPTED);
-          }
-          catch (ExpiredObjectException e2)
-          {
-            ll.incrementWriteLocks();
-            throw new ManifoldCFException("Interrupted",e,ManifoldCFException.INTERRUPTED);
-          }
-        }
-        catch (ExpiredObjectException e)
-        {
-          // Try again
-        }
-      }
-
-      releaseLocalLock(lockKey);
-    }
+    lockManager.leaveWriteLock(lockKey);
   }
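
Every method in this class now reduces to a one-line forward to the wrapped lockManager member, as the hunks above and below show.  A self-contained sketch of that delegation shape, using hypothetical names (SimpleLockManager, DelegatingLockManager) rather than the actual ManifoldCF interfaces:

  // Hypothetical names; only the delegation shape mirrors the patch.
  interface SimpleLockManager
  {
    void enterWriteLock(String lockKey) throws InterruptedException;
    void leaveWriteLock(String lockKey);
  }

  class DelegatingLockManager implements SimpleLockManager
  {
    private final SimpleLockManager delegate;

    DelegatingLockManager(SimpleLockManager delegate)
    {
      this.delegate = delegate;
    }

    @Override
    public void enterWriteLock(String lockKey) throws InterruptedException
    {
      // All real locking work happens in the wrapped implementation.
      delegate.enterWriteLock(lockKey);
    }

    @Override
    public void leaveWriteLock(String lockKey)
    {
      delegate.leaveWriteLock(lockKey);
    }
  }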
 
   /** Enter a read-only locked area (i.e., block ONLY if there's a writer)
   */
+  @Override
   public void enterReadLock(String lockKey)
     throws ManifoldCFException
   {
-
-    if (Logging.lock.isDebugEnabled())
-    {
-      Logging.lock.debug("Entering read lock '"+lockKey+"'");
-    }
-
-
-    LocalLock ll = getLocalLock(lockKey);
-
-    // See if we already own the read lock for the object.
-    // Write locks or non-ex write locks count as well (they're stronger).
-    if (ll.hasReadLock() || ll.hasNonExWriteLock() || ll.hasWriteLock())
-    {
-      ll.incrementReadLocks();
-      Logging.lock.debug(" Successfully obtained lock!");
-      return;
-    }
-
-    // We don't own a local read lock.  Get one.
-    while (true)
-    {
-      LockObject lo = myLocks.getObject(lockKey,synchDirectory);
-      try
-      {
-        lo.enterReadLock();
-        break;
-      }
-      catch (InterruptedException e)
-      {
-        throw new ManifoldCFException("Interrupted",e,ManifoldCFException.INTERRUPTED);
-      }
-      catch (ExpiredObjectException e)
-      {
-        // Try again
-      }
-    }
-    ll.incrementReadLocks();
-    Logging.lock.debug(" Successfully obtained lock!");
+    lockManager.enterReadLock(lockKey);
   }
 
+  @Override
   public void enterReadLockNoWait(String lockKey)
     throws ManifoldCFException, LockException
   {
-
-    if (Logging.lock.isDebugEnabled())
-    {
-      Logging.lock.debug("Entering read lock no wait '"+lockKey+"'");
-    }
-
-    LocalLock ll = getLocalLock(lockKey);
-
-    // See if we already own the read lock for the object.
-    // Write locks or non-ex write locks count as well (they're stronger).
-    if (ll.hasReadLock() || ll.hasNonExWriteLock() || ll.hasWriteLock())
-    {
-      ll.incrementReadLocks();
-      Logging.lock.debug(" Successfully obtained lock!");
-      return;
-    }
-
-    // We don't own a local read lock.  Get one.
-    while (true)
-    {
-      LockObject lo = myLocks.getObject(lockKey,synchDirectory);
-      try
-      {
-        synchronized (lo)
-        {
-          lo.enterReadLockNoWait();
-          break;
-        }
-      }
-      catch (LocalLockException e)
-      {
-
-        if (Logging.lock.isDebugEnabled())
-        {
-          Logging.lock.debug(" Could not read lock '"+lockKey+"', lock exception");
-        }
-
-        throw new LockException(e.getMessage());
-      }
-      catch (InterruptedException e)
-      {
-        throw new ManifoldCFException("Interrupted",e,ManifoldCFException.INTERRUPTED);
-      }
-      catch (ExpiredObjectException e)
-      {
-        // Try again
-      }
-    }
-
-    ll.incrementReadLocks();
-    Logging.lock.debug(" Successfully obtained lock!");
+    lockManager.enterReadLockNoWait(lockKey);
   }
 
+  @Override
   public void leaveReadLock(String lockKey)
     throws ManifoldCFException
   {
-
-    if (Logging.lock.isDebugEnabled())
-    {
-      Logging.lock.debug("Leaving read lock '"+lockKey+"'");
-    }
-
-    LocalLock ll = getLocalLock(lockKey);
-
-    ll.decrementReadLocks();
-    if (!(ll.hasReadLock() || ll.hasNonExWriteLock() || ll.hasWriteLock()))
-    {
-      while (true)
-      {
-        LockObject lo = myLocks.getObject(lockKey,synchDirectory);
-        try
-        {
-          lo.leaveReadLock();
-          break;
-        }
-        catch (InterruptedException e)
-        {
-          // Try one more time
-          try
-          {
-            lo.leaveReadLock();
-            throw new ManifoldCFException("Interrupted",e,ManifoldCFException.INTERRUPTED);
-          }
-          catch (InterruptedException e2)
-          {
-            ll.incrementReadLocks();
-            throw new ManifoldCFException("Interrupted",e2,ManifoldCFException.INTERRUPTED);
-          }
-          catch (ExpiredObjectException e2)
-          {
-            ll.incrementReadLocks();
-            throw new ManifoldCFException("Interrupted",e,ManifoldCFException.INTERRUPTED);
-          }
-        }
-        catch (ExpiredObjectException e)
-        {
-          // Try again
-        }
-      }
-      releaseLocalLock(lockKey);
-    }
+    lockManager.leaveReadLock(lockKey);
   }
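
The removed bodies above enforce a strict ordering: a read lock may be taken while a write or non-ex write lock on the same key is already held, but the reverse direction is rejected as an "Illegal lock sequence".  An illustrative call sequence, assuming ILockManager is the interface these methods implement and "resource-key" is an example name:

  // Legal nesting: the stronger lock first, the weaker lock inside it.
  void legalNesting(ILockManager lockManager)
    throws ManifoldCFException
  {
    lockManager.enterWriteLock("resource-key");
    try
    {
      lockManager.enterReadLock("resource-key");   // allowed: the write lock already covers reads
      try
      {
        // ... protected work ...
      }
      finally
      {
        lockManager.leaveReadLock("resource-key");
      }
    }
    finally
    {
      lockManager.leaveWriteLock("resource-key");
    }
    // The reverse order -- enterWriteLock() while only a read lock is held on the same
    // key -- fails with an "Illegal lock sequence" ManifoldCFException.
  }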
 
+  @Override
   public void clearLocks()
     throws ManifoldCFException
   {
-
-    if (Logging.lock.isDebugEnabled())
-    {
-      Logging.lock.debug("Clearing all locks");
-    }
-
-
-    Iterator e = localLocks.keySet().iterator();
-    while (e.hasNext())
-    {
-      String keyValue = (String)e.next();
-      LocalLock ll = (LocalLock)localLocks.get(keyValue);
-      while (ll.hasWriteLock())
-        leaveWriteLock(keyValue);
-      while (ll.hasNonExWriteLock())
-        leaveNonExWriteLock(keyValue);
-      while (ll.hasReadLock())
-        leaveReadLock(keyValue);
-    }
+    lockManager.clearLocks();
   }
 
   /** Enter multiple locks
   */
+  @Override
   public void enterLocks(String[] readLocks, String[] nonExWriteLocks, String[] writeLocks)
     throws ManifoldCFException
   {
-
-    if (Logging.lock.isDebugEnabled())
-    {
-      Logging.lock.debug("Entering multiple locks:");
-      int i;
-      if (readLocks != null)
-      {
-        i = 0;
-        while (i < readLocks.length)
-        {
-          Logging.lock.debug(" Read lock '"+readLocks[i++]+"'");
-        }
-      }
-      if (nonExWriteLocks != null)
-      {
-        i = 0;
-        while (i < nonExWriteLocks.length)
-        {
-          Logging.lock.debug(" Non-ex write lock '"+nonExWriteLocks[i++]+"'");
-        }
-      }
-      if (writeLocks != null)
-      {
-        i = 0;
-        while (i < writeLocks.length)
-        {
-          Logging.lock.debug(" Write lock '"+writeLocks[i++]+"'");
-        }
-      }
-    }
-
-
-    // Sort the locks.  This improves the chances of making it through the locking process without
-    // contention!
-    LockDescription lds[] = getSortedUniqueLocks(readLocks,nonExWriteLocks,writeLocks);
-    int locksProcessed = 0;
-    try
-    {
-      while (locksProcessed < lds.length)
-      {
-        LockDescription ld = lds[locksProcessed];
-        int lockType = ld.getType();
-        String lockKey = ld.getKey();
-        LocalLock ll;
-        switch (lockType)
-        {
-        case TYPE_WRITE:
-          ll = getLocalLock(lockKey);
-          // Check for illegalities
-          if ((ll.hasReadLock() || ll.hasNonExWriteLock()) && !ll.hasWriteLock())
-          {
-            throw new ManifoldCFException("Illegal lock sequence: Write lock can't be within read lock or non-ex write lock",ManifoldCFException.GENERAL_ERROR);
-          }
-
-          // See if we already own the write lock for the object
-          if (!ll.hasWriteLock())
-          {
-            // We don't own a local write lock.  Get one.
-            while (true)
-            {
-              LockObject lo = myLocks.getObject(lockKey,synchDirectory);
-              try
-              {
-                lo.enterWriteLock();
-                break;
-              }
-              catch (ExpiredObjectException e)
-              {
-                // Try again
-              }
-            }
-          }
-          ll.incrementWriteLocks();
-          break;
-        case TYPE_WRITENONEX:
-          ll = getLocalLock(lockKey);
-          // Check for illegalities
-          if (ll.hasReadLock() && !(ll.hasNonExWriteLock() || ll.hasWriteLock()))
-          {
-            throw new ManifoldCFException("Illegal lock sequence: NonExWrite lock can't be within read lock",ManifoldCFException.GENERAL_ERROR);
-          }
-
-          // See if we already own a write lock (exclusive or non-ex) for the object
-          if (!(ll.hasNonExWriteLock() || ll.hasWriteLock()))
-          {
-            // We don't own a local write lock.  Get one.
-            while (true)
-            {
-              LockObject lo = myLocks.getObject(lockKey,synchDirectory);
-              try
-              {
-                lo.enterNonExWriteLock();
-                break;
-              }
-              catch (ExpiredObjectException e)
-              {
-                // Try again
-              }
-            }
-          }
-          ll.incrementNonExWriteLocks();
-          break;
-        case TYPE_READ:
-          ll = getLocalLock(lockKey);
-          if (!(ll.hasReadLock() || ll.hasNonExWriteLock() || ll.hasWriteLock()))
-          {
-            // We don't own a local read lock.  Get one.
-            while (true)
-            {
-              LockObject lo = myLocks.getObject(lockKey,synchDirectory);
-              try
-              {
-                lo.enterReadLock();
-                break;
-              }
-              catch (ExpiredObjectException e)
-              {
-                // Try again
-              }
-            }
-          }
-          ll.incrementReadLocks();
-          break;
-        }
-        locksProcessed++;
-      }
-      // Got all; we are done!
-      Logging.lock.debug(" Successfully obtained multiple locks!");
-      return;
-    }
-    catch (Throwable ex)
-    {
-      // No matter what, undo the locks we've taken
-      ManifoldCFException ae = null;
-      int errno = 0;
-
-      while (--locksProcessed >= 0)
-      {
-        LockDescription ld = lds[locksProcessed];
-        int lockType = ld.getType();
-        String lockKey = ld.getKey();
-        try
-        {
-          switch (lockType)
-          {
-          case TYPE_READ:
-            leaveReadLock(lockKey);
-            break;
-          case TYPE_WRITENONEX:
-            leaveNonExWriteLock(lockKey);
-            break;
-          case TYPE_WRITE:
-            leaveWriteLock(lockKey);
-            break;
-          }
-        }
-        catch (ManifoldCFException e)
-        {
-          ae = e;
-        }
-      }
-
-      if (ae != null)
-      {
-        throw ae;
-      }
-      if (ex instanceof ManifoldCFException)
-      {
-        throw (ManifoldCFException)ex;
-      }
-      if (ex instanceof InterruptedException)
-      {
-        // It's InterruptedException
-        throw new ManifoldCFException("Interrupted",ex,ManifoldCFException.INTERRUPTED);
-      }
-      if (!(ex instanceof Error))
-      {
-        throw new Error("Unexpected exception",ex);
-      }
-      throw (Error)ex;
-    }
+    lockManager.enterLocks(readLocks, nonExWriteLocks, writeLocks);
   }
 
+  @Override
   public void enterLocksNoWait(String[] readLocks, String[] nonExWriteLocks, String[] writeLocks)
     throws ManifoldCFException, LockException
   {
-
-    if (Logging.lock.isDebugEnabled())
-    {
-      Logging.lock.debug("Entering multiple locks no wait:");
-      int i;
-      if (readLocks != null)
-      {
-        i = 0;
-        while (i < readLocks.length)
-        {
-          Logging.lock.debug(" Read lock '"+readLocks[i++]+"'");
-        }
-      }
-      if (nonExWriteLocks != null)
-      {
-        i = 0;
-        while (i < nonExWriteLocks.length)
-        {
-          Logging.lock.debug(" Non-ex write lock '"+nonExWriteLocks[i++]+"'");
-        }
-      }
-      if (writeLocks != null)
-      {
-        i = 0;
-        while (i < writeLocks.length)
-        {
-          Logging.lock.debug(" Write lock '"+writeLocks[i++]+"'");
-        }
-      }
-    }
-
-
-    // Sort the locks.  This improves the chances of making it through the locking process without
-    // contention!
-    LockDescription lds[] = getSortedUniqueLocks(readLocks,nonExWriteLocks,writeLocks);
-    int locksProcessed = 0;
-    try
-    {
-      while (locksProcessed < lds.length)
-      {
-        LockDescription ld = lds[locksProcessed];
-        int lockType = ld.getType();
-        String lockKey = ld.getKey();
-        LocalLock ll;
-        switch (lockType)
-        {
-        case TYPE_WRITE:
-          ll = getLocalLock(lockKey);
-          // Check for illegalities
-          if ((ll.hasReadLock() || ll.hasNonExWriteLock()) && !ll.hasWriteLock())
-          {
-            throw new ManifoldCFException("Illegal lock sequence: Write lock can't be within read lock or non-ex write lock",ManifoldCFException.GENERAL_ERROR);
-          }
-
-          // See if we already own the write lock for the object
-          if (!ll.hasWriteLock())
-          {
-            // We don't own a local write lock.  Get one.
-            while (true)
-            {
-              LockObject lo = myLocks.getObject(lockKey,synchDirectory);
-              synchronized (lo)
-              {
-                try
-                {
-                  lo.enterWriteLockNoWait();
-                  break;
-                }
-                catch (ExpiredObjectException e)
-                {
-                  // Try again
-                }
-              }
-            }
-          }
-          ll.incrementWriteLocks();
-          break;
-        case TYPE_WRITENONEX:
-          ll = getLocalLock(lockKey);
-          // Check for illegalities
-          if (ll.hasReadLock() && !(ll.hasNonExWriteLock() || ll.hasWriteLock()))
-          {
-            throw new ManifoldCFException("Illegal lock sequence: NonExWrite lock can't be within read lock",ManifoldCFException.GENERAL_ERROR);
-          }
-
-          // See if we already own a write lock (exclusive or non-ex) for the object
-          if (!(ll.hasNonExWriteLock() || ll.hasWriteLock()))
-          {
-            // We don't own a local write lock.  Get one.
-            while (true)
-            {
-              LockObject lo = myLocks.getObject(lockKey,synchDirectory);
-              synchronized (lo)
-              {
-                try
-                {
-                  lo.enterNonExWriteLockNoWait();
-                  break;
-                }
-                catch (ExpiredObjectException e)
-                {
-                  // Try again
-                }
-              }
-            }
-          }
-          ll.incrementNonExWriteLocks();
-          break;
-        case TYPE_READ:
-          ll = getLocalLock(lockKey);
-          if (!(ll.hasReadLock() || ll.hasNonExWriteLock() || ll.hasWriteLock()))
-          {
-            // We don't own a local read lock.  Get one.
-            while (true)
-            {
-              LockObject lo = myLocks.getObject(lockKey,synchDirectory);
-              synchronized (lo)
-              {
-                try
-                {
-                  lo.enterReadLockNoWait();
-                  break;
-                }
-                catch (ExpiredObjectException e)
-                {
-                  // Try again
-                }
-              }
-            }
-          }
-          ll.incrementReadLocks();
-          break;
-        }
-        locksProcessed++;
-      }
-      // Got all; we are done!
-      Logging.lock.debug(" Successfully obtained multiple locks!");
-      return;
-    }
-    catch (Throwable ex)
-    {
-      // No matter what, undo the locks we've taken
-      ManifoldCFException ae = null;
-      int errno = 0;
-
-      while (--locksProcessed >= 0)
-      {
-        LockDescription ld = lds[locksProcessed];
-        int lockType = ld.getType();
-        String lockKey = ld.getKey();
-        try
-        {
-          switch (lockType)
-          {
-          case TYPE_READ:
-            leaveReadLock(lockKey);
-            break;
-          case TYPE_WRITENONEX:
-            leaveNonExWriteLock(lockKey);
-            break;
-          case TYPE_WRITE:
-            leaveWriteLock(lockKey);
-            break;
-          }
-        }
-        catch (ManifoldCFException e)
-        {
-          ae = e;
-        }
-      }
-
-      if (ae != null)
-      {
-        throw ae;
-      }
-      if (ex instanceof ManifoldCFException)
-      {
-        throw (ManifoldCFException)ex;
-      }
-      if (ex instanceof LockException || ex instanceof LocalLockException)
-      {
-        Logging.lock.debug(" Couldn't get lock; throwing LockException");
-        // It's either LockException or LocalLockException
-        throw new LockException(ex.getMessage());
-      }
-      if (ex instanceof InterruptedException)
-      {
-        throw new ManifoldCFException("Interrupted",ex,ManifoldCFException.INTERRUPTED);
-      }
-      if (!(ex instanceof Error))
-      {
-        throw new Error("Unexpected exception",ex);
-      }
-      throw (Error)ex;
-
-    }
-
+    lockManager.enterLocksNoWait(readLocks, nonExWriteLocks, writeLocks);
   }
 
   /** Leave multiple locks
   */
+  @Override
   public void leaveLocks(String[] readLocks, String[] writeNonExLocks, String[] writeLocks)
     throws ManifoldCFException
   {
-    LockDescription[] lds = getSortedUniqueLocks(readLocks,writeNonExLocks,writeLocks);
-    // Free them all... one at a time is fine
-    ManifoldCFException ae = null;
-    int i = lds.length;
-    while (--i >= 0)
-    {
-      LockDescription ld = lds[i];
-      String lockKey = ld.getKey();
-      int lockType = ld.getType();
-      try
-      {
-        switch (lockType)
-        {
-        case TYPE_READ:
-          leaveReadLock(lockKey);
-          break;
-        case TYPE_WRITENONEX:
-          leaveNonExWriteLock(lockKey);
-          break;
-        case TYPE_WRITE:
-          leaveWriteLock(lockKey);
-          break;
-        }
-      }
-      catch (ManifoldCFException e)
-      {
-        ae = e;
-      }
-    }
-
-    if (ae != null)
-    {
-      throw ae;
-    }
+    lockManager.leaveLocks(readLocks, writeNonExLocks, writeLocks);
   }
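
The removed enterLocks()/enterLocksNoWait() bodies above first sort the requested keys into one canonical order (via getSortedUniqueLocks) and roll back everything already acquired if a later acquisition fails.  A self-contained sketch of just that ordering-plus-rollback idea, with hypothetical names and plain ReentrantLock standing in for the real cross-process locks:

  import java.util.Arrays;
  import java.util.TreeMap;
  import java.util.concurrent.locks.ReentrantLock;

  // Hypothetical class; only the sort-then-acquire and rollback logic mirrors the patch.
  class OrderedMultiLock
  {
    private final TreeMap<String,ReentrantLock> locks = new TreeMap<String,ReentrantLock>();

    private synchronized ReentrantLock lockFor(String key)
    {
      ReentrantLock l = locks.get(key);
      if (l == null)
      {
        l = new ReentrantLock();
        locks.put(key,l);
      }
      return l;
    }

    /** Acquire a set of locks in sorted key order; release everything on failure. */
    public void enterAll(String[] keys)
    {
      String[] sorted = keys.clone();
      Arrays.sort(sorted);          // one global order avoids lock-ordering deadlocks
      int acquired = 0;
      try
      {
        while (acquired < sorted.length)
        {
          lockFor(sorted[acquired]).lock();
          acquired++;
        }
      }
      catch (RuntimeException e)
      {
        // Mirror the rollback loop in the original code: undo whatever we already hold.
        while (--acquired >= 0)
          lockFor(sorted[acquired]).unlock();
        throw e;
      }
    }

    /** Release in reverse order, matching enterAll(). */
    public void leaveAll(String[] keys)
    {
      String[] sorted = keys.clone();
      Arrays.sort(sorted);
      for (int i = sorted.length - 1; i >= 0; i--)
        lockFor(sorted[i]).unlock();
    }
  }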
 
   /** Enter a named, read critical section (NOT a lock).  Critical sections never cross JVM boundaries.
@@ -1240,38 +376,11 @@
   *@param sectionKey is the name of the section to enter.  Only one thread can be in any given named
   * section at a time.
   */
+  @Override
   public void enterReadCriticalSection(String sectionKey)
     throws ManifoldCFException
   {
-    LocalLock ll = getLocalSection(sectionKey);
-
-    // See if we already own the read lock for the object.
-    // Write locks or non-ex write locks count as well (they're stronger).
-    if (ll.hasReadLock() || ll.hasNonExWriteLock() || ll.hasWriteLock())
-    {
-      ll.incrementReadLocks();
-      return;
-    }
-
-    // We don't own a local read lock.  Get one.
-    while (true)
-    {
-      LockObject lo = mySections.getObject(sectionKey,null);
-      try
-      {
-        lo.enterReadLock();
-        break;
-      }
-      catch (InterruptedException e)
-      {
-        throw new ManifoldCFException("Interrupted",e,ManifoldCFException.INTERRUPTED);
-      }
-      catch (ExpiredObjectException e)
-      {
-        // Try again
-      }
-    }
-    ll.incrementReadLocks();
+    lockManager.enterReadCriticalSection(sectionKey);
   }
 
   /** Leave a named, read critical section (NOT a lock).  Critical sections never cross JVM boundaries.
@@ -1279,49 +388,11 @@
   *@param sectionKey is the name of the section to leave.  Only one thread can be in any given named
   * section at a time.
   */
+  @Override
   public void leaveReadCriticalSection(String sectionKey)
     throws ManifoldCFException
   {
-    LocalLock ll = getLocalSection(sectionKey);
-
-    ll.decrementReadLocks();
-    if (!(ll.hasReadLock() || ll.hasNonExWriteLock() || ll.hasWriteLock()))
-    {
-      while (true)
-      {
-        LockObject lo = mySections.getObject(sectionKey,null);
-        try
-        {
-          lo.leaveReadLock();
-          break;
-        }
-        catch (InterruptedException e)
-        {
-          // Try one more time
-          try
-          {
-            lo.leaveReadLock();
-            throw new ManifoldCFException("Interrupted",e,ManifoldCFException.INTERRUPTED);
-          }
-          catch (InterruptedException e2)
-          {
-            ll.incrementReadLocks();
-            throw new ManifoldCFException("Interrupted",e2,ManifoldCFException.INTERRUPTED);
-          }
-          catch (ExpiredObjectException e2)
-          {
-            ll.incrementReadLocks();
-            throw new ManifoldCFException("Interrupted",e,ManifoldCFException.INTERRUPTED);
-          }
-        }
-        catch (ExpiredObjectException e)
-        {
-          // Try again
-        }
-      }
-
-      releaseLocalSection(sectionKey);
-    }
+    lockManager.leaveReadCriticalSection(sectionKey);
   }
 
   /** Enter a named, non-exclusive write critical section (NOT a lock).  Critical sections never cross JVM boundaries.
@@ -1329,46 +400,11 @@
   *@param sectionKey is the name of the section to enter.  Only one thread can be in any given named
   * section at a time.
   */
+  @Override
   public void enterNonExWriteCriticalSection(String sectionKey)
     throws ManifoldCFException
   {
-    LocalLock ll = getLocalSection(sectionKey);
-
-
-    // See if we already own a write lock for the object
-    // If we do, there is no reason to change the status of the global lock we own.
-    if (ll.hasNonExWriteLock() || ll.hasWriteLock())
-    {
-      ll.incrementNonExWriteLocks();
-      return;
-    }
-
-    // Check for illegalities
-    if (ll.hasReadLock())
-    {
-      throw new ManifoldCFException("Illegal lock sequence: NonExWrite critical section can't be within read critical section",ManifoldCFException.GENERAL_ERROR);
-    }
-
-    // We don't own a local non-ex write lock.  Get one.  The global lock will need
-    // to know if we already have a read lock.
-    while (true)
-    {
-      LockObject lo = mySections.getObject(sectionKey,null);
-      try
-      {
-        lo.enterNonExWriteLock();
-        break;
-      }
-      catch (InterruptedException e)
-      {
-        throw new ManifoldCFException("Interrupted",e,ManifoldCFException.INTERRUPTED);
-      }
-      catch (ExpiredObjectException e)
-      {
-        // Try again
-      }
-    }
-    ll.incrementNonExWriteLocks();
+    lockManager.enterNonExWriteCriticalSection(sectionKey);
   }
 
   /** Leave a named, non-exclusive write critical section (NOT a lock).  Critical sections never cross JVM boundaries.
@@ -1376,52 +412,11 @@
   *@param sectionKey is the name of the section to leave.  Only one thread can be in any given named
   * section at a time.
   */
+  @Override
   public void leaveNonExWriteCriticalSection(String sectionKey)
     throws ManifoldCFException
   {
-    LocalLock ll = getLocalSection(sectionKey);
-
-    ll.decrementNonExWriteLocks();
-    // See if we no longer have a write lock for the object.
-    // If we retain the stronger exclusive lock, we still do not need to
-    // change the status of the global lock.
-    if (!(ll.hasNonExWriteLock() || ll.hasWriteLock()))
-    {
-      while (true)
-      {
-        LockObject lo = mySections.getObject(sectionKey,null);
-        try
-        {
-          lo.leaveNonExWriteLock();
-          break;
-        }
-        catch (InterruptedException e)
-        {
-          // try one more time
-          try
-          {
-            lo.leaveNonExWriteLock();
-            throw new ManifoldCFException("Interrupted",e,ManifoldCFException.INTERRUPTED);
-          }
-          catch (InterruptedException e2)
-          {
-            ll.incrementNonExWriteLocks();
-            throw new ManifoldCFException("Interrupted",e2,ManifoldCFException.INTERRUPTED);
-          }
-          catch (ExpiredObjectException e2)
-          {
-            ll.incrementNonExWriteLocks();
-            throw new ManifoldCFException("Interrupted",e,ManifoldCFException.INTERRUPTED);
-          }
-        }
-        catch (ExpiredObjectException e)
-        {
-          // Try again
-        }
-      }
-
-      releaseLocalSection(sectionKey);
-    }
+    lockManager.leaveNonExWriteCriticalSection(sectionKey);
   }
 
   /** Enter a named, exclusive critical section (NOT a lock).  Critical sections never cross JVM boundaries.
@@ -1429,47 +424,11 @@
   *@param sectionKey is the name of the section to enter.  Only one thread can be in any given named
   * section at a time.
   */
+  @Override
   public void enterWriteCriticalSection(String sectionKey)
     throws ManifoldCFException
   {
-    LocalLock ll = getLocalSection(sectionKey);
-
-
-    // See if we already own the write lock for the object
-    if (ll.hasWriteLock())
-    {
-      ll.incrementWriteLocks();
-      return;
-    }
-
-    // Check for illegalities
-    if (ll.hasReadLock() || ll.hasNonExWriteLock())
-    {
-      throw new ManifoldCFException("Illegal lock sequence: Write lock can't be within read lock or non-ex write lock",ManifoldCFException.GENERAL_ERROR);
-    }
-
-    // We don't own a local write lock.  Get one.  The global lock will need
-    // to know if we already have a non-exclusive lock or a read lock, which we don't because
-    // it's illegal.
-    while (true)
-    {
-      LockObject lo = mySections.getObject(sectionKey,null);
-      try
-      {
-        lo.enterWriteLock();
-        break;
-      }
-      catch (InterruptedException e)
-      {
-        throw new ManifoldCFException("Interrupted",e,ManifoldCFException.INTERRUPTED);
-      }
-      catch (ExpiredObjectException e)
-      {
-        // Try again
-      }
-    }
-    ll.incrementWriteLocks();
-
+    lockManager.enterWriteCriticalSection(sectionKey);
   }
 
   /** Leave a named, exclusive critical section (NOT a lock).  Critical sections never cross JVM boundaries.
@@ -1477,49 +436,11 @@
   *@param sectionKey is the name of the section to leave.  Only one thread can be in any given named
   * section at a time.
   */
+  @Override
   public void leaveWriteCriticalSection(String sectionKey)
     throws ManifoldCFException
   {
-    LocalLock ll = getLocalSection(sectionKey);
-
-    ll.decrementWriteLocks();
-    if (!ll.hasWriteLock())
-    {
-      while (true)
-      {
-        LockObject lo = mySections.getObject(sectionKey,null);
-        try
-        {
-          lo.leaveWriteLock();
-          break;
-        }
-        catch (InterruptedException e)
-        {
-          // try one more time
-          try
-          {
-            lo.leaveWriteLock();
-            throw new ManifoldCFException("Interrupted",e,ManifoldCFException.INTERRUPTED);
-          }
-          catch (InterruptedException e2)
-          {
-            ll.incrementWriteLocks();
-            throw new ManifoldCFException("Interrupted",e2,ManifoldCFException.INTERRUPTED);
-          }
-          catch (ExpiredObjectException e2)
-          {
-            ll.incrementWriteLocks();
-            throw new ManifoldCFException("Interrupted",e,ManifoldCFException.INTERRUPTED);
-          }
-        }
-        catch (ExpiredObjectException e)
-        {
-          // Try again
-        }
-      }
-
-      releaseLocalSection(sectionKey);
-    }
+    lockManager.leaveWriteCriticalSection(sectionKey);
   }
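
Critical sections have the same enter/leave discipline as locks but never coordinate across JVMs, so they suit purely in-process structures.  A short usage sketch, again assuming the ILockManager interface and an illustrative section name:

  // Illustrative only: keep enter/leave strictly paired with try/finally.
  void updateInProcessCache(ILockManager lockManager)
    throws ManifoldCFException
  {
    lockManager.enterWriteCriticalSection("cache-section");
    try
    {
      // ... mutate the JVM-local shared structure here ...
    }
    finally
    {
      lockManager.leaveWriteCriticalSection("cache-section");
    }
  }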
 
   /** Enter multiple critical sections simultaneously.
@@ -1527,157 +448,11 @@
   *@param nonExSectionKeys is an array of non-ex write section descriptors, or null if none desired.
   *@param writeSectionKeys is an array of write section descriptors, or null if there are none desired.
   */
+  @Override
   public void enterCriticalSections(String[] readSectionKeys, String[] nonExSectionKeys, String[] writeSectionKeys)
     throws ManifoldCFException
   {
-    // Sort the locks.  This improves the chances of making it through the locking process without
-    // contention!
-    LockDescription lds[] = getSortedUniqueLocks(readSectionKeys,nonExSectionKeys,writeSectionKeys);
-    int locksProcessed = 0;
-    try
-    {
-      while (locksProcessed < lds.length)
-      {
-        LockDescription ld = lds[locksProcessed];
-        int lockType = ld.getType();
-        String lockKey = ld.getKey();
-        LocalLock ll;
-        switch (lockType)
-        {
-        case TYPE_WRITE:
-          ll = getLocalSection(lockKey);
-          // Check for illegalities
-          if ((ll.hasReadLock() || ll.hasNonExWriteLock()) && !ll.hasWriteLock())
-          {
-            throw new ManifoldCFException("Illegal lock sequence: Write critical section can't be within read critical section or non-ex write critical section",ManifoldCFException.GENERAL_ERROR);
-          }
-
-          // See if we already own the write lock for the object
-          if (!ll.hasWriteLock())
-          {
-            // We don't own a local write lock.  Get one.
-            while (true)
-            {
-              LockObject lo = mySections.getObject(lockKey,null);
-              try
-              {
-                lo.enterWriteLock();
-                break;
-              }
-              catch (ExpiredObjectException e)
-              {
-                // Try again
-              }
-            }
-          }
-          ll.incrementWriteLocks();
-          break;
-        case TYPE_WRITENONEX:
-          ll = getLocalSection(lockKey);
-          // Check for illegalities
-          if (ll.hasReadLock() && !(ll.hasNonExWriteLock() || ll.hasWriteLock()))
-          {
-            throw new ManifoldCFException("Illegal lock sequence: NonExWrite critical section can't be within read critical section",ManifoldCFException.GENERAL_ERROR);
-          }
-
-          // See if we already own a write lock (exclusive or non-ex) for the object
-          if (!(ll.hasNonExWriteLock() || ll.hasWriteLock()))
-          {
-            // We don't own a local write lock.  Get one.
-            while (true)
-            {
-              LockObject lo = mySections.getObject(lockKey,null);
-              try
-              {
-                lo.enterNonExWriteLock();
-                break;
-              }
-              catch (ExpiredObjectException e)
-              {
-                // Try again
-              }
-            }
-          }
-          ll.incrementNonExWriteLocks();
-          break;
-        case TYPE_READ:
-          ll = getLocalSection(lockKey);
-          if (!(ll.hasReadLock() || ll.hasNonExWriteLock() || ll.hasWriteLock()))
-          {
-            // We don't own a local read lock.  Get one.
-            while (true)
-            {
-              LockObject lo = mySections.getObject(lockKey,null);
-              try
-              {
-                lo.enterReadLock();
-                break;
-              }
-              catch (ExpiredObjectException e)
-              {
-                // Try again
-              }
-            }
-          }
-          ll.incrementReadLocks();
-          break;
-        }
-        locksProcessed++;
-      }
-      // Got all; we are done!
-      return;
-    }
-    catch (Throwable ex)
-    {
-      // No matter what, undo the locks we've taken
-      ManifoldCFException ae = null;
-      int errno = 0;
-
-      while (--locksProcessed >= 0)
-      {
-        LockDescription ld = lds[locksProcessed];
-        int lockType = ld.getType();
-        String lockKey = ld.getKey();
-        try
-        {
-          switch (lockType)
-          {
-          case TYPE_READ:
-            leaveReadCriticalSection(lockKey);
-            break;
-          case TYPE_WRITENONEX:
-            leaveNonExWriteCriticalSection(lockKey);
-            break;
-          case TYPE_WRITE:
-            leaveWriteCriticalSection(lockKey);
-            break;
-          }
-        }
-        catch (ManifoldCFException e)
-        {
-          ae = e;
-        }
-      }
-
-      if (ae != null)
-      {
-        throw ae;
-      }
-      if (ex instanceof ManifoldCFException)
-      {
-        throw (ManifoldCFException)ex;
-      }
-      if (ex instanceof InterruptedException)
-      {
-        // It's InterruptedException
-        throw new ManifoldCFException("Interrupted",ex,ManifoldCFException.INTERRUPTED);
-      }
-      if (!(ex instanceof Error))
-      {
-        throw new Error("Unexpected exception",ex);
-      }
-      throw (Error)ex;
-    }
+    lockManager.enterCriticalSections(readSectionKeys, nonExSectionKeys, writeSectionKeys);
   }
 
   /** Leave multiple critical sections simultaneously.
@@ -1685,284 +460,11 @@
   *@param nonExSectionKeys is an array of non-ex write section descriptors, or null if none desired.
   *@param writeSectionKeys is an array of write section descriptors, or null if there are none desired.
   */
+  @Override
   public void leaveCriticalSections(String[] readSectionKeys, String[] nonExSectionKeys, String[] writeSectionKeys)
     throws ManifoldCFException
   {
-    LockDescription[] lds = getSortedUniqueLocks(readSectionKeys,nonExSectionKeys,writeSectionKeys);
-    // Free them all... one at a time is fine
-    ManifoldCFException ae = null;
-    int i = lds.length;
-    while (--i >= 0)
-    {
-      LockDescription ld = lds[i];
-      String lockKey = ld.getKey();
-      int lockType = ld.getType();
-      try
-      {
-        switch (lockType)
-        {
-        case TYPE_READ:
-          leaveReadCriticalSection(lockKey);
-          break;
-        case TYPE_WRITENONEX:
-          leaveNonExWriteCriticalSection(lockKey);
-          break;
-        case TYPE_WRITE:
-          leaveWriteCriticalSection(lockKey);
-          break;
-        }
-      }
-      catch (ManifoldCFException e)
-      {
-        ae = e;
-      }
-    }
-
-    if (ae != null)
-    {
-      throw ae;
-    }
-  }
-
-  protected LocalLock getLocalLock(String lockKey)
-  {
-    LocalLock ll = (LocalLock)localLocks.get(lockKey);
-    if (ll == null)
-    {
-      ll = new LocalLock();
-      localLocks.put(lockKey,ll);
-    }
-    return ll;
-  }
-
-  protected void releaseLocalLock(String lockKey)
-  {
-    localLocks.remove(lockKey);
-  }
-
-  protected LocalLock getLocalSection(String sectionKey)
-  {
-    LocalLock ll = (LocalLock)localSections.get(sectionKey);
-    if (ll == null)
-    {
-      ll = new LocalLock();
-      localSections.put(sectionKey,ll);
-    }
-    return ll;
-  }
-
-  protected void releaseLocalSection(String sectionKey)
-  {
-    localSections.remove(sectionKey);
-  }
-
-  /** Process inbound locks into a sorted vector of most-restrictive unique locks
-  */
-  protected LockDescription[] getSortedUniqueLocks(String[] readLocks, String[] writeNonExLocks,
-    String[] writeLocks)
-  {
-    // First build a unique hash of lock descriptions
-    HashMap ht = new HashMap();
-    int i;
-    if (readLocks != null)
-    {
-      i = 0;
-      while (i < readLocks.length)
-      {
-        String key = readLocks[i++];
-        LockDescription ld = (LockDescription)ht.get(key);
-        if (ld == null)
-        {
-          ld = new LockDescription(TYPE_READ,key);
-          ht.put(key,ld);
-        }
-        else
-          ld.set(TYPE_READ);
-      }
-    }
-    if (writeNonExLocks != null)
-    {
-      i = 0;
-      while (i < writeNonExLocks.length)
-      {
-        String key = writeNonExLocks[i++];
-        LockDescription ld = (LockDescription)ht.get(key);
-        if (ld == null)
-        {
-          ld = new LockDescription(TYPE_WRITENONEX,key);
-          ht.put(key,ld);
-        }
-        else
-          ld.set(TYPE_WRITENONEX);
-      }
-    }
-    if (writeLocks != null)
-    {
-      i = 0;
-      while (i < writeLocks.length)
-      {
-        String key = writeLocks[i++];
-        LockDescription ld = (LockDescription)ht.get(key);
-        if (ld == null)
-        {
-          ld = new LockDescription(TYPE_WRITE,key);
-          ht.put(key,ld);
-        }
-        else
-          ld.set(TYPE_WRITE);
-      }
-    }
-
-    // Now, sort by key name
-    LockDescription[] rval = new LockDescription[ht.size()];
-    String[] sortarray = new String[ht.size()];
-    i = 0;
-    Iterator iter = ht.keySet().iterator();
-    while (iter.hasNext())
-    {
-      String key = (String)iter.next();
-      sortarray[i++] = key;
-    }
-    java.util.Arrays.sort(sortarray);
-    i = 0;
-    while (i < sortarray.length)
-    {
-      rval[i] = (LockDescription)ht.get(sortarray[i]);
-      i++;
-    }
-    return rval;
-  }
-
-  /** Create a file path given a key name.
-  *@param key is the key name.
-  *@return the file path.
-  */
-  protected String makeFilePath(String key)
-  {
-    int hashcode = key.hashCode();
-    int outerDirNumber = (hashcode & (1023));
-    int innerDirNumber = ((hashcode >> 10) & (1023));
-    String fullDir = synchDirectory.toString();
-    if (fullDir.length() == 0 || !fullDir.endsWith("/"))
-      fullDir = fullDir + "/";
-    fullDir = fullDir + Integer.toString(outerDirNumber)+"/"+Integer.toString(innerDirNumber);
-    return fullDir;
-  }
-
-  protected class LockDescription
-  {
-    protected int lockType;
-    protected String lockKey;
-
-    public LockDescription(int lockType, String lockKey)
-    {
-      this.lockType = lockType;
-      this.lockKey = lockKey;
-    }
-
-    public void set(int lockType)
-    {
-      if (lockType > this.lockType)
-        this.lockType = lockType;
-    }
-
-    public int getType()
-    {
-      return lockType;
-    }
-
-    public String getKey()
-    {
-      return lockKey;
-    }
-  }
-
-  protected class LocalLock
-  {
-    private int readCount = 0;
-    private int writeCount = 0;
-    private int nonExWriteCount = 0;
-
-    public LocalLock()
-    {
-    }
-
-    public boolean hasWriteLock()
-    {
-      return (writeCount > 0);
-    }
-
-    public boolean hasReadLock()
-    {
-      return (readCount > 0);
-    }
-
-    public boolean hasNonExWriteLock()
-    {
-      return (nonExWriteCount > 0);
-    }
-
-    public void incrementReadLocks()
-    {
-      readCount++;
-    }
-
-    public void incrementNonExWriteLocks()
-    {
-      nonExWriteCount++;
-    }
-
-    public void incrementWriteLocks()
-    {
-      writeCount++;
-    }
-
-    public void decrementReadLocks()
-    {
-      readCount--;
-    }
-
-    public void decrementNonExWriteLocks()
-    {
-      nonExWriteCount--;
-    }
-
-    public void decrementWriteLocks()
-    {
-      writeCount--;
-    }
-  }
-  
-  protected static final int BASE_SIZE = 128;
-  
-  protected static class ByteArrayBuffer
-  {
-    protected byte[] buffer;
-    protected int length;
-    
-    public ByteArrayBuffer()
-    {
-      buffer = new byte[BASE_SIZE];
-      length = 0;
-    }
-    
-    public void add(byte b)
-    {
-      if (length == buffer.length)
-      {
-        byte[] oldbuffer = buffer;
-        buffer = new byte[length * 2];
-        System.arraycopy(oldbuffer,0,buffer,0,length);
-      }
-      buffer[length++] = b;
-    }
-    
-    public byte[] toArray()
-    {
-      byte[] rval = new byte[length];
-      System.arraycopy(buffer,0,rval,0,length);
-      return rval;
-    }
+    lockManager.leaveCriticalSections(readSectionKeys, nonExSectionKeys, writeSectionKeys);
   }
 
 }
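
The removed makeFilePath() helper above spreads per-lock files across a two-level directory tree (up to 1024 x 1024 subdirectories) derived from the key's hash code; the same computation also disappears from the file-based LockObject constructor in the next file.  A standalone sketch of that layout, with an illustrative root directory and key:

  // Sketch of the removed two-level hashed directory layout; paths are examples only.
  public class LockPathDemo
  {
    static String makeFilePath(String synchDirectory, String key)
    {
      int hashcode = key.hashCode();
      int outerDirNumber = (hashcode & 1023);           // low 10 bits: up to 1024 outer dirs
      int innerDirNumber = ((hashcode >> 10) & 1023);   // next 10 bits: up to 1024 inner dirs
      String fullDir = synchDirectory;
      if (fullDir.length() == 0 || !fullDir.endsWith("/"))
        fullDir = fullDir + "/";
      return fullDir + Integer.toString(outerDirNumber) + "/" + Integer.toString(innerDirNumber);
    }

    public static void main(String[] args)
    {
      // Prints something like /var/manifoldcf/synch/<outer>/<inner>
      System.out.println(makeFilePath("/var/manifoldcf/synch", "jobqueue-lock"));
    }
  }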
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/lockmanager/LockObject.java b/framework/core/src/main/java/org/apache/manifoldcf/core/lockmanager/LockObject.java
index 53bf63c..a18d6f1 100644
--- a/framework/core/src/main/java/org/apache/manifoldcf/core/lockmanager/LockObject.java
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/lockmanager/LockObject.java
@@ -23,51 +23,27 @@
 import org.apache.manifoldcf.core.system.Logging;
 import java.io.*;
 
-/** One instance of this object exists for each lock on each JVM!
+/** Base class for lock objects.  One instance of this object exists for each lock on each JVM!
 */
 public class LockObject
 {
   public static final String _rcsid = "@(#)$Id: LockObject.java 988245 2010-08-23 18:39:35Z kwright $";
 
-  private final static int STATUS_WRITELOCKED = -1;
+  protected Object lockKey;
 
   private LockPool lockPool;
-  private Object lockKey;
-  private File lockDirectoryName = null;
-  private File lockFileName = null;
   private boolean obtainedWrite = false;  // Set to true if this object already owns the permission to exclusively write
   private int obtainedRead = 0;           // Set to a count if this object already owns the permission to read
   private int obtainedNonExWrite = 0;     // Set to a count if this object already owns the permission to non-exclusively write
-  private boolean isSync;                 // True if we need to be synchronizing across JVM's
 
-  private final static String DOTLOCK = ".lock";
-  private final static String DOTFILE = ".file";
-  private final static String SLASH = "/";
-  private static final String LOCKEDANOTHERTHREAD = "Locked by another thread in this JVM";
-  private static final String LOCKEDANOTHERJVM = "Locked by another JVM";
+  protected static final String LOCKEDANOTHERTHREAD = "Locked by another thread in this JVM";
+  protected static final String LOCKEDANOTHERJVM = "Locked by another JVM";
 
 
-  public LockObject(LockPool lockPool, Object lockKey, File synchDir)
+  public LockObject(LockPool lockPool, Object lockKey)
   {
     this.lockPool = lockPool;
     this.lockKey = lockKey;
-    this.isSync = (synchDir != null);
-    if (isSync)
-    {
-      // Hash the filename
-      int hashcode = lockKey.hashCode();
-      int outerDirNumber = (hashcode & (1023));
-      int innerDirNumber = ((hashcode >> 10) & (1023));
-      String fullDir = synchDir.toString();
-      if (fullDir.length() == 0 || !fullDir.endsWith(SLASH))
-        fullDir = fullDir + SLASH;
-      fullDir = fullDir + Integer.toString(outerDirNumber)+SLASH+Integer.toString(innerDirNumber);
-      (new File(fullDir)).mkdirs();
-      String filename = createFileName(lockKey);
-
-      lockDirectoryName = new File(fullDir,filename+DOTLOCK);
-      lockFileName = new File(fullDir,filename+DOTFILE);
-    }
   }
 
   public synchronized void makeInvalid()
@@ -75,19 +51,12 @@
     this.lockPool = null;
   }
 
-  private static String createFileName(Object lockKey)
-  {
-    return "lock-"+ManifoldCF.safeFileName(lockKey.toString());
-  }
-
   /** This method WILL NOT BE CALLED UNLESS we are actually committing a write lock for the
   * first time for a given thread.
   */
   public void enterWriteLock()
-    throws InterruptedException, ExpiredObjectException
+    throws ManifoldCFException, InterruptedException, ExpiredObjectException
   {
-    // if (lockFileName != null)
-    //  System.out.println("Entering write lock for resource "+lockFileName.toString());
     while (true)
     {
       try
@@ -102,8 +71,6 @@
             try
             {
               enterWriteLockNoWait();
-              // if (lockFileName != null)
-              //      System.out.println("Leaving write lock for resource "+lockFileName.toString());
               return;
             }
             catch (LocalLockException le)
@@ -129,7 +96,7 @@
   * exclusive write area.
   */
   public synchronized void enterWriteLockNoWait()
-    throws LockException, LocalLockException, InterruptedException, ExpiredObjectException
+    throws ManifoldCFException, LockException, LocalLockException, InterruptedException, ExpiredObjectException
   {
     if (lockPool == null)
       throw new ExpiredObjectException("Invalid");
@@ -141,31 +108,19 @@
     if (obtainedRead > 0 || obtainedNonExWrite > 0)
       throw new LocalLockException(LOCKEDANOTHERTHREAD);
     // Attempt to obtain a global write lock
-    if (isSync)
-    {
-      grabFileLock();
-      try
-      {
-        int status = readFile();
-        if (status != 0)
-        {
-          throw new LockException(LOCKEDANOTHERJVM);
-        }
-        writeFile(STATUS_WRITELOCKED);
-      }
-      finally
-      {
-        releaseFileLock();
-      }
-    }
+    obtainGlobalWriteLockNoWait();
     obtainedWrite = true;
   }
 
-  public void leaveWriteLock()
-    throws InterruptedException, ExpiredObjectException
+  protected void obtainGlobalWriteLockNoWait()
+    throws ManifoldCFException, LockException, InterruptedException
   {
-    // if (lockFileName != null)
-    //      System.out.println("Releasing write lock for resource "+lockFileName.toString());
+  }
+  
+
+  public void leaveWriteLock()
+    throws ManifoldCFException, InterruptedException, ExpiredObjectException
+  {
     while (true)
     {
       try
@@ -178,43 +133,30 @@
           if (obtainedWrite == false)
             throw new RuntimeException("JVM failure: Don't hold lock for object "+this.toString());
           obtainedWrite = false;
-          if (isSync)
+          try
           {
-            try
-            {
-              grabFileLock();
-              try
-              {
-                writeFile(0);
-              }
-              finally
-              {
-                releaseFileLock();
-              }
-            }
-            catch (LockException le)
-            {
-              obtainedWrite = true;
-              throw le;
-            }
-            catch (Error e)
-            {
-              obtainedWrite = true;
-              throw e;
-            }
-            catch (RuntimeException e)
-            {
-              obtainedWrite = true;
-              throw e;
-            }
+            clearGlobalWriteLock();
+          }
+          catch (LockException le)
+          {
+            obtainedWrite = true;
+            throw le;
+          }
+          catch (Error e)
+          {
+            obtainedWrite = true;
+            throw e;
+          }
+          catch (RuntimeException e)
+          {
+            obtainedWrite = true;
+            throw e;
           }
 
           // Lock is free, so release this object from the pool
           lockPool.releaseObject(lockKey,this);
 
           notifyAll();
-          // if (lockFileName != null)
-          //      System.out.println("Write lock released for resource "+lockFileName.toString());
           return;
         }
       }
@@ -227,8 +169,13 @@
 
   }
 
+  protected void clearGlobalWriteLock()
+    throws ManifoldCFException, LockException, InterruptedException
+  {
+  }
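
With the file handling factored out, LockObject becomes a template-method base class: the local reference counting stays here, while the protected obtainGlobal*/clearGlobal* hooks (no-ops in the base) are where a cross-JVM subclass plugs in its coordination.  A self-contained sketch of that shape, with hypothetical names and simplified exceptions:

  // Hypothetical names; only the template-method shape mirrors the patch.
  abstract class BaseWriteLock
  {
    private boolean obtainedWrite = false;

    public final synchronized void enterWriteLockNoWait() throws Exception
    {
      if (obtainedWrite)
        throw new IllegalStateException("Already write-locked by this thread");
      obtainGlobalWriteLockNoWait();   // cross-JVM step supplied by the subclass
      obtainedWrite = true;
    }

    public final synchronized void leaveWriteLock() throws Exception
    {
      if (!obtainedWrite)
        throw new IllegalStateException("Not write-locked");
      obtainedWrite = false;
      try
      {
        clearGlobalWriteLock();
      }
      catch (Exception e)
      {
        obtainedWrite = true;          // restore local state on failure, as the patch does
        throw e;
      }
    }

    /** Default: purely local lock with no cross-JVM coordination. */
    protected void obtainGlobalWriteLockNoWait() throws Exception
    {
    }

    protected void clearGlobalWriteLock() throws Exception
    {
    }
  }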
+  
   public void enterNonExWriteLock()
-    throws InterruptedException, ExpiredObjectException
+    throws ManifoldCFException, InterruptedException, ExpiredObjectException
   {
     while (true)
     {
@@ -270,7 +217,7 @@
   * exclusive write area.
   */
   public synchronized void enterNonExWriteLockNoWait()
-    throws LockException, LocalLockException, InterruptedException, ExpiredObjectException
+    throws ManifoldCFException, LockException, LocalLockException, InterruptedException, ExpiredObjectException
   {
     if (lockPool == null)
       throw new ExpiredObjectException("Invalid");
@@ -284,32 +231,18 @@
       obtainedNonExWrite++;
       return;
     }
-
-    // Attempt to obtain a global write lock
-    if (isSync)
-    {
-      grabFileLock();
-      try
-      {
-        int status = readFile();
-        if (status >= STATUS_WRITELOCKED)
-        {
-          throw new LockException(LOCKEDANOTHERJVM);
-        }
-        if (status == 0)
-          status = STATUS_WRITELOCKED;
-        writeFile(status-1);
-      }
-      finally
-      {
-        releaseFileLock();
-      }
-    }
+    obtainGlobalNonExWriteLockNoWait();
     obtainedNonExWrite++;
   }
 
+  protected void obtainGlobalNonExWriteLockNoWait()
+    throws ManifoldCFException, LockException, InterruptedException
+  {
+  }
+  
+
   public void leaveNonExWriteLock()
-    throws InterruptedException, ExpiredObjectException
+    throws ManifoldCFException, InterruptedException, ExpiredObjectException
   {
     // System.out.println("Releasing non-ex-write lock for resource "+lockFileName.toString());
     while (true)
@@ -327,41 +260,24 @@
           if (obtainedNonExWrite > 0)
             return;
 
-          if (isSync)
+          try
           {
-            try
-            {
-              grabFileLock();
-              try
-              {
-                int status = readFile();
-                if (status >= STATUS_WRITELOCKED)
-                  throw new RuntimeException("JVM error: File lock is not in expected state for object "+this.toString());
-                status++;
-                if (status == STATUS_WRITELOCKED)
-                  status = 0;
-                writeFile(status);
-              }
-              finally
-              {
-                releaseFileLock();
-              }
-            }
-            catch (LockException le)
-            {
-              obtainedNonExWrite++;
-              throw le;
-            }
-            catch (Error e)
-            {
-              obtainedNonExWrite++;
-              throw e;
-            }
-            catch (RuntimeException e)
-            {
-              obtainedNonExWrite++;
-              throw e;
-            }
+            clearGlobalNonExWriteLock();
+          }
+          catch (LockException le)
+          {
+            obtainedNonExWrite++;
+            throw le;
+          }
+          catch (Error e)
+          {
+            obtainedNonExWrite++;
+            throw e;
+          }
+          catch (RuntimeException e)
+          {
+            obtainedNonExWrite++;
+            throw e;
           }
 
           // Lock is free, so release this object from the pool
@@ -380,8 +296,14 @@
     // System.out.println("Non-ex Write lock released for resource "+lockFileName.toString());
   }
 
+  protected void clearGlobalNonExWriteLock()
+    throws ManifoldCFException, LockException, InterruptedException
+  {
+  }
+  
+
   public void enterReadLock()
-    throws InterruptedException, ExpiredObjectException
+    throws ManifoldCFException, InterruptedException, ExpiredObjectException
   {
     // if (lockFileName != null)
     //      System.out.println("Entering read lock for resource "+lockFileName.toString()+" "+toString());
@@ -419,7 +341,7 @@
   }
 
   public synchronized void enterReadLockNoWait()
-    throws LockException, LocalLockException, InterruptedException, ExpiredObjectException
+    throws ManifoldCFException, LockException, LocalLockException, InterruptedException, ExpiredObjectException
   {
     if (lockPool == null)
       throw new ExpiredObjectException("Invalid");
@@ -432,38 +354,20 @@
       return;
     }
     // Got the read token locally!
-
-    // Attempt to obtain a global read lock
-    if (isSync)
-    {
-      grabFileLock();
-      try
-      {
-        int status = readFile();
-        // System.out.println(" Read "+Integer.toString(status));
-        if (status <= STATUS_WRITELOCKED)
-        {
-          throw new LockException(LOCKEDANOTHERJVM);
-        }
-        status++;
-        writeFile(status);
-        // System.out.println(" Wrote "+Integer.toString(status));
-      }
-      finally
-      {
-        releaseFileLock();
-      }
-      // System.out.println(" Exiting");
-    }
+    obtainGlobalReadLockNoWait();
 
     obtainedRead = 1;
   }
 
-  public void leaveReadLock()
-    throws InterruptedException, ExpiredObjectException
+  protected void obtainGlobalReadLockNoWait()
+    throws ManifoldCFException, LockException, InterruptedException
   {
-    // if (lockFileName != null)
-    //      System.out.println("Leaving read lock for resource "+lockFileName.toString()+" "+toString());
+  }
+  
+
+  public void leaveReadLock()
+    throws ManifoldCFException, InterruptedException, ExpiredObjectException
+  {
     while (true)
     {
       try
@@ -478,54 +382,32 @@
           obtainedRead--;
           if (obtainedRead > 0)
           {
-            // if (lockFileName != null)
-            //      System.out.println("Freed read lock for resource "+lockFileName.toString()+" (obtainedRead > 0)");
             return;
           }
-          if (isSync)
+          try
           {
-            try
-            {
-              grabFileLock();
-              try
-              {
-                int status = readFile();
-                // System.out.println(" Read status = "+Integer.toString(status));
-                if (status == 0)
-                  throw new RuntimeException("JVM error: File lock is not in expected state for object "+this.toString());
-                status--;
-                writeFile(status);
-                // System.out.println(" Wrote status = "+Integer.toString(status));
-              }
-              finally
-              {
-                releaseFileLock();
-              }
-              // System.out.println(" Done");
-            }
-            catch (LockException le)
-            {
-              obtainedRead++;
-              throw le;
-            }
-            catch (Error e)
-            {
-              obtainedRead++;
-              throw e;
-            }
-            catch (RuntimeException e)
-            {
-              obtainedRead++;
-              throw e;
-            }
+            clearGlobalReadLock();
+          }
+          catch (LockException le)
+          {
+            obtainedRead++;
+            throw le;
+          }
+          catch (Error e)
+          {
+            obtainedRead++;
+            throw e;
+          }
+          catch (RuntimeException e)
+          {
+            obtainedRead++;
+            throw e;
           }
 
           // Lock is free, so release this object from the pool
           lockPool.releaseObject(lockKey,this);
 
           notifyAll();
-          // if (lockFileName != null)
-          //      System.out.println("Freed read lock for resource "+lockFileName.toString());
           return;
         }
       }
@@ -537,273 +419,10 @@
     }
   }
 
-  private final static String FILELOCKED = "File locked";
-
-  private synchronized void grabFileLock()
-    throws LockException, InterruptedException
+  protected void clearGlobalReadLock()
+    throws ManifoldCFException, LockException, InterruptedException
   {
-    while (true)
-    {
-      // Try to create the lock file
-      try
-      {
-        if (lockDirectoryName.createNewFile() == false)
-          throw new LockException(FILELOCKED);
-        break;
-      }
-      catch (InterruptedIOException e)
-      {
-        throw new InterruptedException("Interrupted IO: "+e.getMessage());
-      }
-      catch (IOException e)
-      {
-        // Log this if possible
-        try
-        {
-          Logging.lock.warn("Attempt to set file lock '"+lockDirectoryName.toString()+"' failed: "+e.getMessage(),e);
-        }
-        catch (Throwable e2)
-        {
-          e.printStackTrace();
-        }
-        // Winnt sometimes throws an exception when you can't do the lock
-        ManifoldCF.sleep(100);
-        continue;
-      }
-    }
   }
-
-  private synchronized void releaseFileLock()
-    throws InterruptedException
-  {
-    Throwable ie = null;
-    while (true)
-    {
-      try
-      {
-        if (lockDirectoryName.delete())
-          break;
-        try
-        {
-          Logging.lock.fatal("Failure deleting file lock '"+lockDirectoryName.toString()+"'");
-        }
-        catch (Throwable e2)
-        {
-          System.out.println("Failure deleting file lock '"+lockDirectoryName.toString()+"'");
-        }
-        // Fail hard
-        System.exit(-100);
-      }
-      catch (Error e)
-      {
-        // An error - must try again to delete
-        // Attempting to log this to the log may not work due to disk being full, but try anyway.
-        String message = "Error deleting file lock '"+lockDirectoryName.toString()+"': "+e.getMessage();
-        try
-        {
-          Logging.lock.error(message,e);
-        }
-        catch (Throwable e2)
-        {
-          // Ok, we failed, send it to standard out
-          System.out.println(message);
-          e.printStackTrace();
-        }
-        ie = e;
-        ManifoldCF.sleep(100);
-        continue;
-      }
-      catch (RuntimeException e)
-      {
-        // A runtime exception - try again to delete
-        // Attempting to log this to the log may not work due to disk being full, but try anyway.
-        String message = "Error deleting file lock '"+lockDirectoryName.toString()+"': "+e.getMessage();
-        try
-        {
-          Logging.lock.error(message,e);
-        }
-        catch (Throwable e2)
-        {
-          // Ok, we failed, send it to standard out
-          System.out.println(message);
-          e.printStackTrace();
-        }
-        ie = e;
-        ManifoldCF.sleep(100);
-        continue;
-      }
-    }
-
-    // Succeeded finally - but we need to rethrow any exceptions we got
-    if (ie != null)
-    {
-      if (ie instanceof InterruptedException)
-        throw (InterruptedException)ie;
-      if (ie instanceof Error)
-        throw (Error)ie;
-      if (ie instanceof RuntimeException)
-        throw (RuntimeException)ie;
-    }
-
-  }
-
-  private synchronized int readFile()
-    throws InterruptedException
-  {
-    try
-    {
-      FileReader fr = new FileReader(lockFileName);
-      try
-      {
-        BufferedReader x = new BufferedReader(fr);
-        try
-        {
-          StringBuilder sb = new StringBuilder();
-          while (true)
-          {
-            int rval = x.read();
-            if (rval == -1)
-              break;
-            sb.append((char)rval);
-          }
-          try
-          {
-            return Integer.parseInt(sb.toString());
-          }
-          catch (NumberFormatException e)
-          {
-            // We should never be in a situation where we can't parse a number we have supposedly written.
-            // But, print a stack trace and throw IOException, so we recover.
-            throw new IOException("Lock number read was not valid: "+e.getMessage());
-          }
-        }
-        finally
-        {
-          x.close();
-        }
-      }
-      catch (InterruptedIOException e)
-      {
-        throw new InterruptedException("Interrupted IO: "+e.getMessage());
-      }
-      catch (IOException e)
-      {
-        String message = "Could not read from lock file: '"+lockFileName.toString()+"'";
-        try
-        {
-          Logging.lock.error(message,e);
-        }
-        catch (Throwable e2)
-        {
-          System.out.println(message);
-          e.printStackTrace();
-        }
-        // Don't fail hard or there is no way to recover
-        throw e;
-      }
-      finally
-      {
-        fr.close();
-      }
-    }
-    catch (InterruptedIOException e)
-    {
-      throw new InterruptedException("Interrupted IO: "+e.getMessage());
-    }
-    catch (IOException e)
-    {
-      return 0;
-    }
-
-  }
-
-  private synchronized void writeFile(int value)
-    throws InterruptedException
-  {
-    try
-    {
-      if (value == 0)
-      {
-        if (lockFileName.delete() == false)
-          throw new IOException("Could not delete file '"+lockFileName.toString()+"'");
-      }
-      else
-      {
-        FileWriter fw = new FileWriter(lockFileName);
-        try
-        {
-          BufferedWriter x = new BufferedWriter(fw);
-          try
-          {
-            x.write(Integer.toString(value));
-          }
-          finally
-          {
-            x.close();
-          }
-        }
-        finally
-        {
-          fw.close();
-        }
-      }
-    }
-    catch (Error e)
-    {
-      // Couldn't write for some reason!  Write to BOTH stdout and the log, since we
-      // can't be sure we will succeed at the latter.
-      String message = "Couldn't write to lock file; hard error occurred.  Shutting down process; locks may be left dangling.  You must cleanup before restarting.";
-      try
-      {
-        Logging.lock.error(message,e);
-      }
-      catch (Throwable e2)
-      {
-        System.out.println(message);
-        e.printStackTrace();
-      }
-      System.exit(-100);
-    }
-    catch (RuntimeException e)
-    {
-      // Couldn't write for some reason!  Write to BOTH stdout and the log, since we
-      // can't be sure we will succeed at the latter.
-      String message = "Couldn't write to lock file; JVM error.  Shutting down process; locks may be left dangling.  You must cleanup before restarting.";
-      try
-      {
-        Logging.lock.error(message,e);
-      }
-      catch (Throwable e2)
-      {
-        System.out.println(message);
-        e.printStackTrace();
-      }
-      System.exit(-100);
-    }
-    catch (InterruptedIOException e)
-    {
-      throw new InterruptedException("Interrupted IO: "+e.getMessage());
-    }
-    catch (IOException e)
-    {
-      // Couldn't write for some reason!  Write to BOTH stdout and the log, since we
-      // can't be sure we will succeed at the latter.
-      String message = "Couldn't write to lock file; disk may be full.  Shutting down process; locks may be left dangling.  You must cleanup before restarting.";
-      try
-      {
-        Logging.lock.error(message,e);
-      }
-      catch (Throwable e2)
-      {
-        System.out.println(message);
-        e.printStackTrace();
-      }
-      System.exit(-100);
-      // Hard failure is called for
-      // throw new Error("Lock management system failure",e);
-    }
-  }
-
-
+  
 }
 
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/lockmanager/LockObjectFactory.java b/framework/core/src/main/java/org/apache/manifoldcf/core/lockmanager/LockObjectFactory.java
new file mode 100644
index 0000000..9df5686
--- /dev/null
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/lockmanager/LockObjectFactory.java
@@ -0,0 +1,42 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.core.lockmanager;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.core.system.ManifoldCF;
+import org.apache.manifoldcf.core.system.Logging;
+import java.io.*;
+
+/** Base factory for lock objects.  This will be extended to
+* support different kinds of lock objects.
+*/
+public class LockObjectFactory
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  public LockObjectFactory()
+  {
+  }
+  
+  public LockObject newLockObject(LockPool lockPool, Object lockKey)
+  {
+    return new LockObject(lockPool, lockKey);
+  }
+}
+
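A minimal sketch, not part of this change, of how this factory is meant to be specialized; the class name below is hypothetical, and the real specialization is the ZooKeeper-backed factory used by ZooKeeperLockManager later in this diff:

  package org.apache.manifoldcf.core.lockmanager;

  /** Illustrative only: shows the intended extension point of LockObjectFactory. */
  public class ExampleLockObjectFactory extends LockObjectFactory
  {
    @Override
    public LockObject newLockObject(LockPool lockPool, Object lockKey)
    {
      // A real backend would return a LockObject subclass that overrides the
      // obtainGlobal*/clearGlobal* hooks to coordinate locks across processes.
      return new LockObject(lockPool, lockKey);
    }
  }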
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/lockmanager/LockPool.java b/framework/core/src/main/java/org/apache/manifoldcf/core/lockmanager/LockPool.java
index fa94f69..21774a8 100644
--- a/framework/core/src/main/java/org/apache/manifoldcf/core/lockmanager/LockPool.java
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/lockmanager/LockPool.java
@@ -21,18 +21,29 @@
 import java.util.*;
 import java.io.*;
 
+/** Base class of all lock pools.  A lock pool is a global set of lock objects looked up by
+* a key.  Lock object instantiation differs for different kinds of lock objects, so this
+* base class is expected to be extended accordingly.
+*/
 public class LockPool
 {
   public static final String _rcsid = "@(#)$Id: LockPool.java 988245 2010-08-23 18:39:35Z kwright $";
 
-  private HashMap myLocks = new HashMap();
+  protected final Map<Object,LockObject> myLocks = new HashMap<Object,LockObject>();
 
-  public synchronized LockObject getObject(Object lockKey, File synchDir)
+  protected final LockObjectFactory factory;
+  
+  public LockPool(LockObjectFactory factory)
   {
-    LockObject lo = (LockObject)myLocks.get(lockKey);
+    this.factory = factory;
+  }
+  
+  public synchronized LockObject getObject(Object lockKey)
+  {
+    LockObject lo = myLocks.get(lockKey);
     if (lo == null)
     {
-      lo = new LockObject(this,lockKey,synchDir);
+      lo = factory.newLockObject(this,lockKey);
       myLocks.put(lockKey,lo);
     }
     return lo;
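A hedged usage sketch, assuming a caller in the same package; the method name and key string are illustrative and not taken from this change:

  // Construct a pool around a factory; lock objects are created lazily and cached by key.
  void readProtectedWork()
    throws ManifoldCFException, InterruptedException, ExpiredObjectException
  {
    LockPool lockPool = new LockPool(new LockObjectFactory());
    LockObject lock = lockPool.getObject("example-resource-key");
    lock.enterReadLock();
    try
    {
      // ... read-only work against the shared resource ...
    }
    finally
    {
      lock.leaveReadLock();
    }
  }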
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/lockmanager/ZooKeeperConnection.java b/framework/core/src/main/java/org/apache/manifoldcf/core/lockmanager/ZooKeeperConnection.java
new file mode 100644
index 0000000..90617e7
--- /dev/null
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/lockmanager/ZooKeeperConnection.java
@@ -0,0 +1,611 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.core.lockmanager;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.core.system.Logging;
+import org.apache.manifoldcf.core.system.ManifoldCF;
+
+import org.apache.zookeeper.*;
+import org.apache.zookeeper.data.ACL;
+import org.apache.zookeeper.data.Stat;
+
+import java.util.*;
+import java.io.*;
+
+/** An instance of this class is the Zookeeper analog to a database connection.
+* Basically, it bundles up the Zookeeper functionality we need in a nice package,
+* which we can share between users as needed.  These connections will be pooled,
+* and will be closed when the process they live in is shut down.
+*/
+public class ZooKeeperConnection
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  private static final String READ_PREFIX = "read-";
+  private static final String NONEXWRITE_PREFIX = "nonexwrite-";
+  private static final String WRITE_PREFIX = "write-";
+
+  private static final String CHILD_PREFIX = "child-";
+  
+  // Our zookeeper client
+  protected ZooKeeper zookeeper = null;
+  protected ZooKeeperWatcher zookeeperWatcher = null;
+
+  // Transient state
+  protected String lockNode = null;
+
+  /** Constructor. */
+  public ZooKeeperConnection(String connectString, int sessionTimeout)
+    throws ManifoldCFException, InterruptedException
+  {
+    try
+    {
+      zookeeperWatcher = new ZooKeeperWatcher();
+      zookeeper = new ZooKeeper(connectString, sessionTimeout, zookeeperWatcher);
+    }
+    catch (InterruptedIOException e)
+    {
+      throw new InterruptedException(e.getMessage());
+    }
+    catch (IOException e)
+    {
+      throw new ManifoldCFException("Zookeeper initialization error: "+e.getMessage(),e);
+    }
+  }
+
+  /** Create a transient node.
+  */
+  public void createNode(String nodePath, byte[] nodeData)
+    throws ManifoldCFException, InterruptedException
+  {
+    try
+    {
+      zookeeper.create(nodePath, nodeData, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
+    }
+    catch (KeeperException e)
+    {
+      throw new ManifoldCFException(e.getMessage(),e);
+    }
+  }
+  
+  /** Check whether a node exists.
+  *@param nodePath is the path of the node.
+  *@return true if the node exists, false otherwise.
+  */
+  public boolean checkNodeExists(String nodePath)
+    throws ManifoldCFException, InterruptedException
+  {
+    try
+    {
+      return (zookeeper.exists(nodePath,false) != null);
+    }
+    catch (KeeperException e)
+    {
+      throw new ManifoldCFException(e.getMessage(),e);
+    }
+  }
+
+  /** Get node data.
+  *@param nodePath is the path of the node.
+  *@return the data if the node exists, otherwise null.
+  */
+  public byte[] getNodeData(String nodePath)
+    throws ManifoldCFException, InterruptedException
+  {
+    return readData(nodePath);
+  }
+  
+  /** Set node data.
+  */
+  public void setNodeData(String nodePath, byte[] data)
+    throws ManifoldCFException, InterruptedException
+  {
+    writeData(nodePath, data);
+  }
+  
+  /** Delete a node.
+  */
+  public void deleteNode(String nodePath)
+    throws ManifoldCFException, InterruptedException
+  {
+    try
+    {
+      zookeeper.delete(nodePath,-1);
+    }
+    catch (KeeperException e)
+    {
+      throw new ManifoldCFException(e.getMessage(),e);
+    }
+  }
+  
+  /** Delete all of a node's children.
+  */
+  public void deleteNodeChildren(String nodePath)
+    throws ManifoldCFException, InterruptedException
+  {
+    try
+    {
+      List<String> children = zookeeper.getChildren(nodePath,false);
+      for (String child : children)
+      {
+        zookeeper.delete(nodePath + "/" + child,-1);
+      }
+    }
+    catch (KeeperException.NoNodeException e)
+    {
+    }
+    catch (KeeperException e)
+    {
+      throw new ManifoldCFException(e.getMessage(),e);
+    }
+  }
+  
+  /** Get the relative paths of all of a node's children.  If the node does not exist,
+  * return an empty list.
+  */
+  public List<String> getChildren(String nodePath)
+    throws ManifoldCFException, InterruptedException
+  {
+    try
+    {
+      //System.out.println("Children of '"+nodePath+"':");
+      List<String> children = zookeeper.getChildren(nodePath,false);
+      List<String> rval = new ArrayList<String>();
+      for (String child : children)
+      {
+        //System.out.println(" '"+child+"'");
+        if (child.startsWith(CHILD_PREFIX))
+          rval.add(child.substring(CHILD_PREFIX.length()));
+      }
+      return rval;
+    }
+    catch (KeeperException.NoNodeException e)
+    {
+      return new ArrayList<String>();
+    }
+    catch (KeeperException e)
+    {
+      throw new ManifoldCFException(e.getMessage(),e);
+    }
+  }
+  
+  /** Create a persistent child of a node.
+  */
+  public void createChild(String nodePath, String childName)
+    throws ManifoldCFException, InterruptedException
+  {
+    try
+    {
+      //System.out.println("Creating child '"+childName+"' of nodepath '"+nodePath+"'");
+      while (true)
+      {
+        try
+        {
+          zookeeper.create(nodePath + "/" + CHILD_PREFIX + childName, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
+          break;
+        }
+        catch (KeeperException.NoNodeException e)
+        {
+          try
+          {
+            zookeeper.create(nodePath, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
+          }
+          catch (KeeperException.NodeExistsException e2)
+          {
+          }
+        }
+      }
+      System.out.println("...done");
+    }
+    catch (KeeperException e)
+    {
+      throw new ManifoldCFException(e.getMessage(),e);
+    }
+  }
+  
+  /** Delete the child of a node.
+  */
+  public void deleteChild(String nodePath, String childName)
+    throws ManifoldCFException, InterruptedException
+  {
+    try
+    {
+      //System.out.println("Deleting child '"+childName+"' of nodePath '"+nodePath+"'");
+      zookeeper.delete(nodePath + "/" + CHILD_PREFIX + childName, -1);
+      //System.out.println("...done");
+    }
+    catch (KeeperException e)
+    {
+      throw new ManifoldCFException(e.getMessage(),e);
+    }
+  }
+  
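+  // Sketch of the protocol shared by the three obtain*LockNoWait methods below: each caller
+  // creates an EPHEMERAL_SEQUENTIAL child under the lock path whose prefix encodes the kind of
+  // lock wanted (write-, nonexwrite-, read-).  It then lists the children; if any child of a
+  // conflicting kind carries a lower sequence number, the caller deletes its own node and
+  // reports failure.  Otherwise it holds the lock until releaseLock() deletes the node.
+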
+  /** Obtain a write lock, with no wait.
+  *@param lockPath is the lock node path.
+  *@return true if the lock was obtained, false otherwise.
+  */
+  public boolean obtainWriteLockNoWait(String lockPath)
+    throws ManifoldCFException, InterruptedException
+  {
+    if (lockNode != null)
+      throw new IllegalStateException("Already have a lock in place: '"+lockNode+"'; can't also write lock '"+lockPath+"'");
+
+    try
+    {
+      // Assert that we want a write lock
+      lockNode = createSequentialChild(lockPath,WRITE_PREFIX);
+      String lockSequenceNumber = lockNode.substring(lockPath.length() + 1 + WRITE_PREFIX.length());
+      // See if we got it
+      List<String> children = zookeeper.getChildren(lockPath,false);
+      for (String x : children)
+      {
+        String otherLock;
+        if (x.startsWith(WRITE_PREFIX))
+          otherLock = x.substring(WRITE_PREFIX.length());
+        else if (x.startsWith(NONEXWRITE_PREFIX))
+          otherLock = x.substring(NONEXWRITE_PREFIX.length());
+        else if (x.startsWith(READ_PREFIX))
+          otherLock = x.substring(READ_PREFIX.length());
+        else
+          continue;
+        if (otherLock.compareTo(lockSequenceNumber) < 0)
+        {
+          // We didn't get the lock.  Clean up and exit
+          zookeeper.delete(lockNode,-1);
+          lockNode = null;
+          return false;
+        }
+      }
+      // We got it!
+      return true;
+    }
+    catch (KeeperException e)
+    {
+      throw new ManifoldCFException(e.getMessage(),e);
+    }
+  }
+  
+  /** Obtain a non-ex-write lock, with no wait.
+  *@param lockPath is the lock node path.
+  *@return true if the lock was obtained, false otherwise.
+  */
+  public boolean obtainNonExWriteLockNoWait(String lockPath)
+    throws ManifoldCFException, InterruptedException
+  {
+    if (lockNode != null)
+      throw new IllegalStateException("Already have a lock in place: '"+lockNode+"'; can't also non-ex write lock '"+lockPath+"'");
+
+    try
+    {
+      // Assert that we want a non-ex write lock
+      lockNode = createSequentialChild(lockPath,NONEXWRITE_PREFIX);
+      String lockSequenceNumber = lockNode.substring(lockPath.length() + 1 + NONEXWRITE_PREFIX.length());
+      // See if we got it
+      List<String> children = zookeeper.getChildren(lockPath,false);
+      for (String x : children)
+      {
+        String otherLock;
+        if (x.startsWith(WRITE_PREFIX))
+          otherLock = x.substring(WRITE_PREFIX.length());
+        else if (x.startsWith(READ_PREFIX))
+          otherLock = x.substring(READ_PREFIX.length());
+        else
+          continue;
+        if (otherLock.compareTo(lockSequenceNumber) < 0)
+        {
+          // We didn't get the lock.  Clean up and exit
+          zookeeper.delete(lockNode,-1);
+          lockNode = null;
+          return false;
+        }
+      }
+      // We got it!
+      return true;
+    }
+    catch (KeeperException e)
+    {
+      throw new ManifoldCFException(e.getMessage(),e);
+    }
+  }
+
+  /** Obtain a read lock, with no wait.
+  *@param lockPath is the lock node path.
+  *@return true if the lock was obtained, false otherwise.
+  */
+  public boolean obtainReadLockNoWait(String lockPath)
+    throws ManifoldCFException, InterruptedException
+  {
+    if (lockNode != null)
+      throw new IllegalStateException("Already have a lock in place: '"+lockNode+"'; can't also read lock '"+lockPath+"'");
+
+    try
+    {
+      // Assert that we want a read lock
+      lockNode = createSequentialChild(lockPath,READ_PREFIX);
+      String lockSequenceNumber = lockNode.substring(lockPath.length() + 1 + READ_PREFIX.length());
+      // See if we got it
+      List<String> children = zookeeper.getChildren(lockPath,false);
+      for (String x : children)
+      {
+        String otherLock;
+        if (x.startsWith(WRITE_PREFIX))
+          otherLock = x.substring(WRITE_PREFIX.length());
+        else if (x.startsWith(NONEXWRITE_PREFIX))
+          otherLock = x.substring(NONEXWRITE_PREFIX.length());
+        else
+          continue;
+        if (otherLock.compareTo(lockSequenceNumber) < 0)
+        {
+          // We didn't get the lock.  Clean up and exit
+          zookeeper.delete(lockNode,-1);
+          lockNode = null;
+          return false;
+        }
+      }
+      // We got it!
+      return true;
+    }
+    catch (KeeperException e)
+    {
+      throw new ManifoldCFException(e.getMessage(),e);
+    }
+  }
+  
+  /** Release the (saved) lock.
+  */
+  public void releaseLock()
+    throws ManifoldCFException, InterruptedException
+  {
+    if (lockNode == null)
+      throw new IllegalStateException("Can't release lock we don't hold");
+    try
+    {
+      zookeeper.delete(lockNode,-1);
+      lockNode = null;
+    }
+    catch (KeeperException e)
+    {
+      throw new ManifoldCFException(e.getMessage(),e);
+    }
+  }
+
+  public byte[] readData(String resourcePath)
+    throws ManifoldCFException, InterruptedException
+  {
+    try
+    {
+      return zookeeper.getData(resourcePath,false,null);
+    }
+    catch (KeeperException.NoNodeException e)
+    {
+      return null;
+    }
+    catch (KeeperException e)
+    {
+      throw new ManifoldCFException(e.getMessage(),e);
+    }
+  }
+  
+  public void writeData(String resourcePath, byte[] data)
+    throws ManifoldCFException, InterruptedException
+  {
+    try
+    {
+      if (data == null)
+      {
+        try
+        {
+          zookeeper.delete(resourcePath, -1);
+        }
+        catch (KeeperException.NoNodeException e)
+        {
+        }
+      }
+      else
+      {
+        while (true)
+        {
+          try
+          {
+            zookeeper.setData(resourcePath, data, -1);
+            break;
+          }
+          catch (KeeperException.NoNodeException e)
+          {
+            try
+            {
+              zookeeper.create(resourcePath, data, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
+              break;
+            }
+            catch (KeeperException.NodeExistsException e2)
+            {
+              continue;
+            }
+          }
+        }
+      }
+    }
+    catch (KeeperException e)
+    {
+      throw new ManifoldCFException(e.getMessage(),e);
+    }
+  }
+
+  public void setGlobalFlag(String flagPath)
+    throws ManifoldCFException, InterruptedException
+  {
+    try
+    {
+      try
+      {
+        zookeeper.create(flagPath, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
+      }
+      catch (KeeperException.NodeExistsException e)
+      {
+      }
+    }
+    catch (KeeperException e)
+    {
+      throw new ManifoldCFException(e.getMessage(),e);
+    }
+  }
+
+  public void clearGlobalFlag(String flagPath)
+    throws ManifoldCFException, InterruptedException
+  {
+    try
+    {
+      try
+      {
+        zookeeper.delete(flagPath,-1);
+      }
+      catch (KeeperException.NoNodeException e)
+      {
+      }
+    }
+    catch (KeeperException e)
+    {
+      throw new ManifoldCFException(e.getMessage(),e);
+    }
+  }
+  
+  public boolean checkGlobalFlag(String flagPath)
+    throws ManifoldCFException, InterruptedException
+  {
+    try
+    {
+      Stat s = zookeeper.exists(flagPath,false);
+      return s != null;
+    }
+    catch (KeeperException e)
+    {
+      throw new ManifoldCFException(e.getMessage(),e);
+    }
+  }
+  
+  /** Close this connection. */
+  public void close()
+    throws InterruptedException
+  {
+    if (lockNode != null)
+      throw new IllegalStateException("Should not be closing handles that have open locks!  Locknode: '"+lockNode+"'");
+    zookeeper.close();
+    zookeeper = null;
+    zookeeperWatcher = null;
+  }
+  
+  public static String zooKeeperSafeName(String input)
+  {
+    // Escape "/" characters
+    StringBuilder sb = new StringBuilder();
+    for (int i = 0; i < input.length(); i++)
+    {
+      char x = input.charAt(i);
+      if (x == '/')
+        sb.append('\\').append('0');
+      else if (x == '\u007f')
+        sb.append('\\').append('1');
+      else if (x == '\\')
+        sb.append('\\').append('\\');
+      else if (x >= '\u0000' && x < '\u0020')
+        sb.append('\\').append((char)(x + '\u0040'));
+      else if (x >= '\u0080' && x < '\u00a0')
+        sb.append('\\').append((char)(x + '\u0060' - '\u0080'));
+      else
+        sb.append(x);
+    }
+    return sb.toString();
+  }
+
+  public static String zooKeeperDecodeSafeName(String input)
+  {
+    // Escape "/" characters
+    StringBuilder sb = new StringBuilder();
+    int i = 0;
+    while (i < input.length())
+    {
+      char x = input.charAt(i);
+      if (x == '\\')
+      {
+        i++;
+        if (i == input.length())
+          throw new RuntimeException("Supposedly safe zookeeper name is not properly encoded!!");
+        x = input.charAt(i);
+        if (x == '0')
+          sb.append('/');
+        else if (x == '1')
+          sb.append('\u007f');
+        else if (x == '\\')
+          sb.append('\\');
+        else if (x >= '\u0040' && x < '\u0060')
+          sb.append((char)(x - '\u0040'));
+        else if (x >= '\u0060' && x < '\u0080')
+          sb.append((char)(x - '\u0060' + '\u0080'));
+        else
+          throw new RuntimeException("Supposedly safe zookeeper name is not properly encoded!!");
+      }
+      else
+        sb.append(x);
+      i++;
+    }
+    return sb.toString();
+  }
+
+  // Protected methods
+  
+  /** Create a node and a sequential child node.  Neither node has any data.
+  */
+  protected String createSequentialChild(String mainNode, String childPrefix)
+    throws KeeperException, InterruptedException
+  {
+    // Because zookeeper is so slow, AND reports all exceptions to the log, we do the minimum.
+    while (true)
+    {
+      try
+      {
+        return zookeeper.create(mainNode + "/" + childPrefix, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL_SEQUENTIAL);
+      }
+      catch (KeeperException.NoNodeException e)
+      {
+        try
+        {
+          zookeeper.create(mainNode, new byte[0], ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
+        }
+        catch (KeeperException.NodeExistsException e2)
+        {
+        }
+      }
+    }
+  }
+
+  /** Watcher class for zookeeper, so we get notified about zookeeper events. */
+  protected static class ZooKeeperWatcher implements Watcher
+  {
+    public ZooKeeperWatcher()
+    {
+    }
+    
+    public void process(WatchedEvent event)
+    {
+    }
+
+  }
+
+}
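A small self-contained illustration, not part of this change, of the name-escaping helpers above (ZooKeeper node names may not contain '/'); the input string is made up:

  // Round-trip a name containing a character ZooKeeper cannot accept in node names.
  String raw = "service/with/slashes";
  String safe = ZooKeeperConnection.zooKeeperSafeName(raw);        // '/' becomes the escape pair "\0"
  String back = ZooKeeperConnection.zooKeeperDecodeSafeName(safe); // restores the original string
  assert raw.equals(back);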
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/lockmanager/ZooKeeperConnectionPool.java b/framework/core/src/main/java/org/apache/manifoldcf/core/lockmanager/ZooKeeperConnectionPool.java
new file mode 100644
index 0000000..84ecd38
--- /dev/null
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/lockmanager/ZooKeeperConnectionPool.java
@@ -0,0 +1,65 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.core.lockmanager;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import java.util.*;
+import java.io.*;
+
+/** Pool of ZooKeeper connections.
+* ZooKeeper connections are not trivial to set up and each one carries a cost.  Plus,
+* if we want to shut them all down on exit we need them all in one place.
+*/
+public class ZooKeeperConnectionPool
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  protected final String connectString;
+  protected final int sessionTimeout;
+  
+  protected final List<ZooKeeperConnection> openConnectionList = new ArrayList<ZooKeeperConnection>();
+  
+  public ZooKeeperConnectionPool(String connectString, int sessionTimeout)
+  {
+    this.connectString = connectString;
+    this.sessionTimeout = sessionTimeout;
+  }
+  
+  public synchronized ZooKeeperConnection grab()
+    throws ManifoldCFException, InterruptedException
+  {
+    if (openConnectionList.size() == 0)
+      openConnectionList.add(new ZooKeeperConnection(connectString, sessionTimeout));
+    return openConnectionList.remove(openConnectionList.size()-1);
+  }
+
+  public synchronized void release(ZooKeeperConnection connection)
+  {
+    openConnectionList.add(connection);
+  }
+  
+  public synchronized void closeAll()
+    throws InterruptedException
+  {
+    for (ZooKeeperConnection c : openConnectionList)
+    {
+      c.close();
+    }
+  }
+}
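A hedged sketch of the grab/release discipline this pool expects, so connections are returned for reuse and can be closed at shutdown; the method name and flag path are illustrative only:

  // Take a connection from the pool, use it, and always give it back.
  void touchFlag(ZooKeeperConnectionPool connectionPool)
    throws ManifoldCFException, InterruptedException
  {
    ZooKeeperConnection connection = connectionPool.grab();
    try
    {
      connection.setGlobalFlag("/org.apache.manifoldcf.example-flag"); // illustrative path
    }
    finally
    {
      connectionPool.release(connection);
    }
  }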
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/lockmanager/ZooKeeperLockManager.java b/framework/core/src/main/java/org/apache/manifoldcf/core/lockmanager/ZooKeeperLockManager.java
new file mode 100644
index 0000000..abeee94
--- /dev/null
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/lockmanager/ZooKeeperLockManager.java
@@ -0,0 +1,944 @@
+/* $Id: LockManager.java 988245 2010-08-23 18:39:35Z kwright $ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.core.lockmanager;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.core.system.Logging;
+import org.apache.manifoldcf.core.system.ManifoldCF;
+
+import org.apache.zookeeper.*;
+
+import java.util.*;
+import java.io.*;
+
+/** The lock manager manages locks across all threads, JVMs, and cluster members, using ZooKeeper.
+* There should be no more than ONE instance of this class per thread!!!  The factory should enforce this.
+*/
+public class ZooKeeperLockManager extends BaseLockManager implements ILockManager
+{
+  public static final String _rcsid = "@(#)$Id: LockManager.java 988245 2010-08-23 18:39:35Z kwright $";
+
+  protected final static String zookeeperConnectStringParameter = "org.apache.manifoldcf.zookeeper.connectstring";
+  protected final static String zookeeperSessionTimeoutParameter = "org.apache.manifoldcf.zookeeper.sessiontimeout";
+
+  private final static String CONFIGURATION_PATH = "/org.apache.manifoldcf.configuration";
+  private final static String RESOURCE_PATH_PREFIX = "/org.apache.manifoldcf.resources-";
+  private final static String FLAG_PATH_PREFIX = "/org.apache.manifoldcf.flags-";
+  private final static String SERVICETYPE_LOCK_PATH_PREFIX = "/org.apache.manifoldcf.servicelock-";
+  private final static String SERVICETYPE_ACTIVE_PATH_PREFIX = "/org.apache.manifoldcf.serviceactive-";
+  private final static String SERVICETYPE_REGISTER_PATH_PREFIX = "/org.apache.manifoldcf.service-";
+  /** Anonymous global variable name prefix, to be followed by the service type */
+  private final static String SERVICETYPE_ANONYMOUS_COUNTER_PREFIX = "/org.apache.manifoldcf.serviceanon-";
+  
+  /** Anonymous service name prefix, to be followed by an integer */
+  protected final static String anonymousServiceNamePrefix = "_ANON_";
+
+  // ZooKeeper connection pool
+  protected static Integer connectionPoolLock = new Integer(0);
+  protected static ZooKeeperConnectionPool pool = null;
+  protected static Integer zookeeperPoolLocker = new Integer(0);
+  protected static LockPool myZooKeeperLocks = null;
+
+  /** Constructor */
+  public ZooKeeperLockManager()
+    throws ManifoldCFException
+  {
+    synchronized (connectionPoolLock)
+    {
+      if (pool == null)
+      {
+        // Initialize the ZooKeeper connection pool
+        String connectString = ManifoldCF.getStringProperty(zookeeperConnectStringParameter,null);
+        if (connectString == null)
+          throw new ManifoldCFException("Zookeeper lock manager requires a valid "+zookeeperConnectStringParameter+" property");
+        int sessionTimeout = ManifoldCF.getIntProperty(zookeeperSessionTimeoutParameter,300000);
+        ManifoldCF.addShutdownHook(new ZooKeeperShutdown());
+        pool = new ZooKeeperConnectionPool(connectString, sessionTimeout);
+      }
+    }
+    synchronized (zookeeperPoolLocker)
+    {
+      if (myZooKeeperLocks == null)
+      {
+        myZooKeeperLocks = new LockPool(new ZooKeeperLockObjectFactory(pool));
+      }
+    }
+  }
+  
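+  // Illustrative values only: in a typical deployment the two ZooKeeper properties read above
+  // would be supplied via the process's properties file, for example:
+  //   org.apache.manifoldcf.zookeeper.connectstring = localhost:2181
+  //   org.apache.manifoldcf.zookeeper.sessiontimeout = 300000
+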
+  // The node synchronization model involves keeping track of active agents entities, so that other entities
+  // can perform any necessary cleanup if one of the agents processes goes away unexpectedly.  There is a
+  // registration primitive (which can fail if the same guid is used as is already registered and active), a
+  // shutdown primitive (which makes a process id go inactive), and various inspection primitives.
+  
+  // For the zookeeper implementation, we'll need the following:
+  // - a service-type-specific global write lock transient node
+  // - a service-type-specific permanent root node that has registered services as children
+  // - a service-type-specific transient root node that has active services as children
+  //
+  // This is not necessarily the best implementation that meets the constraints, but it is straightforward
+  // and will serve until we come up with a better one.
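+  //
+  // Resulting znode layout (derived from the path-building helpers below):
+  //   <SERVICETYPE_LOCK_PATH_PREFIX><type>                  -- root under which transient lock children are created
+  //   <SERVICETYPE_REGISTER_PATH_PREFIX><type>/child-<name> -- permanent registration entries, one child per service
+  //   <SERVICETYPE_ACTIVE_PATH_PREFIX><type>-<name>         -- ephemeral node present while the service is active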
+  
+  /** Register a service and begin service activity.
+  * This atomic operation creates a permanent registration entry for a service.
+  * If the permanent registration entry already exists, this method will not create it or
+  * treat it as an error.  This operation also enters the "active" zone for the service.  The "active" zone will remain in force until it is
+  * canceled, or until the process is interrupted.  Ideally, the corresponding endServiceActivity method will be
+  * called when the service shuts down.  Some ILockManager implementations require that this take place for
+  * proper management.
+  * If the transient registration already exists, it is treated as an error and an exception will be thrown.
+  * If registration will succeed, then this method may call an appropriate IServiceCleanup method to clean up either the
+  * current service, or all services on the cluster.
+  *@param serviceType is the type of service.
+  *@param serviceName is the name of the service to register.  If null is passed, a transient unique service name will be
+  *    created, and will be returned to the caller.
+  *@param cleanup is called to clean up either the current service, or all services of this type, if no other active service exists.
+  *    May be null.  Local service cleanup is never called if the serviceName argument is null.
+  *@return the actual service name.
+  */
+  @Override
+  public String registerServiceBeginServiceActivity(String serviceType, String serviceName,
+    IServiceCleanup cleanup)
+    throws ManifoldCFException
+  {
+    return registerServiceBeginServiceActivity(serviceType, serviceName, null, cleanup);
+  }
+    
+  /** Register a service and begin service activity.
+  * This atomic operation creates a permanent registration entry for a service.
+  * If the permanent registration entry already exists, this method will not create it or
+  * treat it as an error.  This operation also enters the "active" zone for the service.  The "active" zone will remain in force until it is
+  * canceled, or until the process is interrupted.  Ideally, the corresponding endServiceActivity method will be
+  * called when the service shuts down.  Some ILockManager implementations require that this take place for
+  * proper management.
+  * If the transient registration already exists, it is treated as an error and an exception will be thrown.
+  * If registration will succeed, then this method may call an appropriate IServiceCleanup method to clean up either the
+  * current service, or all services on the cluster.
+  *@param serviceType is the type of service.
+  *@param serviceName is the name of the service to register.  If null is passed, a transient unique service name will be
+  *    created, and will be returned to the caller.
+  *@param initialData is the initial service data for this service.
+  *@param cleanup is called to clean up either the current service, or all services of this type, if no other active service exists.
+  *    May be null.  Local service cleanup is never called if the serviceName argument is null.
+  *@return the actual service name.
+  */
+  public String registerServiceBeginServiceActivity(String serviceType, String serviceName,
+    byte[] initialData, IServiceCleanup cleanup)
+    throws ManifoldCFException
+  {
+    try
+    {
+      ZooKeeperConnection connection = pool.grab();
+      try
+      {
+        enterServiceRegistryWriteLock(connection, serviceType);
+        try
+        {
+          if (serviceName == null)
+            serviceName = constructUniqueServiceName(connection, serviceType);
+
+          String encodedServiceName = ZooKeeperConnection.zooKeeperSafeName(serviceName);
+          
+          String activePath = buildServiceTypeActivePath(serviceType, encodedServiceName);
+          if (connection.checkNodeExists(activePath))
+            throw new ManifoldCFException("Service '"+serviceName+"' of type '"+serviceType+"' is already active");
+          // First, see where we stand.
+          // We need to find out whether (a) our service is already registered; (b) how many registered services there are;
+          // (c) whether there are other active services.  But no changes will be made at this time.
+          String registrationNodePath = buildServiceTypeRegistrationPath(serviceType);
+          List<String> children = connection.getChildren(registrationNodePath);
+          boolean foundService = false;
+          boolean foundActiveService = false;
+          for (String encodedRegisteredServiceName : children)
+          {
+            if (encodedRegisteredServiceName.equals(encodedServiceName))
+              foundService = true;
+            if (connection.checkNodeExists(buildServiceTypeActivePath(serviceType, encodedRegisteredServiceName)))
+              foundActiveService = true;
+          }
+          
+          // Call the appropriate cleanup.  This will depend on what's actually registered, and what's active.
+          // If there were no services registered at all when we started, then no cleanup is needed, just cluster init.
+          // If this fails, we must revert to having our service not be registered and not be active.
+          boolean unregisterAll = false;
+          if (cleanup != null)
+          {
+            if (children.size() == 0)
+            {
+              // If we could count on locks never being cleaned up, clusterInit()
+              // would be sufficient here.  But then there's no way to recover from
+              // a lock clean.
+              cleanup.cleanUpAllServices();
+              cleanup.clusterInit();
+            }
+            else if (foundService && foundActiveService)
+              cleanup.cleanUpService(serviceName);
+            else if (!foundActiveService)
+            {
+              cleanup.cleanUpAllServices();
+              cleanup.clusterInit();
+              unregisterAll = true;
+            }
+          }
+
+          if (unregisterAll)
+          {
+            // Unregister all (since we did a global cleanup)
+            for (String encodedRegisteredServiceName : children)
+            {
+              if (!encodedRegisteredServiceName.equals(encodedServiceName))
+                connection.deleteChild(registrationNodePath, encodedRegisteredServiceName);
+            }
+          }
+
+          // Now, register (if needed)
+          if (!foundService)
+          {
+            connection.createChild(registrationNodePath, encodedServiceName);
+          }
+          
+          // Last, set the appropriate active flag
+          connection.createNode(activePath, initialData);
+          return serviceName;
+        }
+        finally
+        {
+          leaveServiceRegistryLock(connection);
+        }
+      }
+      finally
+      {
+        pool.release(connection);
+      }
+    }
+    catch (InterruptedException e)
+    {
+      throw new ManifoldCFException(e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+    }
+  }
+  
+  /** Set service data for a service.
+  *@param serviceType is the type of service.
+  *@param serviceName is the name of the service.
+  *@param serviceData is the data to update to (may be null).
+  * This updates the service's transient data (or deletes it).  If the service is not active, an exception is thrown.
+  */
+  @Override
+  public void updateServiceData(String serviceType, String serviceName, byte[] serviceData)
+    throws ManifoldCFException
+  {
+    try
+    {
+      ZooKeeperConnection connection = pool.grab();
+      try
+      {
+        enterServiceRegistryWriteLock(connection, serviceType);
+        try
+        {
+          String activePath = buildServiceTypeActivePath(serviceType, ZooKeeperConnection.zooKeeperSafeName(serviceName));
+          connection.setNodeData(activePath, (serviceData==null)?new byte[0]:serviceData);
+        }
+        finally
+        {
+          leaveServiceRegistryLock(connection);
+        }
+      }
+      finally
+      {
+        pool.release(connection);
+      }
+    }
+    catch (InterruptedException e)
+    {
+      throw new ManifoldCFException(e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+    }
+  }
+
+  /** Retrieve service data for a service.
+  *@param serviceType is the type of service.
+  *@param serviceName is the name of the service.
+  *@return the service's transient data.
+  */
+  @Override
+  public byte[] retrieveServiceData(String serviceType, String serviceName)
+    throws ManifoldCFException
+  {
+    try
+    {
+      ZooKeeperConnection connection = pool.grab();
+      try
+      {
+        enterServiceRegistryReadLock(connection, serviceType);
+        try
+        {
+          String activePath = buildServiceTypeActivePath(serviceType, ZooKeeperConnection.zooKeeperSafeName(serviceName));
+          return connection.getNodeData(activePath);
+        }
+        finally
+        {
+          leaveServiceRegistryLock(connection);
+        }
+      }
+      finally
+      {
+        pool.release(connection);
+      }
+    }
+    catch (InterruptedException e)
+    {
+      throw new ManifoldCFException(e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+    }
+  }
+
+  /** Scan service data for a service type.  Only active service data will be considered.
+  *@param serviceType is the type of service.
+  *@param dataAcceptor is the object that will be notified of each item of data for each service name found.
+  */
+  @Override
+  public void scanServiceData(String serviceType, IServiceDataAcceptor dataAcceptor)
+    throws ManifoldCFException
+  {
+    try
+    {
+      ZooKeeperConnection connection = pool.grab();
+      try
+      {
+        enterServiceRegistryReadLock(connection, serviceType);
+        try
+        {
+          String registrationNodePath = buildServiceTypeRegistrationPath(serviceType);
+          List<String> children = connection.getChildren(registrationNodePath);
+          for (String encodedRegisteredServiceName : children)
+          {
+            String activeNodePath = buildServiceTypeActivePath(serviceType, encodedRegisteredServiceName);
+            if (connection.checkNodeExists(activeNodePath))
+            {
+              byte[] serviceData = connection.getNodeData(activeNodePath);
+              if (dataAcceptor.acceptServiceData(ZooKeeperConnection.zooKeeperDecodeSafeName(encodedRegisteredServiceName), serviceData))
+                break;
+            }
+          }
+        }
+        finally
+        {
+          leaveServiceRegistryLock(connection);
+        }
+      }
+      finally
+      {
+        pool.release(connection);
+      }
+    }
+    catch (InterruptedException e)
+    {
+      throw new ManifoldCFException(e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+    }
+
+  }
+
+  /** Count all active services of a given type.
+  *@param serviceType is the service type.
+  *@return the count.
+  */
+  @Override
+  public int countActiveServices(String serviceType)
+    throws ManifoldCFException
+  {
+    try
+    {
+      ZooKeeperConnection connection = pool.grab();
+      try
+      {
+        enterServiceRegistryReadLock(connection, serviceType);
+        try
+        {
+          String registrationNodePath = buildServiceTypeRegistrationPath(serviceType);
+          List<String> children = connection.getChildren(registrationNodePath);
+          int activeServiceCount = 0;
+          for (String encodedRegisteredServiceName : children)
+          {
+            if (connection.checkNodeExists(buildServiceTypeActivePath(serviceType, encodedRegisteredServiceName)))
+              activeServiceCount++;
+          }
+          return activeServiceCount;
+        }
+        finally
+        {
+          leaveServiceRegistryLock(connection);
+        }
+      }
+      finally
+      {
+        pool.release(connection);
+      }
+    }
+    catch (InterruptedException e)
+    {
+      throw new ManifoldCFException(e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+    }
+  }
+  
+  /** Clean up any inactive services found.
+  * Calling this method will invoke cleanup of one inactive service at a time.
+  * If there are no inactive services around, then false will be returned.
+  * Note that this method will block whatever service it finds from starting up
+  * for the time the cleanup is proceeding.  At the end of the cleanup, if
+  * successful, the service will be atomically unregistered.
+  *@param serviceType is the service type.
+  *@param cleanup is the object to call to clean up an inactive service.
+  *@return true if there were no cleanup operations necessary.
+  */
+  @Override
+  public boolean cleanupInactiveService(String serviceType, IServiceCleanup cleanup)
+    throws ManifoldCFException
+  {
+    try
+    {
+      ZooKeeperConnection connection = pool.grab();
+      try
+      {
+        enterServiceRegistryWriteLock(connection, serviceType);
+        try
+        {
+          // We find ONE service that is registered but inactive, and clean up after that one.
+          // Presumably the caller will lather, rinse, and repeat.
+          String registrationNodePath = buildServiceTypeRegistrationPath(serviceType);
+          List<String> children = connection.getChildren(registrationNodePath);
+          String encodedServiceName = null;
+          for (String encodedRegisteredServiceName : children)
+          {
+            if (!connection.checkNodeExists(buildServiceTypeActivePath(serviceType, encodedRegisteredServiceName)))
+            {
+              encodedServiceName = encodedRegisteredServiceName;
+              break;
+            }
+          }
+          if (encodedServiceName == null)
+            return true;
+          
+          // Found one; its encoded name is in encodedServiceName
+          // Ideally, we should signal at this point that we're cleaning up after it, and then leave
+          // the exclusive lock, so that other activity can take place.  MHL
+          cleanup.cleanUpService(ZooKeeperConnection.zooKeeperDecodeSafeName(encodedServiceName));
+
+          // Unregister the service.
+          connection.deleteChild(registrationNodePath, encodedServiceName);
+          return false;
+        }
+        finally
+        {
+          leaveServiceRegistryLock(connection);
+        }
+
+      }
+      finally
+      {
+        pool.release(connection);
+      }
+    }
+    catch (InterruptedException e)
+    {
+      throw new ManifoldCFException(e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+    }
+  }
+
+  /** End service activity.
+  * This operation exits the "active" zone for the service.  This must take place using the same ILockManager
+  * object that was used to registerServiceBeginServiceActivity() - which implies that it is the same thread.
+  *@param serviceType is the type of service.
+  *@param serviceName is the name of the service to exit.
+  */
+  @Override
+  public void endServiceActivity(String serviceType, String serviceName)
+    throws ManifoldCFException
+  {
+    try
+    {
+      ZooKeeperConnection connection = pool.grab();
+      try
+      {
+        enterServiceRegistryWriteLock(connection, serviceType);
+        try
+        {
+          connection.deleteNode(buildServiceTypeActivePath(serviceType, ZooKeeperConnection.zooKeeperSafeName(serviceName)));
+        }
+        finally
+        {
+          leaveServiceRegistryLock(connection);
+        }
+      }
+      finally
+      {
+        pool.release(connection);
+      }
+    }
+    catch (InterruptedException e)
+    {
+      throw new ManifoldCFException(e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+    }
+  }
+    
+  /** Check whether a service is active or not.
+  * This operation returns true if the specified service is considered active at the moment.  Once a service
+  * is not active anymore, it can only return to activity by calling beginServiceActivity() once more.
+  *@param serviceType is the type of service.
+  *@param serviceName is the name of the service to check on.
+  *@return true if the service is considered active.
+  */
+  @Override
+  public boolean checkServiceActive(String serviceType, String serviceName)
+    throws ManifoldCFException
+  {
+    try
+    {
+      ZooKeeperConnection connection = pool.grab();
+      try
+      {
+        enterServiceRegistryReadLock(connection, serviceType);
+        try
+        {
+          return connection.checkNodeExists(buildServiceTypeActivePath(serviceType, ZooKeeperConnection.zooKeeperSafeName(serviceName)));
+        }
+        finally
+        {
+          leaveServiceRegistryLock(connection);
+        }
+      }
+      finally
+      {
+        pool.release(connection);
+      }
+    }
+    catch (InterruptedException e)
+    {
+      throw new ManifoldCFException(e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+    }
+  }
+
+  /** Enter service registry read lock */
+  protected void enterServiceRegistryReadLock(ZooKeeperConnection connection, String serviceType)
+    throws ManifoldCFException, InterruptedException
+  {
+    String serviceTypeLock = buildServiceTypeLockPath(serviceType);
+    while (true)
+    {
+      if (connection.obtainReadLockNoWait(serviceTypeLock))
+        return;
+      ManifoldCF.sleep(100L);
+    }
+  }
+  
+  /** Enter service registry write lock */
+  protected void enterServiceRegistryWriteLock(ZooKeeperConnection connection, String serviceType)
+    throws ManifoldCFException, InterruptedException
+  {
+    String serviceTypeLock = buildServiceTypeLockPath(serviceType);
+    while (true)
+    {
+      if (connection.obtainWriteLockNoWait(serviceTypeLock))
+        return;
+      ManifoldCF.sleep(100L);
+    }
+  }
+  
+  /** Leave service registry lock */
+  protected void leaveServiceRegistryLock(ZooKeeperConnection connection)
+    throws ManifoldCFException, InterruptedException
+  {
+    connection.releaseLock();
+  }
+  
+  /** Construct a unique service name given the service type.
+  */
+  protected String constructUniqueServiceName(ZooKeeperConnection connection, String serviceType)
+    throws ManifoldCFException, InterruptedException
+  {
+    String serviceCounterName = makeServiceCounterName(serviceType);
+    int serviceUID = readServiceCounter(connection, serviceCounterName);
+    writeServiceCounter(connection, serviceCounterName,serviceUID+1);
+    return anonymousServiceNamePrefix + serviceUID;
+  }
+  
+  /** Make the service counter name for a service type.
+  */
+  protected static String makeServiceCounterName(String serviceType)
+  {
+    return SERVICETYPE_ANONYMOUS_COUNTER_PREFIX + ZooKeeperConnection.zooKeeperSafeName(serviceType);
+  }
+  
+  /** Read service counter.
+  */
+  protected int readServiceCounter(ZooKeeperConnection connection, String serviceCounterName)
+    throws ManifoldCFException, InterruptedException
+  {
+    int rval;
+    byte[] serviceCounterData = connection.readData(serviceCounterName);
+    if (serviceCounterData == null || serviceCounterData.length != 4)
+    {
+      rval = 0;
+      //System.out.println(" Null or bad data length for service counter '"+serviceCounterName+"'");
+    }
+    else
+    {
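+      // Reassemble the counter from four little-endian bytes (least significant byte first).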
+      rval = (((int)serviceCounterData[0]) & 0xff) +
+        ((((int)serviceCounterData[1]) << 8) & 0xff00) +
+        ((((int)serviceCounterData[2]) << 16) & 0xff0000) +
+        ((((int)serviceCounterData[3]) << 24) & 0xff000000);
+      //System.out.println(" Read actual data from service counter '"+serviceCounterName+"': "+java.util.Arrays.toString(serviceCounterData));
+    }
+    //System.out.println("Read service counter '"+serviceCounterName+"'; value = "+rval);
+    return rval;
+  }
+  
+  /** Write service counter.
+  */
+  protected void writeServiceCounter(ZooKeeperConnection connection, String serviceCounterName, int counter)
+    throws ManifoldCFException, InterruptedException
+  {
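+    // Serialize the counter as four little-endian bytes, mirroring readServiceCounter().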
+    byte[] serviceCounterData = new byte[4];
+    serviceCounterData[0] = (byte)(counter & 0xff);
+    serviceCounterData[1] = (byte)((counter >> 8) & 0xff);
+    serviceCounterData[2] = (byte)((counter >> 16) & 0xff);
+    serviceCounterData[3] = (byte)((counter >> 24) & 0xff);
+    connection.writeData(serviceCounterName,serviceCounterData);
+    //System.out.println("Wrote service counter '"+serviceCounterName+"'; value = "+counter+": "+java.util.Arrays.toString(serviceCounterData));
+  }
+
+  /** Build a zk path for the lock for a specific service type.
+  */
+  protected static String buildServiceTypeLockPath(String serviceType)
+  {
+    return SERVICETYPE_LOCK_PATH_PREFIX + ZooKeeperConnection.zooKeeperSafeName(serviceType);
+  }
+  
+  /** Build a zk path for the active node for a specific service of a specific type.
+  */
+  protected static String buildServiceTypeActivePath(String serviceType, String encodedServiceName)
+  {
+    return SERVICETYPE_ACTIVE_PATH_PREFIX + ZooKeeperConnection.zooKeeperSafeName(serviceType) + "-" + encodedServiceName;
+  }
+  
+  /** Build a zk path for the registration node for a specific service type.
+  */
+  protected static String buildServiceTypeRegistrationPath(String serviceType)
+  {
+    return SERVICETYPE_REGISTER_PATH_PREFIX + ZooKeeperConnection.zooKeeperSafeName(serviceType);
+  }
+  
+  // Shared configuration
+
+  /** Get the current shared configuration.  This configuration is common to all nodes in the cluster,
+  * and thus must not be used to look up configuration data that is specific to any one node.
+  *@return the globally-shared configuration information.
+  */
+  @Override
+  public ManifoldCFConfiguration getSharedConfiguration()
+    throws ManifoldCFException
+  {
+    try
+    {
+      ZooKeeperConnection connection = pool.grab();
+      try
+      {
+        // Read as a byte array, then parse
+        byte[] configurationData = connection.readData(CONFIGURATION_PATH);
+        if (configurationData != null)
+          return new ManifoldCFConfiguration(new ByteArrayInputStream(configurationData));
+        else
+          return new ManifoldCFConfiguration();
+      }
+      finally
+      {
+        pool.release(connection);
+      }
+    }
+    catch (InterruptedException e)
+    {
+      throw new ManifoldCFException(e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+    }
+  }
+
+  /** Write shared configuration.  Caller closes the input stream.
+  */
+  public void setSharedConfiguration(InputStream configurationInputStream)
+    throws ManifoldCFException
+  {
+    try
+    {
+      // Read to a byte array
+      ByteArrayOutputStream buffer = new ByteArrayOutputStream();
+
+      byte[] data = new byte[65536];
+
+      while (true)
+      {
+        int nRead = configurationInputStream.read(data, 0, data.length);
+        if (nRead == -1)
+          break;
+        buffer.write(data, 0, nRead);
+      }
+      buffer.flush();
+
+      byte[] toWrite = buffer.toByteArray();
+      ZooKeeperConnection connection = pool.grab();
+      try
+      {
+        connection.writeData(CONFIGURATION_PATH, toWrite);
+      }
+      finally
+      {
+        pool.release(connection);
+      }
+    }
+    catch (InterruptedIOException e)
+    {
+      throw new ManifoldCFException(e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+    }
+    catch (IOException e)
+    {
+      throw new ManifoldCFException(e.getMessage(),e);
+    }
+    catch (InterruptedException e)
+    {
+      throw new ManifoldCFException(e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+    }
+  }
+  
+  /** Raise a flag.  Use this method to assert a condition, or send a global signal.  The flag will be reset when the
+  * entire system is restarted.
+  *@param flagName is the name of the flag to set.
+  */
+  @Override
+  public void setGlobalFlag(String flagName)
+    throws ManifoldCFException
+  {
+    try
+    {
+      ZooKeeperConnection connection = pool.grab();
+      try
+      {
+        connection.setGlobalFlag(FLAG_PATH_PREFIX + ZooKeeperConnection.zooKeeperSafeName(flagName));
+      }
+      finally
+      {
+        pool.release(connection);
+      }
+    }
+    catch (InterruptedException e)
+    {
+      throw new ManifoldCFException(e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+    }
+  }
+
+  /** Clear a flag.  Use this method to clear a condition, or retract a global signal.
+  *@param flagName is the name of the flag to clear.
+  */
+  @Override
+  public void clearGlobalFlag(String flagName)
+    throws ManifoldCFException
+  {
+    try
+    {
+      ZooKeeperConnection connection = pool.grab();
+      try
+      {
+        connection.clearGlobalFlag(FLAG_PATH_PREFIX + ZooKeeperConnection.zooKeeperSafeName(flagName));
+      }
+      finally
+      {
+        pool.release(connection);
+      }
+    }
+    catch (InterruptedException e)
+    {
+      throw new ManifoldCFException(e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+    }
+  }
+  
+  /** Check the condition of a specified flag.
+  *@param flagName is the name of the flag to check.
+  *@return true if the flag is set, false otherwise.
+  */
+  @Override
+  public boolean checkGlobalFlag(String flagName)
+    throws ManifoldCFException
+  {
+    try
+    {
+      ZooKeeperConnection connection = pool.grab();
+      try
+      {
+        return connection.checkGlobalFlag(FLAG_PATH_PREFIX + ZooKeeperConnection.zooKeeperSafeName(flagName));
+      }
+      finally
+      {
+        pool.release(connection);
+      }
+    }
+    catch (InterruptedException e)
+    {
+      throw new ManifoldCFException(e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+    }
+  }
+
+  /** Read data from a shared data resource.  Use this method to read any existing data, or get a null back if there is no such resource.
+  * Note well that this is not necessarily an atomic operation, and it must thus be protected by a lock.
+  *@param resourceName is the global name of the resource.
+  *@return a byte array containing the data, or null.
+  */
+  @Override
+  public byte[] readData(String resourceName)
+    throws ManifoldCFException
+  {
+    try
+    {
+      ZooKeeperConnection connection = pool.grab();
+      try
+      {
+        return connection.readData(RESOURCE_PATH_PREFIX + ZooKeeperConnection.zooKeeperSafeName(resourceName));
+      }
+      finally
+      {
+        pool.release(connection);
+      }
+    }
+    catch (InterruptedException e)
+    {
+      throw new ManifoldCFException(e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+    }
+  }
+  
+  /** Write data to a shared data resource.  Use this method to write a body of data into a shared resource.
+  * Note well that this is not necessarily an atomic operation, and it must thus be protected by a lock.
+  *@param resourceName is the global name of the resource.
+  *@param data is the byte array containing the data.  Pass null if you want to delete the resource completely.
+  */
+  @Override
+  public void writeData(String resourceName, byte[] data)
+    throws ManifoldCFException
+  {
+    try
+    {
+      ZooKeeperConnection connection = pool.grab();
+      try
+      {
+        connection.writeData(RESOURCE_PATH_PREFIX + ZooKeeperConnection.zooKeeperSafeName(resourceName), data);
+      }
+      finally
+      {
+        pool.release(connection);
+      }
+    }
+    catch (InterruptedException e)
+    {
+      throw new ManifoldCFException(e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+    }
+  }
+
+  // Main method - for loading shared configuration data into ZooKeeper
+  
+  public static void main(String[] argv)
+  {
+    if (argv.length != 1)
+    {
+      System.err.println("Usage: ZooKeeperLockManager <shared_configuration_file>");
+      System.exit(1);
+    }
+    
+    File file = new File(argv[0]);
+    
+    try
+    {
+      IThreadContext tc = ThreadContextFactory.make();
+      ManifoldCF.initializeEnvironment(tc);
+
+      try
+      {
+        FileInputStream fis = new FileInputStream(file);
+        try
+        {
+          new ZooKeeperLockManager().setSharedConfiguration(fis);
+        }
+        finally
+        {
+          fis.close();
+        }
+      }
+      finally
+      {
+        ManifoldCF.cleanUpEnvironment(tc);
+      }
+    }
+    catch (Throwable e)
+    {
+      e.printStackTrace(System.err);
+      System.exit(-1);
+    }
+  }
+  
+  // Protected methods and classes
+  
+  /** Override this method to change the nature of global locks.
+  */
+  @Override
+  protected LockPool getGlobalLockPool()
+  {
+    return myZooKeeperLocks;
+  }
+
+  /** Shutdown the connection pool.
+  */
+  protected static void shutdownPool()
+    throws ManifoldCFException
+  {
+    synchronized (connectionPoolLock)
+    {
+      if (pool != null)
+      {
+        try
+        {
+          pool.closeAll();
+          pool = null;
+        }
+        catch (InterruptedException e)
+        {
+          throw new ManifoldCFException(e.getMessage(),e,ManifoldCFException.INTERRUPTED);
+        }
+      }
+    }
+  }
+  
+  protected static class ZooKeeperShutdown implements IShutdownHook
+  {
+    public ZooKeeperShutdown()
+    {
+    }
+    
+    /** Do the requisite cleanup.
+    */
+    @Override
+    public void doCleanup(IThreadContext threadContext)
+      throws ManifoldCFException
+    {
+      shutdownPool();
+    }
+
+  }
+  
+}
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/lockmanager/ZooKeeperLockObject.java b/framework/core/src/main/java/org/apache/manifoldcf/core/lockmanager/ZooKeeperLockObject.java
new file mode 100644
index 0000000..b68c970
--- /dev/null
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/lockmanager/ZooKeeperLockObject.java
@@ -0,0 +1,155 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.core.lockmanager;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.core.system.ManifoldCF;
+import org.apache.manifoldcf.core.system.Logging;
+import java.io.*;
+
+/** One instance of this object exists for each lock on each JVM!
+* This is the ZooKeeper version of the lock.
+*/
+public class ZooKeeperLockObject extends LockObject
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  private final static String LOCK_PATH_PREFIX = "/org.apache.manifoldcf.locks-";
+
+  private final ZooKeeperConnectionPool pool;
+  private final String lockPath;
+  
+  private ZooKeeperConnection currentConnection = null;
+
+  public ZooKeeperLockObject(LockPool lockPool, Object lockKey, ZooKeeperConnectionPool pool)
+  {
+    super(lockPool,lockKey);
+    this.pool = pool;
+    this.lockPath = LOCK_PATH_PREFIX + ZooKeeperConnection.zooKeeperSafeName(lockKey.toString());
+  }
+
+  @Override
+  protected void obtainGlobalWriteLockNoWait()
+    throws ManifoldCFException, LockException, InterruptedException
+  {
+    if (currentConnection != null)
+      throw new IllegalStateException("Already have a connection before write locking: "+lockPath);
+    boolean succeeded = false;
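+    // Grab a pooled connection; it remains bound to this lock until the lock is cleared, and is returned immediately if the lock attempt fails.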
+    currentConnection = pool.grab();
+    try
+    {
+      succeeded = currentConnection.obtainWriteLockNoWait(lockPath);
+      if (!succeeded)
+        throw new LockException(LOCKEDANOTHERJVM);
+    }
+    finally
+    {
+      if (!succeeded)
+      {
+        pool.release(currentConnection);
+        currentConnection = null;
+      }
+    }
+  }
+
+  @Override
+  protected void clearGlobalWriteLock()
+    throws ManifoldCFException, LockException, InterruptedException
+  {
+    if (currentConnection == null)
+      throw new IllegalStateException("Cannot clear write lock we don't have: "+lockPath);
+    clearLock();
+  }
+  
+  @Override
+  protected void obtainGlobalNonExWriteLockNoWait()
+    throws ManifoldCFException, LockException, InterruptedException
+  {
+    if (currentConnection != null)
+      throw new IllegalStateException("Already have a connection before non-ex-write locking: "+lockPath);
+    boolean succeeded = false;
+    currentConnection = pool.grab();
+    try
+    {
+      succeeded = currentConnection.obtainNonExWriteLockNoWait(lockPath);
+      if (!succeeded)
+        throw new LockException(LOCKEDANOTHERJVM);
+    }
+    finally
+    {
+      if (!succeeded)
+      {
+        pool.release(currentConnection);
+        currentConnection = null;
+      }
+    }
+  }
+
+  @Override
+  protected void clearGlobalNonExWriteLock()
+    throws ManifoldCFException, LockException, InterruptedException
+  {
+    if (currentConnection == null)
+      throw new IllegalStateException("Cannot clear non-ex-write lock we don't have: "+lockPath);
+    clearLock();
+  }
+
+  @Override
+  protected void obtainGlobalReadLockNoWait()
+    throws ManifoldCFException, LockException, InterruptedException
+  {
+    if (currentConnection != null)
+      throw new IllegalStateException("Already have a connection before read locking: "+lockPath);
+    boolean succeeded = false;
+    currentConnection = pool.grab();
+    try
+    {
+      succeeded = currentConnection.obtainReadLockNoWait(lockPath);
+      if (!succeeded)
+        throw new LockException(LOCKEDANOTHERJVM);
+    }
+    finally
+    {
+      if (!succeeded)
+      {
+        pool.release(currentConnection);
+        currentConnection = null;
+      }
+    }
+  }
+
+  @Override
+  protected void clearGlobalReadLock()
+    throws ManifoldCFException, LockException, InterruptedException
+  {
+    if (currentConnection == null)
+      throw new IllegalStateException("Cannot clear read lock we don't have: "+lockPath);
+    clearLock();
+  }
+
+  protected void clearLock()
+    throws ManifoldCFException, InterruptedException
+  {
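+    // Release the ZooKeeper lock first, then return the held connection to the pool for reuse.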
+    currentConnection.releaseLock();
+    pool.release(currentConnection);
+    currentConnection = null;
+  }
+
+}
+
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/lockmanager/ZooKeeperLockObjectFactory.java b/framework/core/src/main/java/org/apache/manifoldcf/core/lockmanager/ZooKeeperLockObjectFactory.java
new file mode 100644
index 0000000..bddf1f3
--- /dev/null
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/lockmanager/ZooKeeperLockObjectFactory.java
@@ -0,0 +1,45 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.core.lockmanager;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.core.system.ManifoldCF;
+import org.apache.manifoldcf.core.system.Logging;
+import java.io.*;
+
+/** Base factory for zookeeper lock objects.
+*/
+public class ZooKeeperLockObjectFactory extends LockObjectFactory
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  protected final ZooKeeperConnectionPool pool;
+  
+  public ZooKeeperLockObjectFactory(ZooKeeperConnectionPool pool)
+  {
+    this.pool = pool;
+  }
+  
+  @Override
+  public LockObject newLockObject(LockPool lockPool, Object lockKey)
+  {
+    return new ZooKeeperLockObject(lockPool, lockKey, pool);
+  }
+}
+
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/system/Logging.java b/framework/core/src/main/java/org/apache/manifoldcf/core/system/Logging.java
index 37a41b0..7dcd80a 100644
--- a/framework/core/src/main/java/org/apache/manifoldcf/core/system/Logging.java
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/system/Logging.java
@@ -96,7 +96,8 @@
 
   /** Reset all loggers
   */
-  public static void setLogLevels()
+  public static void setLogLevels(IThreadContext threadContext)
+    throws ManifoldCFException
   {
     // System.out.println("Setting log levels @ " + new Date().toString());
     Iterator it = loggerTable.entrySet().iterator();
@@ -109,7 +110,7 @@
       String loggername = (String)e.getKey();
 
       // logger level
-      String level = ManifoldCF.getProperty(loggername);
+      String level = LockManagerFactory.getProperty(threadContext, loggername);
 
       Level loglevel = null;
       if (level != null && level.length() > 0)
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/system/ManifoldCF.java b/framework/core/src/main/java/org/apache/manifoldcf/core/system/ManifoldCF.java
index d1c324c..59683d8 100644
--- a/framework/core/src/main/java/org/apache/manifoldcf/core/system/ManifoldCF.java
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/system/ManifoldCF.java
@@ -28,12 +28,14 @@
   public static final String _rcsid = "@(#)$Id: ManifoldCF.java 988245 2010-08-23 18:39:35Z kwright $";
 
   // Configuration XML node names and attribute names
-  public static final String NODE_PROPERTY = "property";
-  public static final String ATTRIBUTE_NAME = "name";
-  public static final String ATTRIBUTE_VALUE = "value";
   public static final String NODE_LIBDIR = "libdir";
   public static final String ATTRIBUTE_PATH = "path";
   
+  // This is the process identifier, which must be unique and repeatable within a cluster
+  
+  /** Process ID (no more than 16 characters) */
+  protected static String processID = null;
+  
   // "Working directory"
   
   /** This is the working directory file object. */
@@ -81,11 +83,12 @@
   protected static Integer initializeFlagLock = new Integer(0);
 
   // Local member variables
+  protected static String loginUserName = null;
+  protected static String loginPassword = null;
   protected static String masterDatabaseName = null;
   protected static String masterDatabaseUsername = null;
   protected static String masterDatabasePassword = null;
   protected static ManifoldCFConfiguration localConfiguration = null;
-  protected static Map localProperties = null;
   protected static long propertyFilelastMod = -1L;
   protected static String propertyFilePath = null;
 
@@ -96,6 +99,16 @@
 
   // System property/config file property names
   
+  // Process ID property
+  /** Process ID - cannot exceed 16 characters */
+  public static final String processIDProperty = "org.apache.manifoldcf.processid";
+  
+  // Admin properties
+  /** UI login user name */
+  public static final String loginUserNameProperty = "org.apache.manifoldcf.login.name";
+  /** UI login password */
+  public static final String loginPasswordProperty = "org.apache.manifoldcf.login.password";
+  
   // Database access properties
   /** Database name property */
   public static final String masterDatabaseNameProperty = "org.apache.manifoldcf.database.name";
@@ -132,21 +145,32 @@
   /** File to look for to block access to UI during database maintenance */
   public static final String maintenanceFileSignalProperty = "org.apache.manifoldcf.database.maintenanceflag";
 
+  /** Reset environment, minting a thread context for convenience and backwards
+  * compatibility.
+  */
+  @Deprecated
+  public static void resetEnvironment()
+  {
+    resetEnvironment(ThreadContextFactory.make());
+  }
+  
   /** Reset environment.
   */
-  public static void resetEnvironment()
+  public static void resetEnvironment(IThreadContext threadContext)
   {
     synchronized (initializeFlagLock)
     {
       if (initializeLevel > 0)
       {
         // Clean up the system doing the same thing the shutdown thread would have if the process was killed
-        cleanUpEnvironment();
+        cleanUpEnvironment(threadContext);
+        processID = null;
+        loginUserName = null;
+        loginPassword = null;
         masterDatabaseName = null;
         masterDatabaseUsername = null;
         masterDatabasePassword = null;
         localConfiguration = null;
-        localProperties = null;
         propertyFilelastMod = -1L;
         propertyFilePath = null;
         alreadyClosed = false;
@@ -156,9 +180,18 @@
     }
   }
   
+  /** Initialize environment, minting a thread context for backwards compatibility.
+  */
+  @Deprecated
+  public static void initializeEnvironment()
+    throws ManifoldCFException
+  {
+    initializeEnvironment(ThreadContextFactory.make());
+  }
+  
   /** Initialize environment.
   */
-  public static void initializeEnvironment()
+  public static void initializeEnvironment(IThreadContext threadContext)
     throws ManifoldCFException
   {
     synchronized (initializeFlagLock)
@@ -189,10 +222,15 @@
           resourceLoader = new ManifoldCFResourceLoader(Thread.currentThread().getContextClassLoader());
           
           // Read configuration!
-          localConfiguration = new ManifoldCFConfiguration();
-          localProperties = new HashMap();
+          localConfiguration = new OverrideableManifoldCFConfiguration();
           checkProperties();
 
+          // Process ID is always local
+          processID = getStringProperty(processIDProperty,"");
+          if (processID.length() > 16)
+            throw new ManifoldCFException("Process ID cannot exceed 16 characters!");
+
+          // Log file is always local
           File logConfigFile = getFileProperty(logConfigFileProperty);
           if (logConfigFile == null)
           {
@@ -206,18 +244,17 @@
 
           // Set up local loggers
           Logging.initializeLoggers();
-          Logging.setLogLevels();
+          Logging.setLogLevels(threadContext);
 
-          masterDatabaseName = getProperty(masterDatabaseNameProperty);
-          if (masterDatabaseName == null)
-            masterDatabaseName = "dbname";
-          masterDatabaseUsername = getProperty(masterDatabaseUsernameProperty);
-          if (masterDatabaseUsername == null)
-            masterDatabaseUsername = "manifoldcf";
-          masterDatabasePassword = getProperty(masterDatabasePasswordProperty);
-          if (masterDatabasePassword == null)
-            masterDatabasePassword = "local_pg_passwd";
+          loginUserName = LockManagerFactory.getStringProperty(threadContext,loginUserNameProperty,"admin");
+          loginPassword = LockManagerFactory.getStringProperty(threadContext,loginPasswordProperty,"admin");
 
+          masterDatabaseName = LockManagerFactory.getStringProperty(threadContext,masterDatabaseNameProperty,"dbname");
+          masterDatabaseUsername = LockManagerFactory.getStringProperty(threadContext,masterDatabaseUsernameProperty,"manifoldcf");
+          masterDatabasePassword = LockManagerFactory.getStringProperty(threadContext,masterDatabasePasswordProperty,"local_pg_passwd");
+
+          // Register the throttler for cleanup on shutdown
+          addShutdownHook(new ThrottlerShutdown());
           // Register the file tracker for cleanup on shutdown
           tracker = new FileTrack();
           addShutdownHook(tracker);
@@ -225,8 +262,7 @@
           addShutdownHook(new DatabaseShutdown());
 
           // Open the database.  Done once per JVM.
-          IThreadContext threadcontext = ThreadContextFactory.make();
-          DBInterfaceFactory.make(threadcontext,masterDatabaseName,masterDatabaseUsername,masterDatabasePassword).openDatabase();
+          DBInterfaceFactory.make(threadContext,masterDatabaseName,masterDatabaseUsername,masterDatabasePassword).openDatabase();
         }
         catch (ManifoldCFException e)
         {
@@ -237,7 +273,40 @@
     }
 
   }
+  
+  /** For local properties (not shared!!), this class allows them to be overridden directly from the command line.
+  */
+  protected static class OverrideableManifoldCFConfiguration extends ManifoldCFConfiguration
+  {
+    public OverrideableManifoldCFConfiguration()
+    {
+      super();
+    }
+    
+    @Override
+    public String getProperty(String s)
+    {
+      String rval = System.getProperty(s);
+      if (rval == null)
+        rval = super.getProperty(s);
+      return rval;
+    }
+    
+  }
 
+  /** Get process ID */
+  public static final String getProcessID()
+  {
+    return processID;
+  }
+  
+  /** Get current properties.  Makes no attempt to reread or interpret them.
+  */
+  public static final ManifoldCFConfiguration getConfiguration()
+  {
+    return localConfiguration;
+  }
+  
   /** Reloads properties as needed.
   */
   public static final void checkProperties()
@@ -271,23 +340,13 @@
       throw new ManifoldCFException("Could not read configuration file '"+f.toString()+"'",e);
     }
     
-    // For convenience, post-process all "property" nodes so that we have a semblance of the earlier name/value pairs available, by default.
-    // e.g. <property name= value=/>
-    localProperties.clear();
+    // For convenience, post-process all "lib" nodes.
     ArrayList libDirs = new ArrayList();
     int i = 0;
     while (i < localConfiguration.getChildCount())
     {
       ConfigurationNode cn = localConfiguration.findChild(i++);
-      if (cn.getType().equals(NODE_PROPERTY))
-      {
-        String name = cn.getAttributeValue(ATTRIBUTE_NAME);
-        String value = cn.getAttributeValue(ATTRIBUTE_VALUE);
-        if (name == null)
-          throw new ManifoldCFException("Node type '"+NODE_PROPERTY+"' requires a '"+ATTRIBUTE_NAME+"' attribute");
-        localProperties.put(name,value);
-      }
-      else if (cn.getType().equals(NODE_LIBDIR))
+      if (cn.getType().equals(NODE_LIBDIR))
       {
         String path = cn.getAttributeValue(ATTRIBUTE_PATH);
         if (path == null)
@@ -317,10 +376,7 @@
   */
   public static String getProperty(String s)
   {
-    String rval = System.getProperty(s);
-    if (rval == null)
-      rval = (String)localProperties.get(s);
-    return rval;
+    return localConfiguration.getProperty(s);
   }
 
   /** Read a File property, either from the system properties, or from the local configuration file.
@@ -333,38 +389,47 @@
       return null;
     return resolvePath(value);
   }
-  
+
+  /** Read a (string) property, either from the system properties, or from the local configuration file.
+  *@param s is the property name.
+  *@param defaultValue is the default value for the property.
+  *@return the property value, as a string.
+  */
+  public static String getStringProperty(String s, String defaultValue)
+  {
+    return localConfiguration.getStringProperty(s, defaultValue);
+  }
+
   /** Read a boolean property
   */
   public static boolean getBooleanProperty(String s, boolean defaultValue)
     throws ManifoldCFException
   {
-    String value = getProperty(s);
-    if (value == null)
-      return defaultValue;
-    if (value.equals("true") || value.equals("yes"))
-      return true;
-    if (value.equals("false") || value.equals("no"))
-      return false;
-    throw new ManifoldCFException("Illegal property value for boolean property '"+s+"': '"+value+"'");
+    return localConfiguration.getBooleanProperty(s, defaultValue);
   }
   
-  /** Read an integer propert, either from the system properties, or from the local configuration file.
+  /** Read an integer property, either from the system properties, or from the local configuration file.
   */
   public static int getIntProperty(String s, int defaultValue)
     throws ManifoldCFException
   {
-    String value = getProperty(s);
-    if (value == null)
-      return defaultValue;
-    try
-    {
-      return Integer.parseInt(value);
-    }
-    catch (NumberFormatException e)
-    {
-      throw new ManifoldCFException("Illegal property value for integer property '"+s+"': '"+value+"': "+e.getMessage(),e,ManifoldCFException.SETUP_ERROR);
-    }
+    return localConfiguration.getIntProperty(s, defaultValue);
+  }
+
+  /** Read a long property, either from the system properties, or from the local configuration file.
+  */
+  public static long getLongProperty(String s, long defaultValue)
+    throws ManifoldCFException
+  {
+    return localConfiguration.getLongProperty(s, defaultValue);
+  }
+
+  /** Read a float property, either from the system properties, or from the local configuration file.
+  */
+  public static double getDoubleProperty(String s, double defaultValue)
+    throws ManifoldCFException
+  {
+    return localConfiguration.getDoubleProperty(s, defaultValue);
   }
   
   /** Attempt to make sure a path is a folder
@@ -552,6 +617,25 @@
     }
   }
 
+  /** Verify login.
+  */
+  public static boolean verifyLogin(IThreadContext threadContext, String userID, String userPassword)
+    throws ManifoldCFException
+  {
+    if (userID != null && userPassword != null)
+    {
+      /*
+      IDBInterface database = DBInterfaceFactory.make(threadContext,
+        ManifoldCF.getMasterDatabaseName(),
+        ManifoldCF.getMasterDatabaseUsername(),
+        ManifoldCF.getMasterDatabasePassword());
+      */
+      // MHL to use a database table, when we get that sophisticated
+      return userID.equals(loginUserName) && userPassword.equals(loginPassword);
+    }
+    return false;
+  }
+  
   /** Perform standard one-way encryption of a string.
   *@param input is the string to encrypt.
   *@return the encrypted string.
@@ -770,6 +854,7 @@
   */
   public static boolean checkMaintenanceUnderway()
   {
+    // File check is always local; this whole bit of logic needs to be rethought though.
     String fileToCheck = getProperty(maintenanceFileSignalProperty);
     if (fileToCheck != null && fileToCheck.length() > 0)
     {
@@ -784,6 +869,7 @@
   public static void noteConfigurationChange()
     throws ManifoldCFException
   {
+    // Always a local file.  How this should operate in a clustered world needs to be rethought.
     String configChangeSignalCommand = getProperty(configSignalCommandProperty);
     if (configChangeSignalCommand == null || configChangeSignalCommand.length() == 0)
       return;
@@ -1193,9 +1279,16 @@
     return resourceLoader.findClass(cname);
   }
   
-  /** Perform system shutdown, using the registered shutdown hooks. */
+  /** Perform system shutdown, minting thread context for backwards compatibility */
+  @Deprecated
   public static void cleanUpEnvironment()
   {
+    cleanUpEnvironment(ThreadContextFactory.make());
+  }
+  
+  /** Perform system shutdown, using the registered shutdown hooks. */
+  public static void cleanUpEnvironment(IThreadContext threadContext)
+  {
     synchronized (initializeFlagLock)
     {
       initializeLevel--;
@@ -1212,7 +1305,7 @@
             IShutdownHook hook = (IShutdownHook)cleanupHooks.get(i);
             try
             {
-              hook.doCleanup();
+              hook.doCleanup(threadContext);
             }
             catch (ManifoldCFException e)
             {
@@ -1259,7 +1352,8 @@
     }
 
     /** Delete all remaining files */
-    public void doCleanup()
+    @Override
+    public void doCleanup(IThreadContext threadContext)
       throws ManifoldCFException
     {
       synchronized (this)
@@ -1281,7 +1375,7 @@
     {
       try
       {
-        doCleanup();
+        doCleanup(ThreadContextFactory.make());
       }
       finally
       {
@@ -1291,6 +1385,38 @@
 
   }
 
+  /** Class that cleans up throttler on exit */
+  protected static class ThrottlerShutdown implements IShutdownHook
+  {
+    public ThrottlerShutdown()
+    {
+    }
+    
+    @Override
+    public void doCleanup(IThreadContext threadContext)
+      throws ManifoldCFException
+    {
+      IThrottleGroups connectionThrottler = ThrottleGroupsFactory.make(threadContext);
+      connectionThrottler.destroy();
+    }
+    
+    /** Finalizer, which is designed to catch class unloading that tomcat 5.5 does.
+    */
+    protected void finalize()
+      throws Throwable
+    {
+      try
+      {
+        doCleanup(ThreadContextFactory.make());
+      }
+      finally
+      {
+        super.finalize();
+      }
+    }
+
+  }
+  
   /** Class that cleans up database handles on exit */
   protected static class DatabaseShutdown implements IShutdownHook
   {
@@ -1298,7 +1424,8 @@
     {
     }
     
-    public void doCleanup()
+    @Override
+    public void doCleanup(IThreadContext threadContext)
       throws ManifoldCFException
     {
       // Clean up the database handles
@@ -1366,7 +1493,7 @@
     public void run()
     {
       // This thread is run at shutdown time.
-      cleanUpEnvironment();
+      cleanUpEnvironment(ThreadContextFactory.make());
     }
   }
 
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/system/ManifoldCFConfiguration.java b/framework/core/src/main/java/org/apache/manifoldcf/core/system/ManifoldCFConfiguration.java
deleted file mode 100644
index a1a862f..0000000
--- a/framework/core/src/main/java/org/apache/manifoldcf/core/system/ManifoldCFConfiguration.java
+++ /dev/null
@@ -1,56 +0,0 @@
-/* $Id: ManifoldCFConfiguration.java 988245 2010-08-23 18:39:35Z kwright $ */
-
-/**
-* Licensed to the Apache Software Foundation (ASF) under one or more
-* contributor license agreements. See the NOTICE file distributed with
-* this work for additional information regarding copyright ownership.
-* The ASF licenses this file to You under the Apache License, Version 2.0
-* (the "License"); you may not use this file except in compliance with
-* the License. You may obtain a copy of the License at
-*
-* http://www.apache.org/licenses/LICENSE-2.0
-*
-* Unless required by applicable law or agreed to in writing, software
-* distributed under the License is distributed on an "AS IS" BASIS,
-* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-* See the License for the specific language governing permissions and
-* limitations under the License.
-*/
-package org.apache.manifoldcf.core.system;
-
-import org.apache.manifoldcf.core.interfaces.*;
-import java.util.*;
-import java.io.*;
-
-/** This class represents the configuration data read from the main ManifoldCF configuration
-* XML file.
-*/
-public class ManifoldCFConfiguration extends Configuration
-{
-  public static final String _rcsid = "@(#)$Id: ManifoldCFConfiguration.java 988245 2010-08-23 18:39:35Z kwright $";
-
-  /** Constructor.
-  */
-  public ManifoldCFConfiguration()
-  {
-    super("configuration");
-  }
-
-  /** Construct from XML.
-  *@param xmlStream is the input XML stream.
-  */
-  public ManifoldCFConfiguration(InputStream xmlStream)
-    throws ManifoldCFException
-  {
-    super("configuration");
-    fromXML(xmlStream);
-  }
-
-  /** Create a new object of the appropriate class.
-  */
-  protected Configuration createNew()
-  {
-    return new ManifoldCFConfiguration();
-  }
-  
-}
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/throttler/ConnectionBin.java b/framework/core/src/main/java/org/apache/manifoldcf/core/throttler/ConnectionBin.java
new file mode 100644
index 0000000..32d64c2
--- /dev/null
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/throttler/ConnectionBin.java
@@ -0,0 +1,437 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.core.throttler;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.core.system.ManifoldCF;
+import java.util.concurrent.atomic.*;
+import java.util.*;
+
+/** Connection tracking for a bin.
+*
+* This class keeps track of information needed to figure out throttling for connections,
+* on a bin-by-bin basis.  It is *not*, however, a connection pool.  Actually establishing
+* connections, and pooling established connections, is functionality that must reside in the
+* caller.
+*
+* The 'connections' each connection bin tracks are connections outstanding that share this bin name.
+* Not all such connections are identical; some may in fact have entirely different sets of
+* bins associated with them, but they all have the specific bin in common.  Since each bin has its
+* own unique limit, this effectively means that in order to get a connection, you need to find an
+* available slot in ALL of its constituent connection bins.  If the connections are pooled, it makes
+* the most sense to divide the pool up by characteristics such that identical connections are all
+* handled together - and it is reasonable to presume that an identical connection has identical
+* connection bins.
+*
+* NOTE WELL: This is entirely local in operation
+*/
+public class ConnectionBin
+{
+  /** True if this bin is alive still */
+  protected boolean isAlive = true;
+  /** This is the bin name which this connection pool belongs to */
+  protected final String binName;
+  /** Service type name */
+  protected final String serviceTypeName;
+  /** The (anonymous) service name */
+  protected final String serviceName;
+  /** The target calculation lock name */
+  protected final String targetCalcLockName;
+  
+  /** This is the maximum number of active connections allowed for this bin */
+  protected int maxActiveConnections = 0;
+  
+  /** This is the local maximum number of active connections allowed for this bin */
+  protected int localMax = 0;
+  /** This is the number of connections in this bin that have been reserved - that is, they
+  * are promised to various callers, but those callers have not yet committed to obtaining them. */
+  protected int reservedConnections = 0;
+  /** This is the number of connections in this bin that are connected; immaterial whether they are
+  * in use or in a pool somewhere. */
+  protected int inUseConnections = 0;
+
+  /** The service type prefix for connection bins */
+  protected final static String serviceTypePrefix = "_CONNECTIONBIN_";
+
+  /** The target calculation lock prefix */
+  protected final static String targetCalcLockPrefix = "_CONNECTIONBINTARGET_";
+  
+  /** Random number */
+  protected final static Random randomNumberGenerator = new Random();
+
+  /** Constructor. */
+  public ConnectionBin(IThreadContext threadContext, String throttlingGroupName, String binName)
+    throws ManifoldCFException
+  {
+    this.binName = binName;
+    this.serviceTypeName = buildServiceTypeName(throttlingGroupName, binName);
+    this.targetCalcLockName = buildTargetCalcLockName(throttlingGroupName, binName);
+    // Now, register and activate service anonymously, and record the service name we get.
+    ILockManager lockManager = LockManagerFactory.make(threadContext);
+    this.serviceName = lockManager.registerServiceBeginServiceActivity(serviceTypeName, null, null);
+  }
+
+  protected static String buildServiceTypeName(String throttlingGroupName, String binName)
+  {
+    return serviceTypePrefix + throttlingGroupName + "_" + binName;
+  }
+
+  protected static String buildTargetCalcLockName(String throttlingGroupName, String binName)
+  {
+    return targetCalcLockPrefix + throttlingGroupName + "_" + binName;
+  }
+  
+  /** Get the bin name. */
+  public String getBinName()
+  {
+    return binName;
+  }
+
+  /** Update the maximum number of active connections.
+  */
+  public synchronized void updateMaxActiveConnections(int maxActiveConnections)
+  {
+    // Update the number; the poller will wake up any waiting threads.
+    this.maxActiveConnections = maxActiveConnections;
+  }
+
+  /** Wait for a connection to become available, in the context of an existing connection pool.
+  *@param poolCount is the number of connections in the pool times the number of bins per connection.
+  * This parameter is only ever changed in this class!!
+  *@return a recommendation as to how to proceed, using the IConnectionThrottler values.  If the
+  * recommendation is to create a connection, a slot will be reserved for that purpose.  A
+  * subsequent call to noteConnectionCreation() will be needed to confirm the reservation, or undoReservation() to
+  * release the reservation.
+  */
+  public synchronized int waitConnectionAvailable(AtomicInteger poolCount)
+    throws InterruptedException
+  {
+    // Reserved connections keep a slot available which can't be used by anyone else.
+    // Connection bins are always sorted so that deadlocks can't occur.
+    // Once all slots are reserved, the caller will go ahead and create the necessary connection
+    // and convert the reservation to a new connection.
+    
+    while (true)
+    {
+      if (!isAlive)
+        return IConnectionThrottler.CONNECTION_FROM_NOWHERE;
+      int currentPoolCount = poolCount.get();
+      if (currentPoolCount > 0)
+      {
+        // Recommendation is to pull the connection from the pool.
+        poolCount.set(currentPoolCount - 1);
+        return IConnectionThrottler.CONNECTION_FROM_POOL;
+      }
+      if (inUseConnections + reservedConnections < localMax)
+      {
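+        // Room under the local quota: reserve a slot; the caller confirms it via noteConnectionCreation() or releases it via undoReservation().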
+        reservedConnections++;
+        return IConnectionThrottler.CONNECTION_FROM_CREATION;
+      }
+      // Wait for a connection to free up.  Note that it is up to the caller to free stuff up.
+      wait();
+    }
+  }
+  
+  /** Undo what we had decided to do before.
+  *@param recommendation is the decision returned by waitForConnection() above.
+  */
+  public synchronized void undoReservation(int recommendation, AtomicInteger poolCount)
+  {
+    if (recommendation == IConnectionThrottler.CONNECTION_FROM_CREATION)
+    {
+      if (reservedConnections == 0)
+        throw new IllegalStateException("Can't clear a reservation we don't have");
+      reservedConnections--;
+      notifyAll();
+    }
+    else if (recommendation == IConnectionThrottler.CONNECTION_FROM_POOL)
+    {
+      poolCount.set(poolCount.get() + 1);
+      notifyAll();
+    }
+  }
+  
+  /** Note the creation of an active connection that belongs to this bin.  The connection MUST
+  * have been reserved prior to the connection being created.
+  */
+  public synchronized void noteConnectionCreation()
+  {
+    if (reservedConnections == 0)
+      throw new IllegalStateException("Creating a connection when no connection slot reserved!");
+    reservedConnections--;
+    inUseConnections++;
+    // No notification needed because the total number of reserved+active connections did not change.
+  }
+
+  /** Figure out whether we are currently over target or not for this bin.
+  */
+  public synchronized boolean shouldReturnedConnectionBeDestroyed()
+  {
+    // We don't count reserved connections here because those are not yet committed
+    return inUseConnections > localMax;
+  }
+  
+  public static final int CONNECTION_DESTROY = 0;
+  public static final int CONNECTION_POOLEMPTY = 1;
+  public static final int CONNECTION_WITHINBOUNDS = 2;
+  
+  /** Figure out whether we are currently over target or not for this bin, and whether a
+  * connection should be pulled from the pool and destroyed.
+  * Note that this is tricky in conjunction with other bins, because those other bins
+  * may conclude that we can't destroy a connection.  If so, we just return the stolen
+  * connection back to the pool.
+  *@return CONNECTION_DESTROY, CONNECTION_POOLEMPTY, or CONNECTION_WITHINBOUNDS.
+  */
+  public synchronized int shouldPooledConnectionBeDestroyed(AtomicInteger poolCount)
+  {
+    int currentPoolCount = poolCount.get();
+    if (currentPoolCount > 0)
+    {
+      // Consider it removed from the pool for the purposes of consideration.  If we change our minds, we'll
+      // return it, and no harm done.
+      poolCount.set(currentPoolCount-1);
+      // We don't count reserved connections here because those are not yet committed.
+      if (inUseConnections > localMax)
+      {
+        return CONNECTION_DESTROY;
+      }
+      return CONNECTION_WITHINBOUNDS;
+    }
+    return CONNECTION_POOLEMPTY;
+  }
+
+  /** Check only if there's a pooled connection, and make moves to take it from the pool.
+  */
+  public synchronized boolean hasPooledConnection(AtomicInteger poolCount)
+  {
+    int currentPoolCount = poolCount.get();
+    if (currentPoolCount > 0)
+    {
+      poolCount.set(currentPoolCount-1);
+      return true;
+    }
+    return false;
+  }
+  
+  /** Undo the decision to destroy a pooled connection.
+  */
+  public synchronized void undoPooledConnectionDecision(AtomicInteger poolCount)
+  {
+    poolCount.set(poolCount.get() + 1);
+    notifyAll();
+  }
+  
+  /** Note a connection returned to the pool.
+  */
+  public synchronized void noteConnectionReturnedToPool(AtomicInteger poolCount)
+  {
+    poolCount.set(poolCount.get() + 1);
+    // Wake up threads possibly waiting on a pool return.
+    notifyAll();
+  }
+  
+  /** Note the destruction of an active connection that belongs to this bin.
+  */
+  public synchronized void noteConnectionDestroyed()
+  {
+    inUseConnections--;
+    notifyAll();
+  }
+
+  /** Poll this bin */
+  public synchronized void poll(IThreadContext threadContext)
+    throws ManifoldCFException
+  {
+    // The meat of the cross-cluster apportionment algorithm goes here!
+    // Two global numbers each service posts: "in-use" and "target".  At no time does a service *ever* post either a "target"
+    // that, together with all other active service targets, is in excess of the max.  Also, at no time does a service post
+    // a target that, when added to the other "in-use" values, exceeds the max.  If the "in-use" values everywhere else
+    // already equal or exceed the max, then the target will be zero.
+    // The target quota is calculated as follows:
+    // (1) Target is summed, excluding ours.  This is GlobalTarget.
+    // (2) In-use is summed, excluding ours.  This is GlobalInUse.
+    // (3) Our MaximumTarget is computed, which is Maximum - GlobalTarget or Maximum - GlobalInUse, whichever is
+    //     smaller, but never less than zero.
+    // (4) Our FairTarget is computed.  The FairTarget divides the Maximum by the number of services, and adds
+    //     1 randomly based on the remainder.
+    // (5) We compute OptimalTarget as follows: We start with the current local target.  If the current local target
+    //    exceeds the current local in-use count, we adjust OptimalTarget downward by one.  Otherwise we increase it
+    //    by an increment proportional to the maximum (at least one), for fast ramp-up.
+    // (6) Finally, we compute Target by taking the minimum of MaximumTarget, FairTarget, and OptimalTarget.
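+    // Example (hypothetical numbers): with a maximum of 10 and 3 services, if the other two services each post target=3 and in-use=2,
+    // then GlobalTarget=6 and GlobalInUse=4, so MaximumTarget=min(10-6,10-4)=4 and FairTarget=10/3=3 (one service randomly gets 4);
+    // the final target is the smallest of MaximumTarget, FairTarget, and OptimalTarget.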
+
+    ILockManager lockManager = LockManagerFactory.make(threadContext);
+    lockManager.enterWriteLock(targetCalcLockName);
+    try
+    {
+      // Compute MaximumTarget
+      SumClass sumClass = new SumClass(serviceName);
+      lockManager.scanServiceData(serviceTypeName, sumClass);
+      //System.out.println("numServices = "+sumClass.getNumServices()+"; globalTarget = "+sumClass.getGlobalTarget()+"; globalInUse = "+sumClass.getGlobalInUse());
+        
+      int numServices = sumClass.getNumServices();
+      if (numServices == 0)
+        return;
+      int globalTarget = sumClass.getGlobalTarget();
+      int globalInUse = sumClass.getGlobalInUse();
+      int maximumTarget = maxActiveConnections - globalTarget;
+      if (maximumTarget > maxActiveConnections - globalInUse)
+        maximumTarget = maxActiveConnections - globalInUse;
+      if (maximumTarget < 0)
+        maximumTarget = 0;
+        
+      // Compute FairTarget
+      int fairTarget = maxActiveConnections / numServices;
+      int remainder = maxActiveConnections % numServices;
+      // Randomly choose whether we get an addition to the FairTarget
+      if (randomNumberGenerator.nextInt(numServices) < remainder)
+        fairTarget++;
+        
+      // Compute OptimalTarget
+      int localInUse = inUseConnections;
+      int optimalTarget = localMax;
+      if (localMax > localInUse)
+        optimalTarget--;
+      else
+      {
+        // We want a fast ramp up, so make this proportional to maxActiveConnections
+        int increment = maxActiveConnections >> 2;
+        if (increment == 0)
+          increment = 1;
+        optimalTarget += increment;
+      }
+        
+      //System.out.println("maxTarget = "+maximumTarget+"; fairTarget = "+fairTarget+"; optimalTarget = "+optimalTarget);
+
+      // Now compute actual target
+      int target = maximumTarget;
+      if (target > fairTarget)
+        target = fairTarget;
+      if (target > optimalTarget)
+        target = optimalTarget;
+        
+      // Write these values to the service data variables.
+      // NOTE that there is a race condition here; the target value depends on all the calculations above being accurate, and not changing out from under us.
+      // So, that's why we have a write lock around the pool calculations.
+        
+      lockManager.updateServiceData(serviceTypeName, serviceName, pack(target, localInUse));
+        
+      // Now, update our localMax, if it needs it.
+      if (target == localMax)
+        return;
+      localMax = target;
+      notifyAll();
+    }
+    finally
+    {
+      lockManager.leaveWriteLock(targetCalcLockName);
+    }
+  }
+
+  /** Shut down the bin, and release everything that is waiting on it.
+  */
+  public synchronized void shutDown(IThreadContext threadContext)
+    throws ManifoldCFException
+  {
+    isAlive = false;
+    notifyAll();
+    ILockManager lockManager = LockManagerFactory.make(threadContext);
+    lockManager.endServiceActivity(serviceTypeName, serviceName);
+  }
+  
+  // Protected classes and methods
+  
+  protected static class SumClass implements IServiceDataAcceptor
+  {
+    protected final String serviceName;
+    protected int numServices = 0;
+    protected int globalTargetTally = 0;
+    protected int globalInUseTally = 0;
+    
+    public SumClass(String serviceName)
+    {
+      this.serviceName = serviceName;
+    }
+    
+    @Override
+    public boolean acceptServiceData(String serviceName, byte[] serviceData)
+      throws ManifoldCFException
+    {
+      numServices++;
+
+      if (!serviceName.equals(this.serviceName))
+      {
+        globalTargetTally += unpackTarget(serviceData);
+        globalInUseTally += unpackInUse(serviceData);
+      }
+      return false;
+    }
+
+    public int getNumServices()
+    {
+      return numServices;
+    }
+    
+    public int getGlobalTarget()
+    {
+      return globalTargetTally;
+    }
+    
+    public int getGlobalInUse()
+    {
+      return globalInUseTally;
+    }
+    
+  }
+  
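+  // Service data layout used by pack()/unpack* below: 8 bytes, little-endian; bytes 0-3 hold the target, bytes 4-7 hold the in-use count.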
+  protected static int unpackTarget(byte[] data)
+  {
+    if (data == null || data.length != 8)
+      return 0;
+    return (((int)data[0]) & 0xff) +
+      ((((int)data[1]) << 8) & 0xff00) +
+      ((((int)data[2]) << 16) & 0xff0000) +
+      ((((int)data[3]) << 24) & 0xff000000);
+  }
+
+  protected static int unpackInUse(byte[] data)
+  {
+    if (data == null || data.length != 8)
+      return 0;
+    return (((int)data[4]) & 0xff) +
+      ((((int)data[5]) << 8) & 0xff00) +
+      ((((int)data[6]) << 16) & 0xff0000) +
+      ((((int)data[7]) << 24) & 0xff000000);
+  }
+
+  protected static byte[] pack(int target, int inUse)
+  {
+    byte[] rval = new byte[8];
+    rval[0] = (byte)(target & 0xff);
+    rval[1] = (byte)((target >> 8) & 0xff);
+    rval[2] = (byte)((target >> 16) & 0xff);
+    rval[3] = (byte)((target >> 24) & 0xff);
+    rval[4] = (byte)(inUse & 0xff);
+    rval[5] = (byte)((inUse >> 8) & 0xff);
+    rval[6] = (byte)((inUse >> 16) & 0xff);
+    rval[7] = (byte)((inUse >> 24) & 0xff);
+    return rval;
+  }
+
+}
+
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/throttler/FetchBin.java b/framework/core/src/main/java/org/apache/manifoldcf/core/throttler/FetchBin.java
new file mode 100644
index 0000000..5e48dbf
--- /dev/null
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/throttler/FetchBin.java
@@ -0,0 +1,369 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.core.throttler;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.core.system.ManifoldCF;
+
+/** Connection tracking for a bin.
+*
+* This class keeps track of information needed to figure out fetch rate throttling for connections,
+* on a bin-by-bin basis. 
+*
+* NOTE WELL: This is entirely local in operation
+*/
+public class FetchBin
+{
+  /** This is set to true until the bin is shut down. */
+  protected boolean isAlive = true;
+  /** This is the bin name which this connection pool belongs to */
+  protected final String binName;
+  /** Service type name */
+  protected final String serviceTypeName;
+  /** The (anonymous) service name */
+  protected final String serviceName;
+  /** The target calculation lock name */
+  protected final String targetCalcLockName;
+
+  /** This is the minimum time between fetches for this bin, in ms. */
+  protected long minTimeBetweenFetches = Long.MAX_VALUE;
+
+  /** The local minimum time between fetches */
+  protected long localMinimum = Long.MAX_VALUE;
+
+  /** This is the last time a fetch was done on this bin */
+  protected long lastFetchTime = 0L;
+  /** Is the next fetch reserved? */
+  protected boolean reserveNextFetch = false;
+
+  /** The service type prefix for fetch bins */
+  protected final static String serviceTypePrefix = "_FETCHBIN_";
+
+  /** The target calculation lock prefix */
+  protected final static String targetCalcLockPrefix = "_FETCHBINTARGET_";
+
+  /** Constructor. */
+  public FetchBin(IThreadContext threadContext, String throttlingGroupName, String binName)
+    throws ManifoldCFException
+  {
+    this.binName = binName;
+    this.serviceTypeName = buildServiceTypeName(throttlingGroupName, binName);
+    this.targetCalcLockName = buildTargetCalcLockName(throttlingGroupName, binName);
+    // Now, register and activate service anonymously, and record the service name we get.
+    ILockManager lockManager = LockManagerFactory.make(threadContext);
+    this.serviceName = lockManager.registerServiceBeginServiceActivity(serviceTypeName, null, null);
+  }
+
+  protected static String buildServiceTypeName(String throttlingGroupName, String binName)
+  {
+    return serviceTypePrefix + throttlingGroupName + "_" + binName;
+  }
+
+  protected static String buildTargetCalcLockName(String throttlingGroupName, String binName)
+  {
+    return targetCalcLockPrefix + throttlingGroupName + "_" + binName;
+  }
+
+  /** Get the bin name. */
+  public String getBinName()
+  {
+    return binName;
+  }
+
+  /** Update the minimum time between fetches for this bin.
+  */
+  public synchronized void updateMinTimeBetweenFetches(long minTimeBetweenFetches)
+  {
+    // Update the value; the next poll() cycle recomputes the local minimum and wakes any waiting threads.
+    this.minTimeBetweenFetches = minTimeBetweenFetches;
+  }
+
+  /** Reserve a request to fetch a document from this bin.  The actual fetch is not yet committed
+  * with this call, but if it succeeds for all bins associated with the document, then the caller
+  * has permission to do the fetch, and can update the last fetch time.
+  *@return false if the fetch bin is being shut down.
+  */
+  public synchronized boolean reserveFetchRequest()
+    throws InterruptedException
+  {
+    // First wait for the ability to even get the next fetch from this bin
+    while (true)
+    {
+      if (!isAlive)
+        return false;
+      if (!reserveNextFetch)
+      {
+        reserveNextFetch = true;
+        return true;
+      }
+      wait();
+    }
+  }
+  
+  /** Clear reserved request.
+  */
+  public synchronized void clearReservation()
+  {
+    if (!reserveNextFetch)
+      throw new IllegalStateException("Can't clear a fetch reservation we don't have");
+    reserveNextFetch = false;
+    notifyAll();
+  }
+  
+  /** Wait the necessary time to do the fetch.  Presumes we've reserved the next fetch
+  * rights already, via reserveFetchRequest().
+  *@return false if the wait did not complete because the bin was shut down.
+  */
+  public synchronized boolean waitNextFetch()
+    throws InterruptedException
+  {
+    if (!reserveNextFetch)
+      throw new IllegalStateException("No fetch request reserved!");
+    
+    while (true)
+    {
+      if (!isAlive)
+        // Leave it to the caller to undo reservations
+        return false;
+      if (localMinimum == Long.MAX_VALUE)
+      {
+        // wait forever - but eventually someone will set a smaller interval and wake us up.
+        wait();
+      }
+      else
+      {
+        long currentTime = System.currentTimeMillis();
+        // Compute how long we have to wait, based on the current time and the time of the last fetch.
+        long waitAmt = lastFetchTime + localMinimum - currentTime;
+        if (waitAmt <= 0L)
+        {
+          // Note actual time we start the fetch.
+          if (currentTime > lastFetchTime)
+            lastFetchTime = currentTime;
+          reserveNextFetch = false;
+          notifyAll();
+          return true;
+        }
+        wait(waitAmt);
+      }
+    }
+  }
+  
+  /** Poll this bin */
+  public synchronized void poll(IThreadContext threadContext)
+    throws ManifoldCFException
+  {
+    ILockManager lockManager = LockManagerFactory.make(threadContext);
+    lockManager.enterWriteLock(targetCalcLockName);
+    try
+    {
+      // This is where the cross-cluster logic happens.
+      // Each service records the following information:
+      // -- the target rate, in fetches per millisecond
+      // -- the earliest possible time for the service's next fetch, in ms from start of epoch
+      // Target rates are apportioned in fetches-per-ms space, as follows:
+      // (1) Target rate is summed cross-cluster, excluding our local service.  This is GlobalTarget.
+      // (2) MaximumTarget is computed, which is Maximum-GlobalTarget.
+      // (3) FairTarget is computed, which is Maximum/numServices.
+      // (4) Finally, we compute Target rate by taking the minimum of MaximumTarget, FairTarget.
+      // The earliest time for the next fetch is computed as follows:
+      // (1) Find the LATEST most recent fetch time across the services, including an updated time for
+      //   the local service.
+      // (2) Compute the next possible fetch time, using the Target rate and that fetch time.
+      // (3) The new targeted fetch time will be set to that value.
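+      //
+      // Purely illustrative example (hypothetical numbers): suppose the configured minimum time
+      // between fetches is 1000 ms (a cluster-wide maximum of 0.001 fetches/ms), there are 2
+      // services, and the other service currently advertises a target of 0.0004 fetches/ms.
+      // Then MaximumTarget = 0.001 - 0.0004 = 0.0006, FairTarget = 0.001 / 2 = 0.0005, and the
+      // local target rate is min(0.0006, 0.0005) = 0.0005 fetches/ms, i.e. one fetch every 2000 ms.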
+
+      SumClass sumClass = new SumClass(serviceName);
+      lockManager.scanServiceData(serviceTypeName, sumClass);
+
+      int numServices = sumClass.getNumServices();
+      if (numServices == 0)
+        return;
+      double globalTarget = sumClass.getGlobalTarget();
+      long earliestTargetTime = sumClass.getEarliestTime();
+      long currentTime = System.currentTimeMillis();
+      
+      if (lastFetchTime == 0L)
+        earliestTargetTime = currentTime;
+      else if (earliestTargetTime > lastFetchTime)
+        earliestTargetTime = lastFetchTime;
+      
+      // Now, compute the target rate
+      double globalMaxFetchesPerMillisecond;
+      double maximumTarget;
+      double fairTarget;
+      if (minTimeBetweenFetches == 0L)
+      {
+        //System.out.println(binName+":Global minimum time between fetches = 0");
+        globalMaxFetchesPerMillisecond = Double.MAX_VALUE;
+        maximumTarget = globalMaxFetchesPerMillisecond;
+        fairTarget = globalMaxFetchesPerMillisecond;
+      }
+      else
+      {
+        globalMaxFetchesPerMillisecond = 1.0 / minTimeBetweenFetches;
+        //System.out.println(binName+":Global max bytes per millisecond = "+globalMaxBytesPerMillisecond);
+        maximumTarget = globalMaxFetchesPerMillisecond - globalTarget;
+        if (maximumTarget < 0.0)
+          maximumTarget = 0.0;
+
+        // Compute FairTarget
+        fairTarget = globalMaxFetchesPerMillisecond / numServices;
+      }
+
+      // Now compute actual target
+      double inverseTarget = maximumTarget;
+      if (inverseTarget > fairTarget)
+        inverseTarget = fairTarget;
+
+      long target;
+      if (inverseTarget == 0.0)
+        target = Long.MAX_VALUE;
+      else
+        target = (long)(1.0/inverseTarget +0.5);
+      
+      long nextFetchTime = earliestTargetTime + target;
+      
+      lockManager.updateServiceData(serviceTypeName, serviceName, pack(inverseTarget, nextFetchTime));
+
+      // Update local parameters: the rate, and the next time.
+      // But in order to update the next time, we have to update the last time.
+      if (target == localMinimum && earliestTargetTime == lastFetchTime)
+        return;
+      //System.out.println(binName+":Setting localMinimum="+target+"; last fetch time="+earliestTargetTime);
+      localMinimum = target;
+      lastFetchTime = earliestTargetTime;
+      notifyAll();
+    }
+    finally
+    {
+      lockManager.leaveWriteLock(targetCalcLockName);
+    }
+
+  }
+
+  /** Shut the bin down, and wake up all threads waiting on it.
+  */
+  public synchronized void shutDown(IThreadContext threadContext)
+    throws ManifoldCFException
+  {
+    isAlive = false;
+    notifyAll();
+    ILockManager lockManager = LockManagerFactory.make(threadContext);
+    lockManager.endServiceActivity(serviceTypeName, serviceName);
+  }
+  
+  // Protected classes and methods
+  
+  protected static class SumClass implements IServiceDataAcceptor
+  {
+    protected final String serviceName;
+    protected int numServices = 0;
+    protected double globalTargetTally = 0;
+    protected long earliestTime = Long.MAX_VALUE;
+    
+    public SumClass(String serviceName)
+    {
+      this.serviceName = serviceName;
+    }
+    
+    @Override
+    public boolean acceptServiceData(String serviceName, byte[] serviceData)
+      throws ManifoldCFException
+    {
+      numServices++;
+
+      if (!serviceName.equals(this.serviceName))
+      {
+        globalTargetTally += unpackTarget(serviceData);
+        long checkTime = unpackEarliestTime(serviceData);
+        if (checkTime < earliestTime)
+          earliestTime = checkTime;
+      }
+      return false;
+    }
+
+    public int getNumServices()
+    {
+      return numServices;
+    }
+    
+    public double getGlobalTarget()
+    {
+      return globalTargetTally;
+    }
+    
+    public long getEarliestTime()
+    {
+      return earliestTime;
+    }
+  }
+
+  protected static double unpackTarget(byte[] data)
+  {
+    if (data == null || data.length != 16)
+      return 0.0;
+    return Double.longBitsToDouble((((long)data[0]) & 0xffL) +
+      ((((long)data[1]) << 8) & 0xff00L) +
+      ((((long)data[2]) << 16) & 0xff0000L) +
+      ((((long)data[3]) << 24) & 0xff000000L) +
+      ((((long)data[4]) << 32) & 0xff00000000L) +
+      ((((long)data[5]) << 40) & 0xff0000000000L) +
+      ((((long)data[6]) << 48) & 0xff000000000000L) +
+      ((((long)data[7]) << 56) & 0xff00000000000000L));
+  }
+
+  protected static long unpackEarliestTime(byte[] data)
+  {
+    if (data == null || data.length != 16)
+      return Long.MAX_VALUE;
+    return (((long)data[8]) & 0xffL) +
+      ((((long)data[9]) << 8) & 0xff00L) +
+      ((((long)data[10]) << 16) & 0xff0000L) +
+      ((((long)data[11]) << 24) & 0xff000000L) +
+      ((((long)data[12]) << 32) & 0xff00000000L) +
+      ((((long)data[13]) << 40) & 0xff0000000000L) +
+      ((((long)data[14]) << 48) & 0xff000000000000L) +
+      ((((long)data[15]) << 56) & 0xff00000000000000L);
+  }
+
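+  /** Pack a target rate and an earliest-allowed fetch time into a 16-byte service data record.
+  * Layout (little-endian): bytes 0-7 hold the IEEE-754 long bits of the target rate in fetches
+  * per millisecond; bytes 8-15 hold the earliest fetch time in milliseconds since epoch.
+  * unpackTarget() and unpackEarliestTime() read the two halves back out.
+  */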
+  protected static byte[] pack(double targetDouble, long earliestTime)
+  {
+    long target = Double.doubleToLongBits(targetDouble);
+    byte[] rval = new byte[16];
+    rval[0] = (byte)(target & 0xffL);
+    rval[1] = (byte)((target >> 8) & 0xffL);
+    rval[2] = (byte)((target >> 16) & 0xffL);
+    rval[3] = (byte)((target >> 24) & 0xffL);
+    rval[4] = (byte)((target >> 32) & 0xffL);
+    rval[5] = (byte)((target >> 40) & 0xffL);
+    rval[6] = (byte)((target >> 48) & 0xffL);
+    rval[7] = (byte)((target >> 56) & 0xffL);
+    rval[8] = (byte)(earliestTime & 0xffL);
+    rval[9] = (byte)((earliestTime >> 8) & 0xffL);
+    rval[10] = (byte)((earliestTime >> 16) & 0xffL);
+    rval[11] = (byte)((earliestTime >> 24) & 0xffL);
+    rval[12] = (byte)((earliestTime >> 32) & 0xffL);
+    rval[13] = (byte)((earliestTime >> 40) & 0xffL);
+    rval[14] = (byte)((earliestTime >> 48) & 0xffL);
+    rval[15] = (byte)((earliestTime >> 56) & 0xffL);
+    return rval;
+  }
+
+}
+
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/throttler/ThrottleBin.java b/framework/core/src/main/java/org/apache/manifoldcf/core/throttler/ThrottleBin.java
new file mode 100644
index 0000000..dac1d39
--- /dev/null
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/throttler/ThrottleBin.java
@@ -0,0 +1,450 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.core.throttler;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.core.system.ManifoldCF;
+import java.util.*;
+
+/** Throttles for a bin.
+* An instance of this class keeps track of the information needed to bandwidth throttle access
+* to a URL belonging to a specific bin.
+*
+* In order to calculate
+* the effective "burst" fetches per second and bytes per second, we need to have some idea what the window is.
+* For example, a long hiatus from fetching could cause overuse of the server when fetching resumes, if the
+* window length is too long.
+*
+* One solution to this problem would be to keep a list of the individual fetches as records.  Then, we could
+* "expire" a fetch by discarding the old record.  However, this is quite memory consumptive for all but the
+* smallest intervals.
+*
+* Another, better, solution is to hook into the start and end of individual fetches.  These will, presumably, occur
+* at the fastest possible rate without long pauses spent doing something else.  The only complication is that
+* fetches may well overlap, so we need to "reference count" the fetches to know when to reset the counters.
+* For "fetches per second", we can simply make sure we "schedule" the next fetch at an appropriate time, rather
+* than keep records around.  The overall rate may therefore be somewhat less than the specified rate, but that's perfectly
+* acceptable.
+*
+* Some notes on the algorithms used to limit server bandwidth impact
+* ==================================================================
+*
+* In a single connection case, the algorithm we'd want to use works like this.  On the first chunk of a series,
+* the total length of time and the number of bytes are recorded.  Then, prior to each subsequent chunk, a calculation
+* is done which attempts to hit the bandwidth target by the end of the chunk read, using the rate of the first chunk
+* access as a way of estimating how long it will take to fetch those next n bytes.
+*
+* For a multi-connection case, which this is, it's harder to come up with a good maximum bandwidth estimate,
+* and harder still to "hit the target", because simultaneous fetches will intrude.  The strategy is therefore:
+*
+* 1) The first chunk of any series should proceed without interference from other connections to the same server.
+*    The goal here is to get a decent quality estimate without any possibility of overwhelming the server.
+*
+* 2) The bandwidth of the first chunk is treated as the "maximum bandwidth per connection".  That is, if other
+*    connections are going on, we can presume that each connection will use at most the bandwidth that the first fetch
+*    took.  Thus, by generating end-time estimates based on this number, we are actually being conservative and
+*    using less server bandwidth.
+*
+* 3) For chunks that have started but not finished, we keep track of their size and estimated elapsed time in order to schedule when
+*    new chunks from other connections can start.
+*
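+* As a purely illustrative example (hypothetical numbers): if the first chunk of 10,000 bytes took
+* 100 ms, the per-connection estimate is 0.01 ms/byte.  A later request to read another 10,000 bytes
+* is then assumed to take about 100 ms, and that read is delayed just long enough that the series as
+* a whole stays at or below the configured bytes-per-millisecond rate.
+*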
+* NOTE WELL: This is entirely local in operation
+*/
+public class ThrottleBin
+{
+  /** This signals whether the bin is alive or not. */
+  protected boolean isAlive = true;
+  /** This is the bin name which this throttle belongs to. */
+  protected final String binName;
+  /** Service type name */
+  protected final String serviceTypeName;
+  /** The (anonymous) service name */
+  protected final String serviceName;
+  /** The target calculation lock name */
+  protected final String targetCalcLockName;
+
+  /** The minimum milliseconds per byte */
+  protected double minimumMillisecondsPerByte = Double.MAX_VALUE;
+
+  /** The local minimum milliseconds per byte */
+  protected double localMinimum = Double.MAX_VALUE;
+  
+  /** This is the reference count for this bin (which records active references) */
+  protected volatile int refCount = 0;
+  /** The inverse rate estimate of the first fetch, in ms/byte */
+  protected double rateEstimate = 0.0;
+  /** Flag indicating whether the rate estimate is valid yet */
+  protected volatile boolean estimateValid = false;
+  /** Flag indicating whether rate estimation is currently in progress */
+  protected volatile boolean estimateInProgress = false;
+  /** The start time of this series */
+  protected long seriesStartTime = -1L;
+  /** Total actual bytes read in this series; this includes fetches in progress */
+  protected long totalBytesRead = -1L;
+
+  /** The service type prefix for throttle bins */
+  protected final static String serviceTypePrefix = "_THROTTLEBIN_";
+  
+  /** The target calculation lock prefix */
+  protected final static String targetCalcLockPrefix = "_THROTTLEBINTARGET_";
+
+  /** Constructor. */
+  public ThrottleBin(IThreadContext threadContext, String throttlingGroupName, String binName)
+    throws ManifoldCFException
+  {
+    this.binName = binName;
+    this.serviceTypeName = buildServiceTypeName(throttlingGroupName, binName);
+    this.targetCalcLockName = buildTargetCalcLockName(throttlingGroupName, binName);
+    // Now, register and activate service anonymously, and record the service name we get.
+    ILockManager lockManager = LockManagerFactory.make(threadContext);
+    this.serviceName = lockManager.registerServiceBeginServiceActivity(serviceTypeName, null, null);
+  }
+
+  protected static String buildServiceTypeName(String throttlingGroupName, String binName)
+  {
+    return serviceTypePrefix + throttlingGroupName + "_" + binName;
+  }
+  
+  protected static String buildTargetCalcLockName(String throttlingGroupName, String binName)
+  {
+    return targetCalcLockPrefix + throttlingGroupName + "_" + binName;
+  }
+
+  /** Get the bin name. */
+  public String getBinName()
+  {
+    return binName;
+  }
+
+  /** Update the minimum milliseconds per byte for this bin */
+  public synchronized void updateMinimumMillisecondsPerByte(double min)
+  {
+    this.minimumMillisecondsPerByte = min;
+  }
+  
+  /** Note the start of a fetch operation for a bin.  Call this method just before the actual stream access begins.
+  * This method does not wait; per-read pacing is handled by beginRead().
+  */
+  public void beginFetch()
+  {
+    synchronized (this)
+    {
+      if (refCount == 0)
+      {
+        // Now, reset bandwidth throttling counters
+        estimateValid = false;
+        rateEstimate = 0.0;
+        totalBytesRead = 0L;
+        estimateInProgress = false;
+        seriesStartTime = -1L;
+      }
+      refCount++;
+    }
+
+  }
+
+  /** Abort the fetch.
+  */
+  public void abortFetch()
+  {
+    synchronized (this)
+    {
+      refCount--;
+    }
+  }
+    
+  /** Note the start of an individual byte read of a specified size.  Call this method just before the
+  * read request takes place.  Performs the necessary delay prior to reading specified number of bytes from the server.
+  *@return false if the wait was interrupted due to the bin being shut down.
+  */
+  public boolean beginRead(int byteCount)
+    throws InterruptedException
+  {
+
+    synchronized (this)
+    {
+      while (true)
+      {
+        if (!isAlive)
+          return false;
+        if (estimateInProgress)
+        {
+          wait();
+          continue;
+        }
+
+        // Update the current time
+        long currentTime = System.currentTimeMillis();
+        
+        if (!estimateValid)
+        {
+          seriesStartTime = currentTime;
+          estimateInProgress = true;
+          // Add these bytes to the estimated total
+          totalBytesRead += (long)byteCount;
+          // Exit early; this thread isn't going to do any waiting
+          return true;
+        }
+
+        // If we haven't set a proper throttle yet, wait until we do.
+        if (localMinimum == Double.MAX_VALUE)
+        {
+          wait();
+          continue;
+        }
+        
+        // Estimate the time this read will take, and wait accordingly
+        long estimatedTime = (long)(rateEstimate * (double)byteCount);
+
+        // Figure out how long the total byte count should take, to meet the constraint
+        long desiredEndTime = seriesStartTime + (long)(((double)(totalBytesRead + (long)byteCount)) * localMinimum);
+
+
+        // The wait time is the difference between our desired end time, minus the estimated time to read the data, and the
+        // current time.  But it can't be negative.
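+        // For illustration (hypothetical numbers): with localMinimum = 0.02 ms/byte, 50,000 bytes already
+        // counted in this series, a 10,000-byte read, and a first-chunk rateEstimate of 0.01 ms/byte, the
+        // desired end time is seriesStartTime + 1200 ms and the estimated read time is 100 ms; if we are
+        // currently 900 ms into the series, the wait is (1200 - 100) - 900 = 200 ms.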
+        long waitTime = (desiredEndTime - estimatedTime) - currentTime;
+
+        // If no wait is needed, go ahead and update what needs to be updated and exit.  Otherwise, do the wait.
+        if (waitTime <= 0L)
+        {
+          // Add these bytes to the estimated total
+          totalBytesRead += (long)byteCount;
+          return true;
+        }
+        
+        this.wait(waitTime);
+        // Back around again...
+      }
+    }
+  }
+
+  /** Abort a read in progress.
+  */
+  public void abortRead()
+  {
+    synchronized (this)
+    {
+      if (estimateInProgress)
+      {
+        estimateInProgress = false;
+        notifyAll();
+      }
+    }
+  }
+    
+  /** Note the end of an individual read from the server.  Call this just after an individual read completes.
+  * Pass the originally requested byte count and the actual number of bytes read to the method.
+  */
+  public void endRead(int originalCount, int actualCount)
+  {
+    synchronized (this)
+    {
+      totalBytesRead = totalBytesRead + (long)actualCount - (long)originalCount;
+      if (estimateInProgress)
+      {
+        if (actualCount == 0)
+          // Didn't actually get any bytes, so use 0.0
+          rateEstimate = 0.0;
+        else
+          rateEstimate = ((double)(System.currentTimeMillis() - seriesStartTime))/(double)actualCount;
+        estimateValid = true;
+        estimateInProgress = false;
+        notifyAll();
+      }
+    }
+  }
+
+  /** Note the end of a fetch operation.  Call this method just after the fetch completes.
+  *@return true if this was the last active fetch for the bin.
+  */
+  public boolean endFetch()
+  {
+    synchronized (this)
+    {
+      refCount--;
+      return (refCount == 0);
+    }
+
+  }
+
+  /** Poll this bin */
+  public synchronized void poll(IThreadContext threadContext)
+    throws ManifoldCFException
+  {
+    
+    // Enter write lock
+    ILockManager lockManager = LockManagerFactory.make(threadContext);
+    lockManager.enterWriteLock(targetCalcLockName);
+    try
+    {
+      // The cross-cluster apportionment of byte fetching goes here.
+      // For byte-rate throttling, the apportioning algorithm is simple.  First, it's done
+      // in bytes per millisecond, which is the inverse of what we actually use for the
+      // rest of this class.  Each service posts its current value for the maximum bytes
+      // per millisecond, and a target value for the same.
+      // The target value is computed as follows:
+      // (1) Target is summed cross-cluster, excluding our local service.  This is GlobalTarget.
+      // (2) MaximumTarget is computed, which is Maximum-GlobalTarget.
+      // (3) FairTarget is computed, which is Maximum/numServices.
+      // (4) Finally, we compute Target by taking the minimum of MaximumTarget, FairTarget.
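+      //
+      // Purely illustrative example (hypothetical numbers): with a configured minimum of 0.01 ms/byte
+      // (a cluster-wide maximum of 100 bytes/ms), 2 services, and the other service advertising a
+      // target of 30 bytes/ms, MaximumTarget = 100 - 30 = 70, FairTarget = 100 / 2 = 50, so the local
+      // target is 50 bytes/ms and localMinimum becomes 1/50 = 0.02 ms/byte.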
+
+      // Compute MaximumTarget
+      SumClass sumClass = new SumClass(serviceName);
+      lockManager.scanServiceData(serviceTypeName, sumClass);
+        
+      int numServices = sumClass.getNumServices();
+      if (numServices == 0)
+        return;
+      double globalTarget = sumClass.getGlobalTarget();
+      double globalMaxBytesPerMillisecond;
+      double maximumTarget;
+      double fairTarget;
+      if (minimumMillisecondsPerByte == 0.0)
+      {
+        //System.out.println(binName+":Global minimum milliseconds per byte = 0.0");
+        globalMaxBytesPerMillisecond = Double.MAX_VALUE;
+        maximumTarget = globalMaxBytesPerMillisecond;
+        fairTarget = globalMaxBytesPerMillisecond;
+      }
+      else
+      {
+        globalMaxBytesPerMillisecond = 1.0 / minimumMillisecondsPerByte;
+        //System.out.println(binName+":Global max bytes per millisecond = "+globalMaxBytesPerMillisecond);
+        maximumTarget = globalMaxBytesPerMillisecond - globalTarget;
+        if (maximumTarget < 0.0)
+          maximumTarget = 0.0;
+
+        // Compute FairTarget
+        fairTarget = globalMaxBytesPerMillisecond / numServices;
+      }
+
+      // Now compute actual target
+      double inverseTarget = maximumTarget;
+      if (inverseTarget > fairTarget)
+        inverseTarget = fairTarget;
+
+      //System.out.println(binName+":Inverse target = "+inverseTarget+"; maximumTarget = "+maximumTarget+"; fairTarget = "+fairTarget);
+      
+      // Write these values to the service data variables.
+      // NOTE that there is a race condition here; the target value depends on all the calculations above being accurate, and not changing out from under us.
+      // So, that's why we have a write lock around the pool calculations.
+        
+      lockManager.updateServiceData(serviceTypeName, serviceName, pack(inverseTarget));
+
+      // Update our local minimum.
+      double target;
+      if (inverseTarget == 0.0)
+        target = Double.MAX_VALUE;
+      else
+        target = 1.0 / inverseTarget;
+      
+      // Reset local minimum, if it has changed.
+      if (target == localMinimum)
+        return;
+      //System.out.println(binName+":Updating local minimum to "+target);
+      localMinimum = target;
+      notifyAll();
+    }
+    finally
+    {
+      lockManager.leaveWriteLock(targetCalcLockName);
+    }
+
+  }
+
+  /** Shut down this bin.
+  */
+  public synchronized void shutDown(IThreadContext threadContext)
+    throws ManifoldCFException
+  {
+    isAlive = false;
+    notifyAll();
+    ILockManager lockManager = LockManagerFactory.make(threadContext);
+    lockManager.endServiceActivity(serviceTypeName, serviceName);
+  }
+  
+  // Protected classes and methods
+  
+  protected static class SumClass implements IServiceDataAcceptor
+  {
+    protected final String serviceName;
+    protected int numServices = 0;
+    protected double globalTargetTally = 0;
+    
+    public SumClass(String serviceName)
+    {
+      this.serviceName = serviceName;
+    }
+    
+    @Override
+    public boolean acceptServiceData(String serviceName, byte[] serviceData)
+      throws ManifoldCFException
+    {
+      numServices++;
+
+      if (!serviceName.equals(this.serviceName))
+      {
+        globalTargetTally += unpackTarget(serviceData);
+      }
+      return false;
+    }
+
+    public int getNumServices()
+    {
+      return numServices;
+    }
+    
+    public double getGlobalTarget()
+    {
+      return globalTargetTally;
+    }
+    
+  }
+  
+  protected static double unpackTarget(byte[] data)
+  {
+    if (data == null || data.length != 8)
+      return 0.0;
+    return Double.longBitsToDouble((((long)data[0]) & 0xffL) +
+      ((((long)data[1]) << 8) & 0xff00L) +
+      ((((long)data[2]) << 16) & 0xff0000L) +
+      ((((long)data[3]) << 24) & 0xff000000L) +
+      ((((long)data[4]) << 32) & 0xff00000000L) +
+      ((((long)data[5]) << 40) & 0xff0000000000L) +
+      ((((long)data[6]) << 48) & 0xff000000000000L) +
+      ((((long)data[7]) << 56) & 0xff00000000000000L));
+  }
+
+  protected static byte[] pack(double targetDouble)
+  {
+    long target = Double.doubleToLongBits(targetDouble);
+    byte[] rval = new byte[8];
+    rval[0] = (byte)(target & 0xffL);
+    rval[1] = (byte)((target >> 8) & 0xffL);
+    rval[2] = (byte)((target >> 16) & 0xffL);
+    rval[3] = (byte)((target >> 24) & 0xffL);
+    rval[4] = (byte)((target >> 32) & 0xffL);
+    rval[5] = (byte)((target >> 40) & 0xffL);
+    rval[6] = (byte)((target >> 48) & 0xffL);
+    rval[7] = (byte)((target >> 56) & 0xffL);
+    return rval;
+  }
+
+
+}
+
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/throttler/ThrottleGroups.java b/framework/core/src/main/java/org/apache/manifoldcf/core/throttler/ThrottleGroups.java
new file mode 100644
index 0000000..cb38c42
--- /dev/null
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/throttler/ThrottleGroups.java
@@ -0,0 +1,134 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.core.throttler;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import java.util.*;
+
+/** An implementation of IThrottleGroups, which establishes a JVM-wide
+* pool of throttlers that can be used as a resource by any connector that needs
+* it.
+*/
+public class ThrottleGroups implements IThrottleGroups
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  /** The thread context */
+  protected final IThreadContext threadContext;
+    
+  /** The actual static pool */
+  protected final static Throttler throttler = new Throttler();
+  
+  /** Constructor */
+  public ThrottleGroups(IThreadContext threadContext)
+    throws ManifoldCFException
+  {
+    this.threadContext = threadContext;
+  }
+  
+  /** Get all existing throttle groups for a throttle group type.
+  * The throttle group type typically describes a connector class, while the throttle group represents
+  * a namespace of bin names specific to that connector class.
+  *@param throttleGroupType is the throttle group type.
+  *@return the set of throttle groups for that group type.
+  */
+  @Override
+  public Set<String> getThrottleGroups(String throttleGroupType)
+    throws ManifoldCFException
+  {
+    return throttler.getThrottleGroups(threadContext, throttleGroupType);
+  }
+  
+  /** Remove a throttle group.
+  *@param throttleGroupType is the throttle group type.
+  *@param throttleGroup is the throttle group.
+  */
+  @Override
+  public void removeThrottleGroup(String throttleGroupType, String throttleGroup)
+    throws ManifoldCFException
+  {
+    throttler.removeThrottleGroup(threadContext, throttleGroupType, throttleGroup);
+  }
+  
+  /** Set or update throttle specification for a throttle group.  This creates the
+  * throttle group if it does not yet exist.
+  *@param throttleGroupType is the throttle group type.
+  *@param throttleGroup is the throttle group.
+  *@param throttleSpec is the desired throttle specification object.
+  */
+  @Override
+  public void createOrUpdateThrottleGroup(String throttleGroupType, String throttleGroup, IThrottleSpec throttleSpec)
+    throws ManifoldCFException
+  {
+    throttler.createOrUpdateThrottleGroup(threadContext, throttleGroupType, throttleGroup, throttleSpec);
+  }
+
+  /** Construct connection throttler for connections with specific bin names.  This object is meant to be embedded with a connection
+  * pool of similar objects, and used to gate the creation of new connections in that pool.
+  *@param throttleGroupType is the throttle group type.
+  *@param throttleGroup is the throttle group.
+  *@param binNames are the connection type bin names.
+  *@return the connection throttling object, or null if the pool is being shut down.
+  */
+  @Override
+  public IConnectionThrottler obtainConnectionThrottler(String throttleGroupType, String throttleGroup, String[] binNames)
+    throws ManifoldCFException
+  {
+    java.util.Arrays.sort(binNames);
+    return throttler.obtainConnectionThrottler(threadContext, throttleGroupType, throttleGroup, binNames);
+  }
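+
+  // A minimal usage sketch (hypothetical group and bin names; the factory class name is assumed):
+  //   IThrottleGroups throttleGroups = ThrottleGroupsFactory.make(threadContext);
+  //   throttleGroups.createOrUpdateThrottleGroup("WEB", "myThrottleGroup", throttleSpec);
+  //   IConnectionThrottler connectionThrottler = throttleGroups.obtainConnectionThrottler("WEB",
+  //     "myThrottleGroup", new String[]{"example.com"});
+  // The returned IConnectionThrottler is then used to gate connection creation in the connector's
+  // connection pool.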
+
+  /** Poll periodically, to update cluster-wide statistics and allocation.
+  *@param throttleGroupType is the throttle group type to update.
+  */
+  @Override
+  public void poll(String throttleGroupType)
+    throws ManifoldCFException
+  {
+    throttler.poll(threadContext, throttleGroupType);
+  }
+  
+  /** Poll periodically, to update ALL cluster-wide statistics and allocation.
+  */
+  @Override
+  public void poll()
+    throws ManifoldCFException
+  {
+    throttler.poll(threadContext);
+  }
+
+  /** Free unused resources.
+  */
+  @Override
+  public void freeUnusedResources()
+    throws ManifoldCFException
+  {
+    throttler.freeUnusedResources(threadContext);
+  }
+  
+  /** Shut down throttler permanently.
+  */
+  @Override
+  public void destroy()
+    throws ManifoldCFException
+  {
+    throttler.destroy(threadContext);
+  }
+
+}
diff --git a/framework/core/src/main/java/org/apache/manifoldcf/core/throttler/Throttler.java b/framework/core/src/main/java/org/apache/manifoldcf/core/throttler/Throttler.java
new file mode 100644
index 0000000..e3d4406
--- /dev/null
+++ b/framework/core/src/main/java/org/apache/manifoldcf/core/throttler/Throttler.java
@@ -0,0 +1,1111 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.core.throttler;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import java.util.*;
+import java.util.concurrent.atomic.*;
+
+/** A Throttler object creates a virtual pool of connections to resources
+* whose access needs to be throttled in number, rate of use, and byte rate.
+* This code is modeled on the code for distributed connection pools, and is intended
+* to work in a similar manner.  Basically, a periodic assessment is done about what the
+* local throttling parameters should be (on a per-pool basis), and the local throttling
+* activities then adjust what they are doing based on the new parameters.  A service
+* model is used to keep track of which pools have what clients working with them.
+* This implementation has the advantage that:
+* (1) Only local throttling ever takes place on a method-by-method basis, which makes
+*   it possible to use throttling even in streams and background threads;
+* (2) Throttling resources are apportioned fairly, on average, between all the various
+*   cluster members, so it is unlikely that any persistent starvation conditions can
+*   arise.
+*/
+public class Throttler
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  /** Throttle group hash table.  Keyed by throttle group type, value is throttling groups */
+  protected final Map<String,ThrottlingGroups> throttleGroupsHash = new HashMap<String,ThrottlingGroups>();
+
+  /** Create a throttler instance.  Usually there will be one of these per connector
+  * type that needs throttling.
+  */
+  public Throttler()
+  {
+  }
+  
+  // There are a lot of synchronizers to coordinate here.  They are indeed hierarchical.  It is not possible to simply
+  // throw a synchronizer at every level, and require that we hold all of them, because when we wait somewhere in the
+  // inner level, we will continue to hold locks and block access to all the outer levels.
+  //
+  // Instead, I've opted for a model whereby individual resources are protected.  This is tricky to coordinate, though,
+  // because (for instance) after a resource has been removed from the hash table, it had better be cleaned up
+  // thoroughly before the outer lock is released, or two versions of the resource might wind up coming into existence.
+  // The general rule is therefore:
+  // (1) Creation or deletion of resources involves locking the parent where the resource is being added or removed
+  // (2) Anything that waits CANNOT also add or remove.
+  
+  /** Get all existing throttle groups for a throttle group type.
+  * The throttle group type typically describes a connector class, while the throttle group represents
+  * a namespace of bin names specific to that connector class.
+  *@param throttleGroupType is the throttle group type.
+  *@return the set of throttle groups for that group type.
+  */
+  public Set<String> getThrottleGroups(IThreadContext threadContext, String throttleGroupType)
+    throws ManifoldCFException
+  {
+    synchronized (throttleGroupsHash)
+    {
+      return throttleGroupsHash.keySet();
+    }
+  }
+  
+  /** Remove a throttle group.
+  *@param throttleGroupType is the throttle group type.
+  *@param throttleGroup is the throttle group.
+  */
+  public void removeThrottleGroup(IThreadContext threadContext, String throttleGroupType, String throttleGroup)
+    throws ManifoldCFException
+  {
+    // Removal.  Lock the whole hierarchy.
+    synchronized (throttleGroupsHash)
+    {
+      ThrottlingGroups tg = throttleGroupsHash.get(throttleGroupType);
+      if (tg != null)
+      {
+        tg.removeThrottleGroup(threadContext, throttleGroup);
+      }
+    }
+  }
+  
+  /** Set or update throttle specification for a throttle group.  This creates the
+  * throttle group if it does not yet exist.
+  *@param throttleGroupType is the throttle group type.
+  *@param throttleGroup is the throttle group.
+  *@param throttleSpec is the desired throttle specification object.
+  */
+  public void createOrUpdateThrottleGroup(IThreadContext threadContext, String throttleGroupType, String throttleGroup, IThrottleSpec throttleSpec)
+    throws ManifoldCFException
+  {
+    // Potential addition.  Lock the whole hierarchy.
+    synchronized (throttleGroupsHash)
+    {
+      ThrottlingGroups tg = throttleGroupsHash.get(throttleGroupType);
+      if (tg == null)
+      {
+        tg = new ThrottlingGroups(throttleGroupType);
+        throttleGroupsHash.put(throttleGroupType, tg);
+      }
+      tg.createOrUpdateThrottleGroup(threadContext, throttleGroup, throttleSpec);
+    }
+  }
+
+  /** Construct connection throttler for connections with specific bin names.  This object is meant to be embedded with a connection
+  * pool of similar objects, and used to gate the creation of new connections in that pool.
+  *@param throttleGroupType is the throttle group type.
+  *@param throttleGroup is the throttle group.
+  *@param binNames are the connection type bin names.
+  *@return the connection throttling object, or null if the pool is being shut down.
+  */
+  public IConnectionThrottler obtainConnectionThrottler(IThreadContext threadContext, String throttleGroupType, String throttleGroup, String[] binNames)
+    throws ManifoldCFException
+  {
+    // No waiting, so lock the entire tree.
+    synchronized (throttleGroupsHash)
+    {
+      ThrottlingGroups tg = throttleGroupsHash.get(throttleGroupType);
+      if (tg != null)
+        return tg.obtainConnectionThrottler(threadContext, throttleGroup, binNames);
+      return null;
+    }
+  }
+  
+  /** Poll periodically.
+  */
+  public void poll(IThreadContext threadContext, String throttleGroupType)
+    throws ManifoldCFException
+  {
+    // No waiting, so lock the entire tree.
+    synchronized (throttleGroupsHash)
+    {
+      ThrottlingGroups tg = throttleGroupsHash.get(throttleGroupType);
+      if (tg != null)
+        tg.poll(threadContext);
+    }
+      
+  }
+
+  /** Poll ALL bins periodically.
+  */
+  public void poll(IThreadContext threadContext)
+    throws ManifoldCFException
+  {
+    // No waiting, so lock the entire tree.
+    synchronized (throttleGroupsHash)
+    {
+      for (ThrottlingGroups tg : throttleGroupsHash.values())
+      {
+        tg.poll(threadContext);
+      }
+    }
+      
+  }
+  
+  /** Free unused resources.
+  */
+  public void freeUnusedResources(IThreadContext threadContext)
+    throws ManifoldCFException
+  {
+    // This potentially affects the entire hierarchy.
+    // Go through the whole pool and clean it out
+    synchronized (throttleGroupsHash)
+    {
+      Iterator<ThrottlingGroups> iter = throttleGroupsHash.values().iterator();
+      while (iter.hasNext())
+      {
+        ThrottlingGroups p = iter.next();
+        p.freeUnusedResources(threadContext);
+      }
+    }
+  }
+  
+  /** Shut down all throttlers and deregister them.
+  */
+  public void destroy(IThreadContext threadContext)
+    throws ManifoldCFException
+  {
+    // This affects the entire hierarchy, so lock the whole thing.
+    // Go through the whole pool and clean it out
+    synchronized (throttleGroupsHash)
+    {
+      Iterator<ThrottlingGroups> iter = throttleGroupsHash.values().iterator();
+      while (iter.hasNext())
+      {
+        ThrottlingGroups p = iter.next();
+        p.destroy(threadContext);
+        iter.remove();
+      }
+    }
+  }
+
+  // Protected methods and classes
+  
+  protected static String buildThrottlingGroupName(String throttlingGroupType, String throttlingGroupName)
+  {
+    return throttlingGroupType + "_" + throttlingGroupName;
+  }
+
+  /** This class represents a throttling group pool */
+  protected class ThrottlingGroups
+  {
+    /** The throttling group type for this throttling group pool */
+    protected final String throttlingGroupTypeName;
+    /** The pool of individual throttle group services for this pool, keyed by throttle group name */
+    protected final Map<String,ThrottlingGroup> groups = new HashMap<String,ThrottlingGroup>();
+    
+    public ThrottlingGroups(String throttlingGroupTypeName)
+    {
+      this.throttlingGroupTypeName = throttlingGroupTypeName;
+    }
+    
+    /** Update throttle specification */
+    public void createOrUpdateThrottleGroup(IThreadContext threadContext, String throttleGroup, IThrottleSpec throttleSpec)
+      throws ManifoldCFException
+    {
+      synchronized (groups)
+      {
+        ThrottlingGroup g = groups.get(throttleGroup);
+        if (g == null)
+        {
+          g = new ThrottlingGroup(threadContext, throttlingGroupTypeName, throttleGroup, throttleSpec);
+          groups.put(throttleGroup, g);
+        }
+        else
+        {
+          g.updateThrottleSpecification(throttleSpec);
+        }
+      }
+    }
+    
+    /** Obtain connection throttler.
+    *@return the throttler, or null if the hierarchy has changed.
+    */
+    public IConnectionThrottler obtainConnectionThrottler(IThreadContext threadContext, String throttleGroup, String[] binNames)
+      throws ManifoldCFException
+    {
+      synchronized (groups)
+      {
+        ThrottlingGroup g = groups.get(throttleGroup);
+        if (g == null)
+          return null;
+        return g.obtainConnectionThrottler(threadContext, binNames);
+      }
+    }
+    
+    /** Remove specified throttle group */
+    public void removeThrottleGroup(IThreadContext threadContext, String throttleGroup)
+      throws ManifoldCFException
+    {
+      // Must synch the whole thing, because otherwise there would be a risk of someone recreating the
+      // group right after we removed it from the map, and before we destroyed it.
+      synchronized (groups)
+      {
+        ThrottlingGroup g = groups.remove(throttleGroup);
+        if (g != null)
+        {
+          g.destroy(threadContext);
+        }
+      }
+    }
+    
+    /** Poll this set of throttle groups.
+    */
+    public void poll(IThreadContext threadContext)
+      throws ManifoldCFException
+    {
+      synchronized (groups)
+      {
+        Iterator<String> iter = groups.keySet().iterator();
+        while (iter.hasNext())
+        {
+          String throttleGroup = iter.next();
+          ThrottlingGroup p = groups.get(throttleGroup);
+          p.poll(threadContext);
+        }
+      }
+    }
+
+    /** Free unused resources */
+    public void freeUnusedResources(IThreadContext threadContext)
+      throws ManifoldCFException
+    {
+      synchronized (groups)
+      {
+        Iterator<ThrottlingGroup> iter = groups.values().iterator();
+        while (iter.hasNext())
+        {
+          ThrottlingGroup g = iter.next();
+          g.freeUnusedResources(threadContext);
+        }
+      }
+    }
+    
+    /** Destroy and shutdown all */
+    public void destroy(IThreadContext threadContext)
+      throws ManifoldCFException
+    {
+      synchronized (groups)
+      {
+        Iterator<ThrottlingGroup> iter = groups.values().iterator();
+        while (iter.hasNext())
+        {
+          ThrottlingGroup p = iter.next();
+          p.destroy(threadContext);
+          iter.remove();
+        }
+      }
+    }
+  }
+  
+  /** This class represents a throttling group, of a specific throttling group type.  It basically
+  * describes an entire self-consistent throttling environment.
+  */
+  protected class ThrottlingGroup
+  {
+    /** The throttling group name */
+    protected final String throttlingGroupName;
+    /** The current throttle spec */
+    protected IThrottleSpec throttleSpec;
+    
+    /** The connection bins */
+    protected final Map<String,ConnectionBin> connectionBins = new HashMap<String,ConnectionBin>();
+    /** The fetch bins */
+    protected final Map<String,FetchBin> fetchBins = new HashMap<String,FetchBin>();
+    /** The throttle bins */
+    protected final Map<String,ThrottleBin> throttleBins = new HashMap<String,ThrottleBin>();
+
+    // For synchronization, we use several in this class.
+    // Modification to the connectionBins, fetchBins, or throttleBins hashes uses the appropriate local synchronizer.
+    // Changes to other local variables use the main synchronizer.
+    
+    /** Constructor
+    */
+    public ThrottlingGroup(IThreadContext threadContext, String throttlingGroupType, String throttleGroup, IThrottleSpec throttleSpec)
+      throws ManifoldCFException
+    {
+      this.throttlingGroupName = buildThrottlingGroupName(throttlingGroupType, throttleGroup);
+      this.throttleSpec = throttleSpec;
+      // Once all that is done, perform the initial setting of all the bin cutoffs
+      poll(threadContext);
+    }
+
+    /** Obtain a connection throttler, creating the bins corresponding to the bin names specified
+    * if they do not already exist.  Note that this also registers them as services etc.
+    *@param binNames describes the set of bins to create.
+    *@return a connection throttler for the given bin names.
+    */
+    public synchronized IConnectionThrottler obtainConnectionThrottler(IThreadContext threadContext, String[] binNames)
+      throws ManifoldCFException
+    {
+      synchronized (connectionBins)
+      {
+        for (String binName : binNames)
+        {
+          ConnectionBin bin = connectionBins.get(binName);
+          if (bin == null)
+          {
+            bin = new ConnectionBin(threadContext, throttlingGroupName, binName);
+            connectionBins.put(binName, bin);
+          }
+        }
+      }
+      
+      synchronized (fetchBins)
+      {
+        for (String binName : binNames)
+        {
+          FetchBin bin = fetchBins.get(binName);
+          if (bin == null)
+          {
+            bin = new FetchBin(threadContext, throttlingGroupName, binName);
+            fetchBins.put(binName, bin);
+          }
+        }
+      }
+      
+      synchronized (throttleBins)
+      {
+        for (String binName : binNames)
+        {
+          ThrottleBin bin = throttleBins.get(binName);
+          if (bin == null)
+          {
+            bin = new ThrottleBin(threadContext, throttlingGroupName, binName);
+            throttleBins.put(binName, bin);
+          }
+        }
+      }
+      
+      return new ConnectionThrottler(this, binNames);
+    }
+    
+    /** Update the throttle spec.
+    *@param throttleSpec is the new throttle spec for this throttle group.
+    */
+    public synchronized void updateThrottleSpecification(IThrottleSpec throttleSpec)
+      throws ManifoldCFException
+    {
+      this.throttleSpec = throttleSpec;
+    }
+    
+    
+    // IConnectionThrottler support methods
+    
+    /** Wait for a connection to become available.
+    *@param poolCount is a description of how many connections
+    * are available in the current pool, across all bins.
+    *@return the IConnectionThrottler codes for results.
+    */
+    public int waitConnectionAvailable(String[] binNames, AtomicInteger poolCount)
+      throws InterruptedException
+    {
+      // Each bin can signal something different.  Bins that signal
+      // CONNECTION_FROM_NOWHERE are shutting down, but there's also
+      // the possibility that different bins give conflicting answers of
+      // CONNECTION_FROM_POOL and CONNECTION_FROM_CREATION.
+      // However: the pool count we track is in fact N * the actual pool count,
+      // where N is the number of bins in each connection.  This means that a conflict 
+      // is ALWAYS due to two entities simultaneously calling waitConnectionAvailable(),
+      // and deadlocking each other.  The solution is therefore to back off and retry.
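+      //
+      // For illustration (hypothetical): with bins {A, B}, thread 1's reservation in bin A may come
+      // back as CONNECTION_FROM_POOL while bin B answers CONNECTION_FROM_CREATION, because thread 2
+      // made the opposite reservations at the same moment.  Both threads then release their
+      // reservations and retry, which typically resolves the conflict.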
+
+      // This is the retry loop
+      while (true)
+      {
+        int currentRecommendation = IConnectionThrottler.CONNECTION_FROM_NOWHERE;
+        
+        boolean retry = false;
+
+        // First, make sure all the bins exist, and reserve a slot in each
+        int i = 0;
+        while (i < binNames.length)
+        {
+          String binName = binNames[i];
+          ConnectionBin bin;
+          synchronized (connectionBins)
+          {
+            bin = connectionBins.get(binName);
+          }
+          if (bin != null)
+          {
+            // Reserve a slot
+            int result = bin.waitConnectionAvailable(poolCount);
+            if (result == IConnectionThrottler.CONNECTION_FROM_NOWHERE ||
+              (currentRecommendation != IConnectionThrottler.CONNECTION_FROM_NOWHERE && currentRecommendation != result))
+            {
+              // Release previous reservations, and either return, or retry
+              while (i > 0)
+              {
+                i--;
+                binName = binNames[i];
+                synchronized (connectionBins)
+                {
+                  bin = connectionBins.get(binName);
+                }
+                if (bin != null)
+                  bin.undoReservation(currentRecommendation, poolCount);
+              }
+              if (result == IConnectionThrottler.CONNECTION_FROM_NOWHERE)
+                return result;
+              // Break out of the outer loop so we can retry
+              retry = true;
+              break;
+            }
+            if (currentRecommendation == IConnectionThrottler.CONNECTION_FROM_NOWHERE)
+              currentRecommendation = result;
+          }
+          i++;
+        }
+        
+        if (retry)
+          continue;
+        
+        // Complete the reservation process (if that is what we decided)
+        if (currentRecommendation == IConnectionThrottler.CONNECTION_FROM_CREATION)
+        {
+          // All reservations have been made!  Convert them.
+          for (String binName : binNames)
+          {
+            ConnectionBin bin;
+            synchronized (connectionBins)
+            {
+              bin = connectionBins.get(binName);
+            }
+            if (bin != null)
+              bin.noteConnectionCreation();
+          }
+        }
+
+        return currentRecommendation;
+      }
+      
+    }
+    
+    public IFetchThrottler getNewConnectionFetchThrottler(String[] binNames)
+    {
+      return new FetchThrottler(this, binNames);
+    }
+    
+    public boolean noteReturnedConnection(String[] binNames)
+    {
+      // If ANY of the bins think the connection should be destroyed, then that will be
+      // the recommendation.
+      synchronized (connectionBins)
+      {
+        boolean destroyConnection = false;
+
+        for (String binName : binNames)
+        {
+          ConnectionBin bin = connectionBins.get(binName);
+          if (bin != null)
+          {
+            destroyConnection |= bin.shouldReturnedConnectionBeDestroyed();
+          }
+        }
+        
+        return destroyConnection;
+      }
+    }
+    
+    public boolean checkDestroyPooledConnection(String[] binNames, AtomicInteger poolCount)
+    {
+      // Only if all believe we can destroy a pool connection, will we do it.
+      // This is because some pools may be empty, etc.
+      synchronized (connectionBins)
+      {
+        boolean destroyConnection = false;
+
+        int i = 0;
+        while (i < binNames.length)
+        {
+          String binName = binNames[i];
+          ConnectionBin bin = connectionBins.get(binName);
+          if (bin != null)
+          {
+            int result = bin.shouldPooledConnectionBeDestroyed(poolCount);
+            if (result == ConnectionBin.CONNECTION_POOLEMPTY)
+            {
+              // Give up now, and undo all the other bins
+              while (i > 0)
+              {
+                i--;
+                binName = binNames[i];
+                bin = connectionBins.get(binName);
+                bin.undoPooledConnectionDecision(poolCount);
+              }
+              return false;
+            }
+            else if (result == ConnectionBin.CONNECTION_DESTROY)
+            {
+              destroyConnection = true;
+            }
+          }
+          i++;
+        }
+        
+        if (destroyConnection)
+          return true;
+        
+        // Undo pool reservation, since everything is apparently within bounds.
+        for (String binName : binNames)
+        {
+          ConnectionBin bin = connectionBins.get(binName);
+          if (bin != null)
+            bin.undoPooledConnectionDecision(poolCount);
+        }
+        
+        return false;
+      }
+
+    }
+    
+    /** Connection expiration is tricky, because even though a connection may be identified as
+    * being expired, at the very same moment it could be handed out in another thread.  So there
+    * is a natural race condition present.
+    * The way the connection throttler deals with that is to allow the caller to reserve a connection
+    * for expiration.  This must be called BEFORE the actual identified connection is removed from the
+    * connection pool.  If the value returned by this method is "true", then a connection MUST be removed
+    * from the pool and destroyed, whether or not the identified connection is actually still available for
+    * destruction or not.
+    *@return true if a connection from the pool can be expired.  If true is returned, noteConnectionDestruction()
+    *  MUST be called once the connection has actually been destroyed.
+    */
+    public boolean checkExpireConnection(String[] binNames, AtomicInteger poolCount)
+    {
+      synchronized (connectionBins)
+      {
+        int i = 0;
+        while (i < binNames.length)
+        {
+          String binName = binNames[i];
+          ConnectionBin bin = connectionBins.get(binName);
+          if (bin != null)
+          {
+            if (!bin.hasPooledConnection(poolCount))
+            {
+              // Give up now, and undo all the other bins
+              while (i > 0)
+              {
+                i--;
+                binName = binNames[i];
+                bin = connectionBins.get(binName);
+                bin.undoPooledConnectionDecision(poolCount);
+              }
+              return false;
+            }
+          }
+          i++;
+        }
+        return true;
+      }
+    }
+
+    public void noteConnectionReturnedToPool(String[] binNames, AtomicInteger poolCount)
+    {
+      synchronized (connectionBins)
+      {
+        for (String binName : binNames)
+        {
+          ConnectionBin bin = connectionBins.get(binName);
+          if (bin != null)
+            bin.noteConnectionReturnedToPool(poolCount);
+        }
+      }
+    }
+
+    public void noteConnectionDestroyed(String[] binNames)
+    {
+      synchronized (connectionBins)
+      {
+        for (String binName : binNames)
+        {
+          ConnectionBin bin = connectionBins.get(binName);
+          if (bin != null)
+            bin.noteConnectionDestroyed();
+        }
+      }
+    }
+    
+    // IFetchThrottler support methods
+    
+    /** Get permission to fetch a document.  This grants permission to start
+    * fetching a single document, within the connection that has already been
+    * granted permission that created this object.
+    *@param binNames are the names of the bins.
+    *@return false if being shut down
+    */
+    public boolean obtainFetchDocumentPermission(String[] binNames)
+      throws InterruptedException
+    {
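+      // Two-phase approach: a non-blocking reservation pass over all bins (which can be rolled
+      // back cheaply if any bin refuses), followed by a conversion pass in which each reservation
+      // is turned into an actual fetch, possibly waiting on throttling limits.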
+      // First, make sure all the bins exist, and reserve a slot in each
+      int i = 0;
+      while (i < binNames.length)
+      {
+        String binName = binNames[i];
+        FetchBin bin;
+        synchronized (fetchBins)
+        {
+          bin = fetchBins.get(binName);
+        }
+        // Reserve a slot
+        if (bin == null || !bin.reserveFetchRequest())
+        {
+          // Release previous reservations, and return false
+          while (i > 0)
+          {
+            i--;
+            binName = binNames[i];
+            synchronized (fetchBins)
+            {
+              bin = fetchBins.get(binName);
+            }
+            if (bin != null)
+              bin.clearReservation();
+          }
+          return false;
+        }
+        i++;
+      }
+      
+      // All reservations have been made!  Convert them.
+      // (These should succeed unless we are shutting down - but they may wait)
+      i = 0;
+      while (i < binNames.length)
+      {
+        String binName = binNames[i];
+        FetchBin bin;
+        synchronized (fetchBins)
+        {
+          bin = fetchBins.get(binName);
+        }
+        if (bin != null)
+        {
+          if (!bin.waitNextFetch())
+          {
+            // Undo the reservations we haven't processed yet
+            while (i < binNames.length)
+            {
+              binName = binNames[i];
+              synchronized (fetchBins)
+              {
+                bin = fetchBins.get(binName);
+              }
+              if (bin != null)
+                bin.clearReservation();
+              i++;
+            }
+            return false;
+          }
+        }
+        i++;
+      }
+      return true;
+    }
+    
+    public IStreamThrottler createFetchStream(String[] binNames)
+    {
+      // Do a "begin fetch" for all throttle bins
+      synchronized (throttleBins)
+      {
+        for (String binName : binNames)
+        {
+          ThrottleBin bin = throttleBins.get(binName);
+          if (bin != null)
+            bin.beginFetch();
+        }
+      }
+      
+      return new StreamThrottler(this, binNames);
+    }
+    
+    // IStreamThrottler support methods
+    
+    /** Obtain permission to read a block of bytes.  This method may wait until it is OK to proceed.
+    * The throttle group, bin names, etc are already known
+    * to this specific interface object, so it is unnecessary to include them here.
+    *@param byteCount is the number of bytes to get permissions to read.
+    *@return true if the wait took place as planned, or false if the system is being shut down.
+    */
+    public boolean obtainReadPermission(String[] binNames, int byteCount)
+      throws InterruptedException
+    {
+      int i = 0;
+      while (i < binNames.length)
+      {
+        String binName = binNames[i];
+        ThrottleBin bin;
+        synchronized (throttleBins)
+        {
+          bin = throttleBins.get(binName);
+        }
+        if (bin == null || !bin.beginRead(byteCount))
+        {
+          // End bins we've already done, and exit
+          while (i > 0)
+          {
+            i--;
+            binName = binNames[i];
+            synchronized (throttleBins)
+            {
+              bin = throttleBins.get(binName);
+            }
+            if (bin != null)
+              bin.endRead(byteCount,0);
+          }
+          return false;
+        }
+        i++;
+      }
+      return true;
+    }
+      
+    /** Note the completion of the read of a block of bytes.  Call this after
+    * obtainReadPermission() was successfully called, and bytes were successfully read.
+    *@param origByteCount is the originally requested number of bytes to get permissions to read.
+    *@param actualByteCount is the number of bytes actually read.
+    */
+    public void releaseReadPermission(String[] binNames, int origByteCount, int actualByteCount)
+    {
+      synchronized (throttleBins)
+      {
+        for (String binName : binNames)
+        {
+          ThrottleBin bin = throttleBins.get(binName);
+          if (bin != null)
+            bin.endRead(origByteCount, actualByteCount);
+        }
+      }
+    }
+
+    /** Note the stream being closed.
+    */
+    public void closeStream(String[] binNames)
+    {
+      synchronized (throttleBins)
+      {
+        for (String binName : binNames)
+        {
+          ThrottleBin bin = throttleBins.get(binName);
+          if (bin != null)
+            bin.endFetch();
+        }
+      }
+    }
+
+    // Bookkeeping methods
+    
+    /** Call this periodically.
+    */
+    public synchronized void poll(IThreadContext threadContext)
+      throws ManifoldCFException
+    {
+      // Go through all existing bins and update each one.
+      synchronized (connectionBins)
+      {
+        for (ConnectionBin bin : connectionBins.values())
+        {
+          bin.updateMaxActiveConnections(throttleSpec.getMaxOpenConnections(bin.getBinName()));
+          bin.poll(threadContext);
+        }
+      }
+  
+      synchronized (fetchBins)
+      {
+        for (FetchBin bin : fetchBins.values())
+        {
+          bin.updateMinTimeBetweenFetches(throttleSpec.getMinimumMillisecondsPerFetch(bin.getBinName()));
+          bin.poll(threadContext);
+        }
+      }
+      
+      synchronized (throttleBins)
+      {
+        for (ThrottleBin bin : throttleBins.values())
+        {
+          bin.updateMinimumMillisecondsPerByte(throttleSpec.getMinimumMillisecondsPerByte(bin.getBinName()));
+          bin.poll(threadContext);
+        }
+      }
+      
+    }
+    
+    /** Free unused resources.
+    */
+    public synchronized void freeUnusedResources(IThreadContext threadContext)
+      throws ManifoldCFException
+    {
+      // Does nothing; there are not really resources to free
+    }
+    
+    /** Destroy this pool.
+    */
+    public synchronized void destroy(IThreadContext threadContext)
+      throws ManifoldCFException
+    {
+      synchronized (connectionBins)
+      {
+        Iterator<ConnectionBin> binIter = connectionBins.values().iterator();
+        while (binIter.hasNext())
+        {
+          ConnectionBin bin = binIter.next();
+          bin.shutDown(threadContext);
+          binIter.remove();
+        }
+      }
+      
+      synchronized (fetchBins)
+      {
+        Iterator<FetchBin> binIter = fetchBins.values().iterator();
+        while (binIter.hasNext())
+        {
+          FetchBin bin = binIter.next();
+          bin.shutDown(threadContext);
+          binIter.remove();
+        }
+      }
+      
+      synchronized (throttleBins)
+      {
+        Iterator<ThrottleBin> binIter = throttleBins.values().iterator();
+        while (binIter.hasNext())
+        {
+          ThrottleBin bin = binIter.next();
+          bin.shutDown(threadContext);
+          binIter.remove();
+        }
+      }
+
+    }
+  }
+  
+  /** Connection throttler implementation class.
+  * Each instance of this class stores some parameters, links back to its ThrottlingGroup, and
+  * models a connection pool with the specified bins.  The description of each pool, however, consists of more than
+  * just the bin names that describe the throttling - it may also include connection parameters which we have
+  * no insight into at this level.
+  *
+  * Thus, in order to do pool tracking properly, we cannot simply rely on the individual connection bin instances
+  * to do all the work, since they cannot distinguish between different pools properly.  That leaves us with
+  * two choices: (1) we can somehow push the separate pool instance parameters down to the connection bin
+  * level, or (2) the connection bins cannot actually do any waiting or blocking.
+  *
+  * The benefit of having blocking take place in the connection bins is that they are in fact designed to be precisely
+  * the thing you would want to synchronize on.  If we presume that the waits happen in those classes,
+  * then we need the ability to send our local pool count in to them, and we need to be able to "wake up"
+  * those underlying classes when the local pool count changes.
+  */
+  protected static class ConnectionThrottler implements IConnectionThrottler
+  {
+    protected final ThrottlingGroup parent;
+    protected final String[] binNames;
+    
+    // Keep track of local pool parameters.
+
+    /** This is the number of connections in the pool, times the number of bins per connection */
+    protected final AtomicInteger poolCount = new AtomicInteger(0);
+
+    public ConnectionThrottler(ThrottlingGroup parent, String[] binNames)
+    {
+      this.parent = parent;
+      this.binNames = binNames;
+    }
+    
+    /** Get permission to grab a connection for use.  If this object believes there is a connection
+    * available in the pool, it will update its pool size variable and return.  If not, this method
+    * evaluates whether a new connection should be created.  If neither condition is true, it
+    * waits until a connection is available.
+    *@return whether to take the connection from the pool, or create one, or whether the
+    * throttler is being shut down.
+    */
+    @Override
+    public int waitConnectionAvailable()
+      throws InterruptedException
+    {
+      return parent.waitConnectionAvailable(binNames, poolCount);
+    }
+    
+    /** For a new connection, obtain the fetch throttler to use for the connection.
+    * If the result from waitConnectionAvailable() is CONNECTION_FROM_CREATION,
+    * the calling code is expected to create a connection using the result of this method.
+    *@return the fetch throttler for a new connection.
+    */
+    @Override
+    public IFetchThrottler getNewConnectionFetchThrottler()
+    {
+      return parent.getNewConnectionFetchThrottler(binNames);
+    }
+    
+    /** For returning a connection from use, there is only one method.  This method signals
+    * whether a formerly in-use connection should be placed back in the pool or destroyed.
+    *@return true if the connection should NOT be put into the pool but should instead
+    *  simply be destroyed.  If true is returned, the caller MUST call noteConnectionDestroyed()
+    *  (below) in order for the bookkeeping to work.
+    */
+    @Override
+    public boolean noteReturnedConnection()
+    {
+      return parent.noteReturnedConnection(binNames);
+    }
+    
+    /** This method calculates whether a connection should be taken from the pool and destroyed
+    * in order to meet quota requirements.  If this method returns
+    * true, you MUST remove a connection from the pool, and you MUST call
+    * noteConnectionDestroyed() afterwards.
+    *@return true if a pooled connection should be destroyed.  If true is returned, the
+    * caller MUST call noteConnectionDestroyed() (below) in order for the bookkeeping to work.
+    */
+    @Override
+    public boolean checkDestroyPooledConnection()
+    {
+      return parent.checkDestroyPooledConnection(binNames, poolCount);
+    }
+    
+    /** Connection expiration is tricky, because even though a connection may be identified as
+    * being expired, at the very same moment it could be handed out in another thread.  So there
+    * is a natural race condition present.
+    * The way the connection throttler deals with that is to allow the caller to reserve a connection
+    * for expiration.  This must be called BEFORE the actual identified connection is removed from the
+    * connection pool.  If the value returned by this method is "true", then a connection MUST be removed
+    * from the pool and destroyed, whether or not the identified connection is actually still available for
+    * destruction.
+    *@return true if a connection from the pool can be expired.  If true is returned, noteConnectionDestroyed()
+    *  MUST be called once the connection has actually been destroyed.
+    */
+    @Override
+    public boolean checkExpireConnection()
+    {
+      return parent.checkExpireConnection(binNames, poolCount);
+    }
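+
+    /* Illustrative caller-side sketch of the expiration protocol described above.  The pool
+    * object and destroy() helper are assumptions made purely for illustration; only the
+    * throttler calls themselves are methods of this class:
+    *
+    *   if (throttler.checkExpireConnection())
+    *   {
+    *     Connection c = pool.removeAny();  // any pooled connection will do at this point
+    *     destroy(c);
+    *     throttler.noteConnectionDestroyed();
+    *   }
+    */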
+    
+    /** Note that a connection has been returned to the pool.  Call this method after a connection has been
+    * placed back into the pool and is available for use.
+    */
+    @Override
+    public void noteConnectionReturnedToPool()
+    {
+      parent.noteConnectionReturnedToPool(binNames, poolCount);
+    }
+
+    /** Note that a connection has been destroyed.  Call this method ONLY after noteReturnedConnection()
+    * or checkDestroyPooledConnection() returns true, AND the connection has been already
+    * destroyed.
+    */
+    @Override
+    public void noteConnectionDestroyed()
+    {
+      parent.noteConnectionDestroyed(binNames);
+    }
+  }
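+
+  /* Illustrative sketch of how calling code might drive the IConnectionThrottler protocol
+  * implemented above.  The pool object, the createConnection() and destroy() helpers, and the
+  * CONNECTION_FROM_POOL constant name are assumptions made for illustration; only the throttler
+  * methods themselves come from this class:
+  *
+  *   int rval = throttler.waitConnectionAvailable();
+  *   if (rval == IConnectionThrottler.CONNECTION_FROM_CREATION)
+  *     connection = createConnection(throttler.getNewConnectionFetchThrottler());
+  *   else if (rval == IConnectionThrottler.CONNECTION_FROM_POOL)
+  *     connection = pool.remove();
+  *   else
+  *     return;  // shutting down
+  *   try
+  *   {
+  *     // ... use the connection ...
+  *   }
+  *   finally
+  *   {
+  *     if (throttler.noteReturnedConnection())
+  *     {
+  *       destroy(connection);
+  *       throttler.noteConnectionDestroyed();
+  *     }
+  *     else
+  *     {
+  *       pool.add(connection);
+  *       throttler.noteConnectionReturnedToPool();
+  *     }
+  *   }
+  */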
+  
+  /** Fetch throttler implementation class.
+  * This basically stores some parameters and links back to ThrottlingGroup.
+  */
+  protected static class FetchThrottler implements IFetchThrottler
+  {
+    protected final ThrottlingGroup parent;
+    protected final String[] binNames;
+    
+    public FetchThrottler(ThrottlingGroup parent, String[] binNames)
+    {
+      this.parent = parent;
+      this.binNames = binNames;
+    }
+    
+    /** Get permission to fetch a document.  This grants permission to start
+    * fetching a single document, within the connection that created this object
+    * and that has already been granted permission.
+    *@return false if the throttler is being shut down.
+    */
+    @Override
+    public boolean obtainFetchDocumentPermission()
+      throws InterruptedException
+    {
+      return parent.obtainFetchDocumentPermission(binNames);
+    }
+    
+    /** Open a fetch stream.  When done (or aborting), call
+    * IStreamThrottler.closeStream() to note the completion of the document
+    * fetch activity.
+    *@return the stream throttler to use to throttle the actual data access.
+    */
+    @Override
+    public IStreamThrottler createFetchStream()
+    {
+      return parent.createFetchStream(binNames);
+    }
+
+  }
+  
+  /** Stream throttler implementation class.
+  * This basically stores some parameters and links back to ThrottlingGroup.
+  */
+  protected static class StreamThrottler implements IStreamThrottler
+  {
+    protected final ThrottlingGroup parent;
+    protected final String[] binNames;
+    
+    public StreamThrottler(ThrottlingGroup parent, String[] binNames)
+    {
+      this.parent = parent;
+      this.binNames = binNames;
+    }
+
+    /** Obtain permission to read a block of bytes.  This method may wait until it is OK to proceed.
+    * The throttle group, bin names, etc are already known
+    * to this specific interface object, so it is unnecessary to include them here.
+    *@param byteCount is the number of bytes to get permissions to read.
+    *@return true if the wait took place as planned, or false if the system is being shut down.
+    */
+    @Override
+    public boolean obtainReadPermission(int byteCount)
+      throws InterruptedException
+    {
+      return parent.obtainReadPermission(binNames, byteCount);
+    }
+      
+    /** Note the completion of the read of a block of bytes.  Call this after
+    * obtainReadPermission() was successfully called, and bytes were successfully read.
+    *@param origByteCount is the originally requested number of bytes to get permissions to read.
+    *@param actualByteCount is the number of bytes actually read.
+    */
+    @Override
+    public void releaseReadPermission(int origByteCount, int actualByteCount)
+    {
+      parent.releaseReadPermission(binNames, origByteCount, actualByteCount);
+    }
+
+    /** Note the stream being closed.
+    */
+    @Override
+    public void closeStream()
+    {
+      parent.closeStream(binNames);
+    }
+
+  }
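+
+  /* Illustrative sketch of how fetch and stream throttling chain together.  The input stream and
+  * buffer handling are assumptions made for illustration; the throttler methods are those defined
+  * above:
+  *
+  *   if (!fetchThrottler.obtainFetchDocumentPermission())
+  *     return;  // shutting down
+  *   IStreamThrottler streamThrottler = fetchThrottler.createFetchStream();
+  *   try
+  *   {
+  *     byte[] buffer = new byte[4096];
+  *     while (true)
+  *     {
+  *       if (!streamThrottler.obtainReadPermission(buffer.length))
+  *         return;  // shutting down
+  *       int amt = inputStream.read(buffer);
+  *       streamThrottler.releaseReadPermission(buffer.length, (amt == -1) ? 0 : amt);
+  *       if (amt == -1)
+  *         break;
+  *     }
+  *   }
+  *   finally
+  *   {
+  *     streamThrottler.closeStream();
+  *   }
+  */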
+  
+}
diff --git a/framework/core/src/test/java/org/apache/manifoldcf/core/common/DateTest.java b/framework/core/src/test/java/org/apache/manifoldcf/core/common/DateTest.java
index 9ab1500..24e23f3 100644
--- a/framework/core/src/test/java/org/apache/manifoldcf/core/common/DateTest.java
+++ b/framework/core/src/test/java/org/apache/manifoldcf/core/common/DateTest.java
@@ -37,8 +37,12 @@
     assertNotNull(d);
     d = DateParser.parseISO8601Date("2012-11-15T01:32:33+0100");
     assertNotNull(d);
+    d = DateParser.parseISO8601Date("2012-11-15T01:32:33-03:00");
+    assertNotNull(d);
     d = DateParser.parseISO8601Date("2012-11-15T01:32:33GMT-03:00");
     assertNotNull(d);
+    d = DateParser.parseISO8601Date("2012-11-15T01:32:33.001-04:00");
+    assertNotNull(d);
   }
 
 
diff --git a/framework/core/src/test/java/org/apache/manifoldcf/core/fuzzyml/TestFuzzyML.java b/framework/core/src/test/java/org/apache/manifoldcf/core/fuzzyml/TestFuzzyML.java
new file mode 100644
index 0000000..ab058f0
--- /dev/null
+++ b/framework/core/src/test/java/org/apache/manifoldcf/core/fuzzyml/TestFuzzyML.java
@@ -0,0 +1,234 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.core.fuzzyml;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.core.system.ManifoldCF;
+
+import java.io.*;
+import java.util.*;
+import org.junit.*;
+
+/** Test fuzzyml parser */
+public class TestFuzzyML
+{
+  
+  protected final static String fuzzyTestString = 
+"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n"+
+"<rss version=\"2.0\"\n"+
+"	xmlns:content=\"http://purl.org/rss/1.0/modules/content/\"\n"+
+"	xmlns:wfw=\"http://wellformedweb.org/CommentAPI/\"\n"+
+"	xmlns:dc=\"http://purl.org/dc/elements/1.1/\"\n"+
+"	xmlns:atom=\"http://www.w3.org/2005/Atom\"\n"+
+"	xmlns:sy=\"http://purl.org/rss/1.0/modules/syndication/\"\n"+
+"	xmlns:slash=\"http://purl.org/rss/1.0/modules/slash/\"\n"+
+"	>\n"+
+"		<item>\n"+
+"		<title>fme File Exchange Plattform – Austausch mit Externen leichtgemacht</title>\n"+
+"		<link>http://blog.fme.de/allgemein/2013-07/fme-file-exchange-plattform-austausch-mit-externen-leichtgemacht</link>\n"+
+"		<comments>http://blog.fme.de/allgemein/2013-07/fme-file-exchange-plattform-austausch-mit-externen-leichtgemacht#comments</comments>\n"+
+"		<pubDate>Thu, 18 Jul 2013 07:39:03 +0000</pubDate>\n"+
+"		<dc:creator>Jan Pfitzner</dc:creator>\n"+
+"				<category><![CDATA[Allgemein]]></category>\n"+
+"		<category><![CDATA[ECM mit Alfresco]]></category>\n"+
+"		<category><![CDATA[Alfresco]]></category>\n"+
+"		<category><![CDATA[Content Repository]]></category>\n"+
+"		<category><![CDATA[File Exchange Plattform]]></category>\n"+
+"		<category><![CDATA[File Sharing]]></category>\n"+
+"		<category><![CDATA[Webtechnologien]]></category>\n"+
+"\n"+
+"		<guid isPermaLink=\"false\">http://blog.fme.de/?p=1806</guid>\n"+
+"		<description><![CDATA[Der Austausch von Dokumenten mit Partnern, Agenturen oder auch Kunden ist aus Sicht der Unternehmens-IT immer noch ein oft problematisches Feld. Der Austausch von großen Dateimengen per Mail &#8220;bläht&#8221; die Postfächer unnötig auf und führt zu ineffizienten Prozessen (&#8220;welche Version ist noch aktuell?&#8221;). Die von den Endanwendern in Eigenverantwortung oft gewählte Cloud-Alternative – die Nutzung [...]]]></description>\n"+
+"			<content:encoded><![CDATA[<p><strong>Der Austausch von Dokumenten mit Partnern, Agenturen oder auch Kunden ist aus Sicht der Unternehmens-IT immer noch ein oft problematisches Feld. Der Austausch von großen Dateimengen per Mail &#8220;bläht&#8221; die Postfächer unnötig auf und führt zu ineffizienten Prozessen (&#8220;welche Version ist noch aktuell?&#8221;). Die von den Endanwendern in Eigenverantwortung oft gewählte Cloud-Alternative – die Nutzung von Cloud-basierten File-Sharing-Diensten wie Dropbox, Google Drive &amp; Co – ist und bleibt nicht erst seit #Prism ein &#8220;Schreckgespenst&#8221; für viele CIOs. Die fme file Exchange Plattform bietet eine compliance- und integrationsfähige Alternative.</strong></p>\n"+
+"<p>Die typischen Anforderungen an eine Dateiaustauschplattform lassen sich in der folgenden Form zusammenfassen:</p>\n"+
+"<p><strong>Als Mitarbeiter des Unternehmens möchte ich…</strong></p>\n"+
+"<ul>\n"+
+"<li>mich mit meinem Standard-Unternehmens-Account anmelden können, am besten per SSO,</li>\n"+
+"<li>Sammlungen von Dateien mit externen Benutzern austauschen,</li>\n"+
+"<li>angeben können, welche Benutzer lesend oder schreibend auf die Dateien zugreifen können,</li>\n"+
+"<li>Dokumente vom lokalen PC oder dem internen Dokumenten-Management-System hochladen/auswählen können,</li>\n"+
+"<li>weitere Dateien zu einer Dateisammlung hinzufügen und vorhandene aktualisieren oder löschen können und</li>\n"+
+"<li>die Benutzer automatisch benachrichtigen, wenn neue oder geänderte Dateien vorhanden sind.</li>\n"+
+"</ul>\n"+
+"<p><span id=\"more-1806\"></span></p>\n"+
+"<p><strong>Als externer Benutzer möchte ich…</strong></p>\n"+
+"<ul>\n"+
+"<li>mich einfach im System registrieren können,</li>\n"+
+"<li>vom System benachrichtigt werden, wenn ein anderer Benutzer neue Dateien für mich bereitgestellt hat und</li>\n"+
+"<li>auf die für mich freigegeben Dateisammlungen einfach zugreifen können.</li>\n"+
+"</ul>\n"+
+"<p><strong>Als IT-Verantwortlicher möchte ich …</strong></p>\n"+
+"<ul>\n"+
+"<li>dass das System in der eigenen DMZ betrieben werden kann,</li>\n"+
+"<li>dass jede Registrierung von einem Mitarbeiter meines Unternehmens – der Kontaktperson des Externen – freigeschaltet werden muss,</li>\n"+
+"<li>eine Übersicht über alle registrierten Benutzer und deren Dateisammlungen haben und dort jederzeit einzelne Dateien oder ganze Sammlungen löschen können,</li>\n"+
+"<li>dass freigebende Dateisammlungen nach einer konfigurierbaren Zeitspanne gelöscht werden und</li>\n"+
+"<li>verhindern, dass registrierte externe Benutzer das System missbrauchen, indem sie Dateien ohne Beteiligung eines Mitarbeiters austauschen.</li>\n"+
+"</ul>\n"+
+"<p>Anhand dieser primären Anforderungen eines unserer Kunden haben wir auf Basis von Alfresco und einiger modernen Webtechnologien wie Bootstrap &amp; AngularJS  die fme File Exchange Plattform entwickelt:</p>\n"+
+"<p style=\"text-align: center;\"><a href=\"http://blog.fme.de/wp-content/uploads/2013/07/File-Exchange-Plattform.png\" rel=\"lightbox[1806]\"><img class=\"aligncenter  wp-image-1819\" title=\"File Exchange Plattform\" src=\"http://blog.fme.de/wp-content/uploads/2013/07/File-Exchange-Plattform.png\" alt=\"\" width=\"405\" height=\"240\" /></a></p>\n"+
+"<p>&nbsp;</p>\n"+
+"<p>Alfresco dient in dieser Lösung vor allem als Content Repository inkl. der benötigten Basis -Funktionen wie Berechtigungen, Versionierung, Mailbenachrichtigungen, Dokumentenvorschau sowie Gruppen- &amp; Benutzerverwaltung. Außerdem wird der Registrierungsmechanismus für die externen Benutzer über die in Alfresco integrierte Workflow-Engine Activiti umgesetzt.</p>\n"+
+"<p>Die Benutzungsoberfläche der Applikation ist in Form einer Single Page Webapplikation auf Basis der beiden Frameworks AngularJS (<a href=\"http://angularjs.org/\">http://angularjs.org/</a>) &amp; Bootstrap (<a href=\"http://twitter.github.io/bootstrap/\">http://twitter.github.io/bootstrap/</a>) implementiert.</p>\n"+
+"<p>Die Kommunikation zwischen diesem JavaScript-Frontend und dem Alfresco-Backend ist mittels sog. Alfresco WebScripts umgesetzt. Das Alfresco WebScript-Framework ist eines der größten Vorteile von Alfresco und ermöglicht die Implementierung einer eigenen auf Anwendungsfall angepassten REST API.</p>\n"+
+"<p>Aus der Sicht des Anwenders teilt sich die Applikation in die folgenden, beiden Komponenten:</p>\n"+
+"<p><strong>New File Collection</strong></p>\n"+
+"<p>Hier haben die Mitarbeiter des Unternehmens die Möglichkeit, eine neue Dateisammlung zum Austausch zu erstellen. Zur Erstellung einer neuen Dateisammlung sind lediglich die folgenden Informationen notwendig:</p>\n"+
+"<ol>\n"+
+"<li>Name der Dateisammlung.</li>\n"+
+"<li>Berechtigte Benutzer – lesend oder schreibend</li>\n"+
+"<li>Die Dateien, die hochgeladen werden sollen</li>\n"+
+"</ol>\n"+
+"<p>Um diesen Erstellungsprozess so intuitiv &amp; komfortabel wie möglich zu halten, wurde zur Benutzerauswahl ein Eingabefeld inkl. einer Autovervollständigung anhand der vorhandenen Benutzerkonten eingesetzt.</p>\n"+
+"<p>Zum einfachen Hinzufügen von Dateien vom PC des Benutzers wird neben der üblichen Dateiauswahl auch ein direktes Drag&amp;Drop unterstützt.</p>\n"+
+"<p>Während des Hochladevorgangs der Dateien ist der Benutzer durch den Einsatz von Fortschrittsbalken jederzeit über den aktuellen Fortschritt des Uploads informiert.</p>\n"+
+"<p>Mit Abschluss des Hochladevorgangs werden die berechtigten Benutzer vom System per Mail über die neue Dateisammlung informiert.</p>\n"+
+"<p style=\"text-align: center;\"><a href=\"http://blog.fme.de/wp-content/uploads/2013/07/New-File-Collection1.png\" rel=\"lightbox[1806]\"><img class=\"aligncenter  wp-image-1827\" title=\"New File Collection\" src=\"http://blog.fme.de/wp-content/uploads/2013/07/New-File-Collection1.png\" alt=\"\" width=\"456\" height=\"377\" /></a></p>\n"+
+"<p>&nbsp;</p>\n"+
+"<p><strong>My File Collections</strong></p>\n"+
+"<p>Im Bereich „My File Collection” hat der angemeldete Benutzer einen direkten und einfachen Zugriff auf die Dateisammlungen, für die er berechtigt ist. Je nach vergebener Berechtigung kann der Benutzer die  enthaltenen Dateien herunterladen, aktualisieren &amp; löschen oder aber auch neue Dokumente hochladen und  Ordner erstellen.</p>\n"+
+"<p>Zudem  kann der Ersteller einer Dateisammlung oder ein Administrator die Sammlung für weitere Benutzer freigeben bzw. Freigaben zurücknehmen und die automatische Ablauffrist einer Dateisammlung aktivieren bzw. deaktivieren.</p>\n"+
+"<p>&nbsp;</p>\n"+
+"<p style=\"text-align: center;\"><a href=\"http://blog.fme.de/wp-content/uploads/2013/07/My-File-Collections.png\" rel=\"lightbox[1806]\"><img class=\"aligncenter  wp-image-1828\" title=\"My File Collections\" src=\"http://blog.fme.de/wp-content/uploads/2013/07/My-File-Collections.png\" alt=\"\" width=\"492\" height=\"377\" /></a></p>\n"+
+"<p>&nbsp;</p>\n"+
+"<p><strong>Fazit</strong></p>\n"+
+"<p>Die Kombination von Alfresco, AngularJS &amp; Bootstrap hat sich während der Lösungsentwicklung als wunderbar harmonierende und sehr produktive Kombination zur Entwicklung von eigenen dokumentenorientieren Webapplikationen erwiesen.</p>\n"+
+"<p>Die Nutzung dieser erprobten Werkzeuge bringt zudem die Unterstützung von Webbrowsers mobiler Geräte nahezu zum Nulltarif mit. So ist die Bedienung der Anwendung mit einem iPad oder iPhone problemlos möglich und bspw.  das einfache Bereitstellen der Fotografien vom Flipchart aus dem gerade beendeten Workshop  schnell &amp; einfach erledigt.</p>\n"+
+"<p>Die Lösung zeigt einmal mehr, dass es neben der Konfiguration &amp; Anpassung eines Standard-Clients immer auch eine zweite zu betrachtende Variante gibt:</p>\n"+
+"<p>Die Implementierung eines eigenen Clients anhand von best-of-breed Technologien, wie in diesem Beispiel zur vollsten Kundenzufriedenheit gezeigt.</p>\n"+
+"<p>&nbsp;</p>\n"+
+"]]></content:encoded>\n"+
+"			<wfw:commentRss>http://blog.fme.de/allgemein/2013-07/fme-file-exchange-plattform-austausch-mit-externen-leichtgemacht/feed</wfw:commentRss>\n"+
+"		<slash:comments>0</slash:comments>\n"+
+"		</item>\n"+
+"		<item>\n"+
+"		<title>migration-center vs. EMC&#8217;s EMA &#8211; the main differences</title>\n"+
+"		<link>http://blog.fme.de/allgemein/2013-07/migration-center-vs-emcs-ema-the-main-differences</link>\n"+
+"		<comments>http://blog.fme.de/allgemein/2013-07/migration-center-vs-emcs-ema-the-main-differences#comments</comments>\n"+
+"		<pubDate>Thu, 04 Jul 2013 09:30:09 +0000</pubDate>\n"+
+"		<dc:creator>fpiaszyk</dc:creator>\n"+
+"				<category><![CDATA[Allgemein]]></category>\n"+
+"		<category><![CDATA[ECM Consulting]]></category>\n"+
+"		<category><![CDATA[EMC Documentum]]></category>\n"+
+"		<category><![CDATA[Software Technology]]></category>\n"+
+"		<category><![CDATA[D2]]></category>\n"+
+"		<category><![CDATA[delta Migration]]></category>\n"+
+"		<category><![CDATA[Documentum. xCP]]></category>\n"+
+"		<category><![CDATA[Migration]]></category>\n"+
+"		<category><![CDATA[migration-center]]></category>\n"+
+"		<category><![CDATA[no downtime]]></category>\n"+
+"\n"+
+"		<guid isPermaLink=\"false\">http://blog.fme.de/?p=1793</guid>\n"+
+"		<description><![CDATA[EMA is NOT an out-of-the-box product, it&#8217;s a tool set (framework) that ONLY the Documentum Professional Services Team uses. This tool is exclusively available through EMC IIG Services. Once the engagement is over, EMA leaves with the team and cannot be used for additional migrations. Partners and customers are not able to use EMA without [...]]]></description>\n"+
+"			<content:encoded><![CDATA[<p><a href=\"http://blog.fme.de/wp-content/uploads/2013/07/product_06.jpg\" rel=\"lightbox[1793]\"><img class=\"alignright size-thumbnail wp-image-1802\" title=\"product_06\" src=\"http://blog.fme.de/wp-content/uploads/2013/07/product_06-150x150.jpg\" alt=\"\" width=\"150\" height=\"150\" /></a></p>\n"+
+"<p>EMA is NOT an out-of-the-box product, it&#8217;s a tool set (framework) that ONLY the Documentum Professional Services Team uses. This tool is exclusively available through EMC IIG Services. Once the engagement is over, EMA leaves with the team and cannot be used for additional migrations. Partners and customers are not able to use EMA without IIG Consulting in a project because EMA bypasses the API (DFC).</p>\n"+
+"<p>The main use case for EMA is the high speed cloning from Oracle based on-premise Documentum installations to MS SQL Server based off-premise (EMC onDemand) installations. For this approach a simple dump&amp;load is not feasible and a tool for cloning is needed. In addition EMC addressed some other use cases at EMC World 2013 like version upgrades (DCM to D2 life sciences and Webtop to D2 or xCP) and third-party migrations.</p>\n"+
+"<p>&nbsp;</p>\n"+
+"<p><strong>Speed vs. Business Requirements and methodology</strong></p>\n"+
+"<p>Cloning or a 1:1 migration of all objects located in a repository without reorganization and clean-up has no additional business value, the result is just a new platform and/or version (garbage in, garbage out). With EMA, changes on business requirements can not be applied easily during the migration (e.g. new metadata, new object types, business logic etc.). The results of the actual migration can not be discussed with the business department before the content is imported in the target system. If needed, duplicates can not be dictated and managed. And furthermore it is not possible to apply changes during the project with just a few clicks of the mouse as you could with migration-center.</p>\n"+
+"<p><span id=\"more-1793\"></span></p>\n"+
+"<p>migration-center is designed for administrators to support them in all phases of a migration project and not just during the import. Over the last eight years migration-center has been developed together with our partners and customers into a product which integrates a full project methodology and all essential capabilities for the different steps during a project. Today migration-center is not only a product but rather a migration platform which covers numerous migration scenarios.</p>\n"+
+"<p>&nbsp;</p>\n"+
+"<p><strong>Summary</strong></p>\n"+
+"<p>Unlike EMC&#8217;s EMA migration-center is an out-of-the box product and comes with a user-friendly user interface, full maintenance and support as well as regular updates. The product has been EMC certified since 2005 and is a proven migration platform for any type of platform independent content migration. migration-center has proven its quality in more than 130 projects around the world, especially within regulated environments. The product may be either leased or purchased by customers, partners or other preferred system integrators. migration-center is scalable because of the open architecture which is designed for large migrations without interrupting daily business operations. It reaches the maximum performance of the source and target systems without any restrictions. A large variety of algorithms and commands are available to meet all migration requirements without additional programming or scripting effort.</p>\n"+
+"<p>migration-center&#8217;s abilities will make a positive difference in your business if you are challenged with migrating enterprise content. Convince yourself and let us demonstrate your data working within migration-center in just a couple of days before you lease or buy the whole product.</p>\n"+
+"<p>&nbsp;</p>\n"+
+"<p><strong>Benefits of using migration-center instead of EMA</strong></p>\n"+
+"<p>•  No service engagement with EMC IIG Services required</p>\n"+
+"<p>•  migration-center is a proven product since 2005 and was used in more than 130 projects around the world</p>\n"+
+"<p>•  It is a full function, out-of-the-box software, fully documented, easy to deploy, with an excellent graphical user interface</p>\n"+
+"<p>•  Product and migration project support available</p>\n"+
+"<p>•  NO DOWNTIME &#8211; It works without interrupting any of your normal business operations because of the delta migration capability</p>\n"+
+"<p>•  The software is designed to efficiently move and classify large volumes of documents into the chosen target repository (up to 1 million documents per day with just a single process)</p>\n"+
+"<p>•  A large variety of algorithms and commands are available to create a very specific mapping and transformation logic, covering every migration requirement possible and all this without additional programming or scripting effort</p>\n"+
+"<p>•  migration-center is able to apply the business logic of highly customized applications (e.g. D2, TBOs, server methods etc.)</p>\n"+
+"<p>•  Complete rule simulation and error handling are provided, allowing in depth testing of transformation rules before committing content to the actual repository import</p>\n"+
+"<p>•  The migration-center grants more than 55 out-of-the-box connections from various source to various target systems</p>\n"+
+"<p>•  migration-center supports Documentum versions from 4i to D7</p>\n"+
+"<p>•  migration-center reflects proven migration methodology in its step-by-step processes</p>\n"+
+"<p>•  migration-center has been validated and approved by many companies in regulated environments, e.g. international pharmaceutical players</p>\n"+
+"<p>•  The migration-center pricing model is simple and very flexible</p>\n"+
+"<p>•  Through its enormous flexibility the migration-center may be deployed for a variety of migration scenarios</p>\n"+
+"<p>•  With our international partners we offer customer product training as well as complete implementation by highly competent professionals</p>\n"+
+"<p>•  migration-center is much more them a simple ETL tool</p>\n"+
+"<p>•  migration-center offers a variety of proven pre-confugure services</p>\n"+
+"<p>&nbsp;</p>\n"+
+"<p>&nbsp;</p>\n"+
+"<p>&nbsp;</p>\n"+
+"<p>&nbsp;</p>\n"+
+"<p>&nbsp;</p>\n"+
+"]]></content:encoded>\n"+
+"			<wfw:commentRss>http://blog.fme.de/allgemein/2013-07/migration-center-vs-emcs-ema-the-main-differences/feed</wfw:commentRss>\n"+
+"		<slash:comments>0</slash:comments>\n"+
+"		</item>\n"
+;
+
+  @Test
+  public void testFailure()
+    throws IOException, ManifoldCFException
+  {
+    org.apache.manifoldcf.core.system.Logging.misc = org.apache.log4j.Logger.getLogger("test");
+    InputStream is = new ByteArrayInputStream(fuzzyTestString.getBytes("utf-8"));
+    Parser p = new Parser();
+    // Parse the document.  This will cause various things to occur, within the instantiated XMLParsingContext class.
+    XMLFuzzyHierarchicalParseState x = new XMLFuzzyHierarchicalParseState();
+    x.setContext(new TestParsingContext(x));
+    try
+    {
+      // Believe it or not, there are no parsing errors we can get back now.
+      p.parseWithCharsetDetection(null,is,x);
+    }
+    finally
+    {
+      x.cleanup();
+    }
+  }
+
+  protected static class TestParsingContext extends XMLParsingContext
+  {
+    protected String thisTag = null;
+    
+    public TestParsingContext(XMLFuzzyHierarchicalParseState theStream)
+    {
+      super(theStream);
+    }
+    
+    public TestParsingContext(XMLFuzzyHierarchicalParseState theStream, String namespace, String localname, String qname, Map<String,String> theseAttributes)
+    {
+      super(theStream,namespace,localname,qname,theseAttributes);
+    }
+    
+    @Override
+    protected XMLParsingContext beginTag(String namespace, String localName, String qName, Map<String,String> atts)
+      throws ManifoldCFException
+    {
+      thisTag = qName;
+      return new TestParsingContext(theStream,namespace,localName,qName,atts);
+    }
+
+    @Override
+    protected void endTag()
+      throws ManifoldCFException
+    {
+      super.endTag();
+    }
+
+  }
+  
+}
diff --git a/framework/core/src/test/java/org/apache/manifoldcf/core/lockmanager/TestZooKeeperLocks.java b/framework/core/src/test/java/org/apache/manifoldcf/core/lockmanager/TestZooKeeperLocks.java
new file mode 100644
index 0000000..53fb1ed
--- /dev/null
+++ b/framework/core/src/test/java/org/apache/manifoldcf/core/lockmanager/TestZooKeeperLocks.java
@@ -0,0 +1,345 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.core.lockmanager;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import java.io.*;
+import java.util.*;
+import java.util.concurrent.atomic.*;
+import org.junit.*;
+import static org.junit.Assert.*;
+
+public class TestZooKeeperLocks extends ZooKeeperBase
+{
+  protected File synchDir = null;
+  
+  protected final static int readerThreadCount = 10;
+  protected final static int writerThreadCount = 5;
+
+  @Test
+  public void multiThreadZooKeeperLockTest()
+    throws Exception
+  {
+    // First, set off the threads
+    ZooKeeperConnectionPool pool = new ZooKeeperConnectionPool("localhost:8348",2000);
+    LockObjectFactory factory = new ZooKeeperLockObjectFactory(pool);
+
+    runTest(factory);
+  }
+  
+  @Before
+  public void createSynchDir()
+    throws Exception
+  {
+    synchDir = new File("synchdir");
+    synchDir.mkdir();
+  }
+
+  @After
+  public void removeSynchDir()
+    throws Exception
+  {
+    if (synchDir != null)
+      deleteRecursively(synchDir);
+    synchDir = null;
+  }
+  
+  @Test
+  public void multiThreadFileLockTest()
+    throws Exception
+  {
+    runTest(new FileLockObjectFactory(synchDir));
+  }
+  
+  protected static void runTest(LockObjectFactory factory)
+    throws Exception
+  {
+    String lockKey = "testkey";
+    AtomicInteger ai = new AtomicInteger(0);
+    
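+    // Protocol encoded in the shared counter: each reader increments it once while holding a read
+    // lock (so it only reaches readerThreadCount if read locks are non-exclusive), and each writer
+    // later increments it twice within a single write lock, so a thread holding a read lock must
+    // never observe an odd offset from readerThreadCount.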
+    ReaderThread[] readerThreads = new ReaderThread[readerThreadCount];
+    for (int i = 0 ; i < readerThreadCount ; i++)
+    {
+      readerThreads[i] = new ReaderThread(factory, lockKey, ai);
+      readerThreads[i].start();
+    }
+
+    WriterThread[] writerThreads = new WriterThread[writerThreadCount];
+    for (int i = 0 ; i < writerThreadCount ; i++)
+    {
+      writerThreads[i] = new WriterThread(factory, lockKey, ai);
+      writerThreads[i].start();
+    }
+    
+    for (int i = 0 ; i < readerThreadCount ; i++)
+    {
+      Throwable e = readerThreads[i].finishUp();
+      if (e != null)
+      {
+        if (e instanceof RuntimeException)
+          throw (RuntimeException)e;
+        if (e instanceof Error)
+          throw (Error)e;
+        if (e instanceof Exception)
+          throw (Exception)e;
+      }
+    }
+    
+    for (int i = 0 ; i < writerThreadCount ; i++)
+    {
+      Throwable e = writerThreads[i].finishUp();
+      if (e != null)
+      {
+        if (e instanceof RuntimeException)
+          throw (RuntimeException)e;
+        if (e instanceof Error)
+          throw (Error)e;
+        if (e instanceof Exception)
+          throw (Exception)e;
+      }
+    }
+    
+  }
+  
+  protected static void enterReadLock(LockObject lo)
+    throws Exception
+  {
+    try
+    {
+      lo.enterReadLock();
+    }
+    catch (ExpiredObjectException e)
+    {
+      throw new ManifoldCFException("Unexpected exception: "+e.getMessage(),e);
+    }
+  }
+  
+  protected static void leaveReadLock(LockObject lo)
+    throws Exception
+  {
+    try
+    {
+      lo.leaveReadLock();
+    }
+    catch (ExpiredObjectException e)
+    {
+      throw new ManifoldCFException("Unexpected exception: "+e.getMessage(),e);
+    }
+  }
+
+  protected static void enterWriteLock(LockObject lo)
+    throws Exception
+  {
+    try
+    {
+      lo.enterWriteLock();
+    }
+    catch (ExpiredObjectException e)
+    {
+      throw new ManifoldCFException("Unexpected exception: "+e.getMessage(),e);
+    }
+  }
+  
+  protected static void leaveWriteLock(LockObject lo)
+    throws Exception
+  {
+    try
+    {
+      lo.leaveWriteLock();
+    }
+    catch (ExpiredObjectException e)
+    {
+      throw new ManifoldCFException("Unexpected exception: "+e.getMessage(),e);
+    }
+  }
+  
+  /** Reader thread */
+  protected static class ReaderThread extends Thread
+  {
+    protected final LockObjectFactory factory;
+    protected final Object lockKey;
+    protected final AtomicInteger ai;
+    
+    protected Throwable exception = null;
+    
+    public ReaderThread(LockObjectFactory factory, Object lockKey, AtomicInteger ai)
+    {
+      setName("reader");
+      this.factory = factory;
+      this.lockKey = lockKey;
+      this.ai = ai;
+    }
+    
+    public void run()
+    {
+      try
+      {
+        // Create a new lock pool since that is the best way to ensure real
+        // ZooKeeper action.
+        LockPool lp = new LockPool(factory);
+        LockObject lo;
+        // First test: count all reader threads inside read lock.
+        // This guarantees that read locks are non-exclusive.
+        // Enter read lock
+        System.out.println("Entering read lock");
+        lo = lp.getObject(lockKey);
+        enterReadLock(lo);
+        try
+        {
+          System.out.println(" Read lock entered!");
+          // Count this thread
+          ai.incrementAndGet();
+          // Wait until all readers have been counted.  This test will hang if the readers function
+          // exclusively
+          while (ai.get() < readerThreadCount)
+          {
+            Thread.sleep(10L);
+          }
+        }
+        finally
+        {
+          System.out.println("Leaving read lock");
+          leaveReadLock(lo);
+          System.out.println(" Left read lock!");
+        }
+        // Now, all the writers will get involved; we just need to make sure we never see an inconsistent value
+        while (ai.get() < readerThreadCount + 2*writerThreadCount)
+        {
+          System.out.println("Waiting for all write threads to succeed...");
+          lo = lp.getObject(lockKey);
+          enterReadLock(lo);
+          try
+          {
+            // The writer thread will increment the counter twice for every thread, both times within the lock.
+            // We never want to see the intermediate values.
+            if ((ai.get() - readerThreadCount) % 2 == 1)
+              throw new Exception("Was able to read when write lock in place");
+          }
+          finally
+          {
+            leaveReadLock(lo);
+          }
+          Thread.sleep(100L);
+        }
+        System.out.println("Done with reader thread");
+      }
+      catch (InterruptedException e)
+      {
+      }
+      catch (Throwable e)
+      {
+        exception = e;
+      }
+    }
+    
+    public Throwable finishUp()
+      throws InterruptedException
+    {
+      join();
+      return exception;
+    }
+    
+  }
+
+  /** Writer thread */
+  protected static class WriterThread extends Thread
+  {
+    protected final LockObjectFactory factory;
+    protected final Object lockKey;
+    protected final AtomicInteger ai;
+
+    protected Throwable exception = null;
+    
+    public WriterThread(LockObjectFactory factory, Object lockKey, AtomicInteger ai)
+    {
+      setName("writer");
+      this.factory = factory;
+      this.lockKey = lockKey;
+      this.ai = ai;
+    }
+    
+    public void run()
+    {
+      try
+      {
+        // Create a new lock pool since that is the best way to ensure real
+        // ZooKeeper action.
+        // LockPool is a dummy
+        LockPool lp = new LockPool(factory);
+        LockObject lo;
+        // Take write locks but free them if read is what's active
+        while (true)
+        {
+          lo = lp.getObject(lockKey);
+          enterWriteLock(lo);
+          System.out.println("Made it into write lock");
+          try
+          {
+            // Check if we made it in during read cycle... that would be bad.
+            if (ai.get() > 0 && ai.get() < readerThreadCount)
+              throw new Exception("Was able to write even when readers were active");
+            if (ai.get() >= readerThreadCount)
+              break;
+          }
+          finally
+          {
+            leaveWriteLock(lo);
+          }
+          Thread.sleep(100L);
+        }
+        
+        // Get write lock, increment twice, and leave write lock
+        lo = lp.getObject(lockKey);
+        enterWriteLock(lo);
+        try
+        {
+          if ((ai.get() - readerThreadCount) % 2 == 1)
+            throw new Exception("More than one writer thread active at the same time!");
+          ai.incrementAndGet();
+          // Keep the lock for a while so other threads have to wait
+          Thread.sleep(50L);
+          // Increment again
+          ai.incrementAndGet();
+          System.out.println("Updated write count");
+        }
+        finally
+        {
+          leaveWriteLock(lo);
+        }
+        // Completed successfully!
+      }
+      catch (InterruptedException e)
+      {
+      }
+      catch (Throwable e)
+      {
+        exception = e;
+      }
+    }
+    
+    public Throwable finishUp()
+      throws InterruptedException
+    {
+      join();
+      return exception;
+    }
+    
+  }
+  
+}
diff --git a/framework/core/src/test/java/org/apache/manifoldcf/core/lockmanager/ZooKeeperBase.java b/framework/core/src/test/java/org/apache/manifoldcf/core/lockmanager/ZooKeeperBase.java
new file mode 100644
index 0000000..436df9d
--- /dev/null
+++ b/framework/core/src/test/java/org/apache/manifoldcf/core/lockmanager/ZooKeeperBase.java
@@ -0,0 +1,64 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.core.lockmanager;
+
+import java.io.*;
+import java.util.*;
+import org.junit.*;
+import static org.junit.Assert.*;
+
+public class ZooKeeperBase
+{
+  protected File tempDir;
+  protected ZooKeeperInstance instance;
+  
+  @Before
+  public void startZooKeeper()
+    throws Exception
+  {
+    tempDir = new File("zookeeper");
+    tempDir.mkdir();
+    instance = new ZooKeeperInstance(8348,tempDir);
+    instance.start();
+  }
+  
+  @After
+  public void stopZookeeper()
+    throws Exception
+  {
+    instance.stop();
+    deleteRecursively(tempDir);
+  }
+  
+  protected static void deleteRecursively(File tempDir)
+    throws Exception
+  {
+    if (tempDir.isDirectory())
+    {
+      File[] files = tempDir.listFiles();
+      for (File f : files)
+      {
+        deleteRecursively(f);
+      }
+    }
+    tempDir.delete();
+  }
+  
+
+}
diff --git a/framework/core/src/test/java/org/apache/manifoldcf/core/lockmanager/ZooKeeperInstance.java b/framework/core/src/test/java/org/apache/manifoldcf/core/lockmanager/ZooKeeperInstance.java
new file mode 100644
index 0000000..75f99d6
--- /dev/null
+++ b/framework/core/src/test/java/org/apache/manifoldcf/core/lockmanager/ZooKeeperInstance.java
@@ -0,0 +1,128 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.core.lockmanager;
+
+import java.util.*;
+import java.io.*;
+import org.apache.zookeeper.server.*;
+import org.apache.zookeeper.server.quorum.*;
+
+public class ZooKeeperInstance
+{
+  protected final int zkPort;
+  protected final File tempDir;
+  
+  protected ZooKeeperThread zookeeperThread = null;
+  
+  public ZooKeeperInstance(int zkPort, File tempDir)
+  {
+    this.zkPort = zkPort;
+    this.tempDir = tempDir;
+  }
+
+  public void start()
+    throws Exception
+  {
+    Properties startupProperties = new Properties();
+    startupProperties.setProperty("tickTime","2000");
+    startupProperties.setProperty("dataDir",tempDir.toString());
+    startupProperties.setProperty("clientPort",Integer.toString(zkPort));
+
+    final QuorumPeerConfig quorumConfiguration = new QuorumPeerConfig();
+    quorumConfiguration.parseProperties(startupProperties);
+
+    final ServerConfig configuration = new ServerConfig();
+    configuration.readFrom(quorumConfiguration);
+
+    zookeeperThread = new ZooKeeperThread(configuration);
+    zookeeperThread.start();
+    // We have no way of knowing whether ZooKeeper is up yet; the client is supposed
+    // to handle that, but it doesn't, so wait for 5 seconds.
+    Thread.sleep(5000L);
+  }
+  
+  public void stop()
+    throws Exception
+  {
+    while (true)
+    {
+      if (zookeeperThread == null)
+        break;
+      else if (!zookeeperThread.isAlive())
+      {
+        Throwable e = zookeeperThread.finishUp();
+        if (e != null)
+        {
+          if (e instanceof RuntimeException)
+            throw (RuntimeException)e;
+          else if (e instanceof Exception)
+            throw (Exception)e;
+          else if (e instanceof Error)
+            throw (Error)e;
+        }
+        zookeeperThread = null;
+      }
+      else
+      {
+        // This isn't the best way to kill zookeeper but it's the only way
+        // we've got.
+        zookeeperThread.interrupt();
+        Thread.sleep(1000L);
+      }
+    }
+  }
+  
+  protected static class ZooKeeperThread extends Thread
+  {
+    protected final ServerConfig config;
+    
+    protected Throwable exception = null;
+    
+    public ZooKeeperThread(ServerConfig config)
+    {
+      this.config = config;
+    }
+    
+    public void run()
+    {
+      try
+      {
+        ZooKeeperServerMain server = new ZooKeeperServerMain();
+        server.runFromConfig(config);
+      }
+      catch (IOException e)
+      {
+        // Ignore IOExceptions, since that seems to be normal when shutting
+        // down zookeeper via thread.interrupt()
+      }
+      catch (Throwable e)
+      {
+        exception = e;
+      }
+    }
+    
+    public Throwable finishUp()
+      throws InterruptedException
+    {
+      join();
+      return exception;
+    }
+  }
+  
+}
diff --git a/framework/core/src/test/java/org/apache/manifoldcf/core/tests/Base.java b/framework/core/src/test/java/org/apache/manifoldcf/core/tests/Base.java
index 8322d51..d1cea32 100644
--- a/framework/core/src/test/java/org/apache/manifoldcf/core/tests/Base.java
+++ b/framework/core/src/test/java/org/apache/manifoldcf/core/tests/Base.java
@@ -184,7 +184,7 @@
     writeConnectorsXML(connectorsXMLContents);
     writeFile(connectorFile,connectorsXMLContents.toString());
 
-    ManifoldCF.initializeEnvironment();
+    ManifoldCF.initializeEnvironment(ThreadContextFactory.make());
   }
   
   protected void localSetUp()
@@ -223,9 +223,10 @@
       loggingFile.delete();
       connectorFile.delete();
       
-      ManifoldCF.cleanUpEnvironment();
+      IThreadContext threadContext = ThreadContextFactory.make();
+      ManifoldCF.cleanUpEnvironment(threadContext);
       // Just in case we're not synchronized...
-      ManifoldCF.resetEnvironment();
+      ManifoldCF.resetEnvironment(threadContext);
     }
   }
   
diff --git a/framework/core/src/test/java/org/apache/manifoldcf/core/tests/BaseDatabase.java b/framework/core/src/test/java/org/apache/manifoldcf/core/tests/BaseDatabase.java
new file mode 100644
index 0000000..e16738e
--- /dev/null
+++ b/framework/core/src/test/java/org/apache/manifoldcf/core/tests/BaseDatabase.java
@@ -0,0 +1,145 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.core.tests;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.core.system.ManifoldCF;
+
+import java.io.*;
+import java.util.*;
+import org.junit.*;
+
+/** This is a testing base class that is responsible for selecting the database via an abstract method which can be overridden. */
+public abstract class BaseDatabase extends Base
+{
+
+  /** Method to add properties to properties.xml contents.
+  * Override this method to add properties clauses to the property file.
+  */
+  @Override
+  protected void writeProperties(StringBuilder output)
+    throws Exception
+  {
+    super.writeProperties(output);
+    writeDatabaseProperties(output);
+    writeDatabaseMaxQueryTimeProperty(output);
+    writeDatabaseMaxHandlesProperty(output);
+    writeCrawlerThreadsProperty(output);
+    writeExpireThreadsProperty(output);
+    writeCleanupThreadsProperty(output);
+    writeDeleteThreadsProperty(output);
+    writeConnectorDebugProperty(output);
+  }
+  
+  /** Method to add database-specific (as opposed to test-specific) parameters to
+  * the property file.
+  */
+  protected void writeDatabaseProperties(StringBuilder output)
+    throws Exception
+  {
+    writeDatabaseImplementationProperty(output);
+    writeDatabaseControlProperties(output);
+  }
+  
+  /** Method to write the database implementation property */
+  protected void writeDatabaseImplementationProperty(StringBuilder output)
+    throws Exception
+  {
+    output.append(
+      "  <property name=\"org.apache.manifoldcf.databaseimplementationclass\" value=\""+getDatabaseImplementationClass()+"\"/>\n"
+    );
+  }
+  
+  /** Method to get database implementation class */
+  protected abstract String getDatabaseImplementationClass()
+    throws Exception;
+
+  /** Method to write the database control properties. */
+  protected abstract void writeDatabaseControlProperties(StringBuilder output)
+    throws Exception;
+  
+  /** Method to write the max query time. */
+  protected void writeDatabaseMaxQueryTimeProperty(StringBuilder output)
+    throws Exception
+  {
+    output.append(
+      "  <property name=\"org.apache.manifoldcf.database.maxquerytime\" value=\""+getDatabaseMaxQueryTimeProperty()+"\"/>\n"
+    );
+  }
+  
+  /** Method to get max query time property. */
+  protected int getDatabaseMaxQueryTimeProperty()
+    throws Exception
+  {
+    return 30;
+  }
+  
+  /** Method to write the max handles. */
+  protected void writeDatabaseMaxHandlesProperty(StringBuilder output)
+    throws Exception
+  {
+    output.append(
+      "  <property name=\"org.apache.manifoldcf.database.maxhandles\" value=\"80\"/>\n"
+    );
+  }
+  
+  /** Method to write crawler threads property. */
+  protected void writeCrawlerThreadsProperty(StringBuilder output)
+    throws Exception
+  {
+    output.append(
+      "  <property name=\"org.apache.manifoldcf.crawler.threads\" value=\"30\"/>\n"
+    );
+  }
+  
+  /** Method to write expire threads property. */
+  protected void writeExpireThreadsProperty(StringBuilder output)
+    throws Exception
+  {
+    output.append(
+      "  <property name=\"org.apache.manifoldcf.crawler.expirethreads\" value=\"10\"/>\n"
+    );
+  }
+
+  /** Method to write cleanup threads property. */
+  protected void writeCleanupThreadsProperty(StringBuilder output)
+    throws Exception
+  {
+    output.append(
+      "  <property name=\"org.apache.manifoldcf.crawler.cleanupthreads\" value=\"10\"/>\n"
+    );
+  }
+
+  /** Method to write delete threads property. */
+  protected void writeDeleteThreadsProperty(StringBuilder output)
+    throws Exception
+  {
+    output.append(
+      "  <property name=\"org.apache.manifoldcf.crawler.deletethreads\" value=\"10\"/>\n"
+    );
+  }
+
+  /** Method to write connector debug property. */
+  protected void writeConnectorDebugProperty(StringBuilder output)
+    throws Exception
+  {
+    // By default, leave debug off
+  }
+
+}
diff --git a/framework/core/src/test/java/org/apache/manifoldcf/core/tests/BaseDerby.java b/framework/core/src/test/java/org/apache/manifoldcf/core/tests/BaseDerby.java
index 8c7b55a..f46f3b6 100644
--- a/framework/core/src/test/java/org/apache/manifoldcf/core/tests/BaseDerby.java
+++ b/framework/core/src/test/java/org/apache/manifoldcf/core/tests/BaseDerby.java
@@ -26,27 +26,26 @@
 import org.junit.*;
 
 /** This is a testing base class that is responsible for setting up/tearing down the core Derby database. */
-public class BaseDerby extends Base
+public class BaseDerby extends BaseDatabase
 {
 
-  /** Method to add properties to properties.xml contents.
-  * Override this method to add properties clauses to the property file.
-  */
-  protected void writeProperties(StringBuilder output)
+  /** Method to get database implementation class */
+  @Override
+  protected String getDatabaseImplementationClass()
     throws Exception
   {
-    super.writeProperties(output);
+    return "org.apache.manifoldcf.core.database.DBInterfaceDerby";
+  }
+
+  /** Method to set database properties */
+  @Override
+  protected void writeDatabaseControlProperties(StringBuilder output)
+    throws Exception
+  {
     String currentPathString = currentPath.getAbsolutePath();
     output.append(
-      "  <property name=\"org.apache.manifoldcf.databaseimplementationclass\" value=\"org.apache.manifoldcf.core.database.DBInterfaceDerby\"/>\n" +
-      "  <property name=\"org.apache.manifoldcf.derbydatabasepath\" value=\""+currentPathString.replaceAll("\\\\","/")+"\"/>\n" +
-      "  <property name=\"org.apache.manifoldcf.database.maxquerytime\" value=\"30\"/>\n" +
-      "  <property name=\"org.apache.manifoldcf.crawler.threads\" value=\"30\"/>\n" +
-      "  <property name=\"org.apache.manifoldcf.crawler.expirethreads\" value=\"10\"/>\n" +
-      "  <property name=\"org.apache.manifoldcf.crawler.cleanupthreads\" value=\"10\"/>\n" +
-      "  <property name=\"org.apache.manifoldcf.crawler.deletethreads\" value=\"10\"/>\n" +
-      //"  <property name=\"org.apache.manifoldcf.connectors\" value=\"DEBUG\"/>\n" +
-      "  <property name=\"org.apache.manifoldcf.database.maxhandles\" value=\"80\"/>\n"
+      "  <property name=\"org.apache.manifoldcf.derbydatabasepath\" value=\""+currentPathString.replaceAll("\\\\","/")+"\"/>\n"
     );
   }
+
 }
diff --git a/framework/core/src/test/java/org/apache/manifoldcf/core/tests/BaseHSQLDB.java b/framework/core/src/test/java/org/apache/manifoldcf/core/tests/BaseHSQLDB.java
index 4a2be26..ca1d9ba 100644
--- a/framework/core/src/test/java/org/apache/manifoldcf/core/tests/BaseHSQLDB.java
+++ b/framework/core/src/test/java/org/apache/manifoldcf/core/tests/BaseHSQLDB.java
@@ -26,26 +26,26 @@
 import org.junit.*;
 
 /** This is a testing base class that is responsible for setting up/tearing down the core HSQLDB database. */
-public class BaseHSQLDB extends Base
+public class BaseHSQLDB extends BaseDatabase
 {
-  /** Method to add properties to properties.xml contents.
-  * Override this method to add properties clauses to the property file.
-  */
-  protected void writeProperties(StringBuilder output)
+  
+  /** Method to get database implementation class */
+  @Override
+  protected String getDatabaseImplementationClass()
     throws Exception
   {
-    super.writeProperties(output);
+    return "org.apache.manifoldcf.core.database.DBInterfaceHSQLDB";
+  }
+
+  /** Method to set database properties */
+  @Override
+  protected void writeDatabaseControlProperties(StringBuilder output)
+    throws Exception
+  {
     String currentPathString = currentPath.getAbsolutePath();
     output.append(
-      "  <property name=\"org.apache.manifoldcf.databaseimplementationclass\" value=\"org.apache.manifoldcf.core.database.DBInterfaceHSQLDB\"/>\n" +
-      "  <property name=\"org.apache.manifoldcf.hsqldbdatabasepath\" value=\""+currentPathString.replaceAll("\\\\","/")+"\"/>\n" +
-      "  <property name=\"org.apache.manifoldcf.database.maxquerytime\" value=\"30\"/>\n" +
-      "  <property name=\"org.apache.manifoldcf.crawler.threads\" value=\"30\"/>\n" +
-      "  <property name=\"org.apache.manifoldcf.crawler.expirethreads\" value=\"10\"/>\n" +
-      "  <property name=\"org.apache.manifoldcf.crawler.cleanupthreads\" value=\"10\"/>\n" +
-      "  <property name=\"org.apache.manifoldcf.crawler.deletethreads\" value=\"10\"/>\n" +
-      "  <property name=\"org.apache.manifoldcf.database.maxhandles\" value=\"80\"/>\n"
+      "  <property name=\"org.apache.manifoldcf.hsqldbdatabasepath\" value=\""+currentPathString.replaceAll("\\\\","/")+"\"/>\n"
     );
   }
-  
+
 }
diff --git a/framework/core/src/test/java/org/apache/manifoldcf/core/tests/BaseHSQLDBext.java b/framework/core/src/test/java/org/apache/manifoldcf/core/tests/BaseHSQLDBext.java
index 9a1573d..09df3d8 100644
--- a/framework/core/src/test/java/org/apache/manifoldcf/core/tests/BaseHSQLDBext.java
+++ b/framework/core/src/test/java/org/apache/manifoldcf/core/tests/BaseHSQLDBext.java
@@ -27,33 +27,25 @@
 import java.lang.reflect.*;
 
 /** This is a testing base class that is responsible for setting up/tearing down the core HSQLDB remote database. */
-public class BaseHSQLDBext extends Base
+public class BaseHSQLDBext extends BaseHSQLDB
 {
   protected DatabaseThread databaseThread = null;
   
-  /** Method to add properties to properties.xml contents.
-  * Override this method to add properties clauses to the property file.
-  */
-  protected void writeProperties(StringBuilder output)
+  /** Method to set database properties */
+  @Override
+  protected void writeDatabaseControlProperties(StringBuilder output)
     throws Exception
   {
-    super.writeProperties(output);
     output.append(
-      "  <property name=\"org.apache.manifoldcf.databaseimplementationclass\" value=\"org.apache.manifoldcf.core.database.DBInterfaceHSQLDB\"/>\n" +
       "  <property name=\"org.apache.manifoldcf.hsqldbdatabaseprotocol\" value=\"hsql\"/>\n" +
       "  <property name=\"org.apache.manifoldcf.hsqldbdatabaseserver\" value=\"localhost\"/>\n" +
-      "  <property name=\"org.apache.manifoldcf.hsqldbdatabaseinstance\" value=\"xdb\"/>\n" +
-      "  <property name=\"org.apache.manifoldcf.database.maxquerytime\" value=\"30\"/>\n" +
-      "  <property name=\"org.apache.manifoldcf.crawler.threads\" value=\"30\"/>\n" +
-      "  <property name=\"org.apache.manifoldcf.crawler.expirethreads\" value=\"10\"/>\n" +
-      "  <property name=\"org.apache.manifoldcf.crawler.cleanupthreads\" value=\"10\"/>\n" +
-      "  <property name=\"org.apache.manifoldcf.crawler.deletethreads\" value=\"10\"/>\n" +
-      "  <property name=\"org.apache.manifoldcf.database.maxhandles\" value=\"80\"/>\n"
-      );
+      "  <property name=\"org.apache.manifoldcf.hsqldbdatabaseinstance\" value=\"xdb\"/>\n"
+    );
   }
 
   /** Method to get database superuser name.
   */
+  @Override
   protected String getDatabaseSuperuserName()
     throws Exception
   {
@@ -62,6 +54,7 @@
   
   /** Method to get database superuser password.
   */
+  @Override
   protected String getDatabaseSuperuserPassword()
     throws Exception
   {
diff --git a/framework/core/src/test/java/org/apache/manifoldcf/core/tests/BaseMySQL.java b/framework/core/src/test/java/org/apache/manifoldcf/core/tests/BaseMySQL.java
index 0eb8ad4..5238931 100644
--- a/framework/core/src/test/java/org/apache/manifoldcf/core/tests/BaseMySQL.java
+++ b/framework/core/src/test/java/org/apache/manifoldcf/core/tests/BaseMySQL.java
@@ -26,34 +26,41 @@
 import org.junit.*;
 
 /** This is a testing base class that is responsible for setting up/tearing down the core MySQL database. */
-public class BaseMySQL extends Base
+public class BaseMySQL extends BaseDatabase
 {
   protected final static String SUPER_USER_NAME = "root";
   protected final static String SUPER_USER_PASSWORD = "mysql";
   
-  /** Method to add properties to properties.xml contents.
-  * Override this method to add properties clauses to the property file.
-  */
-  protected void writeProperties(StringBuilder output)
+  /** Method to get database implementation class */
+  @Override
+  protected String getDatabaseImplementationClass()
     throws Exception
   {
-    super.writeProperties(output);
+    return "org.apache.manifoldcf.core.database.DBInterfaceMySQL";
+  }
+
+  /** Method to set database properties */
+  @Override
+  protected void writeDatabaseControlProperties(StringBuilder output)
+    throws Exception
+  {
     output.append(
-      "  <property name=\"org.apache.manifoldcf.databaseimplementationclass\" value=\"org.apache.manifoldcf.core.database.DBInterfaceMySQL\"/>\n" +
       "  <property name=\"org.apache.manifoldcf.database.name\" value=\"testdb\"/>\n" +
-      "  <property name=\"org.apache.manifoldcf.database.username\" value=\"testuser\"/>\n" +
-      "  <property name=\"org.apache.manifoldcf.database.maxquerytime\" value=\"30\"/>\n" +
-      "  <property name=\"org.apache.manifoldcf.crawler.threads\" value=\"30\"/>\n" +
-      "  <property name=\"org.apache.manifoldcf.crawler.expirethreads\" value=\"10\"/>\n" +
-      "  <property name=\"org.apache.manifoldcf.crawler.cleanupthreads\" value=\"10\"/>\n" +
-      "  <property name=\"org.apache.manifoldcf.crawler.deletethreads\" value=\"10\"/>\n" +
-      "  <property name=\"org.apache.manifoldcf.database.maxhandles\" value=\"80\"/>\n" +
-      "  <property name=\"org.apache.manifoldcf.database.maxquerytime\" value=\"15\"/>\n"
+      "  <property name=\"org.apache.manifoldcf.database.username\" value=\"testuser\"/>\n"
     );
   }
 
+  /** Method to get max query time property. */
+  @Override
+  protected int getDatabaseMaxQueryTimeProperty()
+    throws Exception
+  {
+    return 15;
+  }
+
   /** Method to get database superuser name.
   */
+  @Override
   protected String getDatabaseSuperuserName()
     throws Exception
   {
@@ -62,6 +69,7 @@
   
   /** Method to get database superuser password.
   */
+  @Override
   protected String getDatabaseSuperuserPassword()
     throws Exception
   {
diff --git a/framework/core/src/test/java/org/apache/manifoldcf/core/tests/BasePostgresql.java b/framework/core/src/test/java/org/apache/manifoldcf/core/tests/BasePostgresql.java
index abe3a88..dd3e0ef 100644
--- a/framework/core/src/test/java/org/apache/manifoldcf/core/tests/BasePostgresql.java
+++ b/framework/core/src/test/java/org/apache/manifoldcf/core/tests/BasePostgresql.java
@@ -26,34 +26,41 @@
 import org.junit.*;
 
 /** This is a testing base class that is responsible for setting up/tearing down the core Postgresql database. */
-public class BasePostgresql extends Base
+public class BasePostgresql extends BaseDatabase
 {
   protected final static String SUPER_USER_NAME = "postgres";
   protected final static String SUPER_USER_PASSWORD = "postgres";
 
-  /** Method to add properties to properties.xml contents.
-  * Override this method to add properties clauses to the property file.
-  */
-  protected void writeProperties(StringBuilder output)
+  /** Method to get database implementation class */
+  @Override
+  protected String getDatabaseImplementationClass()
     throws Exception
   {
-    super.writeProperties(output);
+    return "org.apache.manifoldcf.core.database.DBInterfacePostgreSQL";
+  }
+
+  /** Method to set database properties */
+  @Override
+  protected void writeDatabaseControlProperties(StringBuilder output)
+    throws Exception
+  {
     output.append(
-      "  <property name=\"org.apache.manifoldcf.databaseimplementationclass\" value=\"org.apache.manifoldcf.core.database.DBInterfacePostgreSQL\"/>\n" +
       "  <property name=\"org.apache.manifoldcf.database.name\" value=\"testdb\"/>\n" +
-      "  <property name=\"org.apache.manifoldcf.database.username\" value=\"testuser\"/>\n" +
-      "  <property name=\"org.apache.manifoldcf.database.maxquerytime\" value=\"30\"/>\n" +
-      "  <property name=\"org.apache.manifoldcf.crawler.threads\" value=\"30\"/>\n" +
-      "  <property name=\"org.apache.manifoldcf.crawler.expirethreads\" value=\"10\"/>\n" +
-      "  <property name=\"org.apache.manifoldcf.crawler.cleanupthreads\" value=\"10\"/>\n" +
-      "  <property name=\"org.apache.manifoldcf.crawler.deletethreads\" value=\"10\"/>\n" +
-      "  <property name=\"org.apache.manifoldcf.database.maxhandles\" value=\"80\"/>\n" +
-      "  <property name=\"org.apache.manifoldcf.database.maxquerytime\" value=\"15\"/>\n"
+      "  <property name=\"org.apache.manifoldcf.database.username\" value=\"testuser\"/>\n"
     );
   }
 
+  /** Method to get max query time property. */
+  @Override
+  protected int getDatabaseMaxQueryTimeProperty()
+    throws Exception
+  {
+    return 15;
+  }
+
   /** Method to get database superuser name.
   */
+  @Override
   protected String getDatabaseSuperuserName()
     throws Exception
   {
@@ -62,6 +69,7 @@
   
   /** Method to get database superuser password.
   */
+  @Override
   protected String getDatabaseSuperuserPassword()
     throws Exception
   {
diff --git a/framework/core/src/test/java/org/apache/manifoldcf/core/tests/UILockSpinner.java b/framework/core/src/test/java/org/apache/manifoldcf/core/tests/UILockSpinner.java
index 930a5b8..ceba70e 100644
--- a/framework/core/src/test/java/org/apache/manifoldcf/core/tests/UILockSpinner.java
+++ b/framework/core/src/test/java/org/apache/manifoldcf/core/tests/UILockSpinner.java
@@ -28,11 +28,10 @@
   public static void main(String[] argv)
     throws Exception
   {
-    ManifoldCF.initializeEnvironment();
-
+    IThreadContext threadContext = ThreadContextFactory.make();
+    ManifoldCF.initializeEnvironment(threadContext);
 
     // Create a thread context object.
-    IThreadContext threadContext = ThreadContextFactory.make();
     ILockManager lockManager = LockManagerFactory.make(threadContext);
 
     System.out.println("Starting test");
diff --git a/framework/core/src/test/java/org/apache/manifoldcf/core/throttler/TestThrottler.java b/framework/core/src/test/java/org/apache/manifoldcf/core/throttler/TestThrottler.java
new file mode 100644
index 0000000..28de2f9
--- /dev/null
+++ b/framework/core/src/test/java/org/apache/manifoldcf/core/throttler/TestThrottler.java
@@ -0,0 +1,516 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.core.throttler;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import java.io.*;
+import java.util.*;
+import java.util.concurrent.atomic.*;
+import org.junit.*;
+import static org.junit.Assert.*;
+
+public class TestThrottler extends org.apache.manifoldcf.core.tests.BaseDerby
+{
+  @Test
+  public void multiThreadConnectionPoolTest()
+    throws Exception
+  {
+    // First, create the throttle group.
+    IThreadContext threadContext = ThreadContextFactory.make();
+    IThrottleGroups tg = ThrottleGroupsFactory.make(threadContext);
+    tg.createOrUpdateThrottleGroup("test","test",new ThrottleSpec());
+    
+    // We create a pretend connection pool
+    IConnectionThrottler connectionThrottler = tg.obtainConnectionThrottler("test","test",new String[]{"A","B","C"});
+    System.out.println("Connection throttler obtained");
+    
+    // How best to test this?
+    // Well, what I'm going to do is to have multiple threads active.  Each one will do perfectly sensible things
+    // while generating a log that includes timestamps for everything that happens.  At the end, the log will be
+    // analyzed for violations of throttling policy.
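+    // "Throttling policy" here means the ThrottleSpec defined below: a cap on simultaneous
+    // connections (bins "A"/"B"), a minimum interval between fetch starts (bins "A"/"C"), and
+    // a minimum milliseconds-per-byte read rate (bins "B"/"C").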
+    
+    PollingThread pt = new PollingThread();
+    pt.start();
+
+    EventLog eventLog = new EventLog();
+    
+    int numThreads = 10;
+    
+    TesterThread[] threads = new TesterThread[numThreads];
+    for (int i = 0; i < numThreads; i++)
+    {
+      threads[i] = new TesterThread(connectionThrottler, eventLog);
+      threads[i].start();
+    }
+    
+    // Now, join all the threads at the end
+    for (int i = 0; i < numThreads; i++)
+    {
+      threads[i].finishUp();
+    }
+    
+    pt.interrupt();
+    pt.finishUp();
+
+    // Shut down the throttle group
+    tg.removeThrottleGroup("test","test");
+
+    // Finally, do the log analysis
+    eventLog.analyze();
+    
+    System.out.println("Done test");
+  }
+  
+  protected static class PollingThread extends Thread
+  {
+    protected Throwable exception = null;
+    
+    public PollingThread()
+    {
+    }
+    
+    public void run()
+    {
+      try
+      {
+        IThreadContext threadContext = ThreadContextFactory.make();
+        IThrottleGroups throttleGroups = ThrottleGroupsFactory.make(threadContext);
+        
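+        // Poll the "test" throttle group once a second until the main test thread interrupts us.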
+        while (true)
+        {
+          throttleGroups.poll("test");
+          Thread.sleep(1000L);
+        }
+      }
+      catch (InterruptedException e)
+      {
+      }
+      catch (Exception e)
+      {
+        exception = e;
+      }
+
+    }
+    
+    public void finishUp()
+      throws Exception
+    {
+      join();
+      if (exception != null)
+      {
+        if (exception instanceof RuntimeException)
+          throw (RuntimeException)exception;
+        else if (exception instanceof Error)
+          throw (Error)exception;
+        else if (exception instanceof Exception)
+          throw (Exception)exception;
+        else
+          throw new RuntimeException("Unknown exception: "+exception.getClass().getName()+": "+exception.getMessage(),exception);
+      }
+    }
+
+  }
+  
+  protected static class TesterThread extends Thread
+  {
+    protected final EventLog eventLog;
+    protected final IConnectionThrottler connectionThrottler;
+    protected Throwable exception = null;
+    
+    public TesterThread(IConnectionThrottler connectionThrottler, EventLog eventLog)
+    {
+      this.connectionThrottler = connectionThrottler;
+      this.eventLog = eventLog;
+    }
+    
+    public void run()
+    {
+      try
+      {
+        int numberConnectionCycles = 3;
+        int numberFetchesPerCycle = 3;
+        
+        for (int k = 0; k < numberConnectionCycles; k++)
+        {
+          // First grab a connection.
+          int rval = connectionThrottler.waitConnectionAvailable();
+          if (rval == IConnectionThrottler.CONNECTION_FROM_NOWHERE)
+            throw new Exception("Unexpected return value from waitConnectionAvailable()");
+          IFetchThrottler fetchThrottler;
+          if (rval == IConnectionThrottler.CONNECTION_FROM_CREATION)
+          {
+            // Pretend to create the connection
+            eventLog.addLogEntry(new ConnectionCreatedEvent());
+          }
+          else
+          {
+            // Pretend to get it from the pool
+            eventLog.addLogEntry(new ConnectionFromPoolEvent());
+          }
+          fetchThrottler = connectionThrottler.getNewConnectionFetchThrottler();
+
+          for (int l = 0; l < numberFetchesPerCycle; l++)
+          {
+            // Perform a fake fetch
+            if (fetchThrottler.obtainFetchDocumentPermission() == false)
+              throw new Exception("Unexpected return value for obtainFetchDocumentPermission()");
+            eventLog.addLogEntry(new FetchStartEvent());
+            IStreamThrottler streamThrottler = fetchThrottler.createFetchStream();
+            try
+            {
+              // Do one read
+              if (streamThrottler.obtainReadPermission(1000) == false)
+                throw new Exception("False from obtainReadPermission!");
+              eventLog.addLogEntry(new ReadStartEvent(1000));
+              streamThrottler.releaseReadPermission(1000, 1000);
+              eventLog.addLogEntry(new ReadDoneEvent(1000));
+              // Do another read
+              if (streamThrottler.obtainReadPermission(1000) == false)
+                throw new Exception("False from obtainReadPermission!");
+              eventLog.addLogEntry(new ReadStartEvent(1000));
+              streamThrottler.releaseReadPermission(1000, 1000);
+              eventLog.addLogEntry(new ReadDoneEvent(1000));
+              // Do a third read
+              if (streamThrottler.obtainReadPermission(1000) == false)
+                throw new Exception("False from obtainReadPermission!");
+              eventLog.addLogEntry(new ReadStartEvent(1000));
+              streamThrottler.releaseReadPermission(1000, 100);
+              eventLog.addLogEntry(new ReadDoneEvent(100));
+            }
+            finally
+            {
+              // Close the stream
+              streamThrottler.closeStream();
+            }
+            eventLog.addLogEntry(new FetchDoneEvent());
+          }
+          
+          // Pretend to release the connection
+          boolean destroyIt = connectionThrottler.noteReturnedConnection();
+          if (destroyIt)
+          {
+            eventLog.addLogEntry(new ConnectionDestroyedEvent());
+            connectionThrottler.noteConnectionDestroyed();
+          }
+          else
+          {
+            eventLog.addLogEntry(new ConnectionReturnedToPoolEvent());
+            connectionThrottler.noteConnectionReturnedToPool();
+          }
+        }
+      }
+      catch (Exception e)
+      {
+        e.printStackTrace();
+        exception = e;
+      }
+    }
+    
+    public void finishUp()
+      throws Exception
+    {
+      join();
+      if (exception != null)
+      {
+        if (exception instanceof RuntimeException)
+          throw (RuntimeException)exception;
+        else if (exception instanceof Error)
+          throw (Error)exception;
+        else if (exception instanceof Exception)
+          throw (Exception)exception;
+        else
+          throw new RuntimeException("Unknown exception: "+exception.getClass().getName()+": "+exception.getMessage(),exception);
+      }
+    }
+    
+  }
+  
+  protected static class ThrottleSpec implements IThrottleSpec
+  {
+    public ThrottleSpec()
+    {
+    }
+    
+    /** Given a bin name, find the max open connections to use for that bin.
+    *@return -1 if no limit found.
+    */
+    @Override
+    public int getMaxOpenConnections(String binName)
+    {
+      if (binName.equals("A"))
+        return 3;
+      if (binName.equals("B"))
+        return 4;
+      return Integer.MAX_VALUE;
+    }
+
+    /** Look up minimum milliseconds per byte for a bin.
+    *@return 0.0 if no limit found.
+    */
+    @Override
+    public double getMinimumMillisecondsPerByte(String binName)
+    {
+      if (binName.equals("B"))
+        return 0.5;
+      if (binName.equals("C"))
+        return 0.75;
+      return 0.0;
+    }
+
+    /** Look up minimum milliseconds for a fetch for a bin.
+    *@return 0 if no limit found.
+    */
+    @Override
+    public long getMinimumMillisecondsPerFetch(String binName)
+    {
+      if (binName.equals("A"))
+        return 5;
+      if (binName.equals("C"))
+        return 20;
+      return 0;
+    }
+
+  }
+  
+  protected static class EventLog
+  {
+    protected final List<LogEntry> logList = new ArrayList<LogEntry>();
+    
+    public EventLog()
+    {
+    }
+    
+    public synchronized void addLogEntry(LogEntry x)
+    {
+      System.out.println(x.toString());
+      logList.add(x);
+    }
+    
+    public synchronized void analyze()
+      throws Exception
+    {
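+      // Replay the recorded events in order; each event's apply() checks the throttling
+      // invariants against the running State.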
+      State s = new State();
+      for (LogEntry l : logList)
+      {
+        l.apply(s);
+      }
+      // Success!
+    }
+    
+  }
+  
+  protected static abstract class LogEntry
+  {
+    protected final long timestamp;
+    
+    public LogEntry(long timestamp)
+    {
+      this.timestamp = timestamp;
+    }
+    
+    public abstract void apply(State state)
+      throws Exception;
+    
+    public String toString()
+    {
+      return "Time: "+timestamp;
+    }
+    
+  }
+  
+  protected static class ConnectionCreatedEvent extends LogEntry
+  {
+    public ConnectionCreatedEvent()
+    {
+      super(System.currentTimeMillis());
+    }
+    
+    public void apply(State state)
+      throws Exception
+    {
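+      // The ThrottleSpec below caps bin "A" at 3 open connections (bin "B" allows 4, bin "C"
+      // is unbounded), so a fourth simultaneous connection is a violation.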
+      if (state.outstandingConnections + 1 > 3)
+        throw new Exception("Too many outstanding connections at once!");
+      state.outstandingConnections++;
+    }
+    
+    public String toString()
+    {
+      return super.toString() + "; Connection created";
+    }
+
+  }
+
+  protected static class ConnectionDestroyedEvent extends LogEntry
+  {
+    public ConnectionDestroyedEvent()
+    {
+      super(System.currentTimeMillis());
+    }
+    
+    public void apply(State state)
+      throws Exception
+    {
+      state.outstandingConnections--;
+    }
+    
+    public String toString()
+    {
+      return super.toString() + "; Connection destroyed";
+    }
+
+  }
+
+  protected static class ConnectionFromPoolEvent extends LogEntry
+  {
+    public ConnectionFromPoolEvent()
+    {
+      super(System.currentTimeMillis());
+    }
+    
+    public void apply(State state)
+      throws Exception
+    {
+    }
+    
+    public String toString()
+    {
+      return super.toString() + "; Connection from pool";
+    }
+
+  }
+
+  protected static class ConnectionReturnedToPoolEvent extends LogEntry
+  {
+    public ConnectionReturnedToPoolEvent()
+    {
+      super(System.currentTimeMillis());
+    }
+    
+    public void apply(State state)
+      throws Exception
+    {
+    }
+    
+    public String toString()
+    {
+      return super.toString() + "; Connection back to pool";
+    }
+
+  }
+
+  protected static class FetchStartEvent extends LogEntry
+  {
+    public FetchStartEvent()
+    {
+      super(System.currentTimeMillis());
+    }
+    
+    public void apply(State state)
+      throws Exception
+    {
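+      // Bin "C" imposes the strictest per-fetch limit in the ThrottleSpec below (20 ms between
+      // fetch starts); the -1L means at least 19 ms must actually have elapsed.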
+      if (timestamp < state.lastFetch + 20L - 1L)
+        throw new Exception("Fetch too fast: took place in "+ (timestamp - state.lastFetch) + " milliseconds");
+      state.lastFetch = timestamp;
+    }
+    
+    public String toString()
+    {
+      return super.toString() + "; Fetch start";
+    }
+  }
+  
+  protected static class FetchDoneEvent extends LogEntry
+  {
+    public FetchDoneEvent()
+    {
+      super(System.currentTimeMillis());
+    }
+    
+    public void apply(State state)
+      throws Exception
+    {
+    }
+    
+    public String toString()
+    {
+      return super.toString() + "; Fetch done";
+    }
+  }
+  
+  protected static class ReadStartEvent extends LogEntry
+  {
+    final int proposed;
+    
+    public ReadStartEvent(int proposed)
+    {
+      super(System.currentTimeMillis());
+      this.proposed = proposed;
+    }
+    
+    public void apply(State state)
+      throws Exception
+    {
+      if (state.firstByteReadTime == -1L)
+        state.firstByteReadTime = timestamp;
+      else
+      {
+        // Calculate running minimum amount of time it should have taken for the bytes given
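+        // 0.75 ms/byte is bin "C"'s rate, the strictest in the ThrottleSpec below; the +0.5
+        // rounds to the nearest millisecond.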
+        long minTime = (long)(((double)state.byteTotal) * 0.75 + 0.5);
+        if (timestamp - state.firstByteReadTime < minTime)
+          throw new Exception("Took too short a time to read "+state.byteTotal+" bytes: "+(timestamp - state.firstByteReadTime));
+      }
+    }
+    
+    public String toString()
+    {
+      return super.toString() + "; Read start("+proposed+")";
+    }
+  }
+
+  protected static class ReadDoneEvent extends LogEntry
+  {
+    final int actual;
+    
+    public ReadDoneEvent(int actual)
+    {
+      super(System.currentTimeMillis());
+      this.actual = actual;
+    }
+    
+    public void apply(State state)
+      throws Exception
+    {
+      state.byteTotal += actual;
+    }
+    
+    public String toString()
+    {
+      return super.toString() + "; Read done("+actual+")";
+    }
+  }
+
+  protected static class State
+  {
+    public int outstandingConnections = 0;
+    public long lastFetch = 0L;
+    public long firstByteReadTime = -1L;
+    public long byteTotal = 0L;
+  }
+  
+}
diff --git a/framework/core/src/test/resources/org/apache/manifoldcf/core/tests/VirtualBrowser.py b/framework/core/src/test/resources/org/apache/manifoldcf/core/tests/VirtualBrowser.py
index 3b5c160..13f3ae9 100755
--- a/framework/core/src/test/resources/org/apache/manifoldcf/core/tests/VirtualBrowser.py
+++ b/framework/core/src/test/resources/org/apache/manifoldcf/core/tests/VirtualBrowser.py
@@ -19,6 +19,8 @@
 import Javascript
 import urllib
 import urllib2
+import httplib
+import cookielib
 import HTMLParser
 import base64
 import re
@@ -735,18 +737,60 @@
     def set_bodytext( self, bodytext ):
         self.linktext = bodytext
 
+# Dummy request (so we can use cookiejar)
+class DummyRequest:
+    """
+    Dummy request (for interfacing with cookiejar).
+    """
+
+    def __init__( self, protocol, host, url ):
+        self.protocol = protocol
+        self.host = host
+        self.url = url
+        self.headers = {}
+
+    def has_header( self, name ):
+        return name in self.headers
+
+    def add_header( self, key, val ):
+        self.headers[key.capitalize()] = val
+
+    def add_unredirected_header(self, key, val):
+        self.headers[key.capitalize()] = val
+
+    def is_unverifiable( self ):
+        return True
+
+    def get_type( self ):
+        return self.protocol
+
+    def get_full_url( self ):
+        return self.url
+
+    def get_header( self, header_name, default=None ):
+        return self.headers.get( header_name, default )
+
+    def get_host( self ):
+        return self.host
+
+    get_origin_req_host = get_host
+
+    def get_headers( self ):
+        return self.headers
+
 # Class that describes a virtual browser window.  Each virtual window has some set of forms and links,
 # as well as a set of dialog boxes (which can be popped up due to various actions, and dismissed
 # by virtual user activity)
 class VirtualWindow:
 
     def __init__( self, browser_instance, window_name, data, parent, current_url ):
-        print "Loading window '%s' with data from url %s" % (window_name, current_url)
+        print >>sys.stderr, "Loading window '%s' with data from url %s" % (window_name, current_url)
         self.links = { }
         self.buttons = { }
         self.forms = { }
         self.window_name = window_name
         self.data = data
+        #print >>sys.stderr, data
         self.parent = parent
         self.browser_instance = browser_instance
         self.is_open = True
@@ -993,16 +1037,19 @@
         self.password = password
         self.win_host = win_host
         self.language = language
+        self.cookiejar = cookielib.CookieJar()
         if win_host == None and username != None:
             # Set up basic auth
-            self.urllibopener = urllib2.build_opener( urllib2.HTTPHandler ( ) )
+            pass
+            #self.urllibopener = urllib2.build_opener( urllib2.HTTPHandler ( ) )
         elif win_host != None and username != None:
             # Proxy-based auth
             # MHL
             raise Exception("Feature not yet implemented")
         else:
             # Use standard opener
-            self.urllibopener = urllib2.build_opener( urllib2.HTTPHandler ( ) )
+            pass
+            #self.urllibopener = urllib2.build_opener( urllib2.HTTPHandler ( ) )
 
     # Public part of the Virtual Browser interface
 
@@ -1073,6 +1120,7 @@
         window_data = self.fetch_data_with_get( url )
         self.reload_window( window_name, window_data, url )
 
+    """
     def fetch_and_decode( self, req ):
         f = self.urllibopener.open( req )
         fetch_info = f.info()
@@ -1083,10 +1131,13 @@
             if charset_index != -1:
                 encoding = content_type[charset_index+8:len(content_type)]
         return f.read( ).decode(encoding)
+    """
 
     # Read a url with get.  Returns the data as a string.
     def fetch_data_with_get( self, url ):
         print >> sys.stderr, "Getting url '%s'..." % url
+        return self.talk_to_server(url)
+        """
         req = urllib2.Request( url )
         if self.language != None:
             req.add_header("Accept-Language", self.language)
@@ -1097,11 +1148,14 @@
         # MHL - not yet implemented
         # req.add_header('Referer', 'http://www.python.org/')
         return self.fetch_and_decode( req )
+        """
 
     # Read a url with post.  Pass the parameters as an array of ( name, value ) tuples.
     def fetch_data_with_post( self, parameters, url ):
         paramstring = urllib.urlencode( parameters, doseq=True )
         print >> sys.stderr, "Posting url '%s' with parameters '%s'..." % (url, paramstring)
+        return self.talk_to_server( url, method="POST", body=paramstring, content_type="application/x-www-form-urlencoded" )
+        """
         req = urllib2.Request( url, paramstring )
         if self.language != None:
             req.add_header("Accept-Language", self.language)
@@ -1111,6 +1165,7 @@
         # Add cookies by domain
         # MHL
         return self.fetch_and_decode( req )
+        """
 
     # Private method to post using multipart forms
     def fetch_data_with_multipart_post( self, parameters, files, url ):
@@ -1121,17 +1176,6 @@
             filecount = len(files)
         print >> sys.stderr, "Multipart posting url '%s' with parameters '%s' and %d files..." % (url, paramstring, filecount)
 
-        # Turn URL into protocol, host, and selector
-        urlpieces = url.split("://")
-        protocol = urlpieces[0]
-        uri = urlpieces[1]
-        # Split uri at the first /
-        uripieces = uri.split("/")
-        host = uripieces[0]
-        selector = uri[len(host):len(uri)]
-
-        import httplib
-
         """
         Post fields and files to an http host as multipart/form-data.
         fields is a sequence of (name, value) elements for regular form fields.
@@ -1139,52 +1183,100 @@
         Return the server's response page.
         """
         content_type, body = encode_multipart_formdata(parameters, files)
-        if protocol == "http":
-            h = httplib.HTTPConnection(host)
-        elif protocol == "https":
-            h = httplib.HTTPSConnection(host)
-        else:
-            raise Exception("Unknown protocol: %s" % protocol)
+        return self.talk_to_server(url, method="POST", content_type=content_type, body=body)
 
-        h.connect()
-        try:
-            # Set the request type and url
-            h.putrequest("POST", selector)
+    def talk_to_server(self, url, method="GET", content_type=None, body=None):
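+        # All HTTP(S) traffic funnels through here: talk httplib directly (rather than the old
+        # urllib2 opener) so cookies from self.cookiejar can be attached, and follow 301/302
+        # redirects in a loop.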
+        
+        # Redirection loop
+        while True:
 
-            # Set the content type and length
-            h.putheader("content-type", content_type)
-            h.putheader("content-length", str(len(body)))
+            # Turn URL into protocol, host, and selector
+            urlpieces = url.split("://")
+            protocol = urlpieces[0]
+            uri = urlpieces[1]
+            # Split uri at the first /
+            uripieces = uri.split("/")
+            host = uripieces[0]
+            selector = uri[len(host):len(uri)]
 
-            # Add cookies by domain
-            # MHL
+            if protocol == "http":
+                h = httplib.HTTPConnection(host)
+            elif protocol == "https":
+                h = httplib.HTTPSConnection(host)
+            else:
+                raise Exception("Unknown protocol: %s" % protocol)
 
-            if self.language != None:
-                h.putheader("Accept-Language", self.language)
+            h.connect()
+            try:
+                # Set the request type and url
+                h.putrequest(method, selector)
 
-            # Add basic auth credentials, if needed.
-            if self.username != None:
-                base64string = base64.encodestring("%s:%s" % (self.username, self.password))[:-1]
-                h.putheader("Authorization", "Basic %s" % base64string)
+                # Set the content type and length
+                if content_type != None:
+                    h.putheader("content-type", content_type)
+                if body != None:
+                    h.putheader("content-length", str(len(body)))
 
-            h.endheaders()
+                # Add cookies by domain.  To do this with httplib and cookielib,
+                # we create a dummy urllib2 request.
+                urllib2_request = DummyRequest( protocol, host, url )
 
-            # Send the body
-            h.send(body)
-            response = h.getresponse()
-            status = response.status
-            headers = response.getheaders()
-            encoding = "iso-8859-1"
-            content_type = response.getheader("Content-type","text/html; charset=iso-8859-1")
-            charset_index = content_type.find("charset=")
-            if charset_index != -1:
-                encoding = content_type[charset_index+8:len(content_type)]
+                # add cookies to fake request
+                self.cookiejar.add_cookie_header( urllib2_request )
+                
+                # apply cookie headers to actual request
+                for header in urllib2_request.get_headers().keys():
+                    header_value = urllib2_request.get_headers()[header]
+                    h.putheader( header, header_value )
 
-            value = response.read().decode(encoding)
-            if status != 200:
-                raise Exception("Received an error response %d from url: '%s'" % (status,url) )
-            return value
-        finally:
-            h.close()
+                if self.language != None:
+                    h.putheader("Accept-Language", self.language)
+
+                # Add basic auth credentials, if needed.
+                if self.username != None:
+                    base64string = base64.encodestring("%s:%s" % (self.username, self.password))[:-1]
+                    h.putheader("Authorization", "Basic %s" % base64string)
+
+                h.endheaders()
+
+                # Send the body
+                if body != None:
+                    h.send(body)
+                response = h.getresponse()
+                status = response.status
+                headers = response.getheaders()
+                encoding = "iso-8859-1"
+                content_type = response.getheader("Content-type","text/html; charset=iso-8859-1")
+                charset_index = content_type.find("charset=")
+                if charset_index != -1:
+                    encoding = content_type[charset_index+8:len(content_type)]
+
+                value = response.read().decode(encoding)
+
+                # HACK: pretend we're urllib2 response for cookielib
+                response.info = lambda : response.msg
+
+                # read and store cookies from response
+                self.cookiejar.extract_cookies(response, urllib2_request)
+
+                # If redirection, go around again
+                if status == 301 or status == 302:
+                    # Redirection!  New url to process.
+                    location_header = response.msg.getheader("location")
+                    if location_header != None:
+                        print >>sys.stderr, "Redirecting from url '%s' to url '%s'..." % ( url, location_header )
+                        url = location_header
+                        continue
+                    else:
+                        raise Exception("Missing redirection location header")
+
+                # If NOT a redirection, handle it.
+                if status != 200:
+                    raise Exception("Received an error response %d from url: '%s'" % (status,url) )
+
+                return value
+            finally:
+                h.close()
 
 # Static method for multipart encoding
 def encode_multipart_formdata(fields, files):
@@ -1236,7 +1328,7 @@
             raise Exception("confirm method requires one string argument")
         # Evaluate to string
         message = argset[0].str_value( )
-        print "CONFIRM: "+message
+        print >>sys.stderr, "CONFIRM: "+message
         # Now, decide whether we return true or false.
         return Javascript.JSBoolean( self.window_instance.get_answer( message, True ) )
 
@@ -1254,7 +1346,7 @@
             raise Exception("alert method requires one string argument")
         # Evaluate just to be sure there's no error
         message = argset[0].str_value( )
-        print "ALERT: "+message
+        print >>sys.stderr, "ALERT: "+message
         # Always click "OK"
         return Javascript.JSBoolean( True )
 
@@ -1299,7 +1391,7 @@
         self.owner = owner
 
     def call( self, argset, context ):
-        print "FOCUS: On field '%s'" % self.owner.get_name( )
+        print >>sys.stderr, "FOCUS: On field '%s'" % self.owner.get_name( )
         return Javascript.JSBoolean( True )
 
 # JS object representing a window in Javascript
@@ -1556,7 +1648,7 @@
                 multipart = False
             if method == "POST" and multipart == True:
                 method = "MULTIPART"
-            print "Form of type %s detected" % method
+            print >>sys.stderr, "Form of type %s detected" % method
             self.current_form_instance = VirtualForm( self.window_instance, name, action, method )
             self.window_instance.add_form( self.current_form_instance )
         except:
diff --git a/framework/crawler-ui/pom.xml b/framework/crawler-ui/pom.xml
index 39c1cba..366f411 100644
--- a/framework/crawler-ui/pom.xml
+++ b/framework/crawler-ui/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.manifoldcf</groupId>
     <artifactId>mcf-framework</artifactId>
-    <version>1.2-SNAPSHOT</version>
+    <version>1.5-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
 
@@ -93,7 +93,7 @@
     <dependency>
       <groupId>org.apache.httpcomponents</groupId>
       <artifactId>httpclient</artifactId>
-      <version>${httpcomponent.version}</version>
+      <version>${httpcomponent.httpclient.version}</version>
       <scope>runtime</scope>
     </dependency>
     <dependency>
diff --git a/framework/crawler-ui/src/main/java/org/apache/manifoldcf/crawlerui/IdleCleanupThread.java b/framework/crawler-ui/src/main/java/org/apache/manifoldcf/crawlerui/IdleCleanupThread.java
new file mode 100644
index 0000000..870d34a
--- /dev/null
+++ b/framework/crawler-ui/src/main/java/org/apache/manifoldcf/crawlerui/IdleCleanupThread.java
@@ -0,0 +1,141 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.crawlerui;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+import org.apache.manifoldcf.crawler.interfaces.*;
+import org.apache.manifoldcf.authorities.interfaces.*;
+import org.apache.manifoldcf.core.system.Logging;
+import org.apache.manifoldcf.core.system.ManifoldCF;
+import java.util.*;
+import java.lang.reflect.*;
+
+/** This thread periodically polls all of the connector pools (repository, output, authority, and
+* mapping), the throttle groups, and the cache manager, so that idle connections can be shut down
+* and expired cache objects released.
+*/
+public class IdleCleanupThread extends Thread
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  /** Constructor.
+  */
+  public IdleCleanupThread()
+    throws ManifoldCFException
+  {
+    super();
+    setName("Idle cleanup thread");
+    setDaemon(true);
+  }
+
+  public void run()
+  {
+    Logging.root.debug("Start up idle cleanup thread");
+    try
+    {
+      // Create a thread context object.
+      IThreadContext threadContext = ThreadContextFactory.make();
+      // Get the cache handle.
+      ICacheManager cacheManager = CacheManagerFactory.make(threadContext);
+      
+      IRepositoryConnectorPool repositoryConnectorPool = RepositoryConnectorPoolFactory.make(threadContext);
+      IOutputConnectorPool outputConnectorPool = OutputConnectorPoolFactory.make(threadContext);
+      IAuthorityConnectorPool authorityConnectorPool = AuthorityConnectorPoolFactory.make(threadContext);
+      IMappingConnectorPool mappingConnectorPool = MappingConnectorPoolFactory.make(threadContext);
+      
+      IThrottleGroups throttleGroups = ThrottleGroupsFactory.make(threadContext);
+
+      // Loop
+      while (true)
+      {
+        // Do another try/catch around everything in the loop
+        try
+        {
+          // Do the cleanup
+          repositoryConnectorPool.pollAllConnectors();
+          outputConnectorPool.pollAllConnectors();
+          authorityConnectorPool.pollAllConnectors();
+          mappingConnectorPool.pollAllConnectors();
+          
+          throttleGroups.poll();
+          
+          cacheManager.expireObjects(System.currentTimeMillis());
+          
+          // Sleep for the retry interval.
+          ManifoldCF.sleep(5000L);
+        }
+        catch (ManifoldCFException e)
+        {
+          if (e.getErrorCode() == ManifoldCFException.INTERRUPTED)
+            break;
+
+          if (e.getErrorCode() == ManifoldCFException.DATABASE_CONNECTION_ERROR)
+          {
+            Logging.root.error("Idle cleanup thread aborting and restarting due to database connection reset: "+e.getMessage(),e);
+            try
+            {
+              // Give the database a chance to catch up/wake up
+              ManifoldCF.sleep(10000L);
+            }
+            catch (InterruptedException se)
+            {
+              break;
+            }
+            continue;
+          }
+
+          // Log it, but keep the thread alive
+          Logging.root.error("Exception tossed: "+e.getMessage(),e);
+
+          if (e.getErrorCode() == ManifoldCFException.SETUP_ERROR)
+          {
+            // Shut the whole system down!
+            System.exit(1);
+          }
+
+        }
+        catch (InterruptedException e)
+        {
+          // We're supposed to quit
+          break;
+        }
+        catch (OutOfMemoryError e)
+        {
+          System.err.println("Crawler UI ran out of memory - shutting down");
+          e.printStackTrace(System.err);
+          System.exit(-200);
+        }
+        catch (Throwable e)
+        {
+          // A more severe error - but stay alive
+          Logging.root.fatal("Error tossed: "+e.getMessage(),e);
+        }
+      }
+    }
+    catch (Throwable e)
+    {
+      // Severe error on initialization
+      System.err.println("Crawler UI could not start - shutting down");
+      Logging.root.fatal("IdleCleanupThread initialization error tossed: "+e.getMessage(),e);
+      System.exit(-300);
+    }
+
+  }
+
+}
diff --git a/framework/crawler-ui/src/main/java/org/apache/manifoldcf/crawlerui/ServletListener.java b/framework/crawler-ui/src/main/java/org/apache/manifoldcf/crawlerui/ServletListener.java
index c97e53e..92a2e86 100644
--- a/framework/crawler-ui/src/main/java/org/apache/manifoldcf/crawlerui/ServletListener.java
+++ b/framework/crawler-ui/src/main/java/org/apache/manifoldcf/crawlerui/ServletListener.java
@@ -29,11 +29,16 @@
 {
   public static final String _rcsid = "@(#)$Id$";
 
+  protected IdleCleanupThread idleCleanupThread = null;
+
   public void contextInitialized(ServletContextEvent sce)
   {
     try
     {
-      ManifoldCF.initializeEnvironment();
+      IThreadContext threadContext = ThreadContextFactory.make();
+      ManifoldCF.initializeEnvironment(threadContext);
+      idleCleanupThread = new IdleCleanupThread();
+      idleCleanupThread.start();
     }
     catch (ManifoldCFException e)
     {
@@ -43,7 +48,21 @@
   
   public void contextDestroyed(ServletContextEvent sce)
   {
-    ManifoldCF.cleanUpEnvironment();
+    try
+    {
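+      // Keep interrupting the idle cleanup thread until it has actually exited; the environment
+      // is cleaned up in the finally clause regardless.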
+      while (true)
+      {
+        if (idleCleanupThread == null)
+          break;
+        idleCleanupThread.interrupt();
+        if (!idleCleanupThread.isAlive())
+          idleCleanupThread = null;
+      }
+    }
+    finally
+    {
+      ManifoldCF.cleanUpEnvironment(ThreadContextFactory.make());
+    }
   }
 
 }
diff --git a/framework/crawler-ui/src/main/webapp/WEB-INF/web.xml b/framework/crawler-ui/src/main/webapp/WEB-INF/web.xml
index b9cb732..dac5077 100644
--- a/framework/crawler-ui/src/main/webapp/WEB-INF/web.xml
+++ b/framework/crawler-ui/src/main/webapp/WEB-INF/web.xml
@@ -26,7 +26,7 @@
   <description>ManifoldCF Crawler Interface</description>
 
   <session-config>
-    <session-timeout>5</session-timeout>
+    <session-timeout>30</session-timeout>
   </session-config>
 
   <context-param>
diff --git a/framework/crawler-ui/src/main/webapp/adminDefaults.jsp b/framework/crawler-ui/src/main/webapp/adminDefaults.jsp
index 6e57536..973e4a5 100644
--- a/framework/crawler-ui/src/main/webapp/adminDefaults.jsp
+++ b/framework/crawler-ui/src/main/webapp/adminDefaults.jsp
@@ -42,7 +42,7 @@
 	org.apache.manifoldcf.ui.multipart.MultipartWrapper variableContext = (org.apache.manifoldcf.ui.multipart.MultipartWrapper)threadContext.get("__WRAPPER__");
 	if (variableContext == null)
 	{
-		variableContext = new org.apache.manifoldcf.ui.multipart.MultipartWrapper(request);
+		variableContext = new org.apache.manifoldcf.ui.multipart.MultipartWrapper(request,adminprofile);
 		threadContext.save("__WRAPPER__",variableContext);
 	}
 %>
diff --git a/framework/crawler-ui/src/main/webapp/adminHeaders.jsp b/framework/crawler-ui/src/main/webapp/adminHeaders.jsp
index 78ee464..080f527 100644
--- a/framework/crawler-ui/src/main/webapp/adminHeaders.jsp
+++ b/framework/crawler-ui/src/main/webapp/adminHeaders.jsp
@@ -46,13 +46,17 @@
 
 
 <%
+	if (adminprofile.getLoggedOn() == false)
+	{
+		response.sendRedirect("login.jsp");
+		return;
+	}
+
 	IThreadContext threadContext = thread.getThreadContext();
 	org.apache.manifoldcf.ui.multipart.MultipartWrapper variableContext = (org.apache.manifoldcf.ui.multipart.MultipartWrapper)threadContext.get("__WRAPPER__");
 	if (variableContext == null)
 	{
-		variableContext = new org.apache.manifoldcf.ui.multipart.MultipartWrapper(request);
+		variableContext = new org.apache.manifoldcf.ui.multipart.MultipartWrapper(request,adminprofile);
 		threadContext.save("__WRAPPER__",variableContext);
 	}
 %>
-
-<%@ include file="checkAdminLogin.jsp" %>
diff --git a/framework/crawler-ui/src/main/webapp/checkAdminLogin.jsp b/framework/crawler-ui/src/main/webapp/checkAdminLogin.jsp
deleted file mode 100644
index ecf5919..0000000
--- a/framework/crawler-ui/src/main/webapp/checkAdminLogin.jsp
+++ /dev/null
@@ -1,35 +0,0 @@
-<c:if test="${adminprofile.loggedOn!='true'}">
-	<% // This code is disabled because tomcat 4.0 does not return the server name; it always returns localhost.
-	   // response.sendRedirect("http://"+request.getServerName()+":"+Integer.toString(request.getServerPort())+"/crawler/index.jsp?force=top&expired=true"); %>
-</c:if>
-
-<%
-
-/* $Id$ */
-
-/**
-* Licensed to the Apache Software Foundation (ASF) under one or more
-* contributor license agreements. See the NOTICE file distributed with
-* this work for additional information regarding copyright ownership.
-* The ASF licenses this file to You under the Apache License, Version 2.0
-* (the "License"); you may not use this file except in compliance with
-* the License. You may obtain a copy of the License at
-* 
-* http://www.apache.org/licenses/LICENSE-2.0
-* 
-* Unless required by applicable law or agreed to in writing, software
-* distributed under the License is distributed on an "AS IS" BASIS,
-* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-* See the License for the specific language governing permissions and
-* limitations under the License.
-*/
-%>
-
-<%
-/**
-*
-*	Validates session and redirects to logon page if expired.
-*/
-%>
-
-
diff --git a/framework/crawler-ui/src/main/webapp/documentstatus.jsp b/framework/crawler-ui/src/main/webapp/documentstatus.jsp
index 3bbb2e1..622df7b 100644
--- a/framework/crawler-ui/src/main/webapp/documentstatus.jsp
+++ b/framework/crawler-ui/src/main/webapp/documentstatus.jsp
@@ -506,52 +506,52 @@
 		}
 %>
 		</table>
-		<table class="displaytable">
-		    <tr>
-			<td class="separator" colspan="5"><hr/></td>
-		    </tr>
-		    <tr>
-			<td class="value">
+		<table class="reportfootertable">
+		    <tr class="reportfooterrow">
+			<td class="reportfootercell">
 				<nobr>
 <%
 		if (startRow == 0)
 		{
 %>
-				<%=Messages.getBodyString(pageContext.getRequest().getLocale(),"documentstatus.Previous")%>
+					<%=Messages.getBodyString(pageContext.getRequest().getLocale(),"documentstatus.Previous")%>
 <%
 		}
 		else
 		{
 %>
-				<a href="javascript:void(0);" onclick='<%="javascript:SetPosition("+Integer.toString(startRow-rowCount)+");"%>' alt="Previous page">Previous</a>
-<%
-		}
-%>
-				&nbsp;
-<%
-		if (hasMoreRows == false)
-		{
-%>
-				<%=Messages.getBodyString(pageContext.getRequest().getLocale(),"documentstatus.Next")%>
-<%
-		}
-		else
-		{
-%>
-				<a href="javascript:void(0);" onclick='<%="javascript:SetPosition("+Integer.toString(startRow+rowCount)+");"%>' alt="Next page">Next</a>
+					<a href="javascript:void(0);" onclick='<%="javascript:SetPosition("+Integer.toString(startRow-rowCount)+");"%>' alt="Previous page">Previous</a>
 <%
 		}
 %>
 				</nobr>
 			</td>
-			<td class="description"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"documentstatus.Rows")%></nobr></td><td class="value"><%=Integer.toString(startRow)%>-<%=(hasMoreRows?Integer.toString(startRow+rowCount-1):"END")%></td>
-			<td class="description"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"documentstatus.RowsPerPage")%></nobr></td>
-			<td class="value">
-				<input type="text" name="rowcount" size="5" value='<%=Integer.toString(rowCount)%>'/>
+			<td class="reportfootercell">
+				<nobr>
+<%
+		if (hasMoreRows == false)
+		{
+%>
+					<%=Messages.getBodyString(pageContext.getRequest().getLocale(),"documentstatus.Next")%>
+<%
+		}
+		else
+		{
+%>
+					<a href="javascript:void(0);" onclick='<%="javascript:SetPosition("+Integer.toString(startRow+rowCount)+");"%>' alt="Next page">Next</a>
+<%
+		}
+%>
+				</nobr>
 			</td>
-		    </tr>
-		    <tr>
-			<td class="separator" colspan="5"><hr/></td>
+			<td class="reportfootercell">
+				<nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"documentstatus.Rows")%></nobr>
+				<nobr><%=Integer.toString(startRow)%>-<%=(hasMoreRows?Integer.toString(startRow+rowCount-1):"END")%></nobr>
+			</td>
+			<td class="reportfootercell">
+				<nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"documentstatus.RowsPerPage")%></nobr>
+				<nobr><input type="text" name="rowcount" size="5" value='<%=Integer.toString(rowCount)%>'/></nobr>
+			</td>
 		    </tr>
 		</table>
 
diff --git a/framework/crawler-ui/src/main/webapp/editauthority.jsp b/framework/crawler-ui/src/main/webapp/editauthority.jsp
index 4fef92d..978dcca 100644
--- a/framework/crawler-ui/src/main/webapp/editauthority.jsp
+++ b/framework/crawler-ui/src/main/webapp/editauthority.jsp
@@ -28,11 +28,17 @@
     // the connection object being edited will be placed in the thread context under the name "ConnectionObject".
     try
     {
+	// Get the domain manager handle
+	IAuthorizationDomainManager domainMgr = AuthorizationDomainManagerFactory.make(threadContext);
 	// Get the connection manager handle
 	IAuthorityConnectionManager connMgr = AuthorityConnectionManagerFactory.make(threadContext);
 	// Also get the list of available connectors
 	IAuthorityConnectorManager connectorManager = AuthorityConnectorManagerFactory.make(threadContext);
-
+	// Get the mapping connection manager
+	IMappingConnectionManager mappingConnMgr = MappingConnectionManagerFactory.make(threadContext);
+	// Get the group manager
+	IAuthorityGroupManager authGroupManager = AuthorityGroupManagerFactory.make(threadContext);
+	
 	// Figure out what the current tab name is.
 	String tabName = variableContext.getParameter("tabname");
 	if (tabName == null || tabName.length() == 0)
@@ -58,7 +64,10 @@
 	String className = "";
 	int maxConnections = 10;
 	ConfigParams parameters = new ConfigParams();
-
+	String prereq = null;
+	String authDomain = "";
+	String groupName = "";
+	
 	if (connection != null)
 	{
 		// Set up values
@@ -68,6 +77,13 @@
 		className = connection.getClassName();
 		parameters = connection.getConfigParams();
 		maxConnections = connection.getMaxConnections();
+		prereq = connection.getPrerequisiteMapping();
+		authDomain = connection.getAuthDomain();
+		if (authDomain == null)
+			authDomain = "";
+		groupName = connection.getAuthGroup();
+		if (groupName == null)
+			groupName = "";
 	}
 	else
 		connectionName = null;
@@ -82,7 +98,10 @@
 	tabsArray.add(Messages.getString(pageContext.getRequest().getLocale(),"editauthority.Name"));
 	tabsArray.add(Messages.getString(pageContext.getRequest().getLocale(),"editauthority.Type"));
 	if (className.length() > 0)
+	{
+		tabsArray.add(Messages.getString(pageContext.getRequest().getLocale(),"editauthority.Prerequisites"));
 		tabsArray.add(Messages.getString(pageContext.getRequest().getLocale(),"editauthority.Throttling"));
+	}
 
 %>
 
@@ -145,6 +164,13 @@
 				document.editconnection.connname.focus();
 				return;
 			}
+			if (editconnection.authoritygroup.value == "")
+			{
+				alert("<%=Messages.getBodyJavascriptString(pageContext.getRequest().getLocale(),"editauthority.ConnectionMustHaveAGroup")%>");
+				SelectTab("<%=Messages.getBodyJavascriptString(pageContext.getRequest().getLocale(),"editauthority.Type")%>");
+				document.editconnection.authoritygroup.focus();
+				return;
+			}
 			if (window.checkConfigForSave)
 			{
 				if (!checkConfigForSave())
@@ -211,10 +237,12 @@
 	//-->
 	</script>
 <%
-	AuthorityConnectorFactory.outputConfigurationHeader(threadContext,className,new org.apache.manifoldcf.ui.jsp.JspWrapper(out),pageContext.getRequest().getLocale(),parameters,tabsArray);
+	AuthorityConnectorFactory.outputConfigurationHeader(threadContext,className,new org.apache.manifoldcf.ui.jsp.JspWrapper(out,adminprofile),pageContext.getRequest().getLocale(),parameters,tabsArray);
 
 	// Get connectors, since this will be needed to determine what to display.
 	IResultSet set = connectorManager.getConnectors();
+	// Same for authority groups
+	IAuthorityGroup[] set2 = authGroupManager.getAllGroups();
 
 %>
 
@@ -226,13 +254,18 @@
       <tr><td colspan="2" class="banner"><jsp:include page="banner.jsp" flush="true"/></td></tr>
       <tr><td class="navigation"><jsp:include page="navigation.jsp" flush="true"/></td>
        <td class="darkwindow">
-
-
 <%
-	if (set.getRowCount() == 0)
+	if (set2.length == 0)
 	{
 %>
-	<p class="windowtitle">Edit Authority Connection</p>
+	<p class="windowtitle"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"editauthority.EditAuthorityConnection")%></p>
+	<table class="displaytable"><tr><td class="message"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"editauthority.NoAuthorityGroupsDefinedCreateOneFirst")%></td></tr></table>
+<%
+	}
+	else if (set.getRowCount() == 0)
+	{
+%>
+	<p class="windowtitle"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"editauthority.EditAuthorityConnection")%></p>
 	<table class="displaytable"><tr><td class="message"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"editauthority.NoAuthorityConnectorsRegistered")%></td></tr></table>
 <%
 	}
@@ -334,11 +367,13 @@
 	  // "Type" tab
 	  if (tabName.equals(Messages.getString(pageContext.getRequest().getLocale(),"editauthority.Type")))
 	  {
+	    IResultSet domainSet = domainMgr.getDomains();
 %>
 		    <table class="displaytable">
 			<tr><td class="separator" colspan="5"><hr/></td></tr>
 			<tr>
-				<td class="description"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"editauthority.ConnectionTypeColon")%></nobr></td><td class="value" colspan="4">
+				<td class="description"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"editauthority.ConnectionTypeColon")%></nobr></td>
+				<td class="value" colspan="4">
 <%
 	    if (className.length() > 0)
 	    {
@@ -382,6 +417,49 @@
 %>
 				</td>
 			</tr>
+			<tr><td class="separator" colspan="5"><hr/></td></tr>
+			<tr>
+				<td class="description"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"editauthority.AuthorityGroupColon")%></nobr></td>
+				<td class="value" colspan="1">
+					<select name="authoritygroup" size="1">
+						<option value=""><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"editauthority.SelectAGroup")%></option>
+<%
+	    for (int i = 0; i < set2.length; i++)
+	    {
+		IAuthorityGroup row = set2[i];
+		String thisAuthorityName = row.getName();
+		String thisDescription = row.getDescription();
+		if (thisDescription == null || thisDescription.length() == 0)
+			thisDescription = thisAuthorityName;
+%>
+						<option value='<%=org.apache.manifoldcf.ui.util.Encoder.attributeEscape(thisAuthorityName)%>'
+							<%=(groupName.equals(thisAuthorityName))?"selected=\"selected\"":""%>><%=org.apache.manifoldcf.ui.util.Encoder.bodyEscape(thisDescription)%></option>
+<%
+	    }
+%>
+					</select>
+				</td>
+				<td class="description"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"editauthority.AuthorizationDomainColon")%></nobr></td>
+				<td class="value" colspan="1">
+					<select name="authdomain" size="1">
+						<option value="" <%=(authDomain == null || authDomain.length() == 0)?"selected=\"selected\"":""%>><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"editauthority.DefaultDomainNone")%></option>
+<%
+	    for (int i = 0; i < domainSet.getRowCount(); i++)
+	    {
+		IResultRow row = domainSet.getRow(i);
+		String domainName = (String)row.getValue("domainname");
+		String thisDescription = (String)row.getValue("description");
+		if (thisDescription == null || thisDescription.length() == 0)
+			thisDescription = domainName;
+%>
+						<option value='<%=org.apache.manifoldcf.ui.util.Encoder.attributeEscape(domainName)%>'
+							<%=(authDomain!=null && domainName.equals(authDomain))?"selected=\"selected\"":""%>><%=org.apache.manifoldcf.ui.util.Encoder.bodyEscape(thisDescription)%></option>
+<%
+	    }
+%>
+					</select>
+				</td>
+			</tr>
 		    </table>
 <%
 	  }
@@ -390,10 +468,75 @@
 		// Hiddens for the "Type" tab
 %>
 		    <input type="hidden" name="classname" value='<%=org.apache.manifoldcf.ui.util.Encoder.attributeEscape(className)%>'/>
+		    <input type="hidden" name="authdomain" value='<%=org.apache.manifoldcf.ui.util.Encoder.attributeEscape(authDomain)%>'/>
+		    <input type="hidden" name="authoritygroup" value='<%=org.apache.manifoldcf.ui.util.Encoder.attributeEscape(groupName)%>'/>
 <%
 	  }
 
+	  // The "Prerequisites" tab
+	  IMappingConnection[] mappingConnections = mappingConnMgr.getAllConnections();
+	  if (tabName.equals(Messages.getString(pageContext.getRequest().getLocale(),"editauthority.Prerequisites")))
+	  {
+%>
+		    <table class="displaytable">
+			<tr><td class="separator" colspan="5"><hr/></td></tr>
+			<tr>
+				<td class="description"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"editauthority.PrerequisiteUserMappingColon")%></nobr></td>
+				<td class="value" colspan="4">
+					<input type="hidden" name="prerequisites_present" value="true"/>
+<%
+	    if (prereq == null)
+	    {
+%>
+					<input type="radio" name="prerequisites" value="" checked="true"/>&nbsp;<%=Messages.getBodyString(pageContext.getRequest().getLocale(),"editauthority.NoPrerequisites")%><br/>
+<%
+	    }
+	    else
+	    {
+%>
+					<input type="radio" name="prerequisites" value=""/>&nbsp;<%=Messages.getBodyString(pageContext.getRequest().getLocale(),"editauthority.NoPrerequisites")%><br/>
+<%
+	    }
 
+	    for (IMappingConnection mappingConnection : mappingConnections)
+	    {
+		String mappingName = mappingConnection.getName();
+		String mappingDescription = mappingName;
+		if (mappingConnection.getDescription() != null && mappingConnection.getDescription().length() > 0)
+			mappingDescription += " (" + mappingConnection.getDescription()+")";
+		if (prereq != null && prereq.equals(mappingName))
+		{
+%>
+					<input type="radio" name="prerequisites" value='<%=org.apache.manifoldcf.ui.util.Encoder.attributeEscape(mappingName)%>' checked="true"/>&nbsp;<%=org.apache.manifoldcf.ui.util.Encoder.bodyEscape(mappingDescription)%><br/>
+<%
+		}
+		else
+		{
+%>
+					<input type="radio" name="prerequisites" value='<%=org.apache.manifoldcf.ui.util.Encoder.attributeEscape(mappingName)%>'/>&nbsp;<%=org.apache.manifoldcf.ui.util.Encoder.bodyEscape(mappingDescription)%><br/>
+<%
+		}
+	    }
+%>
+				</td>
+			</tr>
+		    </table>
+<%
+	  }
+	  else
+	  {
+		// Hiddens for Prerequisites tab
+%>
+		    <input type="hidden" name="prerequisites_present" value="true"/>
+<%
+		if (prereq != null)
+		{
+%>
+		    <input type="hidden" name="prerequisites" value='<%=org.apache.manifoldcf.ui.util.Encoder.attributeEscape(prereq)%>'/>
+<%
+		}
+	  }
+	  
 	  // The "Throttling" tab
 	  if (tabName.equals(Messages.getString(pageContext.getRequest().getLocale(),"editauthority.Throttling")))
 	  {
@@ -401,7 +544,7 @@
 		    <table class="displaytable">
 			<tr><td class="separator" colspan="5"><hr/></td></tr>
 			<tr>
-				<td class="description"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"editauthority.MaxConnections")%></nobr><br/><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"editauthority.PerJVMColon")%></nobr></td>
+				<td class="description"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"editauthority.MaxConnectionsColon")%></nobr></td>
 				<td class="value" colspan="4"><input type="text" size="6" name="maxconnections" value='<%=Integer.toString(maxConnections)%>'/></td>
 			</tr>
 		    </table>
@@ -416,7 +559,7 @@
 	  }
 
 	  if (className.length() > 0)
-		AuthorityConnectorFactory.outputConfigurationBody(threadContext,className,new org.apache.manifoldcf.ui.jsp.JspWrapper(out),pageContext.getRequest().getLocale(),parameters,tabName);
+		AuthorityConnectorFactory.outputConfigurationBody(threadContext,className,new org.apache.manifoldcf.ui.jsp.JspWrapper(out,adminprofile),pageContext.getRequest().getLocale(),parameters,tabName);
 %>
 		    <table class="displaytable">
 			<tr><td class="separator" colspan="4"><hr/></td></tr>
diff --git a/framework/crawler-ui/src/main/webapp/editconnection.jsp b/framework/crawler-ui/src/main/webapp/editconnection.jsp
index 87e7e83..1906cef 100644
--- a/framework/crawler-ui/src/main/webapp/editconnection.jsp
+++ b/framework/crawler-ui/src/main/webapp/editconnection.jsp
@@ -32,7 +32,7 @@
 	IRepositoryConnectionManager connMgr = RepositoryConnectionManagerFactory.make(threadContext);
 	// Also get the list of available connectors
 	IConnectorManager connectorManager = ConnectorManagerFactory.make(threadContext);
-	IAuthorityConnectionManager authConnectionManager = AuthorityConnectionManagerFactory.make(threadContext);
+	IAuthorityGroupManager authGroupManager = AuthorityGroupManagerFactory.make(threadContext);
 
 	// Figure out what the current tab name is.
 	String tabName = variableContext.getParameter("tabname");
@@ -254,7 +254,7 @@
 	//-->
 	</script>
 <%
-	RepositoryConnectorFactory.outputConfigurationHeader(threadContext,className,new org.apache.manifoldcf.ui.jsp.JspWrapper(out),pageContext.getRequest().getLocale(),parameters,tabsArray);
+	RepositoryConnectorFactory.outputConfigurationHeader(threadContext,className,new org.apache.manifoldcf.ui.jsp.JspWrapper(out,adminprofile),pageContext.getRequest().getLocale(),parameters,tabsArray);
 %>
 
 </head>
@@ -422,10 +422,10 @@
 				</td>
 			</tr>
 			<tr>
-				<td class="description"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"editconnection.AuthorityColon")%></nobr></td>
+				<td class="description"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"editconnection.AuthorityGroupColon")%></nobr></td>
 				<td class="value" colspan="4">
 <%
-	    IAuthorityConnection[] set2 = authConnectionManager.getAllConnections();
+	    IAuthorityGroup[] set2 = authGroupManager.getAllGroups();
 	    int i = 0;
 %>
 					<select name="authorityname" size="1">
@@ -433,7 +433,7 @@
 <%
 	    while (i < set2.length)
 	    {
-		IAuthorityConnection row = set2[i++];
+		IAuthorityGroup row = set2[i++];
 		String thisAuthorityName = row.getName();
 		String thisDescription = row.getDescription();
 		if (thisDescription == null || thisDescription.length() == 0)
@@ -470,7 +470,7 @@
 		    <table class="displaytable">
 			<tr><td class="separator" colspan="2"><hr/></td></tr>
 			<tr>
-				<td class="description"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"editconnection.Maxconnections")%></nobr><br/><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"editconnection.PerJVMColon")%></nobr></td>
+				<td class="description"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"editconnection.MaxconnectionsColon")%></nobr></td>
 				<td class="value"><input type="text" size="6" name="maxconnections" value='<%=Integer.toString(maxConnections)%>'/></td>
 			</tr>
 			<tr>
@@ -567,7 +567,7 @@
 	  }
 
 	  if (className.length() > 0)
-		RepositoryConnectorFactory.outputConfigurationBody(threadContext,className,new org.apache.manifoldcf.ui.jsp.JspWrapper(out),pageContext.getRequest().getLocale(),parameters,tabName);
+		RepositoryConnectorFactory.outputConfigurationBody(threadContext,className,new org.apache.manifoldcf.ui.jsp.JspWrapper(out,adminprofile),pageContext.getRequest().getLocale(),parameters,tabName);
 %>
 		    <table class="displaytable">
 			<tr><td class="separator" colspan="4"><hr/></td></tr>
diff --git a/framework/crawler-ui/src/main/webapp/editgroup.jsp b/framework/crawler-ui/src/main/webapp/editgroup.jsp
new file mode 100644
index 0000000..9d53abc
--- /dev/null
+++ b/framework/crawler-ui/src/main/webapp/editgroup.jsp
@@ -0,0 +1,296 @@
+<%@ include file="adminHeaders.jsp" %>
+<%
+
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+* 
+* http://www.apache.org/licenses/LICENSE-2.0
+* 
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+
+    // The contract of this edit page is as follows.  It is either called directly, in which case it is expected to be creating
+    // a group or beginning the process of editing an existing group, or it is called via redirection from execute.jsp, in which case
+    // the group object being edited will be placed in the thread context under the name "GroupObject".
+    try
+    {
+	// Get the group manager
+	IAuthorityGroupManager authGroupManager = AuthorityGroupManagerFactory.make(threadContext);
+	
+	// Figure out what the current tab name is.
+	String tabName = variableContext.getParameter("tabname");
+	if (tabName == null || tabName.length() == 0)
+		tabName = Messages.getString(pageContext.getRequest().getLocale(),"editgroup.Name");
+
+	String groupName = null;
+	IAuthorityGroup group = (IAuthorityGroup)threadContext.get("GroupObject");
+	if (group == null)
+	{
+		// We did not go through execute.jsp
+		// We might have received an argument specifying the group name.
+		groupName = variableContext.getParameter("groupname");
+		// If the groupname is not null, load the group and prepopulate everything with what comes from it.
+		if (groupName != null && groupName.length() > 0)
+		{
+			group = authGroupManager.load(groupName);
+		}
+	}
+
+	// Set up default fields
+	boolean isNew = true;
+	String description = "";
+	
+	if (group != null)
+	{
+		// Set up values
+		isNew = group.getIsNew();
+		groupName = group.getName();
+		description = group.getDescription();
+	}
+	else
+		groupName = null;
+
+	if (groupName == null)
+		groupName = "";
+
+	// Initialize tabs array
+	ArrayList tabsArray = new ArrayList();
+
+	// Set up the predefined tabs
+	tabsArray.add(Messages.getString(pageContext.getRequest().getLocale(),"editgroup.Name"));
+%>
+
+<?xml version="1.0" encoding="utf-8"?>
+
+<html xmlns="http://www.w3.org/1999/xhtml">
+<head>
+	<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
+	<link rel="StyleSheet" href="style.css" type="text/css" media="screen"/>
+	<title>
+		<%=Messages.getBodyString(pageContext.getRequest().getLocale(),"editgroup.ApacheManifoldCFEditAuthorityGroup")%>
+	</title>
+
+	<script type="text/javascript">
+	<!--
+	// Use this method to repost the form and pick a new tab
+	function SelectTab(newtab)
+	{
+		if (checkForm())
+		{
+			document.editgroup.tabname.value = newtab;
+			document.editgroup.submit();
+		}
+	}
+
+	// Use this method to repost the form,
+	// and set the anchor request.
+	function postFormSetAnchor(anchorValue)
+	{
+		if (checkForm())
+		{
+			if (anchorValue != "")
+				document.editgroup.action = document.editgroup.action + "#" + anchorValue;
+			document.editgroup.submit();
+		}
+	}
+
+	// Use this method to repost the form
+	function postForm()
+	{
+		if (checkForm())
+		{
+			document.editgroup.submit();
+		}
+	}
+
+	function Save()
+	{
+		if (checkForm())
+		{
+			// Can't submit until all required fields have been set.
+			// Some of these don't live on the current tab, so don't set
+			// focus.
+
+			// Check our part of the form, for save
+			if (editgroup.groupname.value == "")
+			{
+				alert("<%=Messages.getBodyJavascriptString(pageContext.getRequest().getLocale(),"editgroup.AuthorityGroupMustHaveAName")%>");
+				SelectTab("<%=Messages.getBodyJavascriptString(pageContext.getRequest().getLocale(),"editgroup.Name")%>");
+				document.editgroup.groupname.focus();
+				return;
+			}
+			if (window.checkConfigForSave)
+			{
+				if (!checkConfigForSave())
+					return;
+			}
+			document.editgroup.op.value="Save";
+			document.editgroup.submit();
+		}
+	}
+
+	function Continue()
+	{
+		document.editgroup.op.value="Continue";
+		postForm();
+	}
+
+	function Cancel()
+	{
+		document.editgroup.op.value="Cancel";
+		document.editgroup.submit();
+	}
+
+	function checkForm()
+	{
+		return true;
+	}
+
+	//-->
+	</script>
+</head>
+
+<body class="standardbody">
+
+    <table class="page">
+      <tr><td colspan="2" class="banner"><jsp:include page="banner.jsp" flush="true"/></td></tr>
+      <tr><td class="navigation"><jsp:include page="navigation.jsp" flush="true"/></td>
+       <td class="darkwindow">
+
+
+	<form class="standardform" name="editgroup" action="execute.jsp" method="POST" enctype="multipart/form-data">
+	  <input type="hidden" name="op" value="Continue"/>
+	  <input type="hidden" name="type" value="group"/>
+	  <input type="hidden" name="tabname" value='<%=org.apache.manifoldcf.ui.util.Encoder.attributeEscape(tabName)%>'/>
+	  <input type="hidden" name="isnewconnection" value='<%=(isNew?"true":"false")%>'/>
+	    <table class="tabtable">
+	      <tr class="tabrow">
+<%
+	int tabNum = 0;
+	while (tabNum < tabsArray.size())
+	{
+		String tab = (String)tabsArray.get(tabNum++);
+		if (tab.equals(tabName))
+		{
+%>
+		      <td class="activetab"><nobr><%=org.apache.manifoldcf.ui.util.Encoder.bodyEscape(tab)%></nobr></td>
+<%
+		}
+		else
+		{
+%>
+		      <td class="passivetab"><nobr><a href="javascript:void(0);" alt='<%=org.apache.manifoldcf.ui.util.Encoder.attributeEscape(tab)+" "+Messages.getAttributeString(pageContext.getRequest().getLocale(),"editgroup.tab")%>' onclick='<%="javascript:SelectTab(\""+tab+"\");return false;"%>'><%=org.apache.manifoldcf.ui.util.Encoder.bodyEscape(tab)%></a></nobr></td>
+<%
+		}
+	}
+%>
+		      <td class="remaindertab">
+<%
+	if (description.length() > 0)
+	{
+%>
+			  <nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"editgroup.EditGroup")%> '<%=org.apache.manifoldcf.ui.util.Encoder.bodyEscape(description)%>'</nobr>
+<%
+	}
+	else
+	{
+%>
+		          <nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"editgroup.EditAGroup")%></nobr>
+<%
+	}
+%>
+		      </td>
+	      </tr>
+	      <tr class="tabbodyrow">
+		<td class="tabbody" colspan='<%=Integer.toString(tabsArray.size()+1)%>'>
+
+<%
+
+	// Name tab
+	if (tabName.equals(Messages.getString(pageContext.getRequest().getLocale(),"editgroup.Name")))
+	{
+%>
+		    <table class="displaytable">
+			<tr><td class="separator" colspan="5"><hr/></td></tr>
+			<tr>
+				<td class="description"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"editgroup.NameColon")%></nobr></td>
+				<td class="value" colspan="4">
+<%
+	    // If the group doesn't exist yet, we are allowed to change the name.
+	    if (isNew)
+	    {
+%>
+					<input type="text" size="32" name="groupname" value='<%=org.apache.manifoldcf.ui.util.Encoder.attributeEscape(groupName)%>'/>
+<%
+	    }
+	    else
+	    {
+%>
+					<%=org.apache.manifoldcf.ui.util.Encoder.bodyEscape(groupName)%>
+					<input type="hidden" name="groupname" value='<%=org.apache.manifoldcf.ui.util.Encoder.attributeEscape(groupName)%>'/>
+<%
+	    }
+%>
+				</td>
+			</tr>
+			<tr>
+				<td class="description"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"editgroup.DescriptionColon")%></nobr></td>
+				<td class="value" colspan="4">
+					<input type="text" size="50" name="description" value='<%=org.apache.manifoldcf.ui.util.Encoder.attributeEscape(description)%>'/>
+				</td>
+			</tr>
+		    </table>
+<%
+	}
+	else
+	{
+		// Hiddens for the Name tab
+%>
+		    <input type="hidden" name="groupname" value='<%=org.apache.manifoldcf.ui.util.Encoder.attributeEscape(groupName)%>'/>
+		    <input type="hidden" name="description" value='<%=org.apache.manifoldcf.ui.util.Encoder.attributeEscape(description)%>'/>
+<%
+	}
+%>
+		    <table class="displaytable">
+			<tr><td class="separator" colspan="4"><hr/></td></tr>
+			<tr><td class="message" colspan="4"><nobr>
+			    <input type="button" value="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"editgroup.Save")%>" onClick="javascript:Save()" alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"editgroup.SaveThisAuthorityGroup")%>"/>
+			    &nbsp;<input type="button" value="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"editgroup.Cancel")%>" onClick="javascript:Cancel()" alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"editgroup.CancelAuthorityGroupEditing")%>"/></nobr></td>
+			</tr>
+		    </table>
+		</td>
+	      </tr>
+	    </table>
+	</form>
+       </td>
+      </tr>
+    </table>
+
+</body>
+
+</html>
+
+<%
+    }
+    catch (ManifoldCFException e)
+    {
+	e.printStackTrace();
+	variableContext.setParameter("text",e.getMessage());
+	variableContext.setParameter("target","listgroups.jsp");
+%>
+	<jsp:forward page="error.jsp"/>
+<%
+    }
+%>
+
diff --git a/framework/crawler-ui/src/main/webapp/editjob.jsp b/framework/crawler-ui/src/main/webapp/editjob.jsp
index 334efab..5a57ccb 100644
--- a/framework/crawler-ui/src/main/webapp/editjob.jsp
+++ b/framework/crawler-ui/src/main/webapp/editjob.jsp
@@ -36,6 +36,9 @@
 	IOutputConnectionManager outputMgr = OutputConnectionManagerFactory.make(threadContext);
 	IOutputConnection[] outputList = outputMgr.getAllConnections();
 
+	IOutputConnectorPool outputConnectorPool = OutputConnectorPoolFactory.make(threadContext);
+	IRepositoryConnectorPool repositoryConnectorPool = RepositoryConnectorPoolFactory.make(threadContext);
+
 	// Figure out tab name
 	String tabName = variableContext.getParameter("tabname");
 	if (tabName == null || tabName.length() == 0)
@@ -404,17 +407,16 @@
 <%
 	if (outputConnection != null)
 	{
-		IOutputConnector outputConnector = OutputConnectorFactory.grab(threadContext,outputConnection.getClassName(),outputConnection.getConfigParams(),
-			outputConnection.getMaxConnections());
+		IOutputConnector outputConnector = outputConnectorPool.grab(outputConnection);
 		if (outputConnector != null)
 		{
 			try
 			{
-				outputConnector.outputSpecificationHeader(new org.apache.manifoldcf.ui.jsp.JspWrapper(out),pageContext.getRequest().getLocale(),outputSpecification,tabsArray);
+				outputConnector.outputSpecificationHeader(new org.apache.manifoldcf.ui.jsp.JspWrapper(out,adminprofile),pageContext.getRequest().getLocale(),outputSpecification,tabsArray);
 			}
 			finally
 			{
-				OutputConnectorFactory.release(outputConnector);
+				outputConnectorPool.release(outputConnection,outputConnector);
 			}
 		}
 	}
@@ -423,17 +425,16 @@
 <%
 	if (connection != null)
 	{
-		IRepositoryConnector repositoryConnector = RepositoryConnectorFactory.grab(threadContext,connection.getClassName(),connection.getConfigParams(),
-			connection.getMaxConnections());
+		IRepositoryConnector repositoryConnector = repositoryConnectorPool.grab(connection);
 		if (repositoryConnector != null)
 		{
 			try
 			{
-				repositoryConnector.outputSpecificationHeader(new org.apache.manifoldcf.ui.jsp.JspWrapper(out),pageContext.getRequest().getLocale(),documentSpecification,tabsArray);
+				repositoryConnector.outputSpecificationHeader(new org.apache.manifoldcf.ui.jsp.JspWrapper(out,adminprofile),pageContext.getRequest().getLocale(),documentSpecification,tabsArray);
 			}
 			finally
 			{
-				RepositoryConnectorFactory.release(repositoryConnector);
+				repositoryConnectorPool.release(connection,repositoryConnector);
 			}
 		}
 	}
@@ -1272,17 +1273,16 @@
 
 	if (outputConnection != null)
 	{
-		IOutputConnector outputConnector = OutputConnectorFactory.grab(threadContext,outputConnection.getClassName(),outputConnection.getConfigParams(),
-			outputConnection.getMaxConnections());
+		IOutputConnector outputConnector = outputConnectorPool.grab(outputConnection);
 		if (outputConnector != null)
 		{
 			try
 			{
-				outputConnector.outputSpecificationBody(new org.apache.manifoldcf.ui.jsp.JspWrapper(out),pageContext.getRequest().getLocale(),outputSpecification,tabName);
+				outputConnector.outputSpecificationBody(new org.apache.manifoldcf.ui.jsp.JspWrapper(out,adminprofile),pageContext.getRequest().getLocale(),outputSpecification,tabName);
 			}
 			finally
 			{
-				OutputConnectorFactory.release(outputConnector);
+				outputConnectorPool.release(outputConnection,outputConnector);
 			}
 %>
 		  <input type="hidden" name="outputpresent" value="true"/>
@@ -1292,17 +1292,16 @@
 
 	if (connection != null)
 	{
-		IRepositoryConnector repositoryConnector = RepositoryConnectorFactory.grab(threadContext,connection.getClassName(),connection.getConfigParams(),
-			connection.getMaxConnections());
+		IRepositoryConnector repositoryConnector = repositoryConnectorPool.grab(connection);
 		if (repositoryConnector != null)
 		{
 			try
 			{
-				repositoryConnector.outputSpecificationBody(new org.apache.manifoldcf.ui.jsp.JspWrapper(out),pageContext.getRequest().getLocale(),documentSpecification,tabName);
+				repositoryConnector.outputSpecificationBody(new org.apache.manifoldcf.ui.jsp.JspWrapper(out,adminprofile),pageContext.getRequest().getLocale(),documentSpecification,tabName);
 			}
 			finally
 			{
-				RepositoryConnectorFactory.release(repositoryConnector);
+				repositoryConnectorPool.release(connection,repositoryConnector);
 			}
 %>
 		  <input type="hidden" name="connectionpresent" value="true"/>
diff --git a/framework/crawler-ui/src/main/webapp/editmapper.jsp b/framework/crawler-ui/src/main/webapp/editmapper.jsp
new file mode 100644
index 0000000..da54a05
--- /dev/null
+++ b/framework/crawler-ui/src/main/webapp/editmapper.jsp
@@ -0,0 +1,541 @@
+<%@ include file="adminHeaders.jsp" %>
+
+<%
+
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+* 
+* http://www.apache.org/licenses/LICENSE-2.0
+* 
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+%>
+
+<%
+    // The contract of this edit page is as follows.  It is either called directly, in which case it is expected to be creating
+    // a connection or beginning the process of editing an existing connection, or it is called via redirection from execute.jsp, in which case
+    // the connection object being edited will be placed in the thread context under the name "ConnectionObject".
+    try
+    {
+	// Get the connection manager handle
+	IMappingConnectionManager connMgr = MappingConnectionManagerFactory.make(threadContext);
+	// Also get the list of available connectors
+	IMappingConnectorManager connectorManager = MappingConnectorManagerFactory.make(threadContext);
+
+	// Figure out what the current tab name is.
+	String tabName = variableContext.getParameter("tabname");
+	if (tabName == null || tabName.length() == 0)
+		tabName = Messages.getString(pageContext.getRequest().getLocale(),"editmapper.Name");
+
+	String connectionName = null;
+	IMappingConnection connection = (IMappingConnection)threadContext.get("ConnectionObject");
+	if (connection == null)
+	{
+		// We did not go through execute.jsp
+		// We might have received an argument specifying the connection name.
+		connectionName = variableContext.getParameter("connname");
+		// If the connectionname is not null, load the connection description and prepopulate everything with what comes from it.
+		if (connectionName != null && connectionName.length() > 0)
+		{
+			connection = connMgr.load(connectionName);
+		}
+	}
+
+	// Set up default fields
+	boolean isNew = true;
+	String description = "";
+	String className = "";
+	int maxConnections = 10;
+	ConfigParams parameters = new ConfigParams();
+	String prereq = null;
+
+	if (connection != null)
+	{
+		// Set up values
+		isNew = connection.getIsNew();
+		connectionName = connection.getName();
+		description = connection.getDescription();
+		className = connection.getClassName();
+		parameters = connection.getConfigParams();
+		maxConnections = connection.getMaxConnections();
+		prereq = connection.getPrerequisiteMapping();
+	}
+	else
+		connectionName = null;
+
+	if (connectionName == null)
+		connectionName = "";
+
+	// Initialize tabs array
+	ArrayList tabsArray = new ArrayList();
+
+	// Set up the predefined tabs
+	tabsArray.add(Messages.getString(pageContext.getRequest().getLocale(),"editmapper.Name"));
+	tabsArray.add(Messages.getString(pageContext.getRequest().getLocale(),"editmapper.Type"));
+	if (className.length() > 0)
+	{
+		tabsArray.add(Messages.getString(pageContext.getRequest().getLocale(),"editmapper.Prerequisites"));
+		tabsArray.add(Messages.getString(pageContext.getRequest().getLocale(),"editmapper.Throttling"));
+	}
+
+%>
+
+<?xml version="1.0" encoding="utf-8"?>
+
+<html xmlns="http://www.w3.org/1999/xhtml">
+<head>
+	<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
+	<link rel="StyleSheet" href="style.css" type="text/css" media="screen"/>
+	<title>
+		<%=Messages.getBodyString(pageContext.getRequest().getLocale(),"editmapper.ApacheManifoldCFEditMapping")%>
+	</title>
+
+	<script type="text/javascript">
+	<!--
+	// Use this method to repost the form and pick a new tab
+	function SelectTab(newtab)
+	{
+		if (checkForm())
+		{
+			document.editconnection.tabname.value = newtab;
+			document.editconnection.submit();
+		}
+	}
+
+	// Use this method to repost the form,
+	// and set the anchor request.
+	function postFormSetAnchor(anchorValue)
+	{
+		if (checkForm())
+		{
+			if (anchorValue != "")
+				document.editconnection.action = document.editconnection.action + "#" + anchorValue;
+			document.editconnection.submit();
+		}
+	}
+
+	// Use this method to repost the form
+	function postForm()
+	{
+		if (checkForm())
+		{
+			document.editconnection.submit();
+		}
+	}
+
+	function Save()
+	{
+		if (checkForm())
+		{
+			// Can't submit until all required fields have been set.
+			// Some of these don't live on the current tab, so don't set
+			// focus.
+
+			// Check our part of the form, for save
+			if (editconnection.connname.value == "")
+			{
+				alert("<%=Messages.getBodyJavascriptString(pageContext.getRequest().getLocale(),"editmapper.ConnectionMustHaveAName")%>");
+				SelectTab("<%=Messages.getBodyJavascriptString(pageContext.getRequest().getLocale(),"editmapper.Name")%>");
+				document.editconnection.connname.focus();
+				return;
+			}
+			if (window.checkConfigForSave)
+			{
+				if (!checkConfigForSave())
+					return;
+			}
+			document.editconnection.op.value="Save";
+			document.editconnection.submit();
+		}
+	}
+
+	function Continue()
+	{
+		document.editconnection.op.value="Continue";
+		postForm();
+	}
+
+	function Cancel()
+	{
+		document.editconnection.op.value="Cancel";
+		document.editconnection.submit();
+	}
+
+	function checkForm()
+	{
+		if (!checkConnectionCount())
+			return false;
+		if (window.checkConfig)
+			return checkConfig();
+		return true;
+	}
+
+	function checkConnectionCount()
+	{
+		if (!isInteger(editconnection.maxconnections.value))
+		{
+			alert("<%=Messages.getBodyJavascriptString(pageContext.getRequest().getLocale(),"editmapper.TheMaximumNumberOfConnectionsMustBeAValidInteger")%>");
+			editconnection.maxconnections.focus();
+			return false;
+		}
+		return true;
+	}
+
+	function isRegularExpression(value)
+	{
+		try
+		{
+			var foo = "teststring";
+			foo.search(value.replace(/\(\?i\)/,""));
+			return true;
+		}
+		catch (e)
+		{
+			return false;
+		}
+
+	}
+
+	function isInteger(value)
+	{
+		var anum=/(^\d+$)/;
+		return anum.test(value);
+	}
+
+	//-->
+	</script>
+<%
+	MappingConnectorFactory.outputConfigurationHeader(threadContext,className,new org.apache.manifoldcf.ui.jsp.JspWrapper(out,adminprofile),pageContext.getRequest().getLocale(),parameters,tabsArray);
+
+	// Get connectors, since this will be needed to determine what to display.
+	IResultSet set = connectorManager.getConnectors();
+
+%>
+
+</head>
+
+<body class="standardbody">
+
+    <table class="page">
+      <tr><td colspan="2" class="banner"><jsp:include page="banner.jsp" flush="true"/></td></tr>
+      <tr><td class="navigation"><jsp:include page="navigation.jsp" flush="true"/></td>
+       <td class="darkwindow">
+
+
+<%
+	if (set.getRowCount() == 0)
+	{
+%>
+	<p class="windowtitle"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"editmapper.EditMappingConnection")%></p>
+	<table class="displaytable"><tr><td class="message"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"editmapper.NoMappingConnectorsRegistered")%></td></tr></table>
+<%
+	}
+	else
+	{
+%>
+	<form class="standardform" name="editconnection" action="execute.jsp" method="POST" enctype="multipart/form-data">
+	  <input type="hidden" name="op" value="Continue"/>
+	  <input type="hidden" name="type" value="mapper"/>
+	  <input type="hidden" name="tabname" value='<%=org.apache.manifoldcf.ui.util.Encoder.attributeEscape(tabName)%>'/>
+	  <input type="hidden" name="isnewconnection" value='<%=(isNew?"true":"false")%>'/>
+	    <table class="tabtable">
+	      <tr class="tabrow">
+<%
+	  int tabNum = 0;
+	  while (tabNum < tabsArray.size())
+	  {
+		String tab = (String)tabsArray.get(tabNum++);
+		if (tab.equals(tabName))
+		{
+%>
+		      <td class="activetab"><nobr><%=org.apache.manifoldcf.ui.util.Encoder.bodyEscape(tab)%></nobr></td>
+<%
+		}
+		else
+		{
+%>
+		      <td class="passivetab"><nobr><a href="javascript:void(0);" alt='<%=org.apache.manifoldcf.ui.util.Encoder.attributeEscape(tab)+" "+Messages.getAttributeString(pageContext.getRequest().getLocale(),"editmapper.tab")%>' onclick='<%="javascript:SelectTab(\""+tab+"\");return false;"%>'><%=org.apache.manifoldcf.ui.util.Encoder.bodyEscape(tab)%></a></nobr></td>
+<%
+		}
+	  }
+%>
+		      <td class="remaindertab">
+<%
+	  if (description.length() > 0)
+	  {
+%>
+			  <nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"editmapper.EditMapping")%> '<%=org.apache.manifoldcf.ui.util.Encoder.bodyEscape(description)%>'</nobr>
+<%
+	  }
+	  else
+	  {
+%>
+		          <nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"editmapper.EditAMapping")%></nobr>
+<%
+	  }
+%>
+		      </td>
+	      </tr>
+	      <tr class="tabbodyrow">
+		<td class="tabbody" colspan='<%=Integer.toString(tabsArray.size()+1)%>'>
+
+<%
+
+	  // Name tab
+	  if (tabName.equals(Messages.getString(pageContext.getRequest().getLocale(),"editmapper.Name")))
+	  {
+%>
+		    <table class="displaytable">
+			<tr><td class="separator" colspan="5"><hr/></td></tr>
+			<tr>
+				<td class="description"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"editmapper.NameColon")%></nobr></td>
+				<td class="value" colspan="4">
+<%
+	    // If the connection doesn't exist yet, we are allowed to change the name.
+	    if (isNew)
+	    {
+%>
+					<input type="text" size="32" name="connname" value='<%=org.apache.manifoldcf.ui.util.Encoder.attributeEscape(connectionName)%>'/>
+<%
+	    }
+	    else
+	    {
+%>
+					<%=org.apache.manifoldcf.ui.util.Encoder.bodyEscape(connectionName)%>
+					<input type="hidden" name="connname" value='<%=org.apache.manifoldcf.ui.util.Encoder.attributeEscape(connectionName)%>'/>
+<%
+	    }
+%>
+				</td>
+			</tr>
+			<tr>
+				<td class="description"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"editmapper.DescriptionColon")%></nobr></td>
+				<td class="value" colspan="4">
+					<input type="text" size="50" name="description" value='<%=org.apache.manifoldcf.ui.util.Encoder.attributeEscape(description)%>'/>
+				</td>
+			</tr>
+		    </table>
+<%
+	  }
+	  else
+	  {
+		// Hiddens for the Name tab
+%>
+		    <input type="hidden" name="connname" value='<%=org.apache.manifoldcf.ui.util.Encoder.attributeEscape(connectionName)%>'/>
+		    <input type="hidden" name="description" value='<%=org.apache.manifoldcf.ui.util.Encoder.attributeEscape(description)%>'/>
+<%
+	  }
+
+
+	  // "Type" tab
+	  if (tabName.equals(Messages.getString(pageContext.getRequest().getLocale(),"editmapper.Type")))
+	  {
+%>
+		    <table class="displaytable">
+			<tr><td class="separator" colspan="5"><hr/></td></tr>
+			<tr>
+				<td class="description"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"editmapper.ConnectionTypeColon")%></nobr></td><td class="value" colspan="4">
+<%
+	    if (className.length() > 0)
+	    {
+		String value = connectorManager.getDescription(className);
+		if (value == null)
+		{
+%>
+					<nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"editmapper.UNREGISTERED")%> <%=org.apache.manifoldcf.ui.util.Encoder.bodyEscape(className)%></nobr>
+<%
+		}
+		else
+		{
+%>
+					<%=org.apache.manifoldcf.ui.util.Encoder.bodyEscape(value)%>
+<%
+		}
+%>
+					<input type="hidden" name="classname" value='<%=org.apache.manifoldcf.ui.util.Encoder.attributeEscape(className)%>'/>
+<%
+	    }
+	    else
+	    {
+		int i = 0;
+%>
+					<select name="classname" size="1">
+<%
+		while (i < set.getRowCount())
+		{
+			IResultRow row = set.getRow(i++);
+			String thisClassName = row.getValue("classname").toString();
+			String thisDescription = row.getValue("description").toString();
+%>
+						<option value='<%=org.apache.manifoldcf.ui.util.Encoder.attributeEscape(thisClassName)%>'
+							<%=className.equals(thisClassName)?"selected=\"selected\"":""%>><%=org.apache.manifoldcf.ui.util.Encoder.bodyEscape(thisDescription)%></option>
+<%
+		}
+%>
+					</select>
+<%
+	    }
+%>
+				</td>
+			</tr>
+		    </table>
+<%
+	  }
+	  else
+	  {
+		// Hiddens for the "Type" tab
+%>
+		    <input type="hidden" name="classname" value='<%=org.apache.manifoldcf.ui.util.Encoder.attributeEscape(className)%>'/>
+<%
+	  }
+
+	  // The "Prerequisites" tab
+	  IMappingConnection[] mappingConnections = connMgr.getAllNonLoopingConnections((connection==null)?null:connection.getName());
+	  if (tabName.equals(Messages.getString(pageContext.getRequest().getLocale(),"editmapper.Prerequisites")))
+	  {
+%>
+		    <table class="displaytable">
+			<tr><td class="separator" colspan="5"><hr/></td></tr>
+			<tr>
+				<td class="description"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"editmapper.PrerequisiteUserMappingColon")%></nobr></td>
+				<td class="value" colspan="4">
+					<input type="hidden" name="prerequisites_present" value="true"/>
+<%
+	    if (prereq == null)
+	    {
+%>
+					<input type="radio" name="prerequisites" value="" checked="true"/>&nbsp;<%=Messages.getBodyString(pageContext.getRequest().getLocale(),"editmapper.NoPrerequisites")%><br/>
+<%
+	    }
+	    else
+	    {
+%>
+					<input type="radio" name="prerequisites" value=""/>&nbsp;<%=Messages.getBodyString(pageContext.getRequest().getLocale(),"editmapper.NoPrerequisites")%><br/>
+<%
+	    }
+
+	    for (IMappingConnection mappingConnection : mappingConnections)
+	    {
+		String mappingName = mappingConnection.getName();
+		String mappingDescription = mappingName;
+		if (mappingConnection.getDescription() != null && mappingConnection.getDescription().length() > 0)
+			mappingDescription += " (" + mappingConnection.getDescription()+")";
+		if (prereq != null && prereq.equals(mappingName))
+		{
+%>
+					<input type="radio" name="prerequisites" value='<%=org.apache.manifoldcf.ui.util.Encoder.attributeEscape(mappingName)%>' checked="true"/>&nbsp;<%=org.apache.manifoldcf.ui.util.Encoder.bodyEscape(mappingDescription)%><br/>
+<%
+		}
+		else
+		{
+%>
+					<input type="radio" name="prerequisites" value='<%=org.apache.manifoldcf.ui.util.Encoder.attributeEscape(mappingName)%>'/>&nbsp;<%=org.apache.manifoldcf.ui.util.Encoder.bodyEscape(mappingDescription)%><br/>
+<%
+		}
+	    }
+%>
+				</td>
+			</tr>
+		    </table>
+<%
+	  }
+	  else
+	  {
+		// Hiddens for Prerequisites tab
+%>
+		    <input type="hidden" name="prerequisites_present" value="true"/>
+<%
+		if (prereq != null)
+		{
+%>
+		    <input type="hidden" name="prerequisites" value='<%=org.apache.manifoldcf.ui.util.Encoder.attributeEscape(prereq)%>'/>
+<%
+		}
+	  }
+
+	  // The "Throttling" tab
+	  if (tabName.equals(Messages.getString(pageContext.getRequest().getLocale(),"editmapper.Throttling")))
+	  {
+%>
+		    <table class="displaytable">
+			<tr><td class="separator" colspan="5"><hr/></td></tr>
+			<tr>
+				<td class="description"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"editmapper.MaxConnectionsColon")%></nobr></td>
+				<td class="value" colspan="4"><input type="text" size="6" name="maxconnections" value='<%=Integer.toString(maxConnections)%>'/></td>
+			</tr>
+		    </table>
+<%
+	  }
+	  else
+	  {
+		// Hiddens for "Throttling" tab
+%>
+		    <input type="hidden" name="maxconnections" value='<%=Integer.toString(maxConnections)%>'/>
+<%
+	  }
+
+	  if (className.length() > 0)
+		MappingConnectorFactory.outputConfigurationBody(threadContext,className,new org.apache.manifoldcf.ui.jsp.JspWrapper(out,adminprofile),pageContext.getRequest().getLocale(),parameters,tabName);
+%>
+		    <table class="displaytable">
+			<tr><td class="separator" colspan="4"><hr/></td></tr>
+			<tr><td class="message" colspan="4"><nobr>
+<%
+	  if (className.length() > 0)
+	  {
+%>
+			    <input type="button" value="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"editmapper.Save")%>" onClick="javascript:Save()" alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"editmapper.SaveThisMappingConnection")%>"/>
+<%
+	  }
+	  else
+	  {
+		if (tabName.equals(Messages.getString(pageContext.getRequest().getLocale(),"editmapper.Type")))
+		{
+%>
+			    <input type="button" value="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"editmapper.Continue")%>" onClick="javascript:Continue()" alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"editmapper.ContinueToNextPage")%>"/>
+<%
+		}
+	  }
+%>
+			    &nbsp;<input type="button" value="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"editmapper.Cancel")%>" onClick="javascript:Cancel()" alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"editmapper.CancelMappingEditing")%>"/></nobr></td>
+			</tr>
+		    </table>
+		</td>
+	      </tr>
+	    </table>
+	</form>
+<%
+	}
+%>
+       </td>
+      </tr>
+    </table>
+
+</body>
+
+</html>
+
+<%
+    }
+    catch (ManifoldCFException e)
+    {
+	e.printStackTrace();
+	variableContext.setParameter("text",e.getMessage());
+	variableContext.setParameter("target","listmappers.jsp");
+%>
+	<jsp:forward page="error.jsp"/>
+<%
+    }
+%>
+
diff --git a/framework/crawler-ui/src/main/webapp/editoutput.jsp b/framework/crawler-ui/src/main/webapp/editoutput.jsp
index cadb800..6324d97 100644
--- a/framework/crawler-ui/src/main/webapp/editoutput.jsp
+++ b/framework/crawler-ui/src/main/webapp/editoutput.jsp
@@ -212,7 +212,7 @@
 	//-->
 	</script>
 <%
-	OutputConnectorFactory.outputConfigurationHeader(threadContext,className,new org.apache.manifoldcf.ui.jsp.JspWrapper(out),pageContext.getRequest().getLocale(),parameters,tabsArray);
+	OutputConnectorFactory.outputConfigurationHeader(threadContext,className,new org.apache.manifoldcf.ui.jsp.JspWrapper(out,adminprofile),pageContext.getRequest().getLocale(),parameters,tabsArray);
 %>
 
 </head>
@@ -400,7 +400,7 @@
 		    <table class="displaytable">
 			<tr><td class="separator" colspan="2"><hr/></td></tr>
 			<tr>
-				<td class="description"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"editoutput.MaxConnections")%></nobr><br/><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"editoutput.PerJVMColon")%></nobr></td>
+				<td class="description"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"editoutput.MaxConnectionsColon")%></nobr></td>
 				<td class="value"><input type="text" size="6" name="maxconnections" value='<%=Integer.toString(maxConnections)%>'/></td>
 			</tr>
 		    </table>
@@ -415,7 +415,7 @@
 	  }
 
 	  if (className.length() > 0)
-		OutputConnectorFactory.outputConfigurationBody(threadContext,className,new org.apache.manifoldcf.ui.jsp.JspWrapper(out),pageContext.getRequest().getLocale(),parameters,tabName);
+		OutputConnectorFactory.outputConfigurationBody(threadContext,className,new org.apache.manifoldcf.ui.jsp.JspWrapper(out,adminprofile),pageContext.getRequest().getLocale(),parameters,tabName);
 
 %>
 		    <table class="displaytable">
diff --git a/framework/crawler-ui/src/main/webapp/execute.jsp b/framework/crawler-ui/src/main/webapp/execute.jsp
index 45e72a1..a585487 100644
--- a/framework/crawler-ui/src/main/webapp/execute.jsp
+++ b/framework/crawler-ui/src/main/webapp/execute.jsp
@@ -45,10 +45,15 @@
 		// Make a few things we will need
 		// Get the job manager handle
 		IJobManager manager = JobManagerFactory.make(threadContext);
+		IAuthorityGroupManager authGroupManager = AuthorityGroupManagerFactory.make(threadContext);
 		IRepositoryConnectionManager connManager = RepositoryConnectionManagerFactory.make(threadContext);
 		IAuthorityConnectionManager authConnManager = AuthorityConnectionManagerFactory.make(threadContext);
+		IMappingConnectionManager mappingConnManager = MappingConnectionManagerFactory.make(threadContext);
 		IOutputConnectionManager outputManager = OutputConnectionManagerFactory.make(threadContext);
 		
+		IOutputConnectorPool outputConnectorPool = OutputConnectorPoolFactory.make(threadContext);
+		IRepositoryConnectorPool repositoryConnectorPool = RepositoryConnectorPoolFactory.make(threadContext);
+		
 		String type = variableContext.getParameter("type");
 		String op = variableContext.getParameter("op");
 		if (type != null && op != null && type.equals("connection"))
@@ -201,6 +206,28 @@
 				<jsp:forward page="listconnections.jsp"/>
 <%
 			}
+			else if (op.equals("ClearHistory"))
+			{
+				try
+				{
+					String connectionName = variableContext.getParameter("connname");
+					if (connectionName == null)
+						throw new ManifoldCFException("Missing connection parameter");
+					connManager.cleanUpHistoryData(connectionName);
+%>
+					<jsp:forward page="listconnections.jsp"/>
+<%
+				}
+				catch (ManifoldCFException e)
+				{
+					e.printStackTrace();
+					variableContext.setParameter("text",e.getMessage());
+					variableContext.setParameter("target","listconnections.jsp");
+%>
+					<jsp:forward page="error.jsp"/>
+<%
+				}
+			}
 			else
 			{
 				// Error
@@ -211,6 +238,105 @@
 <%
 			}
 		}
+		else if (type != null && op != null && type.equals("group"))
+		{
+			// -- Group editing operations --
+			if (op.equals("Save") || op.equals("Continue"))
+			{
+				try
+				{
+					// Set up a group object that is a merge of an existing group object plus what was posted.
+					IAuthorityGroup group = null;
+					boolean isNew = true;
+					String x = variableContext.getParameter("isnewconnection");
+					if (x != null)
+						isNew = x.equals("true");
+
+					String groupName = variableContext.getParameter("groupname");
+					// If the groupname is not null, load the group and prepopulate everything with what comes from it.
+					if (groupName != null && groupName.length() > 0 && !isNew)
+					{
+						group = authGroupManager.load(groupName);
+					}
+					
+					if (group == null)
+					{
+						group = authGroupManager.create();
+						if (groupName != null && groupName.length() > 0)
+							group.setName(groupName);
+					}
+
+					// Gather all the data from the form.
+					group.setIsNew(isNew);
+					x = variableContext.getParameter("description");
+					if (x != null)
+						group.setDescription(x);
+
+					if (op.equals("Continue"))
+					{
+						threadContext.save("GroupObject",group);
+%>
+						<jsp:forward page="editgroup.jsp"/>
+<%
+					}
+					else if (op.equals("Save"))
+					{
+						authGroupManager.save(group);
+						variableContext.setParameter("groupname",groupName);
+%>
+						<jsp:forward page="viewgroup.jsp"/>
+<%
+					}
+				}
+				catch (ManifoldCFException e)
+				{
+					e.printStackTrace();
+					variableContext.setParameter("text",e.getMessage());
+					variableContext.setParameter("target","listgroups.jsp");
+%>
+					<jsp:forward page="error.jsp"/>
+<%
+				}
+			}
+			else if (op.equals("Delete"))
+			{
+				try
+				{
+					String groupName = variableContext.getParameter("groupname");
+					if (groupName == null)
+						throw new ManifoldCFException("Missing group name parameter");
+					authGroupManager.delete(groupName);
+%>
+					<jsp:forward page="listgroups.jsp"/>
+<%
+				}
+				catch (ManifoldCFException e)
+				{
+					e.printStackTrace();
+					variableContext.setParameter("text",e.getMessage());
+					variableContext.setParameter("target","listgroups.jsp");
+%>
+					<jsp:forward page="error.jsp"/>
+<%
+				}
+			}
+			else if (op.equals("Cancel"))
+			{
+%>
+				<jsp:forward page="listgroups.jsp"/>
+<%
+			}
+			else
+			{
+				// Error
+				variableContext.setParameter("text","Illegal parameter to authority group execution page");
+				variableContext.setParameter("target","listgroups.jsp");
+%>
+				<jsp:forward page="error.jsp"/>
+<%
+			}
+
+		}
 		else if (type != null && op != null && type.equals("authority"))
 		{
 			// -- Authority editing operations --
@@ -248,8 +374,22 @@
 					if (x != null)
 						connection.setClassName(x);
 					x = variableContext.getParameter("maxconnections");
-					if (x != null && x.length() > 0)
+					if (x != null)
 						connection.setMaxConnections(Integer.parseInt(x));
+					x = variableContext.getParameter("prerequisites_present");
+					if (x != null && x.equals("true"))
+					{
+						String y = variableContext.getParameter("prerequisites");
+						if (y != null && y.length() == 0)
+							y = null;
+						connection.setPrerequisiteMapping(y);
+					}
+					x = variableContext.getParameter("authdomain");
+					if (x != null)
+						connection.setAuthDomain(x);
+					x = variableContext.getParameter("authoritygroup");
+					if (x != null)
+						connection.setAuthGroup(x);
 
 					String error = AuthorityConnectorFactory.processConfigurationPost(threadContext,connection.getClassName(),variableContext,pageContext.getRequest().getLocale(),connection.getConfigParams());
 					
@@ -326,6 +466,129 @@
 <%
 			}
 		}
+		else if (type != null && op != null && type.equals("mapper"))
+		{
+			// -- Mapping editing operations --
+			if (op.equals("Save") || op.equals("Continue"))
+			{
+				try
+				{
+					// Set up a connection object that is a merge of an existing connection object plus what was posted.
+					IMappingConnection connection = null;
+					boolean isNew = true;
+					String x = variableContext.getParameter("isnewconnection");
+					if (x != null)
+						isNew = x.equals("true");
+
+					String connectionName = variableContext.getParameter("connname");
+					// If the connectionname is not null, load the connection description and prepopulate everything with what comes from it.
+					if (connectionName != null && connectionName.length() > 0 && !isNew)
+					{
+						connection = mappingConnManager.load(connectionName);
+					}
+					
+					if (connection == null)
+					{
+						connection = mappingConnManager.create();
+						if (connectionName != null && connectionName.length() > 0)
+							connection.setName(connectionName);
+					}
+
+					// Gather all the data from the form.
+					connection.setIsNew(isNew);
+					x = variableContext.getParameter("description");
+					if (x != null)
+						connection.setDescription(x);
+					x = variableContext.getParameter("classname");
+					if (x != null)
+						connection.setClassName(x);
+					x = variableContext.getParameter("maxconnections");
+					if (x != null && x.length() > 0)
+						connection.setMaxConnections(Integer.parseInt(x));
+					x = variableContext.getParameter("prerequisites_present");
+					if (x != null && x.equals("true"))
+					{
+						String y = variableContext.getParameter("prerequisites");
+						if (y != null && y.length() == 0)
+							y = null;
+						connection.setPrerequisiteMapping(y);
+					}
+
+					String error = MappingConnectorFactory.processConfigurationPost(threadContext,connection.getClassName(),variableContext,pageContext.getRequest().getLocale(),connection.getConfigParams());
+					
+					if (error != null)
+					{
+						variableContext.setParameter("text",error);
+						variableContext.setParameter("target","listmappers.jsp");
+%>
+						<jsp:forward page="error.jsp"/>
+<%
+					}
+					
+					if (op.equals("Continue"))
+					{
+						threadContext.save("ConnectionObject",connection);
+%>
+						<jsp:forward page="editmapper.jsp"/>
+<%
+					}
+					else if (op.equals("Save"))
+					{
+						mappingConnManager.save(connection);
+						variableContext.setParameter("connname",connectionName);
+%>
+						<jsp:forward page="viewmapper.jsp"/>
+<%
+					}
+				}
+				catch (ManifoldCFException e)
+				{
+					e.printStackTrace();
+					variableContext.setParameter("text",e.getMessage());
+					variableContext.setParameter("target","listmappers.jsp");
+%>
+					<jsp:forward page="error.jsp"/>
+<%
+				}
+			}
+			else if (op.equals("Delete"))
+			{
+				try
+				{
+					String connectionName = variableContext.getParameter("connname");
+					if (connectionName == null)
+						throw new ManifoldCFException("Missing connection parameter");
+					mappingConnManager.delete(connectionName);
+%>
+					<jsp:forward page="listmappers.jsp"/>
+<%
+				}
+				catch (ManifoldCFException e)
+				{
+					e.printStackTrace();
+					variableContext.setParameter("text",e.getMessage());
+					variableContext.setParameter("target","listmappers.jsp");
+%>
+					<jsp:forward page="error.jsp"/>
+<%
+				}
+			}
+			else if (op.equals("Cancel"))
+			{
+%>
+				<jsp:forward page="listmappers.jsp"/>
+<%
+			}
+			else
+			{
+				// Error
+				variableContext.setParameter("text","Illegal parameter to mapping execution page");
+				variableContext.setParameter("target","listmappers.jsp");
+%>
+				<jsp:forward page="error.jsp"/>
+<%
+			}
+		}
 		else if (type != null && op != null && type.equals("output"))
 		{
 			// -- Output connection editing operations --
@@ -453,6 +716,28 @@
 <%
 				}
 			}
+			else if (op.equals("RemoveAll"))
+			{
+				try
+				{
+					String connectionName = variableContext.getParameter("connname");
+					if (connectionName == null)
+						throw new ManifoldCFException("Missing connection parameter");
+					org.apache.manifoldcf.agents.system.ManifoldCF.signalOutputConnectionRemoved(threadContext,connectionName);
+%>
+					<jsp:forward page="listoutputs.jsp"/>
+<%
+				}
+				catch (ManifoldCFException e)
+				{
+					e.printStackTrace();
+					variableContext.setParameter("text",e.getMessage());
+					variableContext.setParameter("target","listoutputs.jsp");
+%>
+					<jsp:forward page="error.jsp"/>
+<%
+				}
+			}
 			else
 			{
 				// Error
@@ -768,8 +1053,7 @@
 					
 					if (outputPresent && outputConnection != null)
 					{
-						IOutputConnector outputConnector = OutputConnectorFactory.grab(threadContext,
-							outputConnection.getClassName(),outputConnection.getConfigParams(),outputConnection.getMaxConnections());
+						IOutputConnector outputConnector = outputConnectorPool.grab(outputConnection);
 						if (outputConnector != null)
 						{
 							try
@@ -786,15 +1070,14 @@
 							}
 							finally
 							{
-								OutputConnectorFactory.release(outputConnector);
+								outputConnectorPool.release(outputConnection,outputConnector);
 							}
 						}
 					}
 					
 					if (connectionPresent && connection != null)
 					{
-						IRepositoryConnector repositoryConnector = RepositoryConnectorFactory.grab(threadContext,
-							connection.getClassName(),connection.getConfigParams(),connection.getMaxConnections());
+						IRepositoryConnector repositoryConnector = repositoryConnectorPool.grab(connection);
 						if (repositoryConnector != null)
 						{
 							try
@@ -811,7 +1094,7 @@
 							}
 							finally
 							{
-								RepositoryConnectorFactory.release(repositoryConnector);
+								repositoryConnectorPool.release(connection,repositoryConnector);
 							}
 						}
 					}
diff --git a/framework/crawler-ui/src/main/webapp/index.jsp b/framework/crawler-ui/src/main/webapp/index.jsp
index 6ad4738..ef90c85 100644
--- a/framework/crawler-ui/src/main/webapp/index.jsp
+++ b/framework/crawler-ui/src/main/webapp/index.jsp
@@ -29,7 +29,7 @@
 	<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
 	<link rel="StyleSheet" href="style.css" type="text/css" media="screen"/>
 	<title>
-		Apache ManifoldCF
+		<%=Messages.getBodyString(pageContext.getRequest().getLocale(),"index.ApacheManifoldCF")%>
 	</title>
 
 </head>
@@ -39,7 +39,7 @@
       <tr><td colspan="2" class="banner"><jsp:include page="banner.jsp" flush="true"/></td></tr>
       <tr><td class="navigation"><jsp:include page="navigation.jsp" flush="true"/></td>
        <td class="window">
-	<p class="windowtitle"><%=Messages.getString(pageContext.getRequest().getLocale(),"index.WelcomeToApacheManifoldFC")%></p>
+	<p class="windowtitle"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"index.WelcomeToApacheManifoldFC")%></p>
        </td>
       </tr>
     </table>
diff --git a/framework/crawler-ui/src/main/webapp/listauthorities.jsp b/framework/crawler-ui/src/main/webapp/listauthorities.jsp
index 744720f..b7ead68 100644
--- a/framework/crawler-ui/src/main/webapp/listauthorities.jsp
+++ b/framework/crawler-ui/src/main/webapp/listauthorities.jsp
@@ -38,7 +38,7 @@
 
 	function Delete(connectionName)
 	{
-		if (confirm("Delete authority '"+connectionName+"'?"))
+		if (confirm("<%=Messages.getBodyJavascriptString(pageContext.getRequest().getLocale(),"listauthorities.DeleteAuthority")%> '"+connectionName+"'?"))
 		{
 			document.listconnections.op.value="Delete";
 			document.listconnections.connname.value=connectionName;
diff --git a/framework/crawler-ui/src/main/webapp/listconnections.jsp b/framework/crawler-ui/src/main/webapp/listconnections.jsp
index 0738c92..67a2d12 100644
--- a/framework/crawler-ui/src/main/webapp/listconnections.jsp
+++ b/framework/crawler-ui/src/main/webapp/listconnections.jsp
@@ -75,7 +75,11 @@
 				<td class="separator" colspan="6"><hr/></td>
 			</tr>
 			<tr class="headerrow">
-				<td class="columnheader"></td><td class="columnheader"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"listconnections.Name")%></td><td class="columnheader"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"listconnections.Description")%></td><td class="columnheader"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"listconnections.ConnectionType")%></td><td class="columnheader"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"listconnections.Authority")%></td>
+				<td class="columnheader"></td>
+				<td class="columnheader"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"listconnections.Name")%></td>
+				<td class="columnheader"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"listconnections.Description")%></td>
+				<td class="columnheader"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"listconnections.ConnectionType")%></td>
+				<td class="columnheader"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"listconnections.AuthorityGroup")%></td>
 				<td class="columnheader"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"listconnections.Max")%></td>
 			</tr>
 <%
@@ -91,7 +95,7 @@
 		String className = connection.getClassName();
 		String connectorName = connectorManager.getDescription(className);
 		if (connectorName == null)
-			connectorName = className + "(uninstalled)";
+			connectorName = className + Messages.getString(pageContext.getRequest().getLocale(),"listconnections.uninstalled");
 		String authorityName = connection.getACLAuthority();
 		int maxCount = connection.getMaxConnections();
 
diff --git a/framework/crawler-ui/src/main/webapp/listgroups.jsp b/framework/crawler-ui/src/main/webapp/listgroups.jsp
new file mode 100644
index 0000000..df8d185
--- /dev/null
+++ b/framework/crawler-ui/src/main/webapp/listgroups.jsp
@@ -0,0 +1,129 @@
+<%@ include file="adminHeaders.jsp" %>
+
+<%
+
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+* 
+* http://www.apache.org/licenses/LICENSE-2.0
+* 
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+%>
+
+<?xml version="1.0" encoding="utf-8"?>
+
+<html xmlns="http://www.w3.org/1999/xhtml">
+<head>
+	<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
+	<link rel="StyleSheet" href="style.css" type="text/css" media="screen"/>
+
+	<title>
+		<%=Messages.getBodyString(pageContext.getRequest().getLocale(),"listgroups.ApacheManifoldCFListAuthorityGroups")%>
+	</title>
+
+	<script type="text/javascript">
+	<!--
+
+	function Delete(groupName)
+	{
+		if (confirm("<%=Messages.getBodyJavascriptString(pageContext.getRequest().getLocale(),"listgroups.DeleteAuthorityGroup")%> '"+groupName+"'?"))
+		{
+			document.listgroups.op.value="Delete";
+			document.listgroups.groupname.value=groupName;
+			document.listgroups.submit();
+		}
+	}
+
+	//-->
+	</script>
+
+</head>
+
+<body class="standardbody">
+
+    <table class="page">
+      <tr><td colspan="2" class="banner"><jsp:include page="banner.jsp" flush="true"/></td></tr>
+      <tr><td class="navigation"><jsp:include page="navigation.jsp" flush="true"/></td>
+       <td class="window">
+	<p class="windowtitle"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"listgroups.ListOfAuthorityGroups")%></p>
+	<form class="standardform" name="listgroups" action="execute.jsp" method="POST">
+		<input type="hidden" name="op" value="Continue"/>
+		<input type="hidden" name="type" value="group"/>
+		<input type="hidden" name="groupname" value=""/>
+
+<%
+    try
+    {
+	// Get the authority group manager handle
+	IAuthorityGroupManager manager = AuthorityGroupManagerFactory.make(threadContext);
+	IAuthorityGroup[] groups = manager.getAllGroups();
+%>
+		<table class="datatable">
+			<tr>
+				<td class="separator" colspan="5"><hr/></td>
+			</tr>
+			<tr class="headerrow">
+				<td class="columnheader"></td>
+				<td class="columnheader"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"listgroups.Name")%></td>
+				<td class="columnheader"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"listgroups.Description")%></td>
+			</tr>
+<%
+	int i = 0;
+	while (i < groups.length)
+	{
+		IAuthorityGroup group = groups[i++];
+
+		String name = group.getName();
+		String description = group.getDescription();
+		if (description == null)
+			description = "";
+
+%>
+		<tr <%="class=\""+((i%2==0)?"evendatarow":"odddatarow")+"\""%>>
+			<td class="columncell">
+				<a href='<%="viewgroup.jsp?groupname="+java.net.URLEncoder.encode(name,"UTF-8")%>' alt='<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"listgroups.View")+" "+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(name)%>'><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"listgroups.View")%></a>&nbsp;<a href='<%="editgroup.jsp?groupname="+java.net.URLEncoder.encode(name,"UTF-8")%>' alt='<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"listgroups.Edit")+" "+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(name)%>'><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"listgroups.Edit")%></a>&nbsp;<a href="javascript:void()" onclick='<%="javascript:Delete(\""+org.apache.manifoldcf.ui.util.Encoder.attributeJavascriptEscape(name)+"\")"%>' alt='<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"listgroups.Delete")+" "+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(name)%>'><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"listgroups.Delete")%></a>
+			</td>
+			<td class="columncell"><%=org.apache.manifoldcf.ui.util.Encoder.bodyEscape(name)%></td>
+			<td class="columncell"><%=org.apache.manifoldcf.ui.util.Encoder.bodyEscape(description)%></td>
+		</tr>
+<%
+	}
+%>
+			<tr>
+				<td class="separator" colspan="5"><hr/></td>
+			</tr>
+			<tr><td class="message" colspan="5"><a href="editgroup.jsp" alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"listgroups.AddNewGroup")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"listgroups.AddaNewGroup")%></a></td></tr>
+		</table>
+
+<%
+    }
+    catch (ManifoldCFException e)
+    {
+	e.printStackTrace();
+	variableContext.setParameter("text",e.getMessage());
+	variableContext.setParameter("target","index.jsp");
+%>
+	<jsp:forward page="error.jsp"/>
+<%
+    }
+%>
+	    </form>
+       </td>
+      </tr>
+    </table>
+
+</body>
+
+</html>
diff --git a/framework/crawler-ui/src/main/webapp/listmappers.jsp b/framework/crawler-ui/src/main/webapp/listmappers.jsp
new file mode 100644
index 0000000..e2ff958
--- /dev/null
+++ b/framework/crawler-ui/src/main/webapp/listmappers.jsp
@@ -0,0 +1,139 @@
+<%@ include file="adminHeaders.jsp" %>
+
+<%
+
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+* 
+* http://www.apache.org/licenses/LICENSE-2.0
+* 
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+%>
+
+<?xml version="1.0" encoding="utf-8"?>
+
+<html xmlns="http://www.w3.org/1999/xhtml">
+<head>
+	<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
+	<link rel="StyleSheet" href="style.css" type="text/css" media="screen"/>
+
+	<title>
+		<%=Messages.getBodyString(pageContext.getRequest().getLocale(),"listmappers.ApacheManifoldCFListMappers")%>
+	</title>
+
+	<script type="text/javascript">
+	<!--
+
+	function Delete(connectionName)
+	{
+		if (confirm("<%=Messages.getBodyJavascriptString(pageContext.getRequest().getLocale(),"listmappers.DeleteMapper")%> '"+connectionName+"'?"))
+		{
+			document.listconnections.op.value="Delete";
+			document.listconnections.connname.value=connectionName;
+			document.listconnections.submit();
+		}
+	}
+
+	//-->
+	</script>
+
+</head>
+
+<body class="standardbody">
+
+    <table class="page">
+      <tr><td colspan="2" class="banner"><jsp:include page="banner.jsp" flush="true"/></td></tr>
+      <tr><td class="navigation"><jsp:include page="navigation.jsp" flush="true"/></td>
+       <td class="window">
+	<p class="windowtitle"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"listmappers.ListOfMappingConnections")%></p>
+	<form class="standardform" name="listconnections" action="execute.jsp" method="POST">
+		<input type="hidden" name="op" value="Continue"/>
+		<input type="hidden" name="type" value="mapper"/>
+		<input type="hidden" name="connname" value=""/>
+
+<%
+    try
+    {
+	// Get the mapping connection manager handle
+	IMappingConnectionManager manager = MappingConnectionManagerFactory.make(threadContext);
+	IMappingConnectorManager connectorManager = MappingConnectorManagerFactory.make(threadContext);
+	IMappingConnection[] connections = manager.getAllConnections();
+%>
+		<table class="datatable">
+			<tr>
+				<td class="separator" colspan="5"><hr/></td>
+			</tr>
+			<tr class="headerrow">
+				<td class="columnheader"></td>
+				<td class="columnheader"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"listmappers.Name")%></td>
+				<td class="columnheader"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"listmappers.Description")%></td>
+				<td class="columnheader"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"listmappers.MapperType")%></td>
+				<td class="columnheader"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"listmappers.Max")%></td>
+			</tr>
+<%
+	int i = 0;
+	while (i < connections.length)
+	{
+		IMappingConnection connection = connections[i++];
+
+		String name = connection.getName();
+		String description = connection.getDescription();
+		if (description == null)
+			description = "";
+		String className = connection.getClassName();
+		int maxCount = connection.getMaxConnections();
+		String connectorName = connectorManager.getDescription(className);
+		if (connectorName == null)
+			connectorName = className + Messages.getString(pageContext.getRequest().getLocale(),"listmappers.uninstalled");
+
+%>
+		<tr <%="class=\""+((i%2==0)?"evendatarow":"odddatarow")+"\""%>>
+			<td class="columncell">
+				<a href='<%="viewmapper.jsp?connname="+java.net.URLEncoder.encode(name,"UTF-8")%>' alt='<%="View "+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(name)%>'><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"listmappers.View")%></a>&nbsp;<a href='<%="editmapper.jsp?connname="+java.net.URLEncoder.encode(name,"UTF-8")%>' alt='<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"listmappers.Edit")+" "+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(name)%>'><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"listmappers.Edit")%></a>&nbsp;<a href="javascript:void()" onclick='<%="javascript:Delete(\""+org.apache.manifoldcf.ui.util.Encoder.attributeJavascriptEscape(name)+"\")"%>' alt='<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"listmappers.Delete")+" "+org.apache.manifoldcf.ui.util.Encoder.attributeEscape(name)%>'><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"listmappers.Delete")%></a>
+			</td>
+			<td class="columncell"><%=org.apache.manifoldcf.ui.util.Encoder.bodyEscape(name)%></td>
+			<td class="columncell"><%=org.apache.manifoldcf.ui.util.Encoder.bodyEscape(description)%></td>
+			<td class="columncell"><%=org.apache.manifoldcf.ui.util.Encoder.bodyEscape(connectorName)%></td>
+			<td class="columncell"><%=Integer.toString(maxCount)%></td>
+		</tr>
+<%
+	}
+%>
+			<tr>
+				<td class="separator" colspan="5"><hr/></td>
+			</tr>
+			<tr><td class="message" colspan="5"><a href="editmapper.jsp" alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"listmappers.AddNewConnection")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"listmappers.AddaNewConnection")%></a></td></tr>
+		</table>
+
+<%
+    }
+    catch (ManifoldCFException e)
+    {
+	e.printStackTrace();
+	variableContext.setParameter("text",e.getMessage());
+	variableContext.setParameter("target","index.jsp");
+%>
+	<jsp:forward page="error.jsp"/>
+<%
+    }
+%>
+	    </form>
+       </td>
+      </tr>
+    </table>
+
+</body>
+
+</html>
diff --git a/framework/crawler-ui/src/main/webapp/listoutputs.jsp b/framework/crawler-ui/src/main/webapp/listoutputs.jsp
index 85654e1..3ed520d 100644
--- a/framework/crawler-ui/src/main/webapp/listoutputs.jsp
+++ b/framework/crawler-ui/src/main/webapp/listoutputs.jsp
@@ -94,7 +94,7 @@
 		String className = connection.getClassName();
 		String connectorName = connectorManager.getDescription(className);
 		if (connectorName == null)
-			connectorName = className + "(uninstalled)";
+			connectorName = className + Messages.getString(pageContext.getRequest().getLocale(),"listoutputs.uninstalled");
 		int maxCount = connection.getMaxConnections();
 
 %>
diff --git a/framework/crawler-ui/src/main/webapp/login.jsp b/framework/crawler-ui/src/main/webapp/login.jsp
new file mode 100644
index 0000000..89eaf7f
--- /dev/null
+++ b/framework/crawler-ui/src/main/webapp/login.jsp
@@ -0,0 +1,89 @@
+<% response.setHeader("Pragma","No-cache");
+response.setDateHeader("Expires",0);
+response.setHeader("Cache-Control", "no-cache");
+response.setDateHeader("max-age", 0);
+response.setContentType("text/html;charset=utf-8");
+%><%@ include file="adminDefaults.jsp" %>
+
+<%
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+* 
+* http://www.apache.org/licenses/LICENSE-2.0
+* 
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+%>
+<?xml version="1.0" encoding="utf-8"?>
+
+<html xmlns="http://www.w3.org/1999/xhtml">
+	<head>
+		<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
+		<link rel="StyleSheet" href="style.css" type="text/css" media="screen"/>
+		<title>
+			<%=Messages.getBodyString(pageContext.getRequest().getLocale(),"index.ApacheManifoldCFLogin")%>
+		</title>
+		<script type="text/javascript">
+			<!--
+			function login()
+			{
+				document.loginform.submit();
+			}
+			//-->
+		</script>
+	</head>
+	<body class="standardbody">
+		<table class="page">
+			<tr><td colspan="2" class="banner"><jsp:include page="banner.jsp" flush="true"/></td></tr>
+			<tr>
+				<td colspan="2" class="window">
+
+					<form class="standardform" name="loginform" action="setupAdminProfile.jsp" method="POST">
+						<table class="displaytable">
+<%
+String value = variableContext.getParameter("loginfailed");
+if (value != null && value.equals("true"))
+{
+%>
+							<tr><td class="message" colspan="2"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"index.LoginFailed")%></td></tr>
+							<tr><td class="separator" colspan="2"><hr/></td></tr>
+<%
+}
+%>
+							<tr>
+								<td class="description"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"index.UserIDColon")%></td>
+								<td class="value">
+									<input name="userID" type="text" size="32" value=""/>
+								</td>
+							</tr>
+							<tr>
+								<td class="description"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"index.PasswordColon")%></td>
+								<td class="value">
+									<input name="password" type="password" size="32" value=""/>
+								</td>
+							</tr>
+							<tr><td class="separator" colspan="2"><hr/></td></tr>
+							<tr>
+								<td class="message" colspan="2">
+									<input type="button" onclick='Javascript:login();' value='<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"index.Login")%>' alt='<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"index.Login")%>'/>
+								</td>
+							</tr>
+						</table>
+					</form>
+				</td>
+			</tr>
+		</table>
+	</body>
+</html>
+
diff --git a/framework/crawler-ui/src/main/webapp/logout.jsp b/framework/crawler-ui/src/main/webapp/logout.jsp
new file mode 100644
index 0000000..99f42ec
--- /dev/null
+++ b/framework/crawler-ui/src/main/webapp/logout.jsp
@@ -0,0 +1,33 @@
+<% response.setHeader("Pragma","No-cache");
+response.setDateHeader("Expires",0);
+response.setHeader("Cache-Control", "no-cache");
+response.setDateHeader("max-age", 0);
+response.setContentType("text/html;charset=utf-8");
+%><%@ include file="adminDefaults.jsp" %>
+
+<%
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+* 
+* http://www.apache.org/licenses/LICENSE-2.0
+* 
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+%>
+
+<%
+adminprofile.logout();
+response.sendRedirect("login.jsp");
+%>
+
diff --git a/framework/crawler-ui/src/main/webapp/maxactivityreport.jsp b/framework/crawler-ui/src/main/webapp/maxactivityreport.jsp
index 53ecc29..cf26450 100644
--- a/framework/crawler-ui/src/main/webapp/maxactivityreport.jsp
+++ b/framework/crawler-ui/src/main/webapp/maxactivityreport.jsp
@@ -768,52 +768,50 @@
 <%
 		}
 %>
-		<table class="displaytable">
-		    <tr>
-			<td class="separator" colspan="5"><hr/></td>
-		    </tr>
-		    <tr>
-			<td class="value">
+		<table class="reportfootertable">
+		    <tr class="reportfooterrow">
+			<td class="reportfootercell">
 				<nobr>
 <%
 		if (startRow == 0)
 		{
 %>
-				<%=Messages.getBodyString(pageContext.getRequest().getLocale(),"maxactivityreport.Previous")%>
+					<%=Messages.getBodyString(pageContext.getRequest().getLocale(),"maxactivityreport.Previous")%>
 <%
 		}
 		else
 		{
 %>
-				<a href="javascript:void(0);" onclick='<%="javascript:SetPosition("+Integer.toString(startRow-rowCount)+");"%>' alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"maxactivityreport.PreviousPage")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"maxactivityreport.Previous")%></a>
+					<a href="javascript:void(0);" onclick='<%="javascript:SetPosition("+Integer.toString(startRow-rowCount)+");"%>' alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"maxactivityreport.PreviousPage")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"maxactivityreport.Previous")%></a>
 <%
 		}
 %>
-				&nbsp;
+				</nobr>
+				<nobr>
 <%
 		if (hasMoreRows == false)
 		{
 %>
-				Next
+					<%=Messages.getBodyString(pageContext.getRequest().getLocale(),"maxactivityreport.Next")%>
 <%
 		}
 		else
 		{
 %>
-				<a href="javascript:void(0);" onclick='<%="javascript:SetPosition("+Integer.toString(startRow+rowCount)+");"%>' alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"maxactivityreport.NextPage")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"maxactivityreport.Next")%></a>
+					<a href="javascript:void(0);" onclick='<%="javascript:SetPosition("+Integer.toString(startRow+rowCount)+");"%>' alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"maxactivityreport.NextPage")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"maxactivityreport.Next")%></a>
 <%
 		}
 %>
 				</nobr>
 			</td>
-			<td class="description"><nobr>Rows:</nobr></td><td class="value"><%=Integer.toString(startRow)%>-<%=(hasMoreRows?Integer.toString(startRow+rowCount-1):"END")%></td>
-			<td class="description"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"maxactivityreport.RowsPerPage")%></nobr></td>
-			<td class="value">
-				<input type="text" name="rowcount" size="5" value='<%=Integer.toString(rowCount)%>'/>
+			<td class="reportfootercell">
+				<nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"maxactivityreport.Rows")%></nobr>
+				<nobr><%=Integer.toString(startRow)%>-<%=(hasMoreRows?Integer.toString(startRow+rowCount-1):"END")%></nobr>
 			</td>
-		    </tr>
-		    <tr>
-			<td class="separator" colspan="5"><hr/></td>
+			<td class="reportfootercell">
+				<nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"maxactivityreport.RowsPerPage")%></nobr>
+				<nobr><input type="text" name="rowcount" size="5" value='<%=Integer.toString(rowCount)%>'/></nobr>
+			</td>
 		    </tr>
 		</table>
 
diff --git a/framework/crawler-ui/src/main/webapp/maxbandwidthreport.jsp b/framework/crawler-ui/src/main/webapp/maxbandwidthreport.jsp
index b37a23a..c32153a 100644
--- a/framework/crawler-ui/src/main/webapp/maxbandwidthreport.jsp
+++ b/framework/crawler-ui/src/main/webapp/maxbandwidthreport.jsp
@@ -767,52 +767,50 @@
 <%
 		}
 %>
-		<table class="displaytable">
-		    <tr>
-			<td class="separator" colspan="5"><hr/></td>
-		    </tr>
-		    <tr>
-			<td class="value">
+		<table class="reportfootertable">
+		    <tr class="reportfooterrow">
+			<td class="reportfootercell">
 				<nobr>
 <%
 		if (startRow == 0)
 		{
 %>
-				<%=Messages.getBodyString(pageContext.getRequest().getLocale(),"maxbandwidthreport.Previous")%>
+					<%=Messages.getBodyString(pageContext.getRequest().getLocale(),"maxbandwidthreport.Previous")%>
 <%
 		}
 		else
 		{
 %>
-				<a href="javascript:void(0);" onclick='<%="javascript:SetPosition("+Integer.toString(startRow-rowCount)+");"%>' alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"maxbandwidthreport.PreviousPage")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"maxbandwidthreport.Previous")%></a>
+					<a href="javascript:void(0);" onclick='<%="javascript:SetPosition("+Integer.toString(startRow-rowCount)+");"%>' alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"maxbandwidthreport.PreviousPage")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"maxbandwidthreport.Previous")%></a>
 <%
 		}
 %>
-				&nbsp;
+				</nobr>
+				<nobr>
 <%
 		if (hasMoreRows == false)
 		{
 %>
-				<%=Messages.getBodyString(pageContext.getRequest().getLocale(),"maxbandwidthreport.Next")%>
+					<%=Messages.getBodyString(pageContext.getRequest().getLocale(),"maxbandwidthreport.Next")%>
 <%
 		}
 		else
 		{
 %>
-				<a href="javascript:void(0);" onclick='<%="javascript:SetPosition("+Integer.toString(startRow+rowCount)+");"%>' alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"maxbandwidthreport.NextPage")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"maxbandwidthreport.Next")%></a>
+					<a href="javascript:void(0);" onclick='<%="javascript:SetPosition("+Integer.toString(startRow+rowCount)+");"%>' alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"maxbandwidthreport.NextPage")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"maxbandwidthreport.Next")%></a>
 <%
 		}
 %>
 				</nobr>
 			</td>
-			<td class="description"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"maxbandwidthreport.Rows")%></nobr></td><td class="value"><%=Integer.toString(startRow)%>-<%=(hasMoreRows?Integer.toString(startRow+rowCount-1):"END")%></td>
-			<td class="description"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"maxbandwidthreport.RowsPerPage")%></nobr></td>
-			<td class="value">
-				<input type="text" name="rowcount" size="5" value='<%=Integer.toString(rowCount)%>'/>
+			<td class="reportfootercell">
+				<nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"maxbandwidthreport.Rows")%></nobr>
+				<nobr><%=Integer.toString(startRow)%>-<%=(hasMoreRows?Integer.toString(startRow+rowCount-1):"END")%></nobr>
 			</td>
-		    </tr>
-		    <tr>
-			<td class="separator" colspan="5"><hr/></td>
+			<td class="reportfootercell">
+				<nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"maxbandwidthreport.RowsPerPage")%></nobr>
+				<nobr><input type="text" name="rowcount" size="5" value='<%=Integer.toString(rowCount)%>'/></nobr>
+			</td>
 		    </tr>
 		</table>
 
diff --git a/framework/crawler-ui/src/main/webapp/navigation.jsp b/framework/crawler-ui/src/main/webapp/navigation.jsp
index 2e829cb..e18a315 100644
--- a/framework/crawler-ui/src/main/webapp/navigation.jsp
+++ b/framework/crawler-ui/src/main/webapp/navigation.jsp
@@ -37,6 +37,12 @@
 <p class="menumain"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"navigation.Authorities")%></nobr></p>
 <ul class="menusecond">
 	<li class="menuitem">
+		<nobr><a class="menulink" href="listgroups.jsp" alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"navigation.Listauthoritygroups")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"navigation.ListAuthorityGroups")%></a></nobr>
+	</li>
+	<li class="menuitem">
+		<nobr><a class="menulink" href="listmappers.jsp" alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"navigation.Listusermappings")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"navigation.ListUserMappings")%></a></nobr>
+	</li>
+	<li class="menuitem">
 		<nobr><a class="menulink" href="listauthorities.jsp" alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"navigation.Listauthorities")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"navigation.ListAuthorityConnections")%></a></nobr>
 	</li>
 </ul>
@@ -84,4 +90,7 @@
 	<li class="menuitem">
 		<nobr><a class="menulink" href='<%="http://manifoldcf.apache.org/release/trunk/"+Messages.getBodyString(pageContext.getRequest().getLocale(),"navigation.Locale")+"/end-user-documentation.html"%>' alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"navigation.Help")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"navigation.Help")%></a></nobr>
 	</li>
+	<li class="menuitem">
+		<nobr><a class="menulink" href="logout.jsp" alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"navigation.LogOut")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"navigation.LogOut")%></a></nobr>
+	</li>
 </ul>
diff --git a/framework/crawler-ui/src/main/webapp/queuestatus.jsp b/framework/crawler-ui/src/main/webapp/queuestatus.jsp
index 61bd31e..3c931e1 100644
--- a/framework/crawler-ui/src/main/webapp/queuestatus.jsp
+++ b/framework/crawler-ui/src/main/webapp/queuestatus.jsp
@@ -543,52 +543,50 @@
 		}
 %>
 		</table>
-		<table class="displaytable">
-		    <tr>
-			<td class="separator" colspan="5"><hr/></td>
-		    </tr>
-		    <tr>
-			<td class="value">
+		<table class="reportfootertable">
+		    <tr class="reportfooterrow">
+			<td class="reportfootercell">
 				<nobr>
 <%
 		if (startRow == 0)
 		{
 %>
-				<%=Messages.getBodyString(pageContext.getRequest().getLocale(),"queuestatus.Previous")%>
+					<%=Messages.getBodyString(pageContext.getRequest().getLocale(),"queuestatus.Previous")%>
 <%
 		}
 		else
 		{
 %>
-				<a href="javascript:void(0);" onclick='<%="javascript:SetPosition("+Integer.toString(startRow-rowCount)+");"%>' alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"queuestatus.PreviousPage")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"queuestatus.Previous")%></a>
+					<a href="javascript:void(0);" onclick='<%="javascript:SetPosition("+Integer.toString(startRow-rowCount)+");"%>' alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"queuestatus.PreviousPage")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"queuestatus.Previous")%></a>
 <%
 		}
 %>
-				&nbsp;
+				</nobr>
+				<nobr>
 <%
 		if (hasMoreRows == false)
 		{
 %>
-				<%=Messages.getBodyString(pageContext.getRequest().getLocale(),"queuestatus.Next")%>
+					<%=Messages.getBodyString(pageContext.getRequest().getLocale(),"queuestatus.Next")%>
 <%
 		}
 		else
 		{
 %>
-				<a href="javascript:void(0);" onclick='<%="javascript:SetPosition("+Integer.toString(startRow+rowCount)+");"%>' alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"queuestatus.NextPage")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"queuestatus.Next")%></a>
+					<a href="javascript:void(0);" onclick='<%="javascript:SetPosition("+Integer.toString(startRow+rowCount)+");"%>' alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"queuestatus.NextPage")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"queuestatus.Next")%></a>
 <%
 		}
 %>
 				</nobr>
 			</td>
-			<td class="description"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"queuestatus.Rows")%></nobr></td><td class="value"><%=Integer.toString(startRow)%>-<%=(hasMoreRows?Integer.toString(startRow+rowCount-1):"END")%></td>
-			<td class="description"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"queuestatus.RowsPerPage")%></nobr></td>
-			<td class="value">
-				<input type="text" name="rowcount" size="5" value='<%=Integer.toString(rowCount)%>'/>
+			<td class="reportfootercell">
+				<nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"queuestatus.Rows")%></nobr>
+				<nobr><%=Integer.toString(startRow)%>-<%=(hasMoreRows?Integer.toString(startRow+rowCount-1):"END")%></nobr>
 			</td>
-		    </tr>
-		    <tr>
-			<td class="separator" colspan="5"><hr/></td>
+			<td class="reportfootercell">
+				<nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"queuestatus.RowsPerPage")%></nobr>
+				<nobr><input type="text" name="rowcount" size="5" value='<%=Integer.toString(rowCount)%>'/></nobr>
+			</td>
 		    </tr>
 		</table>
 
diff --git a/framework/crawler-ui/src/main/webapp/resultreport.jsp b/framework/crawler-ui/src/main/webapp/resultreport.jsp
index 6b721eb..8c37941 100644
--- a/framework/crawler-ui/src/main/webapp/resultreport.jsp
+++ b/framework/crawler-ui/src/main/webapp/resultreport.jsp
@@ -777,52 +777,50 @@
 		}
 %>
 		</table>
-		<table class="displaytable">
-		    <tr>
-			<td class="separator" colspan="5"><hr/></td>
-		    </tr>
-		    <tr>
-			<td class="value">
+		<table class="reportfootertable">
+		    <tr class="reportfooterrow">
+			<td class="reportfootercell">
 				<nobr>
 <%
 		if (startRow == 0)
 		{
 %>
-				<%=Messages.getBodyString(pageContext.getRequest().getLocale(),"resultreport.Previous")%>
+					<%=Messages.getBodyString(pageContext.getRequest().getLocale(),"resultreport.Previous")%>
 <%
 		}
 		else
 		{
 %>
-				<a href="javascript:void(0);" onclick='<%="javascript:SetPosition("+Integer.toString(startRow-rowCount)+");"%>' alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"resultreport.PreviousPage")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"resultreport.Previous")%></a>
+					<a href="javascript:void(0);" onclick='<%="javascript:SetPosition("+Integer.toString(startRow-rowCount)+");"%>' alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"resultreport.PreviousPage")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"resultreport.Previous")%></a>
 <%
 		}
 %>
-				&nbsp;
+				</nobr>
+				<nobr>
 <%
 		if (hasMoreRows == false)
 		{
 %>
-				<%=Messages.getBodyString(pageContext.getRequest().getLocale(),"resultreport.Next")%>
+					<%=Messages.getBodyString(pageContext.getRequest().getLocale(),"resultreport.Next")%>
 <%
 		}
 		else
 		{
 %>
-				<a href="javascript:void(0);" onclick='<%="javascript:SetPosition("+Integer.toString(startRow+rowCount)+");"%>' alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"resultreport.NextPage")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"resultreport.Next")%></a>
+					<a href="javascript:void(0);" onclick='<%="javascript:SetPosition("+Integer.toString(startRow+rowCount)+");"%>' alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"resultreport.NextPage")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"resultreport.Next")%></a>
 <%
 		}
 %>
 				</nobr>
 			</td>
-			<td class="description"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"resultreport.Rows")%></nobr></td><td class="value"><%=Integer.toString(startRow)%>-<%=(hasMoreRows?Integer.toString(startRow+rowCount-1):"END")%></td>
-			<td class="description"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"resultreport.RowsPerPage")%></nobr></td>
-			<td class="value">
-				<input type="text" name="rowcount" size="5" value='<%=Integer.toString(rowCount)%>'/>
+			<td class="reportfootercell">
+				<nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"resultreport.Rows")%></nobr>
+				<nobr><%=Integer.toString(startRow)%>-<%=(hasMoreRows?Integer.toString(startRow+rowCount-1):"END")%></nobr>
 			</td>
-		    </tr>
-		    <tr>
-			<td class="separator" colspan="5"><hr/></td>
+			<td class="reportfootercell">
+				<nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"resultreport.RowsPerPage")%></nobr>
+				<nobr><input type="text" name="rowcount" size="5" value='<%=Integer.toString(rowCount)%>'/></nobr>
+			</td>
 		    </tr>
 		</table>
 
diff --git a/framework/crawler-ui/src/main/webapp/setupAdminProfile.jsp b/framework/crawler-ui/src/main/webapp/setupAdminProfile.jsp
index dc664dd..e4adf4d 100644
--- a/framework/crawler-ui/src/main/webapp/setupAdminProfile.jsp
+++ b/framework/crawler-ui/src/main/webapp/setupAdminProfile.jsp
@@ -1,5 +1,11 @@
-<%
+<% response.setHeader("Pragma","No-cache");
+response.setDateHeader("Expires",0);
+response.setHeader("Cache-Control", "no-cache");
+response.setDateHeader("max-age", 0);
+response.setContentType("text/html;charset=utf-8");
+%><%@ include file="adminDefaults.jsp" %>
 
+<%
 /* $Id$ */
 
 /**
@@ -20,28 +26,20 @@
 */
 %>
 
-<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
-<%@ taglib prefix="sql" uri="http://java.sun.com/jsp/jstl/sql" %>
-<%@ taglib prefix="fmt" uri="http://java.sun.com/jsp/jstl/fmt" %>
+<%
+String userID = variableContext.getParameter("userID");
+String password = variableContext.getParameter("password");
+if (userID == null)
+	userID = "";
+if (password == null)
+	password = "";
 
-<jsp:useBean id="adminprofile" class="org.apache.manifoldcf.ui.beans.AdminProfile" scope="session"/>
-
-<c:catch var="error">
-	<c:if test="${param.valid=='true'}">
-		<c:set value="${param.login}" target="${adminprofile}" property="userID"/>
-		<c:set value="${param.password}" target="${adminprofile}" property="password"/>
-	</c:if>
-	
-	<c:if test="${param.valid=='false'}">
-		<c:set value="null" target="${adminprofile}" property="userID"/>
-	</c:if>
-</c:catch>
-
-<c:if test="${error!=null}">
-	<c:set target="${logger}" property="msg" value="Profile error!!!! ${error}"/>
-	<c:set value="null" target="${adminprofile}" property="userID"/>
-</c:if>
-
-
-
-
+adminprofile.login(threadContext,userID,password);
+if (adminprofile.getLoggedOn())
+	response.sendRedirect("index.jsp");
+else
+{
+	// Go back to login page, but with signal that login failed
+	response.sendRedirect("login.jsp?loginfailed=true");
+}
+%>
diff --git a/framework/crawler-ui/src/main/webapp/showjobstatus.jsp b/framework/crawler-ui/src/main/webapp/showjobstatus.jsp
index 3875a10..60b5a1a 100644
--- a/framework/crawler-ui/src/main/webapp/showjobstatus.jsp
+++ b/framework/crawler-ui/src/main/webapp/showjobstatus.jsp
@@ -110,9 +110,11 @@
 <%
     try
     {
+	// Get the max count
+	int maxCount = LockManagerFactory.getIntProperty(threadContext,"org.apache.manifoldcf.ui.maxstatuscount",500000);
 	// Get the job manager handle
 	IJobManager manager = JobManagerFactory.make(threadContext);
-	JobStatus[] jobs = manager.getAllStatus();
+	JobStatus[] jobs = manager.getAllStatus(true,maxCount);
 %>
 		<table class="datatable">
 			<tr>
@@ -158,7 +160,7 @@
 			statusName = Messages.getBodyString(pageContext.getRequest().getLocale(),"showjobstatus.Done");
 			break;
 		case JobStatus.JOBSTATUS_WINDOWWAIT:
-			statusName = Messages.getBodyString(pageContext.getRequest().getLocale(),"showjobstatus,Waiting");
+			statusName = Messages.getBodyString(pageContext.getRequest().getLocale(),"showjobstatus.Waiting");
 			break;
 		case JobStatus.JOBSTATUS_STARTING:
 			statusName = Messages.getBodyString(pageContext.getRequest().getLocale(),"showjobstatus.Startingup");
@@ -250,10 +252,10 @@
 		}
 %>
 			</td>
-			<td class="columncell"><%="<!--jobid="+js.getJobID()+"-->"%><%=js.getDescription()%></td><td class="columncell"><%=statusName%></td><td class="columncell"><%=startTime%></td><td class="columncell"><%=endTime%></td>
-			<td class="columncell"><%=new Long(js.getDocumentsInQueue()).toString()%></td>
-			<td class="columncell"><%=new Long(js.getDocumentsOutstanding()).toString()%></td>
-			<td class="columncell"><%=new Long(js.getDocumentsProcessed()).toString()%></td>
+			<td class="columncell"><%="<!--jobid="+js.getJobID()+"-->"%><%=org.apache.manifoldcf.ui.util.Encoder.bodyEscape(js.getDescription())%></td><td class="columncell"><%=statusName%></td><td class="columncell"><%=startTime%></td><td class="columncell"><%=endTime%></td>
+			<td class="columncell"><%=(js.getQueueCountExact()?"":"&gt; ")%><%=new Long(js.getDocumentsInQueue()).toString()%></td>
+			<td class="columncell"><%=(js.getOutstandingCountExact()?"":"&gt; ")%><%=new Long(js.getDocumentsOutstanding()).toString()%></td>
+			<td class="columncell"><%=(js.getProcessedCountExact()?"":"&gt; ")%><%=new Long(js.getDocumentsProcessed()).toString()%></td>
 		</tr>
 <%
 	}
diff --git a/framework/crawler-ui/src/main/webapp/simplereport.jsp b/framework/crawler-ui/src/main/webapp/simplereport.jsp
index 1de1006..a54f350 100644
--- a/framework/crawler-ui/src/main/webapp/simplereport.jsp
+++ b/framework/crawler-ui/src/main/webapp/simplereport.jsp
@@ -711,52 +711,50 @@
 		}
 %>
 		</table>
-		<table class="displaytable">
-		    <tr>
-			<td class="separator" colspan="5"><hr/></td>
-		    </tr>
-		    <tr>
-			<td class="value">
+		<table class="reportfootertable">
+		    <tr class="reportfooterrow">
+			<td class="reportfootercell">
 				<nobr>
 <%
 		if (startRow == 0)
 		{
 %>
-				<%=Messages.getBodyString(pageContext.getRequest().getLocale(),"simplereport.Previous")%>
+					<%=Messages.getBodyString(pageContext.getRequest().getLocale(),"simplereport.Previous")%>
 <%
 		}
 		else
 		{
 %>
-				<a href="javascript:void(0);" onclick='<%="javascript:SetPosition("+Integer.toString(startRow-rowCount)+");"%>' alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"simplereport.PreviousPage")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"simplereport.Previous")%></a>
+					<a href="javascript:void(0);" onclick='<%="javascript:SetPosition("+Integer.toString(startRow-rowCount)+");"%>' alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"simplereport.PreviousPage")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"simplereport.Previous")%></a>
 <%
 		}
 %>
-				&nbsp;
+				</nobr>
+				<nobr>
 <%
 		if (hasMoreRows == false)
 		{
 %>
-				<%=Messages.getBodyString(pageContext.getRequest().getLocale(),"simplereport.Next")%>
+					<%=Messages.getBodyString(pageContext.getRequest().getLocale(),"simplereport.Next")%>
 <%
 		}
 		else
 		{
 %>
-				<a href="javascript:void(0);" onclick='<%="javascript:SetPosition("+Integer.toString(startRow+rowCount)+");"%>' alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"simplereport.NextPage")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"simplereport.Next")%></a>
+					<a href="javascript:void(0);" onclick='<%="javascript:SetPosition("+Integer.toString(startRow+rowCount)+");"%>' alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"simplereport.NextPage")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"simplereport.Next")%></a>
 <%
 		}
 %>
 				</nobr>
 			</td>
-			<td class="description"><nobr>Rows:</nobr></td><td class="value"><%=Integer.toString(startRow)%>-<%=(hasMoreRows?Integer.toString(startRow+rowCount-1):"END")%></td>
-			<td class="description"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"simplereport.RowsPerPage")%></nobr></td>
-			<td class="value">
-				<input type="text" name="rowcount" size="5" value='<%=Integer.toString(rowCount)%>'/>
+			<td class="reportfootercell">
+				<nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"simplereport.Rows")%></nobr>
+				<nobr><%=Integer.toString(startRow)%>-<%=(hasMoreRows?Integer.toString(startRow+rowCount-1):"END")%></nobr>
 			</td>
-		    </tr>
-		    <tr>
-			<td class="separator" colspan="5"><hr/></td>
+			<td class="reportfootercell">
+				<nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"simplereport.RowsPerPage")%></nobr>
+				<nobr><input type="text" name="rowcount" size="5" value='<%=Integer.toString(rowCount)%>'/></nobr>
+			</td>
 		    </tr>
 		</table>
 
diff --git a/framework/crawler-ui/src/main/webapp/style.css b/framework/crawler-ui/src/main/webapp/style.css
index ae7b61b..0029b30 100644
--- a/framework/crawler-ui/src/main/webapp/style.css
+++ b/framework/crawler-ui/src/main/webapp/style.css
@@ -1,744 +1,625 @@
-/* Licensed to the Apache Software Foundation (ASF) under one or more        */
-/* contributor license agreements. See the NOTICE file distributed with      */
-/* this work for additional information regarding copyright ownership.       */
-/* The ASF licenses this file to You under the Apache License, Version 2.0   */
-/* (the "License"); you may not use this file except in compliance with      */
-/* the License. You may obtain a copy of the License at                      */
-/*                                                                           */
-/* http://www.apache.org/licenses/LICENSE-2.0                                */
-/*                                                                           */
-/* Unless required by applicable law or agreed to in writing, software       */
-/* distributed under the License is distributed on an "AS IS" BASIS,         */
-/* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.  */
-/* See the License for the specific language governing permissions and       */
-/* limitations under the License.                                            */
+/*
+Licensed to the Apache Software Foundation (ASF) under one or more
+contributor license agreements. See the NOTICE file distributed with
+this work for additional information regarding copyright ownership.
+The ASF licenses this file to You under the Apache License, Version 2.0
+(the "License"); you may not use this file except in compliance with
+the License. You may obtain a copy of the License at
 
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
+*/
+
+/* 
+    Created on : Dec 4, 2013, 11:11:44 AM
+    Modified by: Eranda Bandaranaike eragroove@gmail.com
+*/
+
+body
+{
+    font-family: "Open Sans","Helvetica Neue",Helvetica,Arial,sans-serif;
+    padding: 0;
+    margin: 0;
+    font-size: 12px;
+    color: #444;
+}
 .tabtable
 {
-	height: 100%;
-	width: 100%;
-	background: #404040;
-	border-style: none;
-	border-width: 0pt;
-	margin-left: 0pt;
-	margin-right: 0pt;
-	margin-top: 0pt;
-	margin-bottom: 0pt;
-	padding-left: 0pt;
-	padding-right: 0pt;
-	padding-top: 0pt;
-	padding-bottom: 0pt;
-	border-collapse: collapse;
+    height: 100%;
+    width: 100%;
+    border: 0 solid #fff;
+    margin: 0;
+    padding: 0;
+    border-collapse: collapse;
 }
 
 .tabrow
 {
-	height: 20pt;
-	font-family: sans-serif;
-	font-weight: bold;
-	font-size: 10pt;
+    height: 20px;
+    font-size: 12px;
+    line-height: 12px;
 }
 
 .tabbodyrow
 {
-	height: 100%;
+    height: 100%;
 }
 
 .passivetab
 {
-	background: #f0f0f0;
-	border-style: solid solid groove solid;
-	border-width: 1pt 1pt 2pt 1pt;
-	border-color: black black #606060 black;
-	margin-left: 0pt;
-	margin-right: 0pt;
-	margin-top: 0pt;
-	margin-bottom: 0pt;
-	padding-left: 4pt;
-	padding-right: 4pt;
-	padding-top: 4pt;
-	padding-bottom: 0pt;
-	vertical-align: top;
-	height: 20pt;
+    background: #f8f8f8;
+    border: solid 1px;
+    border-color: #ccc #ccc #229ddb #ccc;
+    margin: 0;
+    padding: 4px 10px 5px 10px;
+    vertical-align: top;
+    height: 20px;
 }
-
+.passivetab a{
+    text-decoration: none;
+}
 .activetab
 {
-	background: #f0f0f0;
-	border-style: groove groove none groove;
-	border-width: 2pt 2pt 0pt 2pt;
-	border-color: #606060;
-	margin-left: 0pt;
-	margin-right: 0pt;
-	margin-top: 0pt;
-	margin-bottom: 0pt;
-	padding-left: 4pt;
-	padding-right: 4pt;
-	padding-top: 4pt;
-	padding-bottom: 0pt;
-	height: 20pt;
-	vertical-align: top;
+    background: #fff;
+    border-style: solid;
+    border-width: 1px 1px 0 2px;
+    border-color: #229ddb;
+    margin: 0;
+    padding: 4px 10px 5px 10px;
+    height: 20px;
+    vertical-align: top;
 }
 
 .remaindertab
 {
-	width: 100%;
-	background: #404040;
-	border-style: none none groove none;
-	border-width: 0pt 0pt 2pt 0pt;
-	border-color: #606060;
-	margin-left: 0pt;
-	margin-right: 0pt;
-	margin-top: 0pt;
-	margin-bottom: 0pt;
-	padding-left: 4pt;
-	padding-right: 4pt;
-	padding-top: 4pt;
-	padding-bottom: 0pt;
-	height: 20pt;
-	vertical-align: top;
-	text-align: right;
-	color: #c0c0c0;
+    width: 100%;
+    background: #f8f8f8;
+    border-style: solid;
+    border-width: 1px 1px 1px 0;
+    border-color: #ccc #ccc #229ddb;
+    padding: 4px 10px 5px 10px;
+    height: 20px;
+    vertical-align: top;
+    text-align: right;
+    color: #555;
 }
 
 .tabbody
 {
-	height: 100%;
-	background: #f0f0f0;
-	border-style: none none none groove;
-	border-width: 0pt 0pt 0pt 2pt;
-	border-color: #606060;
-	margin-left: 0pt;
-	margin-right: 0pt;
-	margin-top: 0pt;
-	margin-bottom: 0pt;
-	padding-left: 4pt;
-	padding-right: 4pt;
-	padding-top: 4pt;
-	padding-bottom: 4pt;
-	vertical-align: top;
+    height: 100%;
+    border-left: #229ddb solid 1px;
+    margin: 0;
+    padding: 10px;
+    vertical-align: top;
 }
 
 .standardbody
 {
-	background: #404040;
-	padding: 0pt;
-	margin: 0pt;
+    background: #404040;
+    padding: 0;
+    margin: 0;
 }
 
 .standardform
 {
-	height: 100%;
-	padding: 0pt;
-	margin: 0pt;
+    height: 100%;
+    padding: 0;
+    margin: 0;
 }
 
 .page
 {
-	width: 100%;
-	height: 100%;
-	background: #404040;
-	color: gray;
-	font-family: sans-serif;
+    width: 100%;
+    height: 100%;
+    background: #fff;
+    border-width: 0 !important;
 }
 
 .banner
 {
-	width: 100%;
-	height: 100px;
-	background-color: #c0c0f0;
-	color: black;
-	font-family: helvetica;
+    width: 100%;
+    min-height: 45px;
+    background: #fff;
+    color: black;
 }
 
 .bannertable
 {
-	width: 100%;
-	height: 100%;
-	padding-left: 0pt;
-	padding-right: 0pt;
-	margin-left: 0pt;
-	margin-right: 0pt;
-	margin-top: 0pt;
-	margin-bottom: 0pt;
-	padding-top: 0pt;
-	padding-bottom: 0pt;
-	border-style: none;
-	border-width: 0pt;
-	border-collapse: collapse;
+    width: 100%;
+    height: 100%;
+    margin: 0;
+    padding: 0;
+    border-style: none;
+    border-width: 0;
+    border-collapse: collapse;
+    border-bottom: 1px solid #ddd;
+    background: #f8f8f8;
 }
 
 .headertable
 {
-	width: 100%;
-	height: 100%;
-	padding-left: 0pt;
-	padding-right: 0pt;
-	margin-left: 0pt;
-	margin-right: 0pt;
-	margin-top: 0pt;
-	margin-bottom: 0pt;
-	padding-top: 0pt;
-	padding-bottom: 0pt;
-	border-style: none;
-	border-width: 0pt;
-	border-collapse: collapse;
+    width: 100%;
+    height: 100%;
+    margin: 0;
+    padding: 0;
+    border-style: none;
+    border-width: 0;
+    border-collapse: collapse;
 }
 
 .headerimage
 {
-	width: 50px;
-	text-align: left;
-	vertical-align: center;
-	padding: 0pt;
-	margin: 0pt;
+    width: 50px;
+    text-align: left;
+    vertical-align: middle;
+    padding: 0;
+    margin: 0;
 }
-
+.headerimage img{
+    margin: 12px;
+}
 .header
 {
-	text-align: center;
-	vertical-align: center;
-	color: black;
-	font-weight: bold;
-	font-size: 22pt;
-	font-family: sans-serif;
+    color: black;
+    font-weight: bold;
+    font-size: 20px;
 }
 
 .headerdate
 {
-	text-align: right;
-	vertical-align: top;
-	color: black;
-	font-weight: light;
-	font-size: 8pt;
-	font-family: sans-serif;
+    text-align: right;
+    vertical-align: top;
+    color: black;
+    font-weight: 200;
+    font-size: 10px;
+
 }
 
 .navigation
 {
-	width: 100px;
-	height: 100%;
-	vertical-align: top;
-	background: #8080ff;
-	color: gray;
-	font-family: sans-serif;
+    width: 17%;
+    height: 100%;
+    vertical-align: top;
+    border-top-width: 0;
+    border-right: solid #c0c0c0 1px;
 }
 
 .menumain
 {
-	font-size: 14pt;
-	color: #606060;
-	font-weight: bold;
-	font-family: sans-serif;
+    border-color: #E5E5E5;
+    border-width: 1px 0;
+    border-style: solid;
+    padding: 10px;
+    background: #f8f8f8;
+    margin: 0 0 -1px 0;
 }
 
 .menusecond
 {
-	font-family: sans-serif;
-	margin-left: 0pt;
-	font-size: 10pt;
-	font-weight: bold;
-	color: #202020;
-	list-style: none;
-	text-indent: 0pt;
-	padding-left: 0pt;
+    font-size: 12px;
+    list-style: none;
+    text-indent: 0;
+    margin: 0;
+    padding: 0;
 }
 
 .menuitem
 {
-	margin-left: 16pt;
-	padding-left: 0pt;
-	text-indent: 0pt;
+    margin-left: 16px;
+    padding: 5px;
+    text-indent: 0;
+    border: dotted #e8e8e8;
+    border-width: 1px 0;
+    margin-bottom: -1px;
 }
 
 .menulink
 {
-	font-color: gray;
-	text-decoration: none;
-	color: #202020;
+
+    text-decoration: none;
+    color: #202020;
 }
 
 .menulink:link
 {
-	font-color: gray;
-	text-decoration: none;
-	color: #202020;
+
+    text-decoration: none;
+    color: #202020;
 }
 
 .menulink:active
 {
-	font-color: gray;
-	text-decoration: none;
-	color: #202020;
+
+    text-decoration: none;
+    color: #202020;
 }
 
 .menulink:visited
 {
-	font-color: gray;
-	text-decoration: none;
-	color: #202020;
+
+    text-decoration: none;
+    color: #202020;
 }
 
 .darkwindow
 {
-	width: 100%;
-	height: 100%;
-	background: #404040;
-	font-family: sans-serif;
-	color: black;
-	vertical-align: top;
+    width: 100%;
+    height: 100%;
+    color: black;
+    vertical-align: top;
 }
 
 .window
 {
-	width: 100%;
-	height: 100%;
-	background: #f0f0f0;
-	font-family: sans-serif;
-	color: black;
-	vertical-align: top;
+    width: 83%;
+    height: 100%;
+    background: #fff;
+    color: black;
+    vertical-align: top;
+    padding: 10px;
 }
 
 .windowtitle
 {
-	font-weight: bold;
-	font-size: 16pt;
+    border-bottom: 1px solid #c5e4ff;
+    font-size: 16px;
+    padding: 0 0 10px 10px;
 }
 
 .datatable
 {
-	width: 100%;
-	padding-left: 0pt;
-	padding-right: 0pt;
-	margin-left: 0pt;
-	margin-right: 0pt;
-	margin-top: 0pt;
-	margin-bottom: 0pt;
-	padding-top: 0pt;
-	padding-bottom: 0pt;
-	border-style: none;
-	border-width: 0pt;
-	border-collapse: collapse;
+    width: 100%;
+    margin: 0 -1px;
+    padding: 0;
+    border-style: none;
+    border-width: 0;
+    border-collapse: collapse;
 }
 
 .headerrow
 {
-	padding-left: 0pt;
-	padding-right: 0pt;
-	margin-left: 0pt;
-	margin-right: 0pt;
-	margin-top: 0pt;
-	margin-bottom: 0pt;
-	padding-top: 0pt;
-	padding-bottom: 0pt;
-	border-style: none;
-	border-width: 0pt;
-	background-color: #c0c0c0;
-	vertical-align: center;
+    padding: 0;
+    margin: 0;
+    border-style: none;
+    border-width: 0;
+    vertical-align: middle;
 }
 
 .datarow
 {
-	padding-left: 0pt;
-	padding-right: 0pt;
-	margin-left: 0pt;
-	margin-right: 0pt;
-	margin-top: 0pt;
-	margin-bottom: 0pt;
-	padding-top: 0pt;
-	padding-bottom: 0pt;
-	border-style: none;
-	border-width: 0pt;
-	background-color: #f0f0f0;
-	vertical-align: top;
+    padding: 0;
+    margin: 0;
+    border: 0;
+    background-color: #f0f0f0;
+    vertical-align: top;
 }
 
 .evendatarow
 {
-	padding-left: 0pt;
-	padding-right: 0pt;
-	margin-left: 0pt;
-	margin-right: 0pt;
-	margin-top: 0pt;
-	margin-bottom: 0pt;
-	padding-top: 0pt;
-	padding-bottom: 0pt;
-	border-style: none;
-	border-width: 0pt;
-	background-color: #d8d8e8;
-	vertical-align: top;
+    margin: 0;
+    padding: 0;
+    border-style: none;
+    border-width: 0;
+    vertical-align: top;
 }
 
 .odddatarow
 {
-	padding-left: 0pt;
-	padding-right: 0pt;
-	margin-left: 0pt;
-	margin-right: 0pt;
-	margin-top: 0pt;
-	margin-bottom: 0pt;
-	padding-top: 0pt;
-	padding-bottom: 0pt;
-	border-style: none;
-	border-width: 0pt;
-	background-color: #e8e8f8;
-	vertical-align: top;
+    padding: 0;
+    margin: 0;
+    border-style: none;
+    border-width: 0;
+    vertical-align: top;
+    background-color: #fafafa;
 }
 
 .columnheader
 {
-	padding-left: 10pt;
-	padding-right: 10pt;
-	margin-left: 0pt;
-	margin-right: 0pt;
-	border-style-top: solid;
-	border-style-bottom: solid;
-	border-color-top: red;
-	border-color-bottom: red;
-	border-width-top: 1pt;
-	border-width-bottom: 1pt;
-	color: #404040;
-	font-size: 14pt;
-	font-family: sans-serif;
+    padding: 8px;
+    border: solid 1px;
+    border-color: #c5e4ff #eee #eee #ccc;
+    font-size: 13px;
+    font-weight: 700;
+}
+.columnheader:first-child, .evendatarow:first-child, .odddatarow:first-child{
+    border-left-width: 0;
+}
+.columnheader:last-child, .evendatarow:last-child, .odddatarow:last-child{
+    border-right-width: 0;
 }
 
 .columncell
 {
-	padding-left: 10pt;
-	padding-right: 10pt;
-	margin-left: 0pt;
-	margin-right: 0pt;
-	margin-top: 0pt;
-	margin-bottom: 0pt;
-	padding-top: 0pt;
-	padding-bottom: 0pt;
-	border-style: none;
-	border-width: 0pt;
-	color: black;
-	font-size: 12pt;
-	font-family: sans-serif;
+    margin: 0;
+    padding: 7px;
+    font-size: 12px;
+    border: 1px solid #eee;
+
+}
+.columncell a{
+    border: solid 1px #ccc;
+    padding: 1px 4px;
+    margin: 0;
+    display: block;
+    float: left;
+    text-decoration: none;
+    margin-left: -1px;
+    background: #eee;
+    font-size: 11px;
+}
+.columncell a:hover{
+    background: #ddd;
 }
 
 .reportcolumnheader
 {
-	padding-left: 5pt;
-	padding-right: 5pt;
-	margin-left: 0pt;
-	margin-right: 0pt;
-	border-style-top: solid;
-	border-style-bottom: solid;
-	border-color-top: red;
-	border-color-bottom: red;
-	border-width-top: 1pt;
-	border-width-bottom: 1pt;
-	color: #404040;
-	font-size: 10pt;
-	font-family: sans-serif;
+    padding: 0 5px;
+    margin: 0;
+    color: #404040;
+    font-size: 10px;
+
 }
 
 .reportcolumncell
 {
-	padding-left: 5pt;
-	padding-right: 5pt;
-	margin-left: 0pt;
-	margin-right: 0pt;
-	margin-top: 0pt;
-	margin-bottom: 0pt;
-	padding-top: 0pt;
-	padding-bottom: 0pt;
-	border-style: none;
-	border-width: 0pt;
-	color: black;
-	font-size: 8pt;
-	font-family: sans-serif;
+    padding: 0 5px;
+    margin: 0;
+    border: 0;
+    color: black;
+    font-size: 8px;
+
+}
+
+.reportfootertable
+{
+    width: 100%;
+    padding: 0;
+    margin: 0;
+    border: 0;
+    border-collapse: collapse;
+}
+
+.reportfooterrow
+{
+    width: 100%;
+    padding: 0;
+    margin: 0;
+    border: 0;
+    background-color: #f0f0f0;
+    vertical-align: middle;
+}
+
+.reportfootercell
+{
+    padding-left: 5px;
+    padding-right: 5px;
+    padding-top: 5px;
+    padding-bottom: 5px;
+    margin: 0;
+    border: 0;
+    color: black;
+    font-size: 10px;
 }
 
 .formtable
 {
-	width: 100%;
-	padding-left: 0pt;
-	padding-right: 0pt;
-	margin-left: 0pt;
-	margin-right: 0pt;
-	margin-top: 0pt;
-	margin-bottom: 0pt;
-	padding-top: 0pt;
-	padding-bottom: 0pt;
-	border-style: none;
-	border-width: 0pt;
-	border-collapse: collapse;
+    width: 100%;
+    padding: 0;
+    margin: 0;
+    border: 0;
+    border-collapse: collapse;
 }
 
 .formheaderrow
 {
-	padding-left: 0pt;
-	padding-right: 0pt;
-	margin-left: 0pt;
-	margin-right: 0pt;
-	margin-top: 0pt;
-	margin-bottom: 0pt;
-	padding-top: 0pt;
-	padding-bottom: 0pt;
-	border-style: none;
-	border-width: 0pt;
-	background-color: #c0c0c0;
-	vertical-align: center;
+    margin: 0;
+    padding: 0;
+    border: 0;
+    background-color: #f9f9f9;
+    vertical-align: middle;
 }
 
 .formcolumnheader
 {
-	padding-left: 5pt;
-	padding-right: 5pt;
-	margin-left: 0pt;
-	margin-right: 0pt;
-	border-style-top: solid;
-	border-style-bottom: solid;
-	border-color-top: red;
-	border-color-bottom: red;
-	border-width-top: 1pt;
-	border-width-bottom: 1pt;
-	color: #404040;
-	font-size: 10pt;
-	font-family: sans-serif;
+    padding: 5px;
+    color: #404040;
+    font-size: 10px;
+
 }
 
 .evenformrow
 {
-	padding-left: 0pt;
-	padding-right: 0pt;
-	margin-left: 0pt;
-	margin-right: 0pt;
-	margin-top: 0pt;
-	margin-bottom: 0pt;
-	padding-top: 0pt;
-	padding-bottom: 0pt;
-	border-style: none;
-	border-width: 0pt;
-	background-color: #d8d8e8;
-	vertical-align: top;
+    padding: 0;
+    margin: 0;
+    border: 0;
+    background-color: #f8f8f8;
+    vertical-align: top;
 }
 
 .oddformrow
 {
-	padding-left: 0pt;
-	padding-right: 0pt;
-	margin-left: 0pt;
-	margin-right: 0pt;
-	margin-top: 0pt;
-	margin-bottom: 0pt;
-	padding-top: 0pt;
-	padding-bottom: 0pt;
-	border-style: none;
-	border-width: 0pt;
-	background-color: #e8e8f8;
-	vertical-align: top;
+    padding: 0;
+    margin: 0;
+    border: 0;
+    background-color: #f8f8f8;
+    vertical-align: top;
 }
 
 .formrow
 {
-	padding-left: 0pt;
-	padding-right: 0pt;
-	margin-left: 0pt;
-	margin-right: 0pt;
-	margin-top: 0pt;
-	margin-bottom: 0pt;
-	padding-top: 0pt;
-	padding-bottom: 0pt;
-	border-style: none;
-	border-width: 0pt;
-	vertical-align: top;
-	background-color: #e0e0e0;
+    padding: 0;
+    margin: 0;
+    border-style: none;
+    border-width: 0;
+    vertical-align: top;
+    background-color: #f6f6f6;
 }
 
 .formcolumncell
 {
-	padding-left: 5pt;
-	padding-right: 5pt;
-	margin-left: 0pt;
-	margin-right: 0pt;
-	margin-top: 0pt;
-	margin-bottom: 0pt;
-	padding-top: 0pt;
-	padding-bottom: 0pt;
-	border-style: none;
-	border-width: 0pt;
-	color: black;
-	font-size: 8pt;
-	font-family: sans-serif;
+    padding: 5px;
+    margin: 0;
+    border-style: none;
+    border-width: 0;
+    color: black;
+    font-size: 8px;
+
 }
 
 .formcolumnmessage
 {
-	padding-left: 5pt;
-	padding-right: 5pt;
-	margin-left: 0pt;
-	margin-right: 0pt;
-	margin-top: 0pt;
-	margin-bottom: 0pt;
-	padding-top: 0pt;
-	padding-bottom: 0pt;
-	border-style: none;
-	border-width: 0pt;
-	color: black;
-	font-size: 8pt;
-	font-family: sans-serif;
-	text-align: center;
+    padding: 0 5px;
+    margin: 0;
+    border-style: none;
+    border-width: 0;
+    color: black;
+    font-size: 8px;
+
+    text-align: center;
 }
 
 .formseparator
 {
-	padding-left: 0pt;
-	padding-right: 0pt;
-	margin-left: 5pt;
-	margin-right: 5pt;
-	margin-top: 1pt;
-	margin-bottom: 1pt;
-	padding-top: 0pt;
-	padding-bottom: 0pt;
-	border-width: 2pt;
-	border-color: #808080;
-	color: #808080;
-	font-family: sans-serif;
-	font-size: 5pt;
+    padding: 0;
+    margin: 0;
+    border-bottom: solid 1px #eee;
+    color: #808080;
+    font-size: 5px;
 }
-
+.formseparator hr{
+    display: none;
+}
 .displaytable
 {
-	width: 100%;
-	padding-left: 0pt;
-	padding-right: 0pt;
-	margin-left: 0pt;
-	margin-right: 0pt;
-	margin-top: 0pt;
-	margin-bottom: 0pt;
-	padding-top: 0pt;
-	padding-bottom: 0pt;
-	border-style: none;
-	border-width: 0pt;
-	border-collapse: collapse;
+    width: 100%;
+    padding: 0;
+    margin: 0;
+    border: 0;
+    border-collapse: collapse;
 }
 
 .description
 {
-	padding-left: 0pt;
-	padding-right: 10pt;
-	margin-left: 0pt;
-	margin-right: 0pt;
-	border-style: none;
-	border-color: red;
-	border-width: 0pt;
-	background-color: #e0e0e0;
-	font-weight: bold;
-	color: #404040;
-	font-size: 12pt;
-	font-family: sans-serif;
-	text-align: right;
+    padding-left: 0;
+    padding-right: 5px;
+    padding-top: 0;
+    padding-bottom: 0;
+    margin-left: 0;
+    margin-right: 0;
+    margin-top: 5px;
+    margin-bottom: 0;
+    border: 0;
+    color: #404040;
+    font-size: 12px;
+    text-align: left;
 }
 
 .message
 {
-	padding-left: 10pt;
-	padding-right: 10pt;
-	margin-left: 0pt;
-	margin-right: 0pt;
-	border-style: none;
-	border-color: red;
-	border-width: 0pt;
-	color: #404040;
-	font-size: 12pt;
-	font-family: sans-serif;
-	text-align: center;
-	background-color: #e0e0e0;
+    padding: 10px 0;
+    margin: 0;
+    border-style: none;
+    border-width: 0;
+    font-size: 14px;
 }
-
+.message a{
+    padding: 10px;
+    border-right: #6FB3E0 2px solid;
+    background-color: #6FB3E0 !important;
+    text-decoration: none;
+    color: #fff;
+    cursor: pointer;
+    margin: 0 5px 0 0;
+}
+.message input[type="button"], .formcolumncell input[type="button"]{
+    min-height: 30px;
+    font-size: 14px;
+    color: #fff;
+    cursor: pointer;
+    padding: 10px;
+    border: #6FB3E0 2px solid;
+    background-color: #6FB3E0 !important;
+    color: #fff;
+}
+.formcolumncell input[type="button"]{
+    min-height: 22px;
+    padding: 0 10px;
+}
+.message a input[type="button"]{
+    border: none;
+    background: none transparent;
+    margin: -4px;
+    font-size: 14px;
+    color: #fff;
+    cursor: pointer;
+    padding: 0 10px;
+}
 .value
 {
-	width: 100%;
-	padding-left: 10pt;
-	padding-right: 10pt;
-	margin-left: 0pt;
-	margin-right: 0pt;
-	margin-top: 0pt;
-	margin-bottom: 0pt;
-	padding-top: 0pt;
-	padding-bottom: 0pt;
-	border-style: none;
-	border-width: 0pt;
-	background-color: #e0e0e0;
-	color: black;
-	font-size: 12pt;
-	font-family: sans-serif;
-	text-align: left;
+    width: 100%;
+    padding-left: 0;
+    padding-right: 0;
+    padding-top: 0;
+    padding-bottom: 0;
+    margin-left: 0;
+    margin-right: 0;
+    margin-top: 5px;
+    margin-bottom: 0;
+    border: 0;
+    color: black;
+    font-size: 12px;
+    text-align: left;
 }
-
+.value input[type="text"],.value input[type="password"]{
+    border: 1px solid #bbb;
+    padding: 5px;
+    margin-top: 10px;
+}
 .boxcell
 {
-	padding-left: 0pt;
-	padding-right: 0pt;
-	margin-left: 2pt;
-	margin-right: 2pt;
-	margin-top: 2pt;
-	margin-bottom: 2pt;
-	padding-top: 0pt;
-	padding-bottom: 0pt;
-	border-style: groove;
-	border-width: 2pt;
-	border-color: #e0e0e0;
+    margin: 0 2px;
+    padding: 0;
+    border: solid 1px #e0e0e0;
 }
 
 .separator
 {
-	padding-left: 0pt;
-	padding-right: 0pt;
-	margin-left: 5pt;
-	margin-right: 5pt;
-	margin-top: 1pt;
-	margin-bottom: 1pt;
-	padding-top: 0pt;
-	padding-bottom: 0pt;
-	border-width: 2pt;
-	border-color: #f0f0f0;
-	color: #f0f0f0;
-	font-family: sans-serif;
-	font-size: 5pt;
-	background-color: #e0e0e0;
+    padding: 5px 0;
+    color: #f0f0f0;
+    font-size: 5px;
+    border-bottom: dotted 1px #eee;
+}
+.separator hr{
+    display: none;
 }
 
 .lightseparator
 {
-	padding-left: 0pt;
-	padding-right: 0pt;
-	margin-left: 5pt;
-	margin-right: 5pt;
-	margin-top: 1pt;
-	margin-bottom: 1pt;
-	padding-top: 0pt;
-	padding-bottom: 0pt;
-	border-width: 2pt;
-	border-color: #808080;
-	color: #808080;
-	font-family: sans-serif;
-	font-size: 5pt;
-	background-color: #e0e0e0;
+    padding: 0;
+    margin: 1px 5px;
+    border-width: 1px;
+    border-color: #808080;
+    color: #808080;
+    font-size: 5px;
+    background-color: #e0e0e0;
 }
-
-.schedulepulldown
+.lightseparator hr{
+    display: none;
+}
+.schedulepulldown, .value select
 {
-	font-family: sans-serif;
-	font-size: 8pt;
+    margin-top: 10px;
+    font-size: 12px;
+    border: solid #bbb 1px;
 }
-
-BODY { 
-  margin: 1em;
-  font-family: serif;
-  line-height: 1.1;
-  background: white;
-  color: black; 
-}
-
 H1, H2, H3, H4, H5, H6, P, UL, OL, DIR, MENU, DIV, 
 DT, DD, ADDRESS, BLOCKQUOTE, PRE, BR, HR, FORM, DL { 
-  display: block }
+    display: block }
 
 B, STRONG, I, EM, CITE, VAR, TT, CODE, KBD, SAMP, 
 IMG, SPAN { display: inline }
@@ -772,5 +653,57 @@
 DT { margin-bottom: 0 }
 DD { margin-top: 0; margin-left: 3em }
 
-
+/**
+    Login form styles
+*/
+.login_form{
+    border: 1px solid #DDDDDD;
+    margin: 200px auto;
+    padding: 5px 5px 1px;
+    width: 334px;
+}
+.login_form form{
+    margin: 0;
+    padding: 0;
+}
+.login_inputs{
+    border: 1px solid #ddd !important;
+    padding: 7px 7px 7px 25px;
+    min-width: 300px;
+    background-color: #fff !important;
+}
+.login_inputs:focus{
+    border-color: #6FB3E0 !important;
+}
+.username{
+    background: url("username.png") no-repeat 4px 50%;
+}
+.password{
+    background: url("password.png") no-repeat 4px 50%;
+}
+.input_container{
+    margin: 0 0 4px 0;
+}
+.login_submit{
+    padding: 7px;
+    background: #6FB3E0;
+    border: none;
+    width: 100px;
+    text-align: center;
+    font-size: 15px;
+    font-weight: 700;
+    color: #fff;
+    cursor: pointer;
+}
+.login_submit:hover{
+    border: #5397c4 2px solid;
+    padding: 5px;
+}
+.login_header{
+    font-size: 1.2em;
+    padding: 5px 5px 5px 10px;
+    background: #F7F7F7;
+    border-bottom: 1px solid #ddd;
+    margin: -5px -5px 5px -5px;
+}
 
diff --git a/framework/crawler-ui/src/main/webapp/viewauthority.jsp b/framework/crawler-ui/src/main/webapp/viewauthority.jsp
index d03cf62..3fd2946 100644
--- a/framework/crawler-ui/src/main/webapp/viewauthority.jsp
+++ b/framework/crawler-ui/src/main/webapp/viewauthority.jsp
@@ -67,6 +67,7 @@
     {
 	IAuthorityConnectionManager manager = AuthorityConnectionManagerFactory.make(threadContext);
 	IAuthorityConnectorManager connectorManager = AuthorityConnectorManagerFactory.make(threadContext);
+	IAuthorityConnectorPool authorityConnectorPool = AuthorityConnectorPoolFactory.make(threadContext);
 	String connectionName = variableContext.getParameter("connname");
 	IAuthorityConnection connection = manager.load(connectionName);
 	if (connection == null)
@@ -83,6 +84,14 @@
 		if (connectorName == null)
 			connectorName = className + Messages.getString(pageContext.getRequest().getLocale(),"viewauthority.uninstalled");
 		int maxCount = connection.getMaxConnections();
+		String prereq = connection.getPrerequisiteMapping();
+		String authDomain = connection.getAuthDomain();
+		if (authDomain == null)
+			authDomain = "";
+		String groupName = connection.getAuthGroup();
+		if (groupName == null)
+			groupName = "";
+
 		ConfigParams parameters = connection.getConfigParams();
 
 		// Do stuff so we can call out to display the parameters
@@ -93,7 +102,7 @@
 		String connectionStatus;
 		try
 		{
-			IAuthorityConnector c = AuthorityConnectorFactory.grab(threadContext,className,parameters,maxCount);
+			IAuthorityConnector c = authorityConnectorPool.grab(connection);
 			if (c == null)
 			{
 				connectionStatus = Messages.getString(pageContext.getRequest().getLocale(),"viewauthority.Connectorisnotinstalled");
@@ -106,7 +115,7 @@
 				}
 				finally
 				{
-					AuthorityConnectorFactory.release(c);
+					authorityConnectorPool.release(connection,c);
 				}
 			}
 		}
@@ -120,15 +129,50 @@
 				<td class="separator" colspan="4"><hr/></td>
 			</tr>
 			<tr>
-				<td class="description" colspan="1"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewauthority.NameColon")%></nobr></td><td class="value" colspan="1"><%="<!--connection="+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(connectionName)+"-->"%><nobr><%=org.apache.manifoldcf.ui.util.Encoder.bodyEscape(connectionName)%></nobr></td>
-				<td class="description" colspan="1"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewauthority.DescriptionColon")%></nobr></td><td class="value" colspan="1"><%=org.apache.manifoldcf.ui.util.Encoder.bodyEscape(description)%></td>
+				<td class="description" colspan="1"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewauthority.NameColon")%></nobr></td>
+				<td class="value" colspan="1"><%="<!--connection="+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(connectionName)+"-->"%><nobr><%=org.apache.manifoldcf.ui.util.Encoder.bodyEscape(connectionName)%></nobr></td>
+				<td class="description" colspan="1"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewauthority.DescriptionColon")%></nobr></td>
+				<td class="value" colspan="1"><%=org.apache.manifoldcf.ui.util.Encoder.bodyEscape(description)%></td>
 			</tr>
 			<tr>
 				<td class="separator" colspan="4"><hr/></td>
 			</tr>
 			<tr>
-				<td class="description" colspan="1"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewauthority.AuthorityTypeColon")%></nobr></td><td class="value" colspan="1"><nobr><%=org.apache.manifoldcf.ui.util.Encoder.bodyEscape(connectorName)%></nobr></td>
-				<td class="description" colspan="1"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewauthority.MaxConnectionsColon")%></nobr></td><td class="value" colspan="1"><%=org.apache.manifoldcf.ui.util.Encoder.bodyEscape(Integer.toString(maxCount))%></td>
+				<td class="description" colspan="1"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewauthority.AuthorityTypeColon")%></nobr></td>
+				<td class="value" colspan="1"><nobr><%=org.apache.manifoldcf.ui.util.Encoder.bodyEscape(connectorName)%></nobr></td>
+				<td class="description" colspan="1"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewauthority.MaxConnectionsColon")%></nobr></td>
+				<td class="value" colspan="1"><%=org.apache.manifoldcf.ui.util.Encoder.bodyEscape(Integer.toString(maxCount))%></td>
+			</tr>
+			<tr>
+				<td class="separator" colspan="4"><hr/></td>
+			</tr>
+			<tr>
+				<td class="description" colspan="1"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewauthority.AuthorityGroupColon")%></nobr></td>
+				<td class="value" colspan="1"><nobr><%=org.apache.manifoldcf.ui.util.Encoder.bodyEscape(groupName)%></nobr></td>
+				<td class="description" colspan="1"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewauthority.AuthorizationDomainColon")%></nobr></td>
+				<td class="value" colspan="1"><nobr><%=org.apache.manifoldcf.ui.util.Encoder.bodyEscape(authDomain)%></nobr></td>
+			</tr>
+			<tr>
+				<td class="separator" colspan="4"><hr/></td>
+			</tr>
+			<tr>
+				<td class="description" colspan="1"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewauthority.PrerequisiteUserMappingColon")%></nobr></td>
+				<td class="value" colspan="3">
+<%
+		if (prereq != null)
+		{
+%>
+					<nobr><%=org.apache.manifoldcf.ui.util.Encoder.bodyEscape(prereq)%></nobr>
+<%
+		}
+		else
+		{
+%>
+					<nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewauthority.NoPrerequisites")%></nobr>
+<%
+		}
+%>
+				</td>
 			</tr>
 			<tr>
 				<td class="separator" colspan="4"><hr/></td>
@@ -136,7 +180,7 @@
 			<tr>
 				<td colspan="4">
 <%
-		AuthorityConnectorFactory.viewConfiguration(threadContext,className,new org.apache.manifoldcf.ui.jsp.JspWrapper(out),pageContext.getRequest().getLocale(),parameters);
+		AuthorityConnectorFactory.viewConfiguration(threadContext,className,new org.apache.manifoldcf.ui.jsp.JspWrapper(out,adminprofile),pageContext.getRequest().getLocale(),parameters);
 %>
 
 				</td>
@@ -145,12 +189,16 @@
 				<td class="separator" colspan="4"><hr/></td>
 			</tr>
 			<tr>
-				<td class="description" colspan="1"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewauthority.ConnectionStatusColon")%></nobr></td><td class="value" colspan="3"><%=org.apache.manifoldcf.ui.util.Encoder.bodyEscape(connectionStatus)%></td>
+				<td class="description" colspan="1"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewauthority.ConnectionStatusColon")%></nobr></td>
+				<td class="value" colspan="3"><%=org.apache.manifoldcf.ui.util.Encoder.bodyEscape(connectionStatus)%></td>
 			</tr>
 			<tr>
 				<td class="separator" colspan="4"><hr/></td>
 			</tr>
-		<tr><td class="message" colspan="4"><a href='<%="viewauthority.jsp?connname="+java.net.URLEncoder.encode(connectionName,"UTF-8")%>' alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"viewauthority.Refresh")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewauthority.Refresh")%></a>&nbsp;<a href='<%="editauthority.jsp?connname="+java.net.URLEncoder.encode(connectionName,"UTF-8")%>' alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"viewauthority.EditThisAuthorityConnection")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewauthority.Edit")%></a>&nbsp;<a href="javascript:void()" onclick='<%="javascript:Delete(\""+org.apache.manifoldcf.ui.util.Encoder.attributeJavascriptEscape(connectionName)+"\")"%>' alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"viewauthority.DeleteThisAuthorityConnection")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewauthority.Delete")%></a>
+		<tr><td class="message" colspan="4">
+			<nobr><a href='<%="viewauthority.jsp?connname="+java.net.URLEncoder.encode(connectionName,"UTF-8")%>' alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"viewauthority.Refresh")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewauthority.Refresh")%></a></nobr>
+			<nobr><a href='<%="editauthority.jsp?connname="+java.net.URLEncoder.encode(connectionName,"UTF-8")%>' alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"viewauthority.EditThisAuthorityConnection")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewauthority.Edit")%></a></nobr>
+			<nobr><a href="javascript:void()" onclick='<%="javascript:Delete(\""+org.apache.manifoldcf.ui.util.Encoder.attributeJavascriptEscape(connectionName)+"\")"%>' alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"viewauthority.DeleteThisAuthorityConnection")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewauthority.Delete")%></a></nobr>
 		</td></tr>
 		</table>
 
diff --git a/framework/crawler-ui/src/main/webapp/viewconnection.jsp b/framework/crawler-ui/src/main/webapp/viewconnection.jsp
index 9f76144..0a01ecc 100644
--- a/framework/crawler-ui/src/main/webapp/viewconnection.jsp
+++ b/framework/crawler-ui/src/main/webapp/viewconnection.jsp
@@ -45,6 +45,16 @@
 		}
 	}
 
+	function ClearHistory(connectionName)
+	{
+		if (confirm("<%=Messages.getBodyJavascriptString(pageContext.getRequest().getLocale(),"viewconnection.Thiscommandwillclearallhistoryrelatedto")%> '"+connectionName+"' <%=Messages.getBodyJavascriptString(pageContext.getRequest().getLocale(),"viewconnection.period")%>"))
+		{
+			document.viewconnection.op.value="ClearHistory";
+			document.viewconnection.connname.value=connectionName;
+			document.viewconnection.submit();
+		}
+	}
+
 	//-->
 	</script>
 
@@ -68,6 +78,7 @@
 	IConnectorManager connectorManager = ConnectorManagerFactory.make(threadContext);
 	// Get the connection manager handle
 	IRepositoryConnectionManager connManager = RepositoryConnectionManagerFactory.make(threadContext);
+	IRepositoryConnectorPool repositoryConnectorPool = RepositoryConnectorPoolFactory.make(threadContext);
 	String connectionName = variableContext.getParameter("connname");
 	IRepositoryConnection connection = connManager.load(connectionName);
 	if (connection == null)
@@ -98,7 +109,7 @@
 		String connectionStatus;
 		try
 		{
-			IRepositoryConnector c = RepositoryConnectorFactory.grab(threadContext,className,parameters,maxCount);
+			IRepositoryConnector c = repositoryConnectorPool.grab(connection);
 			if (c == null)
 				connectionStatus = Messages.getString(pageContext.getRequest().getLocale(),"viewconnection.Connectorisnotinstalled");
 			else
@@ -109,7 +120,7 @@
 				}
 				finally
 				{
-					RepositoryConnectorFactory.release(c);
+					repositoryConnectorPool.release(connection,c);
 				}
 			}
 		}
@@ -134,7 +145,10 @@
 				<td class="description" colspan="1"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewconnection.MaxConnectionsColon")%></nobr></td><td class="value" colspan="1"><%=org.apache.manifoldcf.ui.util.Encoder.bodyEscape(Integer.toString(maxCount))%></td>
 			</tr>
 			<tr>
-				<td class="description" colspan="1"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewconnection.AuthorityColon")%></nobr></td><td class="value" colspan="3"><nobr><%=org.apache.manifoldcf.ui.util.Encoder.bodyEscape(authorityName)%></nobr></td>
+				<td class="separator" colspan="4"><hr/></td>
+			</tr>
+			<tr>
+				<td class="description" colspan="1"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewconnection.AuthorityGroupColon")%></nobr></td><td class="value" colspan="3"><nobr><%=org.apache.manifoldcf.ui.util.Encoder.bodyEscape(authorityName)%></nobr></td>
 			</tr>
 			<tr>
 				<td class="separator" colspan="4"><hr/></td>
@@ -191,7 +205,7 @@
 			<tr>
 				<td colspan="4">
 <%
-		RepositoryConnectorFactory.viewConfiguration(threadContext,className,new org.apache.manifoldcf.ui.jsp.JspWrapper(out),pageContext.getRequest().getLocale(),parameters);
+		RepositoryConnectorFactory.viewConfiguration(threadContext,className,new org.apache.manifoldcf.ui.jsp.JspWrapper(out,adminprofile),pageContext.getRequest().getLocale(),parameters);
 %>
 				</td>
 			</tr>
@@ -204,8 +218,14 @@
 			<tr>
 				<td class="separator" colspan="4"><hr/></td>
 			</tr>
-		<tr><td class="message" colspan="4"><a href='<%="viewconnection.jsp?connname="+java.net.URLEncoder.encode(connectionName,"UTF-8")%>' alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"viewconnection.Refresh")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewconnection.Refresh")%></a>&nbsp;<a href='<%="editconnection.jsp?connname="+java.net.URLEncoder.encode(connectionName,"UTF-8")%>' alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"viewconnection.EditThisConnection")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewconnection.Edit")%></a>&nbsp;<a href="javascript:void()" onclick='<%="javascript:Delete(\""+org.apache.manifoldcf.ui.util.Encoder.attributeJavascriptEscape(connectionName)+"\")"%>' alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"viewconnection.Deletethisconnection")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewconnection.Delete")%></a>
-		</td></tr>
+			<tr>
+				<td class="message" colspan="4">
+					<nobr><a href='<%="viewconnection.jsp?connname="+java.net.URLEncoder.encode(connectionName,"UTF-8")%>' alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"viewconnection.Refresh")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewconnection.Refresh")%></a></nobr>
+					<nobr><a href='<%="editconnection.jsp?connname="+java.net.URLEncoder.encode(connectionName,"UTF-8")%>' alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"viewconnection.EditThisConnection")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewconnection.Edit")%></a></nobr>
+					<nobr><a href="javascript:void()" onclick='<%="javascript:Delete(\""+org.apache.manifoldcf.ui.util.Encoder.attributeJavascriptEscape(connectionName)+"\")"%>' alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"viewconnection.Deletethisconnection")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewconnection.Delete")%></a></nobr>
+					<nobr><a href="javascript:void()" onclick='<%="javascript:ClearHistory(\""+org.apache.manifoldcf.ui.util.Encoder.attributeJavascriptEscape(connectionName)+"\")"%>' alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"viewconnection.ClearHistoryAssociatedWithThisConnection")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewconnection.ClearAllRelatedHistory")%></a></nobr>
+				</td>
+			</tr>
 		</table>
 
 <%
diff --git a/framework/crawler-ui/src/main/webapp/viewgroup.jsp b/framework/crawler-ui/src/main/webapp/viewgroup.jsp
new file mode 100644
index 0000000..a937650
--- /dev/null
+++ b/framework/crawler-ui/src/main/webapp/viewgroup.jsp
@@ -0,0 +1,120 @@
+<%@ include file="adminHeaders.jsp" %>
+
+<%
+
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+* 
+* http://www.apache.org/licenses/LICENSE-2.0
+* 
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+%>
+
+<?xml version="1.0" encoding="utf-8"?>
+
+<html xmlns="http://www.w3.org/1999/xhtml">
+<head>
+	<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
+	<link rel="StyleSheet" href="style.css" type="text/css" media="screen"/>
+	<title>
+		<%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewgroup.ApacheManifoldCFViewGroup")%>
+	</title>
+
+	<script type="text/javascript">
+	<!--
+
+	function Delete(groupName)
+	{
+		document.viewgroup.op.value="Delete";
+		document.viewgroup.groupname.value=groupName;
+		document.viewgroup.submit();
+	}
+
+	//-->
+	</script>
+
+</head>
+
+<body class="standardbody">
+
+    <table class="page">
+      <tr><td colspan="2" class="banner"><jsp:include page="banner.jsp" flush="true"/></td></tr>
+      <tr><td class="navigation"><jsp:include page="navigation.jsp" flush="true"/></td>
+       <td class="window">
+	<p class="windowtitle"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewgroup.ViewAuthorityGroup")%></p>
+
+	<form class="standardform" name="viewgroup" action="execute.jsp" method="POST">
+		<input type="hidden" name="op" value="Continue"/>
+		<input type="hidden" name="type" value="group"/>
+		<input type="hidden" name="groupname" value=""/>
+
+<%
+    try
+    {
+	// Get the job manager handle
+	IAuthorityGroupManager manager = AuthorityGroupManagerFactory.make(threadContext);
+	String groupName = variableContext.getParameter("groupname");
+	IAuthorityGroup group = manager.load(groupName);
+	if (group == null)
+	{
+		throw new ManifoldCFException("No such group: "+groupName);
+	}
+	else
+	{
+		String description = group.getDescription();
+		if (description == null)
+			description = "";
+%>
+		<table class="displaytable">
+			<tr>
+				<td class="separator" colspan="4"><hr/></td>
+			</tr>
+			<tr>
+				<td class="description" colspan="1"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewgroup.NameColon")%></nobr></td>
+				<td class="value" colspan="1"><%="<!--group="+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(groupName)+"-->"%><nobr><%=org.apache.manifoldcf.ui.util.Encoder.bodyEscape(groupName)%></nobr></td>
+				<td class="description" colspan="1"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewgroup.DescriptionColon")%></nobr></td>
+				<td class="value" colspan="1"><%=org.apache.manifoldcf.ui.util.Encoder.bodyEscape(description)%></td>
+			</tr>
+			<tr>
+				<td class="separator" colspan="4"><hr/></td>
+			</tr>
+			<tr>
+				<td class="message" colspan="4">
+					<a href='<%="editgroup.jsp?groupname="+java.net.URLEncoder.encode(groupName,"UTF-8")%>' alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"viewgroup.EditThisAuthorityGroup")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewgroup.Edit")%></a>&nbsp;<a href="javascript:void()" onclick='<%="javascript:Delete(\""+org.apache.manifoldcf.ui.util.Encoder.attributeJavascriptEscape(groupName)+"\")"%>' alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"viewgroup.DeleteThisAuthorityGroup")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewgroup.Delete")%></a>
+				</td>
+			</tr>
+		</table>
+
+<%
+	}
+    }
+    catch (ManifoldCFException e)
+    {
+	e.printStackTrace();
+	variableContext.setParameter("text",e.getMessage());
+	variableContext.setParameter("target","listgroups.jsp");
+%>
+	<jsp:forward page="error.jsp"/>
+<%
+    }
+%>
+	</form>
+       </td>
+      </tr>
+    </table>
+
+</body>
+
+</html>
diff --git a/framework/crawler-ui/src/main/webapp/viewjob.jsp b/framework/crawler-ui/src/main/webapp/viewjob.jsp
index 48d383f..3758598 100644
--- a/framework/crawler-ui/src/main/webapp/viewjob.jsp
+++ b/framework/crawler-ui/src/main/webapp/viewjob.jsp
@@ -70,6 +70,10 @@
 	IJobManager manager = JobManagerFactory.make(threadContext);
         IOutputConnectionManager outputManager = OutputConnectionManagerFactory.make(threadContext);
 	IRepositoryConnectionManager connManager = RepositoryConnectionManagerFactory.make(threadContext);
+	
+	IOutputConnectorPool outputConnectorPool = OutputConnectorPoolFactory.make(threadContext);
+	IRepositoryConnectorPool repositoryConnectorPool = RepositoryConnectorPoolFactory.make(threadContext);
+	
 	String jobID = variableContext.getParameter("jobid");
 	IJobDescription job = manager.load(new Long(jobID));
 	if (job == null)
@@ -446,7 +450,7 @@
 					if (srDayOfMonth == null)
 					{
 						if (srDayOfWeek == null && srHourOfDay == null && srMinutesOfHour == null)
-							out.println(" "+Messages.getBodyString(pageContext.getRequest().getLocale(),"viewjob.onthe1stofthemonth"));
+							out.println(" "+Messages.getBodyString(pageContext.getRequest().getLocale(),"viewjob.onanydayofthemonth"));
 					}
 					else
 					{
@@ -631,17 +635,16 @@
 <%
 		if (outputConnection != null)
 		{
-			IOutputConnector outputConnector = OutputConnectorFactory.grab(threadContext,outputConnection.getClassName(),outputConnection.getConfigParams(),
-				outputConnection.getMaxConnections());
+			IOutputConnector outputConnector = outputConnectorPool.grab(outputConnection);
 			if (outputConnector != null)
 			{
 				try
 				{
-					outputConnector.viewSpecification(new org.apache.manifoldcf.ui.jsp.JspWrapper(out),pageContext.getRequest().getLocale(),job.getOutputSpecification());
+					outputConnector.viewSpecification(new org.apache.manifoldcf.ui.jsp.JspWrapper(out,adminprofile),pageContext.getRequest().getLocale(),job.getOutputSpecification());
 				}
 				finally
 				{
-					OutputConnectorFactory.release(outputConnector);
+					outputConnectorPool.release(outputConnection,outputConnector);
 				}
 			}
 		}
@@ -656,18 +659,16 @@
 <%
 		if (connection != null)
 		{
-			IRepositoryConnector repositoryConnector = RepositoryConnectorFactory.grab(threadContext,connection.getClassName(),connection.getConfigParams(),
-
-				connection.getMaxConnections());
+			IRepositoryConnector repositoryConnector = repositoryConnectorPool.grab(connection);
 			if (repositoryConnector != null)
 			{
 				try
 				{
-					repositoryConnector.viewSpecification(new org.apache.manifoldcf.ui.jsp.JspWrapper(out),pageContext.getRequest().getLocale(),job.getSpecification());
+					repositoryConnector.viewSpecification(new org.apache.manifoldcf.ui.jsp.JspWrapper(out,adminprofile),pageContext.getRequest().getLocale(),job.getSpecification());
 				}
 				finally
 				{
-					RepositoryConnectorFactory.release(repositoryConnector);
+					repositoryConnectorPool.release(connection,repositoryConnector);
 				}
 			}
 		}
diff --git a/framework/crawler-ui/src/main/webapp/viewmapper.jsp b/framework/crawler-ui/src/main/webapp/viewmapper.jsp
new file mode 100644
index 0000000..468d1e8
--- /dev/null
+++ b/framework/crawler-ui/src/main/webapp/viewmapper.jsp
@@ -0,0 +1,206 @@
+<%@ include file="adminHeaders.jsp" %>
+
+<%
+
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+* 
+* http://www.apache.org/licenses/LICENSE-2.0
+* 
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+%>
+
+<?xml version="1.0" encoding="utf-8"?>
+
+<html xmlns="http://www.w3.org/1999/xhtml">
+<head>
+	<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
+	<link rel="StyleSheet" href="style.css" type="text/css" media="screen"/>
+	<title>
+		<%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewmapper.ApacheManifoldCFViewMappingConnectionStatus")%>
+	</title>
+
+	<script type="text/javascript">
+	<!--
+
+	function Delete(connectionName)
+	{
+		if (confirm("<%=Messages.getBodyJavascriptString(pageContext.getRequest().getLocale(),"viewmapper.DeleteConnection")%> '"+connectionName+"'<%=Messages.getBodyJavascriptString(pageContext.getRequest().getLocale(),"viewmapper.qmark")%>"))
+		{
+			document.viewconnection.op.value="Delete";
+			document.viewconnection.connname.value=connectionName;
+			document.viewconnection.submit();
+		}
+	}
+
+	//-->
+	</script>
+
+</head>
+
+<body class="standardbody">
+
+    <table class="page">
+      <tr><td colspan="2" class="banner"><jsp:include page="banner.jsp" flush="true"/></td></tr>
+      <tr><td class="navigation"><jsp:include page="navigation.jsp" flush="true"/></td>
+       <td class="window">
+	<p class="windowtitle"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewmapper.ViewMappingConnectionStatus")%></p>
+	<form class="standardform" name="viewconnection" action="execute.jsp" method="POST">
+		<input type="hidden" name="op" value="Continue"/>
+		<input type="hidden" name="type" value="mapper"/>
+		<input type="hidden" name="connname" value=""/>
+
+<%
+    try
+    {
+	IMappingConnectionManager manager = MappingConnectionManagerFactory.make(threadContext);
+	IMappingConnectorManager connectorManager = MappingConnectorManagerFactory.make(threadContext);
+	IMappingConnectorPool mappingConnectorPool = MappingConnectorPoolFactory.make(threadContext);
+	String connectionName = variableContext.getParameter("connname");
+	IMappingConnection connection = manager.load(connectionName);
+	if (connection == null)
+	{
+		throw new ManifoldCFException("No such mapping connection: '"+connectionName+"'");
+	}
+	else
+	{
+		String description = connection.getDescription();
+		if (description == null)
+			description = "";
+		String className = connection.getClassName();
+		String connectorName = connectorManager.getDescription(className);
+		if (connectorName == null)
+			connectorName = className + Messages.getString(pageContext.getRequest().getLocale(),"viewmapper.uninstalled");
+		int maxCount = connection.getMaxConnections();
+		String prereq = connection.getPrerequisiteMapping();
+
+		ConfigParams parameters = connection.getConfigParams();
+
+		// Now, test the connection.
+		String connectionStatus;
+		try
+		{
+			IMappingConnector c = mappingConnectorPool.grab(connection);
+			if (c == null)
+			{
+				connectionStatus = Messages.getString(pageContext.getRequest().getLocale(),"viewmapper.Connectorisnotinstalled");
+			}
+			else
+			{
+				try
+				{
+					connectionStatus = c.check();
+				}
+				finally
+				{
+					mappingConnectorPool.release(connection,c);
+				}
+			}
+		}
+		catch (ManifoldCFException e)
+		{
+			connectionStatus = Messages.getString(pageContext.getRequest().getLocale(),"viewmapper.Threwexception")+" '"+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(e.getMessage())+"'";
+		}
+%>
+		<table class="displaytable">
+			<tr>
+				<td class="separator" colspan="4"><hr/></td>
+			</tr>
+			<tr>
+				<td class="description" colspan="1"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewmapper.NameColon")%></nobr></td>
+				<td class="value" colspan="1"><%="<!--connection="+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(connectionName)+"-->"%><nobr><%=org.apache.manifoldcf.ui.util.Encoder.bodyEscape(connectionName)%></nobr></td>
+				<td class="description" colspan="1"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewmapper.DescriptionColon")%></nobr></td>
+				<td class="value" colspan="1"><%=org.apache.manifoldcf.ui.util.Encoder.bodyEscape(description)%></td>
+			</tr>
+			<tr>
+				<td class="separator" colspan="4"><hr/></td>
+			</tr>
+			<tr>
+				<td class="description" colspan="1"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewmapper.MapperTypeColon")%></nobr></td>
+				<td class="value" colspan="1"><nobr><%=org.apache.manifoldcf.ui.util.Encoder.bodyEscape(connectorName)%></nobr></td>
+				<td class="description" colspan="1"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewmapper.MaxConnectionsColon")%></nobr></td>
+				<td class="value" colspan="1"><%=org.apache.manifoldcf.ui.util.Encoder.bodyEscape(Integer.toString(maxCount))%></td>
+			</tr>
+			<tr>
+				<td class="separator" colspan="4"><hr/></td>
+			</tr>
+			<tr>
+				<td class="description" colspan="1"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewmapper.PrerequisiteUserMappingColon")%></nobr></td>
+				<td class="value" colspan="3">
+<%
+		if (prereq != null)
+		{
+%>
+					<nobr><%=org.apache.manifoldcf.ui.util.Encoder.bodyEscape(prereq)%></nobr><br/>
+<%
+		}
+		else
+		{
+%>
+					<nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewmapper.NoPrerequisites")%></nobr>
+<%
+		}
+%>
+				</td>
+			</tr>
+			<tr>
+				<td class="separator" colspan="4"><hr/></td>
+			</tr>
+			<tr>
+				<td colspan="4">
+<%
+		MappingConnectorFactory.viewConfiguration(threadContext,className,new org.apache.manifoldcf.ui.jsp.JspWrapper(out,adminprofile),pageContext.getRequest().getLocale(),parameters);
+%>
+
+				</td>
+			</tr>
+			<tr>
+				<td class="separator" colspan="4"><hr/></td>
+			</tr>
+			<tr>
+				<td class="description" colspan="1"><nobr><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewmapper.ConnectionStatusColon")%></nobr></td>
+				<td class="value" colspan="3"><%=org.apache.manifoldcf.ui.util.Encoder.bodyEscape(connectionStatus)%></td>
+			</tr>
+			<tr>
+				<td class="separator" colspan="4"><hr/></td>
+			</tr>
+		<tr><td class="message" colspan="4">
+			<nobr><a href='<%="viewmapper.jsp?connname="+java.net.URLEncoder.encode(connectionName,"UTF-8")%>' alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"viewmapper.Refresh")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewmapper.Refresh")%></a></nobr>
+			<nobr><a href='<%="editmapper.jsp?connname="+java.net.URLEncoder.encode(connectionName,"UTF-8")%>' alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"viewmapper.EditThisMappingConnection")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewmapper.Edit")%></a></nobr>
+			<nobr><a href="javascript:void()" onclick='<%="javascript:Delete(\""+org.apache.manifoldcf.ui.util.Encoder.attributeJavascriptEscape(connectionName)+"\")"%>' alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"viewmapper.DeleteThisMappingConnection")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewmapper.Delete")%></a></nobr>
+		</td></tr>
+		</table>
+
+<%
+	}
+    }
+    catch (ManifoldCFException e)
+    {
+	e.printStackTrace();
+	variableContext.setParameter("text",e.getMessage());
+	variableContext.setParameter("target","listmappers.jsp");
+%>
+	<jsp:forward page="error.jsp"/>
+<%
+    }
+%>
+	    </form>
+       </td>
+      </tr>
+    </table>
+
+</body>
+
+</html>
diff --git a/framework/crawler-ui/src/main/webapp/viewoutput.jsp b/framework/crawler-ui/src/main/webapp/viewoutput.jsp
index aaaf5b7..e3d5827 100644
--- a/framework/crawler-ui/src/main/webapp/viewoutput.jsp
+++ b/framework/crawler-ui/src/main/webapp/viewoutput.jsp
@@ -55,6 +55,16 @@
 		}
 	}
 	
+	function RemoveAll(connectionName)
+	{
+		if (confirm("<%=Messages.getBodyJavascriptString(pageContext.getRequest().getLocale(),"viewoutput.Thiscommandwillcause")%> '"+connectionName+"' <%=Messages.getBodyJavascriptString(pageContext.getRequest().getLocale(),"viewoutput.tobeforgotten")%>"))
+		{
+			document.viewconnection.op.value="RemoveAll";
+			document.viewconnection.connname.value=connectionName;
+			document.viewconnection.submit();
+		}
+	}
+
 	//-->
 	</script>
 
@@ -78,6 +88,7 @@
 	IOutputConnectorManager connectorManager = OutputConnectorManagerFactory.make(threadContext);
 	// Get the connection manager handle
 	IOutputConnectionManager connManager = OutputConnectionManagerFactory.make(threadContext);
+	IOutputConnectorPool outputConnectorPool = OutputConnectorPoolFactory.make(threadContext);
 	String connectionName = variableContext.getParameter("connname");
 	IOutputConnection connection = connManager.load(connectionName);
 	if (connection == null)
@@ -104,7 +115,7 @@
 		String connectionStatus;
 		try
 		{
-			IOutputConnector c = OutputConnectorFactory.grab(threadContext,className,parameters,maxCount);
+			IOutputConnector c = outputConnectorPool.grab(connection);
 			if (c == null)
 				connectionStatus = Messages.getString(pageContext.getRequest().getLocale(),"viewoutput.Connectorisnotinstalled");
 			else
@@ -115,12 +126,13 @@
 				}
 				finally
 				{
-					OutputConnectorFactory.release(c);
+					outputConnectorPool.release(connection,c);
 				}
 			}
 		}
 		catch (ManifoldCFException e)
 		{
+			e.printStackTrace();
 			connectionStatus = Messages.getString(pageContext.getRequest().getLocale(),"viewoutput.Threwexception")+" '"+org.apache.manifoldcf.ui.util.Encoder.bodyEscape(e.getMessage())+"'";
 		}
 %>
@@ -145,7 +157,7 @@
 			<tr>
 				<td colspan="4">
 <%
-		OutputConnectorFactory.viewConfiguration(threadContext,className,new org.apache.manifoldcf.ui.jsp.JspWrapper(out),pageContext.getRequest().getLocale(),parameters);
+		OutputConnectorFactory.viewConfiguration(threadContext,className,new org.apache.manifoldcf.ui.jsp.JspWrapper(out,adminprofile),pageContext.getRequest().getLocale(),parameters);
 %>
 				</td>
 			</tr>
@@ -158,12 +170,15 @@
 			<tr>
 				<td class="separator" colspan="4"><hr/></td>
 			</tr>
-		<tr>
-			<td class="message" colspan="4">
-				<nobr><a href='<%="viewoutput.jsp?connname="+java.net.URLEncoder.encode(connectionName,"UTF-8")%>' alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"viewoutput.Refresh")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewoutput.Refresh")%></a>&nbsp;<a href='<%="editoutput.jsp?connname="+java.net.URLEncoder.encode(connectionName,"UTF-8")%>' alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"viewoutput.EditThisOutputConnection")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewoutput.Edit")%></a>&nbsp;<a href="javascript:void()" onclick='<%="javascript:Delete(\""+org.apache.manifoldcf.ui.util.Encoder.attributeJavascriptEscape(connectionName)+"\")"%>' alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"viewoutput.DeleteThisOutputConnection")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewoutput.Delete")%></a></nobr><br/>
-				<nobr><a href="javascript:void()" onclick='<%="javascript:ReingestAll(\""+org.apache.manifoldcf.ui.util.Encoder.attributeJavascriptEscape(connectionName)+"\")"%>' alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"viewoutput.ReIngestAllDocumentsAssociatedWithThisOutputConnection")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewoutput.ReIngestAllAssociatedDocuments")%></a></nobr>
-			</td>
-		</tr>
+			<tr>
+				<td class="message" colspan="4">
+					<nobr><a href='<%="viewoutput.jsp?connname="+java.net.URLEncoder.encode(connectionName,"UTF-8")%>' alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"viewoutput.Refresh")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewoutput.Refresh")%></a></nobr>
+					<nobr><a href='<%="editoutput.jsp?connname="+java.net.URLEncoder.encode(connectionName,"UTF-8")%>' alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"viewoutput.EditThisOutputConnection")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewoutput.Edit")%></a></nobr>
+					<nobr><a href="javascript:void()" onclick='<%="javascript:Delete(\""+org.apache.manifoldcf.ui.util.Encoder.attributeJavascriptEscape(connectionName)+"\")"%>' alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"viewoutput.DeleteThisOutputConnection")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewoutput.Delete")%></a></nobr>
+					<nobr><a href="javascript:void()" onclick='<%="javascript:ReingestAll(\""+org.apache.manifoldcf.ui.util.Encoder.attributeJavascriptEscape(connectionName)+"\")"%>' alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"viewoutput.ReIngestAllDocumentsAssociatedWithThisOutputConnection")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewoutput.ReIngestAllAssociatedDocuments")%></a></nobr>
+					<nobr><a href="javascript:void()" onclick='<%="javascript:RemoveAll(\""+org.apache.manifoldcf.ui.util.Encoder.attributeJavascriptEscape(connectionName)+"\")"%>' alt="<%=Messages.getAttributeString(pageContext.getRequest().getLocale(),"viewoutput.RemoveAllDocumentsAssociatedWithThisOutputConnection")%>"><%=Messages.getBodyString(pageContext.getRequest().getLocale(),"viewoutput.RemoveAllAssociatedDocuments")%></a></nobr>
+				</td>
+			</tr>
 		</table>
 
 <%
diff --git a/framework/example-common/connectors.xml b/framework/example-common/connectors.xml
index 3ce32cf..887890e 100644
--- a/framework/example-common/connectors.xml
+++ b/framework/example-common/connectors.xml
@@ -16,13 +16,23 @@
  limitations under the License.
 -->
 
-<!-- The connectors registry file permits registration of connectors upon the
-      startup of the jetty-based LCF example.  In a real installation, this registration
+<!-- The connectors registry file permits registration of domains and connectors upon the
+      startup of the jetty-based ManifoldCF example.  In a real installation, this registration
       step would be done ideally just once, but in the example the connectors
       are all reregistered on every startup.
 -->
 <connectors>
+    <!-- Add any authorization domains here -->
+    <!-- authorizationdomain domain="AD" name="ActiveDirectory"/-->
+    <!-- authorizationdomain domain="SHP" name="SharePoint"/-->
+    <!-- authorizationdomain domain="FB" name="FaceBook"/-->
+
     <!-- Add your output connectors here -->
+    
+    <!-- Add your mapping connectors here -->
+    
     <!-- Add your authority connectors here -->
+    
     <!-- Add your repository connectors here -->
+    
 </connectors>
diff --git a/framework/example-multiprocess-common/start-agents-2.bat b/framework/example-multiprocess-common/start-agents-2.bat
new file mode 100644
index 0000000..b3e0139
--- /dev/null
+++ b/framework/example-multiprocess-common/start-agents-2.bat
@@ -0,0 +1,31 @@
+@echo off
+rem Licensed to the Apache Software Foundation (ASF) under one or more
+rem contributor license agreements.  See the NOTICE file distributed with
+rem this work for additional information regarding copyright ownership.
+rem The ASF licenses this file to You under the Apache License, Version 2.0
+rem (the "License"); you may not use this file except in compliance with
+rem the License.  You may obtain a copy of the License at
+rem
+rem     http://www.apache.org/licenses/LICENSE-2.0
+rem
+rem Unless required by applicable law or agreed to in writing, software
+rem distributed under the License is distributed on an "AS IS" BASIS,
+rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+rem See the License for the specific language governing permissions and
+rem limitations under the License.
+
+rem check that JAVA_HOME is set, and that the current directory is correct
+if not exist "%JAVA_HOME%\bin\java.exe" goto nojavahome
+if not exist ".\properties.xml" goto nolcfhome
+rem set MCF_HOME
+set MCF_HOME=%CD%
+rem invoke the AgentRun command
+cmd /c "processes\executecommand.bat -Dorg.apache.manifoldcf.processid=B org.apache.manifoldcf.agents.AgentRun"
+goto done
+:nojavahome
+echo Environment variable JAVA_HOME is not set properly.
+goto done
+:nolcfhome
+echo Current working directory does not contain a properties.xml file.
+goto done
+:done
diff --git a/framework/example-multiprocess-common/start-agents-2.sh b/framework/example-multiprocess-common/start-agents-2.sh
new file mode 100755
index 0000000..f7ad053
--- /dev/null
+++ b/framework/example-multiprocess-common/start-agents-2.sh
@@ -0,0 +1,34 @@
+#!/bin/bash -e
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+#Make sure environment variables are properly set
+if [ -e "$JAVA_HOME"/bin/java ] ; then
+    if [ -f ./properties.xml ] ; then
+        # Set the MCF_HOME variable
+        export MCF_HOME=$PWD
+        processes/executecommand.sh -Dorg.apache.manifoldcf.processid=B org.apache.manifoldcf.agents.AgentRun
+        exit $?
+        
+    else
+        echo "Working directory contains no properties.xml file." 1>&2
+        exit 1
+    fi
+    
+else
+    echo "Environment variable JAVA_HOME is not properly set." 1>&2
+    exit 1
+fi
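Taken together with the patched start-agents scripts below, a two-agents-process setup can be launched roughly as follows; the example directory name is an assumption:

    cd example-multiprocess-file
    ./start-agents.sh &      # agents process A
    ./start-agents-2.sh &    # agents process B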
diff --git a/framework/example-multiprocess-common/start-agents.bat b/framework/example-multiprocess-common/start-agents.bat
index eb251d0..acef6c5 100644
--- a/framework/example-multiprocess-common/start-agents.bat
+++ b/framework/example-multiprocess-common/start-agents.bat
@@ -20,7 +20,7 @@
 rem set MCF_HOME
 set MCF_HOME=%CD%
 rem invoke the AgentRun command
-cmd /c "processes\executecommand.bat org.apache.manifoldcf.agents.AgentRun"
+cmd /c "processes\executecommand.bat -Dorg.apache.manifoldcf.processid=A org.apache.manifoldcf.agents.AgentRun"
 goto done
 :nojavahome
 echo Environment variable JAVA_HOME is not set properly.
diff --git a/framework/example-multiprocess-common/start-agents.sh b/framework/example-multiprocess-common/start-agents.sh
index 43dfcd5..1c92b5b 100755
--- a/framework/example-multiprocess-common/start-agents.sh
+++ b/framework/example-multiprocess-common/start-agents.sh
@@ -20,7 +20,7 @@
     if [ -f ./properties.xml ] ; then
         # Set the MCF_HOME variable
         export MCF_HOME=$PWD
-        processes/executecommand.sh org.apache.manifoldcf.agents.AgentRun
+        processes/executecommand.sh -Dorg.apache.manifoldcf.processid=A org.apache.manifoldcf.agents.AgentRun
         exit $?
         
     else
diff --git a/framework/example-multiprocess-common/lock-clean.bat b/framework/example-multiprocess-file-common/lock-clean.bat
similarity index 100%
rename from framework/example-multiprocess-common/lock-clean.bat
rename to framework/example-multiprocess-file-common/lock-clean.bat
diff --git a/framework/example-multiprocess-common/lock-clean.sh b/framework/example-multiprocess-file-common/lock-clean.sh
similarity index 100%
rename from framework/example-multiprocess-common/lock-clean.sh
rename to framework/example-multiprocess-file-common/lock-clean.sh
diff --git a/framework/example-multiprocess-proprietary/properties.xml b/framework/example-multiprocess-file-proprietary/properties.xml
similarity index 100%
rename from framework/example-multiprocess-proprietary/properties.xml
rename to framework/example-multiprocess-file-proprietary/properties.xml
diff --git a/framework/example-multiprocess/properties.xml b/framework/example-multiprocess-file/properties.xml
similarity index 100%
rename from framework/example-multiprocess/properties.xml
rename to framework/example-multiprocess-file/properties.xml
diff --git a/framework/example-multiprocess-zk-common/properties-global.xml b/framework/example-multiprocess-zk-common/properties-global.xml
new file mode 100644
index 0000000..dfddb35
--- /dev/null
+++ b/framework/example-multiprocess-zk-common/properties-global.xml
@@ -0,0 +1,32 @@
+<?xml version="1.0" encoding="UTF-8" ?>
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<!-- This file contains ONLY global (shared) configuration, for use on systems using
+    ZooKeeper to coordinate ManifoldCF processes. -->
+<configuration>
+  <!-- Select HSQLDB as the database implementation, and specify multiprocess access -->
+  <property name="org.apache.manifoldcf.databaseimplementationclass" value="org.apache.manifoldcf.core.database.DBInterfaceHSQLDB"/>
+  <property name="org.apache.manifoldcf.hsqldbdatabaseprotocol" value="hsql"/>
+  <property name="org.apache.manifoldcf.hsqldbdatabaseserver" value="localhost"/>
+  <property name="org.apache.manifoldcf.hsqldbdatabaseinstance" value="xdb"/>
+  <property name="org.apache.manifoldcf.dbsuperusername" value="sa"/>
+  <property name="org.apache.manifoldcf.dbsuperuserpassword" value=""/>
+  <property name="org.apache.manifoldcf.database.maxhandles" value="100"/>
+  <property name="org.apache.manifoldcf.crawler.threads" value="50"/>
+  <!-- Any additional global properties go here -->
+</configuration>
diff --git a/framework/example-multiprocess-zk-common/runzookeeper.bat b/framework/example-multiprocess-zk-common/runzookeeper.bat
new file mode 100644
index 0000000..39a9d12
--- /dev/null
+++ b/framework/example-multiprocess-zk-common/runzookeeper.bat
@@ -0,0 +1,31 @@
+@echo off
+rem Licensed to the Apache Software Foundation (ASF) under one or more
+rem contributor license agreements.  See the NOTICE file distributed with
+rem this work for additional information regarding copyright ownership.
+rem The ASF licenses this file to You under the Apache License, Version 2.0
+rem (the "License"); you may not use this file except in compliance with
+rem the License.  You may obtain a copy of the License at
+rem
+rem     http://www.apache.org/licenses/LICENSE-2.0
+rem
+rem Unless required by applicable law or agreed to in writing, software
+rem distributed under the License is distributed on an "AS IS" BASIS,
+rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+rem See the License for the specific language governing permissions and
+rem limitations under the License.
+
+rem check that JAVA_HOME is set, and that the current directory is correct
+if not exist "%JAVA_HOME%\bin\java.exe" goto nojavahome
+if not exist ".\properties.xml" goto nolcfhome
+rem set MCF_HOME
+set MCF_HOME=%CD%
+rem invoke the ZooKeeper server (QuorumPeerMain) with the example configuration
+cmd /c "processes\executecommand.bat org.apache.zookeeper.server.quorum.QuorumPeerMain zookeeper.cfg"
+goto done
+:nojavahome
+echo Environment variable JAVA_HOME is not set properly.
+goto done
+:nolcfhome
+echo Current working directory does not contain a properties.xml file.
+goto done
+:done
diff --git a/framework/example-multiprocess-zk-common/runzookeeper.sh b/framework/example-multiprocess-zk-common/runzookeeper.sh
new file mode 100755
index 0000000..0f45356
--- /dev/null
+++ b/framework/example-multiprocess-zk-common/runzookeeper.sh
@@ -0,0 +1,34 @@
+#!/bin/bash -e
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+#Make sure environment variables are properly set
+if [ -e "$JAVA_HOME"/bin/java ] ; then
+    if [ -f ./properties.xml ] ; then
+        # Set the MCF_HOME variable
+        export MCF_HOME=$PWD
+        processes/executecommand.sh org.apache.zookeeper.server.quorum.QuorumPeerMain zookeeper.cfg
+        exit $?
+        
+    else
+        echo "Working directory contains no properties.xml file." 1>&2
+        exit 1
+    fi
+    
+else
+    echo "Environment variable JAVA_HOME is not properly set." 1>&2
+    exit 1
+fi
diff --git a/framework/example-multiprocess-zk-common/setglobalproperties.bat b/framework/example-multiprocess-zk-common/setglobalproperties.bat
new file mode 100644
index 0000000..34d403f
--- /dev/null
+++ b/framework/example-multiprocess-zk-common/setglobalproperties.bat
@@ -0,0 +1,31 @@
+@echo off
+rem Licensed to the Apache Software Foundation (ASF) under one or more
+rem contributor license agreements.  See the NOTICE file distributed with
+rem this work for additional information regarding copyright ownership.
+rem The ASF licenses this file to You under the Apache License, Version 2.0
+rem (the "License"); you may not use this file except in compliance with
+rem the License.  You may obtain a copy of the License at
+rem
+rem     http://www.apache.org/licenses/LICENSE-2.0
+rem
+rem Unless required by applicable law or agreed to in writing, software
+rem distributed under the License is distributed on an "AS IS" BASIS,
+rem WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+rem See the License for the specific language governing permissions and
+rem limitations under the License.
+
+rem check that JAVA_HOME is set, and that the current directory is correct
+if not exist "%JAVA_HOME%\bin\java.exe" goto nojavahome
+if not exist ".\properties.xml" goto nolcfhome
+rem set MCF_HOME
+set MCF_HOME=%CD%
+rem invoke the ZooKeeperLockManager command to load the global properties into ZooKeeper
+cmd /c "processes\executecommand.bat org.apache.manifoldcf.core.lockmanager.ZooKeeperLockManager properties-global.xml"
+goto done
+:nojavahome
+echo Environment variable JAVA_HOME is not set properly.
+goto done
+:nolcfhome
+echo Current working directory does not contain a properties.xml file.
+goto done
+:done
diff --git a/framework/example-multiprocess-zk-common/setglobalproperties.sh b/framework/example-multiprocess-zk-common/setglobalproperties.sh
new file mode 100755
index 0000000..f5b7ed9
--- /dev/null
+++ b/framework/example-multiprocess-zk-common/setglobalproperties.sh
@@ -0,0 +1,34 @@
+#!/bin/bash -e
+
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+#Make sure environment variables are properly set
+if [ -e "$JAVA_HOME"/bin/java ] ; then
+    if [ -f ./properties.xml ] ; then
+        # Set the MCF_HOME variable
+        export MCF_HOME=$PWD
+        processes/executecommand.sh org.apache.manifoldcf.core.lockmanager.ZooKeeperLockManager properties-global.xml
+        exit $?
+        
+    else
+        echo "Working directory contains no properties.xml file." 1>&2
+        exit 1
+    fi
+    
+else
+    echo "Environment variable JAVA_HOME is not properly set." 1>&2
+    exit 1
+fi
diff --git a/framework/example-multiprocess-zk-common/zookeeper.cfg b/framework/example-multiprocess-zk-common/zookeeper.cfg
new file mode 100644
index 0000000..229eda6
--- /dev/null
+++ b/framework/example-multiprocess-zk-common/zookeeper.cfg
@@ -0,0 +1,18 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+tickTime=2000
+dataDir=zookeeper
+clientPort=8349
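Note that the clientPort configured here must agree with the ZooKeeper connect string used by the ManifoldCF processes; the zk example properties.xml files added later in this change point at the same port:

    clientPort=8349                                                  (zookeeper.cfg)
    org.apache.manifoldcf.zookeeper.connectstring = localhost:8349   (properties.xml)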
diff --git a/framework/example-multiprocess-zk-proprietary/properties.xml b/framework/example-multiprocess-zk-proprietary/properties.xml
new file mode 100644
index 0000000..90eca89
--- /dev/null
+++ b/framework/example-multiprocess-zk-proprietary/properties.xml
@@ -0,0 +1,40 @@
+<?xml version="1.0" encoding="UTF-8" ?>
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<!-- This configuration file contains only non-global configuration information,
+    which is basically limited to file paths and ZooKeeper configuration -->
+<configuration>
+  <!-- Point to the wars and configure Jetty -->
+  <property name="org.apache.manifoldcf.crawleruiwarpath" value="../web-proprietary/war/mcf-crawler-ui.war"/>
+  <property name="org.apache.manifoldcf.authorityservicewarpath" value="../web-proprietary/war/mcf-authority-service.war"/>
+  <property name="org.apache.manifoldcf.apiservicewarpath" value="../web-proprietary/war/mcf-api-service.war"/>
+  <property name="org.apache.manifoldcf.usejettyparentclassloader" value="false"/>
+  <property name="org.apache.manifoldcf.jettyport" value="8345"/>
+  <!-- ZooKeeper lock manager configuration -->
+  <property name="org.apache.manifoldcf.lockmanagerclass" value="org.apache.manifoldcf.core.lockmanager.ZooKeeperLockManager"/>
+  <property name="org.apache.manifoldcf.zookeeper.connectstring" value="localhost:8349"/>
+  <property name="org.apache.manifoldcf.zookeeper.sessiontimeout" value="2000"/>
+  <!-- Point to a specific (common) logging file -->
+  <property name="org.apache.manifoldcf.logconfigfile" value="./logging.ini"/>
+  <!-- Specify the connectors to be loaded -->
+  <property name="org.apache.manifoldcf.connectorsconfigurationfile" value="../connectors-proprietary.xml"/>
+  <!-- Tell MCF where to find the connector jars -->
+  <libdir path="../connector-lib"/>
+  <libdir path="../connector-lib-proprietary"/>
+  <!-- Any additional local properties go here -->
+</configuration>
diff --git a/framework/example-multiprocess-zk/properties.xml b/framework/example-multiprocess-zk/properties.xml
new file mode 100644
index 0000000..2e7d36f
--- /dev/null
+++ b/framework/example-multiprocess-zk/properties.xml
@@ -0,0 +1,40 @@
+<?xml version="1.0" encoding="UTF-8" ?>
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<!-- This configuration file contains only non-global configuration information,
+    which is basically limited to file paths and ZooKeeper configuration -->
+<configuration>
+  <!-- Point to the wars and configure Jetty -->
+  <property name="org.apache.manifoldcf.crawleruiwarpath" value="../web/war/mcf-crawler-ui.war"/>
+  <property name="org.apache.manifoldcf.authorityservicewarpath" value="../web/war/mcf-authority-service.war"/>
+  <property name="org.apache.manifoldcf.apiservicewarpath" value="../web/war/mcf-api-service.war"/>
+  <property name="org.apache.manifoldcf.usejettyparentclassloader" value="false"/>
+  <property name="org.apache.manifoldcf.jettyport" value="8345"/>
+  <!-- ZooKeeper lock manager configuration -->
+  <property name="org.apache.manifoldcf.lockmanagerclass" value="org.apache.manifoldcf.core.lockmanager.ZooKeeperLockManager"/>
+  <property name="org.apache.manifoldcf.zookeeper.connectstring" value="localhost:8349"/>
+  <property name="org.apache.manifoldcf.zookeeper.sessiontimeout" value="2000"/>
+  <!-- Point to a specific (common) logging file -->
+  <property name="org.apache.manifoldcf.logconfigfile" value="./logging.ini"/>
+  <!-- Specify the connectors to be loaded -->
+  <property name="org.apache.manifoldcf.connectorsconfigurationfile" value="../connectors.xml"/>
+  <!-- Tell MCF where to find the connector jars -->
+  <libdir path="../connector-lib"/>
+  <libdir path="../connector-lib-proprietary"/>
+  <!-- Any additional local properties go here -->
+</configuration>
diff --git a/framework/example-singleprocess-proprietary/properties.xml b/framework/example-singleprocess-proprietary/properties.xml
index 427c9f4..fc95fac 100644
--- a/framework/example-singleprocess-proprietary/properties.xml
+++ b/framework/example-singleprocess-proprietary/properties.xml
@@ -30,6 +30,7 @@
   <property name="org.apache.manifoldcf.derbydatabasepath" value="."/>
   <property name="org.apache.manifoldcf.database.maxhandles" value="100"/>
   <property name="org.apache.manifoldcf.crawler.threads" value="50"/>
+  <property name="org.apache.manifoldcf.crawler.historycleanupinterval" value="2592000000"/>
   <!-- Point to a specific logging file -->
   <property name="org.apache.manifoldcf.logconfigfile" value="./logging.ini"/>
   <!-- Specify the connectors to be loaded -->
diff --git a/framework/example-singleprocess/properties.xml b/framework/example-singleprocess/properties.xml
index 9f11a73..52e451c 100644
--- a/framework/example-singleprocess/properties.xml
+++ b/framework/example-singleprocess/properties.xml
@@ -30,6 +30,7 @@
   <property name="org.apache.manifoldcf.derbydatabasepath" value="."/>
   <property name="org.apache.manifoldcf.database.maxhandles" value="100"/>
   <property name="org.apache.manifoldcf.crawler.threads" value="50"/>
+  <property name="org.apache.manifoldcf.crawler.historycleanupinterval" value="2592000000"/>
   <!-- Point to a specific logging file -->
   <property name="org.apache.manifoldcf.logconfigfile" value="./logging.ini"/>
   <!-- Specify the connectors to be loaded -->
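For reference, the new org.apache.manifoldcf.crawler.historycleanupinterval value of 2592000000 milliseconds works out to 30 days (30 × 24 × 60 × 60 × 1000 ms), so in both single-process examples repository history older than roughly a month becomes eligible for cleanup.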
diff --git a/framework/jetty-runner/pom.xml b/framework/jetty-runner/pom.xml
index adc5c1c..30ebfe8 100644
--- a/framework/jetty-runner/pom.xml
+++ b/framework/jetty-runner/pom.xml
@@ -20,7 +20,7 @@
 	<parent>
 		<groupId>org.apache.manifoldcf</groupId>
 		<artifactId>mcf-framework</artifactId>
-		<version>1.2-SNAPSHOT</version>
+		<version>1.5-SNAPSHOT</version>
 	</parent>
 	<modelVersion>4.0.0</modelVersion>
 
@@ -338,7 +338,7 @@
     <dependency>
       <groupId>org.apache.httpcomponents</groupId>
       <artifactId>httpclient</artifactId>
-      <version>${httpcomponent.version}</version>
+      <version>${httpcomponent.httpclient.version}</version>
       <scope>runtime</scope>
     </dependency>
     <dependency>
diff --git a/framework/jetty-runner/src/main/java/org/apache/manifoldcf/jettyrunner/ManifoldCFCombinedJettyRunner.java b/framework/jetty-runner/src/main/java/org/apache/manifoldcf/jettyrunner/ManifoldCFCombinedJettyRunner.java
index 6ae7174..a33c7df 100644
--- a/framework/jetty-runner/src/main/java/org/apache/manifoldcf/jettyrunner/ManifoldCFCombinedJettyRunner.java
+++ b/framework/jetty-runner/src/main/java/org/apache/manifoldcf/jettyrunner/ManifoldCFCombinedJettyRunner.java
@@ -133,8 +133,8 @@
     	System.setProperty(ManifoldCF.lcfConfigFileProperty,"./properties.xml");
     try
     {
-      ManifoldCF.initializeEnvironment();
       IThreadContext tc = ThreadContextFactory.make();
+      ManifoldCF.initializeEnvironment(tc);
 
       // Grab the parameters which locate the wars and describe how we work with Jetty
       File combinedWarPath = ManifoldCF.getFileProperty(combinedWarPathProperty);
diff --git a/framework/jetty-runner/src/main/java/org/apache/manifoldcf/jettyrunner/ManifoldCFJettyRunner.java b/framework/jetty-runner/src/main/java/org/apache/manifoldcf/jettyrunner/ManifoldCFJettyRunner.java
index 887f230..dafd709 100644
--- a/framework/jetty-runner/src/main/java/org/apache/manifoldcf/jettyrunner/ManifoldCFJettyRunner.java
+++ b/framework/jetty-runner/src/main/java/org/apache/manifoldcf/jettyrunner/ManifoldCFJettyRunner.java
@@ -21,6 +21,7 @@
 import org.apache.manifoldcf.core.interfaces.*;
 import org.apache.manifoldcf.crawler.system.*;
 import org.apache.manifoldcf.crawler.*;
+import org.apache.manifoldcf.agents.system.AgentsDaemon;
 
 import java.io.IOException;
 import java.io.InputStream;
@@ -50,8 +51,6 @@
   public static final String useJettyParentClassLoaderProperty = "org.apache.manifoldcf.usejettyparentclassloader";
   public static final String jettyPortProperty = "org.apache.manifoldcf.jettyport";
   
-  public static final String agentShutdownSignal = org.apache.manifoldcf.agents.AgentRun.agentShutdownSignal;
-  
   protected Server server;
   
   public ManifoldCFJettyRunner( int port, String crawlerWarPath, String authorityServiceWarPath, String apiWarPath, boolean useParentLoader )
@@ -138,26 +137,11 @@
   public static void runAgents(IThreadContext tc)
     throws ManifoldCFException
   {
-    ILockManager lockManager = LockManagerFactory.make(tc);
-
-    while (true)
-    {
-      // Any shutdown signal yet?
-      if (lockManager.checkGlobalFlag(agentShutdownSignal))
-        break;
-          
-      // Start whatever agents need to be started
-      ManifoldCF.startAgents(tc);
-
-      try
-      {
-        ManifoldCF.sleep(5000);
-      }
-      catch (InterruptedException e)
-      {
-        break;
-      }
-    }
+    String processID = ManifoldCF.getProcessID();
+    // Do this so we don't have to call stopAgents() ourselves.
+    AgentsDaemon ad = new AgentsDaemon(processID);
+    ad.registerAgentsShutdownHook(tc);
+    ad.runAgents(tc);
   }
 
   /**
@@ -177,10 +161,10 @@
     	System.setProperty(ManifoldCF.lcfConfigFileProperty,"./properties.xml");
     try
     {
-      ManifoldCF.initializeEnvironment();
       IThreadContext tc = ThreadContextFactory.make();
+      ManifoldCF.initializeEnvironment(tc);
 
-      // Grab the parameters which locate the wars and describe how we work with Jetty
+      // Grab the parameters which locate the wars and describe how we work with Jetty.  These are not shared.
       File crawlerWarPath = ManifoldCF.getFileProperty(crawlerUIWarPathProperty);
       File authorityserviceWarPath = ManifoldCF.getFileProperty(authorityServiceWarPathProperty);
       File apiWarPath = ManifoldCF.getFileProperty(apiServiceWarPathProperty);
@@ -216,8 +200,7 @@
       if (useParentClassLoader)
       {
         // Clear the agents shutdown signal.
-        ILockManager lockManager = LockManagerFactory.make(tc);
-        lockManager.clearGlobalFlag(agentShutdownSignal);
+        AgentsDaemon.clearAgentsShutdownSignal(tc);
         
         // Do the basic initialization of the database and its schema
         ManifoldCF.createSystemDatabase(tc);
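The rewritten runAgents() above hands agent lifecycle management to AgentsDaemon instead of looping on the shutdown flag itself. A minimal sketch of the same calling pattern, assuming the environment is initialized the way main() does it:

    // Sketch only: mirrors the calls introduced in runAgents() above.
    IThreadContext tc = ThreadContextFactory.make();
    ManifoldCF.initializeEnvironment(tc);
    AgentsDaemon daemon = new AgentsDaemon(ManifoldCF.getProcessID());
    daemon.registerAgentsShutdownHook(tc);  // registers cleanup so stopAgents() need not be called explicitly
    daemon.runAgents(tc);                   // runs the agents loop for this process ID until shut down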
diff --git a/framework/pom.xml b/framework/pom.xml
index 01abb9f..6499537 100644
--- a/framework/pom.xml
+++ b/framework/pom.xml
@@ -20,13 +20,13 @@
   <parent>
     <groupId>org.apache.manifoldcf</groupId>
     <artifactId>mcf-parent</artifactId>
-    <version>1.2-SNAPSHOT</version>
+    <version>1.5-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
 
   <groupId>org.apache.manifoldcf</groupId>
   <artifactId>mcf-framework</artifactId>
-  <version>1.2-SNAPSHOT</version>
+  <version>1.5-SNAPSHOT</version>
 
   <name>ManifoldCF - Framework</name>
   <packaging>pom</packaging>
diff --git a/framework/pull-agent/pom.xml b/framework/pull-agent/pom.xml
index a6b6f6c..26b2c27 100644
--- a/framework/pull-agent/pom.xml
+++ b/framework/pull-agent/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.manifoldcf</groupId>
     <artifactId>mcf-framework</artifactId>
-    <version>1.2-SNAPSHOT</version>
+    <version>1.5-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
 
@@ -136,7 +136,7 @@
     <dependency>
       <groupId>org.apache.httpcomponents</groupId>
       <artifactId>httpclient</artifactId>
-      <version>${httpcomponent.version}</version>
+      <version>${httpcomponent.httpclient.version}</version>
       <scope>test</scope>
     </dependency>
     <dependency>
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/BaseAuthoritiesInitializationCommand.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/BaseAuthoritiesInitializationCommand.java
index 7d3ec78..3f3fb56 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/BaseAuthoritiesInitializationCommand.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/BaseAuthoritiesInitializationCommand.java
@@ -34,8 +34,8 @@
 {
   public void execute() throws ManifoldCFException
   {
-    ManifoldCF.initializeEnvironment();
     IThreadContext tc = ThreadContextFactory.make();
+    ManifoldCF.initializeEnvironment(tc);
     IAuthorityConnectorManager mgr = AuthorityConnectorManagerFactory.make(tc);
 
     doExecute(mgr);
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/BaseDomainsInitializationCommand.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/BaseDomainsInitializationCommand.java
new file mode 100644
index 0000000..70855dc
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/BaseDomainsInitializationCommand.java
@@ -0,0 +1,45 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+
+package org.apache.manifoldcf.authorities;
+
+import org.apache.manifoldcf.authorities.interfaces.AuthorizationDomainManagerFactory;
+import org.apache.manifoldcf.authorities.interfaces.IAuthorizationDomainManager;
+import org.apache.manifoldcf.authorities.system.ManifoldCF;
+import org.apache.manifoldcf.core.InitializationCommand;
+import org.apache.manifoldcf.core.interfaces.IThreadContext;
+import org.apache.manifoldcf.core.interfaces.ManifoldCFException;
+import org.apache.manifoldcf.core.interfaces.ThreadContextFactory;
+
+/**
+ * @author Jettro Coenradie
+ */
+public abstract class BaseDomainsInitializationCommand implements InitializationCommand
+{
+  public void execute() throws ManifoldCFException
+  {
+    IThreadContext tc = ThreadContextFactory.make();
+    ManifoldCF.initializeEnvironment(tc);
+    IAuthorizationDomainManager mgr = AuthorizationDomainManagerFactory.make(tc);
+
+    doExecute(mgr);
+  }
+
+  protected abstract void doExecute(IAuthorizationDomainManager mgr) throws ManifoldCFException;
+}
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/BaseMappersInitializationCommand.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/BaseMappersInitializationCommand.java
new file mode 100644
index 0000000..66fd4e5
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/BaseMappersInitializationCommand.java
@@ -0,0 +1,45 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+
+package org.apache.manifoldcf.authorities;
+
+import org.apache.manifoldcf.authorities.interfaces.MappingConnectorManagerFactory;
+import org.apache.manifoldcf.authorities.interfaces.IMappingConnectorManager;
+import org.apache.manifoldcf.authorities.system.ManifoldCF;
+import org.apache.manifoldcf.core.InitializationCommand;
+import org.apache.manifoldcf.core.interfaces.IThreadContext;
+import org.apache.manifoldcf.core.interfaces.ManifoldCFException;
+import org.apache.manifoldcf.core.interfaces.ThreadContextFactory;
+
+/**
+ * @author Jettro Coenradie
+ */
+public abstract class BaseMappersInitializationCommand implements InitializationCommand
+{
+  public void execute() throws ManifoldCFException
+  {
+    IThreadContext tc = ThreadContextFactory.make();
+    ManifoldCF.initializeEnvironment(tc);
+    IMappingConnectorManager mgr = MappingConnectorManagerFactory.make(tc);
+
+    doExecute(mgr);
+  }
+
+  protected abstract void doExecute(IMappingConnectorManager mgr) throws ManifoldCFException;
+}
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/ChangeAuthSpec.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/ChangeAuthSpec.java
index f0d6be0..66e7c4a 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/ChangeAuthSpec.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/ChangeAuthSpec.java
@@ -48,8 +48,8 @@
 
                 try
                 {
-                        ManifoldCF.initializeEnvironment();
                         IThreadContext tc = ThreadContextFactory.make();
+                        ManifoldCF.initializeEnvironment(tc);
                         IAuthorityConnectionManager connManager = AuthorityConnectionManagerFactory.make(tc);
                         IAuthorityConnection conn = connManager.load(connectionName);
                         if (conn == null)
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/CheckAll.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/CheckAll.java
index a0ecb71..bc2ae41 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/CheckAll.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/CheckAll.java
@@ -44,8 +44,9 @@
 
     try
     {
-      ManifoldCF.initializeEnvironment();
       IThreadContext tc = ThreadContextFactory.make();
+      ManifoldCF.initializeEnvironment(tc);
+      IAuthorityConnectorPool authorityConnectorPool = AuthorityConnectorPoolFactory.make(tc);
       // Now, get a list of the authority connections
       IAuthorityConnectionManager mgr = AuthorityConnectionManagerFactory.make(tc);
       IAuthorityConnection[] connections = mgr.getAllConnections();
@@ -70,7 +71,7 @@
         String connectionStatus;
         try
         {
-          IAuthorityConnector c = AuthorityConnectorFactory.grab(tc,className,parameters,maxCount);
+          IAuthorityConnector c = authorityConnectorPool.grab(connection);
           if (c != null)
           {
             try
@@ -79,7 +80,7 @@
             }
             finally
             {
-              AuthorityConnectorFactory.release(c);
+              authorityConnectorPool.release(connection,c);
             }
           }
           else
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/CheckConfigured.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/CheckConfigured.java
index 894853f..458829e 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/CheckConfigured.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/CheckConfigured.java
@@ -44,8 +44,8 @@
 
                 try
                 {
-                        ManifoldCF.initializeEnvironment();
                         IThreadContext tc = ThreadContextFactory.make();
+                        ManifoldCF.initializeEnvironment(tc);
                         // Now, get a list of the authority connections
                         IAuthorityConnectionManager mgr = AuthorityConnectionManagerFactory.make(tc);
                         if (mgr.getAllConnections().length > 0)
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/DefineAuthorityConnection.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/DefineAuthorityConnection.java
index 9bbf82c..7a978bb 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/DefineAuthorityConnection.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/DefineAuthorityConnection.java
@@ -28,66 +28,64 @@
 */
 public class DefineAuthorityConnection
 {
-        public static final String _rcsid = "@(#)$Id: DefineAuthorityConnection.java 988245 2010-08-23 18:39:35Z kwright $";
+  public static final String _rcsid = "@(#)$Id: DefineAuthorityConnection.java 988245 2010-08-23 18:39:35Z kwright $";
 
-        private DefineAuthorityConnection()
-        {
-        }
+  private DefineAuthorityConnection()
+  {
+  }
 
 
-        public static void main(String[] args)
-        {
-                if (args.length < 4)
-                {
-                        System.err.println("Usage: DefineAuthorityConnection <connection_name> <description> <connector_class> <pool_max> <param1>=<value1> ...");
-                        System.exit(1);
-                }
+  public static void main(String[] args)
+  {
+    if (args.length < 4)
+    {
+      System.err.println("Usage: DefineAuthorityConnection <connection_name> <description> <connector_class> <pool_max> <param1>=<value1> ...");
+      System.exit(1);
+    }
 
-                String connectionName = args[0];
-                String description = args[1];
-                String connectorClass = args[2];
-                String poolMax = args[3];
+    String connectionName = args[0];
+    String description = args[1];
+    String connectorClass = args[2];
+    String poolMax = args[3];
 
 
-                try
-                {
-                        ManifoldCF.initializeEnvironment();
-                        IThreadContext tc = ThreadContextFactory.make();
-                        IAuthorityConnectionManager mgr = AuthorityConnectionManagerFactory.make(tc);
-                        IAuthorityConnection conn = mgr.create();
-                        conn.setName(connectionName);
-                        conn.setDescription(description);
-                        conn.setClassName(connectorClass);
-                        conn.setMaxConnections(new Integer(poolMax).intValue());
-                        ConfigParams x = conn.getConfigParams();
-                        int i = 4;
-                        while (i < args.length)
-                        {
-                                String arg = args[i++];
-                                // Parse
-                                int pos = arg.indexOf("=");
-                                if (pos == -1)
-                                        throw new ManifoldCFException("Argument missing =");
-                                String name = arg.substring(0,pos);
-                                String value = arg.substring(pos+1);
-                                if (name.endsWith("assword"))
-                                        x.setObfuscatedParameter(name,value);
-                                else
-                                        x.setParameter(name,value);
-                        }
+    try
+    {
+      IThreadContext tc = ThreadContextFactory.make();
+      ManifoldCF.initializeEnvironment(tc);
+      IAuthorityConnectionManager mgr = AuthorityConnectionManagerFactory.make(tc);
+      IAuthorityConnection conn = mgr.create();
+      conn.setName(connectionName);
+      conn.setDescription(description);
+      conn.setClassName(connectorClass);
+      conn.setMaxConnections(new Integer(poolMax).intValue());
+      ConfigParams x = conn.getConfigParams();
+      int i = 4;
+      while (i < args.length)
+      {
+        String arg = args[i++];
+        // Parse
+        int pos = arg.indexOf("=");
+        if (pos == -1)
+          throw new ManifoldCFException("Argument missing =");
+        String name = arg.substring(0,pos);
+        String value = arg.substring(pos+1);
+        if (name.endsWith("assword"))
+          x.setObfuscatedParameter(name,value);
+        else
+          x.setParameter(name,value);
+      }
 
-                        // Now, save
-                        mgr.save(conn);
+      // Now, save
+      mgr.save(conn);
 
-                }
-                catch (Exception e)
-                {
-                        e.printStackTrace(System.err);
-                        System.exit(2);
-                }
-        }
+    }
+    catch (Exception e)
+    {
+      e.printStackTrace(System.err);
+      System.exit(2);
+    }
+  }
 
 
-
-                
 }
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/DefineMappingConnection.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/DefineMappingConnection.java
new file mode 100644
index 0000000..c211acc
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/DefineMappingConnection.java
@@ -0,0 +1,91 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+* 
+* http://www.apache.org/licenses/LICENSE-2.0
+* 
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.authorities;
+
+import java.io.*;
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.authorities.interfaces.*;
+import org.apache.manifoldcf.authorities.system.*;
+import java.util.*;
+
+/** This class is used during testing.
+*/
+public class DefineMappingConnection
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  private DefineMappingConnection()
+  {
+  }
+
+
+  public static void main(String[] args)
+  {
+    if (args.length < 4)
+    {
+      System.err.println("Usage: DefineMappingConnection <connection_name> <description> <connector_class> <pool_max> <param1>=<value1> ...");
+      System.exit(1);
+    }
+
+    String connectionName = args[0];
+    String description = args[1];
+    String connectorClass = args[2];
+    String poolMax = args[3];
+
+
+    try
+    {
+      IThreadContext tc = ThreadContextFactory.make();
+      ManifoldCF.initializeEnvironment(tc);
+      IMappingConnectionManager mgr = MappingConnectionManagerFactory.make(tc);
+      IMappingConnection conn = mgr.create();
+      conn.setName(connectionName);
+      conn.setDescription(description);
+      conn.setClassName(connectorClass);
+      conn.setMaxConnections(new Integer(poolMax).intValue());
+      ConfigParams x = conn.getConfigParams();
+      int i = 4;
+      while (i < args.length)
+      {
+        String arg = args[i++];
+        // Parse
+        int pos = arg.indexOf("=");
+        if (pos == -1)
+          throw new ManifoldCFException("Argument missing =");
+        String name = arg.substring(0,pos);
+        String value = arg.substring(pos+1);
+        if (name.endsWith("assword"))
+          x.setObfuscatedParameter(name,value);
+        else
+          x.setParameter(name,value);
+      }
+
+      // Now, save
+      mgr.save(conn);
+
+    }
+    catch (Exception e)
+    {
+      e.printStackTrace(System.err);
+      System.exit(2);
+    }
+  }
+
+
+}
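As with DefineAuthorityConnection, this class is run from the command line, typically through the executecommand scripts shipped with the examples. The connection name, connector class, and parameter below are hypothetical:

    processes/executecommand.sh org.apache.manifoldcf.authorities.DefineMappingConnection \
        "MyMapper" "Example mapping connection" org.example.MyMappingConnector 10 password=secret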
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/DeleteAuthorityConnection.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/DeleteAuthorityConnection.java
index 5bc5622..cd8eb36 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/DeleteAuthorityConnection.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/DeleteAuthorityConnection.java
@@ -28,38 +28,38 @@
 */
 public class DeleteAuthorityConnection
 {
-        public static final String _rcsid = "@(#)$Id: DeleteAuthorityConnection.java 988245 2010-08-23 18:39:35Z kwright $";
+  public static final String _rcsid = "@(#)$Id: DeleteAuthorityConnection.java 988245 2010-08-23 18:39:35Z kwright $";
 
-        private DeleteAuthorityConnection()
-        {
-        }
+  private DeleteAuthorityConnection()
+  {
+  }
 
 
-        public static void main(String[] args)
-        {
-                if (args.length != 1)
-                {
-                        System.err.println("Usage: DeleteAuthorityConnection <connection_name>");
-                        System.exit(1);
-                }
+  public static void main(String[] args)
+  {
+    if (args.length != 1)
+    {
+      System.err.println("Usage: DeleteAuthorityConnection <connection_name>");
+      System.exit(1);
+    }
 
-                String connectionName = args[0];
-                try
-                {
-                        ManifoldCF.initializeEnvironment();
-                        IThreadContext tc = ThreadContextFactory.make();
-                        IAuthorityConnectionManager mgr = AuthorityConnectionManagerFactory.make(tc);
-                        mgr.delete(connectionName);
+    String connectionName = args[0];
+    try
+    {
+      IThreadContext tc = ThreadContextFactory.make();
+      ManifoldCF.initializeEnvironment(tc);
+      IAuthorityConnectionManager mgr = AuthorityConnectionManagerFactory.make(tc);
+      mgr.delete(connectionName);
 
-                }
-                catch (Exception e)
-                {
-                        e.printStackTrace(System.err);
-                        System.exit(2);
-                }
-        }
+    }
+    catch (Exception e)
+    {
+      e.printStackTrace(System.err);
+      System.exit(2);
+    }
+  }
 
 
 
-                
+    
 }
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/DeleteMappingConnection.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/DeleteMappingConnection.java
new file mode 100644
index 0000000..affc9c2
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/DeleteMappingConnection.java
@@ -0,0 +1,62 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+* 
+* http://www.apache.org/licenses/LICENSE-2.0
+* 
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.authorities;
+
+import java.io.*;
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.authorities.interfaces.*;
+import org.apache.manifoldcf.authorities.system.*;
+import java.util.*;
+
+/** This class is used during testing.
+*/
+public class DeleteMappingConnection
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  private DeleteMappingConnection()
+  {
+  }
+
+
+  public static void main(String[] args)
+  {
+    if (args.length != 1)
+    {
+      System.err.println("Usage: DeleteMappingConnection <connection_name>");
+      System.exit(1);
+    }
+
+    String connectionName = args[0];
+    try
+    {
+      IThreadContext tc = ThreadContextFactory.make();
+      ManifoldCF.initializeEnvironment(tc);
+      IMappingConnectionManager mgr = MappingConnectionManagerFactory.make(tc);
+      mgr.delete(connectionName);
+
+    }
+    catch (Exception e)
+    {
+      e.printStackTrace(System.err);
+      System.exit(2);
+    }
+  }
+
+}
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/RegisterDomain.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/RegisterDomain.java
new file mode 100644
index 0000000..248df88
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/RegisterDomain.java
@@ -0,0 +1,67 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.authorities;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.authorities.interfaces.*;
+import org.apache.manifoldcf.authorities.system.*;
+
+public class RegisterDomain extends BaseDomainsInitializationCommand
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  private final String domainName;
+  private final String description;
+
+  public RegisterDomain(String domainName, String description)
+  {
+    this.domainName = domainName;
+    this.description = description;
+  }
+
+  protected void doExecute(IAuthorizationDomainManager mgr) throws ManifoldCFException
+  {
+    mgr.registerDomain(description,domainName);
+    Logging.root.info("Successfully registered authorization domain '"+domainName+"'");
+  }
+
+  public static void main(String[] args)
+  {
+    if (args.length != 2)
+    {
+      System.err.println("Usage: RegisterDomain <domainname> <description>");
+      System.exit(1);
+    }
+
+    String domainName = args[0];
+    String description = args[1];
+
+    try
+    {
+      RegisterDomain registerDomain = new RegisterDomain(domainName,description);
+      registerDomain.execute();
+      System.err.println("Successfully registered authorization domain '"+domainName+"'");
+    }
+    catch (ManifoldCFException e)
+    {
+      e.printStackTrace();
+      System.exit(1);
+    }
+  }
+}
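A corresponding command-line invocation, matching the Usage string above and the commented-out AD/ActiveDirectory domain in the example connectors.xml (running it via the example executecommand script is an assumption):

    processes/executecommand.sh org.apache.manifoldcf.authorities.RegisterDomain AD ActiveDirectory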
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/RegisterMapper.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/RegisterMapper.java
new file mode 100644
index 0000000..c9b29c8
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/RegisterMapper.java
@@ -0,0 +1,67 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.authorities;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.authorities.interfaces.*;
+import org.apache.manifoldcf.authorities.system.*;
+
+public class RegisterMapper extends BaseMappersInitializationCommand
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  private final String className;
+  private final String description;
+
+  public RegisterMapper(String className, String description)
+  {
+    this.className = className;
+    this.description = description;
+  }
+
+  protected void doExecute(IMappingConnectorManager mgr) throws ManifoldCFException
+  {
+    mgr.registerConnector(description,className);
+    Logging.root.info("Successfully registered connector '"+className+"'");
+  }
+
+  public static void main(String[] args)
+  {
+    if (args.length != 2)
+    {
+      System.err.println("Usage: RegisterMapper <classname> <description>");
+      System.exit(1);
+    }
+
+    String className = args[0];
+    String description = args[1];
+
+    try
+    {
+      RegisterMapper registerMapper = new RegisterMapper(className,description);
+      registerMapper.execute();
+      System.err.println("Successfully registered connector '"+className+"'");
+    }
+    catch (ManifoldCFException e)
+    {
+      e.printStackTrace();
+      System.exit(1);
+    }
+  }
+}
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/SynchronizeMappers.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/SynchronizeMappers.java
new file mode 100644
index 0000000..6ffa2a4
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/SynchronizeMappers.java
@@ -0,0 +1,77 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.authorities;
+
+import java.io.*;
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.authorities.interfaces.*;
+import org.apache.manifoldcf.authorities.system.*;
+
+public class SynchronizeMappers extends BaseMappersInitializationCommand
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  public SynchronizeMappers()
+  {
+  }
+
+
+  protected void doExecute(IMappingConnectorManager mgr) throws ManifoldCFException
+  {
+    IResultSet classNames = mgr.getConnectors();
+    int i = 0;
+    while (i < classNames.getRowCount())
+    {
+      IResultRow row = classNames.getRow(i++);
+      String classname = (String)row.getValue("classname");
+      try
+      {
+        MappingConnectorFactory.getConnectorNoCheck(classname);
+      }
+      catch (ManifoldCFException e)
+      {
+        mgr.removeConnector(classname);
+      }
+    }
+    Logging.root.info("Successfully synchronized all mappers");
+  }
+
+
+  public static void main(String[] args)
+  {
+    if (args.length > 0)
+    {
+      System.err.println("Usage: SynchronizeMappers");
+      System.exit(1);
+    }
+
+
+    try
+    {
+      SynchronizeMappers synchronizeMappers = new SynchronizeMappers();
+      synchronizeMappers.execute();
+      System.err.println("Successfully synchronized all mappers");
+    }
+    catch (ManifoldCFException e)
+    {
+      e.printStackTrace();
+      System.exit(1);
+    }
+  }
+}
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/UnRegisterAllMappers.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/UnRegisterAllMappers.java
new file mode 100644
index 0000000..cef4ccd
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/UnRegisterAllMappers.java
@@ -0,0 +1,68 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.authorities;
+
+import java.io.*;
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.authorities.interfaces.*;
+import org.apache.manifoldcf.authorities.system.*;
+
+public class UnRegisterAllMappers extends BaseMappersInitializationCommand
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  private UnRegisterAllMappers()
+  {
+  }
+
+  protected void doExecute(IMappingConnectorManager mgr) throws ManifoldCFException
+  {
+    IResultSet classNames = mgr.getConnectors();
+    int i = 0;
+    while (i < classNames.getRowCount())
+    {
+      IResultRow row = classNames.getRow(i++);
+      mgr.unregisterConnector((String)row.getValue("classname"));
+    }
+    Logging.root.info("Successfully unregistered all connectors");
+  }
+
+
+  public static void main(String[] args)
+  {
+    if (args.length > 0)
+    {
+      System.err.println("Usage: UnRegisterAllMappers");
+      System.exit(1);
+    }
+
+
+    try
+    {
+      UnRegisterAllMappers unRegisterAllMappers = new UnRegisterAllMappers();
+      unRegisterAllMappers.execute();
+      System.err.println("Successfully unregistered all connectors");
+    }
+    catch (ManifoldCFException e)
+    {
+      e.printStackTrace();
+      System.exit(1);
+    }
+  }
+}
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/UnRegisterDomain.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/UnRegisterDomain.java
new file mode 100644
index 0000000..0289668
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/UnRegisterDomain.java
@@ -0,0 +1,65 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.authorities;
+
+import java.io.*;
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.authorities.interfaces.*;
+import org.apache.manifoldcf.authorities.system.*;
+
+public class UnRegisterDomain extends BaseDomainsInitializationCommand
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  private final String domainName;
+
+  public UnRegisterDomain(String domainName)
+  {
+    this.domainName = domainName;
+  }
+
+  protected void doExecute(IAuthorizationDomainManager mgr) throws ManifoldCFException
+  {
+    mgr.unregisterDomain(domainName);
+    Logging.root.info("Successfully unregistered domain '"+domainName+"'");
+  }
+
+  public static void main(String[] args)
+  {
+    if (args.length != 1)
+    {
+      System.err.println("Usage: UnRegisterDomain <domainname>");
+      System.exit(1);
+    }
+
+    String domainName = args[0];
+
+    try
+    {
+      UnRegisterDomain unRegisterDomain = new UnRegisterDomain(domainName);
+      unRegisterDomain.execute();
+      System.err.println("Successfully unregistered domain '"+domainName+"'");
+    }
+    catch (ManifoldCFException e)
+    {
+      e.printStackTrace();
+      System.exit(1);
+    }
+  }
+}
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/UnRegisterMapper.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/UnRegisterMapper.java
new file mode 100644
index 0000000..1c7d9f8
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/UnRegisterMapper.java
@@ -0,0 +1,65 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.authorities;
+
+import java.io.*;
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.authorities.interfaces.*;
+import org.apache.manifoldcf.authorities.system.*;
+
+public class UnRegisterMapper extends BaseMappersInitializationCommand
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  private final String className;
+
+  public UnRegisterMapper(String className)
+  {
+    this.className = className;
+  }
+
+  protected void doExecute(IMappingConnectorManager mgr) throws ManifoldCFException
+  {
+    mgr.unregisterConnector(className);
+    Logging.root.info("Successfully unregistered connector '"+className+"'");
+  }
+
+  public static void main(String[] args)
+  {
+    if (args.length != 1)
+    {
+      System.err.println("Usage: UnRegisterMapper <classname>");
+      System.exit(1);
+    }
+
+    String className = args[0];
+
+    try
+    {
+      UnRegisterMapper unRegisterMapper = new UnRegisterMapper(className);
+      unRegisterMapper.execute();
+      System.err.println("Successfully unregistered connector '"+className+"'");
+    }
+    catch (ManifoldCFException e)
+    {
+      e.printStackTrace();
+      System.exit(1);
+    }
+  }
+}
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/authdomains/AuthorizationDomainManager.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/authdomains/AuthorizationDomainManager.java
new file mode 100644
index 0000000..c6a9c24
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/authdomains/AuthorizationDomainManager.java
@@ -0,0 +1,228 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.authorities.authdomains;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.authorities.interfaces.*;
+import java.util.*;
+import org.apache.manifoldcf.authorities.interfaces.CacheKeyFactory;
+
+/** This is the implementation of the authorization domain manager.
+ * 
+ * <br><br>
+ * <b>authdomains</b>
+ * <table border="1" cellpadding="3" cellspacing="0">
+ * <tr class="TableHeadingColor">
+ * <th>Field</th><th>Type</th><th>Description&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</th>
+ * <tr><td>description</td><td>VARCHAR(255)</td><td></td></tr>
+ * <tr><td>domainname</td><td>VARCHAR(255)</td><td>Primary Key</td></tr>
+ * </table>
+ * <br><br>
+ * 
+ */
+public class AuthorizationDomainManager extends org.apache.manifoldcf.core.database.BaseTable implements IAuthorizationDomainManager
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  // Fields
+  protected static final String descriptionField = "description";
+  protected static final String domainNameField = "domainname";
+
+  // Thread context
+  protected final IThreadContext threadContext;
+
+  /** Constructor.
+  *@param threadContext is the thread context.
+  *@param database is the database handle.
+  */
+  public AuthorizationDomainManager(IThreadContext threadContext, IDBInterface database)
+    throws ManifoldCFException
+  {
+    super(database,"authdomains");
+    this.threadContext = threadContext;
+  }
+
+
+  /** Install or upgrade.
+  */
+  @Override
+  public void install()
+    throws ManifoldCFException
+  {
+    // Always use a loop, in case upgrade retries are needed.
+    while (true)
+    {
+      Map existing = getTableSchema(null,null);
+      if (existing == null)
+      {
+        HashMap map = new HashMap();
+        map.put(descriptionField,new ColumnDescription("VARCHAR(255)",false,false,null,null,false));
+        map.put(domainNameField,new ColumnDescription("VARCHAR(255)",true,false,null,null,false));
+
+        performCreate(map,null);
+      }
+      else
+      {
+        // Schema upgrade code goes here, if needed.
+      }
+
+      // Index management
+      IndexDescription descriptionIndex = new IndexDescription(true,new String[]{descriptionField});
+
+      // Get rid of indexes that shouldn't be there
+      Map indexes = getTableIndexes(null,null);
+      Iterator iter = indexes.keySet().iterator();
+      while (iter.hasNext())
+      {
+        String indexName = (String)iter.next();
+        IndexDescription id = (IndexDescription)indexes.get(indexName);
+
+        if (descriptionIndex != null && id.equals(descriptionIndex))
+          descriptionIndex = null;
+        else if (indexName.indexOf("_pkey") == -1)
+          // This index shouldn't be here; drop it
+          performRemoveIndex(indexName);
+      }
+
+      // Add the ones we didn't find
+      if (descriptionIndex != null)
+        performAddIndex(null,descriptionIndex);
+
+      break;
+    }
+  }
+
+
+  /** Uninstall.
+  */
+  @Override
+  public void deinstall()
+    throws ManifoldCFException
+  {
+    StringSet invKeys = new StringSet(getCacheKey());
+    performDrop(invKeys);
+  }
+
+  /** Register a new domain.
+  *@param description is the description to use in the UI.
+  *@param domainName is the internal domain name used by the authority service.
+  */
+  @Override
+  public void registerDomain(String description, String domainName)
+    throws ManifoldCFException
+  {
+    StringSet invKeys = new StringSet(getCacheKey());
+    beginTransaction();
+    try
+    {
+      // See if already there.
+      ArrayList params = new ArrayList();
+      params.add(domainName);
+      IResultSet set = performQuery("SELECT * FROM "+getTableName()+" WHERE "+domainNameField+"=? FOR UPDATE",params,null,null);
+      HashMap map = new HashMap();
+      map.put(descriptionField,description);
+      if (set.getRowCount() == 0)
+      {
+        // Insert it into table first.
+        map.put(domainNameField,domainName);
+        performInsert(map,invKeys);
+      }
+      else
+      {
+        performUpdate(map,"WHERE "+domainNameField+"=?",params,invKeys);
+      }
+    }
+    catch (ManifoldCFException e)
+    {
+      signalRollback();
+      throw e;
+    }
+    catch (Error e)
+    {
+      signalRollback();
+      throw e;
+    }
+    finally
+    {
+      endTransaction();
+    }
+  }
+
+  /** Unregister a domain.
+  * This may fail if any authority connections refer to the domain.
+  *@param domainName is the internal domain name to unregister.
+  */
+  @Override
+  public void unregisterDomain(String domainName)
+    throws ManifoldCFException
+  {
+    StringSet invKeys = new StringSet(getCacheKey());
+    ArrayList list = new ArrayList();
+    list.add(domainName);
+    performDelete("WHERE "+domainNameField+"=?",list,invKeys);
+  }
+
+  /** Get ordered list of domains.
+  *@return a resultset with the columns "description" and "domainname".
+  * These will be ordered by description.
+  */
+  @Override
+  public IResultSet getDomains()
+    throws ManifoldCFException
+  {
+    StringSet invKeys = new StringSet(getCacheKey());
+
+    return performQuery("SELECT "+descriptionField+" AS description,"+domainNameField+" AS domainname FROM "+
+      getTableName()+" ORDER BY "+descriptionField+" ASC",null,invKeys,null);
+  }
+
+  /** Get a description given a domain name.
+  *@param domainName is the domain name.
+  *@return the description, or null if the domain is not registered.
+  */
+  @Override
+  public String getDescription(String domainName)
+    throws ManifoldCFException
+  {
+    StringSet invKeys = new StringSet(getCacheKey());
+
+    ArrayList list = new ArrayList();
+    list.add(domainName);
+    IResultSet set = performQuery("SELECT "+descriptionField+" FROM "+
+      getTableName()+" WHERE "+domainNameField+"=?",list,invKeys,null);
+    if (set.getRowCount() == 0)
+      return null;
+    IResultRow row = set.getRow(0);
+    String x = (String)row.getValue(descriptionField);
+    if (x == null)
+      return "";
+    return x;
+  }
+
+  // Protected methods
+
+  /** Get the cache key for the authorization domains table.
+  *@return the cache key
+  */
+  protected String getCacheKey()
+  {
+    return CacheKeyFactory.makeTableKey(null,getTableName(),getDBInterface().getDatabaseName());
+  }
+
+}
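AuthorizationDomainManager backs the IAuthorizationDomainManager interface used by the domain commands above: registerDomain() inserts or updates a row keyed by domain name, getDomains() lists registered domains ordered by description, getDescription() resolves one name, and unregisterDomain() removes it. A rough usage sketch, assuming an IAuthorizationDomainManager obtained from its factory and using an illustrative domain name:

  // Sketch only; "ActiveDirectory" is an illustrative domain name.
  void demoDomains(IAuthorizationDomainManager mgr) throws ManifoldCFException
  {
    mgr.registerDomain("Active Directory","ActiveDirectory");
    IResultSet set = mgr.getDomains();
    for (int i = 0; i < set.getRowCount(); i++)
    {
      IResultRow row = set.getRow(i);
      System.out.println(row.getValue("domainname")+": "+row.getValue("description"));
    }
    mgr.unregisterDomain("ActiveDirectory");
  }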
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/authgroups/AuthorityGroup.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/authgroups/AuthorityGroup.java
new file mode 100644
index 0000000..9caf739
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/authgroups/AuthorityGroup.java
@@ -0,0 +1,103 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.authorities.authgroups;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.authorities.interfaces.*;
+import java.util.*;
+
+/** This is the implementation of the authority group interface, which describes a paper object
+* to be manipulated in order to create, edit, or save an authority group definition.
+*/
+public class AuthorityGroup implements IAuthorityGroup
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  // data
+  protected boolean isNew = true;
+  protected String name = null;
+  protected String description = null;
+
+  /** Constructor.
+  */
+  public AuthorityGroup()
+  {
+  }
+
+  /** Clone this object.
+  *@return the cloned object.
+  */
+  public AuthorityGroup duplicate()
+  {
+    AuthorityGroup rval = new AuthorityGroup();
+    rval.isNew = isNew;
+    rval.name = name;
+    rval.description = description;
+    return rval;
+  }
+
+  /** Set 'isnew' condition.
+  *@param isnew true if this is a new instance.
+  */
+  public void setIsNew(boolean isnew)
+  {
+    this.isNew = isnew;
+  }
+  
+  /** Get 'isnew' condition.
+  *@return true if this is a new connection, false otherwise.
+  */
+  public boolean getIsNew()
+  {
+    return isNew;
+  }
+
+  /** Set name.
+  *@param name is the name.
+  */
+  public void setName(String name)
+  {
+    this.name = name;
+  }
+
+  /** Get name.
+  *@return the name
+  */
+  public String getName()
+  {
+    return name;
+  }
+
+  /** Set description.
+  *@param description is the description.
+  */
+  public void setDescription(String description)
+  {
+    this.description = description;
+  }
+
+  /** Get description.
+  *@return the description
+  */
+  public String getDescription()
+  {
+    return description;
+  }
+
+}
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/authgroups/AuthorityGroupManager.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/authgroups/AuthorityGroupManager.java
new file mode 100644
index 0000000..d4f7b5e
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/authgroups/AuthorityGroupManager.java
@@ -0,0 +1,666 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.authorities.authgroups;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.authorities.interfaces.*;
+import java.util.*;
+import org.apache.manifoldcf.authorities.interfaces.CacheKeyFactory;
+import org.apache.manifoldcf.authorities.system.ManifoldCF;
+
+import org.apache.manifoldcf.crawler.interfaces.IRepositoryConnectionManager;
+import org.apache.manifoldcf.crawler.interfaces.RepositoryConnectionManagerFactory;
+
+/** Implementation of the authority group manager functionality.
+ * 
+ * <br><br>
+ * <b>authgroups</b>
+ * <table border="1" cellpadding="3" cellspacing="0">
+ * <tr class="TableHeadingColor">
+ * <th>Field</th><th>Type</th><th>Description&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</th>
+ * <tr><td>groupname</td><td>VARCHAR(32)</td><td>Primary Key</td></tr>
+ * <tr><td>description</td><td>VARCHAR(255)</td><td></td></tr>
+ * </table>
+ * <br><br>
+ * 
+ */
+public class AuthorityGroupManager extends org.apache.manifoldcf.core.database.BaseTable implements IAuthorityGroupManager
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  protected final static String nameField = "groupname";
+  protected final static String descriptionField = "description";
+
+  // Cache manager
+  ICacheManager cacheManager;
+  // Thread context
+  IThreadContext threadContext;
+
+  /** Constructor.
+  *@param threadContext is the thread context.
+  *@param database is the database handle.
+  */
+  public AuthorityGroupManager(IThreadContext threadContext, IDBInterface database)
+    throws ManifoldCFException
+  {
+    super(database,"authgroups");
+
+    cacheManager = CacheManagerFactory.make(threadContext);
+    this.threadContext = threadContext;
+  }
+
+  /** Install the manager.
+  */
+  @Override
+  public void install()
+    throws ManifoldCFException
+  {
+    // Always do a loop, in case upgrade needs it.
+    while (true)
+    {
+      Map existing = getTableSchema(null,null);
+      if (existing == null)
+      {
+        // Install the "objects" table.
+        HashMap map = new HashMap();
+        map.put(nameField,new ColumnDescription("VARCHAR(32)",true,false,null,null,false));
+        map.put(descriptionField,new ColumnDescription("VARCHAR(255)",false,true,null,null,false));
+        performCreate(map,null);
+      }
+      else
+      {
+        // Upgrade, when needed
+      }
+
+      // Index management goes here
+
+      break;
+    }
+  }
+
+  /** Uninstall the manager.
+  */
+  @Override
+  public void deinstall()
+    throws ManifoldCFException
+  {
+    performDrop(null);
+  }
+
+  /** Export configuration */
+  @Override
+  public void exportConfiguration(java.io.OutputStream os)
+    throws java.io.IOException, ManifoldCFException
+  {
+    // Write a version indicator
+    ManifoldCF.writeDword(os,1);
+    // Get the authority list
+    IAuthorityGroup[] list = getAllGroups();
+    // Write the number of groups
+    ManifoldCF.writeDword(os,list.length);
+    // Loop through the list and write the individual group info
+    int i = 0;
+    while (i < list.length)
+    {
+      IAuthorityGroup conn = list[i++];
+      ManifoldCF.writeString(os,conn.getName());
+      ManifoldCF.writeString(os,conn.getDescription());
+    }
+  }
+
+  /** Import configuration */
+  @Override
+  public void importConfiguration(java.io.InputStream is)
+    throws java.io.IOException, ManifoldCFException
+  {
+    int version = ManifoldCF.readDword(is);
+    if (version < 1 || version > 1)
+      throw new java.io.IOException("Unknown authority group configuration version: "+Integer.toString(version));
+    int count = ManifoldCF.readDword(is);
+    int i = 0;
+    while (i < count)
+    {
+      IAuthorityGroup conn = create();
+      conn.setName(ManifoldCF.readString(is));
+      conn.setDescription(ManifoldCF.readString(is));
+      // Attempt to save this connection
+      save(conn);
+      i++;
+    }
+  }
+
+  /** Obtain a list of the authority groups, ordered by name.
+  *@return an array of connection objects.
+  */
+  @Override
+  public IAuthorityGroup[] getAllGroups()
+    throws ManifoldCFException
+  {
+    beginTransaction();
+    try
+    {
+      // Read all the tools
+      StringSetBuffer ssb = new StringSetBuffer();
+      ssb.add(getAuthorityGroupsKey());
+      StringSet localCacheKeys = new StringSet(ssb);
+      IResultSet set = performQuery("SELECT "+nameField+",lower("+nameField+") AS sortfield FROM "+getTableName()+" ORDER BY sortfield ASC",null,
+        localCacheKeys,null);
+      String[] names = new String[set.getRowCount()];
+      int i = 0;
+      while (i < names.length)
+      {
+        IResultRow row = set.getRow(i);
+        names[i] = row.getValue(nameField).toString();
+        i++;
+      }
+      return loadMultiple(names);
+    }
+    catch (ManifoldCFException e)
+    {
+      signalRollback();
+      throw e;
+    }
+    catch (Error e)
+    {
+      signalRollback();
+      throw e;
+    }
+    finally
+    {
+      endTransaction();
+    }
+  }
+
+  /** Load an authority group by name.
+  *@param name is the name of the authority group.
+  *@return the loaded group object, or null if not found.
+  */
+  @Override
+  public IAuthorityGroup load(String name)
+    throws ManifoldCFException
+  {
+    return loadMultiple(new String[]{name})[0];
+  }
+
+  /** Load multiple authority groups by name.
+  *@param names are the names to load.
+  *@return the loaded group objects.
+  */
+  @Override
+  public IAuthorityGroup[] loadMultiple(String[] names)
+    throws ManifoldCFException
+  {
+    // Build description objects
+    AuthorityGroupDescription[] objectDescriptions = new AuthorityGroupDescription[names.length];
+    int i = 0;
+    StringSetBuffer ssb = new StringSetBuffer();
+    while (i < names.length)
+    {
+      ssb.clear();
+      ssb.add(getAuthorityGroupKey(names[i]));
+      objectDescriptions[i] = new AuthorityGroupDescription(names[i],new StringSet(ssb));
+      i++;
+    }
+
+    AuthorityGroupExecutor exec = new AuthorityGroupExecutor(this,objectDescriptions);
+    cacheManager.findObjectsAndExecute(objectDescriptions,null,exec,getTransactionID());
+    return exec.getResults();
+  }
+
+  /** Create a new authority group object.
+  *@return the new object.
+  */
+  @Override
+  public IAuthorityGroup create()
+    throws ManifoldCFException
+  {
+    AuthorityGroup rval = new AuthorityGroup();
+    return rval;
+  }
+
+  /** Save an authority group object.
+  *@param object is the object to save.
+  *@return true if the object is created, false otherwise.
+  */
+  @Override
+  public boolean save(IAuthorityGroup object)
+    throws ManifoldCFException
+  {
+    StringSetBuffer ssb = new StringSetBuffer();
+    ssb.add(getAuthorityGroupsKey());
+    ssb.add(getAuthorityGroupKey(object.getName()));
+    StringSet cacheKeys = new StringSet(ssb);
+    while (true)
+    {
+      long sleepAmt = 0L;
+      try
+      {
+        ICacheHandle ch = cacheManager.enterCache(null,cacheKeys,getTransactionID());
+        try
+        {
+          beginTransaction();
+          try
+          {
+            //performLock();
+            ManifoldCF.noteConfigurationChange();
+            boolean isNew = object.getIsNew();
+            // See whether the instance exists
+            ArrayList params = new ArrayList();
+            String query = buildConjunctionClause(params,new ClauseDescription[]{
+              new UnitaryClause(nameField,object.getName())});
+            IResultSet set = performQuery("SELECT * FROM "+getTableName()+" WHERE "+
+              query+" FOR UPDATE",params,null,null);
+            HashMap values = new HashMap();
+            values.put(descriptionField,object.getDescription());
+
+            boolean isCreated;
+            
+            if (set.getRowCount() > 0)
+            {
+              // If the object is supposedly new, it is bad that we found one that already exists.
+              if (isNew)
+                throw new ManifoldCFException("Authority group '"+object.getName()+"' already exists");
+              isCreated = false;
+              // Update
+              params.clear();
+              query = buildConjunctionClause(params,new ClauseDescription[]{
+                new UnitaryClause(nameField,object.getName())});
+              performUpdate(values," WHERE "+query,params,null);
+            }
+            else
+            {
+              // If the object is not supposed to be new, it is bad that we did not find one.
+              if (!isNew)
+                throw new ManifoldCFException("Authority group '"+object.getName()+"' no longer exists");
+              isCreated = true;
+              // Insert
+              values.put(nameField,object.getName());
+              // We only need the general key because this is new.
+              performInsert(values,null);
+            }
+
+            cacheManager.invalidateKeys(ch);
+            return isCreated;
+          }
+          catch (ManifoldCFException e)
+          {
+            signalRollback();
+            throw e;
+          }
+          catch (Error e)
+          {
+            signalRollback();
+            throw e;
+          }
+          finally
+          {
+            endTransaction();
+          }
+        }
+        finally
+        {
+          cacheManager.leaveCache(ch);
+        }
+      }
+      catch (ManifoldCFException e)
+      {
+        // Is this a deadlock exception?  If so, we want to try again.
+        if (e.getErrorCode() != ManifoldCFException.DATABASE_TRANSACTION_ABORT)
+          throw e;
+        sleepAmt = getSleepAmt();
+      }
+      finally
+      {
+        sleepFor(sleepAmt);
+      }
+    }
+  }
+
+  /** Delete an authority group.
+  *@param name is the name of the group to delete.  If the
+  * name does not exist, no error is returned.
+  */
+  @Override
+  public void delete(String name)
+    throws ManifoldCFException
+  {
+    // Grab repository connection manager handle, to check on legality of deletion.
+    IRepositoryConnectionManager repoManager = RepositoryConnectionManagerFactory.make(threadContext);
+    IAuthorityConnectionManager authManager = AuthorityConnectionManagerFactory.make(threadContext);
+    
+    StringSetBuffer ssb = new StringSetBuffer();
+    ssb.add(getAuthorityGroupsKey());
+    ssb.add(getAuthorityGroupKey(name));
+    StringSet cacheKeys = new StringSet(ssb);
+    ICacheHandle ch = cacheManager.enterCache(null,cacheKeys,getTransactionID());
+    try
+    {
+      beginTransaction();
+      try
+      {
+        // Check if anything refers to this group name
+        if (repoManager.isGroupReferenced(name))
+          throw new ManifoldCFException("Can't delete authority group '"+name+"': existing repository connections refer to it");
+        if (authManager.isGroupReferenced(name))
+          throw new ManifoldCFException("Can't delete authority group '"+name+"': existing authority connections refer to it");
+        ManifoldCF.noteConfigurationChange();
+        ArrayList params = new ArrayList();
+        String query = buildConjunctionClause(params,new ClauseDescription[]{
+          new UnitaryClause(nameField,name)});
+        performDelete("WHERE "+query,params,null);
+        cacheManager.invalidateKeys(ch);
+      }
+      catch (ManifoldCFException e)
+      {
+        signalRollback();
+        throw e;
+      }
+      catch (Error e)
+      {
+        signalRollback();
+        throw e;
+      }
+      finally
+      {
+        endTransaction();
+      }
+    }
+    finally
+    {
+      cacheManager.leaveCache(ch);
+    }
+
+  }
+
+  /** Get the authority group name column.
+  *@return the name column.
+  */
+  @Override
+  public String getGroupNameColumn()
+  {
+    return nameField;
+  }
+
+  /** Get the authority group description column.
+  *@return the description column.
+  */
+  @Override
+  public String getGroupDescriptionColumn()
+  {
+    return descriptionField;
+  }
+
+  // Caching strategy: Individual authority group descriptions are cached, and there is a global cache key for the
+  // list of authority groups.
+
+  /** Construct a key which represents the general list of authority groups.
+  *@return the cache key.
+  */
+  protected static String getAuthorityGroupsKey()
+  {
+    return CacheKeyFactory.makeAuthorityGroupsKey();
+  }
+
+  /** Construct a key which represents an individual authority group.
+  *@param groupName is the name of the group.
+  *@return the cache key.
+  */
+  protected static String getAuthorityGroupKey(String groupName)
+  {
+    return CacheKeyFactory.makeAuthorityGroupKey(groupName);
+  }
+
+  // Other utility methods.
+
+  /** Fetch multiple authority groups at a single time.
+  *@param groupNames are a list of group names.
+  *@return the corresponding authority group objects.
+  */
+  protected AuthorityGroup[] getAuthorityGroupsMultiple(String[] groupNames)
+    throws ManifoldCFException
+  {
+    AuthorityGroup[] rval = new AuthorityGroup[groupNames.length];
+    HashMap returnIndex = new HashMap();
+    int i = 0;
+    while (i < groupNames.length)
+    {
+      rval[i] = null;
+      returnIndex.put(groupNames[i],new Integer(i));
+      i++;
+    }
+    beginTransaction();
+    try
+    {
+      i = 0;
+      ArrayList params = new ArrayList();
+      int j = 0;
+      int maxIn = maxClauseGetAuthorityGroupsChunk();
+      while (i < groupNames.length)
+      {
+        if (j == maxIn)
+        {
+          getAuthorityGroupsChunk(rval,returnIndex,params);
+          params.clear();
+          j = 0;
+        }
+        params.add(groupNames[i]);
+        i++;
+        j++;
+      }
+      if (j > 0)
+        getAuthorityGroupsChunk(rval,returnIndex,params);
+      return rval;
+    }
+    catch (Error e)
+    {
+      signalRollback();
+      throw e;
+    }
+    catch (ManifoldCFException e)
+    {
+      signalRollback();
+      throw e;
+    }
+    finally
+    {
+      endTransaction();
+    }
+  }
+
+  /** Find the maximum number of clauses for getAuthorityGroupsChunk.
+  */
+  protected int maxClauseGetAuthorityGroupsChunk()
+  {
+    return findConjunctionClauseMax(new ClauseDescription[]{});
+  }
+    
+  /** Read a chunk of authority groups.
+  *@param rval is the place to put the read authority groups.
+  *@param returnIndex is a map from the group name to the rval index.
+  *@param params is the set of parameters.
+  */
+  protected void getAuthorityGroupsChunk(AuthorityGroup[] rval, Map returnIndex, ArrayList params)
+    throws ManifoldCFException
+  {
+    ArrayList list = new ArrayList();
+    String query = buildConjunctionClause(list,new ClauseDescription[]{
+      new MultiClause(nameField,params)});
+    IResultSet set = performQuery("SELECT * FROM "+getTableName()+" WHERE "+
+      query,list,null,null);
+    int i = 0;
+    while (i < set.getRowCount())
+    {
+      IResultRow row = set.getRow(i++);
+      String name = row.getValue(nameField).toString();
+      int index = ((Integer)returnIndex.get(name)).intValue();
+      AuthorityGroup rc = new AuthorityGroup();
+      rc.setIsNew(false);
+      rc.setName(name);
+      rc.setDescription((String)row.getValue(descriptionField));
+      rval[index] = rc;
+    }
+  }
+
+  // The cached instance will be an AuthorityGroup.  The cached version will be duplicated when it is returned
+  // from the cache.
+  //
+  // The description object is based completely on the name.
+
+  /** This is the object description for an authority group object.
+  */
+  protected static class AuthorityGroupDescription extends org.apache.manifoldcf.core.cachemanager.BaseDescription
+  {
+    protected String groupName;
+    protected String criticalSectionName;
+    protected StringSet cacheKeys;
+
+    public AuthorityGroupDescription(String groupName, StringSet invKeys)
+    {
+      super("authoritygroupcache");
+      this.groupName = groupName;
+      criticalSectionName = getClass().getName()+"-"+groupName;
+      cacheKeys = invKeys;
+    }
+
+    public String getGroupName()
+    {
+      return groupName;
+    }
+
+    public int hashCode()
+    {
+      return groupName.hashCode();
+    }
+
+    public boolean equals(Object o)
+    {
+      if (!(o instanceof AuthorityGroupDescription))
+        return false;
+      AuthorityGroupDescription d = (AuthorityGroupDescription)o;
+      return d.groupName.equals(groupName);
+    }
+
+    public String getCriticalSectionName()
+    {
+      return criticalSectionName;
+    }
+
+    /** Get the cache keys for an object (which may or may not exist yet in
+    * the cache).  This method is called in order for cache manager to throw the correct locks.
+    * @return the object's cache keys, or null if the object should not
+    * be cached.
+    */
+    public StringSet getObjectKeys()
+    {
+      return cacheKeys;
+    }
+
+  }
+
+  /** This is the executor object for locating authority group objects.
+  */
+  protected static class AuthorityGroupExecutor extends org.apache.manifoldcf.core.cachemanager.ExecutorBase
+  {
+    // Member variables
+    protected AuthorityGroupManager thisManager;
+    protected AuthorityGroup[] returnValues;
+    protected HashMap returnMap = new HashMap();
+
+    /** Constructor.
+    *@param manager is the AuthorityGroupManager.
+    *@param objectDescriptions are the object descriptions.
+    */
+    public AuthorityGroupExecutor(AuthorityGroupManager manager, AuthorityGroupDescription[] objectDescriptions)
+    {
+      super();
+      thisManager = manager;
+      returnValues = new AuthorityGroup[objectDescriptions.length];
+      int i = 0;
+      while (i < objectDescriptions.length)
+      {
+        returnMap.put(objectDescriptions[i].getGroupName(),new Integer(i));
+        i++;
+      }
+    }
+
+    /** Get the result.
+    *@return the looked-up or read cached instances.
+    */
+    public AuthorityGroup[] getResults()
+    {
+      return returnValues;
+    }
+
+    /** Create a set of new objects to operate on and cache.  This method is called only
+    * if the specified object(s) are NOT available in the cache.  The specified objects
+    * should be created and returned; if they are not created, it means that the
+    * execution cannot proceed, and the execute() method will not be called.
+    * @param objectDescriptions is the set of unique identifier of the object.
+    * @return the newly created objects to cache, or null, if any object cannot be created.
+    *  The order of the returned objects must correspond to the order of the object descriptions.
+    */
+    public Object[] create(ICacheDescription[] objectDescriptions) throws ManifoldCFException
+    {
+      // Turn the object descriptions into the parameters for the AuthorityGroup requests
+      String[] groupNames = new String[objectDescriptions.length];
+      int i = 0;
+      while (i < groupNames.length)
+      {
+        AuthorityGroupDescription desc = (AuthorityGroupDescription)objectDescriptions[i];
+        groupNames[i] = desc.getGroupName();
+        i++;
+      }
+
+      return thisManager.getAuthorityGroupsMultiple(groupNames);
+    }
+
+
+    /** Notify the implementing class of the existence of a cached version of the
+    * object.  The object is passed to this method so that the execute() method below
+    * will have it available to operate on.  This method is also called for all objects
+    * that are freshly created as well.
+    * @param objectDescription is the unique identifier of the object.
+    * @param cachedObject is the cached object.
+    */
+    public void exists(ICacheDescription objectDescription, Object cachedObject) throws ManifoldCFException
+    {
+      // Cast what came in as what it really is
+      AuthorityGroupDescription objectDesc = (AuthorityGroupDescription)objectDescription;
+      AuthorityGroup ci = (AuthorityGroup)cachedObject;
+
+      // Duplicate it!
+      if (ci != null)
+        ci = ci.duplicate();
+
+      // In order to make the indexes line up, we need to use the hashtable built by
+      // the constructor.
+      returnValues[((Integer)returnMap.get(objectDesc.getGroupName())).intValue()] = ci;
+    }
+
+    /** Perform the desired operation.  This method is called after either createGetObject()
+    * or exists() is called for every requested object.
+    */
+    public void execute() throws ManifoldCFException
+    {
+      // Does nothing; we only want to fetch objects in this cacher.
+    }
+
+
+  }
+
+}
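AuthorityGroupManager follows the same cache-and-transaction pattern as the existing connection managers: load() and loadMultiple() run through the cache executor, while save() and delete() invalidate the relevant cache keys and retry when the database reports a transaction abort. A minimal lifecycle sketch, using the AuthorityGroupManagerFactory referenced elsewhere in this change and a hypothetical group name:

  // Sketch only; threadContext is assumed to be a valid IThreadContext and
  // "engineering" is a hypothetical group name.
  IAuthorityGroupManager groupManager = AuthorityGroupManagerFactory.make(threadContext);
  IAuthorityGroup group = groupManager.create();
  group.setName("engineering");
  group.setDescription("Engineering authorities");
  groupManager.save(group);                      // returns true when the group is newly created
  IAuthorityGroup loaded = groupManager.load("engineering");
  // ... point authority connections at the group, and eventually:
  groupManager.delete("engineering");            // throws if connections still refer to the group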
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/authorities/BaseAuthorityConnector.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/authorities/BaseAuthorityConnector.java
index e991b28..304c919 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/authorities/BaseAuthorityConnector.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/authorities/BaseAuthorityConnector.java
@@ -34,11 +34,28 @@
 {
   public static final String _rcsid = "@(#)$Id: BaseAuthorityConnector.java 988245 2010-08-23 18:39:35Z kwright $";
 
+  // For repositories that have the ability to deny access based on a user's access tokens
+  protected static final AuthorizationResponse RESPONSE_UNREACHABLE = new AuthorizationResponse(new String[]{GLOBAL_DENY_TOKEN},
+    AuthorizationResponse.RESPONSE_UNREACHABLE);
+  protected static final AuthorizationResponse RESPONSE_USERNOTFOUND = new AuthorizationResponse(new String[]{GLOBAL_DENY_TOKEN},
+    AuthorizationResponse.RESPONSE_USERNOTFOUND);
+  protected static final AuthorizationResponse RESPONSE_USERUNAUTHORIZED = new AuthorizationResponse(new String[]{GLOBAL_DENY_TOKEN},
+    AuthorizationResponse.RESPONSE_USERUNAUTHORIZED);
+
+  // For repositories that DO NOT have the ability to deny access based on a user's access tokens
+  protected static final AuthorizationResponse RESPONSE_UNREACHABLE_ADDITIVE = new AuthorizationResponse(new String[0],
+    AuthorizationResponse.RESPONSE_UNREACHABLE);
+  protected static final AuthorizationResponse RESPONSE_USERNOTFOUND_ADDITIVE = new AuthorizationResponse(new String[0],
+    AuthorizationResponse.RESPONSE_USERNOTFOUND);
+  protected static final AuthorizationResponse RESPONSE_USERUNAUTHORIZED_ADDITIVE = new AuthorizationResponse(new String[0],
+    AuthorizationResponse.RESPONSE_USERUNAUTHORIZED);
+
   /** Obtain the access tokens for a given user name.
   *@param userName is the user name or identifier.
   *@return the response tokens (according to the current authority).
   * (Should throws an exception only when a condition cannot be properly described within the authorization response object.)
   */
+  @Override
   public AuthorizationResponse getAuthorizationResponse(String userName)
     throws ManifoldCFException
   {
@@ -67,6 +84,7 @@
   *@param userName is the user name or identifier.
   *@return the default response tokens, presuming that the connect method fails.
   */
+  @Override
   public AuthorizationResponse getDefaultAuthorizationResponse(String userName)
   {
     String[] acls = getDefaultAccessTokens(userName);
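The new canned AuthorizationResponse constants give connector implementations a shorthand for the common failure cases: the plain RESPONSE_* values carry GLOBAL_DENY_TOKEN, for repositories that can deny access by token, while the *_ADDITIVE variants carry no tokens at all. A hypothetical connector might use them as follows:

  // Illustrative subclass; lookupTokens() stands in for a real repository call.
  public class ExampleAuthorityConnector extends BaseAuthorityConnector
  {
    @Override
    public AuthorizationResponse getAuthorizationResponse(String userName)
      throws ManifoldCFException
    {
      String[] tokens = lookupTokens(userName);
      if (tokens == null)
        return RESPONSE_USERNOTFOUND;    // unknown user: emit the global deny token
      return new AuthorizationResponse(tokens,AuthorizationResponse.RESPONSE_OK);
    }

    private String[] lookupTokens(String userName)
    {
      return new String[0];              // placeholder for the repository lookup
    }
  }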
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/authority/AuthorityConnection.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/authority/AuthorityConnection.java
index 0194c13..b154afe 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/authority/AuthorityConnection.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/authority/AuthorityConnection.java
@@ -36,6 +36,9 @@
   protected String className = null;
   protected ConfigParams configParams = new ConfigParams();
   protected int maxCount = 100;
+  protected String prerequisiteMapping = null;
+  protected String authDomain = null;
+  protected String authGroup = null;
 
   /** Constructor.
   */
@@ -55,6 +58,9 @@
     rval.className = className;
     rval.maxCount = maxCount;
     rval.configParams = configParams.duplicate();
+    rval.prerequisiteMapping = prerequisiteMapping;
+    rval.authDomain = authDomain;
+    rval.authGroup = authGroup;
     return rval;
   }
 
@@ -146,4 +152,53 @@
     return maxCount;
   }
 
+  /** Set the prerequisite mapper, if any.
+  *@param mapping is the name of the mapping connection to use to get the input user name,
+  *  or null.
+  */
+  public void setPrerequisiteMapping(String mapping)
+  {
+    prerequisiteMapping = mapping;
+  }
+  
+  /** Get the prerequisite mapper, if any.
+  *@return the mapping connection name whose output should be used as the input user name.
+  */
+  public String getPrerequisiteMapping()
+  {
+    return prerequisiteMapping;
+  }
+
+  /** Set the authorization domain.
+  *@param domain is the authorization domain.
+  */
+  public void setAuthDomain(String domain)
+  {
+    authDomain = domain;
+  }
+  
+  /** Get the authorization domain.
+  *@return the authorization domain.
+  */
+  public String getAuthDomain()
+  {
+    return authDomain;
+  }
+
+  /** Set authorization group.
+  *@param groupName is the name of the group.
+  */
+  public void setAuthGroup(String groupName)
+  {
+    authGroup = groupName;
+  }
+  
+  /** Get the authorization group.
+  *@return the group.
+  */
+  public String getAuthGroup()
+  {
+    return authGroup;
+  }
+
 }
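Taken together, the three new properties let an authority connection declare which authorization domain it serves, which authority group its tokens belong to, and which mapping connection (if any) must run first to produce the input user name. A configuration sketch, with every name hypothetical and connManager assumed to be an IAuthorityConnectionManager obtained from its factory:

  IAuthorityConnection conn = connManager.create();
  conn.setName("AD authority");
  conn.setDescription("Authority for the corporate directory");
  conn.setClassName("com.example.ExampleAuthorityConnector");
  conn.setMaxConnections(10);
  conn.setAuthDomain("ActiveDirectory");     // authorization domain this authority serves
  conn.setAuthGroup("engineering");          // authority group the returned tokens belong to
  conn.setPrerequisiteMapping(null);         // no mapping connection feeds this authority
  connManager.save(conn);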
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/authority/AuthorityConnectionManager.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/authority/AuthorityConnectionManager.java
index de222b9..e77fa9c 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/authority/AuthorityConnectionManager.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/authority/AuthorityConnectionManager.java
@@ -39,6 +39,8 @@
  * <tr><td>classname</td><td>VARCHAR(255)</td><td></td></tr>
  * <tr><td>maxcount</td><td>BIGINT</td><td></td></tr>
  * <tr><td>configxml</td><td>LONGTEXT</td><td></td></tr>
+ * <tr><td>mappingname</td><td>VARCHAR(32)</td><td></td></tr>
+ * <tr><td>authdomainname</td><td>VARCHAR(255)</td><td></td></tr>
+ * <tr><td>groupname</td><td>VARCHAR(32)</td><td>Reference:authgroups.groupname</td></tr>
  * </table>
  * <br><br>
  * 
@@ -55,9 +57,10 @@
   protected final static String classNameField = "classname";
   protected final static String maxCountField = "maxcount";
   protected final static String configField = "configxml";
-
-  protected static Random random = new Random();
-
+  protected final static String mappingField = "mappingname";
+  protected final static String authDomainField = "authdomainname";
+  protected final static String groupNameField = "groupname";
+  
   // Cache manager
   ICacheManager cacheManager;
   // Thread context
@@ -77,9 +80,13 @@
 
   /** Install the manager.
   */
+  @Override
   public void install()
     throws ManifoldCFException
   {
+    // First, get the authority manager table name and name column
+    IAuthorityGroupManager authMgr = AuthorityGroupManagerFactory.make(threadContext);
+
     // Always do a loop, in case upgrade needs it.
     while (true)
     {
@@ -93,21 +100,121 @@
         map.put(classNameField,new ColumnDescription("VARCHAR(255)",false,false,null,null,false));
         map.put(maxCountField,new ColumnDescription("BIGINT",false,false,null,null,false));
         map.put(configField,new ColumnDescription("LONGTEXT",false,true,null,null,false));
+        map.put(mappingField,new ColumnDescription("VARCHAR(32)",false,true,null,null,false));
+        map.put(authDomainField,new ColumnDescription("VARCHAR(255)",false,true,null,null,false));
+        map.put(groupNameField,new ColumnDescription("VARCHAR(32)",false,false,
+          authMgr.getTableName(),authMgr.getGroupNameColumn(),false));
         performCreate(map,null);
       }
       else
       {
-        // Upgrade code goes here
+        // Add the mappingField column
+        ColumnDescription cd = (ColumnDescription)existing.get(mappingField);
+        if (cd == null)
+        {
+          Map addMap = new HashMap();
+          addMap.put(mappingField,new ColumnDescription("VARCHAR(32)",false,true,null,null,false));
+          performAlter(addMap,null,null,null);
+        }
+        // Add the authDomainField column
+        cd = (ColumnDescription)existing.get(authDomainField);
+        if (cd == null)
+        {
+          Map addMap = new HashMap();
+          addMap.put(authDomainField,new ColumnDescription("VARCHAR(255)",false,true,null,null,false));
+          performAlter(addMap,null,null,null);
+        }
+        cd = (ColumnDescription)existing.get(groupNameField);
+        if (cd == null)
+        {
+          Map addMap = new HashMap();
+          addMap.put(groupNameField,new ColumnDescription("VARCHAR(32)",false,true,
+            authMgr.getTableName(),authMgr.getGroupNameColumn(),false));
+          performAlter(addMap,null,null,null);
+          boolean revert = true;
+          try
+          {
+            ArrayList params = new ArrayList();
+            IResultSet set = performQuery("SELECT "+nameField+","+descriptionField+" FROM "+getTableName(),null,null,null);
+            for (int i = 0 ; i < set.getRowCount() ; i++)
+            {
+              IResultRow row = set.getRow(i);
+              String authName = (String)row.getValue(nameField);
+              String authDescription = (String)row.getValue(descriptionField);
+              // Attempt to create a matching auth group.  This will fail if the group
+              // already exists
+              IAuthorityGroup grp = authMgr.create();
+              grp.setName(authName);
+              grp.setDescription(authDescription);
+              try
+              {
+                authMgr.save(grp);
+              }
+              catch (ManifoldCFException e)
+              {
+                if (e.getErrorCode() == ManifoldCFException.INTERRUPTED)
+                  throw e;
+                // Fall through; the row exists already
+              }
+              Map<String,String> map = new HashMap<String,String>();
+              map.put(groupNameField,authName);
+              params.clear();
+              String query = buildConjunctionClause(params,new ClauseDescription[]{
+                new UnitaryClause(nameField,authName)});
+              performUpdate(map," WHERE "+query,params,null);
+            }
+            Map modifyMap = new HashMap();
+            modifyMap.put(groupNameField,new ColumnDescription("VARCHAR(32)",false,false,
+              authMgr.getTableName(),authMgr.getGroupNameColumn(),false));
+            performAlter(null,modifyMap,null,null);
+            revert = false;
+          }
+          finally
+          {
+            if (revert)
+            {
+              // Upgrade failed; back out our changes
+              List<String> deleteList = new ArrayList<String>();
+              deleteList.add(groupNameField);
+              performAlter(null,null,deleteList,null);
+            }
+          }
+        }
       }
 
       // Index management goes here
+      IndexDescription authDomainIndex = new IndexDescription(false,new String[]{authDomainField});
+      IndexDescription authorityIndex = new IndexDescription(false,new String[]{groupNameField});
 
+      // Get rid of indexes that shouldn't be there
+      Map indexes = getTableIndexes(null,null);
+      Iterator iter = indexes.keySet().iterator();
+      while (iter.hasNext())
+      {
+        String indexName = (String)iter.next();
+        IndexDescription id = (IndexDescription)indexes.get(indexName);
+
+        if (authDomainIndex != null && id.equals(authDomainIndex))
+          authDomainIndex = null;
+        else if (authorityIndex != null && id.equals(authorityIndex))
+          authorityIndex = null;
+        else if (indexName.indexOf("_pkey") == -1)
+          // This index shouldn't be here; drop it
+          performRemoveIndex(indexName);
+      }
+
+      // Add the ones we didn't find
+      if (authDomainIndex != null)
+        performAddIndex(null,authDomainIndex);
+      if (authorityIndex != null)
+        performAddIndex(null,authorityIndex);
       break;
     }
   }
 
   /** Uninstall the manager.
   */
+  @Override
   public void deinstall()
     throws ManifoldCFException
   {
@@ -115,11 +222,12 @@
   }
 
   /** Export configuration */
+  @Override
   public void exportConfiguration(java.io.OutputStream os)
     throws java.io.IOException, ManifoldCFException
   {
     // Write a version indicator
-    ManifoldCF.writeDword(os,1);
+    ManifoldCF.writeDword(os,3);
     // Get the authority list
     IAuthorityConnection[] list = getAllConnections();
     // Write the number of authorities
@@ -134,35 +242,138 @@
       ManifoldCF.writeString(os,conn.getClassName());
       ManifoldCF.writeString(os,conn.getConfigParams().toXML());
       ManifoldCF.writeDword(os,conn.getMaxConnections());
+      ManifoldCF.writeString(os,conn.getPrerequisiteMapping());
+      ManifoldCF.writeString(os,conn.getAuthDomain());
+      ManifoldCF.writeString(os,conn.getAuthGroup());
     }
   }
 
   /** Import configuration */
+  @Override
   public void importConfiguration(java.io.InputStream is)
     throws java.io.IOException, ManifoldCFException
   {
+    IAuthorityGroupManager authMgr = AuthorityGroupManagerFactory.make(threadContext);
     int version = ManifoldCF.readDword(is);
-    if (version != 1)
+    if (version < 1 || version > 3)
       throw new java.io.IOException("Unknown authority configuration version: "+Integer.toString(version));
     int count = ManifoldCF.readDword(is);
     int i = 0;
     while (i < count)
     {
       IAuthorityConnection conn = create();
-      conn.setName(ManifoldCF.readString(is));
-      conn.setDescription(ManifoldCF.readString(is));
+      String name = ManifoldCF.readString(is);
+      String description = ManifoldCF.readString(is);
+      conn.setName(name);
+      conn.setDescription(description);
       conn.setClassName(ManifoldCF.readString(is));
       conn.getConfigParams().fromXML(ManifoldCF.readString(is));
       conn.setMaxConnections(ManifoldCF.readDword(is));
+      if (version >= 2)
+      {
+        conn.setPrerequisiteMapping(ManifoldCF.readString(is));
+        if (version >= 3)
+        {
+          conn.setAuthDomain(ManifoldCF.readString(is));
+          conn.setAuthGroup(ManifoldCF.readString(is));
+        }
+      }
+      // For import files from MCF versions older than 1.5...
+      if (conn.getAuthGroup() == null || conn.getAuthGroup().length() == 0)
+      {
+        // Attempt to create a matching auth group.  This will fail if the group
+        // already exists
+        IAuthorityGroup grp = authMgr.create();
+        grp.setName(name);
+        grp.setDescription(description);
+        try
+        {
+          authMgr.save(grp);
+        }
+        catch (ManifoldCFException e)
+        {
+          if (e.getErrorCode() == ManifoldCFException.INTERRUPTED)
+            throw e;
+          // Fall through; the row exists already
+        }
+        conn.setAuthGroup(name);
+      }
+      
       // Attempt to save this connection
       save(conn);
       i++;
     }
   }
 
-  /** Obtain a list of the repository connections, ordered by name.
+  /** Return true if the specified authority group name is referenced.
+  *@param groupName is the authority group name.
+  *@return true if referenced, false otherwise.
+  */
+  @Override
+  public boolean isGroupReferenced(String groupName)
+    throws ManifoldCFException
+  {
+    StringSetBuffer ssb = new StringSetBuffer();
+    ssb.add(getAuthorityConnectionsKey());
+    StringSet localCacheKeys = new StringSet(ssb);
+    ArrayList params = new ArrayList();
+    String query = buildConjunctionClause(params,new ClauseDescription[]{
+      new UnitaryClause(groupNameField,groupName)});
+    IResultSet set = performQuery("SELECT "+nameField+" FROM "+getTableName()+" WHERE "+query,params,
+      localCacheKeys,null);
+    return set.getRowCount() > 0;
+  }
+
+  /** Obtain a list of the authority connections which correspond to an auth domain.
+  *@param authDomain is the domain to get connections for.
   *@return an array of connection objects.
   */
+  @Override
+  public IAuthorityConnection[] getDomainConnections(String authDomain)
+    throws ManifoldCFException
+  {
+    beginTransaction();
+    try
+    {
+      // Read the connections for the domain
+      StringSetBuffer ssb = new StringSetBuffer();
+      ssb.add(getAuthorityConnectionsKey());
+      StringSet localCacheKeys = new StringSet(ssb);
+      StringBuilder sb = new StringBuilder("SELECT ");
+      ArrayList list = new ArrayList();
+      sb.append(nameField).append(" FROM ").append(getTableName()).append(" WHERE ");
+      sb.append(buildConjunctionClause(list,new ClauseDescription[]{new UnitaryClause(authDomainField,authDomain)}));
+      IResultSet set = performQuery(sb.toString(),list,localCacheKeys,null);
+      String[] names = new String[set.getRowCount()];
+      int i = 0;
+      while (i < names.length)
+      {
+        IResultRow row = set.getRow(i);
+        names[i] = row.getValue(nameField).toString();
+        i++;
+      }
+      return loadMultiple(names);
+    }
+    catch (ManifoldCFException e)
+    {
+      signalRollback();
+      throw e;
+    }
+    catch (Error e)
+    {
+      signalRollback();
+      throw e;
+    }
+    finally
+    {
+      endTransaction();
+    }
+  }
+  
+  /** Obtain a list of the authority connections, ordered by name.
+  *@return an array of connection objects.
+  */
+  @Override
   public IAuthorityConnection[] getAllConnections()
     throws ManifoldCFException
   {
@@ -205,6 +416,7 @@
   *@param name is the name of the repository connection.
   *@return the loaded connection object, or null if not found.
   */
+  @Override
   public IAuthorityConnection load(String name)
     throws ManifoldCFException
   {
@@ -215,6 +427,7 @@
   *@param names are the names to load.
   *@return the loaded connection objects.
   */
+  @Override
   public IAuthorityConnection[] loadMultiple(String[] names)
     throws ManifoldCFException
   {
@@ -238,6 +451,7 @@
   /** Create a new repository connection object.
   *@return the new object.
   */
+  @Override
   public IAuthorityConnection create()
     throws ManifoldCFException
   {
@@ -249,6 +463,7 @@
   *@param object is the object to save.
   *@return true if the object is created, false otherwise.
   */
+  @Override
   public boolean save(IAuthorityConnection object)
     throws ManifoldCFException
   {
@@ -281,6 +496,9 @@
             values.put(classNameField,object.getClassName());
             values.put(maxCountField,new Long((long)object.getMaxConnections()));
             values.put(configField,object.getConfigParams().toXML());
+            values.put(mappingField,object.getPrerequisiteMapping());
+            values.put(authDomainField,object.getAuthDomain());
+            values.put(groupNameField,object.getAuthGroup());
 
             boolean isCreated;
             
@@ -349,11 +567,10 @@
   *@param name is the name of the connection to delete.  If the
   * name does not exist, no error is returned.
   */
+  @Override
   public void delete(String name)
     throws ManifoldCFException
   {
-    // Grab repository connection manager handle, to check on legality of deletion.
-    IRepositoryConnectionManager repoManager = RepositoryConnectionManagerFactory.make(threadContext);
 
     StringSetBuffer ssb = new StringSetBuffer();
     ssb.add(getAuthorityConnectionsKey());
@@ -365,9 +582,6 @@
       beginTransaction();
       try
       {
-        // Check if anything refers to this connection name
-        if (repoManager.isReferenced(name))
-          throw new ManifoldCFException("Can't delete authority connection '"+name+"': existing repository connections refer to it");
         ManifoldCF.noteConfigurationChange();
         ArrayList params = new ArrayList();
         String query = buildConjunctionClause(params,new ClauseDescription[]{
@@ -400,11 +614,31 @@
   /** Get the authority connection name column.
   *@return the name column.
   */
+  @Override
   public String getAuthorityNameColumn()
   {
     return nameField;
   }
 
+  /** Return true if the specified mapping name is referenced.
+  *@param mappingName is the mapping name.
+  *@return true if referenced, false otherwise.
+  */
+  @Override
+  public boolean isMappingReferenced(String mappingName)
+    throws ManifoldCFException
+  {
+    StringSetBuffer ssb = new StringSetBuffer();
+    ssb.add(getAuthorityConnectionsKey());
+    StringSet localCacheKeys = new StringSet(ssb);
+    ArrayList params = new ArrayList();
+    String query = buildConjunctionClause(params,new ClauseDescription[]{
+      new UnitaryClause(mappingField,mappingName)});
+    IResultSet set = performQuery("SELECT "+nameField+" FROM "+getTableName()+" WHERE "+query,params,
+      localCacheKeys,null);
+    return set.getRowCount() > 0;
+  }
+
   // Caching strategy: Individual connection descriptions are cached, and there is a global cache key for the list of
   // repository connections.
 
@@ -514,6 +748,9 @@
       rc.setDescription((String)row.getValue(descriptionField));
       rc.setClassName((String)row.getValue(classNameField));
       rc.setMaxConnections((int)((Long)row.getValue(maxCountField)).longValue());
+      rc.setPrerequisiteMapping((String)row.getValue(mappingField));
+      rc.setAuthDomain((String)row.getValue(authDomainField));
+      rc.setAuthGroup((String)row.getValue(groupNameField));
       String xml = (String)row.getValue(configField);
       if (xml != null && xml.length() > 0)
         rc.getConfigParams().fromXML(xml);
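
For reference, a minimal sketch (not part of the patch itself) of the per-version read order that the importer above implies; the ManifoldCF.readString/readDword helpers and the fallback that creates a same-named authority group for pre-1.5 export files come from the hunk above, while the local variable names here are illustrative only:

    String name = ManifoldCF.readString(is);            // all versions
    String description = ManifoldCF.readString(is);     // all versions
    String className = ManifoldCF.readString(is);       // all versions
    String configXML = ManifoldCF.readString(is);       // all versions
    int maxConnections = ManifoldCF.readDword(is);      // all versions
    String prerequisiteMapping = (version >= 2) ? ManifoldCF.readString(is) : null;
    String authDomain = (version >= 3) ? ManifoldCF.readString(is) : null;
    String authGroup = (version >= 3) ? ManifoldCF.readString(is) : null;
    // Pre-1.5 files carry no group, so the importer creates (or reuses) an
    // authority group with the same name and description as the connection.
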
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/authorityconnectorpool/AuthorityConnectorPool.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/authorityconnectorpool/AuthorityConnectorPool.java
new file mode 100644
index 0000000..f382b3f
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/authorityconnectorpool/AuthorityConnectorPool.java
@@ -0,0 +1,179 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.authorities.authorityconnectorpool;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.authorities.interfaces.*;
+
+import java.util.*;
+import java.io.*;
+
+/** An implementation of IAuthorityConnectorPool.
+* Coordination and allocation among cluster members are managed within.
+* These objects are thread-local, so do not share them among threads.
+*/
+public class AuthorityConnectorPool implements IAuthorityConnectorPool
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  /** Local connector pool */
+  protected final static LocalPool localPool = new LocalPool();
+
+  // This implementation is a place-holder for the real one, which will likely fold in the pooling code
+  // as we strip it out of AuthorityConnectorFactory.
+
+  /** Thread context */
+  protected final IThreadContext threadContext;
+  
+  /** Constructor */
+  public AuthorityConnectorPool(IThreadContext threadContext)
+    throws ManifoldCFException
+  {
+    this.threadContext = threadContext;
+  }
+  
+  /** Get multiple authority connectors, all at once.  Do this in a particular order
+  * so that any connector exhaustion will not cause a deadlock.
+  *@param orderingKeys are the keys which determine in what order the connectors are obtained.
+  *@param authorityConnections are the connections to use to build the connector instances.
+  */
+  @Override
+  public IAuthorityConnector[] grabMultiple(String[] orderingKeys, IAuthorityConnection[] authorityConnections)
+    throws ManifoldCFException
+  {
+    // For now, use the AuthorityConnectorFactory method.  This will require us to extract info
+    // from each authority connection, however.
+    String[] connectionNames = new String[authorityConnections.length];
+    String[] classNames = new String[authorityConnections.length];
+    ConfigParams[] configInfos = new ConfigParams[authorityConnections.length];
+    int[] maxPoolSizes = new int[authorityConnections.length];
+    
+    for (int i = 0; i < authorityConnections.length; i++)
+    {
+      connectionNames[i] = authorityConnections[i].getName();
+      classNames[i] = authorityConnections[i].getClassName();
+      configInfos[i] = authorityConnections[i].getConfigParams();
+      maxPoolSizes[i] = authorityConnections[i].getMaxConnections();
+    }
+    return localPool.grabMultiple(threadContext,
+      orderingKeys, connectionNames, classNames, configInfos, maxPoolSizes);
+  }
+
+  /** Get an authority connector.
+  * The connector is specified by an authority connection object.
+  *@param authorityConnection is the authority connection to base the connector instance on.
+  */
+  @Override
+  public IAuthorityConnector grab(IAuthorityConnection authorityConnection)
+    throws ManifoldCFException
+  {
+    return localPool.grab(threadContext, authorityConnection.getName(), authorityConnection.getClassName(),
+      authorityConnection.getConfigParams(), authorityConnection.getMaxConnections());
+  }
+
+  /** Release multiple authority connectors.
+  *@param connections are the connections describing the instances to release.
+  *@param connectors are the connector instances to release.
+  */
+  @Override
+  public void releaseMultiple(IAuthorityConnection[] connections, IAuthorityConnector[] connectors)
+    throws ManifoldCFException
+  {
+    String[] connectionNames = new String[connections.length];
+    for (int i = 0; i < connections.length; i++)
+    {
+      connectionNames[i] = connections[i].getName();
+    }
+    localPool.releaseMultiple(threadContext, connectionNames, connectors);
+  }
+
+  /** Release an authority connector.
+  *@param connection is the connection describing the instance to release.
+  *@param connector is the connector to release.
+  */
+  @Override
+  public void release(IAuthorityConnection connection, IAuthorityConnector connector)
+    throws ManifoldCFException
+  {
+    localPool.release(threadContext, connection.getName(), connector);
+  }
+
+  /** Idle notification for inactive authority connector handles.
+  * This method polls all inactive handles.
+  */
+  @Override
+  public void pollAllConnectors()
+    throws ManifoldCFException
+  {
+    localPool.pollAllConnectors(threadContext);
+  }
+
+  /** Flush only those connector handles that are currently unused.
+  */
+  @Override
+  public void flushUnusedConnectors()
+    throws ManifoldCFException
+  {
+    localPool.flushUnusedConnectors(threadContext);
+  }
+
+  /** Clean up all open authority connector handles.
+  * This method is called when the connector pool needs to be flushed,
+  * to free resources.
+  */
+  @Override
+  public void closeAllConnectors()
+    throws ManifoldCFException
+  {
+    localPool.closeAllConnectors(threadContext);
+  }
+
+  /** Actual static authority connector pool */
+  protected static class LocalPool extends org.apache.manifoldcf.core.connectorpool.ConnectorPool<IAuthorityConnector>
+  {
+    public LocalPool()
+    {
+      super("_AUTHORITYCONNECTORPOOL_");
+    }
+    
+    @Override
+    protected boolean isInstalled(IThreadContext tc, String className)
+      throws ManifoldCFException
+    {
+      IAuthorityConnectorManager connectorManager = AuthorityConnectorManagerFactory.make(tc);
+      return connectorManager.isInstalled(className);
+    }
+    
+    @Override
+    protected boolean isConnectionNameValid(IThreadContext tc, String connectionName)
+      throws ManifoldCFException
+    {
+      IAuthorityConnectionManager connectionManager = AuthorityConnectionManagerFactory.make(tc);
+      return connectionManager.load(connectionName) != null;
+    }
+
+    public IAuthorityConnector[] grabMultiple(IThreadContext tc, String[] orderingKeys, String connectionNames[], String[] classNames, ConfigParams[] configInfos, int[] maxPoolSizes)
+      throws ManifoldCFException
+    {
+      return grabMultiple(tc,IAuthorityConnector.class,orderingKeys,connectionNames,classNames,configInfos,maxPoolSizes);
+    }
+
+  }
+
+}
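
To make the intended call pattern concrete, here is a minimal usage sketch, assuming a valid IThreadContext, a saved authority connection named "myAuthority" (hypothetical), and the connector's existing getAuthorizationResponse(userName) method:

    IAuthorityConnectorPool pool = AuthorityConnectorPoolFactory.make(threadContext);
    IAuthorityConnectionManager connMgr = AuthorityConnectionManagerFactory.make(threadContext);
    IAuthorityConnection connection = connMgr.load("myAuthority");  // assumed to exist
    IAuthorityConnector connector = pool.grab(connection);          // may be null if the class is not installed
    if (connector != null)
    {
      try
      {
        AuthorizationResponse response = connector.getAuthorizationResponse("someuser");
        // ... use response.getAccessTokens() ...
      }
      finally
      {
        pool.release(connection, connector);
      }
    }

The grabMultiple/releaseMultiple variants follow the same pattern when several authorities must be consulted in one pass.
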
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/AuthorityConnectorFactory.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/AuthorityConnectorFactory.java
index 4811b26..d2e2aa8 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/AuthorityConnectorFactory.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/AuthorityConnectorFactory.java
@@ -26,24 +26,41 @@
 
 /** This class manages a pool of authority connectors.
 */
-public class AuthorityConnectorFactory
+public class AuthorityConnectorFactory extends ConnectorFactory<IAuthorityConnector>
 {
-  // Pool hash table.
-  // Keyed by PoolKey; value is Pool
-  protected static Map poolHash = new HashMap();
-
-  private AuthorityConnectorFactory()
+  // Static factory
+  protected final static AuthorityConnectorFactory thisFactory = new AuthorityConnectorFactory();
+  
+  protected AuthorityConnectorFactory()
   {
   }
 
+  @Override
+  protected boolean isInstalled(IThreadContext tc, String className)
+    throws ManifoldCFException
+  {
+    IAuthorityConnectorManager connMgr = AuthorityConnectorManagerFactory.make(tc);
+    return connMgr.isInstalled(className);
+  }
+
+  /** Get the default response from a connector.  Called if the connection attempt fails.
+  */
+  public AuthorizationResponse getThisDefaultAuthorizationResponse(IThreadContext threadContext, String className, String userName)
+    throws ManifoldCFException
+  {
+    IAuthorityConnector connector = getThisConnector(threadContext,className);
+    if (connector == null)
+      return null;
+    return connector.getDefaultAuthorizationResponse(userName);
+  }
+
   /** Install connector.
   *@param className is the class name.
   */
   public static void install(IThreadContext threadContext, String className)
     throws ManifoldCFException
   {
-    IAuthorityConnector connector = getConnectorNoCheck(className);
-    connector.install(threadContext);
+    thisFactory.installThis(threadContext,className);
   }
 
   /** Uninstall connector.
@@ -52,8 +69,7 @@
   public static void deinstall(IThreadContext threadContext, String className)
     throws ManifoldCFException
   {
-    IAuthorityConnector connector = getConnectorNoCheck(className);
-    connector.deinstall(threadContext);
+    thisFactory.deinstallThis(threadContext,className);
   }
 
   /** Get the default response from a connector.  Called if the connection attempt fails.
@@ -61,10 +77,7 @@
   public static AuthorizationResponse getDefaultAuthorizationResponse(IThreadContext threadContext, String className, String userName)
     throws ManifoldCFException
   {
-    IAuthorityConnector connector = getConnector(threadContext,className);
-    if (connector == null)
-      return null;
-    return connector.getDefaultAuthorizationResponse(userName);
+    return thisFactory.getThisDefaultAuthorizationResponse(threadContext,className,userName);
   }
 
   /** Output the configuration header section.
@@ -72,10 +85,7 @@
   public static void outputConfigurationHeader(IThreadContext threadContext, String className, IHTTPOutput out, Locale locale, ConfigParams parameters, ArrayList tabsArray)
     throws ManifoldCFException, IOException
   {
-    IAuthorityConnector connector = getConnector(threadContext, className);
-    if (connector == null)
-      return;
-    connector.outputConfigurationHeader(threadContext,out,locale,parameters,tabsArray);
+    thisFactory.outputThisConfigurationHeader(threadContext,className,out,locale,parameters,tabsArray);
   }
 
   /** Output the configuration body section.
@@ -83,10 +93,7 @@
   public static void outputConfigurationBody(IThreadContext threadContext, String className, IHTTPOutput out, Locale locale, ConfigParams parameters, String tabName)
     throws ManifoldCFException, IOException
   {
-    IAuthorityConnector connector = getConnector(threadContext, className);
-    if (connector == null)
-      return;
-    connector.outputConfigurationBody(threadContext,out,locale,parameters,tabName);
+    thisFactory.outputThisConfigurationBody(threadContext,className,out,locale,parameters,tabName);
   }
 
   /** Process configuration post data for a connector.
@@ -94,10 +101,7 @@
   public static String processConfigurationPost(IThreadContext threadContext, String className, IPostParameters variableContext, Locale locale, ConfigParams configParams)
     throws ManifoldCFException
   {
-    IAuthorityConnector connector = getConnector(threadContext, className);
-    if (connector == null)
-      return null;
-    return connector.processConfigurationPost(threadContext,variableContext,locale,configParams);
+    return thisFactory.processThisConfigurationPost(threadContext,className,variableContext,locale,configParams);
   }
   
   /** View connector configuration.
@@ -105,11 +109,7 @@
   public static void viewConfiguration(IThreadContext threadContext, String className, IHTTPOutput out, Locale locale, ConfigParams configParams)
     throws ManifoldCFException, IOException
   {
-    IAuthorityConnector connector = getConnector(threadContext, className);
-    // We want to be able to view connections even if they have unregistered connectors.
-    if (connector == null)
-      return;
-    connector.viewConfiguration(threadContext,out,locale,configParams);
+    thisFactory.viewThisConfiguration(threadContext,className,out,locale,configParams);
   }
 
   /** Get a repository connector instance, but do NOT check if class is installed first!
@@ -119,477 +119,8 @@
   public static IAuthorityConnector getConnectorNoCheck(String className)
     throws ManifoldCFException
   {
-    try
-    {
-      Class theClass = ManifoldCF.findClass(className);
-      Class[] argumentClasses = new Class[0];
-      // Look for a constructor
-      Constructor c = theClass.getConstructor(argumentClasses);
-      Object[] arguments = new Object[0];
-      Object o = c.newInstance(arguments);
-      if (!(o instanceof IAuthorityConnector))
-        throw new ManifoldCFException("Class '"+className+"' does not implement IAuthorityConnector.");
-      return (IAuthorityConnector)o;
-    }
-    catch (InvocationTargetException e)
-    {
-      Throwable z = e.getTargetException();
-      if (z instanceof Error)
-        throw (Error)z;
-      else if (z instanceof RuntimeException)
-        throw (RuntimeException)z;
-      else
-        throw (ManifoldCFException)z;
-    }
-    catch (ClassNotFoundException e)
-    {
-      throw new ManifoldCFException("No authority connector class '"+className+"' was found.",
-        e);
-    }
-    catch (NoSuchMethodException e)
-    {
-      throw new ManifoldCFException("No appropriate constructor for IAuthorityConnector implementation '"+
-        className+"'.  Need xxx().",
-        e);
-    }
-    catch (SecurityException e)
-    {
-      throw new ManifoldCFException("Protected constructor for IAuthorityConnector implementation '"+className+"'",
-        e);
-    }
-    catch (IllegalAccessException e)
-    {
-      throw new ManifoldCFException("Unavailable constructor for IAuthorityConnector implementation '"+className+"'",
-        e);
-    }
-    catch (IllegalArgumentException e)
-    {
-      throw new ManifoldCFException("Shouldn't happen!!!",e);
-    }
-    catch (InstantiationException e)
-    {
-      throw new ManifoldCFException("InstantiationException for IAuthorityConnector implementation '"+className+"'",
-        e);
-    }
-    catch (ExceptionInInitializerError e)
-    {
-      throw new ManifoldCFException("ExceptionInInitializerError for IAuthorityConnector implementation '"+className+"'",
-        e);
-    }
-
+    return thisFactory.getThisConnectorNoCheck(className);
   }
 
-  /** Get a repository connector instance.
-  *@param className is the class name.
-  *@return the instance.
-  */
-  protected static IAuthorityConnector getConnector(IThreadContext threadContext, String className)
-    throws ManifoldCFException
-  {
-    IAuthorityConnectorManager connMgr = AuthorityConnectorManagerFactory.make(threadContext);
-    if (connMgr.isInstalled(className) == false)
-      return null;
-
-    try
-    {
-      Class theClass = ManifoldCF.findClass(className);
-      Class[] argumentClasses = new Class[0];
-      // Look for a constructor
-      Constructor c = theClass.getConstructor(argumentClasses);
-      Object[] arguments = new Object[0];
-      Object o = c.newInstance(arguments);
-      if (!(o instanceof IAuthorityConnector))
-        throw new ManifoldCFException("Class '"+className+"' does not implement IAuthorityConnector.");
-      return (IAuthorityConnector)o;
-    }
-    catch (InvocationTargetException e)
-    {
-      Throwable z = e.getTargetException();
-      if (z instanceof Error)
-        throw (Error)z;
-      else if (z instanceof RuntimeException)
-        throw (RuntimeException)z;
-      else
-        throw (ManifoldCFException)z;
-    }
-    catch (ClassNotFoundException e)
-    {
-      // If we get this exception, it may mean that the authority is not registered.
-      if (connMgr.isInstalled(className) == false)
-        return null;
-
-      throw new ManifoldCFException("No authority connector class '"+className+"' was found.",
-        e);
-    }
-    catch (NoSuchMethodException e)
-    {
-      throw new ManifoldCFException("No appropriate constructor for IAuthorityConnector implementation '"+
-        className+"'.  Need xxx().",
-        e);
-    }
-    catch (SecurityException e)
-    {
-      throw new ManifoldCFException("Protected constructor for IAuthorityConnector implementation '"+className+"'",
-        e);
-    }
-    catch (IllegalAccessException e)
-    {
-      throw new ManifoldCFException("Unavailable constructor for IAuthorityConnector implementation '"+className+"'",
-        e);
-    }
-    catch (IllegalArgumentException e)
-    {
-      throw new ManifoldCFException("Shouldn't happen!!!",e);
-    }
-    catch (InstantiationException e)
-    {
-      throw new ManifoldCFException("InstantiationException for IAuthorityConnector implementation '"+className+"'",
-        e);
-    }
-    catch (ExceptionInInitializerError e)
-    {
-      throw new ManifoldCFException("ExceptionInInitializerError for IAuthorityConnector implementation '"+className+"'",
-        e);
-    }
-
-  }
-
-  /** Get a repository connector.
-  * The connector is specified by its class and its parameters.
-  *@param threadContext is the current thread context.
-  *@param className is the name of the class to get a connector for.
-  *@param configInfo are the name/value pairs constituting configuration info
-  * for this class.
-  */
-  public static IAuthorityConnector grab(IThreadContext threadContext,
-    String className, ConfigParams configInfo, int maxPoolSize)
-    throws ManifoldCFException
-  {
-    // System.out.println("In AuthorityConnectorManager.grab()");
-
-    // We want to get handles off the pool and use them.  But the
-    // handles we fetch have to have the right config information.
-
-    // Use the classname and config info to build a pool key
-    PoolKey pk = new PoolKey(className,configInfo);
-    Pool p;
-    synchronized (poolHash)
-    {
-      p = (Pool)poolHash.get(pk);
-      if (p == null)
-      {
-        // Build it again, this time making a copy
-        pk = new PoolKey(className,configInfo.duplicate());
-        p = new Pool(pk,maxPoolSize);
-        poolHash.put(pk,p);
-      }
-    }
-
-    IAuthorityConnector rval = p.getConnector(threadContext);
-    // System.out.println("Leaving AuthorityConnectorManager.grab()");
-    return rval;
-  }
-
-  /** Release a repository connector.
-  *@param connector is the connector to release.
-  */
-  public static void release(IAuthorityConnector connector)
-    throws ManifoldCFException
-  {
-    if (connector == null)
-      return;
-
-    // System.out.println("Releasing an authority connector");
-    // Figure out which pool this goes on, and put it there
-    PoolKey pk = new PoolKey(connector.getClass().getName(),connector.getConfiguration());
-    Pool p;
-    synchronized (poolHash)
-    {
-      p = (Pool)poolHash.get(pk);
-    }
-
-    p.releaseConnector(connector);
-    // System.out.println("Done releasing");
-  }
-
-  /** Idle notification for inactive authority connector handles.
-  * This method polls all inactive handles.
-  */
-  public static void pollAllConnectors(IThreadContext threadContext)
-    throws ManifoldCFException
-  {
-    // Go through the whole pool and notify everyone
-    synchronized (poolHash)
-    {
-      Iterator iter = poolHash.values().iterator();
-      while (iter.hasNext())
-      {
-        Pool p = (Pool)iter.next();
-        p.pollAll(threadContext);
-      }
-    }
-
-  }
-
-  /** Clean up all open authority connector handles.
-  * This method is called when the connector pool needs to be flushed,
-  * to free resources.
-  *@param threadContext is the local thread context.
-  */
-  public static void closeAllConnectors(IThreadContext threadContext)
-    throws ManifoldCFException
-  {
-    // Go through the whole pool and clean it out
-    synchronized (poolHash)
-    {
-      Iterator iter = poolHash.values().iterator();
-      while (iter.hasNext())
-      {
-        Pool p = (Pool)iter.next();
-        p.releaseAll(threadContext);
-      }
-    }
-  }
-
-  /** This is an immutable pool key class, which describes a pool in terms of two independent keys.
-  */
-  public static class PoolKey
-  {
-    protected String className;
-    protected ConfigParams configInfo;
-
-    /** Constructor.
-    */
-    public PoolKey(String className, Map configInfo)
-    {
-      this.className = className;
-      this.configInfo = new ConfigParams(configInfo);
-    }
-
-    public PoolKey(String className, ConfigParams configInfo)
-    {
-      this.className = className;
-      this.configInfo = configInfo;
-    }
-
-    /** Get the class name.
-    *@return the class name.
-    */
-    public String getClassName()
-    {
-      return className;
-    }
-
-    /** Get the config info.
-    *@return the params
-    */
-    public ConfigParams getParams()
-    {
-      return configInfo;
-    }
-
-    /** Hash code.
-    */
-    public int hashCode()
-    {
-      return className.hashCode() + configInfo.hashCode();
-    }
-
-    /** Equals operator.
-    */
-    public boolean equals(Object o)
-    {
-      if (!(o instanceof PoolKey))
-        return false;
-
-      PoolKey pk = (PoolKey)o;
-      return pk.className.equals(className) && pk.configInfo.equals(configInfo);
-    }
-
-  }
-
-  /** This class represents a value in the pool hash, which corresponds to a given key.
-  */
-  public static class Pool
-  {
-    protected ArrayList stack = new ArrayList();
-    protected PoolKey key;
-    protected int numFree;
-
-    /** Constructor
-    */
-    public Pool(PoolKey pk, int maxCount)
-    {
-      key = pk;
-      numFree = maxCount;
-    }
-
-    /** Grab a repository connector.
-    * If none exists, construct it using the information in the pool key.
-    *@return the connector.
-    */
-    public synchronized IAuthorityConnector getConnector(IThreadContext threadContext)
-      throws ManifoldCFException
-    {
-      while (numFree == 0)
-      {
-        try
-        {
-          wait();
-        }
-        catch (InterruptedException e)
-        {
-          throw new ManifoldCFException("Interrupted",e,ManifoldCFException.INTERRUPTED);
-        }
-      }
-
-      if (stack.size() == 0)
-      {
-        String className = key.getClassName();
-        ConfigParams configParams = key.getParams();
-
-        IAuthorityConnectorManager connMgr = AuthorityConnectorManagerFactory.make(threadContext);
-        if (connMgr.isInstalled(className) == false)
-          return null;
-
-        try
-        {
-          Class theClass = ManifoldCF.findClass(className);
-          Class[] argumentClasses = new Class[0];
-          // Look for a constructor
-          Constructor c = theClass.getConstructor(argumentClasses);
-          Object[] arguments = new Object[0];
-          Object o = c.newInstance(arguments);
-          if (!(o instanceof IAuthorityConnector))
-            throw new ManifoldCFException("Class '"+className+"' does not implement IAuthorityConnector.");
-          IAuthorityConnector newrc = (IAuthorityConnector)o;
-          newrc.connect(configParams);
-	  stack.add(newrc);
-        }
-        catch (InvocationTargetException e)
-        {
-          Throwable z = e.getTargetException();
-          if (z instanceof Error)
-            throw (Error)z;
-          else if (z instanceof RuntimeException)
-            throw (RuntimeException)z;
-          else
-            throw (ManifoldCFException)z;
-        }
-        catch (ClassNotFoundException e)
-        {
-          // If we get this exception, it may mean that the authority is not registered.
-          if (connMgr.isInstalled(className) == false)
-            return null;
-
-          throw new ManifoldCFException("No authority connector class '"+className+"' was found.",
-            e);
-        }
-        catch (NoSuchMethodException e)
-        {
-          throw new ManifoldCFException("No appropriate constructor for IAuthorityConnector implementation '"+
-            className+"'.  Need xxx(ConfigParams).",
-            e);
-        }
-        catch (SecurityException e)
-        {
-          throw new ManifoldCFException("Protected constructor for IAuthorityConnector implementation '"+className+"'",
-            e);
-        }
-        catch (IllegalAccessException e)
-        {
-          throw new ManifoldCFException("Unavailable constructor for IAuthorityConnector implementation '"+className+"'",
-            e);
-        }
-        catch (IllegalArgumentException e)
-        {
-          throw new ManifoldCFException("Shouldn't happen!!!",e);
-        }
-        catch (InstantiationException e)
-        {
-          throw new ManifoldCFException("InstantiationException for IAuthorityConnector implementation '"+className+"'",
-            e);
-        }
-        catch (ExceptionInInitializerError e)
-        {
-          throw new ManifoldCFException("ExceptionInInitializerError for IAuthorityConnector implementation '"+className+"'",
-            e);
-        }
-      }
-      
-      // Since thread context set can fail, do that before we remove it from the pool.
-      IAuthorityConnector rc = (IAuthorityConnector)stack.get(stack.size()-1);
-      rc.setThreadContext(threadContext);
-      stack.remove(stack.size()-1);
-      numFree--;
-
-      return rc;
-    }
-
-    /** Release a repository connector to the pool.
-    *@param connector is the connector.
-    */
-    public synchronized void releaseConnector(IAuthorityConnector connector)
-      throws ManifoldCFException
-    {
-      if (connector == null)
-        return;
-
-      // Make sure connector knows it's released
-      connector.clearThreadContext();
-      // Append
-      stack.add(connector);
-      numFree++;
-      notifyAll();
-    }
-
-    /** Notify all free connectors.
-    */
-    public synchronized void pollAll(IThreadContext threadContext)
-      throws ManifoldCFException
-    {
-      int i = 0;
-      while (i < stack.size())
-      {
-        IConnector rc = (IConnector)stack.get(i++);
-        // Notify
-        rc.setThreadContext(threadContext);
-        try
-        {
-          rc.poll();
-        }
-        finally
-        {
-          rc.clearThreadContext();
-        }
-      }
-    }
-
-    /** Release all free connectors.
-    */
-    public synchronized void releaseAll(IThreadContext threadContext)
-      throws ManifoldCFException
-    {
-      while (stack.size() > 0)
-      {
-        // Disconnect
-        IConnector rc = (IConnector)stack.get(stack.size()-1);
-        rc.setThreadContext(threadContext);
-        try
-        {
-          rc.disconnect();
-          stack.remove(stack.size()-1);
-        }
-        finally
-        {
-          rc.clearThreadContext();
-        }
-      }
-    }
-
-  }
-
-
-
 }
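
The refactoring keeps the static entry points but routes them through the shared ConnectorFactory base via the thisFactory singleton.  For example, a caller that needs the fallback tokens when an authority is unreachable would still do something like the following sketch (connector class name hypothetical):

    AuthorizationResponse def = AuthorityConnectorFactory.getDefaultAuthorizationResponse(
      threadContext,
      "org.example.MyAuthorityConnector",   // hypothetical connector class name
      "someuser");
    if (def != null)
    {
      String[] tokens = def.getAccessTokens();
      // ... hand the default tokens to the authority service response ...
    }
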
 
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/AuthorityConnectorPoolFactory.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/AuthorityConnectorPoolFactory.java
new file mode 100644
index 0000000..b253abc
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/AuthorityConnectorPoolFactory.java
@@ -0,0 +1,54 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.authorities.interfaces;
+
+import org.apache.manifoldcf.core.interfaces.*;
+
+import java.util.*;
+
+/** Authority connector pool manager factory.
+*/
+public class AuthorityConnectorPoolFactory
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  // name to use in thread context pool of objects
+  private final static String objectName = "_AuthorityConnectorPoolMgr_";
+
+  private AuthorityConnectorPoolFactory()
+  {
+  }
+
+  /** Make an authority connector pool handle.
+  *@param tc is the thread context.
+  *@return the handle.
+  */
+  public static IAuthorityConnectorPool make(IThreadContext tc)
+    throws ManifoldCFException
+  {
+    Object o = tc.get(objectName);
+    if (o == null || !(o instanceof IAuthorityConnectorPool))
+    {
+      o = new org.apache.manifoldcf.authorities.authorityconnectorpool.AuthorityConnectorPool(tc);
+      tc.save(objectName,o);
+    }
+    return (IAuthorityConnectorPool)o;
+  }
+
+}
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/AuthorityGroupManagerFactory.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/AuthorityGroupManagerFactory.java
new file mode 100644
index 0000000..3631396
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/AuthorityGroupManagerFactory.java
@@ -0,0 +1,57 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.authorities.interfaces;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import java.util.*;
+import org.apache.manifoldcf.authorities.system.ManifoldCF;
+
+/** This is the factory class for authority group manager objects.
+*/
+public class AuthorityGroupManagerFactory
+{
+  // name to use in thread context pool of objects
+  private final static String objectName = "_AuthGroupMgr_";
+
+  private AuthorityGroupManagerFactory()
+  {
+  }
+
+  /** Make an authority group manager handle.
+  *@param tc is the thread context.
+  *@return the handle.
+  */
+  public static IAuthorityGroupManager make(IThreadContext tc)
+    throws ManifoldCFException
+  {
+    Object o = tc.get(objectName);
+    if (o == null || !(o instanceof IAuthorityGroupManager))
+    {
+      IDBInterface database = DBInterfaceFactory.make(tc,
+        ManifoldCF.getMasterDatabaseName(),
+        ManifoldCF.getMasterDatabaseUsername(),
+        ManifoldCF.getMasterDatabasePassword());
+
+      o = new org.apache.manifoldcf.authorities.authgroups.AuthorityGroupManager(tc,database);
+      tc.save(objectName,o);
+    }
+    return (IAuthorityGroupManager)o;
+  }
+
+}
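
As a usage sketch (names hypothetical), creating or updating an authority group through this factory and the IAuthorityGroupManager interface introduced later in this patch looks roughly like:

    IAuthorityGroupManager groupMgr = AuthorityGroupManagerFactory.make(threadContext);
    IAuthorityGroup group = groupMgr.create();
    group.setName("mygroup");                        // hypothetical group name
    group.setDescription("Example authority group");
    boolean created = groupMgr.save(group);          // true if newly created, false if updated
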
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/AuthorizationDomainManagerFactory.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/AuthorizationDomainManagerFactory.java
new file mode 100644
index 0000000..3bd389c
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/AuthorizationDomainManagerFactory.java
@@ -0,0 +1,58 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.authorities.interfaces;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import java.util.*;
+import org.apache.manifoldcf.authorities.system.ManifoldCF;
+
+/** This class is the factory for the Authorization Domain Manager.
+*/
+public class AuthorizationDomainManagerFactory
+{
+  protected static final String connMgr = "_AuthorizationDomainManager_";
+
+  private AuthorizationDomainManagerFactory()
+  {
+  }
+
+  /** Construct an authorization domain manager.
+  *@param tc is the thread context.
+  *@return the authorization domain manager handle.
+  */
+  public static IAuthorizationDomainManager make(IThreadContext tc)
+    throws ManifoldCFException
+  {
+    Object o = tc.get(connMgr);
+    if (o == null || !(o instanceof IAuthorizationDomainManager))
+    {
+
+      IDBInterface database = DBInterfaceFactory.make(tc,
+        ManifoldCF.getMasterDatabaseName(),
+        ManifoldCF.getMasterDatabaseUsername(),
+        ManifoldCF.getMasterDatabasePassword());
+
+      o = new org.apache.manifoldcf.authorities.authdomains.AuthorizationDomainManager(tc,database);
+      tc.save(connMgr,o);
+    }
+    return (IAuthorizationDomainManager)o;
+  }
+
+
+}
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/CacheKeyFactory.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/CacheKeyFactory.java
index ddb7e23..bed2983 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/CacheKeyFactory.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/CacheKeyFactory.java
@@ -28,6 +28,23 @@
   {
   }
 
+  /** Construct a key which represents the general list of authority groups.
+  *@return the cache key.
+  */
+  public static String makeAuthorityGroupsKey()
+  {
+    return "AUTHORITYGROUPS";
+  }
+
+  /** Construct a key which represents an individual authority group.
+  *@param groupName is the name of the group.
+  *@return the cache key.
+  */
+  public static String makeAuthorityGroupKey(String groupName)
+  {
+    return "AUTHORITYGROUP_"+groupName;
+  }
+
   /** Construct a key which represents the general list of authority connectors.
   *@return the cache key.
   */
@@ -45,4 +62,21 @@
     return "AUTHORITYCONNECTION_"+connectionName;
   }
 
+  /** Construct a key which represents the general list of mapping connectors.
+  *@return the cache key.
+  */
+  public static String makeMappingConnectionsKey()
+  {
+    return "MAPPINGCONNECTIONS";
+  }
+
+  /** Construct a key which represents an individual mapping connection.
+  *@param connectionName is the name of the connection.
+  *@return the cache key.
+  */
+  public static String makeMappingConnectionKey(String connectionName)
+  {
+    return "MAPPINGCONNECTION_"+connectionName;
+  }
+
 }
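
These new keys follow the same convention as the existing authority-connection keys.  A cache read that should be invalidated when a particular group (or the group list) changes would assemble its key set roughly like this sketch (group name hypothetical):

    StringSetBuffer ssb = new StringSetBuffer();
    ssb.add(CacheKeyFactory.makeAuthorityGroupsKey());           // invalidated when any group is added or removed
    ssb.add(CacheKeyFactory.makeAuthorityGroupKey("mygroup"));   // invalidated when this group changes
    StringSet cacheKeys = new StringSet(ssb);
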
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/IAuthorityConnection.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/IAuthorityConnection.java
index 34243da..68e7c0b 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/IAuthorityConnection.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/IAuthorityConnection.java
@@ -80,4 +80,35 @@
   */
   public int getMaxConnections();
 
+  /** Set the prerequisite mapper, if any.
+  *@param mapping is the name of the mapping connection to use to get the input user name,
+  *  or null.
+  */
+  public void setPrerequisiteMapping(String mapping);
+
+  /** Get the prerequisite mapper, if any.
+  *@return the mapping connection name whose output should be used as the input user name.
+  */
+  public String getPrerequisiteMapping();
+
+  /** Set the authorization domain.
+  *@param domain is the authorization domain.
+  */
+  public void setAuthDomain(String domain);
+  
+  /** Get the authorization domain.
+  *@return the authorization domain.
+  */
+  public String getAuthDomain();
+  
+  /** Set authorization group.
+  *@param groupName is the name of the group.
+  */
+  public void setAuthGroup(String groupName);
+  
+  /** Get the authorization group.
+  *@return the group.
+  */
+  public String getAuthGroup();
+  
 }
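
A sketch of configuring the new fields when defining an authority connection; all names are hypothetical, and the empty-string authorization domain standing in for the default domain is an assumption:

    IAuthorityConnectionManager mgr = AuthorityConnectionManagerFactory.make(threadContext);
    IAuthorityConnection conn = mgr.create();
    conn.setName("myAuthority");                               // hypothetical
    conn.setDescription("Example authority connection");
    conn.setClassName("org.example.MyAuthorityConnector");     // hypothetical connector class
    conn.setMaxConnections(10);
    conn.setAuthDomain("");                // assumed default authorization domain
    conn.setAuthGroup("mygroup");          // must reference an existing authority group
    conn.setPrerequisiteMapping(null);     // no user-mapping prerequisite
    mgr.save(conn);
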
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/IAuthorityConnectionManager.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/IAuthorityConnectionManager.java
index 5759c1f..bc00aa2 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/IAuthorityConnectionManager.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/IAuthorityConnectionManager.java
@@ -45,12 +45,26 @@
   public void importConfiguration(java.io.InputStream is)
     throws java.io.IOException, ManifoldCFException;
 
+  /** Return true if the specified authority group name is referenced.
+  *@param authorityGroup is the authority group name.
+  *@return true if referenced, false otherwise.
+  */
+  public boolean isGroupReferenced(String authorityGroup)
+    throws ManifoldCFException;
+
   /** Obtain a list of the authority connections, ordered by name.
   *@return an array of connection objects.
   */
   public IAuthorityConnection[] getAllConnections()
     throws ManifoldCFException;
 
+  /** Obtain a list of the authority connections which correspond to an auth domain.
+  *@param authDomain is the domain to get connections for.
+  *@return an array of connection objects.
+  */
+  public IAuthorityConnection[] getDomainConnections(String authDomain)
+    throws ManifoldCFException;
+
   /** Load a authority connection by name.
   *@param name is the name of the authority connection.
   *@return the loaded connection object, or null if not found.
@@ -58,6 +72,13 @@
   public IAuthorityConnection load(String name)
     throws ManifoldCFException;
 
+  /** Load multiple authority connections by name.
+  *@param names are the names to load.
+  *@return the loaded connection objects.
+  */
+  public IAuthorityConnection[] loadMultiple(String[] names)
+    throws ManifoldCFException;
+
   /** Create a new authority connection object.
   *@return the new object.
   */
@@ -90,4 +111,11 @@
   */
   public String getAuthorityNameColumn();
 
+  /** Return true if the specified mapping name is referenced.
+  *@param mappingName is the mapping name.
+  *@return true if referenced, false otherwise.
+  */
+  public boolean isMappingReferenced(String mappingName)
+    throws ManifoldCFException;
+
 }
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/IAuthorityConnector.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/IAuthorityConnector.java
index 394a51b..ca7b766 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/IAuthorityConnector.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/IAuthorityConnector.java
@@ -32,7 +32,10 @@
 public interface IAuthorityConnector extends IConnector
 {
 
-  /** Obtain the access tokens for a given user name.
+  /** This is the global deny token.  This should be ingested with all documents. */
+  public static final String GLOBAL_DENY_TOKEN = "DEAD_AUTHORITY";
+
+  /** Obtain the access tokens for a given Active Directory user name.
   *@param userName is the user name or identifier.
   *@return the response tokens (according to the current authority).
   * (Should throws an exception only when a condition cannot be properly described within the authorization response object.)
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/IAuthorityConnectorPool.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/IAuthorityConnectorPool.java
new file mode 100644
index 0000000..9f313bb
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/IAuthorityConnectorPool.java
@@ -0,0 +1,81 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.authorities.interfaces;
+
+import org.apache.manifoldcf.core.interfaces.*;
+
+import java.util.*;
+import java.io.*;
+
+/** An object implementing this interface functions as a pool of authority connectors.
+* Coordination and allocation among cluster members are managed within.
+* These objects are thread-local, so do not share them among threads.
+*/
+public interface IAuthorityConnectorPool
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  /** Get multiple authority connectors, all at once.  Do this in a particular order
+  * so that any connector exhaustion will not cause a deadlock.
+  *@param orderingKeys are the keys which determine in what order the connectors are obtained.
+  *@param authorityConnections are the connections to use to build the connector instances.
+  */
+  public IAuthorityConnector[] grabMultiple(String[] orderingKeys, IAuthorityConnection[] authorityConnections)
+    throws ManifoldCFException;
+
+  /** Get an authority connector.
+  * The connector is specified by an authority connection object.
+  *@param authorityConnection is the authority connection to base the connector instance on.
+  */
+  public IAuthorityConnector grab(IAuthorityConnection authorityConnection)
+    throws ManifoldCFException;
+
+  /** Release multiple authority connectors.
+  *@param connections are the connections describing the instances to release.
+  *@param connectors are the connector instances to release.
+  */
+  public void releaseMultiple(IAuthorityConnection[] connections, IAuthorityConnector[] connectors)
+    throws ManifoldCFException;
+
+  /** Release an authority connector.
+  *@param connection is the connection describing the instance to release.
+  *@param connector is the connector to release.
+  */
+  public void release(IAuthorityConnection connection, IAuthorityConnector connector)
+    throws ManifoldCFException;
+
+  /** Idle notification for inactive authority connector handles.
+  * This method polls all inactive handles.
+  */
+  public void pollAllConnectors()
+    throws ManifoldCFException;
+
+  /** Flush only those connector handles that are currently unused.
+  */
+  public void flushUnusedConnectors()
+    throws ManifoldCFException;
+
+  /** Clean up all open authority connector handles.
+  * This method is called when the connector pool needs to be flushed,
+  * to free resources.
+  */
+  public void closeAllConnectors()
+    throws ManifoldCFException;
+
+}
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/IAuthorityGroup.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/IAuthorityGroup.java
new file mode 100644
index 0000000..a58002e
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/IAuthorityGroup.java
@@ -0,0 +1,58 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.authorities.interfaces;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import java.util.*;
+
+/** This interface describes a paper object which is an authority group.
+*/
+public interface IAuthorityGroup
+{
+  /** Set 'isnew' condition.
+  *@param isnew true if this is a new instance.
+  */
+  public void setIsNew(boolean isnew);
+  
+  /** Get 'isnew' condition.
+  *@return true if this is a new connection, false otherwise.
+  */
+  public boolean getIsNew();
+
+  /** Set name.
+  *@param name is the name.
+  */
+  public void setName(String name);
+
+  /** Get name.
+  *@return the name
+  */
+  public String getName();
+
+  /** Set description.
+  *@param description is the description.
+  */
+  public void setDescription(String description);
+
+  /** Get description.
+  *@return the description
+  */
+  public String getDescription();
+
+}
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/IAuthorityGroupManager.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/IAuthorityGroupManager.java
new file mode 100644
index 0000000..16b4ae6
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/IAuthorityGroupManager.java
@@ -0,0 +1,105 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.authorities.interfaces;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import java.util.*;
+
+/** This interface describes the functionality in the authority group manager.
+* The authority group manager manages the definitions of individual groups,
+* and allows them to be defined, edited, and removed.
+*/
+public interface IAuthorityGroupManager
+{
+  /** Install the manager.
+  */
+  public void install()
+    throws ManifoldCFException;
+
+  /** Uninstall the manager.
+  */
+  public void deinstall()
+    throws ManifoldCFException;
+
+  /** Export configuration */
+  public void exportConfiguration(java.io.OutputStream os)
+    throws java.io.IOException, ManifoldCFException;
+
+  /** Import configuration */
+  public void importConfiguration(java.io.InputStream is)
+    throws java.io.IOException, ManifoldCFException;
+
+  /** Obtain a list of the authority groups, ordered by name.
+  *@return an array of group objects.
+  */
+  public IAuthorityGroup[] getAllGroups()
+    throws ManifoldCFException;
+
+  /** Load an authority group by name.
+  *@param name is the name of the authority group.
+  *@return the loaded group object, or null if not found.
+  */
+  public IAuthorityGroup load(String name)
+    throws ManifoldCFException;
+
+  /** Load multiple authority groups by name.
+  *@param names are the names to load.
+  *@return the loaded group objects.
+  */
+  public IAuthorityGroup[] loadMultiple(String[] names)
+    throws ManifoldCFException;
+
+  /** Create a new authority group object.
+  *@return the new object.
+  */
+  public IAuthorityGroup create()
+    throws ManifoldCFException;
+
+  /** Save an authority group object.
+  *@param object is the object to save.
+  *@return true if the object was created, false otherwise.
+  */
+  public boolean save(IAuthorityGroup object)
+    throws ManifoldCFException;
+
+  /** Delete an authority group.
+  *@param name is the name of the group to delete.  If the
+  * name does not exist, no error is returned.
+  */
+  public void delete(String name)
+    throws ManifoldCFException;
+
+  // Schema related
+
+  /** Get the authority group table name.
+  *@return the table name.
+  */
+  public String getTableName();
+
+  /** Get the authority group name column.
+  *@return the name column.
+  */
+  public String getGroupNameColumn();
+
+  /** Get the authority group description column.
+  *@return the description column.
+  */
+  public String getGroupDescriptionColumn();
+
+}
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/IAuthorizationDomainManager.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/IAuthorizationDomainManager.java
new file mode 100644
index 0000000..e14f845
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/IAuthorizationDomainManager.java
@@ -0,0 +1,66 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.authorities.interfaces;
+
+import org.apache.manifoldcf.core.interfaces.*;
+
+/** This interface describes the authorization domain registry.  Authorization domains are registered here, so that
+* they can be made available when an authority connection is created.
+*/
+public interface IAuthorizationDomainManager
+{
+  /** Install.
+  */
+  public void install()
+    throws ManifoldCFException;
+
+  /** Uninstall.
+  */
+  public void deinstall()
+    throws ManifoldCFException;
+
+  /** Register a new domain.
+  *@param description is the description to use in the UI.
+  *@param domainName is the internal domain name used by the authority service.
+  */
+  public void registerDomain(String description, String domainName)
+    throws ManifoldCFException;
+
+  /** Unregister a domain.
+  * This may fail if any authority connections refer to the domain.
+  *@param domainName is the internal domain name to unregister.
+  */
+  public void unregisterDomain(String domainName)
+    throws ManifoldCFException;
+
+  /** Get ordered list of domains.
+  *@return a resultset with the columns "description" and "domainname".
+  * These will be ordered by description.
+  */
+  public IResultSet getDomains()
+    throws ManifoldCFException;
+
+  /** Get a description given a domain name.
+  *@param domainName is the domain name.
+  *@return the description, or null if the domain is not registered.
+  */
+  public String getDescription(String domainName)
+    throws ManifoldCFException;
+
+}
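
Since getDomains() returns a generic IResultSet rather than typed objects, a caller walks it with the row and column accessors used throughout this patch. A minimal sketch (illustrative only, not part of the patch):

    import org.apache.manifoldcf.authorities.interfaces.IAuthorizationDomainManager;
    import org.apache.manifoldcf.core.interfaces.*;

    public class DomainListing
    {
      /** Print every registered authorization domain as "description -> domainname". */
      public static void printDomains(IAuthorizationDomainManager mgr)
        throws ManifoldCFException
      {
        IResultSet set = mgr.getDomains();
        for (int i = 0; i < set.getRowCount(); i++)
        {
          IResultRow row = set.getRow(i);
          // Column names match the getDomains() contract documented above.
          System.out.println(row.getValue("description") + " -> " + row.getValue("domainname"));
        }
      }
    }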
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/IMappingConnection.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/IMappingConnection.java
new file mode 100644
index 0000000..a330e53
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/IMappingConnection.java
@@ -0,0 +1,94 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.authorities.interfaces;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import java.util.*;
+
+/** This interface describes a paper object which is a mapping connection.
+*/
+public interface IMappingConnection
+{
+  /** Set 'isnew' condition.
+  *@param isnew true if this is a new instance.
+  */
+  public void setIsNew(boolean isnew);
+  
+  /** Get 'isnew' condition.
+  *@return true if this is a new connection, false otherwise.
+  */
+  public boolean getIsNew();
+
+  /** Set name.
+  *@param name is the name.
+  */
+  public void setName(String name);
+
+  /** Get name.
+  *@return the name
+  */
+  public String getName();
+
+  /** Set description.
+  *@param description is the description.
+  */
+  public void setDescription(String description);
+
+  /** Get description.
+  *@return the description
+  */
+  public String getDescription();
+
+  /** Set the class name.
+  *@param className is the class name.
+  */
+  public void setClassName(String className);
+
+  /** Get the class name.
+  *@return the class name
+  */
+  public String getClassName();
+
+  /** Get the configuration parameters.
+  *@return the map.  Can be modified.
+  */
+  public ConfigParams getConfigParams();
+
+  /** Set the maximum size of the connection pool.
+  *@param maxCount is the maximum connection count per JVM.
+  */
+  public void setMaxConnections(int maxCount);
+
+  /** Get the maximum size of the connection pool.
+  *@return the maximum size.
+  */
+  public int getMaxConnections();
+
+  /** Set the prerequisite mapper, if any.
+  *@param mapping is the name of the mapping connection to use to get the input user name,
+  *  or null.
+  */
+  public void setPrerequisiteMapping(String mapping);
+
+  /** Get the prerequisite mapper, if any.
+  *@return the mapping connection name whose output should be used as the input user name.
+  */
+  public String getPrerequisiteMapping();
+
+}
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/IMappingConnectionManager.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/IMappingConnectionManager.java
new file mode 100644
index 0000000..94bc295
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/IMappingConnectionManager.java
@@ -0,0 +1,109 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.authorities.interfaces;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import java.util.*;
+
+/** This interface describes the functionality in the mapping connection manager.
+* The mapping connection manager manages the definitions of individual connections,
+* and allows them to be defined, edited, and removed.
+*/
+public interface IMappingConnectionManager
+{
+  /** Install the manager.
+  */
+  public void install()
+    throws ManifoldCFException;
+
+  /** Uninstall the manager.
+  */
+  public void deinstall()
+    throws ManifoldCFException;
+
+  /** Export configuration */
+  public void exportConfiguration(java.io.OutputStream os)
+    throws java.io.IOException, ManifoldCFException;
+
+  /** Import configuration */
+  public void importConfiguration(java.io.InputStream is)
+    throws java.io.IOException, ManifoldCFException;
+
+  /** Obtain a list of the mapping connections, ordered by name.
+  *@return an array of connection objects.
+  */
+  public IMappingConnection[] getAllConnections()
+    throws ManifoldCFException;
+
+  /** Obtain a list of the mapping connections, ordered by name,
+  * excluding those that would form a prerequisite loop if chosen.
+  *@param startingConnectionName is the name of the connection we would be starting with.
+  * Pass null for all connections.
+  *@return an array of connection objects.
+  */
+  public IMappingConnection[] getAllNonLoopingConnections(String startingConnectionName)
+    throws ManifoldCFException;
+
+  /** Load a mapping connection by name.
+  *@param name is the name of the mapping connection.
+  *@return the loaded connection object, or null if not found.
+  */
+  public IMappingConnection load(String name)
+    throws ManifoldCFException;
+
+  /** Load multiple mapping connections by name.
+  *@param names are the names to load.
+  *@return the loaded connection objects.
+  */
+  public IMappingConnection[] loadMultiple(String[] names)
+    throws ManifoldCFException;
+
+  /** Create a new mapping connection object.
+  *@return the new object.
+  */
+  public IMappingConnection create()
+    throws ManifoldCFException;
+
+  /** Save a mapping connection object.
+  *@param object is the object to save.
+  *@return true if the object was created, false otherwise.
+  */
+  public boolean save(IMappingConnection object)
+    throws ManifoldCFException;
+
+  /** Delete a mapping connection.
+  *@param name is the name of the connection to delete.  If the
+  * name does not exist, no error is returned.
+  */
+  public void delete(String name)
+    throws ManifoldCFException;
+
+  // Schema related
+
+  /** Get the mapping connection table name.
+  *@return the table name.
+  */
+  public String getTableName();
+
+  /** Get the mapping connection name column.
+  *@return the name column.
+  */
+  public String getMappingNameColumn();
+
+}
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/IMappingConnector.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/IMappingConnector.java
new file mode 100644
index 0000000..76a950d
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/IMappingConnector.java
@@ -0,0 +1,38 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.authorities.interfaces;
+
+import org.apache.manifoldcf.core.interfaces.*;
+
+/** A Mapping Connector helps fill out the user identification information for a user.
+*
+* An instance of this interface provides this functionality.  Mapping connector instances are pooled, so that session
+* setup does not need to be done repeatedly.  The pool is segregated by specific sets of configuration parameters.
+*/
+public interface IMappingConnector extends IConnector
+{
+
+  /** Map an input user name to an output name.
+  *@param userName is the name to map
+  *@return the mapped user name
+  */
+  public String mapUser(String userName)
+    throws ManifoldCFException;
+  
+}
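
Because the only mapping-specific method is mapUser(), a working connector can be very small. A sketch of a trivial connector that lower-cases the incoming user name, extending the BaseMappingConnector class added later in this change (illustrative only; the package and class name are hypothetical):

    package org.example.mappers;  // hypothetical package, for illustration only

    import org.apache.manifoldcf.core.interfaces.ManifoldCFException;
    import org.apache.manifoldcf.authorities.mappers.BaseMappingConnector;

    /** Trivial mapping connector that normalizes user names to lower case. */
    public class LowerCaseMapper extends BaseMappingConnector
    {
      @Override
      public String mapUser(String userName)
        throws ManifoldCFException
      {
        // The returned value becomes the input user name for the next stage
        // (a downstream mapper or an authority), per the prerequisite chain.
        return userName.toLowerCase(java.util.Locale.ROOT);
      }
    }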
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/IMappingConnectorManager.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/IMappingConnectorManager.java
new file mode 100644
index 0000000..a23c4cd
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/IMappingConnectorManager.java
@@ -0,0 +1,81 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.authorities.interfaces;
+
+import org.apache.manifoldcf.core.interfaces.*;
+
+/** This interface describes the mapping connector registry.  Mapping connectors are registered here, so that
+* they can be made available when a mapping connection is created.
+*/
+public interface IMappingConnectorManager
+{
+  /** Install.
+  */
+  public void install()
+    throws ManifoldCFException;
+
+  /** Uninstall.  This also unregisters all connectors.
+  */
+  public void deinstall()
+    throws ManifoldCFException;
+
+  /** Register a new connector.
+  * The connector's install method will also be called.
+  *@param description is the description to use in the UI.
+  *@param className is the class name.
+  */
+  public void registerConnector(String description, String className)
+    throws ManifoldCFException;
+
+  /** Unregister a connector.
+  * The connector's deinstall method will also be called.
+  *@param className is the connector class to unregister.
+  */
+  public void unregisterConnector(String className)
+    throws ManifoldCFException;
+
+  /** Remove a connector.
+  * Call this when the connector cannot be instantiated.
+  *@param className is the connector class to remove.
+  */
+  public void removeConnector(String className)
+    throws ManifoldCFException;
+
+  /** Get ordered list of connectors.
+  *@return a resultset with the columns "description" and "classname".
+  * These will be ordered by description.
+  */
+  public IResultSet getConnectors()
+    throws ManifoldCFException;
+
+  /** Get a description given a class name.
+  *@param className is the class name.
+  *@return the description, or null if the class is not registered.
+  */
+  public String getDescription(String className)
+    throws ManifoldCFException;
+
+  /** Check if a particular connector is installed or not.
+  *@param className is the class name of the connector.
+  *@return true if installed, false otherwise.
+  */
+  public boolean isInstalled(String className)
+    throws ManifoldCFException;
+
+}
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/IMappingConnectorPool.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/IMappingConnectorPool.java
new file mode 100644
index 0000000..513e394
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/IMappingConnectorPool.java
@@ -0,0 +1,81 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.authorities.interfaces;
+
+import org.apache.manifoldcf.core.interfaces.*;
+
+import java.util.*;
+import java.io.*;
+
+/** An object implementing this interface functions as a pool of mapping connectors.
+* Coordination and allocation among cluster members is managed within. 
+* These objects are thread-local, so do not share them among threads.
+*/
+public interface IMappingConnectorPool
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  /** Get multiple mapping connectors, all at once.  Do this in a particular order
+  * so that any connector exhaustion will not cause a deadlock.
+  *@param orderingKeys are the keys which determine in what order the connectors are obtained.
+  *@param mappingConnections are the connections to use to build the connector instances.
+  */
+  public IMappingConnector[] grabMultiple(String[] orderingKeys, IMappingConnection[] mappingConnections)
+    throws ManifoldCFException;
+
+  /** Get a mapping connector.
+  * The connector is specified by a mapping connection object.
+  *@param mappingConnection is the mapping connection to base the connector instance on.
+  */
+  public IMappingConnector grab(IMappingConnection mappingConnection)
+    throws ManifoldCFException;
+
+  /** Release multiple mapping connectors.
+  *@param connections are the connections describing the instances to release.
+  *@param connectors are the connector instances to release.
+  */
+  public void releaseMultiple(IMappingConnection[] connections, IMappingConnector[] connectors)
+    throws ManifoldCFException;
+
+  /** Release a mapping connector.
+  *@param connection is the connection describing the instance to release.
+  *@param connector is the connector to release.
+  */
+  public void release(IMappingConnection connection, IMappingConnector connector)
+    throws ManifoldCFException;
+
+  /** Idle notification for inactive mapping connector handles.
+  * This method polls all inactive handles.
+  */
+  public void pollAllConnectors()
+    throws ManifoldCFException;
+
+  /** Flush only those connector handles that are currently unused.
+  */
+  public void flushUnusedConnectors()
+    throws ManifoldCFException;
+
+  /** Clean up all open mapping connector handles.
+  * This method is called when the connector pool needs to be flushed,
+  * to free resources.
+  */
+  public void closeAllConnectors()
+    throws ManifoldCFException;
+
+}
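
Grab and release must always be paired, normally via try/finally, so the pool's cluster-wide accounting stays consistent. A minimal usage sketch (illustrative only, not part of the patch; grab() returning null for an uninstalled connector class is assumed here, following the convention of the other connector pools):

    import org.apache.manifoldcf.authorities.interfaces.*;
    import org.apache.manifoldcf.core.interfaces.ManifoldCFException;

    public class MapOneUser
    {
      /** Map a user name through a single mapping connection. */
      public static String mapOnce(IMappingConnectorPool pool, IMappingConnection connection, String userName)
        throws ManifoldCFException
      {
        IMappingConnector connector = pool.grab(connection);
        if (connector == null)
          return userName;  // assumed: connector class not installed, so pass the name through
        try
        {
          return connector.mapUser(userName);
        }
        finally
        {
          // Always hand the instance back so other threads and cluster members can reuse it.
          pool.release(connection, connector);
        }
      }
    }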
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/MappingConnectionManagerFactory.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/MappingConnectionManagerFactory.java
new file mode 100644
index 0000000..abcc219
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/MappingConnectionManagerFactory.java
@@ -0,0 +1,57 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.authorities.interfaces;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import java.util.*;
+import org.apache.manifoldcf.authorities.system.ManifoldCF;
+
+/** This is the factory class for mapping connection manager objects.
+*/
+public class MappingConnectionManagerFactory
+{
+  // name to use in thread context pool of objects
+  private final static String objectName = "_MapConnectionMgr_";
+
+  private MappingConnectionManagerFactory()
+  {
+  }
+
+  /** Make a mapping connection manager handle.
+  *@param tc is the thread context.
+  *@return the handle.
+  */
+  public static IMappingConnectionManager make(IThreadContext tc)
+    throws ManifoldCFException
+  {
+    Object o = tc.get(objectName);
+    if (o == null || !(o instanceof IMappingConnectionManager))
+    {
+      IDBInterface database = DBInterfaceFactory.make(tc,
+        ManifoldCF.getMasterDatabaseName(),
+        ManifoldCF.getMasterDatabaseUsername(),
+        ManifoldCF.getMasterDatabasePassword());
+
+      o = new org.apache.manifoldcf.authorities.mapping.MappingConnectionManager(tc,database);
+      tc.save(objectName,o);
+    }
+    return (IMappingConnectionManager)o;
+  }
+
+}
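
Callers do not hold on to the manager; they ask the factory each time, and the factory reuses the instance cached in the thread context. A minimal sketch (illustrative only, assuming the hosting process has already initialized the core environment and obtained a thread context):

    import org.apache.manifoldcf.authorities.interfaces.*;
    import org.apache.manifoldcf.core.interfaces.*;

    public class FindMapping
    {
      /** Look up a mapping connection definition by name, or return null if absent. */
      public static IMappingConnection findMapping(IThreadContext tc, String mappingName)
        throws ManifoldCFException
      {
        IMappingConnectionManager manager = MappingConnectionManagerFactory.make(tc);
        // load() returns null when no mapping connection has the given name.
        return manager.load(mappingName);
      }
    }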
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/MappingConnectorFactory.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/MappingConnectorFactory.java
new file mode 100644
index 0000000..35e227e
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/MappingConnectorFactory.java
@@ -0,0 +1,108 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.authorities.interfaces;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.core.system.ManifoldCF;
+import java.util.*;
+import java.io.*;
+import java.lang.reflect.*;
+
+/** This is the factory class for mapping connector instances.
+*/
+public class MappingConnectorFactory extends ConnectorFactory<IMappingConnector>
+{
+
+  // Static factory
+  protected final static MappingConnectorFactory thisFactory = new MappingConnectorFactory();
+
+  protected MappingConnectorFactory()
+  {
+  }
+
+  @Override
+  protected boolean isInstalled(IThreadContext tc, String className)
+    throws ManifoldCFException
+  {
+    IMappingConnectorManager connMgr = MappingConnectorManagerFactory.make(tc);
+    return connMgr.isInstalled(className);
+  }
+
+  /** Install connector.
+  *@param className is the class name.
+  */
+  public static void install(IThreadContext threadContext, String className)
+    throws ManifoldCFException
+  {
+    thisFactory.installThis(threadContext,className);
+  }
+
+  /** Uninstall connector.
+  *@param className is the class name.
+  */
+  public static void deinstall(IThreadContext threadContext, String className)
+    throws ManifoldCFException
+  {
+    thisFactory.deinstallThis(threadContext,className);
+  }
+
+  /** Output the configuration header section.
+  */
+  public static void outputConfigurationHeader(IThreadContext threadContext, String className, IHTTPOutput out, Locale locale, ConfigParams parameters, ArrayList tabsArray)
+    throws ManifoldCFException, IOException
+  {
+    thisFactory.outputThisConfigurationHeader(threadContext,className,out,locale,parameters,tabsArray);
+  }
+
+  /** Output the configuration body section.
+  */
+  public static void outputConfigurationBody(IThreadContext threadContext, String className, IHTTPOutput out, Locale locale, ConfigParams parameters, String tabName)
+    throws ManifoldCFException, IOException
+  {
+    thisFactory.outputThisConfigurationBody(threadContext,className,out,locale,parameters,tabName);
+  }
+
+  /** Process configuration post data for a connector.
+  */
+  public static String processConfigurationPost(IThreadContext threadContext, String className, IPostParameters variableContext, Locale locale, ConfigParams configParams)
+    throws ManifoldCFException
+  {
+    return thisFactory.processThisConfigurationPost(threadContext,className,variableContext,locale,configParams);
+  }
+  
+  /** View connector configuration.
+  */
+  public static void viewConfiguration(IThreadContext threadContext, String className, IHTTPOutput out, Locale locale, ConfigParams configParams)
+    throws ManifoldCFException, IOException
+  {
+    thisFactory.viewThisConfiguration(threadContext,className,out,locale,configParams);
+  }
+
+  /** Get a mapping connector instance, but do NOT check if class is installed first!
+  *@param className is the class name.
+  *@return the instance.
+  */
+  public static IMappingConnector getConnectorNoCheck(String className)
+    throws ManifoldCFException
+  {
+    return thisFactory.getThisConnectorNoCheck(className);
+  }
+
+}
+
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/MappingConnectorManagerFactory.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/MappingConnectorManagerFactory.java
new file mode 100644
index 0000000..70b9196
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/MappingConnectorManagerFactory.java
@@ -0,0 +1,58 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.authorities.interfaces;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import java.util.*;
+import org.apache.manifoldcf.authorities.system.ManifoldCF;
+
+/** This class is the factory for the Mapping Connector Manager.
+*/
+public class MappingConnectorManagerFactory
+{
+  protected static final String connMgr = "_MappingConnectorManager_";
+
+  private MappingConnectorManagerFactory()
+  {
+  }
+
+  /** Construct a connector manager.
+  *@param tc is the thread context.
+  *@return the connector manager handle.
+  */
+  public static IMappingConnectorManager make(IThreadContext tc)
+    throws ManifoldCFException
+  {
+    Object o = tc.get(connMgr);
+    if (o == null || !(o instanceof IMappingConnectorManager))
+    {
+
+      IDBInterface database = DBInterfaceFactory.make(tc,
+        ManifoldCF.getMasterDatabaseName(),
+        ManifoldCF.getMasterDatabaseUsername(),
+        ManifoldCF.getMasterDatabasePassword());
+
+      o = new org.apache.manifoldcf.authorities.mapconnmgr.MappingConnectorManager(tc,database);
+      tc.save(connMgr,o);
+    }
+    return (IMappingConnectorManager)o;
+  }
+
+
+}
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/MappingConnectorPoolFactory.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/MappingConnectorPoolFactory.java
new file mode 100644
index 0000000..ba28133
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/interfaces/MappingConnectorPoolFactory.java
@@ -0,0 +1,54 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.authorities.interfaces;
+
+import org.apache.manifoldcf.core.interfaces.*;
+
+import java.util.*;
+
+/** Mapping connector pool manager factory.
+*/
+public class MappingConnectorPoolFactory
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  // name to use in thread context pool of objects
+  private final static String objectName = "_MappingConnectorPoolMgr_";
+
+  private MappingConnectorPoolFactory()
+  {
+  }
+
+  /** Make a mapping connector pool handle.
+  *@param tc is the thread context.
+  *@return the handle.
+  */
+  public static IMappingConnectorPool make(IThreadContext tc)
+    throws ManifoldCFException
+  {
+    Object o = tc.get(objectName);
+    if (o == null || !(o instanceof IMappingConnectorPool))
+    {
+      o = new org.apache.manifoldcf.authorities.mappingconnectorpool.MappingConnectorPool(tc);
+      tc.save(objectName,o);
+    }
+    return (IMappingConnectorPool)o;
+  }
+
+}
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/mapconnmgr/MappingConnectorManager.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/mapconnmgr/MappingConnectorManager.java
new file mode 100644
index 0000000..785f6e3
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/mapconnmgr/MappingConnectorManager.java
@@ -0,0 +1,302 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.authorities.mapconnmgr;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.authorities.interfaces.*;
+import java.util.*;
+import org.apache.manifoldcf.authorities.interfaces.CacheKeyFactory;
+
+/** This is the implementation of the mapping connector manager.
+ * 
+ * <br><br>
+ * <b>mapconnectors</b>
+ * <table border="1" cellpadding="3" cellspacing="0">
+ * <tr class="TableHeadingColor">
+ * <th>Field</th><th>Type</th><th>Description&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</th>
+ * <tr><td>description</td><td>VARCHAR(255)</td><td></td></tr>
+ * <tr><td>classname</td><td>VARCHAR(255)</td><td>Primary Key</td></tr>
+ * </table>
+ * <br><br>
+ * 
+ */
+public class MappingConnectorManager extends org.apache.manifoldcf.core.database.BaseTable implements IMappingConnectorManager
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  // Fields
+  protected static final String descriptionField = "description";
+  protected static final String classNameField = "classname";
+
+  // Thread context
+  protected IThreadContext threadContext;
+
+  /** Constructor.
+  *@param threadContext is the thread context.
+  *@param database is the database handle.
+  */
+  public MappingConnectorManager(IThreadContext threadContext, IDBInterface database)
+    throws ManifoldCFException
+  {
+    super(database,"mapconnectors");
+    this.threadContext = threadContext;
+  }
+
+
+  /** Install or upgrade.
+  */
+  public void install()
+    throws ManifoldCFException
+  {
+    // Always use a loop, in case upgrade retries are needed.
+    while (true)
+    {
+      Map existing = getTableSchema(null,null);
+      if (existing == null)
+      {
+        HashMap map = new HashMap();
+        map.put(descriptionField,new ColumnDescription("VARCHAR(255)",false,false,null,null,false));
+        map.put(classNameField,new ColumnDescription("VARCHAR(255)",true,false,null,null,false));
+
+        performCreate(map,null);
+      }
+      else
+      {
+        // Schema upgrade code goes here, if needed.
+      }
+
+      // Index management
+      IndexDescription descriptionIndex = new IndexDescription(true,new String[]{descriptionField});
+
+      // Get rid of indexes that shouldn't be there
+      Map indexes = getTableIndexes(null,null);
+      Iterator iter = indexes.keySet().iterator();
+      while (iter.hasNext())
+      {
+        String indexName = (String)iter.next();
+        IndexDescription id = (IndexDescription)indexes.get(indexName);
+
+        if (descriptionIndex != null && id.equals(descriptionIndex))
+          descriptionIndex = null;
+        else if (indexName.indexOf("_pkey") == -1)
+          // This index shouldn't be here; drop it
+          performRemoveIndex(indexName);
+      }
+
+      // Add the ones we didn't find
+      if (descriptionIndex != null)
+        performAddIndex(null,descriptionIndex);
+
+      break;
+    }
+  }
+
+
+  /** Uninstall.  This also unregisters all connectors.
+  */
+  public void deinstall()
+    throws ManifoldCFException
+  {
+    StringSet invKeys = new StringSet(getCacheKey());
+
+    // First, go through all registered connectors.  This is all inside a transaction.
+    beginTransaction();
+    try
+    {
+      IResultSet set = performQuery("SELECT "+classNameField+" FROM "+getTableName(),null,null,null);
+      int i = 0;
+      while (i < set.getRowCount())
+      {
+        IResultRow row = set.getRow(i++);
+        String className = row.getValue(classNameField).toString();
+        // Call the deinstall method
+        MappingConnectorFactory.deinstall(threadContext,className);
+      }
+      performDrop(invKeys);
+    }
+    catch (ManifoldCFException e)
+    {
+      signalRollback();
+      throw e;
+    }
+    catch (Error e)
+    {
+      signalRollback();
+      throw e;
+    }
+    finally
+    {
+      endTransaction();
+    }
+  }
+
+  /** Register a new connector.
+  * The connector's install method will also be called.
+  *@param description is the description to use in the UI.
+  *@param className is the class name.
+  */
+  public void registerConnector(String description, String className)
+    throws ManifoldCFException
+  {
+    StringSet invKeys = new StringSet(getCacheKey());
+    beginTransaction();
+    try
+    {
+      //performLock();
+      // See if already there.
+      ArrayList params = new ArrayList();
+      params.add(className);
+      IResultSet set = performQuery("SELECT * FROM "+getTableName()+" WHERE "+classNameField+"=? FOR UPDATE",params,null,null);
+      HashMap map = new HashMap();
+      map.put(descriptionField,description);
+      if (set.getRowCount() == 0)
+      {
+        // Insert it into table first.
+        map.put(classNameField,className);
+        performInsert(map,invKeys);
+      }
+      else
+      {
+        performUpdate(map,"WHERE "+classNameField+"=?",params,invKeys);
+      }
+
+      // Either way, we must do the install/upgrade itself.
+      MappingConnectorFactory.install(threadContext,className);
+    }
+    catch (ManifoldCFException e)
+    {
+      signalRollback();
+      throw e;
+    }
+    catch (Error e)
+    {
+      signalRollback();
+      throw e;
+    }
+    finally
+    {
+      endTransaction();
+    }
+  }
+
+  /** Unregister a connector.
+  * The connector's deinstall method will also be called.
+  *@param className is the class name of the connector to unregister.
+  */
+  public void unregisterConnector(String className)
+    throws ManifoldCFException
+  {
+    StringSet invKeys = new StringSet(getCacheKey());
+    beginTransaction();
+    try
+    {
+      // Uninstall first
+      MappingConnectorFactory.deinstall(threadContext,className);
+
+      removeConnector(className);
+    }
+    catch (ManifoldCFException e)
+    {
+      signalRollback();
+      throw e;
+    }
+    catch (Error e)
+    {
+      signalRollback();
+      throw e;
+    }
+    finally
+    {
+      endTransaction();
+    }
+  }
+
+  /** Remove a connector.
+  * Call this when the connector cannot be instantiated.
+  *@param className is the connector class to remove.
+  */
+  public void removeConnector(String className)
+    throws ManifoldCFException
+  {
+    StringSet invKeys = new StringSet(getCacheKey());
+    ArrayList list = new ArrayList();
+    list.add(className);
+    performDelete("WHERE "+classNameField+"=?",list,invKeys);
+  }
+
+  /** Get ordered list of connectors.
+  *@return a resultset with the columns "description" and "classname".
+  * These will be ordered by description.
+  */
+  public IResultSet getConnectors()
+    throws ManifoldCFException
+  {
+    StringSet invKeys = new StringSet(getCacheKey());
+
+    return performQuery("SELECT "+descriptionField+" AS description,"+classNameField+" AS classname FROM "+
+      getTableName()+" ORDER BY "+descriptionField+" ASC",null,invKeys,null);
+  }
+
+  /** Get a description given a class name.
+  *@param className is the class name.
+  *@return the description, or null if the class is not registered.
+  */
+  public String getDescription(String className)
+    throws ManifoldCFException
+  {
+    StringSet invKeys = new StringSet(getCacheKey());
+
+    ArrayList list = new ArrayList();
+    list.add(className);
+    IResultSet set = performQuery("SELECT "+descriptionField+" FROM "+
+      getTableName()+" WHERE "+classNameField+"=?",list,invKeys,null);
+    if (set.getRowCount() == 0)
+      return null;
+    IResultRow row = set.getRow(0);
+    return row.getValue(descriptionField).toString();
+  }
+
+  /** Check if a particular connector is installed or not.
+  *@param className is the class name of the connector.
+  *@return true if installed, false otherwise.
+  */
+  public boolean isInstalled(String className)
+    throws ManifoldCFException
+  {
+    // Use the global table key; that's good enough because we don't expect the registration set to change out from under us very often.
+    StringSet invKeys = new StringSet(getCacheKey());
+
+    ArrayList list = new ArrayList();
+    list.add(className);
+    IResultSet set = performQuery("SELECT * FROM "+
+      getTableName()+" WHERE "+classNameField+"=?",list,invKeys,null);
+    return set.getRowCount() > 0;
+  }
+
+  // Protected methods
+
+  /** Get the cache key for the connector manager table.
+  *@return the cache key
+  */
+  protected String getCacheKey()
+  {
+    return CacheKeyFactory.makeTableKey(null,getTableName(),getDBInterface().getDatabaseName());
+  }
+
+}
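
Registration is normally driven by the framework's connector-registration step at startup, but the same call can be made directly against the manager. A minimal sketch (illustrative only; the connector class name is the hypothetical mapper from the earlier sketch):

    import org.apache.manifoldcf.authorities.interfaces.*;
    import org.apache.manifoldcf.core.interfaces.*;

    public class RegisterMapper
    {
      /** Register a mapping connector class so it becomes selectable for mapping connections. */
      public static void register(IThreadContext tc)
        throws ManifoldCFException
      {
        IMappingConnectorManager mgr = MappingConnectorManagerFactory.make(tc);
        // registerConnector() records the class and also invokes the connector's install method.
        mgr.registerConnector("Lower-case user mapper", "org.example.mappers.LowerCaseMapper");
      }
    }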
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/mappers/BaseMappingConnector.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/mappers/BaseMappingConnector.java
new file mode 100644
index 0000000..c1b898f
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/mappers/BaseMappingConnector.java
@@ -0,0 +1,36 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.authorities.mappers;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.authorities.interfaces.*;
+
+/** A mapping connector maps an input user name to an output user name, filling out the user identification information for a user.
+*
+* An instance of this interface provides this functionality.  Mapping connector instances are pooled, so that session
+* setup does not need to be done repeatedly.  The pool is segregated by specific sets of configuration parameters.
+*/
+public abstract class BaseMappingConnector extends org.apache.manifoldcf.core.connector.BaseConnector implements IMappingConnector
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  // This class is provided for future backwards-compatibility reasons, so it is wise to
+  // extend it rather than implement IMappingConnector directly.
+
+}
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/mapping/MappingConnection.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/mapping/MappingConnection.java
new file mode 100644
index 0000000..e69dbea
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/mapping/MappingConnection.java
@@ -0,0 +1,168 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.authorities.mapping;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.authorities.interfaces.*;
+import java.util.*;
+
+/** This is the implementation of the mapping connection interface, which describes a paper object
+* to be manipulated in order to create, edit, or save a mapping connection definition.
+*/
+public class MappingConnection implements IMappingConnection
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  // data
+  protected boolean isNew = true;
+  protected String name = null;
+  protected String description = null;
+  protected String className = null;
+  protected ConfigParams configParams = new ConfigParams();
+  protected int maxCount = 100;
+  protected String prerequisiteMapping = null;
+
+  /** Constructor.
+  */
+  public MappingConnection()
+  {
+  }
+
+  /** Clone this object.
+  *@return the cloned object.
+  */
+  public MappingConnection duplicate()
+  {
+    MappingConnection rval = new MappingConnection();
+    rval.isNew = isNew;
+    rval.name = name;
+    rval.description = description;
+    rval.className = className;
+    rval.maxCount = maxCount;
+    rval.configParams = configParams.duplicate();
+    rval.prerequisiteMapping = prerequisiteMapping;
+    return rval;
+  }
+
+  /** Set 'isnew' condition.
+  *@param isnew true if this is a new instance.
+  */
+  public void setIsNew(boolean isnew)
+  {
+    this.isNew = isnew;
+  }
+  
+  /** Get 'isnew' condition.
+  *@return true if this is a new connection, false otherwise.
+  */
+  public boolean getIsNew()
+  {
+    return isNew;
+  }
+
+  /** Set name.
+  *@param name is the name.
+  */
+  public void setName(String name)
+  {
+    this.name = name;
+  }
+
+  /** Get name.
+  *@return the name
+  */
+  public String getName()
+  {
+    return name;
+  }
+
+  /** Set description.
+  *@param description is the description.
+  */
+  public void setDescription(String description)
+  {
+    this.description = description;
+  }
+
+  /** Get description.
+  *@return the description
+  */
+  public String getDescription()
+  {
+    return description;
+  }
+
+  /** Set the class name.
+  *@param className is the class name.
+  */
+  public void setClassName(String className)
+  {
+    this.className = className;
+  }
+
+  /** Get the class name.
+  *@return the class name
+  */
+  public String getClassName()
+  {
+    return className;
+  }
+
+  /** Get the configuration parameters.
+  *@return the map.  Can be modified.
+  */
+  public ConfigParams getConfigParams()
+  {
+    return configParams;
+  }
+
+  /** Set the maximum size of the connection pool.
+  *@param maxCount is the maximum connection count per JVM.
+  */
+  public void setMaxConnections(int maxCount)
+  {
+    this.maxCount = maxCount;
+  }
+
+  /** Get the maximum size of the connection pool.
+  *@return the maximum size.
+  */
+  public int getMaxConnections()
+  {
+    return maxCount;
+  }
+
+  /** Set the prerequisite mapper, if any.
+  *@param mapping is the name of the mapping connection to use to get the input user name,
+  *  or null.
+  */
+  public void setPrerequisiteMapping(String mapping)
+  {
+    prerequisiteMapping = mapping;
+  }
+  
+  /** Get the prerequisite mapper, if any.
+  *@return the mapping connection name whose output should be used as the input user name.
+  */
+  public String getPrerequisiteMapping()
+  {
+    return prerequisiteMapping;
+  }
+
+}
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/mapping/MappingConnectionManager.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/mapping/MappingConnectionManager.java
new file mode 100644
index 0000000..133ed1c
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/mapping/MappingConnectionManager.java
@@ -0,0 +1,822 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.authorities.mapping;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.authorities.interfaces.*;
+import java.util.*;
+import org.apache.manifoldcf.authorities.interfaces.CacheKeyFactory;
+import org.apache.manifoldcf.authorities.system.ManifoldCF;
+
+/** Implementation of the mapping connection manager functionality.
+ * 
+ * <br><br>
+ * <b>mapconnections</b>
+ * <table border="1" cellpadding="3" cellspacing="0">
+ * <tr class="TableHeadingColor">
+ * <th>Field</th><th>Type</th><th>Description&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</th>
+ * <tr><td>connname</td><td>VARCHAR(32)</td><td>Primary Key</td></tr>
+ * <tr><td>description</td><td>VARCHAR(255)</td><td></td></tr>
+ * <tr><td>classname</td><td>VARCHAR(255)</td><td></td></tr>
+ * <tr><td>maxcount</td><td>BIGINT</td><td></td></tr>
+ * <tr><td>configxml</td><td>LONGTEXT</td><td></td></tr>
+ * <tr><td>mappingname</td><td>VARCHAR(32)</td><td>Prerequisite mapping connection, or null</td></tr>
+ * </table>
+ * <br><br>
+ * 
+ */
+public class MappingConnectionManager extends org.apache.manifoldcf.core.database.BaseTable implements IMappingConnectionManager
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  // Special field suffix
+  private final static String passwordSuffix = "password";
+
+  protected final static String nameField = "connname";      // Changed this to work around a bug in postgresql
+  protected final static String descriptionField = "description";
+  protected final static String classNameField = "classname";
+  protected final static String maxCountField = "maxcount";
+  protected final static String configField = "configxml";
+  protected final static String mappingField = "mappingname";
+
+  // Cache manager
+  ICacheManager cacheManager;
+  // Thread context
+  IThreadContext threadContext;
+
+  /** Constructor.
+  *@param threadContext is the thread context.
+  */
+  public MappingConnectionManager(IThreadContext threadContext, IDBInterface database)
+    throws ManifoldCFException
+  {
+    super(database,"mapconnections");
+
+    cacheManager = CacheManagerFactory.make(threadContext);
+    this.threadContext = threadContext;
+  }
+
+  /** Install the manager.
+  */
+  @Override
+  public void install()
+    throws ManifoldCFException
+  {
+    // Always do a loop, in case upgrade needs it.
+    while (true)
+    {
+      Map existing = getTableSchema(null,null);
+      if (existing == null)
+      {
+        // Install the "objects" table.
+        HashMap map = new HashMap();
+        map.put(nameField,new ColumnDescription("VARCHAR(32)",true,false,null,null,false));
+        map.put(descriptionField,new ColumnDescription("VARCHAR(255)",false,true,null,null,false));
+        map.put(classNameField,new ColumnDescription("VARCHAR(255)",false,false,null,null,false));
+        map.put(maxCountField,new ColumnDescription("BIGINT",false,false,null,null,false));
+        map.put(configField,new ColumnDescription("LONGTEXT",false,true,null,null,false));
+        map.put(mappingField,new ColumnDescription("VARCHAR(32)",false,true,null,null,false));
+        performCreate(map,null);
+      }
+      else
+      {
+        // Upgrade code goes here
+      }
+
+      // Index management goes here
+
+      break;
+    }
+    
+
+  }
+
+  /** Uninstall the manager.
+  */
+  @Override
+  public void deinstall()
+    throws ManifoldCFException
+  {
+    performDrop(null);
+  }
+
+  /** Export configuration */
+  @Override
+  public void exportConfiguration(java.io.OutputStream os)
+    throws java.io.IOException, ManifoldCFException
+  {
+    // Write a version indicator
+    ManifoldCF.writeDword(os,1);
+    // Get the mapping connection list
+    IMappingConnection[] list = getAllConnections();
+    // Write the number of mapping connections
+    ManifoldCF.writeDword(os,list.length);
+    // Loop through the list and write the individual mapping info
+    for (IMappingConnection conn : list)
+    {
+      ManifoldCF.writeString(os,conn.getName());
+      ManifoldCF.writeString(os,conn.getDescription());
+      ManifoldCF.writeString(os,conn.getClassName());
+      ManifoldCF.writeString(os,conn.getConfigParams().toXML());
+      ManifoldCF.writeDword(os,conn.getMaxConnections());
+      ManifoldCF.writeString(os,conn.getPrerequisiteMapping());
+    }
+    
+  }
+
+  /** Import configuration */
+  @Override
+  public void importConfiguration(java.io.InputStream is)
+    throws java.io.IOException, ManifoldCFException
+  {
+    int version = ManifoldCF.readDword(is);
+    if (version != 1)
+      throw new java.io.IOException("Unknown mapping configuration version: "+Integer.toString(version));
+    int count = ManifoldCF.readDword(is);
+    for (int i = 0; i < count; i++)
+    {
+      IMappingConnection conn = create();
+      conn.setName(ManifoldCF.readString(is));
+      conn.setDescription(ManifoldCF.readString(is));
+      conn.setClassName(ManifoldCF.readString(is));
+      conn.getConfigParams().fromXML(ManifoldCF.readString(is));
+      conn.setMaxConnections(ManifoldCF.readDword(is));
+      conn.setPrerequisiteMapping(ManifoldCF.readString(is));
+      // Attempt to save this connection
+      save(conn);
+    }
+  }
+
+  /** Obtain a list of the mapping connections, ordered by name,
+  * excluding those that would form a prerequisite loop if chosen.
+  *@param startingConnectionName is the name of the connection we would be starting with.
+  * Pass null for all connections.
+  *@return an array of connection objects.
+  */
+  public IMappingConnection[] getAllNonLoopingConnections(String startingConnectionName)
+    throws ManifoldCFException
+  {
+    // The point of this method is to prune from the list any connection for which establishing a new prerequisite
+    // link between the specified starting connection and that connection would create a loop.
+    IMappingConnection[] connections = getAllConnections();
+    // Degenerate case: no (existing) starting point.
+    if (startingConnectionName == null)
+      return connections;
+    
+    List<IMappingConnection> finalConnections = new ArrayList<IMappingConnection>();
+    
+    Map<String,IMappingConnection> connectionMap = new HashMap<String,IMappingConnection>();
+    for (IMappingConnection thisConnection : connections)
+    {
+      connectionMap.put(thisConnection.getName(), thisConnection);
+    }
+    
+    for (IMappingConnection connectionToEvaluate : connections)
+    {
+      // The algorithm we want is as follows (from Wikipedia):
+      //
+      // L <- Empty list where we put the sorted elements
+      // Q <- Set of all nodes with no incoming edges
+      // while Q is non-empty do
+      //    remove a node n from Q
+      //    insert n into L
+      //    for each node m with an edge e from n to m do
+      //        remove edge e from the graph
+      //        if m has no other incoming edges then
+      //            insert m into Q
+      // if graph has edges then
+      //    output error message (graph has a cycle)
+      // else 
+      //    output message (proposed topologically sorted order: L)
+      //
+      // In order to "remove" a link, we have to either keep a list of links we've already processed, or copy the
+      // structure to another one where we *can* remove links.  I opt for the former.
+      // The second issue is that we need to generate Q up front.  This is easy enough; just keep a hash of connections
+      // that have not been referenced (yet), and remove connections from the hash as refs are found.
+      // Also interesting: we don't actually need to keep L.
+      Set<String> Q = new HashSet<String>();
+      Set<String> links = new HashSet<String>();
+      Map<String,Integer> incomingCount = new HashMap<String,Integer>();
+
+      for (int i = 0; i < connections.length; i++)
+      {
+        Q.add(connections[i].getName());
+      }
+      for (int i = 0; i < connections.length; i++)
+      {
+        String connectionName = connections[i].getName();
+        String prerequisite = connections[i].getPrerequisiteMapping();
+        if (prerequisite != null)
+        {
+          Integer x = incomingCount.get(prerequisite);
+          if (x == null)
+            incomingCount.put(prerequisite,new Integer(1));
+          else
+            incomingCount.put(prerequisite,new Integer(x.intValue()+1));
+          Q.remove(prerequisite);
+          links.add(connectionName + ":" + prerequisite);
+        }
+      }
+
+      // There is a "proposed" edge ending at connectionToEvaluate, so remove that one too
+      String thisConnectionName = connectionToEvaluate.getName();
+      Q.remove(thisConnectionName);
+      Integer x1 = incomingCount.get(thisConnectionName);
+      if (x1 == null)
+        incomingCount.put(thisConnectionName,new Integer(1));
+      else
+        incomingCount.put(thisConnectionName,new Integer(x1.intValue()+1));
+      links.add(startingConnectionName + ":" + thisConnectionName);
+
+      // Now, repeat until Q is empty
+      while (!Q.isEmpty())
+      {
+        Iterator<String> iter = Q.iterator();
+        String checkConnectionName = iter.next();
+        // Get prereqs for the connection, those that are still in the graph
+        IMappingConnection sourceConnection = connectionMap.get(checkConnectionName);
+        String s = sourceConnection.getPrerequisiteMapping();
+        if (s != null)
+        {
+          String edgeName = checkConnectionName + ":" + s;
+          if (links.contains(edgeName))
+          {
+            // Remove edgeName from graph
+            links.remove(edgeName);
+            // If s has no other incoming edges then insert it into Q
+            Integer x = incomingCount.get(s);
+            if (x.intValue() == 1)
+            {
+              incomingCount.remove(s);
+              Q.add(s);
+            }
+            else
+              incomingCount.put(s,new Integer(x.intValue() - 1));
+          }
+        }
+      }
+      
+      // Any links remaining?
+      if (links.isEmpty())
+      {
+        // No cycles.  Add this connection to the final list.
+        finalConnections.add(connectionToEvaluate);
+      }
+    }
+    return finalConnections.toArray(new IMappingConnection[0]);
+  }
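
Because each connection has at most one prerequisite, the Kahn-style test above reduces, for an already loop-free graph, to a simple chain walk: choosing the candidate as the starting connection's prerequisite creates a cycle exactly when the existing prerequisite chain from the candidate already reaches the starting connection. A sketch of that equivalent check (illustrative only, not the code used here; the bound guards against malformed stored data):

    import org.apache.manifoldcf.authorities.interfaces.IMappingConnection;
    import java.util.Map;

    public class LoopCheck
    {
      /** Equivalent loop test for the single-prerequisite case. */
      public static boolean wouldLoop(Map<String,IMappingConnection> connectionMap,
        String startingConnectionName, String candidateName)
      {
        String current = candidateName;
        int steps = 0;
        // Bound the walk in case the stored data already contains a cycle.
        while (current != null && steps++ <= connectionMap.size())
        {
          if (current.equals(startingConnectionName))
            return true;
          IMappingConnection c = connectionMap.get(current);
          current = (c == null) ? null : c.getPrerequisiteMapping();
        }
        return false;
      }
    }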
+
+  /** Obtain a list of the mapping connections, ordered by name.
+  *@return an array of connection objects.
+  */
+  @Override
+  public IMappingConnection[] getAllConnections()
+    throws ManifoldCFException
+  {
+    beginTransaction();
+    try
+    {
+      // Read all the tools
+      StringSetBuffer ssb = new StringSetBuffer();
+      ssb.add(getMappingConnectionsKey());
+      StringSet localCacheKeys = new StringSet(ssb);
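+      // Sort case-insensitively by ordering on lower(name).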
+      IResultSet set = performQuery("SELECT "+nameField+",lower("+nameField+") AS sortfield FROM "+getTableName()+" ORDER BY sortfield ASC",null,
+        localCacheKeys,null);
+      String[] names = new String[set.getRowCount()];
+      int i = 0;
+      while (i < names.length)
+      {
+        IResultRow row = set.getRow(i);
+        names[i] = row.getValue(nameField).toString();
+        i++;
+      }
+      return loadMultiple(names);
+    }
+    catch (ManifoldCFException e)
+    {
+      signalRollback();
+      throw e;
+    }
+    catch (Error e)
+    {
+      signalRollback();
+      throw e;
+    }
+    finally
+    {
+      endTransaction();
+    }
+  }
+
+  /** Load a mapping connection by name.
+  *@param name is the name of the mapping connection.
+  *@return the loaded connection object, or null if not found.
+  */
+  @Override
+  public IMappingConnection load(String name)
+    throws ManifoldCFException
+  {
+    return loadMultiple(new String[]{name})[0];
+  }
+
+  /** Load multiple mapping connections by name.
+  *@param names are the names to load.
+  *@return the loaded connection objects.
+  */
+  @Override
+  public IMappingConnection[] loadMultiple(String[] names)
+    throws ManifoldCFException
+  {
+    // Build description objects
+    MappingConnectionDescription[] objectDescriptions = new MappingConnectionDescription[names.length];
+    int i = 0;
+    StringSetBuffer ssb = new StringSetBuffer();
+    while (i < names.length)
+    {
+      ssb.clear();
+      ssb.add(getMappingConnectionKey(names[i]));
+      objectDescriptions[i] = new MappingConnectionDescription(names[i],new StringSet(ssb));
+      i++;
+    }
+
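+    // The cache manager hands back cached copies via the executor's exists() method, and asks its
+    // create() method to read any connections that are not yet cached; see MappingConnectionExecutor below.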
+    MappingConnectionExecutor exec = new MappingConnectionExecutor(this,objectDescriptions);
+    cacheManager.findObjectsAndExecute(objectDescriptions,null,exec,getTransactionID());
+    return exec.getResults();
+  }
+
+  /** Create a new mapping connection object.
+  *@return the new object.
+  */
+  @Override
+  public IMappingConnection create()
+    throws ManifoldCFException
+  {
+    MappingConnection rval = new MappingConnection();
+    return rval;
+  }
+
+  /** Save a mapping connection object.
+  *@param object is the object to save.
+  *@return true if the object is created, false otherwise.
+  */
+  @Override
+  public boolean save(IMappingConnection object)
+    throws ManifoldCFException
+  {
+    StringSetBuffer ssb = new StringSetBuffer();
+    ssb.add(getMappingConnectionsKey());
+    ssb.add(getMappingConnectionKey(object.getName()));
+    StringSet cacheKeys = new StringSet(ssb);
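+    // Retry loop: if the transaction aborts (e.g. due to deadlock), sleep briefly and try again.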
+    while (true)
+    {
+      long sleepAmt = 0L;
+      try
+      {
+        ICacheHandle ch = cacheManager.enterCache(null,cacheKeys,getTransactionID());
+        try
+        {
+          beginTransaction();
+          try
+          {
+            //performLock();
+            ManifoldCF.noteConfigurationChange();
+            boolean isNew = object.getIsNew();
+            // See whether the instance exists
+            ArrayList params = new ArrayList();
+            String query = buildConjunctionClause(params,new ClauseDescription[]{
+              new UnitaryClause(nameField,object.getName())});
+            IResultSet set = performQuery("SELECT * FROM "+getTableName()+" WHERE "+
+              query+" FOR UPDATE",params,null,null);
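+            // FOR UPDATE locks any matching row for the rest of the transaction, serializing concurrent saves.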
+            HashMap values = new HashMap();
+            values.put(descriptionField,object.getDescription());
+            values.put(classNameField,object.getClassName());
+            values.put(maxCountField,new Long((long)object.getMaxConnections()));
+            values.put(configField,object.getConfigParams().toXML());
+            values.put(mappingField,object.getPrerequisiteMapping());
+
+            boolean isCreated;
+            
+            if (set.getRowCount() > 0)
+            {
+              // If the object is supposedly new, it is bad that we found one that already exists.
+              if (isNew)
+                throw new ManifoldCFException("Mapping connection '"+object.getName()+"' already exists");
+              isCreated = false;
+              // Update
+              params.clear();
+              query = buildConjunctionClause(params,new ClauseDescription[]{
+                new UnitaryClause(nameField,object.getName())});
+              performUpdate(values," WHERE "+query,params,null);
+            }
+            else
+            {
+              // If the object is not supposed to be new, it is bad that we did not find one.
+              if (!isNew)
+                throw new ManifoldCFException("Mapping connection '"+object.getName()+"' no longer exists");
+              isCreated = true;
+              // Insert
+              values.put(nameField,object.getName());
+              // We only need the general key because this is new.
+              performInsert(values,null);
+            }
+
+            cacheManager.invalidateKeys(ch);
+            return isCreated;
+          }
+          catch (ManifoldCFException e)
+          {
+            signalRollback();
+            throw e;
+          }
+          catch (Error e)
+          {
+            signalRollback();
+            throw e;
+          }
+          finally
+          {
+            endTransaction();
+          }
+        }
+        finally
+        {
+          cacheManager.leaveCache(ch);
+        }
+      }
+      catch (ManifoldCFException e)
+      {
+        // Is this a deadlock exception?  If so, we want to try again.
+        if (e.getErrorCode() != ManifoldCFException.DATABASE_TRANSACTION_ABORT)
+          throw e;
+        sleepAmt = getSleepAmt();
+      }
+      finally
+      {
+        sleepFor(sleepAmt);
+      }
+    }
+  }
+
+  /** Delete a mapping connection.
+  *@param name is the name of the connection to delete.  If the
+  * name does not exist, no error is returned.
+  */
+  @Override
+  public void delete(String name)
+    throws ManifoldCFException
+  {
+
+    // Grab authority connection manager handle, to check on legality of deletion.
+    IAuthorityConnectionManager authManager = AuthorityConnectionManagerFactory.make(threadContext);
+
+    StringSetBuffer ssb = new StringSetBuffer();
+    ssb.add(getMappingConnectionsKey());
+    ssb.add(getMappingConnectionKey(name));
+    StringSet cacheKeys = new StringSet(ssb);
+    ICacheHandle ch = cacheManager.enterCache(null,cacheKeys,getTransactionID());
+    try
+    {
+      beginTransaction();
+      try
+      {
+        // Check if any other mapping refers to this connection name
+        if (isReferenced(name))
+          throw new ManifoldCFException("Can't delete mapping connection '"+name+"': existing mapping connections refer to it");
+        if (authManager.isMappingReferenced(name))
+          throw new ManifoldCFException("Can't delete mapping connection '"+name+"': existing authority connections refer to it");
+        ManifoldCF.noteConfigurationChange();
+        ArrayList params = new ArrayList();
+        String query = buildConjunctionClause(params,new ClauseDescription[]{
+          new UnitaryClause(nameField,name)});
+        performDelete("WHERE "+query,params,null);
+        cacheManager.invalidateKeys(ch);
+      }
+      catch (ManifoldCFException e)
+      {
+        signalRollback();
+        throw e;
+      }
+      catch (Error e)
+      {
+        signalRollback();
+        throw e;
+      }
+      finally
+      {
+        endTransaction();
+      }
+    }
+    finally
+    {
+      cacheManager.leaveCache(ch);
+    }
+
+  }
+
+  /** Get the mapping connection name column.
+  *@return the name column.
+  */
+  @Override
+  public String getMappingNameColumn()
+  {
+    return nameField;
+  }
+
+  /** Return true if the specified mapping name is referenced.
+  *@param mappingName is the mapping name.
+  *@return true if referenced, false otherwise.
+  */
+  protected boolean isReferenced(String mappingName)
+    throws ManifoldCFException
+  {
+    StringSetBuffer ssb = new StringSetBuffer();
+    ssb.add(getMappingConnectionsKey());
+    StringSet localCacheKeys = new StringSet(ssb);
+    ArrayList params = new ArrayList();
+    String query = buildConjunctionClause(params,new ClauseDescription[]{
+      new UnitaryClause(mappingField,mappingName)});
+    IResultSet set = performQuery("SELECT "+nameField+" FROM "+getTableName()+" WHERE "+query,params,
+      localCacheKeys,null);
+    return set.getRowCount() > 0;
+  }
+
+  // Caching strategy: Individual connection descriptions are cached, and there is a global cache key for the list of
+  // mapping connections.
+
+  /** Construct a key which represents the general list of mapping connectors.
+  *@return the cache key.
+  */
+  protected static String getMappingConnectionsKey()
+  {
+    return CacheKeyFactory.makeMappingConnectionsKey();
+  }
+
+  /** Construct a key which represents an individual mapping connection.
+  *@param connectionName is the name of the connector.
+  *@return the cache key.
+  */
+  protected static String getMappingConnectionKey(String connectionName)
+  {
+    return CacheKeyFactory.makeMappingConnectionKey(connectionName);
+  }
+
+  // Other utility methods.
+
+  /** Fetch multiple mapping connections at a single time.
+  *@param connectionNames are a list of connection names.
+  *@return the corresponding mapping connection objects.
+  */
+  protected MappingConnection[] getMappingConnectionsMultiple(String[] connectionNames)
+    throws ManifoldCFException
+  {
+    MappingConnection[] rval = new MappingConnection[connectionNames.length];
+    HashMap returnIndex = new HashMap();
+    int i = 0;
+    while (i < connectionNames.length)
+    {
+      rval[i] = null;
+      returnIndex.put(connectionNames[i],new Integer(i));
+      i++;
+    }
+    beginTransaction();
+    try
+    {
+      i = 0;
+      ArrayList params = new ArrayList();
+      int j = 0;
+      int maxIn = maxClauseGetMappingConnectionsChunk();
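+      // Issue the query in chunks so the IN clause never exceeds the database's maximum clause count.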
+      while (i < connectionNames.length)
+      {
+        if (j == maxIn)
+        {
+          getMappingConnectionsChunk(rval,returnIndex,params);
+          params.clear();
+          j = 0;
+        }
+        params.add(connectionNames[i]);
+        i++;
+        j++;
+      }
+      if (j > 0)
+        getMappingConnectionsChunk(rval,returnIndex,params);
+      return rval;
+    }
+    catch (Error e)
+    {
+      signalRollback();
+      throw e;
+    }
+    catch (ManifoldCFException e)
+    {
+      signalRollback();
+      throw e;
+    }
+    finally
+    {
+      endTransaction();
+    }
+  }
+
+  /** Find the maximum number of clauses for getMappingConnectionsChunk.
+  */
+  protected int maxClauseGetMappingConnectionsChunk()
+  {
+    return findConjunctionClauseMax(new ClauseDescription[]{});
+  }
+    
+  /** Read a chunk of mapping connections.
+  *@param rval is the place to put the read connections.
+  *@param returnIndex is a map from the connection name to the corresponding index in rval.
+  *@param params is the set of parameters.
+  */
+  protected void getMappingConnectionsChunk(MappingConnection[] rval, Map returnIndex, ArrayList params)
+    throws ManifoldCFException
+  {
+    ArrayList list = new ArrayList();
+    String query = buildConjunctionClause(list,new ClauseDescription[]{
+      new MultiClause(nameField,params)});
+    IResultSet set = performQuery("SELECT * FROM "+getTableName()+" WHERE "+
+      query,list,null,null);
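+    // Rows may come back in any order; returnIndex maps each name to its slot in rval.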
+    int i = 0;
+    while (i < set.getRowCount())
+    {
+      IResultRow row = set.getRow(i++);
+      String name = row.getValue(nameField).toString();
+      int index = ((Integer)returnIndex.get(name)).intValue();
+      MappingConnection rc = new MappingConnection();
+      rc.setIsNew(false);
+      rc.setName(name);
+      rc.setDescription((String)row.getValue(descriptionField));
+      rc.setClassName((String)row.getValue(classNameField));
+      rc.setMaxConnections((int)((Long)row.getValue(maxCountField)).longValue());
+      rc.setPrerequisiteMapping((String)row.getValue(mappingField));
+      String xml = (String)row.getValue(configField);
+      if (xml != null && xml.length() > 0)
+        rc.getConfigParams().fromXML(xml);
+      rval[index] = rc;
+    }
+  }
+
+  // The cached instance will be a MappingConnection.  The cached version will be duplicated when it is returned
+  // from the cache.
+  //
+  // The description object is based completely on the name.
+
+  /** This is the object description for a mapping connection object.
+  */
+  protected static class MappingConnectionDescription extends org.apache.manifoldcf.core.cachemanager.BaseDescription
+  {
+    protected String connectionName;
+    protected String criticalSectionName;
+    protected StringSet cacheKeys;
+
+    public MappingConnectionDescription(String connectionName, StringSet invKeys)
+    {
+      super("mappingconnectioncache");
+      this.connectionName = connectionName;
+      criticalSectionName = getClass().getName()+"-"+connectionName;
+      cacheKeys = invKeys;
+    }
+
+    public String getConnectionName()
+    {
+      return connectionName;
+    }
+
+    public int hashCode()
+    {
+      return connectionName.hashCode();
+    }
+
+    public boolean equals(Object o)
+    {
+      if (!(o instanceof MappingConnectionDescription))
+        return false;
+      MappingConnectionDescription d = (MappingConnectionDescription)o;
+      return d.connectionName.equals(connectionName);
+    }
+
+    public String getCriticalSectionName()
+    {
+      return criticalSectionName;
+    }
+
+    /** Get the cache keys for an object (which may or may not exist yet in
+    * the cache).  This method is called in order for cache manager to throw the correct locks.
+    * @return the object's cache keys, or null if the object should not
+    * be cached.
+    */
+    public StringSet getObjectKeys()
+    {
+      return cacheKeys;
+    }
+
+  }
+
+  /** This is the executor object for locating mapping connection objects.
+  */
+  protected static class MappingConnectionExecutor extends org.apache.manifoldcf.core.cachemanager.ExecutorBase
+  {
+    // Member variables
+    protected MappingConnectionManager thisManager;
+    protected MappingConnection[] returnValues;
+    protected HashMap returnMap = new HashMap();
+
+    /** Constructor.
+    *@param manager is the MappingConnectionManager.
+    *@param objectDescriptions are the object descriptions.
+    */
+    public MappingConnectionExecutor(MappingConnectionManager manager, MappingConnectionDescription[] objectDescriptions)
+    {
+      super();
+      thisManager = manager;
+      returnValues = new MappingConnection[objectDescriptions.length];
+      int i = 0;
+      while (i < objectDescriptions.length)
+      {
+        returnMap.put(objectDescriptions[i].getConnectionName(),new Integer(i));
+        i++;
+      }
+    }
+
+    /** Get the result.
+    *@return the looked-up or read cached instances.
+    */
+    public MappingConnection[] getResults()
+    {
+      return returnValues;
+    }
+
+    /** Create a set of new objects to operate on and cache.  This method is called only
+    * if the specified object(s) are NOT available in the cache.  The specified objects
+    * should be created and returned; if they are not created, it means that the
+    * execution cannot proceed, and the execute() method will not be called.
+    * @param objectDescriptions are the unique identifiers of the objects.
+    * @return the newly created objects to cache, or null, if any object cannot be created.
+    *  The order of the returned objects must correspond to the order of the object descriptions.
+    */
+    public Object[] create(ICacheDescription[] objectDescriptions) throws ManifoldCFException
+    {
+      // Turn the object descriptions into the parameters for the ToolInstance requests
+      String[] connectionNames = new String[objectDescriptions.length];
+      int i = 0;
+      while (i < connectionNames.length)
+      {
+        MappingConnectionDescription desc = (MappingConnectionDescription)objectDescriptions[i];
+        connectionNames[i] = desc.getConnectionName();
+        i++;
+      }
+
+      return thisManager.getMappingConnectionsMultiple(connectionNames);
+    }
+
+
+    /** Notify the implementing class of the existence of a cached version of the
+    * object.  The object is passed to this method so that the execute() method below
+    * will have it available to operate on.  This method is also called for all objects
+    * that are freshly created.
+    * @param objectDescription is the unique identifier of the object.
+    * @param cachedObject is the cached object.
+    */
+    public void exists(ICacheDescription objectDescription, Object cachedObject) throws ManifoldCFException
+    {
+      // Cast what came in as what it really is
+      MappingConnectionDescription objectDesc = (MappingConnectionDescription)objectDescription;
+      MappingConnection ci = (MappingConnection)cachedObject;
+
+      // Duplicate it!
+      if (ci != null)
+        ci = ci.duplicate();
+
+      // In order to make the indexes line up, we need to use the hashtable built by
+      // the constructor.
+      returnValues[((Integer)returnMap.get(objectDesc.getConnectionName())).intValue()] = ci;
+    }
+
+    /** Perform the desired operation.  This method is called after either create()
+    * or exists() is called for every requested object.
+    */
+    public void execute() throws ManifoldCFException
+    {
+      // Does nothing; we only want to fetch objects in this cacher.
+    }
+
+
+  }
+
+}
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/mappingconnectorpool/MappingConnectorPool.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/mappingconnectorpool/MappingConnectorPool.java
new file mode 100644
index 0000000..cd9ebdf
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/mappingconnectorpool/MappingConnectorPool.java
@@ -0,0 +1,180 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.authorities.mappingconnectorpool;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.authorities.interfaces.*;
+
+import java.util.*;
+import java.io.*;
+
+/** An implementation of IMappingConnectorPool.
+* Coordination and allocation among cluster members is managed within. 
+* These objects are thread-local, so do not share them among threads.
+*/
+public class MappingConnectorPool implements IMappingConnectorPool
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  /** Local connector pool */
+  protected final static LocalPool localPool = new LocalPool();
+
+  // This implementation is a place-holder for the real one, which will likely fold in the pooling code
+  // as we strip it out of MappingConnectorFactory.
+
+  /** Thread context */
+  protected final IThreadContext threadContext;
+  
+  /** Constructor */
+  public MappingConnectorPool(IThreadContext threadContext)
+    throws ManifoldCFException
+  {
+    this.threadContext = threadContext;
+  }
+  
+  /** Get multiple mapping connectors, all at once.  Do this in a particular order
+  * so that any connector exhaustion will not cause a deadlock.
+  *@param orderingKeys are the keys which determine in what order the connectors are obtained.
+  *@param mappingConnections are the connections to use to build the connector instances.
+  */
+  @Override
+  public IMappingConnector[] grabMultiple(String[] orderingKeys, IMappingConnection[] mappingConnections)
+    throws ManifoldCFException
+  {
+    // For now, use the MappingConnectorFactory method.  This will require us to extract info
+    // from each mapping connection, however.
+    String[] connectionNames = new String[mappingConnections.length];
+    String[] classNames = new String[mappingConnections.length];
+    ConfigParams[] configInfos = new ConfigParams[mappingConnections.length];
+    int[] maxPoolSizes = new int[mappingConnections.length];
+    
+    for (int i = 0; i < mappingConnections.length; i++)
+    {
+      connectionNames[i] = mappingConnections[i].getName();
+      classNames[i] = mappingConnections[i].getClassName();
+      configInfos[i] = mappingConnections[i].getConfigParams();
+      maxPoolSizes[i] = mappingConnections[i].getMaxConnections();
+    }
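+    // Grab the instances in ordering-key order so that pool exhaustion cannot deadlock competing threads.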
+    return localPool.grabMultiple(threadContext,
+      orderingKeys, connectionNames, classNames, configInfos, maxPoolSizes);
+  }
+
+  /** Get a mapping connector.
+  * The connector is specified by a mapping connection object.
+  *@param mappingConnection is the mapping connection to base the connector instance on.
+  */
+  @Override
+  public IMappingConnector grab(IMappingConnection mappingConnection)
+    throws ManifoldCFException
+  {
+    return localPool.grab(threadContext, mappingConnection.getName(),
+      mappingConnection.getClassName(),
+      mappingConnection.getConfigParams(), mappingConnection.getMaxConnections());
+  }
+
+  /** Release multiple mapping connectors.
+  *@param connections are the connections describing the instances to release.
+  *@param connectors are the connector instances to release.
+  */
+  @Override
+  public void releaseMultiple(IMappingConnection[] connections, IMappingConnector[] connectors)
+    throws ManifoldCFException
+  {
+    String[] connectionNames = new String[connections.length];
+    for (int i = 0; i < connections.length; i++)
+    {
+      connectionNames[i] = connections[i].getName();
+    }
+    localPool.releaseMultiple(threadContext, connectionNames, connectors);
+  }
+
+  /** Release a mapping connector.
+  *@param connection is the connection describing the instance to release.
+  *@param connector is the connector to release.
+  */
+  @Override
+  public void release(IMappingConnection connection, IMappingConnector connector)
+    throws ManifoldCFException
+  {
+    localPool.release(threadContext, connection.getName(), connector);
+  }
+
+  /** Idle notification for inactive mapping connector handles.
+  * This method polls all inactive handles.
+  */
+  @Override
+  public void pollAllConnectors()
+    throws ManifoldCFException
+  {
+    localPool.pollAllConnectors(threadContext);
+  }
+
+  /** Flush only those connector handles that are currently unused.
+  */
+  @Override
+  public void flushUnusedConnectors()
+    throws ManifoldCFException
+  {
+    localPool.flushUnusedConnectors(threadContext);
+  }
+
+  /** Clean up all open mapping connector handles.
+  * This method is called when the connector pool needs to be flushed,
+  * to free resources.
+  */
+  @Override
+  public void closeAllConnectors()
+    throws ManifoldCFException
+  {
+    localPool.closeAllConnectors(threadContext);
+  }
+
+  /** Actual static mapping connector pool */
+  protected static class LocalPool extends org.apache.manifoldcf.core.connectorpool.ConnectorPool<IMappingConnector>
+  {
+    public LocalPool()
+    {
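+      // The string names this pool; the shared ConnectorPool base class presumably uses it to keep
+      // mapping connector instances separate from other connector types.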
+      super("_MAPPINGCONNECTORPOOL_");
+    }
+    
+    @Override
+    protected boolean isInstalled(IThreadContext tc, String className)
+      throws ManifoldCFException
+    {
+      IMappingConnectorManager connectorManager = MappingConnectorManagerFactory.make(tc);
+      return connectorManager.isInstalled(className);
+    }
+
+    @Override
+    protected boolean isConnectionNameValid(IThreadContext tc, String connectionName)
+      throws ManifoldCFException
+    {
+      IMappingConnectionManager connectionManager = MappingConnectionManagerFactory.make(tc);
+      return connectionManager.load(connectionName) != null;
+    }
+
+    public IMappingConnector[] grabMultiple(IThreadContext tc, String[] orderingKeys, String[] connectionNames, String[] classNames, ConfigParams[] configInfos, int[] maxPoolSizes)
+      throws ManifoldCFException
+    {
+      return grabMultiple(tc,IMappingConnector.class,orderingKeys,connectionNames,classNames,configInfos,maxPoolSizes);
+    }
+
+  }
+
+}
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/system/AuthCheckThread.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/system/AuthCheckThread.java
index cb1ce0c..fc04d98 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/system/AuthCheckThread.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/system/AuthCheckThread.java
@@ -33,11 +33,11 @@
   public static final String _rcsid = "@(#)$Id: AuthCheckThread.java 988245 2010-08-23 18:39:35Z kwright $";
 
   // Local data
-  protected RequestQueue requestQueue;
+  protected RequestQueue<AuthRequest> requestQueue;
 
   /** Constructor.
   */
-  public AuthCheckThread(String id, RequestQueue requestQueue)
+  public AuthCheckThread(String id, RequestQueue<AuthRequest> requestQueue)
     throws ManifoldCFException
   {
     super();
@@ -50,109 +50,122 @@
   {
     // Create a thread context object.
     IThreadContext threadContext = ThreadContextFactory.make();
-
-    // Loop
-    while (true)
+    try
     {
-      // Do another try/catch around everything in the loop
-      try
+      // Create an authority connection pool object.
+      IAuthorityConnectorPool authorityConnectorPool = AuthorityConnectorPoolFactory.make(threadContext);
+      
+      // Loop
+      while (true)
       {
-        if (Thread.currentThread().isInterrupted())
-          throw new ManifoldCFException("Interrupted",ManifoldCFException.INTERRUPTED);
-
-        // Wait for a request.
-        AuthRequest theRequest = requestQueue.getRequest();
-
-        // Try to fill the request before going back to sleep.
-        if (Logging.authorityService.isDebugEnabled())
-        {
-          Logging.authorityService.debug(" Calling connector class '"+theRequest.getClassName()+"'");
-        }
-
-        AuthorizationResponse response = null;
-        Throwable exception = null;
-
+        // Do another try/catch around everything in the loop
         try
         {
-          IAuthorityConnector connector = AuthorityConnectorFactory.grab(threadContext,
-            theRequest.getClassName(),
-            theRequest.getConfigurationParams(),
-            theRequest.getMaxConnections());
-          // If this is null, we MUST treat this as an "unauthorized" condition!!
-          // We signal that by setting the exception value.
-          try
+          if (Thread.currentThread().isInterrupted())
+            throw new ManifoldCFException("Interrupted",ManifoldCFException.INTERRUPTED);
+
+          // Wait for a request.
+          AuthRequest theRequest = requestQueue.getRequest();
+
+          // Try to fill the request before going back to sleep.
+          if (Logging.authorityService.isDebugEnabled())
           {
-            if (connector == null)
-              exception = new ManifoldCFException("Authority connector "+theRequest.getClassName()+" is not registered.");
-            else
+            Logging.authorityService.debug(" Calling connector class '"+theRequest.getAuthorityConnection().getClassName()+"'");
+          }
+
+          AuthorizationResponse response = null;
+          Throwable exception = null;
+
+          // Grab an authorization response only if there's a user
+          if (theRequest.getUserID() != null)
+          {
+            try
             {
-              // Get the acl for the user
+              IAuthorityConnector connector = authorityConnectorPool.grab(theRequest.getAuthorityConnection());
+              // If this is null, we MUST treat this as an "unauthorized" condition!!
+              // We signal that by setting the exception value.
               try
               {
-                response = connector.getAuthorizationResponse(theRequest.getUserID());
-              }
-              catch (ManifoldCFException e)
-              {
-                if (e.getErrorCode() == ManifoldCFException.INTERRUPTED)
-                  throw e;
-                Logging.authorityService.warn("Authority error: "+e.getMessage(),e);
-                response = AuthorityConnectorFactory.getDefaultAuthorizationResponse(threadContext,theRequest.getClassName(),theRequest.getUserID());
-              }
+                if (connector == null)
+                  exception = new ManifoldCFException("Authority connector "+theRequest.getAuthorityConnection().getClassName()+" is not registered.");
+                else
+                {
+                  // Get the acl for the user
+                  try
+                  {
+                    response = connector.getAuthorizationResponse(theRequest.getUserID());
+                  }
+                  catch (ManifoldCFException e)
+                  {
+                    if (e.getErrorCode() == ManifoldCFException.INTERRUPTED)
+                      throw e;
+                    Logging.authorityService.warn("Authority error: "+e.getMessage(),e);
+                    response = AuthorityConnectorFactory.getDefaultAuthorizationResponse(threadContext,theRequest.getAuthorityConnection().getClassName(),theRequest.getUserID());
+                  }
 
+                }
+              }
+              finally
+              {
+                authorityConnectorPool.release(theRequest.getAuthorityConnection(),connector);
+              }
+            }
+            catch (ManifoldCFException e)
+            {
+              if (e.getErrorCode() == ManifoldCFException.INTERRUPTED)
+                throw e;
+              Logging.authorityService.warn("Authority connection exception: "+e.getMessage(),e);
+              response = AuthorityConnectorFactory.getDefaultAuthorizationResponse(threadContext,theRequest.getAuthorityConnection().getClassName(),theRequest.getUserID());
+              if (response == null)
+                exception = e;
+            }
+            catch (Throwable e)
+            {
+              Logging.authorityService.warn("Authority connection error: "+e.getMessage(),e);
+              response = AuthorityConnectorFactory.getDefaultAuthorizationResponse(threadContext,theRequest.getAuthorityConnection().getClassName(),theRequest.getUserID());
+              if (response == null)
+                exception = e;
             }
           }
-          finally
-          {
-            AuthorityConnectorFactory.release(connector);
-          }
+
+          // The request is complete
+          theRequest.completeRequest(response,exception);
+
+          // Repeat, and only go to sleep if there are no more requests.
         }
         catch (ManifoldCFException e)
         {
           if (e.getErrorCode() == ManifoldCFException.INTERRUPTED)
-            throw e;
-          Logging.authorityService.warn("Authority connection exception: "+e.getMessage(),e);
-          response = AuthorityConnectorFactory.getDefaultAuthorizationResponse(threadContext,theRequest.getClassName(),theRequest.getUserID());
-          if (response == null)
-            exception = e;
+            break;
+
+          // Log it, but keep the thread alive
+          Logging.authorityService.error("Exception tossed: "+e.getMessage(),e);
+
+          if (e.getErrorCode() == ManifoldCFException.SETUP_ERROR)
+          {
+            // Shut the whole system down!
+            System.exit(1);
+          }
+
+        }
+        catch (InterruptedException e)
+        {
+          // We're supposed to quit
+          break;
         }
         catch (Throwable e)
         {
-          Logging.authorityService.warn("Authority connection error: "+e.getMessage(),e);
-          response = AuthorityConnectorFactory.getDefaultAuthorizationResponse(threadContext,theRequest.getClassName(),theRequest.getUserID());
-          if (response == null)
-            exception = e;
+          // A more severe error - but stay alive
+          Logging.authorityService.fatal("Error tossed: "+e.getMessage(),e);
         }
-
-        // The request is complete
-        theRequest.completeRequest(response,exception);
-
-        // Repeat, and only go to sleep if there are no more requests.
       }
-      catch (ManifoldCFException e)
-      {
-        if (e.getErrorCode() == ManifoldCFException.INTERRUPTED)
-          break;
-
-        // Log it, but keep the thread alive
-        Logging.authorityService.error("Exception tossed: "+e.getMessage(),e);
-
-        if (e.getErrorCode() == ManifoldCFException.SETUP_ERROR)
-        {
-          // Shut the whole system down!
-          System.exit(1);
-        }
-
-      }
-      catch (InterruptedException e)
-      {
-        // We're supposed to quit
-        break;
-      }
-      catch (Throwable e)
-      {
-        // A more severe error - but stay alive
-        Logging.authorityService.fatal("Error tossed: "+e.getMessage(),e);
-      }
+    }
+    catch (ManifoldCFException e)
+    {
+      // Severe error on initialization
+      System.err.println("Authority service auth check thread could not start - shutting down");
+      Logging.authorityService.fatal("AuthCheckThread initialization error tossed: "+e.getMessage(),e);
+      System.exit(-300);
     }
   }
 
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/system/AuthRequest.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/system/AuthRequest.java
index 0ecdad4..2f59871 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/system/AuthRequest.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/system/AuthRequest.java
@@ -32,10 +32,8 @@
 
   // This is where the request data actually lives
   protected String userID;
-  protected String className;
-  protected String identifyingString;
-  protected ConfigParams configParameters;
-  protected int maxConnections;
+  protected final IAuthorityConnection authorityConnection;
+  protected final String identifyingString;
 
   // These are the possible results of the request
   protected boolean answerComplete = false;
@@ -44,25 +42,28 @@
 
   /** Construct the request, and record the question.
   */
-  public AuthRequest(String userID, String className, String identifyingString, ConfigParams configParameters, int maxConnections)
+  public AuthRequest(IAuthorityConnection authorityConnection, String identifyingString)
   {
-    this.userID = userID;
-    this.className = className;
+    this.authorityConnection = authorityConnection;
     this.identifyingString = identifyingString;
-    this.configParameters = configParameters;
-    this.maxConnections = maxConnections;
   }
 
+  /** Set the user ID we'll be using */
+  public void setUserID(String userID)
+  {
+    this.userID = userID;
+  }
+  
   /** Get the user id */
   public String getUserID()
   {
     return userID;
   }
 
-  /** Get the class name */
-  public String getClassName()
+  /** Get the authority connection */
+  public IAuthorityConnection getAuthorityConnection()
   {
-    return className;
+    return authorityConnection;
   }
 
   /** Get the identifying string, to pass back to the user if there was a problem */
@@ -71,18 +72,6 @@
     return identifyingString;
   }
 
-  /** Get the configuration parameters */
-  public ConfigParams getConfigurationParams()
-  {
-    return configParameters;
-  }
-
-  /** Get the maximum number of connections */
-  public int getMaxConnections()
-  {
-    return maxConnections;
-  }
-
   /** Wait for an auth request to be complete.
   */
   public void waitForComplete()
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/system/IdleCleanupThread.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/system/IdleCleanupThread.java
index cdff257..5d7b0de 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/system/IdleCleanupThread.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/system/IdleCleanupThread.java
@@ -52,6 +52,8 @@
       // Create a thread context object.
       IThreadContext threadContext = ThreadContextFactory.make();
       ICacheManager cacheManager = CacheManagerFactory.make(threadContext);
+      IAuthorityConnectorPool authorityConnectorPool = AuthorityConnectorPoolFactory.make(threadContext);
+      IMappingConnectorPool mappingConnectorPool = MappingConnectorPoolFactory.make(threadContext);
       
       // Loop
       while (true)
@@ -60,11 +62,12 @@
         try
         {
           // Do the cleanup
-          AuthorityConnectorFactory.pollAllConnectors(threadContext);
+          authorityConnectorPool.pollAllConnectors();
+          mappingConnectorPool.pollAllConnectors();
           cacheManager.expireObjects(System.currentTimeMillis());
           
           // Sleep for the retry interval.
-          ManifoldCF.sleep(15000L);
+          ManifoldCF.sleep(5000L);
         }
         catch (ManifoldCFException e)
         {
@@ -72,7 +75,7 @@
             break;
 
           // Log it, but keep the thread alive
-          Logging.authorityService.error("Exception tossed",e);
+          Logging.authorityService.error("Exception tossed: "+e.getMessage(),e);
 
           if (e.getErrorCode() == ManifoldCFException.SETUP_ERROR)
           {
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/system/Logging.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/system/Logging.java
index 89db825..7101356 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/system/Logging.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/system/Logging.java
@@ -32,6 +32,7 @@
   // Public logger objects
   public static Logger authorityService = null;
   public static Logger authorityConnectors = null;
+  public static Logger mappingConnectors = null;
 
   /** Initialize logger setup.
   */
@@ -45,6 +46,7 @@
     // package loggers
     authorityService = newLogger("org.apache.manifoldcf.authorityservice");
     authorityConnectors = newLogger("org.apache.manifoldcf.authorityconnectors");
+    mappingConnectors = newLogger("org.apache.manifoldcf.mappingconnectors");
   }
 
 
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/system/ManifoldCF.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/system/ManifoldCF.java
index 4864452..0be8832 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/system/ManifoldCF.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/system/ManifoldCF.java
@@ -31,37 +31,43 @@
   // Threads
   protected static IdleCleanupThread idleCleanupThread = null;
   protected static AuthCheckThread[] authCheckThreads = null;
+  protected static MappingThread[] mappingThreads = null;
 
   // Number of auth check threads
   protected static int numAuthCheckThreads = 0;
-
+  // Number of mapping threads
+  protected static int numMappingThreads = 0;
+  
   protected static final String authCheckThreadCountProperty = "org.apache.manifoldcf.authorityservice.threads";
+  protected static final String mappingThreadCountProperty = "org.apache.manifoldcf.authorityservice.mappingthreads";
 
   // Request queue
-  protected static RequestQueue requestQueue = null;
-
+  protected static RequestQueue<AuthRequest> requestQueue = null;
+  // Mapping request queue
+  protected static RequestQueue<MappingRequest> mappingRequestQueue = null;
+  
   /** Initialize environment.
   */
-  public static void initializeEnvironment()
+  public static void initializeEnvironment(IThreadContext tc)
     throws ManifoldCFException
   {
     synchronized (initializeFlagLock)
     {
-      org.apache.manifoldcf.core.system.ManifoldCF.initializeEnvironment();
-      org.apache.manifoldcf.authorities.system.ManifoldCF.localInitialize();
+      org.apache.manifoldcf.core.system.ManifoldCF.initializeEnvironment(tc);
+      org.apache.manifoldcf.authorities.system.ManifoldCF.localInitialize(tc);
     }
   }
 
-  public static void cleanUpEnvironment()
+  public static void cleanUpEnvironment(IThreadContext tc)
   {
     synchronized (initializeFlagLock)
     {
-      org.apache.manifoldcf.authorities.system.ManifoldCF.localCleanup();
-      org.apache.manifoldcf.core.system.ManifoldCF.cleanUpEnvironment();
+      org.apache.manifoldcf.authorities.system.ManifoldCF.localCleanup(tc);
+      org.apache.manifoldcf.core.system.ManifoldCF.cleanUpEnvironment(tc);
     }
   }
 
-  public static void localInitialize()
+  public static void localInitialize(IThreadContext tc)
     throws ManifoldCFException
   {
     synchronized (initializeFlagLock)
@@ -70,13 +76,33 @@
         return;
 
       Logging.initializeLoggers();
-      Logging.setLogLevels();
+      Logging.setLogLevels(tc);
       authoritiesInitialized = true;
     }
   }
   
-  public static void localCleanup()
+  public static void localCleanup(IThreadContext tc)
   {
+    // Since pools are a shared resource, we clean them up only
+    // when we are certain nothing else is using them in the JVM.
+    try
+    {
+      AuthorityConnectorPoolFactory.make(tc).closeAllConnectors();
+    }
+    catch (ManifoldCFException e)
+    {
+      if (Logging.authorityService != null)
+        Logging.authorityService.warn("Exception closing authority connection pool: "+e.getMessage(),e);
+    }
+    try
+    {
+      MappingConnectorPoolFactory.make(tc).closeAllConnectors();
+    }
+    catch (ManifoldCFException e)
+    {
+      if (Logging.authorityService != null)
+        Logging.authorityService.warn("Exception closing mapping connection pool: "+e.getMessage(),e);
+    }
   }
   
   /** Install all the authority manager system tables.
@@ -85,35 +111,19 @@
   public static void installSystemTables(IThreadContext threadcontext)
     throws ManifoldCFException
   {
-    IDBInterface mainDatabase = DBInterfaceFactory.make(threadcontext,
-      ManifoldCF.getMasterDatabaseName(),
-      ManifoldCF.getMasterDatabaseUsername(),
-      ManifoldCF.getMasterDatabasePassword());
-
+    IAuthorizationDomainManager domainMgr = AuthorizationDomainManagerFactory.make(threadcontext);
+    IAuthorityGroupManager groupMgr = AuthorityGroupManagerFactory.make(threadcontext);
     IAuthorityConnectorManager connMgr = AuthorityConnectorManagerFactory.make(threadcontext);
     IAuthorityConnectionManager authConnMgr = AuthorityConnectionManagerFactory.make(threadcontext);
+    IMappingConnectorManager mappingConnectorMgr = MappingConnectorManagerFactory.make(threadcontext);
+    IMappingConnectionManager mappingConnectionMgr = MappingConnectionManagerFactory.make(threadcontext);
 
-    mainDatabase.beginTransaction();
-    try
-    {
-      connMgr.install();
-      authConnMgr.install();
-    }
-    catch (ManifoldCFException e)
-    {
-      mainDatabase.signalRollback();
-      throw e;
-    }
-    catch (Error e)
-    {
-      mainDatabase.signalRollback();
-      throw e;
-    }
-    finally
-    {
-      mainDatabase.endTransaction();
-    }
-
+    domainMgr.install();
+    connMgr.install();
+    mappingConnectorMgr.install();
+    groupMgr.install();
+    authConnMgr.install();
+    mappingConnectionMgr.install();
   }
 
   /** Uninstall all the authority manager system tables.
@@ -122,39 +132,19 @@
   public static void deinstallSystemTables(IThreadContext threadcontext)
     throws ManifoldCFException
   {
-    IDBInterface mainDatabase = DBInterfaceFactory.make(threadcontext,
-      ManifoldCF.getMasterDatabaseName(),
-      ManifoldCF.getMasterDatabaseUsername(),
-      ManifoldCF.getMasterDatabasePassword());
-
-    ManifoldCFException se = null;
-
+    IAuthorizationDomainManager domainMgr = AuthorizationDomainManagerFactory.make(threadcontext);
     IAuthorityConnectorManager connMgr = AuthorityConnectorManagerFactory.make(threadcontext);
+    IAuthorityGroupManager groupMgr = AuthorityGroupManagerFactory.make(threadcontext);
     IAuthorityConnectionManager authConnMgr = AuthorityConnectionManagerFactory.make(threadcontext);
+    IMappingConnectorManager mappingConnectorMgr = MappingConnectorManagerFactory.make(threadcontext);
+    IMappingConnectionManager mappingConnectionMgr = MappingConnectionManagerFactory.make(threadcontext);
 
-    mainDatabase.beginTransaction();
-    try
-    {
-      authConnMgr.deinstall();
-      connMgr.deinstall();
-    }
-    catch (ManifoldCFException e)
-    {
-      mainDatabase.signalRollback();
-      throw e;
-    }
-    catch (Error e)
-    {
-      mainDatabase.signalRollback();
-      throw e;
-    }
-    finally
-    {
-      mainDatabase.endTransaction();
-    }
-    if (se != null)
-      throw se;
-
+    mappingConnectionMgr.deinstall();
+    authConnMgr.deinstall();
+    groupMgr.deinstall();
+    mappingConnectorMgr.deinstall();
+    connMgr.deinstall();
+    domainMgr.deinstall();
   }
 
   /** Start the authority system.
@@ -163,27 +153,35 @@
     throws ManifoldCFException
   {
     // Read any parameters
-    String maxThreads = getProperty(authCheckThreadCountProperty);
-    if (maxThreads == null)
-      maxThreads = "10";
-    numAuthCheckThreads = new Integer(maxThreads).intValue();
+    numAuthCheckThreads = LockManagerFactory.getIntProperty(threadContext, authCheckThreadCountProperty, 10);
     if (numAuthCheckThreads < 1 || numAuthCheckThreads > 100)
       throw new ManifoldCFException("Illegal value for the number of auth check threads");
 
+    numMappingThreads = LockManagerFactory.getIntProperty(threadContext, mappingThreadCountProperty, 10);
+    if (numMappingThreads < 1 || numMappingThreads > 100)
+      throw new ManifoldCFException("Illegal value for the number of mapping threads");
+
     // Start up threads
     idleCleanupThread = new IdleCleanupThread();
     idleCleanupThread.start();
 
-    requestQueue = new RequestQueue();
+    requestQueue = new RequestQueue<AuthRequest>();
+    mappingRequestQueue = new RequestQueue<MappingRequest>();
 
     authCheckThreads = new AuthCheckThread[numAuthCheckThreads];
-    int i = 0;
-    while (i < numAuthCheckThreads)
+    for (int i = 0; i < numAuthCheckThreads; i++)
     {
       authCheckThreads[i] = new AuthCheckThread(Integer.toString(i),requestQueue);
       authCheckThreads[i].start();
-      i++;
     }
+    
+    mappingThreads = new MappingThread[numMappingThreads];
+    for (int i = 0; i < numMappingThreads; i++)
+    {
+      mappingThreads[i] = new MappingThread(Integer.toString(i),mappingRequestQueue);
+      mappingThreads[i].start();
+    }
+
   }
 
   /** Shut down the authority system.
@@ -192,7 +190,7 @@
     throws ManifoldCFException
   {
 
-    while (idleCleanupThread != null || authCheckThreads != null)
+    while (idleCleanupThread != null || authCheckThreads != null || mappingThreads != null)
     {
       if (idleCleanupThread != null)
       {
@@ -200,15 +198,23 @@
       }
       if (authCheckThreads != null)
       {
-        int i = 0;
-        while (i < authCheckThreads.length)
+        for (int i = 0; i < authCheckThreads.length; i++)
         {
-          Thread authCheckThread = authCheckThreads[i++];
+          Thread authCheckThread = authCheckThreads[i];
           if (authCheckThread != null)
             authCheckThread.interrupt();
         }
       }
-
+      if (mappingThreads != null)
+      {
+        for (int i = 0; i < mappingThreads.length; i++)
+        {
+          Thread mappingThread = mappingThreads[i];
+          if (mappingThread != null)
+            mappingThread.interrupt();
+        }
+      }
+      
       if (idleCleanupThread != null)
       {
         if (!idleCleanupThread.isAlive())
@@ -216,9 +222,8 @@
       }
       if (authCheckThreads != null)
       {
-        int i = 0;
         boolean isAlive = false;
-        while (i < authCheckThreads.length)
+        for (int i = 0; i < authCheckThreads.length; i++)
         {
           Thread authCheckThread = authCheckThreads[i];
           if (authCheckThread != null)
@@ -234,6 +239,25 @@
           authCheckThreads = null;
       }
 
+      if (mappingThreads != null)
+      {
+        boolean isAlive = false;
+        for (int i = 0; i < mappingThreads.length; i++)
+        {
+          Thread mappingThread = mappingThreads[i];
+          if (mappingThread != null)
+          {
+            if (!mappingThread.isAlive())
+              mappingThreads[i] = null;
+            else
+              isAlive = true;
+          }
+        }
+        if (!isAlive)
+          mappingThreads = null;
+      }
+
       try
       {
         ManifoldCF.sleep(1000);
@@ -244,16 +268,25 @@
     }
 
     // Release all authority connectors
-    AuthorityConnectorFactory.closeAllConnectors(threadContext);
+    AuthorityConnectorPoolFactory.make(threadContext).flushUnusedConnectors();
     numAuthCheckThreads = 0;
     requestQueue = null;
+    MappingConnectorPoolFactory.make(threadContext).flushUnusedConnectors();
+    numMappingThreads = 0;
+    mappingRequestQueue = null;
   }
 
   /** Get the current request queue */
-  public static RequestQueue getRequestQueue()
+  public static RequestQueue<AuthRequest> getRequestQueue()
   {
     return requestQueue;
   }
 
+  /** Get the current mapping request queue */
+  public static RequestQueue<MappingRequest> getMappingRequestQueue()
+  {
+    return mappingRequestQueue;
+  }
+  
 }
 
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/system/MappingRequest.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/system/MappingRequest.java
new file mode 100644
index 0000000..f51b4f2
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/system/MappingRequest.java
@@ -0,0 +1,120 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.authorities.system;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.authorities.interfaces.*;
+import java.util.*;
+
+/** This class describes a user mapping request.  The request has state: It can be in an incomplete state, or it can be in a complete state.
+* The thread that cares whether the request is complete needs to be able to wait for that situation to occur, so the request has
+* a method that does just that.
+*/
+public class MappingRequest
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  // This is where the request data actually lives
+  protected String userID;
+  protected final IMappingConnection mappingConnection;
+  protected final String identifyingString;
+
+  // These are the possible results of the request
+  protected boolean answerComplete = false;
+  protected String outputUserID = null;
+  protected Throwable answerException = null;
+
+  /** Construct the request, and record the question.
+  */
+  public MappingRequest(IMappingConnection mappingConnection, String identifyingString)
+  {
+    this.mappingConnection = mappingConnection;
+    this.identifyingString = identifyingString;
+  }
+
+  /** Set the user ID we'll be using */
+  public void setUserID(String userID)
+  {
+    this.userID = userID;
+  }
+  
+  /** Get the user ID */
+  public String getUserID()
+  {
+    return userID;
+  }
+  
+  /** Get the mapping connection.
+  */
+  public IMappingConnection getMappingConnection()
+  {
+    return mappingConnection;
+  }
+
+  /** Get the identifying string, to pass back to the user if there was a problem */
+  public String getIdentifyingString()
+  {
+    return identifyingString;
+  }
+
+  /** Wait for an auth request to be complete.
+  */
+  public void waitForComplete()
+    throws InterruptedException
+  {
+    synchronized (this)
+    {
+      // Guard against spurious wakeups: only return once the answer has actually been recorded.
+      while (!answerComplete)
+        this.wait();
+    }
+  }
+
+  /** Note that the request is complete, and record the answers.
+  */
+  public void completeRequest(String outputUserID, Throwable answerException)
+  {
+    synchronized (this)
+    {
+      if (answerComplete)
+        return;
+
+      // Record the answer.
+      answerComplete = true;
+      this.outputUserID = outputUserID;
+      this.answerException = answerException;
+
+      // Notify threads waiting on the answer.
+      this.notifyAll();
+    }
+  }
+
+  /** Get the answer user */
+  public String getAnswerResponse()
+  {
+    return outputUserID;
+  }
+
+  /** Get the answer exception */
+  public Throwable getAnswerException()
+  {
+    return answerException;
+  }
+
+}
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/system/MappingThread.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/system/MappingThread.java
new file mode 100644
index 0000000..fd1fe45
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/system/MappingThread.java
@@ -0,0 +1,162 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.authorities.system;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+import org.apache.manifoldcf.authorities.interfaces.*;
+import org.apache.manifoldcf.authorities.system.Logging;
+import java.util.*;
+import java.lang.reflect.*;
+
+/** This thread performs actual user mapping operations.
+*/
+public class MappingThread extends Thread
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  // Local data
+  protected RequestQueue<MappingRequest> requestQueue;
+
+  /** Constructor.
+  */
+  public MappingThread(String id, RequestQueue<MappingRequest> requestQueue)
+    throws ManifoldCFException
+  {
+    super();
+    this.requestQueue = requestQueue;
+    setName("Mapping thread "+id);
+    setDaemon(true);
+  }
+
+  public void run()
+  {
+    // Create a thread context object.
+    IThreadContext threadContext = ThreadContextFactory.make();
+    try
+    {
+      IMappingConnectorPool mappingConnectorPool = MappingConnectorPoolFactory.make(threadContext);
+      // Loop
+      while (true)
+      {
+        // Do another try/catch around everything in the loop
+        try
+        {
+          if (Thread.currentThread().isInterrupted())
+            throw new ManifoldCFException("Interrupted",ManifoldCFException.INTERRUPTED);
+
+          // Wait for a request.
+          MappingRequest theRequest = requestQueue.getRequest();
+
+          // Try to fill the request before going back to sleep.
+          if (Logging.authorityService.isDebugEnabled())
+          {
+            Logging.authorityService.debug(" Calling mapping connector class '"+theRequest.getMappingConnection().getClassName()+"'");
+          }
+
+          String outputUserID = null;
+          Throwable exception = null;
+
+          // Only try a mapping if we have a user to map...
+          if (theRequest.getUserID() != null)
+          {
+            try
+            {
+              IMappingConnector connector = mappingConnectorPool.grab(theRequest.getMappingConnection());
+              try
+              {
+                if (connector == null)
+                  exception = new ManifoldCFException("Mapping connector "+theRequest.getMappingConnection().getClassName()+" is not registered.");
+                else
+                {
+                  // Do the mapping
+                  try
+                  {
+                    outputUserID = connector.mapUser(theRequest.getUserID());
+                  }
+                  catch (ManifoldCFException e)
+                  {
+                    if (e.getErrorCode() == ManifoldCFException.INTERRUPTED)
+                      throw e;
+                    Logging.authorityService.warn("Mapping error: "+e.getMessage(),e);
+                  }
+
+                }
+              }
+              finally
+              {
+                mappingConnectorPool.release(theRequest.getMappingConnection(),connector);
+              }
+            }
+            catch (ManifoldCFException e)
+            {
+              if (e.getErrorCode() == ManifoldCFException.INTERRUPTED)
+                throw e;
+              Logging.authorityService.warn("Mapping connection exception: "+e.getMessage(),e);
+              exception = e;
+            }
+            catch (Throwable e)
+            {
+              Logging.authorityService.warn("Mapping connection error: "+e.getMessage(),e);
+              exception = e;
+            }
+          }
+
+          // The request is complete
+          theRequest.completeRequest(outputUserID, exception);
+
+          // Repeat, and only go to sleep if there are no more requests.
+        }
+        catch (ManifoldCFException e)
+        {
+          if (e.getErrorCode() == ManifoldCFException.INTERRUPTED)
+            break;
+
+          // Log it, but keep the thread alive
+          Logging.authorityService.error("Exception tossed: "+e.getMessage(),e);
+
+          if (e.getErrorCode() == ManifoldCFException.SETUP_ERROR)
+          {
+            // Shut the whole system down!
+            System.exit(1);
+          }
+
+        }
+        catch (InterruptedException e)
+        {
+          // We're supposed to quit
+          break;
+        }
+        catch (Throwable e)
+        {
+          // A more severe error - but stay alive
+          Logging.authorityService.fatal("Error tossed: "+e.getMessage(),e);
+        }
+      }
+    }
+    catch (ManifoldCFException e)
+    {
+      // Severe error on initialization
+      System.err.println("Authority service mapping thread could not start - shutting down");
+      Logging.authorityService.fatal("MappingThread initialization error tossed: "+e.getMessage(),e);
+      System.exit(-300);
+    }
+  }
+
+}
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/system/RequestQueue.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/system/RequestQueue.java
index 2f71f43..5bba282 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/system/RequestQueue.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/authorities/system/RequestQueue.java
@@ -28,12 +28,12 @@
 * (b) the "reader" threads block if queue is empty.
 * The objects being queued are all AuthRequest objects.
 */
-public class RequestQueue
+public class RequestQueue<T>
 {
   public static final String _rcsid = "@(#)$Id: RequestQueue.java 988245 2010-08-23 18:39:35Z kwright $";
 
   // Since the queue has a maximum size, an ArrayList is a fine way to keep it
-  protected ArrayList queue = new ArrayList();
+  protected List<T> queue = new ArrayList<T>();
 
   /** Constructor.
   */
@@ -44,7 +44,7 @@
   /** Add a request to the queue.
   *@param dd is the request.
   */
-  public void addRequest(AuthRequest dd)
+  public void addRequest(T dd)
   {
     synchronized (queue)
     {
@@ -57,7 +57,7 @@
   * nothing there.
   *@return the request to be processed.
   */
-  public AuthRequest getRequest()
+  public T getRequest()
     throws InterruptedException
   {
     synchronized (queue)
@@ -66,7 +66,7 @@
       while (queue.size() == 0)
         queue.wait();
 
-      return (AuthRequest)queue.remove(queue.size()-1);
+      return queue.remove(queue.size()-1);
     }
   }
 
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/AbortJob.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/AbortJob.java
index 02517d2..e29b020 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/AbortJob.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/AbortJob.java
@@ -28,38 +28,38 @@
 */
 public class AbortJob
 {
-        public static final String _rcsid = "@(#)$Id: AbortJob.java 988245 2010-08-23 18:39:35Z kwright $";
+  public static final String _rcsid = "@(#)$Id: AbortJob.java 988245 2010-08-23 18:39:35Z kwright $";
 
-        private AbortJob()
-        {
-        }
+  private AbortJob()
+  {
+  }
 
-        // Add: throttle, priority, recrawl interval
+  // Add: throttle, priority, recrawl interval
 
-        public static void main(String[] args)
-        {
-                if (args.length != 1)
-                {
-                        System.err.println("Usage: AbortJob <jobid>");
-                        System.exit(1);
-                }
+  public static void main(String[] args)
+  {
+    if (args.length != 1)
+    {
+      System.err.println("Usage: AbortJob <jobid>");
+      System.exit(1);
+    }
 
-                String jobID = args[0];
+    String jobID = args[0];
 
 
-                try
-                {
-                        ManifoldCF.initializeEnvironment();
-                        IThreadContext tc = ThreadContextFactory.make();
-                        IJobManager jobManager = JobManagerFactory.make(tc);
-                        jobManager.manualAbort(new Long(jobID));
-                        System.err.println("Job aborting");
-                }
-                catch (Exception e)
-                {
-                        e.printStackTrace();
-                        System.exit(2);
-                }
-        }
-                
+    try
+    {
+      IThreadContext tc = ThreadContextFactory.make();
+      ManifoldCF.initializeEnvironment(tc);
+      IJobManager jobManager = JobManagerFactory.make(tc);
+      jobManager.manualAbort(new Long(jobID));
+      System.err.println("Job aborting");
+    }
+    catch (Exception e)
+    {
+      e.printStackTrace();
+      System.exit(2);
+    }
+  }
+    
 }
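
Every command-line class in this file and the ones that follow gets the same two-line change: the IThreadContext is created first and passed to ManifoldCF.initializeEnvironment(tc), replacing the old no-argument call. A minimal, hypothetical command showing the resulting skeleton is sketched below; the class name ExampleCommand and the placeholder body are illustrative, and the stated motivation (letting environment setup use the caller's thread context) is an inference rather than something spelled out in this commit.

import org.apache.manifoldcf.core.interfaces.*;
import org.apache.manifoldcf.crawler.interfaces.*;
import org.apache.manifoldcf.crawler.system.ManifoldCF;

public class ExampleCommand
{
  private ExampleCommand()
  {
  }

  public static void main(String[] args)
  {
    try
    {
      // Create the thread context first, then hand it to environment initialization...
      IThreadContext tc = ThreadContextFactory.make();
      ManifoldCF.initializeEnvironment(tc);
      // ...and reuse the same context for all factory calls.
      IJobManager jobManager = JobManagerFactory.make(tc);
      // Command-specific work would go here, e.g. jobManager.manualStart(...).
    }
    catch (Exception e)
    {
      e.printStackTrace();
      System.exit(2);
    }
  }
}
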
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/AddScheduledTime.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/AddScheduledTime.java
index c9d1f13..2280487 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/AddScheduledTime.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/AddScheduledTime.java
@@ -69,8 +69,8 @@
 
     try
     {
-      ManifoldCF.initializeEnvironment();
       IThreadContext tc = ThreadContextFactory.make();
+      ManifoldCF.initializeEnvironment(tc);
       IJobManager jobManager = JobManagerFactory.make(tc);
 
       IJobDescription desc = jobManager.load(new Long(jobID));
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/BaseCrawlerInitializationCommand.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/BaseCrawlerInitializationCommand.java
index 8e6717b..11a869c 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/BaseCrawlerInitializationCommand.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/BaseCrawlerInitializationCommand.java
@@ -32,8 +32,8 @@
 {
   public void execute() throws ManifoldCFException
   {
-    ManifoldCF.initializeEnvironment();
     IThreadContext tc = ThreadContextFactory.make();
+    ManifoldCF.initializeEnvironment(tc);
     doExecute(tc);
   }
 
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/ChangeJobDocSpec.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/ChangeJobDocSpec.java
index 271ddbd..067fc6e 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/ChangeJobDocSpec.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/ChangeJobDocSpec.java
@@ -28,46 +28,46 @@
 */
 public class ChangeJobDocSpec
 {
-        public static final String _rcsid = "@(#)$Id: ChangeJobDocSpec.java 988245 2010-08-23 18:39:35Z kwright $";
+  public static final String _rcsid = "@(#)$Id: ChangeJobDocSpec.java 988245 2010-08-23 18:39:35Z kwright $";
 
-        private ChangeJobDocSpec()
-        {
-        }
+  private ChangeJobDocSpec()
+  {
+  }
 
-        public static void main(String[] args)
-        {
-                if (args.length != 2)
-                {
-                        System.err.println("Usage: ChangeJobDocSpec <jobid> <filespec_xml>");
-                        System.exit(1);
-                }
+  public static void main(String[] args)
+  {
+    if (args.length != 2)
+    {
+      System.err.println("Usage: ChangeJobDocSpec <jobid> <filespec_xml>");
+      System.exit(1);
+    }
 
-                String jobID = args[0];
-                String filespecXML = args[1];
+    String jobID = args[0];
+    String filespecXML = args[1];
 
 
-                try
-                {
-                        ManifoldCF.initializeEnvironment();
-                        IThreadContext tc = ThreadContextFactory.make();
-                        IJobManager jobManager = JobManagerFactory.make(tc);
-                        IJobDescription desc = jobManager.load(new Long(jobID));
-                        if (desc == null)
-                        {
-                                System.err.println("No such job: "+jobID);
-                                System.exit(3);
-                        }
-                        desc.getSpecification().fromXML(filespecXML);
+    try
+    {
+      IThreadContext tc = ThreadContextFactory.make();
+      ManifoldCF.initializeEnvironment(tc);
+      IJobManager jobManager = JobManagerFactory.make(tc);
+      IJobDescription desc = jobManager.load(new Long(jobID));
+      if (desc == null)
+      {
+        System.err.println("No such job: "+jobID);
+        System.exit(3);
+      }
+      desc.getSpecification().fromXML(filespecXML);
 
-                        // Now, save
-                        jobManager.save(desc);
-                        System.err.println("Job doc spec has been changed");
-                }
-                catch (Exception e)
-                {
-                        e.printStackTrace();
-                        System.exit(2);
-                }
-        }
-                
+      // Now, save
+      jobManager.save(desc);
+      System.err.println("Job doc spec has been changed");
+    }
+    catch (Exception e)
+    {
+      e.printStackTrace();
+      System.exit(2);
+    }
+  }
+    
 }
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/CheckConfigured.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/CheckConfigured.java
index e734f57..224a6c6 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/CheckConfigured.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/CheckConfigured.java
@@ -28,37 +28,37 @@
 */
 public class CheckConfigured
 {
-        public static final String _rcsid = "@(#)$Id: CheckConfigured.java 988245 2010-08-23 18:39:35Z kwright $";
+  public static final String _rcsid = "@(#)$Id: CheckConfigured.java 988245 2010-08-23 18:39:35Z kwright $";
 
-        private CheckConfigured()
-        {
-        }
+  private CheckConfigured()
+  {
+  }
 
-        // Add: throttle, priority, recrawl interval
+  // Add: throttle, priority, recrawl interval
 
-        public static void main(String[] args)
-        {
-                if (args.length != 0)
-                {
-                        System.err.println("Usage: CheckConfigured");
-                        System.exit(1);
-                }
+  public static void main(String[] args)
+  {
+    if (args.length != 0)
+    {
+      System.err.println("Usage: CheckConfigured");
+      System.exit(1);
+    }
 
-                try
-                {
-                        ManifoldCF.initializeEnvironment();
-                        IThreadContext tc = ThreadContextFactory.make();
-                        IRepositoryConnectionManager connManager = RepositoryConnectionManagerFactory.make(tc);
-                        if (connManager.getAllConnections().length > 0)
-                                UTF8Stdout.println("CONFIGURED");
-                        else
-                                UTF8Stdout.println("OK");
-                }
-                catch (Exception e)
-                {
-                        e.printStackTrace();
-                        System.exit(2);
-                }
-        }
-                
+    try
+    {
+      IThreadContext tc = ThreadContextFactory.make();
+      ManifoldCF.initializeEnvironment(tc);
+      IRepositoryConnectionManager connManager = RepositoryConnectionManagerFactory.make(tc);
+      if (connManager.getAllConnections().length > 0)
+        UTF8Stdout.println("CONFIGURED");
+      else
+        UTF8Stdout.println("OK");
+    }
+    catch (Exception e)
+    {
+      e.printStackTrace();
+      System.exit(2);
+    }
+  }
+    
 }
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/DefineJob.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/DefineJob.java
index ae08112..c9143dd 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/DefineJob.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/DefineJob.java
@@ -28,117 +28,115 @@
 */
 public class DefineJob
 {
-        public static final String _rcsid = "@(#)$Id: DefineJob.java 988245 2010-08-23 18:39:35Z kwright $";
+  public static final String _rcsid = "@(#)$Id: DefineJob.java 988245 2010-08-23 18:39:35Z kwright $";
 
-        private DefineJob()
+  private DefineJob()
+  {
+  }
+
+  public static void main(String[] args)
+  {
+    if (args.length != 13)
+    {
+      System.err.println("Usage: DefineJob <description> <connection_name> <output_name> <type> <start_method> <hopcount_method> <recrawl_interval> <expiration_interval> <reseed_interval> <job_priority> <hop_filters> <filespec_xml> <outputspec_xml>");
+      System.err.println("<type> is one of: continuous or specified");
+      System.err.println("<start_method> is one of: windowbegin, windowinside, disable");
+      System.err.println("<hopcount_method> is one of: accurate, nodelete, neverdelete");
+      System.err.println("<recrawl_interval> is the default document recrawl interval in minutes");
+      System.err.println("<expiration_interval> is the default document expiration interval in minutes");
+      System.err.println("<reseed_interval> is the default document reseed interval in minutes");
+      System.err.println("<job_priority> is the job priority (and integer between 0 and 10)");
+      System.err.println("<hop_filters> is a comma-separated list of tuples, of the form 'linktype=maxhops'");
+      System.err.println("<filespec_xml> is the document specification XML, its form dependent on the connection type");
+      System.err.println("<outputspec_xml> is the output specification XML, its form dependent on the output connection type");
+      System.exit(-1);
+    }
+
+    String description = args[0];
+    String connectionName = args[1];
+    String outputConnectionName = args[2];
+    String typeString = args[3];
+    String startString = args[4];
+    String hopcountString = args[5];
+    String recrawlInterval = args[6];
+    String expirationInterval = args[7];
+    String reseedInterval = args[8];
+    String jobPriority = args[9];
+    String hopFilters = args[10];
+    String filespecXML = args[11];
+    String outputspecXML = args[12];
+
+    try
+    {
+      IThreadContext tc = ThreadContextFactory.make();
+      ManifoldCF.initializeEnvironment(tc);
+      IJobManager jobManager = JobManagerFactory.make(tc);
+      IJobDescription desc = jobManager.createJob();
+
+      desc.setDescription(description);
+      desc.setConnectionName(connectionName);
+      desc.setOutputConnectionName(outputConnectionName);
+
+      if (typeString.equals("continuous"))
+        desc.setType(IJobDescription.TYPE_CONTINUOUS);
+      else if (typeString.equals("specified"))
+        desc.setType(IJobDescription.TYPE_SPECIFIED);
+      else
+        throw new ManifoldCFException("Unknown type: '"+typeString+"'");
+      if (startString.equals("windowbegin"))
+        desc.setStartMethod(IJobDescription.START_WINDOWBEGIN);
+      else if (startString.equals("windowinside"))
+        desc.setStartMethod(IJobDescription.START_WINDOWINSIDE);
+      else if (startString.equals("disable"))
+        desc.setStartMethod(IJobDescription.START_DISABLE);
+      else
+        throw new ManifoldCFException("Unknown start method: '"+startString+"'");
+
+      if (hopcountString.equals("accurate"))
+        desc.setHopcountMode(IJobDescription.HOPCOUNT_ACCURATE);
+      else if (hopcountString.equals("nodelete"))
+        desc.setHopcountMode(IJobDescription.HOPCOUNT_NODELETE);
+      else if (hopcountString.equals("neverdelete"))
+        desc.setHopcountMode(IJobDescription.HOPCOUNT_NEVERDELETE);
+      else
+        throw new ManifoldCFException("Unknown hopcount mode: '"+hopcountString+"'");
+      
+      if (recrawlInterval.length() > 0)
+        desc.setInterval(new Long(recrawlInterval));
+      if (expirationInterval.length() > 0)
+        desc.setExpiration(new Long(expirationInterval));
+      if (reseedInterval.length() > 0)
+        desc.setReseedInterval(new Long(reseedInterval));
+      desc.setPriority(Integer.parseInt(jobPriority));
+      
+      String[] hopFilterSet = hopFilters.split(",");
+      int i = 0;
+      while (i < hopFilterSet.length)
+      {
+        String hopFilter = hopFilterSet[i++];
+        if (hopFilter != null && hopFilter.length() > 0)
         {
+          String[] stuff = hopFilter.trim().split("=");
+          if (stuff != null && stuff.length == 2)
+            desc.addHopCountFilter(stuff[0],((stuff[1].length()>0)?new Long(stuff[1]):null));
         }
+      }
+      
+      desc.getSpecification().fromXML(filespecXML);
+      if (outputspecXML.length() > 0)
+        desc.getOutputSpecification().fromXML(outputspecXML);
+      
+      // Now, save
+      jobManager.save(desc);
 
-        public static void main(String[] args)
-        {
-                if (args.length != 13)
-                {
-                        System.err.println("Usage: DefineJob <description> <connection_name> <output_name> <type> <start_method> <hopcount_method> <recrawl_interval> <expiration_interval> <reseed_interval> <job_priority> <hop_filters> <filespec_xml> <outputspec_xml>");
-                        System.err.println("<type> is one of: continuous or specified");
-                        System.err.println("<start_method> is one of: windowbegin, windowinside, disable");
-                        System.err.println("<hopcount_method> is one of: accurate, nodelete, neverdelete");
-                        System.err.println("<recrawl_interval> is the default document recrawl interval in minutes");
-                        System.err.println("<expiration_interval> is the default document expiration interval in minutes");
-                        System.err.println("<reseed_interval> is the default document reseed interval in minutes");
-                        System.err.println("<job_priority> is the job priority (and integer between 0 and 10)");
-                        System.err.println("<hop_filters> is a comma-separated list of tuples, of the form 'linktype=maxhops'");
-                        System.err.println("<filespec_xml> is the document specification XML, its form dependent on the connection type");
-                        System.err.println("<outputspec_xml> is the output specification XML, its form dependent on the output connection type");
-                        System.exit(-1);
-                }
-
-                String description = args[0];
-                String connectionName = args[1];
-                String outputConnectionName = args[2];
-                String typeString = args[3];
-                String startString = args[4];
-                String hopcountString = args[5];
-                String recrawlInterval = args[6];
-                String expirationInterval = args[7];
-                String reseedInterval = args[8];
-                String jobPriority = args[9];
-                String hopFilters = args[10];
-                String filespecXML = args[11];
-                String outputspecXML = args[12];
-
-                try
-                {
-                        ManifoldCF.initializeEnvironment();
-                        IThreadContext tc = ThreadContextFactory.make();
-                        IJobManager jobManager = JobManagerFactory.make(tc);
-                        IJobDescription desc = jobManager.createJob();
-
-                        desc.setDescription(description);
-                        desc.setConnectionName(connectionName);
-                        desc.setOutputConnectionName(outputConnectionName);
-
-                        if (typeString.equals("continuous"))
-                                desc.setType(IJobDescription.TYPE_CONTINUOUS);
-                        else if (typeString.equals("specified"))
-                                desc.setType(IJobDescription.TYPE_SPECIFIED);
-                        else
-                                throw new ManifoldCFException("Unknown type: '"+typeString+"'");
-                        if (startString.equals("windowbegin"))
-                                desc.setStartMethod(IJobDescription.START_WINDOWBEGIN);
-                        else if (startString.equals("windowinside"))
-                                desc.setStartMethod(IJobDescription.START_WINDOWINSIDE);
-                        else if (startString.equals("disable"))
-                                desc.setStartMethod(IJobDescription.START_DISABLE);
-                        else
-                                throw new ManifoldCFException("Unknown start method: '"+startString+"'");
-
-                        if (hopcountString.equals("accurate"))
-                                desc.setHopcountMode(IJobDescription.HOPCOUNT_ACCURATE);
-                        else if (hopcountString.equals("nodelete"))
-                                desc.setHopcountMode(IJobDescription.HOPCOUNT_NODELETE);
-                        else if (hopcountString.equals("neverdelete"))
-                                desc.setHopcountMode(IJobDescription.HOPCOUNT_NEVERDELETE);
-                        else
-                                throw new ManifoldCFException("Unknown hopcount mode: '"+hopcountString+"'");
-                        
-                        if (recrawlInterval.length() > 0)
-                                desc.setInterval(new Long(recrawlInterval));
-                        if (expirationInterval.length() > 0)
-                                desc.setExpiration(new Long(expirationInterval));
-                        if (reseedInterval.length() > 0)
-                                desc.setReseedInterval(new Long(reseedInterval));
-                        desc.setPriority(Integer.parseInt(jobPriority));
-                        
-                        String[] hopFilterSet = hopFilters.split(",");
-                        int i = 0;
-                        while (i < hopFilterSet.length)
-                        {
-                                String hopFilter = hopFilterSet[i++];
-                                if (hopFilter != null && hopFilter.length() > 0)
-                                {
-                                    String[] stuff = hopFilter.trim().split("=");
-                                    if (stuff != null && stuff.length == 2)
-                                        desc.addHopCountFilter(stuff[0],((stuff[1].length()>0)?new Long(stuff[1]):null));
-                                }
-                        }
-                        
-                        desc.getSpecification().fromXML(filespecXML);
-                        if (outputspecXML.length() > 0)
-                                desc.getOutputSpecification().fromXML(outputspecXML);
-                        
-                        // Now, save
-                        jobManager.save(desc);
-
-                        System.out.print(desc.getID().toString());
-                }
-                catch (Exception e)
-                {
-                        e.printStackTrace();
-                        System.exit(-2);
-                }
-        }
+      System.out.print(desc.getID().toString());
+    }
+    catch (Exception e)
+    {
+      e.printStackTrace();
+      System.exit(-2);
+    }
+  }
 
 
-
-                
 }
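
For reference, a hypothetical invocation of DefineJob with the thirteen positional arguments described by its usage text. All values, including the connection names and the specification XML, are illustrative only, and actually running it requires an already configured ManifoldCF environment and database.

public class DefineJobExample
{
  public static void main(String[] ignored) throws Exception
  {
    String[] args = {
      "Example crawl job",        // description
      "MyRepositoryConnection",   // connection_name
      "MyOutputConnection",       // output_name
      "specified",                // type: "continuous" or "specified"
      "windowbegin",              // start_method: windowbegin, windowinside, disable
      "accurate",                 // hopcount_method: accurate, nodelete, neverdelete
      "",                         // recrawl_interval in minutes (blank = unset)
      "",                         // expiration_interval in minutes (blank = unset)
      "",                         // reseed_interval in minutes (blank = unset)
      "5",                        // job_priority, an integer between 0 and 10
      "link=5",                   // hop_filters: comma-separated linktype=maxhops tuples
      "<specification/>",         // filespec_xml (placeholder; connector-dependent)
      ""                          // outputspec_xml (blank = none)
    };
    org.apache.manifoldcf.crawler.DefineJob.main(args);
  }
}
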
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/DefineRepositoryConnection.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/DefineRepositoryConnection.java
index 89c0edd..087662e 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/DefineRepositoryConnection.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/DefineRepositoryConnection.java
@@ -28,69 +28,66 @@
 */
 public class DefineRepositoryConnection
 {
-        public static final String _rcsid = "@(#)$Id: DefineRepositoryConnection.java 988245 2010-08-23 18:39:35Z kwright $";
+  public static final String _rcsid = "@(#)$Id: DefineRepositoryConnection.java 988245 2010-08-23 18:39:35Z kwright $";
 
-        private DefineRepositoryConnection()
-        {
-        }
+  private DefineRepositoryConnection()
+  {
+  }
 
 
-        public static void main(String[] args)
-        {
-                if (args.length < 5)
-                {
-                        System.err.println("Usage: DefineRepositoryConnection <connection_name> <description> <connector_class> <authority_name> <pool_max> <param1>=<value1> ...");
-                        System.exit(1);
-                }
+  public static void main(String[] args)
+  {
+    if (args.length < 5)
+    {
+      System.err.println("Usage: DefineRepositoryConnection <connection_name> <description> <connector_class> <authority_name> <pool_max> <param1>=<value1> ...");
+      System.exit(1);
+    }
 
-                String connectionName = args[0];
-                String description = args[1];
-                String connectorClass = args[2];
-                String authorityName = args[3];
-                String poolMax = args[4];
+    String connectionName = args[0];
+    String description = args[1];
+    String connectorClass = args[2];
+    String authorityName = args[3];
+    String poolMax = args[4];
 
 
-                try
-                {
-                        ManifoldCF.initializeEnvironment();
-                        IThreadContext tc = ThreadContextFactory.make();
-                        IRepositoryConnectionManager mgr = RepositoryConnectionManagerFactory.make(tc);
-                        IRepositoryConnection conn = mgr.create();
-                        conn.setName(connectionName);
-                        conn.setDescription(description);
-                        conn.setClassName(connectorClass);
-                        if (authorityName.length() > 0)
-                                conn.setACLAuthority(authorityName);
-                        conn.setMaxConnections(new Integer(poolMax).intValue());
-                        ConfigParams x = conn.getConfigParams();
-                        int i = 5;
-                        while (i < args.length)
-                        {
-                                String arg = args[i++];
-                                // Parse
-                                int pos = arg.indexOf("=");
-                                if (pos == -1)
-                                        throw new ManifoldCFException("Argument missing =");
-                                String name = arg.substring(0,pos);
-                                String value = arg.substring(pos+1);
-                                if (name.endsWith("assword"))
-                                        x.setObfuscatedParameter(name,value);
-                                else
-                                        x.setParameter(name,value);
-                        }
+    try
+    {
+      IThreadContext tc = ThreadContextFactory.make();
+      ManifoldCF.initializeEnvironment(tc);
+      IRepositoryConnectionManager mgr = RepositoryConnectionManagerFactory.make(tc);
+      IRepositoryConnection conn = mgr.create();
+      conn.setName(connectionName);
+      conn.setDescription(description);
+      conn.setClassName(connectorClass);
+      if (authorityName.length() > 0)
+        conn.setACLAuthority(authorityName);
+      conn.setMaxConnections(new Integer(poolMax).intValue());
+      ConfigParams x = conn.getConfigParams();
+      int i = 5;
+      while (i < args.length)
+      {
+        String arg = args[i++];
+        // Parse
+        int pos = arg.indexOf("=");
+        if (pos == -1)
+          throw new ManifoldCFException("Argument missing =");
+        String name = arg.substring(0,pos);
+        String value = arg.substring(pos+1);
+        if (name.endsWith("assword"))
+          x.setObfuscatedParameter(name,value);
+        else
+          x.setParameter(name,value);
+      }
 
-                        // Now, save
-                        mgr.save(conn);
+      // Now, save
+      mgr.save(conn);
 
-                }
-                catch (Exception e)
-                {
-                        e.printStackTrace();
-                        System.exit(2);
-                }
-        }
+    }
+    catch (Exception e)
+    {
+      e.printStackTrace();
+      System.exit(2);
+    }
+  }
 
-
-
-                
 }
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/DeleteJob.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/DeleteJob.java
index 85f8d45..5a2f83a 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/DeleteJob.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/DeleteJob.java
@@ -28,36 +28,36 @@
 */
 public class DeleteJob
 {
-        public static final String _rcsid = "@(#)$Id: DeleteJob.java 988245 2010-08-23 18:39:35Z kwright $";
+  public static final String _rcsid = "@(#)$Id: DeleteJob.java 988245 2010-08-23 18:39:35Z kwright $";
 
-        private DeleteJob()
-        {
-        }
+  private DeleteJob()
+  {
+  }
 
-        public static void main(String[] args)
-        {
-                if (args.length != 1)
-                {
-                        System.err.println("Usage: DeleteJob <jobid>");
-                        System.exit(1);
-                }
+  public static void main(String[] args)
+  {
+    if (args.length != 1)
+    {
+      System.err.println("Usage: DeleteJob <jobid>");
+      System.exit(1);
+    }
 
-                String jobID = args[0];
+    String jobID = args[0];
 
 
-                try
-                {
-                        ManifoldCF.initializeEnvironment();
-                        IThreadContext tc = ThreadContextFactory.make();
-                        IJobManager jobManager = JobManagerFactory.make(tc);
-                        jobManager.deleteJob(new Long(jobID));
-                        System.out.println("Job deleting");
-                }
-                catch (Exception e)
-                {
-                        e.printStackTrace();
-                        System.exit(2);
-                }
-        }
-                
+    try
+    {
+      IThreadContext tc = ThreadContextFactory.make();
+      ManifoldCF.initializeEnvironment(tc);
+      IJobManager jobManager = JobManagerFactory.make(tc);
+      jobManager.deleteJob(new Long(jobID));
+      System.out.println("Job deleting");
+    }
+    catch (Exception e)
+    {
+      e.printStackTrace();
+      System.exit(2);
+    }
+  }
+    
 }
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/DeleteRepositoryConnection.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/DeleteRepositoryConnection.java
index 08df389..6d2842d 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/DeleteRepositoryConnection.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/DeleteRepositoryConnection.java
@@ -28,38 +28,38 @@
 */
 public class DeleteRepositoryConnection
 {
-        public static final String _rcsid = "@(#)$Id: DeleteRepositoryConnection.java 988245 2010-08-23 18:39:35Z kwright $";
+  public static final String _rcsid = "@(#)$Id: DeleteRepositoryConnection.java 988245 2010-08-23 18:39:35Z kwright $";
 
-        private DeleteRepositoryConnection()
-        {
-        }
+  private DeleteRepositoryConnection()
+  {
+  }
 
 
-        public static void main(String[] args)
-        {
-                if (args.length != 1)
-                {
-                        System.err.println("Usage: DeleteRepositoryConnection <connection_name>");
-                        System.exit(1);
-                }
+  public static void main(String[] args)
+  {
+    if (args.length != 1)
+    {
+      System.err.println("Usage: DeleteRepositoryConnection <connection_name>");
+      System.exit(1);
+    }
 
-                String connectionName = args[0];
-                try
-                {
-                        ManifoldCF.initializeEnvironment();
-                        IThreadContext tc = ThreadContextFactory.make();
-                        IRepositoryConnectionManager mgr = RepositoryConnectionManagerFactory.make(tc);
-                        mgr.delete(connectionName);
+    String connectionName = args[0];
+    try
+    {
+      IThreadContext tc = ThreadContextFactory.make();
+      ManifoldCF.initializeEnvironment(tc);
+      IRepositoryConnectionManager mgr = RepositoryConnectionManagerFactory.make(tc);
+      mgr.delete(connectionName);
 
-                }
-                catch (Exception e)
-                {
-                        e.printStackTrace();
-                        System.exit(2);
-                }
-        }
+    }
+    catch (Exception e)
+    {
+      e.printStackTrace();
+      System.exit(2);
+    }
+  }
 
 
 
-                
+    
 }
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/FindJob.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/FindJob.java
index 06372d5..a65c05c 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/FindJob.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/FindJob.java
@@ -48,8 +48,8 @@
 
     try
     {
-      ManifoldCF.initializeEnvironment();
       IThreadContext tc = ThreadContextFactory.make();
+      ManifoldCF.initializeEnvironment(tc);
       IJobManager jobManager = JobManagerFactory.make(tc);
       IJobDescription[] jobs = jobManager.getAllJobs();
       int i = 0;
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/GetJobSchedule.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/GetJobSchedule.java
index 51dbecf..6377be7 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/GetJobSchedule.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/GetJobSchedule.java
@@ -49,8 +49,8 @@
 
     try
     {
-      ManifoldCF.initializeEnvironment();
       IThreadContext tc = ThreadContextFactory.make();
+      ManifoldCF.initializeEnvironment(tc);
       IJobManager jobManager = JobManagerFactory.make(tc);
 
       IJobDescription job = jobManager.load(new Long(jobID));
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/InitializeAndRegister.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/InitializeAndRegister.java
index 0570b84..23524c1 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/InitializeAndRegister.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/InitializeAndRegister.java
@@ -53,8 +53,8 @@
 
       try
       {
-        ManifoldCF.initializeEnvironment();
         IThreadContext tc = ThreadContextFactory.make();
+        ManifoldCF.initializeEnvironment(tc);
       
         InitializeAndRegister register = new InitializeAndRegister();
         register.doExecute(tc);
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/ListJobStatuses.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/ListJobStatuses.java
index 6786863..2dda179 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/ListJobStatuses.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/ListJobStatuses.java
@@ -47,8 +47,8 @@
 
     try
     {
-      ManifoldCF.initializeEnvironment();
       IThreadContext tc = ThreadContextFactory.make();
+      ManifoldCF.initializeEnvironment(tc);
       IJobManager jobManager = JobManagerFactory.make(tc);
       JobStatus[] jobStatuses = jobManager.getAllStatus();
 
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/ListJobs.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/ListJobs.java
index 223fda5..089a930 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/ListJobs.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/ListJobs.java
@@ -47,8 +47,8 @@
 
     try
     {
-      ManifoldCF.initializeEnvironment();
       IThreadContext tc = ThreadContextFactory.make();
+      ManifoldCF.initializeEnvironment(tc);
       IJobManager jobManager = JobManagerFactory.make(tc);
       IJobDescription[] jobs = jobManager.getAllJobs();
 
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/PauseJob.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/PauseJob.java
index bf62afc..2a2d0bc 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/PauseJob.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/PauseJob.java
@@ -28,38 +28,38 @@
 */
 public class PauseJob
 {
-        public static final String _rcsid = "@(#)$Id: PauseJob.java 988245 2010-08-23 18:39:35Z kwright $";
+  public static final String _rcsid = "@(#)$Id: PauseJob.java 988245 2010-08-23 18:39:35Z kwright $";
 
-        private PauseJob()
-        {
-        }
+  private PauseJob()
+  {
+  }
 
-        // Add: throttle, priority, recrawl interval
+  // Add: throttle, priority, recrawl interval
 
-        public static void main(String[] args)
-        {
-                if (args.length != 1)
-                {
-                        System.err.println("Usage: PauseJob <jobid>");
-                        System.exit(1);
-                }
+  public static void main(String[] args)
+  {
+    if (args.length != 1)
+    {
+      System.err.println("Usage: PauseJob <jobid>");
+      System.exit(1);
+    }
 
-                String jobID = args[0];
+    String jobID = args[0];
 
 
-                try
-                {
-                        ManifoldCF.initializeEnvironment();
-                        IThreadContext tc = ThreadContextFactory.make();
-                        IJobManager jobManager = JobManagerFactory.make(tc);
-                        jobManager.pauseJob(new Long(jobID));
-                        System.out.println("Job paused");
-                }
-                catch (Exception e)
-                {
-                        e.printStackTrace();
-                        System.exit(2);
-                }
-        }
-                
+    try
+    {
+      IThreadContext tc = ThreadContextFactory.make();
+      ManifoldCF.initializeEnvironment(tc);
+      IJobManager jobManager = JobManagerFactory.make(tc);
+      jobManager.pauseJob(new Long(jobID));
+      System.out.println("Job paused");
+    }
+    catch (Exception e)
+    {
+      e.printStackTrace();
+      System.exit(2);
+    }
+  }
+    
 }
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/RestartJob.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/RestartJob.java
index 309c956..d2d550c 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/RestartJob.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/RestartJob.java
@@ -28,38 +28,38 @@
 */
 public class RestartJob
 {
-        public static final String _rcsid = "@(#)$Id: RestartJob.java 988245 2010-08-23 18:39:35Z kwright $";
+  public static final String _rcsid = "@(#)$Id: RestartJob.java 988245 2010-08-23 18:39:35Z kwright $";
 
-        private RestartJob()
-        {
-        }
+  private RestartJob()
+  {
+  }
 
-        // Add: throttle, priority, recrawl interval
+  // Add: throttle, priority, recrawl interval
 
-        public static void main(String[] args)
-        {
-                if (args.length != 1)
-                {
-                        System.err.println("Usage: RestartJob <jobid>");
-                        System.exit(1);
-                }
+  public static void main(String[] args)
+  {
+    if (args.length != 1)
+    {
+      System.err.println("Usage: RestartJob <jobid>");
+      System.exit(1);
+    }
 
-                String jobID = args[0];
+    String jobID = args[0];
 
 
-                try
-                {
-                        ManifoldCF.initializeEnvironment();
-                        IThreadContext tc = ThreadContextFactory.make();
-                        IJobManager jobManager = JobManagerFactory.make(tc);
-                        jobManager.restartJob(new Long(jobID));
-                        System.err.println("Job resuming");
-                }
-                catch (Exception e)
-                {
-                        e.printStackTrace();
-                        System.exit(2);
-                }
-        }
-                
+    try
+    {
+      IThreadContext tc = ThreadContextFactory.make();
+      ManifoldCF.initializeEnvironment(tc);
+      IJobManager jobManager = JobManagerFactory.make(tc);
+      jobManager.restartJob(new Long(jobID));
+      System.err.println("Job resuming");
+    }
+    catch (Exception e)
+    {
+      e.printStackTrace();
+      System.exit(2);
+    }
+  }
+    
 }
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/RunDocumentStatus.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/RunDocumentStatus.java
index 06e83dd..b77d0f5 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/RunDocumentStatus.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/RunDocumentStatus.java
@@ -75,8 +75,8 @@
 
     try
     {
-      ManifoldCF.initializeEnvironment();
       IThreadContext tc = ThreadContextFactory.make();
+      ManifoldCF.initializeEnvironment(tc);
       IJobManager jobManager = JobManagerFactory.make(tc);
 
       StatusFilterCriteria filter = parseFilterCriteria(jobList,currentTime,matchRegexp,matchStateList,matchStatusList);
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/RunMaxActivityHistory.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/RunMaxActivityHistory.java
index e2ad2eb..78889a2 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/RunMaxActivityHistory.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/RunMaxActivityHistory.java
@@ -72,8 +72,8 @@
 
     try
     {
-      ManifoldCF.initializeEnvironment();
       IThreadContext tc = ThreadContextFactory.make();
+      ManifoldCF.initializeEnvironment(tc);
       IRepositoryConnectionManager connManager = RepositoryConnectionManagerFactory.make(tc);
 
       FilterCriteria filter = parseFilterCriteria(activityList,startTime,endTime,entityRegexp,resultCodeRegexp);
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/RunMaxBandwidthHistory.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/RunMaxBandwidthHistory.java
index a226d34..a3afac0 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/RunMaxBandwidthHistory.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/RunMaxBandwidthHistory.java
@@ -72,8 +72,8 @@
 
     try
     {
-      ManifoldCF.initializeEnvironment();
       IThreadContext tc = ThreadContextFactory.make();
+      ManifoldCF.initializeEnvironment(tc);
       IRepositoryConnectionManager connManager = RepositoryConnectionManagerFactory.make(tc);
 
       FilterCriteria filter = parseFilterCriteria(activityList,startTime,endTime,entityRegexp,resultCodeRegexp);
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/RunQueueStatus.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/RunQueueStatus.java
index 5aa8e3a..0140164 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/RunQueueStatus.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/RunQueueStatus.java
@@ -77,8 +77,8 @@
 
     try
     {
-      ManifoldCF.initializeEnvironment();
       IThreadContext tc = ThreadContextFactory.make();
+      ManifoldCF.initializeEnvironment(tc);
       IJobManager jobManager = JobManagerFactory.make(tc);
 
       StatusFilterCriteria filter = parseFilterCriteria(jobList,currentTime,matchRegexp,matchStateList,matchStatusList);
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/RunResultHistory.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/RunResultHistory.java
index 3463688..e043c8d 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/RunResultHistory.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/RunResultHistory.java
@@ -73,8 +73,8 @@
 
     try
     {
-      ManifoldCF.initializeEnvironment();
       IThreadContext tc = ThreadContextFactory.make();
+      ManifoldCF.initializeEnvironment(tc);
       IRepositoryConnectionManager connManager = RepositoryConnectionManagerFactory.make(tc);
 
       FilterCriteria filter = parseFilterCriteria(activityList,startTime,endTime,entityRegexp,resultCodeRegexp);
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/RunSimpleHistory.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/RunSimpleHistory.java
index fee1028..5497962 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/RunSimpleHistory.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/RunSimpleHistory.java
@@ -70,8 +70,8 @@
 
     try
     {
-      ManifoldCF.initializeEnvironment();
       IThreadContext tc = ThreadContextFactory.make();
+      ManifoldCF.initializeEnvironment(tc);
       IRepositoryConnectionManager connManager = RepositoryConnectionManagerFactory.make(tc);
 
       FilterCriteria filter = parseFilterCriteria(activityList,startTime,endTime,entityRegexp,resultCodeRegexp);
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/StartJob.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/StartJob.java
index f17b99f..6b51192 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/StartJob.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/StartJob.java
@@ -49,8 +49,8 @@
 
                 try
                 {
-                        ManifoldCF.initializeEnvironment();
                         IThreadContext tc = ThreadContextFactory.make();
+                        ManifoldCF.initializeEnvironment(tc);
                         IJobManager jobManager = JobManagerFactory.make(tc);
                         jobManager.manualStart(new Long(jobID));
                         System.out.println("Job starting");
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/TransactionalCrawlerInitializationCommand.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/TransactionalCrawlerInitializationCommand.java
index ef83933..44fd63e 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/TransactionalCrawlerInitializationCommand.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/TransactionalCrawlerInitializationCommand.java
@@ -30,8 +30,8 @@
 {
   public void execute() throws ManifoldCFException
   {
-    ManifoldCF.initializeEnvironment();
     IThreadContext tc = ThreadContextFactory.make();
+    ManifoldCF.initializeEnvironment(tc);
     IDBInterface database = DBInterfaceFactory.make(tc,
       org.apache.manifoldcf.agents.system.ManifoldCF.getMasterDatabaseName(),
       org.apache.manifoldcf.agents.system.ManifoldCF.getMasterDatabaseUsername(),
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/WaitForJobDeleted.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/WaitForJobDeleted.java
index 301f976..7efa596 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/WaitForJobDeleted.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/WaitForJobDeleted.java
@@ -28,44 +28,44 @@
 */
 public class WaitForJobDeleted
 {
-        public static final String _rcsid = "@(#)$Id: WaitForJobDeleted.java 988245 2010-08-23 18:39:35Z kwright $";
+  public static final String _rcsid = "@(#)$Id: WaitForJobDeleted.java 988245 2010-08-23 18:39:35Z kwright $";
 
-        private WaitForJobDeleted()
-        {
-        }
+  private WaitForJobDeleted()
+  {
+  }
 
-        // Add: throttle, priority, recrawl interval
+  // Add: throttle, priority, recrawl interval
 
-        public static void main(String[] args)
-        {
-                if (args.length != 1)
-                {
-                        System.err.println("Usage: WaitForJobDeleted <jobid>");
-                        System.exit(1);
-                }
+  public static void main(String[] args)
+  {
+    if (args.length != 1)
+    {
+      System.err.println("Usage: WaitForJobDeleted <jobid>");
+      System.exit(1);
+    }
 
-                String jobID = args[0];
+    String jobID = args[0];
 
 
-                try
-                {
-                        ManifoldCF.initializeEnvironment();
-                        IThreadContext tc = ThreadContextFactory.make();
-                        IJobManager jobManager = JobManagerFactory.make(tc);
-                        while (true)
-                        {
-                                JobStatus status = jobManager.getStatus(new Long(jobID));
-                                if (status == null)
-                                        break;
-                                ManifoldCF.sleep(10000);
-                        }
+    try
+    {
+      IThreadContext tc = ThreadContextFactory.make();
+      ManifoldCF.initializeEnvironment(tc);
+      IJobManager jobManager = JobManagerFactory.make(tc);
+      while (true)
+      {
+        JobStatus status = jobManager.getStatus(new Long(jobID));
+        if (status == null)
+          break;
+        ManifoldCF.sleep(10000);
+      }
 
-                }
-                catch (Exception e)
-                {
-                        e.printStackTrace();
-                        System.exit(2);
-                }
-        }
-                
+    }
+    catch (Exception e)
+    {
+      e.printStackTrace();
+      System.exit(2);
+    }
+  }
+
 }
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/WaitForJobInactive.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/WaitForJobInactive.java
index 65e9f5e..9b4a99f 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/WaitForJobInactive.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/WaitForJobInactive.java
@@ -28,61 +28,61 @@
 */
 public class WaitForJobInactive
 {
-        public static final String _rcsid = "@(#)$Id: WaitForJobInactive.java 988245 2010-08-23 18:39:35Z kwright $";
+  public static final String _rcsid = "@(#)$Id: WaitForJobInactive.java 988245 2010-08-23 18:39:35Z kwright $";
 
-        private WaitForJobInactive()
+  private WaitForJobInactive()
+  {
+  }
+
+  // Add: throttle, priority, recrawl interval
+
+  public static void main(String[] args)
+  {
+    if (args.length != 1)
+    {
+      System.err.println("Usage: WaitForJobInactive <jobid>");
+      System.exit(1);
+    }
+
+    String jobID = args[0];
+
+
+    try
+    {
+      IThreadContext tc = ThreadContextFactory.make();
+      ManifoldCF.initializeEnvironment(tc);
+      IJobManager jobManager = JobManagerFactory.make(tc);
+
+      while (true)
+      {
+        JobStatus status = jobManager.getStatus(new Long(jobID));
+        if (status == null)
+          throw new ManifoldCFException("No such job: '"+jobID+"'");
+        int statusValue = status.getStatus();
+        switch (statusValue)
         {
+        case JobStatus.JOBSTATUS_NOTYETRUN:
+          System.out.println("Never run");
+          break;
+        case JobStatus.JOBSTATUS_COMPLETED:
+          System.out.println("OK");
+          break;
+        case JobStatus.JOBSTATUS_ERROR:
+          System.out.println("Error: "+status.getErrorText());
+          break;
+        default:
+          ManifoldCF.sleep(10000);
+          continue;
         }
+        break;
+      }
 
-        // Add: throttle, priority, recrawl interval
-
-        public static void main(String[] args)
-        {
-                if (args.length != 1)
-                {
-                        System.err.println("Usage: WaitForJobInactive <jobid>");
-                        System.exit(1);
-                }
-
-                String jobID = args[0];
-
-
-                try
-                {
-                        ManifoldCF.initializeEnvironment();
-                        IThreadContext tc = ThreadContextFactory.make();
-                        IJobManager jobManager = JobManagerFactory.make(tc);
-
-                        while (true)
-                        {
-                                JobStatus status = jobManager.getStatus(new Long(jobID));
-                                if (status == null)
-                                        throw new ManifoldCFException("No such job: '"+jobID+"'");
-                                int statusValue = status.getStatus();
-                                switch (statusValue)
-                                {
-                                case JobStatus.JOBSTATUS_NOTYETRUN:
-                                        System.out.println("Never run");
-                                        break;
-                                case JobStatus.JOBSTATUS_COMPLETED:
-                                        System.out.println("OK");
-                                        break;
-                                case JobStatus.JOBSTATUS_ERROR:
-                                        System.out.println("Error: "+status.getErrorText());
-                                        break;
-                                default:
-                                        ManifoldCF.sleep(10000);
-                                        continue;
-                                }
-                                break;
-                        }
-
-                }
-                catch (Exception e)
-                {
-                        e.printStackTrace();
-                        System.exit(2);
-                }
-        }
-                
+    }
+    catch (Exception e)
+    {
+      e.printStackTrace();
+      System.exit(2);
+    }
+  }
+    
 }
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/WaitJobPaused.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/WaitJobPaused.java
index 04a1cf7..3f16438 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/WaitJobPaused.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/WaitJobPaused.java
@@ -28,46 +28,46 @@
 */
 public class WaitJobPaused
 {
-        public static final String _rcsid = "@(#)$Id: WaitJobPaused.java 988245 2010-08-23 18:39:35Z kwright $";
+  public static final String _rcsid = "@(#)$Id: WaitJobPaused.java 988245 2010-08-23 18:39:35Z kwright $";
 
-        private WaitJobPaused()
+  private WaitJobPaused()
+  {
+  }
+
+  // Add: throttle, priority, recrawl interval
+
+  public static void main(String[] args)
+  {
+    if (args.length != 1)
+    {
+      System.err.println("Usage: WaitJobPaused <jobid>");
+      System.exit(1);
+    }
+
+    String jobID = args[0];
+
+
+    try
+    {
+      IThreadContext tc = ThreadContextFactory.make();
+      ManifoldCF.initializeEnvironment(tc);
+      IJobManager jobManager = JobManagerFactory.make(tc);
+      while (true)
+      {
+        if (jobManager.checkJobBusy(new Long(jobID)))
         {
+          ManifoldCF.sleep(5000);
+          continue;
         }
-
-        // Add: throttle, priority, recrawl interval
-
-        public static void main(String[] args)
-        {
-                if (args.length != 1)
-                {
-                        System.err.println("Usage: WaitJobPaused <jobid>");
-                        System.exit(1);
-                }
-
-                String jobID = args[0];
-
-
-                try
-                {
-                        ManifoldCF.initializeEnvironment();
-                        IThreadContext tc = ThreadContextFactory.make();
-                        IJobManager jobManager = JobManagerFactory.make(tc);
-                        while (true)
-                        {
-                                if (jobManager.checkJobBusy(new Long(jobID)))
-                                {
-                                        ManifoldCF.sleep(5000);
-                                        continue;
-                                }
-                                break;
-                        }
-                        System.err.println("Job no longer busy");
-                }
-                catch (Exception e)
-                {
-                        e.printStackTrace();
-                        System.exit(2);
-                }
-        }
-                
+        break;
+      }
+      System.err.println("Job no longer busy");
+    }
+    catch (Exception e)
+    {
+      e.printStackTrace();
+      System.exit(2);
+    }
+  }
+    
 }
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/bins/BinManager.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/bins/BinManager.java
new file mode 100644
index 0000000..377ff2d
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/bins/BinManager.java
@@ -0,0 +1,225 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.crawler.bins;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.crawler.interfaces.*;
+import org.apache.manifoldcf.crawler.interfaces.CacheKeyFactory;
+import org.apache.manifoldcf.crawler.system.ManifoldCF;
+import org.apache.manifoldcf.crawler.system.Logging;
+import java.util.*;
+
+/** This class manages the docbins table.
+* A row in this table represents a document bin.  The count that is kept is the
+* number of documents in this particular bin that have been assigned a document priority.
+* 
+* <br><br>
+* <b>docbins</b>
+* <table border="1" cellpadding="3" cellspacing="0">
+* <tr class="TableHeadingColor">
+* <th>Field</th><th>Type</th><th>Description&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</th></tr>
+* <tr><td>binname</td><td>VARCHAR(255)</td><td>Primary Key</td></tr>
+* <tr><td>bincounter</td><td>FLOAT</td><td></td></tr>
+* </table>
+* <br><br>
+* 
+*/
+public class BinManager extends org.apache.manifoldcf.core.database.BaseTable implements IBinManager
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  // Field names
+  public final static String binNameField = "binname";
+  public final static String binCounterField = "bincounter";
+  
+  /** Constructor.
+  *@param database is the database handle.
+  */
+  public BinManager(IDBInterface database)
+    throws ManifoldCFException
+  {
+    super(database,"docbins");
+  }
+
+  /** Install or upgrade this table.
+  */
+  @Override
+  public void install()
+    throws ManifoldCFException
+  {
+    // Standard practice: outer loop for installs
+    while (true)
+    {
+      Map existing = getTableSchema(null,null);
+      if (existing == null)
+      {
+        HashMap map = new HashMap();
+        // HSQLDB does not like null primary keys!!
+        map.put(binNameField,new ColumnDescription("VARCHAR(255)",false,true,null,null,false));
+        map.put(binCounterField,new ColumnDescription("FLOAT",false,false,null,null,false));
+        performCreate(map,null);
+      }
+      else
+      {
+        // Upgrade goes here if needed
+      }
+
+      // Index management goes here
+      IndexDescription binIndex = new IndexDescription(true,new String[]{binNameField});
+
+      // Get rid of indexes that shouldn't be there
+      Map indexes = getTableIndexes(null,null);
+      Iterator iter = indexes.keySet().iterator();
+      while (iter.hasNext())
+      {
+        String indexName = (String)iter.next();
+        IndexDescription id = (IndexDescription)indexes.get(indexName);
+
+        if (binIndex != null && id.equals(binIndex))
+          binIndex = null;
+        else if (indexName.indexOf("_pkey") == -1)
+          // This index shouldn't be here; drop it
+          performRemoveIndex(indexName);
+      }
+
+      // Add the ones we didn't find
+      if (binIndex != null)
+        performAddIndex(null,binIndex);
+
+      break;
+    }
+  }
+
+  /** Uninstall.
+  */
+  @Override
+  public void deinstall()
+    throws ManifoldCFException
+  {
+    performDrop(null);
+  }
+
+  /** Reset all bins */
+  @Override
+  public void reset()
+    throws ManifoldCFException
+  {
+    performDelete("", null, null);
+  }
+
+  /** Get N bin values (and set next one).  If the record does not yet exist, create it with a starting value.
+  * We expect this to happen within a transaction!
+  *@param binName is the name of the bin (256 char max)
+  *@param newBinValue is the value to use if there is no such bin yet.  This is the first value that will be
+  * returned; what will be stored will be that value + count.
+  *@param count is the number of values desired.
+  *@return the counter values.
+  */
+  @Override
+  public double[] getIncrementBinValues(String binName, double newBinValue, int count)
+    throws ManifoldCFException
+  {
+    double[] returnValues = new double[count];
+    // SELECT FOR UPDATE/MODIFY is the most common path
+    ArrayList params = new ArrayList();
+    String query = buildConjunctionClause(params,new ClauseDescription[]{
+      new UnitaryClause(binNameField,binName)});
+    IResultSet result = performQuery("SELECT "+binCounterField+" FROM "+getTableName()+" WHERE "+query+" FOR UPDATE",params,null,null);
+    if (result.getRowCount() > 0)
+    {
+      IResultRow row = result.getRow(0);
+      Double value = (Double)row.getValue(binCounterField);
+      double rval = value.doubleValue();
+      if (rval < newBinValue)
+        rval = newBinValue;
+      // rval is the starting value; compute the entire array based on it.
+      for (int i = 0; i < count; i++)
+      {
+        returnValues[i] = rval;
+        rval += 1.0;
+      }
+      HashMap map = new HashMap();
+      map.put(binCounterField,new Double(rval));
+      performUpdate(map," WHERE "+query,params,null);
+    }
+    else
+    {
+      for (int i = 0; i < count; i++)
+      {
+        returnValues[i] = newBinValue;
+        newBinValue += 1.0;
+      }
+      HashMap map = new HashMap();
+      map.put(binNameField,binName);
+      map.put(binCounterField,new Double(newBinValue));
+      performInsert(map,null);
+    }
+    return returnValues;
+  }
+
+  /** Get N bin values (and set next one).  If the record does not yet exist, create it with a starting value.
+  * This method invokes its own retry-able transaction.
+  *@param binName is the name of the bin (256 char max)
+  *@param newBinValue is the value to use if there is no such bin yet.  This is the first value that will be
+  * returned; what will be stored will be that value + count.
+  *@param count is the number of values desired.
+  *@return the counter values.
+  */
+  @Override
+  public double[] getIncrementBinValuesInTransaction(String binName, double newBinValue, int count)
+    throws ManifoldCFException
+  {
+    while (true)
+    {
+      long sleepAmt = 0L;
+      beginTransaction();
+      try
+      {
+        return getIncrementBinValues(binName, newBinValue, count);
+      }
+      catch (Error e)
+      {
+        signalRollback();
+        throw e;
+      }
+      catch (RuntimeException e)
+      {
+        signalRollback();
+        throw e;
+      }
+      catch (ManifoldCFException e)
+      {
+        signalRollback();
+        if (e.getErrorCode() == ManifoldCFException.DATABASE_TRANSACTION_ABORT)
+        {
+          if (Logging.perf.isDebugEnabled())
+            Logging.perf.debug("Aborted transaction obtaining docpriorities: "+e.getMessage());
+          sleepAmt = getSleepAmt();
+          continue;
+        }
+        throw e;
+      }
+      finally
+      {
+        endTransaction();
+        sleepFor(sleepAmt);
+      }
+    }
+  }
+
+}
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/connectors/BaseRepositoryConnector.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/connectors/BaseRepositoryConnector.java
index fcbd086..9eef7d9 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/connectors/BaseRepositoryConnector.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/connectors/BaseRepositoryConnector.java
@@ -64,6 +64,7 @@
   * This must return a model value as specified above.
   *@return the model type value.
   */
+  @Override
   public int getConnectorModel()
   {
     // Return the simplest model - full everything
@@ -73,6 +74,7 @@
   /** Return the list of activities that this connector supports (i.e. writes into the log).
   *@return the list.
   */
+  @Override
   public String[] getActivitiesList()
   {
     return new String[0];
@@ -81,6 +83,7 @@
   /** Return the list of relationship types that this connector recognizes.
   *@return the list.
   */
+  @Override
   public String[] getRelationshipTypes()
   {
     // The base situation is that there are no relationships.
@@ -97,6 +100,7 @@
   *@return the set of bin names.  If an empty array is returned, it is equivalent to there being no request
   * rate throttling available for this identifier.
   */
+  @Override
   public String[] getBinNames(String documentIdentifier)
   {
     // Base version has one bin for all documents.  Use empty string for this since "*" would make
@@ -111,6 +115,7 @@
   *@param command is the command, which is taken directly from the API request.
   *@return true if the resource is found, false if not.  In either case, output may be filled in.
   */
+  @Override
   public boolean requestInfo(Configuration output, String command)
     throws ManifoldCFException
   {
@@ -143,6 +148,7 @@
   *@param endTime is the end of the time range to consider, exclusive.
   *@param jobMode is an integer describing how the job is being run, whether continuous or once-only.
   */
+  @Override
   public void addSeedDocuments(ISeedingActivity activities, DocumentSpecification spec,
     long startTime, long endTime, int jobMode)
     throws ManifoldCFException, ServiceInterruption
@@ -285,6 +291,7 @@
   * Empty version strings indicate that there is no versioning ability for the corresponding document, and the document
   * will always be processed.
   */
+  @Override
   public String[] getDocumentVersions(String[] documentIdentifiers, String[] oldVersions, IVersionActivity activities,
     DocumentSpecification spec, int jobMode, boolean usesDefaultAuthority)
     throws ManifoldCFException, ServiceInterruption
@@ -387,6 +394,7 @@
   *@param documentIdentifiers is the set of document identifiers.
   *@param versions is the corresponding set of version identifiers (individual identifiers may be null).
   */
+  @Override
   public void releaseDocumentVersions(String[] documentIdentifiers, String[] versions)
     throws ManifoldCFException
   {
@@ -396,6 +404,7 @@
   /** Get the maximum number of documents to amalgamate together into one batch, for this connector.
   *@return the maximum number. 0 indicates "unlimited".
   */
+  @Override
   public int getMaxDocumentRequest()
   {
     // Base implementation does one at a time.
@@ -416,6 +425,7 @@
   * should only find other references, and should not actually call the ingestion methods.
   *@param jobMode is an integer describing how the job is being run, whether continuous or once-only.
   */
+  @Override
   public void processDocuments(String[] documentIdentifiers, String[] versions, IProcessActivity activities,
     DocumentSpecification spec, boolean[] scanOnly, int jobMode)
     throws ManifoldCFException, ServiceInterruption
@@ -461,6 +471,7 @@
   *@param ds is the current document specification for this job.
   *@param tabsArray is an array of tab names.  Add to this array any tab names that are specific to the connector.
   */
+  @Override
   public void outputSpecificationHeader(IHTTPOutput out, Locale locale, DocumentSpecification ds, List<String> tabsArray)
     throws ManifoldCFException, IOException
   {
@@ -502,6 +513,7 @@
   *@param ds is the current document specification for this job.
   *@param tabName is the current tab name.
   */
+  @Override
   public void outputSpecificationBody(IHTTPOutput out, Locale locale, DocumentSpecification ds, String tabName)
     throws ManifoldCFException, IOException
   {
@@ -532,6 +544,7 @@
   *@return null if all is well, or a string error message if there is an error that should prevent saving of
   * the job (and cause a redirection to an error page).
   */
+  @Override
   public String processSpecificationPost(IPostParameters variableContext, Locale locale, DocumentSpecification ds)
     throws ManifoldCFException
   {
@@ -561,6 +574,7 @@
   *@param locale is the locale the output is preferred to be in.
   *@param ds is the current document specification for this job.
   */
+  @Override
   public void viewSpecification(IHTTPOutput out, Locale locale, DocumentSpecification ds)
     throws ManifoldCFException, IOException
   {
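
The @Override annotations above make the base-class contract explicit for subclasses.  For illustration, a minimal sketch of a connector built on BaseRepositoryConnector; the class name and document identifier are hypothetical, and only the two methods most connectors override are shown:

public class ExampleConnector extends BaseRepositoryConnector
{
  @Override
  public void addSeedDocuments(ISeedingActivity activities, DocumentSpecification spec,
    long startTime, long endTime, int jobMode)
    throws ManifoldCFException, ServiceInterruption
  {
    // Seed a single (hypothetical) document identifier.
    activities.addSeedDocument("doc-1");
  }

  @Override
  public void processDocuments(String[] documentIdentifiers, String[] versions, IProcessActivity activities,
    DocumentSpecification spec, boolean[] scanOnly, int jobMode)
    throws ManifoldCFException, ServiceInterruption
  {
    // Fetch and ingest each identifier here; scanOnly[i] == true means only discover references.
  }
}
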
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/interfaces/BinManagerFactory.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/interfaces/BinManagerFactory.java
new file mode 100644
index 0000000..4c47d75
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/interfaces/BinManagerFactory.java
@@ -0,0 +1,58 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.crawler.interfaces;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.crawler.system.*;
+
+/** Factory class for IBinManager.
+*/
+public class BinManagerFactory
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  // Name
+  protected final static String binManagerName = "_BinManager_";
+
+  private BinManagerFactory()
+  {
+  }
+
+  /** Create a bin manager handle.
+  *@param threadContext is the thread context.
+  *@return the handle.
+  */
+  public static IBinManager make(IThreadContext threadContext)
+    throws ManifoldCFException
+  {
+    Object o = threadContext.get(binManagerName);
+    if (o == null || !(o instanceof IBinManager))
+    {
+      IDBInterface database = DBInterfaceFactory.make(threadContext,
+        ManifoldCF.getMasterDatabaseName(),
+        ManifoldCF.getMasterDatabaseUsername(),
+        ManifoldCF.getMasterDatabasePassword());
+
+      o = new org.apache.manifoldcf.crawler.bins.BinManager(database);
+      threadContext.save(binManagerName,o);
+    }
+    return (IBinManager)o;
+  }
+
+}
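
Taken together with the BinManager table above, a caller can obtain blocks of monotonically increasing counter values per bin.  A minimal usage sketch, assuming an available thread context and an illustrative bin name:

IBinManager binManager = BinManagerFactory.make(threadContext);
// Allocate three counter values for bin "example.com", starting at 7.0 if the bin is new.
double[] values = binManager.getIncrementBinValuesInTransaction("example.com", 7.0, 3);
// For a new bin this yields {7.0, 8.0, 9.0} and the stored counter becomes 10.0;
// for an existing bin the starting point is the larger of 7.0 and the stored value.
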
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/interfaces/IBinManager.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/interfaces/IBinManager.java
new file mode 100644
index 0000000..482c6d8
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/interfaces/IBinManager.java
@@ -0,0 +1,66 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.crawler.interfaces;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import java.util.*;
+
+/** This interface represents a class that tracks data in document bins.
+*/
+public interface IBinManager
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  /** Install or upgrade this table.
+  */
+  public void install()
+    throws ManifoldCFException;
+
+  /** Uninstall.
+  */
+  public void deinstall()
+    throws ManifoldCFException;
+
+  /** Reset all bins */
+  public void reset()
+    throws ManifoldCFException;
+
+  /** Get N bin values (and set next one).  If the record does not yet exist, create it with a starting value.
+  * We expect this to happen within a transaction!
+  *@param binName is the name of the bin (256 char max)
+  *@param newBinValue is the value to use if there is no such bin yet.  This is the first value that will be
+  * returned; what will be stored will be that value + count.
+  *@param count is the number of values desired.
+  *@return the counter values.
+  */
+  public double[] getIncrementBinValues(String binName, double newBinValue, int count)
+    throws ManifoldCFException;
+
+  /** Get N bin values (and set next one).  If the record does not yet exist, create it with a starting value.
+  * This method invokes its own retry-able transaction.
+  *@param binName is the name of the bin (256 char max)
+  *@param newBinValue is the value to use if there is no such bin yet.  This is the first value that will be
+  * returned; what will be stored will be that value + count.
+  *@param count is the number of values desired.
+  *@return the counter values.
+  */
+  public double[] getIncrementBinValuesInTransaction(String binName, double newBinValue, int count)
+    throws ManifoldCFException;
+
+}
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/interfaces/IJobManager.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/interfaces/IJobManager.java
index ef595a8..4184e73 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/interfaces/IJobManager.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/interfaces/IJobManager.java
@@ -136,48 +136,68 @@
   // The job queue is maintained underneath this interface, and all threads that perform
   // job activities need to go through this layer.
 
-  /** Reset the job queue immediately before starting up.
-  * If the system was shut down in the middle of a job, sufficient information should
-  * be around in the database to allow it to restart.  However, BEFORE all the job threads
-  * are spun up, there needs to be a pass over the queue to bring things back to a "normal"
-  * state.
+  /** Reset the job queue for an individual process ID.
+  * If a node was shut down in the middle of doing something, sufficient information should
+  * be around in the database to allow the node's activities to be cleaned up.
+  *@param processID is the process ID of the node we want to clean up after.
   */
-  public void prepareForStart()
+  public void cleanupProcessData(String processID)
+    throws ManifoldCFException;
+
+  /** Reset the job queue for all process IDs.
+  * If a node was shut down in the middle of doing something, sufficient information should
+  * be around in the database to allow the node's activities to be cleaned up.
+  */
+  public void cleanupProcessData()
+    throws ManifoldCFException;
+
+  /** Prepare to start the entire cluster.
+  * If there are no other nodes alive, then at the time the first node comes up, we need to
+  * reset the job queue for ALL processes that had been running before.  This method must
+  * be called in addition to cleanupProcessData().
+  */
+  public void prepareForClusterStart()
     throws ManifoldCFException;
 
   /** Reset as part of restoring document worker threads.
+  *@param processID is the current process ID.
   */
-  public void resetDocumentWorkerStatus()
+  public void resetDocumentWorkerStatus(String processID)
     throws ManifoldCFException;
 
   /** Reset as part of restoring seeding threads.
   */
-  public void resetSeedingWorkerStatus()
+  public void resetSeedingWorkerStatus(String processID)
     throws ManifoldCFException;
 
   /** Reset as part of restoring doc delete threads.
+  *@param processID is the current process ID.
   */
-  public void resetDocDeleteWorkerStatus()
+  public void resetDocDeleteWorkerStatus(String processID)
     throws ManifoldCFException;
 
   /** Reset as part of restoring doc cleanup threads.
+  *@param processID is the current process ID.
   */
-  public void resetDocCleanupWorkerStatus()
+  public void resetDocCleanupWorkerStatus(String processID)
     throws ManifoldCFException;
 
   /** Reset as part of restoring delete startup threads.
+  *@param processID is the current process ID.
   */
-  public void resetDeleteStartupWorkerStatus()
+  public void resetDeleteStartupWorkerStatus(String processID)
     throws ManifoldCFException;
 
   /** Reset as part of restoring notification threads.
+  *@param processID is the current process ID.
   */
-  public void resetNotificationWorkerStatus()
+  public void resetNotificationWorkerStatus(String processID)
     throws ManifoldCFException;
 
   /** Reset as part of restoring startup threads.
+  *@param processID is the current process ID.
   */
-  public void resetStartupWorkerStatus()
+  public void resetStartupWorkerStatus(String processID)
     throws ManifoldCFException;
 
   // These methods support the "set doc priority" thread
@@ -208,7 +228,7 @@
   *@param descriptions are the document descriptions.
   *@param priorities are the desired priorities.
   */
-  public void writeDocumentPriorities(long currentTime, DocumentDescription[] descriptions, double[] priorities)
+  public void writeDocumentPriorities(long currentTime, DocumentDescription[] descriptions, IPriorityCalculator[] priorities)
     throws ManifoldCFException;
 
   // This method supports the "expiration" thread
@@ -218,11 +238,12 @@
   * The same marking is used as is used for documents that have been queued for worker threads.  The model
   * is thus identical.
   *
+  *@param processID is the current process ID.
   *@param n is the maximum number of records desired.
   *@param currentTime is the current time.
   *@return the array of document descriptions to expire.
   */
-  public DocumentSetAndFlags getExpiredDocuments(int n, long currentTime)
+  public DocumentSetAndFlags getExpiredDocuments(String processID, int n, long currentTime)
     throws ManifoldCFException;
 
   // This method supports the "queue stuffer" thread
@@ -232,6 +253,7 @@
   * pertaining to the document's handling (e.g. whether it should be refetched if the version
   * has not changed).
   * This method also marks the documents whose descriptions have been returned as "being processed".
+  *@param processID is the current process ID.
   *@param n is the number of documents desired.
   *@param currentTime is the current time; some fetches do not occur until a specific time.
   *@param interval is the number of milliseconds that this set of documents should represent (for throttling).
@@ -243,7 +265,8 @@
   * to being overcommitted.
   *@return the array of document descriptions to fetch and process.
   */
-  public DocumentDescription[] getNextDocuments(int n, long currentTime, long interval,
+  public DocumentDescription[] getNextDocuments(String processID,
+    int n, long currentTime, long interval,
     BlockingDocuments blockingDocuments, PerformanceStatistics statistics,
     DepthStatistics scanRecord)
     throws ManifoldCFException;
@@ -387,9 +410,8 @@
   * extent that if one is *already* being processed, it will need to be done over again.
   *@param documentDescriptions is the set of description objects for the documents that have had their parent carrydown information changed.
   *@param docPriorities are the document priorities to assign to the documents, if needed.
-  *@return a flag for each document priority, true if it was used, false otherwise.
   */
-  public boolean[] carrydownChangeDocumentMultiple(DocumentDescription[] documentDescriptions, long currentTime, double[] docPriorities)
+  public void carrydownChangeDocumentMultiple(DocumentDescription[] documentDescriptions, long currentTime, IPriorityCalculator[] docPriorities)
     throws ManifoldCFException;
 
   /** Requeue a document because of carrydown changes.
@@ -397,9 +419,8 @@
   * extent that if it is *already* being processed, it will need to be done over again.
   *@param documentDescription is the description object for the document that has had its parent carrydown information changed.
   *@param docPriority is the document priority to assign to the document, if needed.
-  *@return a flag for the document priority, true if it was used, false otherwise.
   */
-  public boolean carrydownChangeDocument(DocumentDescription documentDescription, long currentTime, double docPriority)
+  public void carrydownChangeDocument(DocumentDescription documentDescription, long currentTime, IPriorityCalculator docPriority)
     throws ManifoldCFException;
 
   /** Requeue a document for further processing in the future.
@@ -495,6 +516,7 @@
   * This method is called during job startup, when the queue is being loaded.
   * A set of document references is passed to this method, which updates the status of the document
   * in the specified job's queue, according to specific state rules.
+  *@param processID is the current process ID.
   *@param jobID is the job identifier.
   *@param legalLinkTypes is the set of legal link types that this connector generates.
   *@param docIDHashes are the hashes of the local document identifiers (primary key).
@@ -504,11 +526,11 @@
   *@param currentTime is the current time in milliseconds since epoch.
   *@param documentPriorities are the document priorities corresponding to the document identifiers.
   *@param prereqEventNames are the events that must be completed before each document can be processed.
-  *@return true if the priority value(s) were used, false otherwise.
   */
-  public boolean[] addDocumentsInitial(Long jobID, String[] legalLinkTypes,
+  public void addDocumentsInitial(String processID,
+    Long jobID, String[] legalLinkTypes,
     String[] docIDHashes, String[] docIDs, boolean overrideSchedule,
-    int hopcountMethod, long currentTime, double[] documentPriorities,
+    int hopcountMethod, long currentTime, IPriorityCalculator[] documentPriorities,
     String[][] prereqEventNames)
     throws ManifoldCFException;
 
@@ -516,12 +538,14 @@
   * This method is called during job startup, when the queue is being loaded, to list documents that
   * were NOT included by calling addDocumentsInitial().  Documents listed here are simply designed to
   * enable the framework to get rid of old, invalid seeds.  They are not queued for processing.
+  *@param processID is the current process ID.
   *@param jobID is the job identifier.
   *@param legalLinkTypes is the set of legal link types that this connector generates.
   *@param docIDHashes are the hash values of the local document identifiers.
   *@param hopcountMethod is either accurate, nodelete, or neverdelete.
   */
-  public void addRemainingDocumentsInitial(Long jobID, String[] legalLinkTypes,
+  public void addRemainingDocumentsInitial(String processID,
+    Long jobID, String[] legalLinkTypes,
     String[] docIDHashes,
     int hopcountMethod)
     throws ManifoldCFException;
@@ -540,10 +564,11 @@
     throws ManifoldCFException;
 
   /** Begin an event sequence.
+  *@param processID is the current process ID.
   *@param eventName is the name of the event.
   *@return true if the event could be created, or false if it's already there.
   */
-  public boolean beginEventSequence(String eventName)
+  public boolean beginEventSequence(String processID, String eventName)
     throws ManifoldCFException;
 
   /** Complete an event sequence.
@@ -578,10 +603,12 @@
   * This method is called during document processing, when a document reference is discovered.
   * The document reference is passed to this method, which updates the status of the document
   * in the specified job's queue, according to specific state rules.
+  *@param processID is the current process ID.
   *@param jobID is the job identifier.
   *@param legalLinkTypes is the set of legal link types that this connector generates.
   *@param docIDHash is the local document identifier hash value.
   *@param parentIdentifierHash is the optional parent identifier hash value for this document.  Pass null if none.
+  *       MUST be present in the case of carrydown information.
   *@param relationshipType is the optional link type between this document and its parent.  Pass null if there
   *       is no relationship with a parent.
   *@param hopcountMethod is either accurate, nodelete, or neverdelete.
@@ -591,25 +618,27 @@
   *@param currentTime is the time in milliseconds since epoch that will be recorded for this operation.
   *@param priority is the desired document priority for the document.
   *@param prereqEventNames are the events that must be completed before the document can be processed.
-  *@return true if the priority value was used, false otherwise.
   */
-  public boolean addDocument(Long jobID, String[] legalLinkTypes,
+  public void addDocument(String processID,
+    Long jobID, String[] legalLinkTypes,
     String docIDHash, String docID,
     String parentIdentifierHash,
     String relationshipType,
     int hopcountMethod, String[] dataNames, Object[][] dataValues,
-    long currentTime, double priority, String[] prereqEventNames)
+    long currentTime, IPriorityCalculator priority, String[] prereqEventNames)
     throws ManifoldCFException;
 
   /** Add documents to the queue in bulk.
   * This method is called during document processing, when a set of document references are discovered.
   * The document references are passed to this method, which updates the status of the document(s)
   * in the specified job's queue, according to specific state rules.
+  *@param processID is the current process ID.
   *@param jobID is the job identifier.
   *@param legalLinkTypes is the set of legal link types that this connector generates.
   *@param docIDHashes are the hashes of the local document identifiers.
   *@param docIDs are the local document identifiers.
   *@param parentIdentifierHash is the optional parent identifier hash of these documents.  Pass null if none.
+  *       MUST be present in the case of carrydown information.
   *@param relationshipType is the optional link type between this document and its parent.  Pass null if there
   *       is no relationship with a parent.
   *@param hopcountMethod is either accurate, nodelete, or neverdelete.
@@ -619,14 +648,14 @@
   *@param currentTime is the time in milliseconds since epoch that will be recorded for this operation.
   *@param priorities are the desired document priorities for the documents.
   *@param prereqEventNames are the events that must be completed before each document can be processed.
-  *@return an array of boolean values indicating whether or not the passed-in priority value was used or not for each doc id (true if used).
   */
-  public boolean[] addDocuments(Long jobID, String[] legalLinkTypes,
+  public void addDocuments(String processID,
+    Long jobID, String[] legalLinkTypes,
     String[] docIDHashes, String[] docIDs,
     String parentIdentifierHash,
     String relationshipType,
     int hopcountMethod, String[][] dataNames, Object[][][] dataValues,
-    long currentTime, double[] priorities,
+    long currentTime, IPriorityCalculator[] priorities,
     String[][] prereqEventNames)
     throws ManifoldCFException;
 
@@ -745,11 +774,12 @@
     throws ManifoldCFException;
 
   /** Get the list of jobs that are ready for seeding.
+  *@param processID is the current process ID.
   *@param currentTime is the current time in milliseconds since epoch.
   *@return jobs that are active and are running in adaptive mode.  These will be seeded
   * based on what the connector says should be added to the queue.
   */
-  public JobSeedingRecord[] getJobsReadyForSeeding(long currentTime)
+  public JobSeedingRecord[] getJobsReadyForSeeding(String processID, long currentTime)
     throws ManifoldCFException;
 
   /** Reset a seeding job back to "active" state.
@@ -759,21 +789,24 @@
     throws ManifoldCFException;
 
   /** Get the list of jobs that are ready for deletion.
+  *@param processID is the current process ID.
   *@return jobs that were in the "readyfordelete" state.
   */
-  public JobDeleteRecord[] getJobsReadyForDelete()
+  public JobDeleteRecord[] getJobsReadyForDelete(String processID)
     throws ManifoldCFException;
     
   /** Get the list of jobs that are ready for startup.
+  *@param processID is the current process ID.
   *@return jobs that were in the "readyforstartup" state.  These will be marked as being in the "starting up" state.
   */
-  public JobStartRecord[] getJobsReadyForStartup()
+  public JobStartRecord[] getJobsReadyForStartup(String processID)
     throws ManifoldCFException;
 
   /** Find the list of jobs that need to have their connectors notified of job completion.
+  *@param processID is the current process ID.
   *@return the ID's of jobs that need their output connectors notified in order to become inactive.
   */
-  public JobNotifyRecord[] getJobsReadyForInactivity()
+  public JobNotifyRecord[] getJobsReadyForInactivity(String processID)
     throws ManifoldCFException;
 
   /** Inactivate a job, from the notification state.
@@ -900,20 +933,24 @@
 
   /** Get list of deletable document descriptions.  This list will take into account
   * multiple jobs that may own the same document.
+  *@param processID is the current process ID.
   *@param n is the maximum number of documents to return.
   *@param currentTime is the current time; some fetches do not occur until a specific time.
   *@return the document descriptions for these documents.
   */
-  public DocumentDescription[] getNextDeletableDocuments(int n, long currentTime)
+  public DocumentDescription[] getNextDeletableDocuments(String processID,
+    int n, long currentTime)
     throws ManifoldCFException;
 
   /** Get list of cleanable document descriptions.  This list will take into account
   * multiple jobs that may own the same document.
+  *@param processID is the current process ID.
   *@param n is the maximum number of documents to return.
   *@param currentTime is the current time; some fetches do not occur until a specific time.
   *@return the document descriptions for these documents.
   */
-  public DocumentSetAndFlags getNextCleanableDocuments(int n, long currentTime)
+  public DocumentSetAndFlags getNextCleanableDocuments(String processID,
+    int n, long currentTime)
     throws ManifoldCFException;
 
   /** Delete ingested document identifiers (as part of deleting the owning job).
@@ -971,6 +1008,7 @@
   // Status reports
 
   /** Get the status of a job.
+  *@param jobID is the job ID.
   *@return the status object for the specified job.
   */
   public JobStatus getStatus(Long jobID)
@@ -995,6 +1033,7 @@
     throws ManifoldCFException;
 
   /** Get the status of a job.
+  *@param jobID is the job ID.
   *@param includeCounts is true if document counts should be included.
   *@return the status object for the specified job.
   */
@@ -1022,6 +1061,39 @@
   public JobStatus[] getFinishedJobs(boolean includeCounts)
     throws ManifoldCFException;
 
+  /** Get the status of a job.
+  *@param jobID is the job ID.
+  *@param includeCounts is true if document counts should be included.
+  *@param maxCount is the maximum number of documents we want to count for each status.
+  *@return the status object for the specified job.
+  */
+  public JobStatus getStatus(Long jobID, boolean includeCounts, int maxCount)
+    throws ManifoldCFException;
+
+  /** Get a list of all jobs, and their status information.
+  *@param includeCounts is true if document counts should be included.
+  *@param maxCount is the maximum number of documents we want to count for each status.
+  *@return an ordered array of job status objects.
+  */
+  public JobStatus[] getAllStatus(boolean includeCounts, int maxCount)
+    throws ManifoldCFException;
+
+  /** Get a list of running jobs.  This is for status reporting.
+  *@param includeCounts is true if document counts should be included.
+  *@param maxCount is the maximum number of documents we want to count for each status.
+  *@return an array of the job status objects.
+  */
+  public JobStatus[] getRunningJobs(boolean includeCounts, int maxCount)
+    throws ManifoldCFException;
+
+  /** Get a list of completed jobs, and their statistics.
+  *@param includeCounts is true if document counts should be included.
+  *@param maxCount is the maximum number of documents we want to count for each status.
+  *@return an array of the job status objects.
+  */
+  public JobStatus[] getFinishedJobs(boolean includeCounts, int maxCount)
+    throws ManifoldCFException;
+
   // The following commands generate reports based on the queue.
 
   /** Run a 'document status' report.
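
Most of the changes above thread a processID through the queue-management API so that each cluster node can clean up and reset only its own state.  A hedged sketch of the startup sequence a node might follow under the new signatures; the getProcessID() accessor is assumed, and the choice and ordering of reset calls are illustrative:

IJobManager jobManager = JobManagerFactory.make(threadContext);
String processID = ManifoldCF.getProcessID();   // assumed accessor for this node's ID
// Clean up whatever this node left behind last time it ran...
jobManager.cleanupProcessData(processID);
// ...then reset per-thread bookkeeping before worker threads are spun up.
jobManager.resetDocumentWorkerStatus(processID);
jobManager.resetSeedingWorkerStatus(processID);
jobManager.resetDocDeleteWorkerStatus(processID);
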
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/interfaces/IPriorityCalculator.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/interfaces/IPriorityCalculator.java
new file mode 100644
index 0000000..d483fe9
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/interfaces/IPriorityCalculator.java
@@ -0,0 +1,39 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.crawler.interfaces;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+
+/** This interface represents an object that calculates a document priority
+* value, for inclusion in the jobqueue table.  One of these objects is passed in
+* lieu of a document priority for every document being added to the table.
+*/
+public interface IPriorityCalculator
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  /** Compute the document priority.  This MUST be called from within a
+  * a retry-able database transaction!!
+  *@return the document priority.
+  */
+  public double getDocumentPriority()
+    throws ManifoldCFException;
+  
+}
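
Since priorities are now computed lazily inside the jobqueue write transaction, callers hand the job manager an IPriorityCalculator rather than a raw double.  A minimal sketch of an implementation that returns a fixed value; purely illustrative, not the framework's own calculator:

public class FixedPriorityCalculator implements IPriorityCalculator
{
  protected final double priority;

  public FixedPriorityCalculator(double priority)
  {
    this.priority = priority;
  }

  /** Called from within a retry-able database transaction. */
  @Override
  public double getDocumentPriority()
    throws ManifoldCFException
  {
    return priority;
  }
}
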
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/interfaces/IProcessActivity.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/interfaces/IProcessActivity.java
index 0a4bd73..e394263 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/interfaces/IProcessActivity.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/interfaces/IProcessActivity.java
@@ -33,6 +33,7 @@
   * fetched the document).
   *@param parentIdentifier is the document identifier that is considered to be the "parent"
   * of this identifier.  May be null, if no hopcount filtering desired for this kind of relationship.
+  * MUST be present in the case of carrydown information.
   *@param relationshipType is the string describing the kind of relationship described by this
   * reference.  This must be one of the strings returned by the IRepositoryConnector method
   * "getRelationshipTypes()".  May be null.
@@ -51,6 +52,7 @@
   * fetched the document).
   *@param parentIdentifier is the document identifier that is considered to be the "parent"
   * of this identifier.  May be null, if no hopcount filtering desired for this kind of relationship.
+  * MUST be present in the case of carrydown information.
   *@param relationshipType is the string describing the kind of relationship described by this
   * reference.  This must be one of the strings returned by the IRepositoryConnector method
   * "getRelationshipTypes()".  May be null.
@@ -69,6 +71,7 @@
   * fetched the document).
   *@param parentIdentifier is the document identifier that is considered to be the "parent"
   * of this identifier.  May be null, if no hopcount filtering desired for this kind of relationship.
+  * MUST be present in the case of carrydown information.
   *@param relationshipType is the string describing the kind of relationship described by this
   * reference.  This must be one of the strings returned by the IRepositoryConnector method
   * "getRelationshipTypes()".  May be null.
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/interfaces/IRepositoryConnectionManager.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/interfaces/IRepositoryConnectionManager.java
index 2b4f7f8..3d1b156 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/interfaces/IRepositoryConnectionManager.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/interfaces/IRepositoryConnectionManager.java
@@ -85,11 +85,11 @@
   public void delete(String name)
     throws ManifoldCFException;
 
-  /** Return true if the specified authority name is referenced.
-  *@param authorityName is the authority name.
+  /** Return true if the specified authority group name is referenced.
+  *@param authorityGroup is the authority group name.
   *@return true if referenced, false otherwise.
   */
-  public boolean isReferenced(String authorityName)
+  public boolean isGroupReferenced(String authorityGroup)
     throws ManifoldCFException;
 
   /** Get a list of repository connections that share the same connector.
@@ -121,6 +121,18 @@
 
   // Reporting and analysis related
 
+  /** Delete history rows related to a specific connection, upon user request.
+  *@param connectionName is the connection whose history records should be removed.
+  */
+  public void cleanUpHistoryData(String connectionName)
+    throws ManifoldCFException;
+  
+  /** Delete history rows older than a specified timestamp.
+  *@param timeCutoff is the timestamp to delete older rows before.
+  */
+  public void cleanUpHistoryData(long timeCutoff)
+    throws ManifoldCFException;
+
   // Activities the Connector Framework records
 
   /** Start a job */
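
The two cleanUpHistoryData() overloads support both per-connection and age-based history pruning.  A brief usage sketch; the connection name and retention window are hypothetical:

// Remove all history rows for one connection, e.g. after its target index was rebuilt.
connectionManager.cleanUpHistoryData("MyConnection");
// Or remove history rows older than 30 days, cluster-wide.
long cutoff = System.currentTimeMillis() - 30L * 24L * 60L * 60L * 1000L;
connectionManager.cleanUpHistoryData(cutoff);
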
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/interfaces/IRepositoryConnector.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/interfaces/IRepositoryConnector.java
index 26741a4..516fbfd 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/interfaces/IRepositoryConnector.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/interfaces/IRepositoryConnector.java
@@ -104,6 +104,9 @@
   public static final int JOBMODE_ONCEONLY = IJobDescription.TYPE_SPECIFIED;
   public static final int JOBMODE_CONTINUOUS = IJobDescription.TYPE_CONTINUOUS;
 
+  /** This is the global deny token.  This should be ingested with all documents. */
+  public static final String GLOBAL_DENY_TOKEN = "DEAD_AUTHORITY";
+
   /** Tell the world what model this connector uses for addSeedDocuments().
   * This must return a model value as specified above.  The connector does not have to be connected
   * for this method to be called.
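
GLOBAL_DENY_TOKEN gives connectors a deny ACL entry to attach to every ingested document, so that content is hidden rather than exposed if its authority becomes unreachable.  A hedged sketch, assuming the RepositoryDocument ACL setters of the 1.x agents API; the allow token is hypothetical:

RepositoryDocument rd = new RepositoryDocument();
rd.setACL(new String[]{"allow-token-for-this-document"});            // hypothetical allow token
rd.setDenyACL(new String[]{IRepositoryConnector.GLOBAL_DENY_TOKEN}); // matched when the authority is unreachable
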
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/interfaces/IRepositoryConnectorPool.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/interfaces/IRepositoryConnectorPool.java
new file mode 100644
index 0000000..e2d361f
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/interfaces/IRepositoryConnectorPool.java
@@ -0,0 +1,81 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.crawler.interfaces;
+
+import org.apache.manifoldcf.core.interfaces.*;
+
+import java.util.*;
+import java.io.*;
+
+/** An object implementing this interface functions as a pool of repository connectors.
+* Coordination and allocation among cluster members are managed within.
+* These objects are thread-local, so do not share them among threads.
+*/
+public interface IRepositoryConnectorPool
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  /** Get multiple repository connectors, all at once.  Do this in a particular order
+  * so that any connector exhaustion will not cause a deadlock.
+  *@param orderingKeys are the keys which determine in what order the connectors are obtained.
+  *@param repositoryConnections are the connections to use to build the connector instances.
+  */
+  public IRepositoryConnector[] grabMultiple(String[] orderingKeys, IRepositoryConnection[] repositoryConnections)
+    throws ManifoldCFException;
+
+  /** Get a repository connector.
+  * The connector is specified by a repository connection object.
+  *@param repositoryConnection is the repository connection to base the connector instance on.
+  */
+  public IRepositoryConnector grab(IRepositoryConnection repositoryConnection)
+    throws ManifoldCFException;
+
+  /** Release multiple repository connectors.
+  *@param connections are the connections describing the instances to release.
+  *@param connectors are the connector instances to release.
+  */
+  public void releaseMultiple(IRepositoryConnection[] connections, IRepositoryConnector[] connectors)
+    throws ManifoldCFException;
+
+  /** Release a repository connector.
+  *@param connection is the connection describing the instance to release.
+  *@param connector is the connector to release.
+  */
+  public void release(IRepositoryConnection connection, IRepositoryConnector connector)
+    throws ManifoldCFException;
+
+  /** Idle notification for inactive repository connector handles.
+  * This method polls all inactive handles.
+  */
+  public void pollAllConnectors()
+    throws ManifoldCFException;
+
+  /** Flush only those connector handles that are currently unused.
+  */
+  public void flushUnusedConnectors()
+    throws ManifoldCFException;
+
+  /** Clean up all open repository connector handles.
+  * This method is called when the connector pool needs to be flushed,
+  * to free resources.
+  */
+  public void closeAllConnectors()
+    throws ManifoldCFException;
+
+}
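
Pool handles are grabbed and must always be returned, even on error paths, so that cluster-wide accounting stays correct.  A minimal usage sketch; the pool and connection variables are assumed to come from the corresponding factory and connection manager:

IRepositoryConnector connector = connectorPool.grab(connection);
try
{
  // ... use the connector instance ...
}
finally
{
  // Return the instance so other threads (and other cluster members) can use it.
  connectorPool.release(connection, connector);
}
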
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/interfaces/IReprioritizationTracker.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/interfaces/IReprioritizationTracker.java
new file mode 100644
index 0000000..b821336
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/interfaces/IReprioritizationTracker.java
@@ -0,0 +1,106 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.crawler.interfaces;
+
+import org.apache.manifoldcf.core.interfaces.*;
+
+/** This interface represents functionality that tracks cluster-wide
+* reprioritization operations.
+* These operations are driven forward by whatever thread needs them,
+* and, if the originating process dies, are completed by the threads that
+* clean up after that process.
+*/
+public interface IReprioritizationTracker
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  /** Start a reprioritization activity.
+  *@param prioritizationTime is the timestamp of the prioritization.
+  *@param processID is the process ID of the process performing/waiting for the prioritization
+  * to complete.
+  *@param reproID is the reprocessing thread ID
+  */
+  public void startReprioritization(long prioritizationTime, String processID, String reproID)
+    throws ManifoldCFException;
+  
+  /** Retrieve the current reprioritization time stamp.  This should be obtained before
+  * performing any prioritization steps.
+  *@return the current prioritization timestamp, or null if no prioritization is in effect.
+  */
+  public Long checkReprioritizationInProgress()
+    throws ManifoldCFException;
+
+  /** Complete a reprioritization activity.  Prioritization will be marked as complete
+  * only if the reproID matches the one that started the current reprioritization.
+  *@param reproID is the reprocessing thread ID that started the reprioritization.
+  */
+  public void doneReprioritization(String reproID)
+    throws ManifoldCFException;
+  
+  /** Check if the specified processID is the one performing reprioritization.
+  *@param processID is the process ID to check.
+  *@return the repro ID if the specified process is the one performing reprioritization.
+  */
+  public String isSpecifiedProcessReprioritizing(String processID)
+    throws ManifoldCFException;
+  
+  /** Assess the current minimum depth.
+  * This method is called to provide information about the priorities of the documents currently being
+  * queued.  It is suboptimal to assign document priorities fundamentally higher than this value,
+  * because the new documents would then be queued preferentially, and the goal of distributing
+  * documents across bins would not be adequately met.
+  *@param binNamesSet is the current set of priorities we see on the queuing operation.
+  */
+  public void assessMinimumDepth(Double[] binNamesSet)
+    throws ManifoldCFException;
+
+  /** Retrieve current minimum depth.
+  *@return the current minimum depth to use.
+  */
+  public double getMinimumDepth()
+    throws ManifoldCFException;
+  
+  /** Note a preload request for a bin.
+  *@param binName is the bin name.
+  *@param weightedMinimumDepth is the minimum depth to use.
+  */
+  public void addPreloadRequest(String binName, double weightedMinimumDepth);
+  
+  /** Preload bin values.  Call this OUTSIDE of a transaction.
+  */
+  public void preloadBinValues()
+    throws ManifoldCFException;
+  
+  /** Clear any preload requests.
+  */
+  public void clearPreloadRequests();
+  
+  /** Clear remaining preloaded values.
+  */
+  public void clearPreloadedValues();
+
+  /** Get a bin value.
+  *@param binName is the bin name.
+  *@param weightedMinimumDepth is the minimum depth to use.
+  *@return the bin value.
+  */
+  public double getIncrementBinValue(String binName, double weightedMinimumDepth)
+    throws ManifoldCFException;
+  
+
+}
+
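
The tracker coordinates a single cluster-wide reprioritization at a time.  A hedged sketch of the start/complete handshake; the tracker variable, process ID, and repro ID are illustrative:

String processID = "A";
String reproID = processID + ":" + System.currentTimeMillis();  // illustrative unique ID
tracker.startReprioritization(System.currentTimeMillis(), processID, reproID);
try
{
  // ... recompute document priorities here ...
}
finally
{
  // Mark the activity complete so other nodes no longer wait on it.
  tracker.doneReprioritization(reproID);
}
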
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/interfaces/JobStatus.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/interfaces/JobStatus.java
index ab142d8..3a70c21 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/interfaces/JobStatus.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/interfaces/JobStatus.java
@@ -44,15 +44,18 @@
 
 
   // Member variables.
-  protected String jobID;
-  protected String description;
-  protected int status;
-  protected long documentsInQueue;
-  protected long documentsOutstanding;
-  protected long documentsProcessed;
-  protected long startTime;       // -1 if job never started
-  protected long endTime;         // -1 if job has not ended yet
-  protected String errorText;     // null if no error on previous action
+  protected final String jobID;
+  protected final String description;
+  protected final int status;
+  protected final long documentsInQueue;
+  protected final long documentsOutstanding;
+  protected final long documentsProcessed;
+  protected final boolean queueCountExact;
+  protected final boolean outstandingCountExact;
+  protected final boolean processedCountExact;
+  protected final long startTime;       // -1 if job never started
+  protected final long endTime;         // -1 if job has not ended yet
+  protected final String errorText;     // null if no error on previous action
 
   /** Constructor.
   *@param jobID is the job identifier.
@@ -70,6 +73,9 @@
     long documentsInQueue,
     long documentsOutstanding,
     long documentsProcessed,
+    boolean queueCountExact,
+    boolean outstandingCountExact,
+    boolean processedCountExact,
     long startTime,
     long endTime,
     String errorText)
@@ -80,6 +86,9 @@
     this.documentsInQueue = documentsInQueue;
     this.documentsOutstanding = documentsOutstanding;
     this.documentsProcessed = documentsProcessed;
+    this.queueCountExact = queueCountExact;
+    this.outstandingCountExact = outstandingCountExact;
+    this.processedCountExact = processedCountExact;
     this.startTime = startTime;
     this.endTime = endTime;
     this.errorText = errorText;
@@ -117,6 +126,14 @@
     return documentsInQueue;
   }
 
+  /** Get whether the queue count is accurate, or an estimate.
+  *@return true if accurate.
+  */
+  public boolean getQueueCountExact()
+  {
+    return queueCountExact;
+  }
+  
   /** Get the number of documents outstanding.
   *@return the documents that are waiting for processing.
   */
@@ -125,6 +142,14 @@
     return documentsOutstanding;
   }
 
+  /** Get whether the outstanding count is accurate, or an estimate.
+  *@return true if accurate.
+  */
+  public boolean getOutstandingCountExact()
+  {
+    return outstandingCountExact;
+  }
+
   /** Get the number of documents that have been processed at least once.
   *@return the document count.
   */
@@ -133,6 +158,14 @@
     return documentsProcessed;
   }
 
+  /** Get whether the processed count is accurate, or an estimate.
+  *@return true if accurate.
+  */
+  public boolean getProcessedCountExact()
+  {
+    return processedCountExact;
+  }
+
   /** Get the start time.
   *@return the start time, or -1
   */
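The new exactness flags ride along with the existing counts.  A hypothetical construction and read-back,
with invented job data purely for illustration, might look like:

    int status = 2;                              // illustrative status code only
    long startTime = System.currentTimeMillis();
    JobStatus js = new JobStatus("1234", "My job", status,
      10000L, 2500L, 7500L,                      // in queue, outstanding, processed
      false, false, true,                        // queue/outstanding counts are estimates; processed count is exact
      startTime, -1L, null);                     // -1 endTime: job has not ended; null: no error text
    boolean queueCountIsEstimate = !js.getQueueCountExact();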
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/interfaces/QueueTracker.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/interfaces/QueueTracker.java
index 6a108bc..f1213b1 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/interfaces/QueueTracker.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/interfaces/QueueTracker.java
@@ -58,64 +58,20 @@
   protected final static double binReductionFactor = 1.0;
 
   /** These are the accumulated performance averages for all connections etc. */
-  protected PerformanceStatistics performanceStatistics = new PerformanceStatistics();
-
-  /** These are the bin counts for a prioritization pass.
-  * This hash table is keyed by bin, and contains DoubleBinCount objects as values */
-  protected HashMap binCounts = new HashMap();
+  protected final PerformanceStatistics performanceStatistics = new PerformanceStatistics();
 
   /** These are the bin counts for tracking the documents that are on
   * the active queue, but are not being processed yet */
-  protected HashMap queuedBinCounts = new HashMap();
+  protected final Map<String,BinCount> queuedBinCounts = new HashMap<String,BinCount>();
 
   /** These are the bin counts for active threads */
-  protected HashMap activeBinCounts = new HashMap();
-
-  /** The "minimum depth" - which is the smallest bin count of the last document queued.  This helps guarantee that documents that are
-  * newly discovered don't wind up with high priority, but instead wind up about the same as the currently active document priority. */
-  protected double currentMinimumDepth = 0.0;
-
-  /** This flag, when set, indicates that a reset is in progress, so queuetracker bincount updates are ignored. */
-  protected boolean resetInProgress = false;
-
-  /** This hash table is keyed by PriorityKey objects, and contains ArrayList objects containing Doubles, in sorted order. */
-  protected HashMap availablePriorities = new HashMap();
-
-  /** This hash table is keyed by a String (which is the bin name), and contains a HashMap of PriorityKey objects containing that String as a bin */
-  protected HashMap binDependencies = new HashMap();
-
+  protected final Map<String,BinCount> activeBinCounts = new HashMap<String,BinCount>();
 
   /** Constructor */
   public QueueTracker()
   {
   }
 
-  /** Reset the queue tracker.
-  * This occurs ONLY when we are about to reprioritize all active documents.  It does not affect the portion of the queue tracker that
-  * tracks the active queue.
-  */
-  public void beginReset()
-  {
-    synchronized (binCounts)
-    {
-      binCounts.clear();
-      currentMinimumDepth = 0.0;
-      availablePriorities.clear();
-      binDependencies.clear();
-      resetInProgress = true;
-    }
-
-  }
-
-  /** Finish the reset operation */
-  public void endReset()
-  {
-    synchronized (binCounts)
-    {
-      resetInProgress = false;
-    }
-  }
-
   /** Add an access record to the queue tracker.  This happens when a document
   * is added to the in-memory queue, and allows us to keep track of that particular event so
   * we can schedule in a way that meets our distribution goals.
@@ -140,7 +96,7 @@
       String binName = binNames[i++];
       synchronized (queuedBinCounts)
       {
-        BinCount value = (BinCount)queuedBinCounts.get(binName);
+        BinCount value = queuedBinCounts.get(binName);
         if (value == null)
         {
           value = new BinCount();
@@ -152,60 +108,6 @@
 
   }
 
-  /** Note that a priority which was previously allocated was not used, and needs to be released.
-  */
-  public void notePriorityNotUsed(String[] binNames, IRepositoryConnection connection, double priority)
-  {
-    // If this is called, it means that a calculated document priority was given out but was not used.  As such, this
-    // priority can now be assigned to the next comparable document that has similar characteristics.
-
-    // Since prioritization calculations are not reversible, these unused values are kept in a queue, and are used preferentially.
-    PriorityKey pk = new PriorityKey(binNames);
-    synchronized (binCounts)
-    {
-      ArrayList value = (ArrayList)availablePriorities.get(pk);
-      if (value == null)
-      {
-        value = new ArrayList();
-        availablePriorities.put(pk,value);
-      }
-      // Use bisection lookup to file the current priority so that highest priority is at the end (0.0), and lowest is at the beginning
-      int begin = 0;
-      int end = value.size();
-      while (true)
-      {
-        if (end == begin)
-        {
-          value.add(end,new Double(priority));
-          break;
-        }
-        int middle = (begin + end) >> 1;
-        Double middleValue = (Double)value.get(middle);
-        if (middleValue.doubleValue() < priority)
-        {
-          end = middle;
-        }
-        else
-        {
-          begin = middle + 1;
-        }
-      }
-      // Make sure the key is asserted into the binDependencies map for each bin
-      int i = 0;
-      while (i < binNames.length)
-      {
-        String binName = binNames[i++];
-        HashMap hm = (HashMap)binDependencies.get(binName);
-        if (hm == null)
-        {
-          hm = new HashMap();
-          binDependencies.put(binName,hm);
-        }
-        hm.put(pk,pk);
-      }
-    }
-  }
-
   /** Note the time required to successfully complete a set of documents.  This allows this module to keep track of
   * the performance characteristics of each individual connection, so distribution across connections can be balanced
   * properly.
@@ -251,7 +153,7 @@
       // Increment queued bin count for this bin.
       synchronized (queuedBinCounts)
       {
-        BinCount value = (BinCount)queuedBinCounts.get(binName);
+        BinCount value = queuedBinCounts.get(binName);
         if (value != null)
         {
           if (value.decrement())
@@ -262,7 +164,7 @@
       // Decrement active bin count for this bin.
       synchronized (activeBinCounts)
       {
-        BinCount value = (BinCount)activeBinCounts.get(binName);
+        BinCount value = activeBinCounts.get(binName);
         if (value == null)
         {
           value = new BinCount();
@@ -273,55 +175,6 @@
     }
   }
 
-  /** Assess the current minimum depth.
-  * This method is called to provide to the QueueTracker information about the priorities of the documents being currently
-  * queued.  It is the case that it is unoptimal to assign document priorities that are fundamentally higher than this value,
-  * because then the new documents will be preferentially queued, and the goal of distributing documents across bins will not be
-  * adequately met.
-  *@param binNamesSet is the current set of priorities we see on the queuing operation.
-  */
-  public void assessMinimumDepth(Double[] binNamesSet)
-  {
-    synchronized (binCounts)
-    {
-      // Ignore all numbers until reset is complete
-      if (!resetInProgress)
-      {
-        //Logging.scheduling.debug("In assessMinimumDepth");
-        int j = 0;
-        double newMinPriority = Double.MAX_VALUE;
-        while (j < binNamesSet.length)
-        {
-          Double binValue = binNamesSet[j++];
-          if (binValue.doubleValue() < newMinPriority)
-            newMinPriority = binValue.doubleValue();
-        }
-
-        if (newMinPriority != Double.MAX_VALUE)
-        {
-          // Convert minPriority to minDepth.
-          // Note that this calculation does not take into account anything having to do with connection rates, throttling,
-          // or other adjustment factors.  It allows us only to obtain the "raw" minimum depth: the depth without any
-          // adjustments.
-          double newMinDepth = Math.exp(newMinPriority)-1.0;
-
-          if (newMinDepth > currentMinimumDepth)
-          {
-            currentMinimumDepth = newMinDepth;
-            if (Logging.scheduling.isDebugEnabled())
-              Logging.scheduling.debug("Setting new minimum depth value to "+new Double(currentMinimumDepth).toString());
-          }
-          else
-          {
-            if (newMinDepth < currentMinimumDepth && Logging.scheduling.isDebugEnabled())
-              Logging.scheduling.debug("Minimum depth value seems to have been set too high too early! currentMin = "+new Double(currentMinimumDepth).toString()+"; queue value = "+new Double(newMinDepth).toString());
-          }
-        }
-      }
-    }
-
-  }
-
 
   /** Note that we have completed processing of a document with a given set of bins.
   * This method gets called when a Worker Thread has finished with a document.
@@ -347,7 +200,7 @@
       String binName = binNames[i++];
       synchronized (activeBinCounts)
       {
-        BinCount value = (BinCount)activeBinCounts.get(binName);
+        BinCount value = activeBinCounts.get(binName);
         if (value != null)
         {
           if (value.decrement())
@@ -380,7 +233,7 @@
       int count = 0;
       synchronized (activeBinCounts)
       {
-        BinCount value = (BinCount)activeBinCounts.get(binName);
+        BinCount value = activeBinCounts.get(binName);
         if (value != null)
           count = value.getValue();
       }
@@ -405,346 +258,6 @@
     return rval;
   }
 
-  /** This is a made-up constant, originally based on 100 documents/second, but adjusted downward as a result of experimentation and testing, which is described as "T" below.
-  */
-  private final static double minMsPerFetch = 50.0;
-
-  /** Calculate a document priority value.  Priorities are reversed, and in log space, so that
-  * zero (0.0) is considered the highest possible priority, and larger priority values are considered lower in actual priority.
-  *@param binNames are the global bins to which the document belongs.
-  *@param connection is the connection, from which the throttles may be obtained.  More highly throttled connections are given
-  *          less favorable priority.
-  *@return the priority value, based on recent history.  Also updates statistics atomically.
-  */
-  public double calculatePriority(String[] binNames, IRepositoryConnection connection)
-  {
-    synchronized (binCounts)
-    {
-
-      // NOTE: We must be sure to adjust the return value by the factor calculated due to performance; a slower throttle rate
-      // should yield a lower priority.  In theory it should be possible to calculate an adjusted priority pretty exactly,
-      // on the basis that the fetch rates of two distinct bins should grant priorities such that:
-      //
-      //  (n documents) / (the rate of fetch (docs/millisecond) of the first bin) = milliseconds for the first bin
-      //
-      //  should equal:
-      //
-      //  (m documents) / (the rate of fetch of the second bin) = milliseconds for the second bin
-      //
-      // ... and then assigning priorities so that after a given number of document priorities are assigned from the first bin, the
-      // corresponding (*m/n) number of document priorities would get assigned for the second bin.
-      //
-      // Suppose the maximum fetch rate for the document is F fetches per millisecond.  If the document priority assigned for the Bth
-      // bin member is -log(1/(1+B)) for a document fetched with no throttling whatsoever,
-      // then we want the priority to be -log(1/(1+k)) for a throttled bin, where k is chosen so that:
-      // k = B * ((T + 1/F)/T) = B * (1 + 1/TF)
-      // ... where T is the time taken to fetch a single document that has no throttling at all.
-      // For the purposes of this exercise, a value of 100 doc/sec, or T=10ms.
-      //
-      // Basically, for F = 0, k should be infinity, and for F = infinity, k should be B.
-
-
-      // First, calculate the document's max fetch rate, in fetches per millisecond.  This will be used to adjust the priority, and
-      // also when resetting the bin counts.
-      double[] maxFetchRates = calculateMaxFetchRates(binNames,connection);
-
-      // For each bin, we will be calculating the bin count scale factor, which is what we multiply the bincount by to adjust for the
-      // throttling on that bin.
-      double[] binCountScaleFactors = new double[binNames.length];
-
-
-      // Before calculating priority, reset any bins to a higher value, if it seems like it is appropriate.  This is how we avoid assigning priorities
-      // higher than the current level at which queuing is currently taking place.
-
-      // First thing to do is to reset the bin values based on the current minimum.  If we *do* wind up resetting, we also need to ditch any availablePriorities that match.
-      int i = 0;
-      while (i < binNames.length)
-      {
-        String binName = binNames[i];
-        // Remember, maxFetchRate is in fetches per ms.
-        double maxFetchRate = maxFetchRates[i];
-
-        // Calculate (and save for later) the scale factor for this bin.
-        double binCountScaleFactor;
-        if (maxFetchRate == 0.0)
-          binCountScaleFactor = Double.POSITIVE_INFINITY;
-        else
-          binCountScaleFactor = 1.0 + 1.0 / (minMsPerFetch * maxFetchRate);
-        binCountScaleFactors[i] = binCountScaleFactor;
-
-        double thisCount = 0.0;
-        DoubleBinCount bc = (DoubleBinCount)binCounts.get(binName);
-        if (bc != null)
-        {
-          thisCount = bc.getValue();
-        }
-        // Adjust the count, if needed, so that we are not assigning priorities greater than the current level we are
-        // grabbing documents at
-        if (thisCount * binCountScaleFactor < currentMinimumDepth)
-        {
-          double weightedMinimumDepth = currentMinimumDepth / binCountScaleFactor;
-
-          if (Logging.scheduling.isDebugEnabled())
-            Logging.scheduling.debug("Resetting value of bin '"+binName+"' to "+new Double(weightedMinimumDepth).toString()+"(scale factor is "+new Double(binCountScaleFactor)+")");
-
-          // Clear available priorities that depend on this bin
-          HashMap hm = (HashMap)binDependencies.get(binName);
-          if (hm != null)
-          {
-            Iterator iter = hm.keySet().iterator();
-            while (iter.hasNext())
-            {
-              PriorityKey pk = (PriorityKey)iter.next();
-              availablePriorities.remove(pk);
-            }
-            binDependencies.remove(binName);
-          }
-
-          // Set a new bin value
-          if (bc == null)
-          {
-            bc = new DoubleBinCount();
-            binCounts.put(binName,bc);
-          }
-          bc.setValue(weightedMinimumDepth);
-        }
-
-        i++;
-      }
-
-      double returnValue;
-
-      PriorityKey pk2 = new PriorityKey(binNames);
-      ArrayList queuedvalue = (ArrayList)availablePriorities.get(pk2);
-      if (queuedvalue != null && queuedvalue.size() > 0)
-      {
-        // There's a saved value on the queue, which was calculated but not assigned earlier.  We use these values preferentially.
-        returnValue = ((Double)queuedvalue.remove(queuedvalue.size()-1)).doubleValue();
-        if (queuedvalue.size() == 0)
-        {
-          i = 0;
-          while (i < binNames.length)
-          {
-            String binName = binNames[i++];
-            HashMap hm = (HashMap)binDependencies.get(binName);
-            if (hm != null)
-            {
-              hm.remove(pk2);
-              if (hm.size() == 0)
-                binDependencies.remove(binName);
-            }
-          }
-          availablePriorities.remove(pk2);
-        }
-      }
-      else
-      {
-        // There was no previously-calculated value available, so we need to calculate a new value.
-
-        // Find the bin with the largest effective count, and use that for the document's priority.
-        // (This of course assumes that the slowest throttle is the one that wins.)
-        double highestAdjustedCount = 0.0;
-        i = 0;
-        while (i < binNames.length)
-        {
-          String binName = binNames[i];
-          double binCountScaleFactor = binCountScaleFactors[i];
-
-          double thisCount = 0.0;
-          DoubleBinCount bc = (DoubleBinCount)binCounts.get(binName);
-          if (bc != null)
-            thisCount = bc.getValue();
-
-          double adjustedCount;
-          // Use the scale factor already calculated above to yield a priority that is adjusted for the fetch rate.
-          if (binCountScaleFactor == Double.POSITIVE_INFINITY)
-            adjustedCount = Double.POSITIVE_INFINITY;
-          else
-            adjustedCount = thisCount * binCountScaleFactor;
-          if (adjustedCount > highestAdjustedCount)
-            highestAdjustedCount = adjustedCount;
-          i++;
-        }
-
-        // Calculate the proper log value
-        if (highestAdjustedCount == Double.POSITIVE_INFINITY)
-          returnValue = Double.POSITIVE_INFINITY;
-        else
-          returnValue = Math.log(1.0 + highestAdjustedCount);
-
-        // Update bins to indicate we used another priority.  If more than one bin is associated with the document,
-        // counts for all bins are nevertheless updated, because we don't wish to arrange scheduling collisions with hypothetical
-        // documents that share any of these bins.
-        int j = 0;
-        while (j < binNames.length)
-        {
-          String binName = binNames[j];
-          DoubleBinCount bc = (DoubleBinCount)binCounts.get(binName);
-          if (bc == null)
-          {
-            bc = new DoubleBinCount();
-            binCounts.put(binName,bc);
-          }
-          bc.increment();
-
-          j++;
-        }
-
-      }
-
-      if (Logging.scheduling.isDebugEnabled())
-      {
-        StringBuilder sb = new StringBuilder();
-        int k = 0;
-        while (k < binNames.length)
-        {
-          sb.append(binNames[k++]).append(" ");
-        }
-        Logging.scheduling.debug("Document with bins ["+sb.toString()+"] given priority value "+new Double(returnValue).toString());
-      }
-
-
-      return returnValue;
-    }
-  }
-
-  /** Calculate the maximum fetch rate for a given set of bins for a given connection.
-  * This is used to adjust the final priority of a document.
-  */
-  protected double[] calculateMaxFetchRates(String[] binNames, IRepositoryConnection connection)
-  {
-    ThrottleLimits tl = new ThrottleLimits(connection);
-    return tl.getMaximumRates(binNames);
-  }
-
-  /** This class represents the throttle limits out of the connection specification */
-  protected static class ThrottleLimits
-  {
-    protected ArrayList specs = new ArrayList();
-
-    public ThrottleLimits(IRepositoryConnection connection)
-    {
-      String[] throttles = connection.getThrottles();
-      int i = 0;
-      while (i < throttles.length)
-      {
-        try
-        {
-          specs.add(new ThrottleLimitSpec(throttles[i],(double)connection.getThrottleValue(throttles[i])));
-        }
-        catch (PatternSyntaxException e)
-        {
-        }
-        i++;
-      }
-    }
-
-    public double[] getMaximumRates(String[] binNames)
-    {
-      double[] rval = new double[binNames.length];
-      int j = 0;
-      while (j < binNames.length)
-      {
-        String binName = binNames[j];
-        double maxRate = Double.POSITIVE_INFINITY;
-        int i = 0;
-        while (i < specs.size())
-        {
-          ThrottleLimitSpec spec = (ThrottleLimitSpec)specs.get(i++);
-          Pattern p = spec.getRegexp();
-          Matcher m = p.matcher(binName);
-          if (m.find())
-          {
-            double rate = spec.getMaxRate();
-            // The direction of this inequality reflects the fact that the throttling is conservative when more rules are present.
-            if (rate < maxRate)
-              maxRate = rate;
-          }
-        }
-        rval[j] = maxRate;
-        j++;
-      }
-      return rval;
-    }
-
-  }
-
-  /** This is a class which describes an individual throttle limit, in fetches per millisecond. */
-  protected static class ThrottleLimitSpec
-  {
-    /** Regexp */
-    protected Pattern regexp;
-    /** The fetch limit for all bins matching that regexp, in fetches per millisecond */
-    protected double maxRate;
-
-    /** Constructor */
-    public ThrottleLimitSpec(String regexp, double maxRate)
-      throws PatternSyntaxException
-    {
-      this.regexp = Pattern.compile(regexp);
-      this.maxRate = maxRate;
-    }
-
-    /** Get the regexp. */
-    public Pattern getRegexp()
-    {
-      return regexp;
-    }
-
-    /** Get the max count */
-    public double getMaxRate()
-    {
-      return maxRate;
-    }
-  }
-
-  /** This is the key class for the availablePriorities table */
-  protected static class PriorityKey
-  {
-    // The bins, in sorted order
-    protected String[] binNames;
-
-    /** Constructor */
-    public PriorityKey(String[] binNames)
-    {
-      this.binNames = new String[binNames.length];
-      int i = 0;
-      while (i < binNames.length)
-      {
-        this.binNames[i] = binNames[i];
-        i++;
-      }
-      java.util.Arrays.sort(this.binNames);
-    }
-
-    public int hashCode()
-    {
-      int rval = 0;
-      int i = 0;
-      while (i < binNames.length)
-      {
-        rval += binNames[i++].hashCode();
-      }
-      return rval;
-    }
-
-    public boolean equals(Object o)
-    {
-      if (!(o instanceof PriorityKey))
-        return false;
-      PriorityKey p = (PriorityKey)o;
-      if (binNames.length != p.binNames.length)
-        return false;
-      int i = 0;
-      while (i < binNames.length)
-      {
-        if (!binNames[i].equals(p.binNames[i]))
-          return false;
-        i++;
-      }
-      return true;
-    }
-  }
 
   /** This is the class which allows a mutable integer count value to be saved in the bincount table.
   */
@@ -791,42 +304,5 @@
     }
   }
 
-  /** This is the class which allows a mutable integer count value to be saved in the bincount table.
-  */
-  protected static class DoubleBinCount
-  {
-    /** The count */
-    protected double count = 0.0;
-
-    /** Create */
-    public DoubleBinCount()
-    {
-    }
-
-    public DoubleBinCount duplicate()
-    {
-      DoubleBinCount rval = new DoubleBinCount();
-      rval.count = this.count;
-      return rval;
-    }
-
-    /** Increment the counter */
-    public void increment()
-    {
-      count += 1.0;
-    }
-
-    /** Set the value */
-    public void setValue(double count)
-    {
-      this.count = count;
-    }
-
-    /** Get the value */
-    public double getValue()
-    {
-      return count;
-    }
-  }
 
 }
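The QueueTracker hunks above mainly swap raw HashMaps for typed Map<String,BinCount> fields, removing the
(BinCount) casts seen in the old lines.  The resulting get-or-create-and-count idiom, pulled out here as a
hedged sketch (BinCount's increment() is assumed by analogy with the DoubleBinCount class removed above), is:

    synchronized (activeBinCounts)
    {
      BinCount value = activeBinCounts.get(binName);   // typed lookup; no cast needed
      if (value == null)
      {
        value = new BinCount();
        activeBinCounts.put(binName, value);
      }
      value.increment();                               // assumed mutable-counter API
    }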
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/interfaces/RepositoryConnectorFactory.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/interfaces/RepositoryConnectorFactory.java
index 7d3773a..12292b4 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/interfaces/RepositoryConnectorFactory.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/interfaces/RepositoryConnectorFactory.java
@@ -29,48 +29,33 @@
 
 /** This is the factory class for IRepositoryConnector objects.
 */
-public class RepositoryConnectorFactory
+public class RepositoryConnectorFactory extends ConnectorFactory<IRepositoryConnector>
 {
   public static final String _rcsid = "@(#)$Id: RepositoryConnectorFactory.java 988245 2010-08-23 18:39:35Z kwright $";
 
-  // Pool hash table.
-  // Keyed by PoolKey; value is Pool
-  protected static Map poolHash = new HashMap();
-
-  // private static HashMap checkedOutConnectors = new HashMap();
+  // Static factory
+  protected final static RepositoryConnectorFactory thisFactory = new RepositoryConnectorFactory();
 
   private RepositoryConnectorFactory()
   {
   }
 
-  /** Install connector.
-  *@param className is the class name.
-  */
-  public static void install(IThreadContext threadContext, String className)
+  @Override
+  protected boolean isInstalled(IThreadContext tc, String className)
     throws ManifoldCFException
   {
-    IRepositoryConnector connector = getConnectorNoCheck(className);
-    connector.install(threadContext);
-  }
-
-  /** Uninstall connector.
-  *@param className is the class name.
-  */
-  public static void deinstall(IThreadContext threadContext, String className)
-    throws ManifoldCFException
-  {
-    IRepositoryConnector connector = getConnectorNoCheck(className);
-    connector.deinstall(threadContext);
+    IConnectorManager connMgr = ConnectorManagerFactory.make(tc);
+    return connMgr.isInstalled(className);
   }
 
   /** Get the activities supported by this connector.
   *@param className is the class name.
   *@return the list of activities.
   */
-  public static String[] getActivitiesList(IThreadContext threadContext, String className)
+  protected String[] getThisActivitiesList(IThreadContext threadContext, String className)
     throws ManifoldCFException
   {
-    IRepositoryConnector connector = getConnector(threadContext, className);
+    IRepositoryConnector connector = getThisConnector(threadContext, className);
     if (connector == null)
       return null;
     String[] values = connector.getActivitiesList();
@@ -82,10 +67,10 @@
   *@param className is the class name.
   *@return the list of link types, in sorted order.
   */
-  public static String[] getRelationshipTypes(IThreadContext threadContext, String className)
+  protected String[] getThisRelationshipTypes(IThreadContext threadContext, String className)
     throws ManifoldCFException
   {
-    IRepositoryConnector connector = getConnector(threadContext, className);
+    IRepositoryConnector connector = getThisConnector(threadContext, className);
     if (connector == null)
       return null;
     String[] values = connector.getRelationshipTypes();
@@ -97,24 +82,69 @@
   *@param className is the class name.
   *@return the connector operating model, as specified in IRepositoryConnector.
   */
-  public static int getConnectorModel(IThreadContext threadContext, String className)
+  protected int getThisConnectorModel(IThreadContext threadContext, String className)
     throws ManifoldCFException
   {
-    IRepositoryConnector connector = getConnector(threadContext, className);
+    IRepositoryConnector connector = getThisConnector(threadContext, className);
     if (connector == null)
       return -1;
     return connector.getConnectorModel();
   }
 
+  /** Install connector.
+  *@param className is the class name.
+  */
+  public static void install(IThreadContext threadContext, String className)
+    throws ManifoldCFException
+  {
+    thisFactory.installThis(threadContext,className);
+  }
+
+  /** Uninstall connector.
+  *@param className is the class name.
+  */
+  public static void deinstall(IThreadContext threadContext, String className)
+    throws ManifoldCFException
+  {
+    thisFactory.deinstallThis(threadContext,className);
+  }
+
+  /** Get the activities supported by this connector.
+  *@param className is the class name.
+  *@return the list of activities.
+  */
+  public static String[] getActivitiesList(IThreadContext threadContext, String className)
+    throws ManifoldCFException
+  {
+    return thisFactory.getThisActivitiesList(threadContext,className);
+  }
+
+  /** Get the link types logged by this connector.
+  *@param className is the class name.
+  *@return the list of link types, in sorted order.
+  */
+  public static String[] getRelationshipTypes(IThreadContext threadContext, String className)
+    throws ManifoldCFException
+  {
+    return thisFactory.getThisRelationshipTypes(threadContext,className);
+  }
+
+  /** Get the operating mode for a connector.
+  *@param className is the class name.
+  *@return the connector operating model, as specified in IRepositoryConnector.
+  */
+  public static int getConnectorModel(IThreadContext threadContext, String className)
+    throws ManifoldCFException
+  {
+    return thisFactory.getThisConnectorModel(threadContext,className);
+  }
+
   /** Output the configuration header section.
   */
   public static void outputConfigurationHeader(IThreadContext threadContext, String className, IHTTPOutput out, Locale locale, ConfigParams parameters, ArrayList tabsArray)
     throws ManifoldCFException, IOException
   {
-    IRepositoryConnector connector = getConnector(threadContext, className);
-    if (connector == null)
-      return;
-    connector.outputConfigurationHeader(threadContext,out,locale,parameters,tabsArray);
+    thisFactory.outputThisConfigurationHeader(threadContext,className,out,locale,parameters,tabsArray);
   }
 
   /** Output the configuration body section.
@@ -122,10 +152,7 @@
   public static void outputConfigurationBody(IThreadContext threadContext, String className, IHTTPOutput out, Locale locale, ConfigParams parameters, String tabName)
     throws ManifoldCFException, IOException
   {
-    IRepositoryConnector connector = getConnector(threadContext, className);
-    if (connector == null)
-      return;
-    connector.outputConfigurationBody(threadContext,out,locale,parameters,tabName);
+    thisFactory.outputThisConfigurationBody(threadContext,className,out,locale,parameters,tabName);
   }
 
   /** Process configuration post data for a connector.
@@ -133,10 +160,7 @@
   public static String processConfigurationPost(IThreadContext threadContext, String className, IPostParameters variableContext, Locale locale, ConfigParams configParams)
     throws ManifoldCFException
   {
-    IRepositoryConnector connector = getConnector(threadContext, className);
-    if (connector == null)
-      return null;
-    return connector.processConfigurationPost(threadContext,variableContext,locale,configParams);
+    return thisFactory.processThisConfigurationPost(threadContext,className,variableContext,locale,configParams);
   }
   
   /** View connector configuration.
@@ -144,11 +168,7 @@
   public static void viewConfiguration(IThreadContext threadContext, String className, IHTTPOutput out, Locale locale, ConfigParams configParams)
     throws ManifoldCFException, IOException
   {
-    IRepositoryConnector connector = getConnector(threadContext, className);
-    // We want to be able to view connections even if they have unregistered connectors.
-    if (connector == null)
-      return;
-    connector.viewConfiguration(threadContext,out,locale,configParams);
+    thisFactory.viewThisConfiguration(threadContext,className,out,locale,configParams);
   }
 
   /** Get a repository connector instance, without checking for installed connector.
@@ -158,619 +178,7 @@
   public static IRepositoryConnector getConnectorNoCheck(String className)
     throws ManifoldCFException
   {
-    try
-    {
-      Class theClass = ManifoldCF.findClass(className);
-      Class[] argumentClasses = new Class[0];
-      // Look for a constructor
-      Constructor c = theClass.getConstructor(argumentClasses);
-      Object[] arguments = new Object[0];
-      Object o = c.newInstance(arguments);
-      if (!(o instanceof IRepositoryConnector))
-        throw new ManifoldCFException("Class '"+className+"' does not implement IRepositoryConnector.");
-      return (IRepositoryConnector)o;
-    }
-    catch (InvocationTargetException e)
-    {
-      Throwable z = e.getTargetException();
-      if (z instanceof Error)
-        throw (Error)z;
-      else if (z instanceof RuntimeException)
-        throw (RuntimeException)z;
-      else
-        throw (ManifoldCFException)z;
-    }
-    catch (ClassNotFoundException e)
-    {
-      throw new ManifoldCFException("No repository connector class '"+className+"' was found.",
-        e);
-    }
-    catch (NoSuchMethodException e)
-    {
-      throw new ManifoldCFException("No appropriate constructor for IRepositoryConnector implementation '"+
-        className+"'.  Need xxx(ConfigParams).",
-        e);
-    }
-    catch (SecurityException e)
-    {
-      throw new ManifoldCFException("Protected constructor for IRepositoryConnector implementation '"+className+"'",
-        e);
-    }
-    catch (IllegalAccessException e)
-    {
-      throw new ManifoldCFException("Unavailable constructor for IRepositoryConnector implementation '"+className+"'",
-        e);
-    }
-    catch (IllegalArgumentException e)
-    {
-      throw new ManifoldCFException("Shouldn't happen!!!",e);
-    }
-    catch (InstantiationException e)
-    {
-      throw new ManifoldCFException("InstantiationException for IRepositoryConnector implementation '"+className+"'",
-        e);
-    }
-    catch (ExceptionInInitializerError e)
-    {
-      throw new ManifoldCFException("ExceptionInInitializerError for IRepositoryConnector implementation '"+className+"'",
-        e);
-    }
-
+    return thisFactory.getThisConnectorNoCheck(className);
   }
 
-  /** Get a repository connector instance.
-  *@param className is the class name.
-  *@return the instance.
-  */
-  protected static IRepositoryConnector getConnector(IThreadContext threadContext, String className)
-    throws ManifoldCFException
-  {
-    IConnectorManager connMgr = ConnectorManagerFactory.make(threadContext);
-    if (connMgr.isInstalled(className) == false)
-      return null;
-
-    try
-    {
-      Class theClass = ManifoldCF.findClass(className);
-      Class[] argumentClasses = new Class[0];
-      // Look for a constructor
-      Constructor c = theClass.getConstructor(argumentClasses);
-      Object[] arguments = new Object[0];
-      Object o = c.newInstance(arguments);
-      if (!(o instanceof IRepositoryConnector))
-        throw new ManifoldCFException("Class '"+className+"' does not implement IRepositoryConnector.");
-      return (IRepositoryConnector)o;
-    }
-    catch (InvocationTargetException e)
-    {
-      Throwable z = e.getTargetException();
-      if (z instanceof Error)
-        throw (Error)z;
-      else if (z instanceof RuntimeException)
-        throw (RuntimeException)z;
-      else
-        throw (ManifoldCFException)z;
-    }
-    catch (ClassNotFoundException e)
-    {
-      // This MAY mean that an existing connector has been uninstalled; check out this possibility!
-      // We return null because that is the signal that we cannot get a connector instance for that reason.
-      if (connMgr.isInstalled(className) == false)
-        return null;
-
-      throw new ManifoldCFException("No repository connector class '"+className+"' was found.",
-        e);
-    }
-    catch (NoSuchMethodException e)
-    {
-      throw new ManifoldCFException("No appropriate constructor for IRepositoryConnector implementation '"+
-        className+"'.  Need xxx(ConfigParams).",
-        e);
-    }
-    catch (SecurityException e)
-    {
-      throw new ManifoldCFException("Protected constructor for IRepositoryConnector implementation '"+className+"'",
-        e);
-    }
-    catch (IllegalAccessException e)
-    {
-      throw new ManifoldCFException("Unavailable constructor for IRepositoryConnector implementation '"+className+"'",
-        e);
-    }
-    catch (IllegalArgumentException e)
-    {
-      throw new ManifoldCFException("Shouldn't happen!!!",e);
-    }
-    catch (InstantiationException e)
-    {
-      throw new ManifoldCFException("InstantiationException for IRepositoryConnector implementation '"+className+"'",
-        e);
-    }
-    catch (ExceptionInInitializerError e)
-    {
-      throw new ManifoldCFException("ExceptionInInitializerError for IRepositoryConnector implementation '"+className+"'",
-        e);
-    }
-
-  }
-
-  /** Get multiple repository connectors, all at once.  Do this in a particular order
-  * so that any connector exhaustion will not cause a deadlock.
-  */
-  public static IRepositoryConnector[] grabMultiple(IThreadContext threadContext,
-    String[] orderingKeys, String[] classNames, ConfigParams[] configInfos, int[] maxPoolSizes)
-    throws ManifoldCFException
-  {
-    IRepositoryConnector[] rval = new IRepositoryConnector[classNames.length];
-    HashMap orderMap = new HashMap();
-    int i = 0;
-    while (i < orderingKeys.length)
-    {
-      if (orderMap.get(orderingKeys[i]) != null)
-        throw new ManifoldCFException("Found duplicate order key");
-      orderMap.put(orderingKeys[i],new Integer(i));
-      i++;
-    }
-    java.util.Arrays.sort(orderingKeys);
-    i = 0;
-    while (i < orderingKeys.length)
-    {
-      String orderingKey = orderingKeys[i];
-      int index = ((Integer)orderMap.get(orderingKey)).intValue();
-      String className = classNames[index];
-      ConfigParams cp = configInfos[index];
-      int maxPoolSize = maxPoolSizes[index];
-      try
-      {
-        IRepositoryConnector connector = grab(threadContext,className,cp,maxPoolSize);
-        rval[index] = connector;
-      }
-      catch (Throwable e)
-      {
-        while (i > 0)
-        {
-          i--;
-          orderingKey = orderingKeys[i];
-          index = ((Integer)orderMap.get(orderingKey)).intValue();
-          try
-          {
-            release(rval[index]);
-          }
-          catch (ManifoldCFException e2)
-          {
-          }
-        }
-        if (e instanceof ManifoldCFException)
-          throw (ManifoldCFException)e;
-	else if (e instanceof RuntimeException)
-          throw (RuntimeException)e;
-        throw (Error)e;
-      }
-      i++;
-    }
-    return rval;
-  }
-
-  /** Get a repository connector.
-  * The connector is specified by its class and its parameters.
-  *@param threadContext is the current thread context.
-  *@param className is the name of the class to get a connector for.
-  *@param configInfo are the name/value pairs constituting configuration info
-  * for this class.
-  */
-  public static IRepositoryConnector grab(IThreadContext threadContext,
-    String className, ConfigParams configInfo, int maxPoolSize)
-    throws ManifoldCFException
-  {
-    // We want to get handles off the pool and use them.  But the
-    // handles we fetch have to have the right config information.
-
-    // Use the classname and config info to build a pool key.  This
-    // key will be discarded if we actually have to save a key persistently,
-    // since we avoid copying the configInfo unnecessarily.
-    PoolKey pk = new PoolKey(className,configInfo);
-    Pool p;
-    synchronized (poolHash)
-    {
-      p = (Pool)poolHash.get(pk);
-      if (p == null)
-      {
-        pk = new PoolKey(className,configInfo.duplicate());
-        p = new Pool(pk,maxPoolSize);
-        poolHash.put(pk,p);
-      }
-    }
-
-    IRepositoryConnector rval = p.getConnector(threadContext);
-
-    // Enter it in the pool so we can figure out whether it closed
-    // synchronized (checkedOutConnectors)
-    // {
-    //      checkedOutConnectors.put(rval.toString(),new ConnectorTracker(rval));
-    // }
-
-    return rval;
-
-  }
-
-  /** Release multiple repository connectors.
-  */
-  public static void releaseMultiple(IRepositoryConnector[] connectors)
-    throws ManifoldCFException
-  {
-    int i = 0;
-    ManifoldCFException currentException = null;
-    while (i < connectors.length)
-    {
-      IRepositoryConnector c = connectors[i++];
-      try
-      {
-        release(c);
-      }
-      catch (ManifoldCFException e)
-      {
-        if (currentException == null)
-          currentException = e;
-      }
-    }
-    if (currentException != null)
-      throw currentException;
-  }
-
-  /** Release a repository connector.
-  *@param connector is the connector to release.
-  */
-  public static void release(IRepositoryConnector connector)
-    throws ManifoldCFException
-  {
-    // If the connector is null, skip the release, because we never really got the connector in the first place.
-    if (connector == null)
-      return;
-
-    // Figure out which pool this goes on, and put it there
-    PoolKey pk = new PoolKey(connector.getClass().getName(),connector.getConfiguration());
-    Pool p;
-    synchronized (poolHash)
-    {
-      p = (Pool)poolHash.get(pk);
-    }
-
-    p.releaseConnector(connector);
-
-    // synchronized (checkedOutConnectors)
-    // {
-    //      checkedOutConnectors.remove(connector.toString());
-    // }
-
-  }
-
-  /** Idle notification for inactive repository connector handles.
-  * This method polls all inactive handles.
-  */
-  public static void pollAllConnectors(IThreadContext threadContext)
-    throws ManifoldCFException
-  {
-    // System.out.println("Pool stats:");
-
-    // Go through the whole pool and notify everyone
-    synchronized (poolHash)
-    {
-      Iterator iter = poolHash.values().iterator();
-      while (iter.hasNext())
-      {
-        Pool p = (Pool)iter.next();
-        p.pollAll(threadContext);
-        //p.printStats();
-      }
-    }
-
-    // System.out.println("About to check if any repository connector instances have been abandoned...");
-    // checkConnectors(System.currentTimeMillis());
-  }
-
-  /** Clean up all open repository connector handles.
-  * This method is called when the connector pool needs to be flushed,
-  * to free resources.
-  *@param threadContext is the local thread context.
-  */
-  public static void closeAllConnectors(IThreadContext threadContext)
-    throws ManifoldCFException
-  {
-    // Go through the whole pool and clean it out
-    synchronized (poolHash)
-    {
-      Iterator iter = poolHash.values().iterator();
-      while (iter.hasNext())
-      {
-        Pool p = (Pool)iter.next();
-        p.releaseAll(threadContext);
-      }
-    }
-  }
-
-  /** Track connection allocation */
-  // public static void checkConnectors(long currentTime)
-  // {
-  //      synchronized (checkedOutConnectors)
-  //      {
-  //              Iterator iter = checkedOutConnectors.keySet().iterator();
-  //              while (iter.hasNext())
-  //              {
-  //                      Object key = iter.next();
-  //                      ConnectorTracker ct = (ConnectorTracker)checkedOutConnectors.get(key);
-  //                      if (ct.hasExpired(currentTime))
-  //                              ct.printDetails();
-  //              }
-  //      }
-  // }
-
-  /** This is an immutable pool key class, which describes a pool in terms of two independent keys.
-  */
-  public static class PoolKey
-  {
-    protected String className;
-    protected ConfigParams configInfo;
-
-    /** Constructor.
-    */
-    public PoolKey(String className, Map configInfo)
-    {
-      this.className = className;
-      this.configInfo = new ConfigParams(configInfo);
-    }
-
-    public PoolKey(String className, ConfigParams configInfo)
-    {
-      this.className = className;
-      this.configInfo = configInfo;
-    }
-
-    /** Get the class name.
-    *@return the class name.
-    */
-    public String getClassName()
-    {
-      return className;
-    }
-
-    /** Get the config info.
-    *@return the params
-    */
-    public ConfigParams getParams()
-    {
-      return configInfo;
-    }
-
-    /** Hash code.
-    */
-    public int hashCode()
-    {
-      return className.hashCode() + configInfo.hashCode();
-    }
-
-    /** Equals operator.
-    */
-    public boolean equals(Object o)
-    {
-      if (!(o instanceof PoolKey))
-        return false;
-
-      PoolKey pk = (PoolKey)o;
-      return pk.className.equals(className) && pk.configInfo.equals(configInfo);
-    }
-
-  }
-
-  /** This class represents a value in the pool hash, which corresponds to a given key.
-  */
-  public static class Pool
-  {
-    protected ArrayList stack = new ArrayList();
-    protected PoolKey key;
-    protected int numFree;
-
-    /** Constructor
-    */
-    public Pool(PoolKey pk, int maxCount)
-    {
-      key = pk;
-      numFree = maxCount;
-    }
-
-    /** Grab a repository connector.
-    * If none exists, construct it using the information in the pool key.
-    *@return the connector, or null if no connector could be connected.
-    */
-    public synchronized IRepositoryConnector getConnector(IThreadContext threadContext)
-      throws ManifoldCFException
-    {
-      while (numFree == 0)
-      {
-        try
-        {
-          wait();
-        }
-        catch (InterruptedException e)
-        {
-          throw new ManifoldCFException("Interrupted: "+e.getMessage(),e,ManifoldCFException.INTERRUPTED);
-        }
-      }
-
-      if (stack.size() == 0)
-      {
-        String className = key.getClassName();
-        ConfigParams configParams = key.getParams();
-
-        IConnectorManager connMgr = ConnectorManagerFactory.make(threadContext);
-        if (connMgr.isInstalled(className) == false)
-          return null;
-
-        try
-        {
-          Class theClass = ManifoldCF.findClass(className);
-          Class[] argumentClasses = new Class[0];
-          // Look for a constructor
-          Constructor c = theClass.getConstructor(argumentClasses);
-          Object[] arguments = new Object[0];
-          Object o = c.newInstance(arguments);
-          if (!(o instanceof IRepositoryConnector))
-            throw new ManifoldCFException("Class '"+className+"' does not implement IRepositoryConnector.");
-          IRepositoryConnector newrc = (IRepositoryConnector)o;
-          newrc.connect(configParams);
-          stack.add(newrc);
-        }
-        catch (InvocationTargetException e)
-        {
-          Throwable z = e.getTargetException();
-          if (z instanceof Error)
-            throw (Error)z;
-          else if (z instanceof RuntimeException)
-            throw (RuntimeException)z;
-          else
-            throw (ManifoldCFException)z;
-        }
-        catch (ClassNotFoundException e)
-        {
-          // If we see this exception, it COULD mean that the connector was uninstalled, and we happened to get here
-          // after that occurred.
-          // We return null because that is the signal that we cannot get a connector instance for that reason.
-          if (connMgr.isInstalled(className) == false)
-            return null;
-
-          throw new ManifoldCFException("No repository connector class '"+className+"' was found.",
-            e);
-        }
-        catch (NoSuchMethodException e)
-        {
-          throw new ManifoldCFException("No appropriate constructor for IRepositoryConnector implementation '"+
-            className+"'.  Need xxx(ConfigParams).",
-            e);
-        }
-        catch (SecurityException e)
-        {
-          throw new ManifoldCFException("Protected constructor for IRepositoryConnector implementation '"+className+"'",
-            e);
-        }
-        catch (IllegalAccessException e)
-        {
-          throw new ManifoldCFException("Unavailable constructor for IRepositoryConnector implementation '"+className+"'",
-            e);
-        }
-        catch (IllegalArgumentException e)
-        {
-          throw new ManifoldCFException("Shouldn't happen!!!",e);
-        }
-        catch (InstantiationException e)
-        {
-          throw new ManifoldCFException("InstantiationException for IRepositoryConnector implementation '"+className+"'",
-            e);
-        }
-        catch (ExceptionInInitializerError e)
-        {
-          throw new ManifoldCFException("ExceptionInInitializerError for IRepositoryConnector implementation '"+className+"'",
-            e);
-        }
-      }
-
-      // Since thread context set can fail, do that before we remove it from the pool.
-      IRepositoryConnector rc = (IRepositoryConnector)stack.get(stack.size()-1);
-      rc.setThreadContext(threadContext);
-      stack.remove(stack.size()-1);
-      numFree--;
-
-      return rc;
-    }
-
-    /** Release a repository connector to the pool.
-    *@param connector is the connector.
-    */
-    public synchronized void releaseConnector(IRepositoryConnector connector)
-      throws ManifoldCFException
-    {
-      if (connector == null)
-        return;
-
-      // Make sure connector knows it's released
-      connector.clearThreadContext();
-      // Append
-      stack.add(connector);
-      numFree++;
-      notifyAll();
-    }
-
-    /** Notify all free connectors.
-    */
-    public synchronized void pollAll(IThreadContext threadContext)
-      throws ManifoldCFException
-    {
-      int i = 0;
-      while (i < stack.size())
-      {
-        IConnector rc = (IConnector)stack.get(i++);
-        // Notify
-        rc.setThreadContext(threadContext);
-        try
-        {
-          rc.poll();
-        }
-        finally
-        {
-          rc.clearThreadContext();
-        }
-      }
-    }
-
-    /** Release all free connectors.
-    */
-    public synchronized void releaseAll(IThreadContext threadContext)
-      throws ManifoldCFException
-    {
-      while (stack.size() > 0)
-      {
-        // Disconnect
-        IConnector rc = (IConnector)stack.get(stack.size()-1);
-        rc.setThreadContext(threadContext);
-        try
-        {
-          rc.disconnect();
-          stack.remove(stack.size()-1);
-        }
-        finally
-        {
-          rc.clearThreadContext();
-        }
-      }
-    }
-
-    /** Print pool stats */
-    public synchronized void printStats()
-    {
-      System.out.println(" Class name = "+key.getClassName()+"; Number free = "+Integer.toString(numFree));
-    }
-  }
-
-
-  protected static class ConnectorTracker
-  {
-    protected IRepositoryConnector theConnector;
-    protected long checkoutTime;
-    protected Exception theTrace;
-
-    public ConnectorTracker(IRepositoryConnector theConnector)
-    {
-      this.theConnector = theConnector;
-      this.checkoutTime = System.currentTimeMillis();
-      this.theTrace = new Exception("Stack trace");
-    }
-
-    public boolean hasExpired(long currentTime)
-    {
-      return (checkoutTime + 300000L < currentTime);
-    }
-
-    public void printDetails()
-    {
-      Logging.threads.error("Connector instance may have been abandoned: "+theConnector.toString(),theTrace);
-    }
-  }
 }
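The RepositoryConnectorFactory rewrite above keeps the public static API intact while the shared behavior
moves into the ConnectorFactory<IRepositoryConnector> base class.  Reduced to a generic sketch with made-up
names (the real base class is not shown in this hunk), the static-facade-over-singleton shape is roughly:

    abstract class Factory<T>
    {
      // Shared, overridable implementation lives on the instance.
      protected T getThisConnectorNoCheck(String className) { /* reflection, pooling, ... */ return null; }
    }

    class Widget {}

    class WidgetFactory extends Factory<Widget>
    {
      private final static WidgetFactory thisFactory = new WidgetFactory();
      private WidgetFactory() {}

      // Existing static entry points simply delegate to the singleton.
      public static Widget getConnectorNoCheck(String className)
      {
        return thisFactory.getThisConnectorNoCheck(className);
      }
    }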
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/interfaces/RepositoryConnectorPoolFactory.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/interfaces/RepositoryConnectorPoolFactory.java
new file mode 100644
index 0000000..bac5f3d
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/interfaces/RepositoryConnectorPoolFactory.java
@@ -0,0 +1,54 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.crawler.interfaces;
+
+import org.apache.manifoldcf.core.interfaces.*;
+
+import java.util.*;
+
+/** Repository connector pool manager factory.
+*/
+public class RepositoryConnectorPoolFactory
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  // Name under which the pool handle is stored in the thread context
+  private final static String objectName = "_RepositoryConnectorPoolMgr_";
+
+  private RepositoryConnectorPoolFactory()
+  {
+  }
+
+  /** Make a repository connector pool handle.
+  *@param tc is the thread context.
+  *@return the handle.
+  */
+  public static IRepositoryConnectorPool make(IThreadContext tc)
+    throws ManifoldCFException
+  {
+    Object o = tc.get(objectName);
+    if (o == null || !(o instanceof IRepositoryConnectorPool))
+    {
+      o = new org.apache.manifoldcf.crawler.repositoryconnectorpool.RepositoryConnectorPool(tc);
+      tc.save(objectName,o);
+    }
+    return (IRepositoryConnectorPool)o;
+  }
+
+}
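Callers obtain the pool handle through the factory, which creates it on first use and caches it in the
supplied thread context.  A minimal usage sketch, from code that already handles ManifoldCFException and
with an illustrative variable name:

    IRepositoryConnectorPool connectorPool = RepositoryConnectorPoolFactory.make(threadContext);
    // A second make() call against the same thread context returns the cached instance.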
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/interfaces/ReprioritizationTrackerFactory.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/interfaces/ReprioritizationTrackerFactory.java
new file mode 100644
index 0000000..a142dea
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/interfaces/ReprioritizationTrackerFactory.java
@@ -0,0 +1,53 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.crawler.interfaces;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.crawler.system.*;
+
+/** Factory class for IReprioritizationTracker.
+*/
+public class ReprioritizationTrackerFactory
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  // Name
+  protected final static String reprioritizationTrackerName = "_ReprioritizationTracker_";
+
+  private ReprioritizationTrackerFactory()
+  {
+  }
+
+  /** Create a reprioritization tracker handle.
+  *@param threadContext is the thread context.
+  *@return the handle.
+  */
+  public static IReprioritizationTracker make(IThreadContext threadContext)
+    throws ManifoldCFException
+  {
+    Object o = threadContext.get(reprioritizationTrackerName);
+    if (o == null || !(o instanceof IReprioritizationTracker))
+    {
+      o = new org.apache.manifoldcf.crawler.reprioritizationtracker.ReprioritizationTracker(threadContext);
+      threadContext.save(reprioritizationTrackerName,o);
+    }
+    return (IReprioritizationTracker)o;
+  }
+
+}
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/jobs/Carrydown.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/jobs/Carrydown.java
index 2559e8f..034a477 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/jobs/Carrydown.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/jobs/Carrydown.java
@@ -39,6 +39,7 @@
  * <tr><td>datavaluehash</td><td>VARCHAR(40)</td><td></td></tr>
  * <tr><td>datavalue</td><td>LONGTEXT</td><td></td></tr>
  * <tr><td>isnew</td><td>CHAR(1)</td><td></td></tr>
+ * <tr><td>processid</td><td>VARCHAR(16)</td><td></td></tr>
  * </table>
  * <br><br>
  * 
@@ -55,6 +56,7 @@
   public static final String dataValueHashField = "datavaluehash";
   public static final String dataValueField = "datavalue";
   public static final String newField = "isnew";
+  public static final String processIDField = "processid";
 
   /** The standard value for the "isnew" field.  Means that the link existed prior to this scan, and no new link
   * was found yet. */
@@ -106,6 +108,7 @@
         map.put(dataValueHashField,new ColumnDescription("VARCHAR(40)",false,true,null,null,false));
         map.put(dataValueField,new ColumnDescription("LONGTEXT",false,true,null,null,false));
         map.put(newField,new ColumnDescription("CHAR(1)",false,true,null,null,false));
+        map.put(processIDField,new ColumnDescription("VARCHAR(16)",false,true,null,null,false));
 
         performCreate(map,null);
 
@@ -113,13 +116,19 @@
       else
       {
         // Upgrade code goes here, if needed.
+        if (existing.get(processIDField) == null)
+        {
+          Map insertMap = new HashMap();
+          insertMap.put(processIDField,new ColumnDescription("VARCHAR(16)",false,true,null,null,false));
+          performAlter(insertMap,null,null,null);
+        }
       }
 
       // Now do index management
 
       IndexDescription uniqueIndex = new IndexDescription(true,new String[]{jobIDField,parentIDHashField,childIDHashField,dataNameField,dataValueHashField});
       IndexDescription jobChildDataIndex = new IndexDescription(false,new String[]{jobIDField,childIDHashField,dataNameField});
-      IndexDescription newIndex = new IndexDescription(false,new String[]{newField});
+      IndexDescription newIndex = new IndexDescription(false,new String[]{newField,processIDField});
 
       Map indexes = getTableIndexes(null,null);
       Iterator iter = indexes.keySet().iterator();
@@ -198,8 +207,31 @@
   //
 
   /** Reset, at startup time.
+  *@param processID is the process ID.
   */
-  public void reset()
+  public void restart(String processID)
+    throws ManifoldCFException
+  {
+    // Delete "new" rows
+    HashMap map = new HashMap();
+    ArrayList list = new ArrayList();
+    String query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(newField,statusToString(ISNEW_NEW)),
+      new UnitaryClause(processIDField,processID)});
+    performDelete("WHERE "+query,list,null);
+
+    // Convert "existing" rows to base
+    map.put(newField,statusToString(ISNEW_BASE));
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(newField,statusToString(ISNEW_EXISTING)),
+      new UnitaryClause(processIDField,processID)});
+    performUpdate(map,"WHERE "+query,list,null);
+  }
+
+  /** Clean up after all process IDs.
+  */
+  public void restart()
     throws ManifoldCFException
   {
     // Delete "new" rows
@@ -216,23 +248,31 @@
       new UnitaryClause(newField,statusToString(ISNEW_EXISTING))});
     performUpdate(map,"WHERE "+query,list,null);
   }
+  
+  /** Reset the entire cluster at startup time.
+  */
+  public void restartCluster()
+    throws ManifoldCFException
+  {
+    // Does nothing
+  }
 
   /** Add carrydown data for a given parent/child pair.
   *
   *@return true if new carrydown data was recorded; false otherwise.
   */
   public boolean recordCarrydownData(Long jobID, String parentDocumentIDHash, String childDocumentIDHash,
-    String[] documentDataNames, String[][] documentDataValueHashes, Object[][] documentDataValues)
+    String[] documentDataNames, String[][] documentDataValueHashes, Object[][] documentDataValues, String processID)
     throws ManifoldCFException
   {
     return recordCarrydownDataMultiple(jobID,parentDocumentIDHash,new String[]{childDocumentIDHash},
-      new String[][]{documentDataNames},new String[][][]{documentDataValueHashes},new Object[][][]{documentDataValues})[0];
+      new String[][]{documentDataNames},new String[][][]{documentDataValueHashes},new Object[][][]{documentDataValues},processID)[0];
   }
 
   /** Add carrydown data to the table.
   */
   public boolean[] recordCarrydownDataMultiple(Long jobID, String parentDocumentIDHash, String[] childDocumentIDHashes,
-    String[][] dataNames, String[][][] dataValueHashes, Object[][][] dataValues)
+    String[][] dataNames, String[][][] dataValueHashes, Object[][][] dataValues, String processID)
     throws ManifoldCFException
   {
 
@@ -340,19 +380,30 @@
         }
 
         map.put(newField,statusToString(ISNEW_NEW));
+        map.put(processIDField,processID);
         performInsert(map,null);
         noteModifications(1,0,0);
         insertHappened.put(childDocumentIDHash,new Boolean(true));
       }
       else
       {
-        sb = new StringBuilder();
-        sb.append("WHERE ").append(jobIDField).append("=? AND ")
+        sb = new StringBuilder("WHERE ");
+        ArrayList updateList = new ArrayList();
+        sb.append(buildConjunctionClause(updateList,new ClauseDescription[]{
+          new UnitaryClause(jobIDField,jobID),
+          new UnitaryClause(parentIDHashField,parentDocumentIDHash),
+          new UnitaryClause(childIDHashField,childDocumentIDHash),
+          new UnitaryClause(dataNameField,dataName),
+          (dataValueHash==null)?
+            new NullCheckClause(dataValueHashField,true):
+            new UnitaryClause(dataValueHashField,dataValueHash)}));
+
+        /*
+        sb.append(jobIDField).append("=? AND ")
           .append(parentIDHashField).append("=? AND ")
           .append(childIDHashField).append("=? AND ")
           .append(dataNameField).append("=? AND ");
 
-        ArrayList updateList = new ArrayList();
         updateList.add(jobID);
         updateList.add(parentDocumentIDHash);
         updateList.add(childDocumentIDHash);
@@ -366,8 +417,10 @@
         {
           sb.append(dataValueHashField).append(" IS NULL");
         }
-
+        */
+            
         map.put(newField,statusToString(ISNEW_EXISTING));
+        map.put(processIDField,processID);
         performUpdate(map,sb.toString(),updateList,null);
         noteModifications(0,1,0);
       }
@@ -483,6 +536,7 @@
     
     HashMap map = new HashMap();
     map.put(newField,statusToString(ISNEW_BASE));
+    map.put(processIDField,null);
     performUpdate(map,sb.toString(),newList,null);
     
     noteModifications(0,list.size(),0);
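
The new processid column lets a restarting node undo only its own half-finished carrydown work. In rough terms, Carrydown.restart(processID) amounts to the two statements sketched below in plain JDBC; this is an illustration only: the table name "carrydown" and the single-character status values are assumptions, and the real code goes through performDelete()/performUpdate() with the clause builders shown above.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

/** Hypothetical JDBC equivalent of Carrydown.restart(processID), for illustration only. */
public class CarrydownRestartSketch
{
  public static void restart(Connection connection, String processID)
    throws SQLException
  {
    // 1. Discard rows this process marked "new" during the interrupted scan.
    try (PreparedStatement delete = connection.prepareStatement(
      "DELETE FROM carrydown WHERE isnew = ? AND processid = ?"))
    {
      delete.setString(1, "N");          // assumed character for ISNEW_NEW
      delete.setString(2, processID);
      delete.executeUpdate();
    }
    // 2. Return rows this process marked "existing" to the base state.
    try (PreparedStatement update = connection.prepareStatement(
      "UPDATE carrydown SET isnew = ? WHERE isnew = ? AND processid = ?"))
    {
      update.setString(1, "B");          // assumed character for ISNEW_BASE
      update.setString(2, "E");          // assumed character for ISNEW_EXISTING
      update.setString(3, processID);
      update.executeUpdate();
    }
  }
}

The no-argument restart() performs the same sweep without the processid filter, restartCluster() is deliberately a no-op for this table, and the widened (isnew, processid) index added in the same hunk keeps both statements from scanning the whole table.
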
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/jobs/EventManager.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/jobs/EventManager.java
index 234974e..b5f321b 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/jobs/EventManager.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/jobs/EventManager.java
@@ -21,6 +21,7 @@
 import org.apache.manifoldcf.core.interfaces.*;
 import org.apache.manifoldcf.crawler.interfaces.*;
 import org.apache.manifoldcf.crawler.interfaces.CacheKeyFactory;
+import org.apache.manifoldcf.crawler.system.ManifoldCF;
 import java.util.*;
 
 /** This class manages the events table.
@@ -33,6 +34,7 @@
 * <tr class="TableHeadingColor">
 * <th>Field</th><th>Type</th><th>Description&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</th>
 * <tr><td>name</td><td>VARCHAR(255)</td><td>Primary Key</td></tr>
+* <tr><td>processid</td><td>VARCHAR(16)</td><td></td></tr>
 * </table>
 * <br><br>
 * 
@@ -43,7 +45,8 @@
 
   // Field names
   public final static String eventNameField = "name";
-
+  public final static String processIDField = "processid";
+  
   /** Constructor.
   *@param database is the database handle.
   */
@@ -66,14 +69,41 @@
       {
         HashMap map = new HashMap();
         map.put(eventNameField,new ColumnDescription("VARCHAR(255)",true,false,null,null,false));
+        map.put(processIDField,new ColumnDescription("VARCHAR(16)",false,true,null,null,false));
         performCreate(map,null);
       }
       else
       {
         // Upgrade goes here if needed
+        if (existing.get(processIDField) == null)
+        {
+          Map insertMap = new HashMap();
+          insertMap.put(processIDField,new ColumnDescription("VARCHAR(16)",false,true,null,null,false));
+          performAlter(insertMap,null,null,null);
+        }
       }
 
       // Index management goes here
+      IndexDescription processIDIndex = new IndexDescription(false,new String[]{processIDField});
+      // Get rid of unused indexes
+      Map indexes = getTableIndexes(null,null);
+      Iterator iter = indexes.keySet().iterator();
+      while (iter.hasNext())
+      {
+        String indexName = (String)iter.next();
+        IndexDescription id = (IndexDescription)indexes.get(indexName);
+
+        if (processIDIndex != null && id.equals(processIDIndex))
+          processIDIndex = null;
+        else if (indexName.indexOf("_pkey") == -1)
+          // This index shouldn't be here; drop it
+          performRemoveIndex(indexName);
+      }
+
+      // Build missing indexes
+
+      if (processIDIndex != null)
+        performAddIndex(null,processIDIndex);
 
       break;
     }
@@ -84,42 +114,45 @@
   public void deinstall()
     throws ManifoldCFException
   {
-    beginTransaction();
-    try
-    {
-      performDrop(null);
-    }
-    catch (ManifoldCFException e)
-    {
-      signalRollback();
-      throw e;
-    }
-    catch (Error e)
-    {
-      signalRollback();
-      throw e;
-    }
-    finally
-    {
-      endTransaction();
-    }
+    performDrop(null);
   }
 
   /** Prepare for restart.
+  *@param processID is the processID to restart.
+  */
+  public void restart(String processID)
+    throws ManifoldCFException
+  {
+    // Delete all rows in this table matching the processID
+    ArrayList list = new ArrayList();
+    String query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(processIDField,processID)});
+    performDelete("WHERE "+query,list,null);
+  }
+
+  /** Clean up after all process IDs.
   */
   public void restart()
     throws ManifoldCFException
   {
-    // Delete all rows in this table.
     performDelete("",null,null);
   }
-
+  
+  /** Restart cluster.
+  */
+  public void restartCluster()
+    throws ManifoldCFException
+  {
+    // Does nothing
+  }
+  
   /** Atomically create an event; the insert fails if the event already exists */
-  public void createEvent(String eventName)
+  public void createEvent(String eventName, String processID)
     throws ManifoldCFException
   {
     HashMap map = new HashMap();
     map.put(eventNameField,eventName);
+    map.put(processIDField,processID);
     performInsert(map,null);
   }
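
EventManager now reconciles its indexes the same way the other job tables do: build the desired IndexDescription objects, walk getTableIndexes(), treat exact matches as already satisfied, drop any other non-primary-key index, and finally create whatever is still missing. A condensed sketch of that loop, generalized to a list of desired indexes, follows; the reconcileIndexes helper is hypothetical and would live inside a table manager like the one above, but it uses only calls visible in these hunks.

  /** Hypothetical helper sketching the index-reconciliation loop used in these hunks.
  *@param desired is the set of indexes the table should have; entries are nulled out once found.
  */
  protected void reconcileIndexes(IndexDescription[] desired)
    throws ManifoldCFException
  {
    Map indexes = getTableIndexes(null,null);
    Iterator iter = indexes.keySet().iterator();
    while (iter.hasNext())
    {
      String indexName = (String)iter.next();
      IndexDescription id = (IndexDescription)indexes.get(indexName);
      boolean matched = false;
      for (int i = 0; i < desired.length; i++)
      {
        if (desired[i] != null && id.equals(desired[i]))
        {
          // Already present; nothing to create for this one.
          desired[i] = null;
          matched = true;
          break;
        }
      }
      if (!matched && indexName.indexOf("_pkey") == -1)
        // Stray index; drop it.
        performRemoveIndex(indexName);
    }
    // Anything still non-null was missing; create it now.
    for (int i = 0; i < desired.length; i++)
    {
      if (desired[i] != null)
        performAddIndex(null,desired[i]);
    }
  }
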
 
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/jobs/HopCount.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/jobs/HopCount.java
index ce710af..0682fb3 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/jobs/HopCount.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/jobs/HopCount.java
@@ -302,20 +302,37 @@
   }
 
   /** Reset, at startup time.
+  *@param processID is the process ID.
   */
-  public void reset()
+  public void restart(String processID)
     throws ManifoldCFException
   {
-    intrinsicLinkManager.reset();
+    intrinsicLinkManager.restart(processID);
   }
 
+  /** Clean up after all process IDs.
+  */
+  public void restart()
+    throws ManifoldCFException
+  {
+    intrinsicLinkManager.restart();
+  }
+  
+  /** Restart entire cluster.
+  */
+  public void restartCluster()
+    throws ManifoldCFException
+  {
+    intrinsicLinkManager.restartCluster();
+  }
+  
   /** Record references from a set of documents to the root.  These will be marked as "new" or "existing", and
   * will have a null linktype.
   */
-  public void recordSeedReferences(Long jobID, String[] legalLinkTypes, String[] targetDocumentIDHashes, int hopcountMethod)
+  public void recordSeedReferences(Long jobID, String[] legalLinkTypes, String[] targetDocumentIDHashes, int hopcountMethod, String processID)
     throws ManifoldCFException
   {
-    doRecord(jobID,legalLinkTypes,"",targetDocumentIDHashes,"",hopcountMethod);
+    doRecord(jobID,legalLinkTypes,"",targetDocumentIDHashes,"",hopcountMethod,processID);
   }
 
   /** Finish seed references.  Seed references are special in that the only source is the root.
@@ -329,19 +346,19 @@
   /** Record a reference from source to target.  This reference will be marked as "new" or "existing".
   */
   public boolean recordReference(Long jobID, String[] legalLinkTypes, String sourceDocumentIDHash, String targetDocumentIDHash, String linkType,
-    int hopcountMethod)
+    int hopcountMethod, String processID)
     throws ManifoldCFException
   {
-    return doRecord(jobID,legalLinkTypes,sourceDocumentIDHash,new String[]{targetDocumentIDHash},linkType,hopcountMethod)[0];
+    return doRecord(jobID,legalLinkTypes,sourceDocumentIDHash,new String[]{targetDocumentIDHash},linkType,hopcountMethod,processID)[0];
   }
 
   /** Record a set of references from source to target.  These references will be marked as "new" or "existing".
   */
   public boolean[] recordReferences(Long jobID, String[] legalLinkTypes, String sourceDocumentIDHash, String[] targetDocumentIDHashes, String linkType,
-    int hopcountMethod)
+    int hopcountMethod, String processID)
     throws ManifoldCFException
   {
-    return doRecord(jobID,legalLinkTypes,sourceDocumentIDHash,targetDocumentIDHashes,linkType,hopcountMethod);
+    return doRecord(jobID,legalLinkTypes,sourceDocumentIDHash,targetDocumentIDHashes,linkType,hopcountMethod,processID);
   }
 
   /** Complete a recalculation pass for a set of source documents.  All child links that are not marked as "new"
@@ -355,7 +372,7 @@
 
   /** Do the work of recording source-target references. */
   protected boolean[] doRecord(Long jobID, String[] legalLinkTypes, String sourceDocumentIDHash, String[] targetDocumentIDHashes, String linkType,
-    int hopcountMethod)
+    int hopcountMethod, String processID)
     throws ManifoldCFException
   {
 
@@ -367,7 +384,7 @@
       rval[i] = false;
     }
     
-    String[] newReferences = intrinsicLinkManager.recordReferences(jobID,sourceDocumentIDHash,targetDocumentIDHashes,linkType);
+    String[] newReferences = intrinsicLinkManager.recordReferences(jobID,sourceDocumentIDHash,targetDocumentIDHashes,linkType,processID);
     if (newReferences.length > 0)
     {
       // There are added links.
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/jobs/IntrinsicLink.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/jobs/IntrinsicLink.java
index 5ffdd5d..331db8b 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/jobs/IntrinsicLink.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/jobs/IntrinsicLink.java
@@ -37,6 +37,7 @@
  * <tr><td>parentidhash</td><td>VARCHAR(40)</td><td></td></tr>
  * <tr><td>childidhash</td><td>VARCHAR(40)</td><td></td></tr>
  * <tr><td>isnew</td><td>CHAR(1)</td><td></td></tr>
+ * <tr><td>processid</td><td>VARCHAR(16)</td><td></td></tr>
  * </table>
  * <br><br>
  * 
@@ -61,6 +62,7 @@
   public static final String parentIDHashField = "parentidhash";
   public static final String childIDHashField = "childidhash";
   public static final String newField = "isnew";
+  public static final String processIDField = "processid";
 
   // Map from string character to link status
   protected static Map linkstatusMap;
@@ -99,17 +101,24 @@
         map.put(parentIDHashField,new ColumnDescription("VARCHAR(40)",false,false,null,null,false));
         map.put(childIDHashField,new ColumnDescription("VARCHAR(40)",false,true,null,null,false));
         map.put(newField,new ColumnDescription("CHAR(1)",false,true,null,null,false));
+        map.put(processIDField,new ColumnDescription("VARCHAR(16)",false,true,null,null,false));
         performCreate(map,null);
       }
       else
       {
         // Perform upgrade, if needed.
+        if (existing.get(processIDField) == null)
+        {
+          Map insertMap = new HashMap();
+          insertMap.put(processIDField,new ColumnDescription("VARCHAR(16)",false,true,null,null,false));
+          performAlter(insertMap,null,null,null);
+        }
       }
 
       // Indexes
       IndexDescription uniqueIndex = new IndexDescription(true,new String[]{jobIDField,parentIDHashField,linkTypeField,childIDHashField});
       IndexDescription jobChildNewIndex = new IndexDescription(false,new String[]{jobIDField,childIDHashField,newField});
-      IndexDescription newIndex = new IndexDescription(false,new String[]{newField});
+      IndexDescription newIndex = new IndexDescription(false,new String[]{newField,processIDField});
 
       Map indexes = getTableIndexes(null,null);
       Iterator iter = indexes.keySet().iterator();
@@ -179,8 +188,25 @@
   * of documents, and cached records of hopcount are updated only when requested, it is safest to simply
   * move any "new" or "new existing" links back to base state on startup.  Then, the next time that page
   * is processed, the links will be updated properly.
+  *@param processID is the process to restart.
   */
-  public void reset()
+  public void restart(String processID)
+    throws ManifoldCFException
+  {
+    HashMap map = new HashMap();
+    map.put(newField,statusToString(LINKSTATUS_BASE));
+    ArrayList list = new ArrayList();
+    String query = buildConjunctionClause(list,new ClauseDescription[]{
+      new MultiClause(newField,new Object[]{
+        statusToString(LINKSTATUS_NEW),
+        statusToString(LINKSTATUS_EXISTING)}),
+      new UnitaryClause(processIDField,processID)});
+    performUpdate(map,"WHERE "+query,list,null);
+  }
+
+  /** Clean up after all process IDs.
+  */
+  public void restart()
     throws ManifoldCFException
   {
     HashMap map = new HashMap();
@@ -192,11 +218,18 @@
         statusToString(LINKSTATUS_EXISTING)})});
     performUpdate(map,"WHERE "+query,list,null);
   }
-
+  
+  public void restartCluster()
+    throws ManifoldCFException
+  {
+    // Does nothing
+  }
+  
   /** Record references from source to targets.  These references will be marked as either "new" or "existing".
   *@return the target document ID's that are considered "new".
   */
-  public String[] recordReferences(Long jobID, String sourceDocumentIDHash, String[] targetDocumentIDHashes, String linkType)
+  public String[] recordReferences(Long jobID, String sourceDocumentIDHash,
+    String[] targetDocumentIDHashes, String linkType, String processID)
     throws ManifoldCFException
   {
     HashMap duplicateRemoval = new HashMap();
@@ -253,6 +286,7 @@
         map.put(childIDHashField,sourceDocumentIDHash);
         map.put(linkTypeField,linkType);
         map.put(newField,statusToString(LINKSTATUS_NEW));
+        map.put(processIDField,processID);
         performInsert(map,null);
         noteModifications(1,0,0);
       }
@@ -260,6 +294,7 @@
       {
         HashMap map = new HashMap();
         map.put(newField,statusToString(LINKSTATUS_EXISTING));
+        map.put(processIDField,processID);
         ArrayList updateList = new ArrayList();
         String query = buildConjunctionClause(updateList,new ClauseDescription[]{
           new UnitaryClause(jobIDField,jobID),
@@ -470,6 +505,7 @@
   {
     HashMap map = new HashMap();
     map.put(newField,statusToString(LINKSTATUS_BASE));
+    map.put(processIDField,null);
     
     StringBuilder sb = new StringBuilder("WHERE ");
     ArrayList newList = new ArrayList();
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/jobs/JobManager.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/jobs/JobManager.java
index a863f8c..9b6150b 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/jobs/JobManager.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/jobs/JobManager.java
@@ -32,19 +32,24 @@
 {
   public static final String _rcsid = "@(#)$Id: JobManager.java 998576 2010-09-19 01:11:02Z kwright $";
 
+  protected static final String stufferLock = "_STUFFER_";
+  protected static final String deleteStufferLock = "_DELETESTUFFER_";
+  protected static final String expireStufferLock = "_EXPIRESTUFFER_";
+  protected static final String cleanStufferLock = "_CLEANSTUFFER_";
   protected static final String hopLock = "_HOPLOCK_";
 
   // Member variables
-  protected IDBInterface database;
-  protected IOutputConnectionManager outputMgr;
-  protected IRepositoryConnectionManager connectionMgr;
-  protected ILockManager lockManager;
-  protected IThreadContext threadContext;
-  protected JobQueue jobQueue;
-  protected Jobs jobs;
-  protected HopCount hopCount;
-  protected Carrydown carryDown;
-  protected EventManager eventManager;
+  protected final IDBInterface database;
+  protected final IOutputConnectionManager outputMgr;
+  protected final IRepositoryConnectionManager connectionMgr;
+  protected final IRepositoryConnectorPool repositoryConnectorPool;
+  protected final ILockManager lockManager;
+  protected final IThreadContext threadContext;
+  protected final JobQueue jobQueue;
+  protected final Jobs jobs;
+  protected final HopCount hopCount;
+  protected final Carrydown carryDown;
+  protected final EventManager eventManager;
 
 
   protected static Random random = new Random();
@@ -65,8 +70,8 @@
     eventManager = new EventManager(database);
     outputMgr = OutputConnectionManagerFactory.make(threadContext);
     connectionMgr = RepositoryConnectionManagerFactory.make(threadContext);
+    repositoryConnectorPool = RepositoryConnectorPoolFactory.make(threadContext);
     lockManager = LockManagerFactory.make(threadContext);
-
   }
 
   /** Install.
@@ -530,7 +535,7 @@
       throw new ManifoldCFException("Job "+id+" is active; you must shut it down before deleting it");
       if (status != jobs.STATUS_INACTIVE)
         throw new ManifoldCFException("Job "+id+" is busy; you must wait and/or shut it down before deleting it");
-      jobs.writeStatus(id,jobs.STATUS_READYFORDELETE);
+      jobs.writePermanentStatus(id,jobs.STATUS_READYFORDELETE);
       if (Logging.jobs.isDebugEnabled())
         Logging.jobs.debug("Job "+id+" marked for deletion");
     }
@@ -616,18 +621,15 @@
   // The job queue is maintained underneath this interface, and all threads that perform
   // job activities need to go through this layer.
 
-  /** Reset the job queue immediately after starting up.
-  * If the system was shut down in the middle of a job, sufficient information should
-  * be around in the database to allow it to restart.  However, BEFORE all the job threads
-  * are spun up, there needs to be a pass over the queue to bring things back to a "normal"
-  * state.
-  * Also, if a job's status is in a state that indicates it was being processed by a thread
-  * (which is now dead), then we have to set that status back to previous value.
+  /** Reset the job queue for an individual process ID.
+  * If a node was shut down in the middle of doing something, sufficient information should
+  * be around in the database to allow the node's activities to be cleaned up.
+  *@param processID is the process ID of the node we want to clean up after.
   */
-  public void prepareForStart()
+  public void cleanupProcessData(String processID)
     throws ManifoldCFException
   {
-    Logging.jobs.debug("Resetting due to restart");
+    Logging.jobs.debug("Cleaning up process data for process '"+processID+"'");
     while (true)
     {
       long sleepAmt = 0L;
@@ -635,19 +637,19 @@
       try
       {
         // Clean up events
-        eventManager.restart();
+        eventManager.restart(processID);
         // Clean up job queue
-        jobQueue.restart();
+        jobQueue.restart(processID);
         // Clean up jobs
-        jobs.restart();
+        jobs.restart(processID);
         // Clean up hopcount stuff
-        hopCount.reset();
+        hopCount.restart(processID);
         // Clean up carrydown stuff
-        carryDown.reset();
+        carryDown.restart(processID);
         TrackerClass.notePrecommit();
         database.performCommit();
         TrackerClass.noteCommit();
-        Logging.jobs.debug("Reset complete");
+        Logging.jobs.debug("Cleanup complete");
         break;
       }
       catch (ManifoldCFException e)
@@ -677,9 +679,128 @@
     }
   }
 
-  /** Reset as part of restoring document worker threads.
+  /** Reset the job queue for all process IDs.
+  * If a node was shut down in the middle of doing something, sufficient information should
+  * be around in the database to allow the node's activities to be cleaned up.
   */
-  public void resetDocumentWorkerStatus()
+  @Override
+  public void cleanupProcessData()
+    throws ManifoldCFException
+  {
+    Logging.jobs.debug("Cleaning up all process data");
+    while (true)
+    {
+      long sleepAmt = 0L;
+      database.beginTransaction();
+      try
+      {
+        // Clean up events
+        eventManager.restart();
+        // Clean up job queue
+        jobQueue.restart();
+        // Clean up jobs
+        jobs.restart();
+        // Clean up hopcount stuff
+        hopCount.restart();
+        // Clean up carrydown stuff
+        carryDown.restart();
+        TrackerClass.notePrecommit();
+        database.performCommit();
+        TrackerClass.noteCommit();
+        Logging.jobs.debug("Cleanup complete");
+        break;
+      }
+      catch (ManifoldCFException e)
+      {
+        database.signalRollback();
+        TrackerClass.noteRollback();
+        if (e.getErrorCode() == e.DATABASE_TRANSACTION_ABORT)
+        {
+          if (Logging.perf.isDebugEnabled())
+            Logging.perf.debug("Aborted transaction resetting for restart: "+e.getMessage());
+          sleepAmt = getRandomAmount();
+          continue;
+        }
+        throw e;
+      }
+      catch (Error e)
+      {
+        database.signalRollback();
+        TrackerClass.noteRollback();
+        throw e;
+      }
+      finally
+      {
+        database.endTransaction();
+        sleepFor(sleepAmt);
+      }
+    }
+  }
+
+  /** Prepare to start the entire cluster.
+  * If there are no other nodes alive, then at the time the first node comes up, we need to
+  * reset the job queue for ALL processes that had been running before.  This method must
+  * be called in addition to cleanupProcessData().
+  */
+  @Override
+  public void prepareForClusterStart()
+    throws ManifoldCFException
+  {
+    Logging.jobs.debug("Starting cluster");
+    while (true)
+    {
+      long sleepAmt = 0L;
+      database.beginTransaction();
+      try
+      {
+        // Clean up events
+        eventManager.restartCluster();
+        // Clean up job queue
+        jobQueue.restartCluster();
+        // Clean up jobs
+        jobs.restartCluster();
+        // Clean up hopcount stuff
+        hopCount.restartCluster();
+        // Clean up carrydown stuff
+        carryDown.restartCluster();
+        TrackerClass.notePrecommit();
+        database.performCommit();
+        TrackerClass.noteCommit();
+        Logging.jobs.debug("Cluster start complete");
+        break;
+      }
+      catch (ManifoldCFException e)
+      {
+        database.signalRollback();
+        TrackerClass.noteRollback();
+        if (e.getErrorCode() == e.DATABASE_TRANSACTION_ABORT)
+        {
+          if (Logging.perf.isDebugEnabled())
+            Logging.perf.debug("Aborted transaction starting cluster: "+e.getMessage());
+          sleepAmt = getRandomAmount();
+          continue;
+        }
+        throw e;
+      }
+      catch (Error e)
+      {
+        database.signalRollback();
+        TrackerClass.noteRollback();
+        throw e;
+      }
+      finally
+      {
+        database.endTransaction();
+        sleepFor(sleepAmt);
+      }
+    }
+  }
+
+  /** Reset as part of restoring document worker threads.
+  *@param processID is the current process ID.
+  */
+  @Override
+  public void resetDocumentWorkerStatus(String processID)
     throws ManifoldCFException
   {
     Logging.jobs.debug("Resetting document active status");
@@ -689,7 +810,7 @@
       database.beginTransaction();
       try
       {
-        jobQueue.resetDocumentWorkerStatus();
+        jobQueue.resetDocumentWorkerStatus(processID);
         TrackerClass.notePrecommit();
         database.performCommit();
         TrackerClass.noteCommit();
@@ -724,66 +845,294 @@
   }
 
   /** Reset as part of restoring seeding threads.
+  *@param processID is the current process ID.
   */
-  public void resetSeedingWorkerStatus()
+  @Override
+  public void resetSeedingWorkerStatus(String processID)
     throws ManifoldCFException
   {
     Logging.jobs.debug("Resetting seeding status");
-    jobs.resetSeedingWorkerStatus();
+    while (true)
+    {
+      long sleepAmt = 0L;
+      database.beginTransaction();
+      try
+      {
+        jobs.resetSeedingWorkerStatus(processID);
+        TrackerClass.notePrecommit();
+        database.performCommit();
+        TrackerClass.noteCommit();
+        break;
+      }
+      catch (ManifoldCFException e)
+      {
+        database.signalRollback();
+        TrackerClass.noteRollback();
+        if (e.getErrorCode() == e.DATABASE_TRANSACTION_ABORT)
+        {
+          if (Logging.perf.isDebugEnabled())
+            Logging.perf.debug("Aborted transaction resetting seeding worker status: "+e.getMessage());
+          sleepAmt = getRandomAmount();
+          continue;
+        }
+        throw e;
+      }
+      catch (Error e)
+      {
+        database.signalRollback();
+        TrackerClass.noteRollback();
+        throw e;
+      }
+      finally
+      {
+        database.endTransaction();
+        sleepFor(sleepAmt);
+      }
+    }
+
     Logging.jobs.debug("Reset complete");
   }
 
   /** Reset as part of restoring doc delete threads.
+  *@param processID is the current process ID.
   */
-  public void resetDocDeleteWorkerStatus()
+  @Override
+  public void resetDocDeleteWorkerStatus(String processID)
     throws ManifoldCFException
   {
     Logging.jobs.debug("Resetting doc deleting status");
-    TrackerClass.notePrecommit();
-    jobQueue.resetDocDeleteWorkerStatus();
-    TrackerClass.noteCommit();
+    while (true)
+    {
+      long sleepAmt = 0L;
+      database.beginTransaction();
+      try
+      {
+        jobQueue.resetDocDeleteWorkerStatus(processID);
+        TrackerClass.notePrecommit();
+        database.performCommit();
+        TrackerClass.noteCommit();
+        break;
+      }
+      catch (ManifoldCFException e)
+      {
+        database.signalRollback();
+        TrackerClass.noteRollback();
+        if (e.getErrorCode() == e.DATABASE_TRANSACTION_ABORT)
+        {
+          if (Logging.perf.isDebugEnabled())
+            Logging.perf.debug("Aborted transaction resetting doc deleting worker status: "+e.getMessage());
+          sleepAmt = getRandomAmount();
+          continue;
+        }
+        throw e;
+      }
+      catch (Error e)
+      {
+        database.signalRollback();
+        TrackerClass.noteRollback();
+        throw e;
+      }
+      finally
+      {
+        database.endTransaction();
+        sleepFor(sleepAmt);
+      }
+    }
+
     Logging.jobs.debug("Reset complete");
   }
 
   /** Reset as part of restoring doc cleanup threads.
+  *@param processID is the current process ID.
   */
-  public void resetDocCleanupWorkerStatus()
+  @Override
+  public void resetDocCleanupWorkerStatus(String processID)
     throws ManifoldCFException
   {
     Logging.jobs.debug("Resetting doc cleaning status");
-    TrackerClass.notePrecommit();
-    jobQueue.resetDocCleanupWorkerStatus();
-    TrackerClass.noteCommit();
+    while (true)
+    {
+      long sleepAmt = 0L;
+      database.beginTransaction();
+      try
+      {
+        jobQueue.resetDocCleanupWorkerStatus(processID);
+        TrackerClass.notePrecommit();
+        database.performCommit();
+        TrackerClass.noteCommit();
+        break;
+      }
+      catch (ManifoldCFException e)
+      {
+        database.signalRollback();
+        TrackerClass.noteRollback();
+        if (e.getErrorCode() == e.DATABASE_TRANSACTION_ABORT)
+        {
+          if (Logging.perf.isDebugEnabled())
+            Logging.perf.debug("Aborted transaction resetting doc cleaning status: "+e.getMessage());
+          sleepAmt = getRandomAmount();
+          continue;
+        }
+        throw e;
+      }
+      catch (Error e)
+      {
+        database.signalRollback();
+        TrackerClass.noteRollback();
+        throw e;
+      }
+      finally
+      {
+        database.endTransaction();
+        sleepFor(sleepAmt);
+      }
+    }
+
     Logging.jobs.debug("Reset complete");
   }
 
   /** Reset as part of restoring delete startup threads.
+  *@param processID is the current process ID.
   */
-  public void resetDeleteStartupWorkerStatus()
+  @Override
+  public void resetDeleteStartupWorkerStatus(String processID)
     throws ManifoldCFException
   {
     Logging.jobs.debug("Resetting job delete starting up status");
-    jobs.resetDeleteStartupWorkerStatus();
+    while (true)
+    {
+      long sleepAmt = 0L;
+      database.beginTransaction();
+      try
+      {
+        jobs.resetDeleteStartupWorkerStatus(processID);
+        TrackerClass.notePrecommit();
+        database.performCommit();
+        TrackerClass.noteCommit();
+        break;
+      }
+      catch (ManifoldCFException e)
+      {
+        database.signalRollback();
+        TrackerClass.noteRollback();
+        if (e.getErrorCode() == e.DATABASE_TRANSACTION_ABORT)
+        {
+          if (Logging.perf.isDebugEnabled())
+            Logging.perf.debug("Aborted transaction resetting job delete starting up status: "+e.getMessage());
+          sleepAmt = getRandomAmount();
+          continue;
+        }
+        throw e;
+      }
+      catch (Error e)
+      {
+        database.signalRollback();
+        TrackerClass.noteRollback();
+        throw e;
+      }
+      finally
+      {
+        database.endTransaction();
+        sleepFor(sleepAmt);
+      }
+    }
+
     Logging.jobs.debug("Reset complete");
   }
 
   /** Reset as part of restoring notification threads.
   */
-  public void resetNotificationWorkerStatus()
+  @Override
+  public void resetNotificationWorkerStatus(String processID)
     throws ManifoldCFException
   {
-    Logging.jobs.debug("Resetting notification up status");
-    jobs.resetNotificationWorkerStatus();
+    Logging.jobs.debug("Resetting notification worker status");
+    while (true)
+    {
+      long sleepAmt = 0L;
+      database.beginTransaction();
+      try
+      {
+        jobs.resetNotificationWorkerStatus(processID);
+        TrackerClass.notePrecommit();
+        database.performCommit();
+        TrackerClass.noteCommit();
+        break;
+      }
+      catch (ManifoldCFException e)
+      {
+        database.signalRollback();
+        TrackerClass.noteRollback();
+        if (e.getErrorCode() == e.DATABASE_TRANSACTION_ABORT)
+        {
+          if (Logging.perf.isDebugEnabled())
+            Logging.perf.debug("Aborted transaction resetting notification worker status: "+e.getMessage());
+          sleepAmt = getRandomAmount();
+          continue;
+        }
+        throw e;
+      }
+      catch (Error e)
+      {
+        database.signalRollback();
+        TrackerClass.noteRollback();
+        throw e;
+      }
+      finally
+      {
+        database.endTransaction();
+        sleepFor(sleepAmt);
+      }
+    }
+
     Logging.jobs.debug("Reset complete");
   }
 
   /** Reset as part of restoring startup threads.
   */
-  public void resetStartupWorkerStatus()
+  @Override
+  public void resetStartupWorkerStatus(String processID)
     throws ManifoldCFException
   {
     Logging.jobs.debug("Resetting job starting up status");
-    jobs.resetStartupWorkerStatus();
+    while (true)
+    {
+      long sleepAmt = 0L;
+      database.beginTransaction();
+      try
+      {
+        jobs.resetStartupWorkerStatus(processID);
+        TrackerClass.notePrecommit();
+        database.performCommit();
+        TrackerClass.noteCommit();
+        break;
+      }
+      catch (ManifoldCFException e)
+      {
+        database.signalRollback();
+        TrackerClass.noteRollback();
+        if (e.getErrorCode() == e.DATABASE_TRANSACTION_ABORT)
+        {
+          if (Logging.perf.isDebugEnabled())
+            Logging.perf.debug("Aborted transaction resetting job starting up status: "+e.getMessage());
+          sleepAmt = getRandomAmount();
+          continue;
+        }
+        throw e;
+      }
+      catch (Error e)
+      {
+        database.signalRollback();
+        TrackerClass.noteRollback();
+        throw e;
+      }
+      finally
+      {
+        database.endTransaction();
+        sleepFor(sleepAmt);
+      }
+    }
+
     Logging.jobs.debug("Reset complete");
   }
 
@@ -807,11 +1156,13 @@
   * is returned will be transitioned to the "beingcleaned" state.  Documents which are
   * not in transition and are eligible, but are owned by other jobs, will have their
   * jobqueue entries deleted by this method.
+  *@param processID is the current process ID.
   *@param maxCount is the maximum number of documents to return.
   *@param currentTime is the current time; some fetches do not occur until a specific time.
   *@return the document descriptions for these documents.
   */
-  public DocumentSetAndFlags getNextCleanableDocuments(int maxCount, long currentTime)
+  @Override
+  public DocumentSetAndFlags getNextCleanableDocuments(String processID, int maxCount, long currentTime)
     throws ManifoldCFException
   {
     // The query will be built here, because it joins the jobs table against the jobqueue
@@ -848,198 +1199,208 @@
     while (true)
     {
       long sleepAmt = 0L;
-      database.beginTransaction();
+      
+      // Enter a write lock.  This means we don't need a FOR UPDATE on the query.
+      lockManager.enterWriteLock(cleanStufferLock);
       try
       {
-        if (Logging.perf.isDebugEnabled())
-          Logging.perf.debug("After "+new Long(System.currentTimeMillis()-startTime).toString()+" ms, beginning query to look for documents to put on cleaning queue");
-
-        // Note: This query does not do "FOR UPDATE", because it is running under the only thread that can possibly change the document's state to "being cleaned".
-        ArrayList list = new ArrayList();
-        
-        StringBuilder sb = new StringBuilder("SELECT ");
-        sb.append(jobQueue.idField).append(",")
-          .append(jobQueue.jobIDField).append(",")
-          .append(jobQueue.docHashField).append(",")
-          .append(jobQueue.docIDField).append(",")
-          .append(jobQueue.failTimeField).append(",")
-          .append(jobQueue.failCountField)
-          .append(" FROM ").append(jobQueue.getTableName()).append(" t0 WHERE ")
-          .append(database.buildConjunctionClause(list,new ClauseDescription[]{
-            new UnitaryClause("t0."+jobQueue.statusField,jobQueue.statusToString(jobQueue.STATUS_PURGATORY))})).append(" AND ")
-          .append("(t0.").append(jobQueue.checkTimeField).append(" IS NULL OR t0.").append(jobQueue.checkTimeField).append("<=?) AND ");
-          
-        list.add(new Long(currentTime));
-
-        sb.append("EXISTS(SELECT 'x' FROM ").append(jobs.getTableName()).append(" t1 WHERE ")
-          .append(database.buildConjunctionClause(list,new ClauseDescription[]{
-            new UnitaryClause("t1."+jobs.statusField,jobs.statusToString(jobs.STATUS_SHUTTINGDOWN)),
-            new JoinClause("t1."+jobs.idField,"t0."+jobQueue.jobIDField)}))
-          .append(") AND ");
-        
-        sb.append("NOT EXISTS(SELECT 'x' FROM ").append(jobQueue.getTableName()).append(" t2 WHERE ")
-          .append(database.buildConjunctionClause(list,new ClauseDescription[]{
-            new JoinClause("t2."+jobQueue.docHashField,"t0."+jobQueue.docHashField)})).append(" AND ")
-          .append("t2.").append(jobQueue.statusField).append(" IN (?,?,?,?,?,?) AND ")
-          .append("t2.").append(jobQueue.jobIDField).append("!=t0.").append(jobQueue.jobIDField)
-          .append(") ");
-          
-        list.add(jobQueue.statusToString(jobQueue.STATUS_ACTIVE));
-        list.add(jobQueue.statusToString(jobQueue.STATUS_ACTIVEPURGATORY));
-        list.add(jobQueue.statusToString(jobQueue.STATUS_ACTIVENEEDRESCAN));
-        list.add(jobQueue.statusToString(jobQueue.STATUS_ACTIVENEEDRESCANPURGATORY));
-        list.add(jobQueue.statusToString(jobQueue.STATUS_BEINGDELETED));
-        list.add(jobQueue.statusToString(jobQueue.STATUS_BEINGCLEANED));
-
-        sb.append(database.constructOffsetLimitClause(0,maxCount));
-        
-        // The checktime is null field check is for backwards compatibility
-        IResultSet set = database.performQuery(sb.toString(),list,null,null,maxCount,null);
-
-        if (Logging.perf.isDebugEnabled())
-          Logging.perf.debug("Done getting docs to cleaning queue after "+new Long(System.currentTimeMillis()-startTime).toString()+" ms.");
-
-        // We need to organize the returned set by connection name and output connection name, so that we can efficiently
-        // use  getUnindexableDocumentIdentifiers.
-        // This is a table keyed by connection name and containing an ArrayList, which in turn contains DocumentDescription
-        // objects.
-        HashMap connectionNameMap = new HashMap();
-        HashMap documentIDMap = new HashMap();
-        int i = 0;
-        while (i < set.getRowCount())
-        {
-          IResultRow row = set.getRow(i);
-          Long jobID = (Long)row.getValue(jobQueue.jobIDField);
-          String documentIDHash = (String)row.getValue(jobQueue.docHashField);
-          String documentID = (String)row.getValue(jobQueue.docIDField);
-          Long failTimeValue = (Long)row.getValue(jobQueue.failTimeField);
-          Long failCountValue = (Long)row.getValue(jobQueue.failCountField);
-          // Failtime is probably not useful in this context, but we'll bring it along for completeness
-          long failTime;
-          if (failTimeValue == null)
-            failTime = -1L;
-          else
-            failTime = failTimeValue.longValue();
-          int failCount;
-          if (failCountValue == null)
-            failCount = 0;
-          else
-            failCount = (int)failCountValue.longValue();
-          IJobDescription jobDesc = load(jobID);
-          String connectionName = jobDesc.getConnectionName();
-          String outputConnectionName = jobDesc.getOutputConnectionName();
-          DocumentDescription dd = new DocumentDescription((Long)row.getValue(jobQueue.idField),
-            jobID,documentIDHash,documentID,failTime,failCount);
-          String compositeDocumentID = makeCompositeID(documentIDHash,connectionName);
-          documentIDMap.put(compositeDocumentID,dd);
-          Map y = (Map)connectionNameMap.get(connectionName);
-          if (y == null)
-          {
-            y = new HashMap();
-            connectionNameMap.put(connectionName,y);
-          }
-          ArrayList x = (ArrayList)y.get(outputConnectionName);
-          if (x == null)
-          {
-            // New entry needed
-            x = new ArrayList();
-            y.put(outputConnectionName,x);
-          }
-          x.add(dd);
-          i++;
-        }
-
-        // For each bin, obtain a filtered answer, and enter all answers into a hash table.
-        // We'll then scan the result again to look up the right descriptions for return,
-        // and delete the ones that are owned multiply.
-        HashMap allowedDocIds = new HashMap();
-        Iterator iter = connectionNameMap.keySet().iterator();
-        while (iter.hasNext())
-        {
-          String connectionName = (String)iter.next();
-          Map y = (Map)connectionNameMap.get(connectionName);
-          Iterator outputIter = y.keySet().iterator();
-          while (outputIter.hasNext())
-          {
-            String outputConnectionName = (String)outputIter.next();
-            ArrayList x = (ArrayList)y.get(outputConnectionName);
-            // Do the filter query
-            DocumentDescription[] descriptions = new DocumentDescription[x.size()];
-            int j = 0;
-            while (j < descriptions.length)
-            {
-              descriptions[j] = (DocumentDescription)x.get(j);
-              j++;
-            }
-            String[] docIDHashes = getUnindexableDocumentIdentifiers(descriptions,connectionName,outputConnectionName);
-            j = 0;
-            while (j < docIDHashes.length)
-            {
-              String docIDHash = docIDHashes[j++];
-              String key = makeCompositeID(docIDHash,connectionName);
-              allowedDocIds.put(key,docIDHash);
-            }
-          }
-        }
-
-        // Now, assemble a result, and change the state of the records accordingly
-        // First thing to do is order by document hash, so we reduce the risk of deadlock.
-        String[] compositeIDArray = new String[documentIDMap.size()];
-        i = 0;
-        iter = documentIDMap.keySet().iterator();
-        while (iter.hasNext())
-        {
-          compositeIDArray[i++] = (String)iter.next();
-        }
-        
-        java.util.Arrays.sort(compositeIDArray);
-        
-        DocumentDescription[] rval = new DocumentDescription[documentIDMap.size()];
-        boolean[] rvalBoolean = new boolean[documentIDMap.size()];
-        i = 0;
-        while (i < compositeIDArray.length)
-        {
-          String compositeDocID = compositeIDArray[i];
-          DocumentDescription dd = (DocumentDescription)documentIDMap.get(compositeDocID);
-          // Determine whether we can delete it from the index or not
-          rvalBoolean[i] = (allowedDocIds.get(compositeDocID) != null);
-          // Set the record status to "being cleaned" and return it
-          rval[i++] = dd;
-          jobQueue.setCleaningStatus(dd.getID());
-        }
-
-        TrackerClass.notePrecommit();
-        database.performCommit();
-        TrackerClass.noteCommit();
-        
-        if (Logging.perf.isDebugEnabled())
-          Logging.perf.debug("Done pruning unindexable docs after "+new Long(System.currentTimeMillis()-startTime).toString()+" ms.");
-
-        return new DocumentSetAndFlags(rval,rvalBoolean);
-
-      }
-      catch (Error e)
-      {
-        database.signalRollback();
-        TrackerClass.noteRollback();
-        throw e;
-      }
-      catch (ManifoldCFException e)
-      {
-        database.signalRollback();
-        TrackerClass.noteRollback();
-        if (e.getErrorCode() == e.DATABASE_TRANSACTION_ABORT)
+        database.beginTransaction();
+        try
         {
           if (Logging.perf.isDebugEnabled())
-            Logging.perf.debug("Aborted transaction finding deleteable docs: "+e.getMessage());
-          sleepAmt = getRandomAmount();
-          continue;
+            Logging.perf.debug("After "+new Long(System.currentTimeMillis()-startTime).toString()+" ms, beginning query to look for documents to put on cleaning queue");
+
+          // Note: This query does not do "FOR UPDATE", because it is running under the only thread that can possibly change the document's state to "being cleaned".
+          ArrayList list = new ArrayList();
+          
+          StringBuilder sb = new StringBuilder("SELECT ");
+          sb.append(jobQueue.idField).append(",")
+            .append(jobQueue.jobIDField).append(",")
+            .append(jobQueue.docHashField).append(",")
+            .append(jobQueue.docIDField).append(",")
+            .append(jobQueue.failTimeField).append(",")
+            .append(jobQueue.failCountField)
+            .append(" FROM ").append(jobQueue.getTableName()).append(" t0 WHERE ")
+            .append(database.buildConjunctionClause(list,new ClauseDescription[]{
+              new UnitaryClause("t0."+jobQueue.statusField,jobQueue.statusToString(jobQueue.STATUS_PURGATORY))})).append(" AND ")
+            .append("(t0.").append(jobQueue.checkTimeField).append(" IS NULL OR t0.").append(jobQueue.checkTimeField).append("<=?) AND ");
+            
+          list.add(new Long(currentTime));
+
+          sb.append("EXISTS(SELECT 'x' FROM ").append(jobs.getTableName()).append(" t1 WHERE ")
+            .append(database.buildConjunctionClause(list,new ClauseDescription[]{
+              new UnitaryClause("t1."+jobs.statusField,jobs.statusToString(jobs.STATUS_SHUTTINGDOWN)),
+              new JoinClause("t1."+jobs.idField,"t0."+jobQueue.jobIDField)}))
+            .append(") AND ");
+          
+          sb.append("NOT EXISTS(SELECT 'x' FROM ").append(jobQueue.getTableName()).append(" t2 WHERE ")
+            .append(database.buildConjunctionClause(list,new ClauseDescription[]{
+              new JoinClause("t2."+jobQueue.docHashField,"t0."+jobQueue.docHashField)})).append(" AND ")
+            .append("t2.").append(jobQueue.statusField).append(" IN (?,?,?,?,?,?) AND ")
+            .append("t2.").append(jobQueue.jobIDField).append("!=t0.").append(jobQueue.jobIDField)
+            .append(") ");
+            
+          list.add(jobQueue.statusToString(jobQueue.STATUS_ACTIVE));
+          list.add(jobQueue.statusToString(jobQueue.STATUS_ACTIVEPURGATORY));
+          list.add(jobQueue.statusToString(jobQueue.STATUS_ACTIVENEEDRESCAN));
+          list.add(jobQueue.statusToString(jobQueue.STATUS_ACTIVENEEDRESCANPURGATORY));
+          list.add(jobQueue.statusToString(jobQueue.STATUS_BEINGDELETED));
+          list.add(jobQueue.statusToString(jobQueue.STATUS_BEINGCLEANED));
+
+          sb.append(database.constructOffsetLimitClause(0,maxCount));
+          
+          // The "checktime is null" check is for backwards compatibility
+          IResultSet set = database.performQuery(sb.toString(),list,null,null,maxCount,null);
+
+          if (Logging.perf.isDebugEnabled())
+            Logging.perf.debug("Done getting docs to cleaning queue after "+new Long(System.currentTimeMillis()-startTime).toString()+" ms.");
+
+          // We need to organize the returned set by connection name and output connection name, so that we can efficiently
+          // use  getUnindexableDocumentIdentifiers.
+          // This is a table keyed by connection name and containing an ArrayList, which in turn contains DocumentDescription
+          // objects.
+          HashMap connectionNameMap = new HashMap();
+          HashMap documentIDMap = new HashMap();
+          int i = 0;
+          while (i < set.getRowCount())
+          {
+            IResultRow row = set.getRow(i);
+            Long jobID = (Long)row.getValue(jobQueue.jobIDField);
+            String documentIDHash = (String)row.getValue(jobQueue.docHashField);
+            String documentID = (String)row.getValue(jobQueue.docIDField);
+            Long failTimeValue = (Long)row.getValue(jobQueue.failTimeField);
+            Long failCountValue = (Long)row.getValue(jobQueue.failCountField);
+            // Failtime is probably not useful in this context, but we'll bring it along for completeness
+            long failTime;
+            if (failTimeValue == null)
+              failTime = -1L;
+            else
+              failTime = failTimeValue.longValue();
+            int failCount;
+            if (failCountValue == null)
+              failCount = 0;
+            else
+              failCount = (int)failCountValue.longValue();
+            IJobDescription jobDesc = load(jobID);
+            String connectionName = jobDesc.getConnectionName();
+            String outputConnectionName = jobDesc.getOutputConnectionName();
+            DocumentDescription dd = new DocumentDescription((Long)row.getValue(jobQueue.idField),
+              jobID,documentIDHash,documentID,failTime,failCount);
+            String compositeDocumentID = makeCompositeID(documentIDHash,connectionName);
+            documentIDMap.put(compositeDocumentID,dd);
+            Map y = (Map)connectionNameMap.get(connectionName);
+            if (y == null)
+            {
+              y = new HashMap();
+              connectionNameMap.put(connectionName,y);
+            }
+            ArrayList x = (ArrayList)y.get(outputConnectionName);
+            if (x == null)
+            {
+              // New entry needed
+              x = new ArrayList();
+              y.put(outputConnectionName,x);
+            }
+            x.add(dd);
+            i++;
+          }
+
+          // For each bin, obtain a filtered answer, and enter all answers into a hash table.
+          // We'll then scan the result again to look up the right descriptions for return,
+          // and delete the ones that are owned multiply.
+          HashMap allowedDocIds = new HashMap();
+          Iterator iter = connectionNameMap.keySet().iterator();
+          while (iter.hasNext())
+          {
+            String connectionName = (String)iter.next();
+            Map y = (Map)connectionNameMap.get(connectionName);
+            Iterator outputIter = y.keySet().iterator();
+            while (outputIter.hasNext())
+            {
+              String outputConnectionName = (String)outputIter.next();
+              ArrayList x = (ArrayList)y.get(outputConnectionName);
+              // Do the filter query
+              DocumentDescription[] descriptions = new DocumentDescription[x.size()];
+              int j = 0;
+              while (j < descriptions.length)
+              {
+                descriptions[j] = (DocumentDescription)x.get(j);
+                j++;
+              }
+              String[] docIDHashes = getUnindexableDocumentIdentifiers(descriptions,connectionName,outputConnectionName);
+              j = 0;
+              while (j < docIDHashes.length)
+              {
+                String docIDHash = docIDHashes[j++];
+                String key = makeCompositeID(docIDHash,connectionName);
+                allowedDocIds.put(key,docIDHash);
+              }
+            }
+          }
+
+          // Now, assemble a result, and change the state of the records accordingly
+          // First thing to do is order by document hash, so we reduce the risk of deadlock.
+          String[] compositeIDArray = new String[documentIDMap.size()];
+          i = 0;
+          iter = documentIDMap.keySet().iterator();
+          while (iter.hasNext())
+          {
+            compositeIDArray[i++] = (String)iter.next();
+          }
+          
+          java.util.Arrays.sort(compositeIDArray);
+          
+          DocumentDescription[] rval = new DocumentDescription[documentIDMap.size()];
+          boolean[] rvalBoolean = new boolean[documentIDMap.size()];
+          i = 0;
+          while (i < compositeIDArray.length)
+          {
+            String compositeDocID = compositeIDArray[i];
+            DocumentDescription dd = (DocumentDescription)documentIDMap.get(compositeDocID);
+            // Determine whether we can delete it from the index or not
+            rvalBoolean[i] = (allowedDocIds.get(compositeDocID) != null);
+            // Set the record status to "being cleaned" and return it
+            rval[i++] = dd;
+            jobQueue.setCleaningStatus(dd.getID(),processID);
+          }
+
+          TrackerClass.notePrecommit();
+          database.performCommit();
+          TrackerClass.noteCommit();
+          
+          if (Logging.perf.isDebugEnabled())
+            Logging.perf.debug("Done pruning unindexable docs after "+new Long(System.currentTimeMillis()-startTime).toString()+" ms.");
+
+          return new DocumentSetAndFlags(rval,rvalBoolean);
+
         }
-        throw e;
+        catch (Error e)
+        {
+          database.signalRollback();
+          TrackerClass.noteRollback();
+          throw e;
+        }
+        catch (ManifoldCFException e)
+        {
+          database.signalRollback();
+          TrackerClass.noteRollback();
+          if (e.getErrorCode() == e.DATABASE_TRANSACTION_ABORT)
+          {
+            if (Logging.perf.isDebugEnabled())
+              Logging.perf.debug("Aborted transaction finding deleteable docs: "+e.getMessage());
+            sleepAmt = getRandomAmount();
+            continue;
+          }
+          throw e;
+        }
+        finally
+        {
+          database.endTransaction();
+        }
       }
       finally
       {
-        database.endTransaction();
+        lockManager.leaveWriteLock(cleanStufferLock);
         sleepFor(sleepAmt);
       }
     }
@@ -1058,11 +1419,14 @@
   * is returned will be transitioned to the "beingdeleted" state.  Documents which are
   * not in transition and are eligible, but are owned by other jobs, will have their
   * jobqueue entries deleted by this method.
+  *@param processID is the current process ID.
   *@param maxCount is the maximum number of documents to return.
   *@param currentTime is the current time; some fetches do not occur until a specific time.
   *@return the document descriptions for these documents.
   */
-  public DocumentDescription[] getNextDeletableDocuments(int maxCount, long currentTime)
+  @Override
+  public DocumentDescription[] getNextDeletableDocuments(String processID,
+    int maxCount, long currentTime)
     throws ManifoldCFException
   {
     // The query will be built here, because it joins the jobs table against the jobqueue
@@ -1100,211 +1464,221 @@
     while (true)
     {
       long sleepAmt = 0L;
-      database.beginTransaction();
+      
+      // Enter a write lock so that multiple threads can't be in here at the same time
+      lockManager.enterWriteLock(deleteStufferLock);
       try
       {
-        if (Logging.perf.isDebugEnabled())
-          Logging.perf.debug("After "+new Long(System.currentTimeMillis()-startTime).toString()+" ms, beginning query to look for documents to put on delete queue");
-
-        // Note: This query does not do "FOR UPDATE", because it is running under the only thread that can possibly change the document's state to "being deleted".
-        // If FOR UPDATE was included, deadlock happened a lot.
-        ArrayList list = new ArrayList();
-        StringBuilder sb = new StringBuilder("SELECT ");
-        sb.append(jobQueue.idField).append(",")
-          .append(jobQueue.jobIDField).append(",")
-          .append(jobQueue.docHashField).append(",")
-          .append(jobQueue.docIDField).append(",")
-          .append(jobQueue.failTimeField).append(",")
-          .append(jobQueue.failCountField).append(" FROM ").append(jobQueue.getTableName()).append(" t0 WHERE ")
-          .append(database.buildConjunctionClause(list,new ClauseDescription[]{
-            new UnitaryClause("t0."+jobQueue.statusField,jobQueue.statusToString(jobQueue.STATUS_ELIGIBLEFORDELETE))})).append(" AND ")
-          .append("t0.").append(jobQueue.checkTimeField).append("<=? AND ");
-        
-        list.add(new Long(currentTime));
-        
-        sb.append("EXISTS(SELECT 'x' FROM ").append(jobs.getTableName()).append(" t1 WHERE ")
-          .append(database.buildConjunctionClause(list,new ClauseDescription[]{
-            new UnitaryClause("t1."+jobs.statusField,jobs.statusToString(jobs.STATUS_DELETING)),
-            new JoinClause("t1."+jobs.idField,"t0."+jobQueue.jobIDField)})).append(") AND ");
-          
-        sb.append("NOT EXISTS(SELECT 'x' FROM ").append(jobQueue.getTableName()).append(" t2 WHERE ")
-          .append(database.buildConjunctionClause(list,new ClauseDescription[]{
-            new JoinClause("t2."+jobQueue.docHashField,"t0."+jobQueue.docHashField)})).append(" AND ")
-          .append("t2.").append(jobQueue.statusField).append(" IN (?,?,?,?,?,?) AND ")
-          .append("t2.").append(jobQueue.jobIDField).append("!=t0.").append(jobQueue.jobIDField)
-          .append(") ");
-          
-        list.add(jobQueue.statusToString(jobQueue.STATUS_ACTIVE));
-        list.add(jobQueue.statusToString(jobQueue.STATUS_ACTIVEPURGATORY));
-        list.add(jobQueue.statusToString(jobQueue.STATUS_ACTIVENEEDRESCAN));
-        list.add(jobQueue.statusToString(jobQueue.STATUS_ACTIVENEEDRESCANPURGATORY));
-        list.add(jobQueue.statusToString(jobQueue.STATUS_BEINGDELETED));
-        list.add(jobQueue.statusToString(jobQueue.STATUS_BEINGCLEANED));
-        
-        sb.append(database.constructOffsetLimitClause(0,maxCount));
-        
-        // The checktime is null field check is for backwards compatibility
-        IResultSet set = database.performQuery(sb.toString(),list,null,null,maxCount,null);
-
-        if (Logging.perf.isDebugEnabled())
-          Logging.perf.debug("Done getting docs to delete queue after "+new Long(System.currentTimeMillis()-startTime).toString()+" ms.");
-
-        // We need to organize the returned set by connection name, so that we can efficiently
-        // use  getUnindexableDocumentIdentifiers.
-        // This is a table keyed by connection name and containing an ArrayList, which in turn contains DocumentDescription
-        // objects.
-        HashMap connectionNameMap = new HashMap();
-        HashMap documentIDMap = new HashMap();
-        int i = 0;
-        while (i < set.getRowCount())
-        {
-          IResultRow row = set.getRow(i);
-          Long jobID = (Long)row.getValue(jobQueue.jobIDField);
-          String documentIDHash = (String)row.getValue(jobQueue.docHashField);
-          String documentID = (String)row.getValue(jobQueue.docIDField);
-          Long failTimeValue = (Long)row.getValue(jobQueue.failTimeField);
-          Long failCountValue = (Long)row.getValue(jobQueue.failCountField);
-          // Failtime is probably not useful in this context, but we'll bring it along for completeness
-          long failTime;
-          if (failTimeValue == null)
-            failTime = -1L;
-          else
-            failTime = failTimeValue.longValue();
-          int failCount;
-          if (failCountValue == null)
-            failCount = 0;
-          else
-            failCount = (int)failCountValue.longValue();
-          IJobDescription jobDesc = load(jobID);
-          String connectionName = jobDesc.getConnectionName();
-          String outputConnectionName = jobDesc.getOutputConnectionName();
-          DocumentDescription dd = new DocumentDescription((Long)row.getValue(jobQueue.idField),
-            jobID,documentIDHash,documentID,failTime,failCount);
-          String compositeDocumentID = makeCompositeID(documentIDHash,connectionName);
-          documentIDMap.put(compositeDocumentID,dd);
-          Map y = (Map)connectionNameMap.get(connectionName);
-          if (y == null)
-          {
-            y = new HashMap();
-            connectionNameMap.put(connectionName,y);
-          }
-          ArrayList x = (ArrayList)y.get(outputConnectionName);
-          if (x == null)
-          {
-            // New entry needed
-            x = new ArrayList();
-            y.put(outputConnectionName,x);
-          }
-          x.add(dd);
-          i++;
-        }
-
-        // For each bin, obtain a filtered answer, and enter all answers into a hash table.
-        // We'll then scan the result again to look up the right descriptions for return,
-        // and delete the ones that are owned multiply.
-        HashMap allowedDocIds = new HashMap();
-        Iterator iter = connectionNameMap.keySet().iterator();
-        while (iter.hasNext())
-        {
-          String connectionName = (String)iter.next();
-          Map y = (Map)connectionNameMap.get(connectionName);
-          Iterator outputIter = y.keySet().iterator();
-          while (outputIter.hasNext())
-          {
-            String outputConnectionName = (String)outputIter.next();
-            ArrayList x = (ArrayList)y.get(outputConnectionName);
-            // Do the filter query
-            DocumentDescription[] descriptions = new DocumentDescription[x.size()];
-            int j = 0;
-            while (j < descriptions.length)
-            {
-              descriptions[j] = (DocumentDescription)x.get(j);
-              j++;
-            }
-            String[] docIDHashes = getUnindexableDocumentIdentifiers(descriptions,connectionName,outputConnectionName);
-            j = 0;
-            while (j < docIDHashes.length)
-            {
-              String docIDHash = docIDHashes[j++];
-              String key = makeCompositeID(docIDHash,connectionName);
-              allowedDocIds.put(key,docIDHash);
-            }
-          }
-        }
-
-        // Now, assemble a result, and change the state of the records accordingly
-        // First thing to do is order by document hash to reduce chances of deadlock.
-        String[] compositeIDArray = new String[documentIDMap.size()];
-        i = 0;
-        iter = documentIDMap.keySet().iterator();
-        while (iter.hasNext())
-        {
-          compositeIDArray[i++] = (String)iter.next();
-        }
-        
-        java.util.Arrays.sort(compositeIDArray);
-        
-        DocumentDescription[] rval = new DocumentDescription[allowedDocIds.size()];
-        int j = 0;
-        i = 0;
-        while (i < compositeIDArray.length)
-        {
-          String compositeDocumentID = compositeIDArray[i];
-          DocumentDescription dd = (DocumentDescription)documentIDMap.get(compositeDocumentID);
-          if (allowedDocIds.get(compositeDocumentID) == null)
-          {
-            // Delete this record and do NOT return it.
-            jobQueue.deleteRecord(dd.getID());
-            // What should we do about hopcount here?
-            // We are deleting a record which belongs to a job that is being
-            // cleaned up.  The job itself will go away when this is done,
-            // and so will all the hopcount stuff pertaining to it.  So, the
-            // treatment I've chosen here is to leave the hopcount alone and
-            // let the job cleanup get rid of it at the right time.
-            // Note: carrydown records handled in the same manner...
-            //carryDown.deleteRecords(dd.getJobID(),new String[]{dd.getDocumentIdentifier()});
-          }
-          else
-          {
-            // Set the record status to "being deleted" and return it
-            rval[j++] = dd;
-            jobQueue.setDeletingStatus(dd.getID());
-          }
-          i++;
-        }
-
-        TrackerClass.notePrecommit();
-        database.performCommit();
-        TrackerClass.noteCommit();
-        
-        if (Logging.perf.isDebugEnabled())
-          Logging.perf.debug("Done pruning unindexable docs after "+new Long(System.currentTimeMillis()-startTime).toString()+" ms.");
-
-        return rval;
-
-      }
-      catch (Error e)
-      {
-        database.signalRollback();
-        TrackerClass.noteRollback();
-        throw e;
-      }
-      catch (ManifoldCFException e)
-      {
-        database.signalRollback();
-        TrackerClass.noteRollback();
-        if (e.getErrorCode() == e.DATABASE_TRANSACTION_ABORT)
+        database.beginTransaction();
+        try
         {
           if (Logging.perf.isDebugEnabled())
-            Logging.perf.debug("Aborted transaction finding deleteable docs: "+e.getMessage());
-          sleepAmt = getRandomAmount();
-          continue;
+            Logging.perf.debug("After "+new Long(System.currentTimeMillis()-startTime).toString()+" ms, beginning query to look for documents to put on delete queue");
+
+          // Note: This query does not do "FOR UPDATE", because it is running under the only thread that can possibly change the document's state to "being deleted".
+          // If FOR UPDATE was included, deadlock happened a lot.
+          ArrayList list = new ArrayList();
+          StringBuilder sb = new StringBuilder("SELECT ");
+          sb.append(jobQueue.idField).append(",")
+            .append(jobQueue.jobIDField).append(",")
+            .append(jobQueue.docHashField).append(",")
+            .append(jobQueue.docIDField).append(",")
+            .append(jobQueue.failTimeField).append(",")
+            .append(jobQueue.failCountField).append(" FROM ").append(jobQueue.getTableName()).append(" t0 WHERE ")
+            .append(database.buildConjunctionClause(list,new ClauseDescription[]{
+              new UnitaryClause("t0."+jobQueue.statusField,jobQueue.statusToString(jobQueue.STATUS_ELIGIBLEFORDELETE))})).append(" AND ")
+            .append("t0.").append(jobQueue.checkTimeField).append("<=? AND ");
+          
+          list.add(new Long(currentTime));
+          
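+          // The EXISTS subquery limits the result to documents whose owning job is in the DELETING state;
+          // the NOT EXISTS subquery excludes documents that some other job still holds in an active,
+          // being-deleted, or being-cleaned state.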
+          sb.append("EXISTS(SELECT 'x' FROM ").append(jobs.getTableName()).append(" t1 WHERE ")
+            .append(database.buildConjunctionClause(list,new ClauseDescription[]{
+              new UnitaryClause("t1."+jobs.statusField,jobs.statusToString(jobs.STATUS_DELETING)),
+              new JoinClause("t1."+jobs.idField,"t0."+jobQueue.jobIDField)})).append(") AND ");
+            
+          sb.append("NOT EXISTS(SELECT 'x' FROM ").append(jobQueue.getTableName()).append(" t2 WHERE ")
+            .append(database.buildConjunctionClause(list,new ClauseDescription[]{
+              new JoinClause("t2."+jobQueue.docHashField,"t0."+jobQueue.docHashField)})).append(" AND ")
+            .append("t2.").append(jobQueue.statusField).append(" IN (?,?,?,?,?,?) AND ")
+            .append("t2.").append(jobQueue.jobIDField).append("!=t0.").append(jobQueue.jobIDField)
+            .append(") ");
+            
+          list.add(jobQueue.statusToString(jobQueue.STATUS_ACTIVE));
+          list.add(jobQueue.statusToString(jobQueue.STATUS_ACTIVEPURGATORY));
+          list.add(jobQueue.statusToString(jobQueue.STATUS_ACTIVENEEDRESCAN));
+          list.add(jobQueue.statusToString(jobQueue.STATUS_ACTIVENEEDRESCANPURGATORY));
+          list.add(jobQueue.statusToString(jobQueue.STATUS_BEINGDELETED));
+          list.add(jobQueue.statusToString(jobQueue.STATUS_BEINGCLEANED));
+          
+          sb.append(database.constructOffsetLimitClause(0,maxCount));
+          
+          // The "checktime is null" field check is for backwards compatibility
+          IResultSet set = database.performQuery(sb.toString(),list,null,null,maxCount,null);
+
+          if (Logging.perf.isDebugEnabled())
+            Logging.perf.debug("Done getting docs to delete queue after "+new Long(System.currentTimeMillis()-startTime).toString()+" ms.");
+
+          // We need to organize the returned set by connection name, so that we can efficiently
+          // use getUnindexableDocumentIdentifiers.
+          // connectionNameMap is keyed by connection name; each value is a map keyed by output connection
+          // name, which in turn contains an ArrayList of DocumentDescription objects.
+          HashMap connectionNameMap = new HashMap();
+          HashMap documentIDMap = new HashMap();
+          int i = 0;
+          while (i < set.getRowCount())
+          {
+            IResultRow row = set.getRow(i);
+            Long jobID = (Long)row.getValue(jobQueue.jobIDField);
+            String documentIDHash = (String)row.getValue(jobQueue.docHashField);
+            String documentID = (String)row.getValue(jobQueue.docIDField);
+            Long failTimeValue = (Long)row.getValue(jobQueue.failTimeField);
+            Long failCountValue = (Long)row.getValue(jobQueue.failCountField);
+            // Failtime is probably not useful in this context, but we'll bring it along for completeness
+            long failTime;
+            if (failTimeValue == null)
+              failTime = -1L;
+            else
+              failTime = failTimeValue.longValue();
+            int failCount;
+            if (failCountValue == null)
+              failCount = 0;
+            else
+              failCount = (int)failCountValue.longValue();
+            IJobDescription jobDesc = load(jobID);
+            String connectionName = jobDesc.getConnectionName();
+            String outputConnectionName = jobDesc.getOutputConnectionName();
+            DocumentDescription dd = new DocumentDescription((Long)row.getValue(jobQueue.idField),
+              jobID,documentIDHash,documentID,failTime,failCount);
+            String compositeDocumentID = makeCompositeID(documentIDHash,connectionName);
+            documentIDMap.put(compositeDocumentID,dd);
+            Map y = (Map)connectionNameMap.get(connectionName);
+            if (y == null)
+            {
+              y = new HashMap();
+              connectionNameMap.put(connectionName,y);
+            }
+            ArrayList x = (ArrayList)y.get(outputConnectionName);
+            if (x == null)
+            {
+              // New entry needed
+              x = new ArrayList();
+              y.put(outputConnectionName,x);
+            }
+            x.add(dd);
+            i++;
+          }
+
+          // For each bin, obtain a filtered answer, and enter all answers into a hash table.
+          // We'll then scan the result again to look up the right descriptions for return,
+          // and delete the ones that are owned by more than one job.
+          HashMap allowedDocIds = new HashMap();
+          Iterator iter = connectionNameMap.keySet().iterator();
+          while (iter.hasNext())
+          {
+            String connectionName = (String)iter.next();
+            Map y = (Map)connectionNameMap.get(connectionName);
+            Iterator outputIter = y.keySet().iterator();
+            while (outputIter.hasNext())
+            {
+              String outputConnectionName = (String)outputIter.next();
+              ArrayList x = (ArrayList)y.get(outputConnectionName);
+              // Do the filter query
+              DocumentDescription[] descriptions = new DocumentDescription[x.size()];
+              int j = 0;
+              while (j < descriptions.length)
+              {
+                descriptions[j] = (DocumentDescription)x.get(j);
+                j++;
+              }
+              String[] docIDHashes = getUnindexableDocumentIdentifiers(descriptions,connectionName,outputConnectionName);
+              j = 0;
+              while (j < docIDHashes.length)
+              {
+                String docIDHash = docIDHashes[j++];
+                String key = makeCompositeID(docIDHash,connectionName);
+                allowedDocIds.put(key,docIDHash);
+              }
+            }
+          }
+
+          // Now, assemble a result, and change the state of the records accordingly
+          // First thing to do is order by document hash to reduce chances of deadlock.
+          String[] compositeIDArray = new String[documentIDMap.size()];
+          i = 0;
+          iter = documentIDMap.keySet().iterator();
+          while (iter.hasNext())
+          {
+            compositeIDArray[i++] = (String)iter.next();
+          }
+          
+          java.util.Arrays.sort(compositeIDArray);
+          
+          DocumentDescription[] rval = new DocumentDescription[allowedDocIds.size()];
+          int j = 0;
+          i = 0;
+          while (i < compositeIDArray.length)
+          {
+            String compositeDocumentID = compositeIDArray[i];
+            DocumentDescription dd = (DocumentDescription)documentIDMap.get(compositeDocumentID);
+            if (allowedDocIds.get(compositeDocumentID) == null)
+            {
+              // Delete this record and do NOT return it.
+              jobQueue.deleteRecord(dd.getID());
+              // What should we do about hopcount here?
+              // We are deleting a record which belongs to a job that is being
+              // cleaned up.  The job itself will go away when this is done,
+              // and so will all the hopcount stuff pertaining to it.  So, the
+              // treatment I've chosen here is to leave the hopcount alone and
+              // let the job cleanup get rid of it at the right time.
+              // Note: carrydown records handled in the same manner...
+              //carryDown.deleteRecords(dd.getJobID(),new String[]{dd.getDocumentIdentifier()});
+            }
+            else
+            {
+              // Set the record status to "being deleted" and return it
+              rval[j++] = dd;
+              jobQueue.setDeletingStatus(dd.getID(),processID);
+            }
+            i++;
+          }
+
+          TrackerClass.notePrecommit();
+          database.performCommit();
+          TrackerClass.noteCommit();
+          
+          if (Logging.perf.isDebugEnabled())
+            Logging.perf.debug("Done pruning unindexable docs after "+new Long(System.currentTimeMillis()-startTime).toString()+" ms.");
+
+          return rval;
+
         }
-        throw e;
+        catch (Error e)
+        {
+          database.signalRollback();
+          TrackerClass.noteRollback();
+          throw e;
+        }
+        catch (ManifoldCFException e)
+        {
+          database.signalRollback();
+          TrackerClass.noteRollback();
+          if (e.getErrorCode() == e.DATABASE_TRANSACTION_ABORT)
+          {
+            if (Logging.perf.isDebugEnabled())
+              Logging.perf.debug("Aborted transaction finding deletable docs: "+e.getMessage());
+            sleepAmt = getRandomAmount();
+            continue;
+          }
+          throw e;
+        }
+        finally
+        {
+          database.endTransaction();
+        }
       }
       finally
       {
-        database.endTransaction();
+        lockManager.leaveWriteLock(deleteStufferLock);
         sleepFor(sleepAmt);
       }
     }
@@ -1560,7 +1934,7 @@
   *@param documentDescriptions are the document descriptions.
   *@param priorities are the desired priorities.
   */
-  public void writeDocumentPriorities(long currentTime, DocumentDescription[] documentDescriptions, double[] priorities)
+  public void writeDocumentPriorities(long currentTime, DocumentDescription[] documentDescriptions, IPriorityCalculator[] priorities)
     throws ManifoldCFException
   {
 
@@ -1599,10 +1973,8 @@
             throw new ManifoldCFException("Assertion failure: duplicate document identifier jobid/hash detected!");
           int index = x.intValue();
           DocumentDescription dd = documentDescriptions[index];
-          double priority = priorities[index];
-          jobQueue.writeDocPriority(currentTime,dd.getID(),priorities[index]);
-          if (Logging.perf.isDebugEnabled())
-            Logging.perf.debug("Setting document priority for '"+dd.getDocumentIdentifier()+"' to "+new Double(priority).toString()+", set time "+new Long(currentTime).toString());
+          IPriorityCalculator priority = priorities[index];
+          jobQueue.writeDocPriority(currentTime,dd.getID(),priority);
           i++;
         }
         database.performCommit();
@@ -1638,11 +2010,13 @@
   * The same marking is used as is used for documents that have been queued for worker threads.  The model
   * is thus identical.
   *
+  *@param processID is the current process ID.
   *@param n is the maximum number of records desired.
   *@param currentTime is the current time.
   *@return the array of document descriptions to expire.
   */
-  public DocumentSetAndFlags getExpiredDocuments(int n, long currentTime)
+  @Override
+  public DocumentSetAndFlags getExpiredDocuments(String processID, int n, long currentTime)
     throws ManifoldCFException
   {
     // Screening query
@@ -1714,164 +2088,172 @@
     {
       long sleepAmt = 0L;
 
-      if (Logging.perf.isDebugEnabled())
-      {
-        repeatCount++;
-        Logging.perf.debug(" Attempt "+Integer.toString(repeatCount)+" to expire documents, after "+
-          new Long(System.currentTimeMillis() - startTime)+" ms");
-      }
-
-      database.beginTransaction();
+      // Enter a write lock, so only one thread can be doing this.  That makes FOR UPDATE unnecessary.
+      lockManager.enterWriteLock(expireStufferLock);
       try
       {
-        IResultSet set = database.performQuery(query,list,null,null,n,null);
-
         if (Logging.perf.isDebugEnabled())
-          Logging.perf.debug(" Expiring "+Integer.toString(set.getRowCount())+" documents");
-
-        // To avoid deadlock, we want to update the document id hashes in order.  This means reading into a structure I can sort by docid hash,
-        // before updating any rows in jobqueue.
-        HashMap connectionNameMap = new HashMap();
-        HashMap documentIDMap = new HashMap();
-        Map statusMap = new HashMap();
-
-        int i = 0;
-        while (i < set.getRowCount())
         {
-          IResultRow row = set.getRow(i);
-          Long jobID = (Long)row.getValue(jobQueue.jobIDField);
-          String documentIDHash = (String)row.getValue(jobQueue.docHashField);
-          String documentID = (String)row.getValue(jobQueue.docIDField);
-          int status = jobQueue.stringToStatus(row.getValue(jobQueue.statusField).toString());
-          Long failTimeValue = (Long)row.getValue(jobQueue.failTimeField);
-          Long failCountValue = (Long)row.getValue(jobQueue.failCountField);
-          // Failtime is probably not useful in this context, but we'll bring it along for completeness
-          long failTime;
-          if (failTimeValue == null)
-            failTime = -1L;
-          else
-            failTime = failTimeValue.longValue();
-          int failCount;
-          if (failCountValue == null)
-            failCount = 0;
-          else
-            failCount = (int)failCountValue.longValue();
-          IJobDescription jobDesc = load(jobID);
-          String connectionName = jobDesc.getConnectionName();
-          String outputConnectionName = jobDesc.getOutputConnectionName();
-          DocumentDescription dd = new DocumentDescription((Long)row.getValue(jobQueue.idField),
-            jobID,documentIDHash,documentID,failTime,failCount);
-          String compositeDocumentID = makeCompositeID(documentIDHash,connectionName);
-          documentIDMap.put(compositeDocumentID,dd);
-          statusMap.put(compositeDocumentID,new Integer(status));
-          Map y = (Map)connectionNameMap.get(connectionName);
-          if (y == null)
-          {
-            y = new HashMap();
-            connectionNameMap.put(connectionName,y);
-          }
-          ArrayList x = (ArrayList)y.get(outputConnectionName);
-          if (x == null)
-          {
-            // New entry needed
-            x = new ArrayList();
-            y.put(outputConnectionName,x);
-          }
-          x.add(dd);
-          i++;
+          repeatCount++;
+          Logging.perf.debug(" Attempt "+Integer.toString(repeatCount)+" to expire documents, after "+
+            new Long(System.currentTimeMillis() - startTime)+" ms");
         }
 
-        // For each bin, obtain a filtered answer, and enter all answers into a hash table.
-        // We'll then scan the result again to look up the right descriptions for return,
-        // and delete the ones that are owned multiply.
-        HashMap allowedDocIds = new HashMap();
-        Iterator iter = connectionNameMap.keySet().iterator();
-        while (iter.hasNext())
+        database.beginTransaction();
+        try
         {
-          String connectionName = (String)iter.next();
-          Map y = (Map)connectionNameMap.get(connectionName);
-          Iterator outputIter = y.keySet().iterator();
-          while (outputIter.hasNext())
-          {
-            String outputConnectionName = (String)outputIter.next();
-            ArrayList x = (ArrayList)y.get(outputConnectionName);
-            // Do the filter query
-            DocumentDescription[] descriptions = new DocumentDescription[x.size()];
-            int j = 0;
-            while (j < descriptions.length)
-            {
-              descriptions[j] = (DocumentDescription)x.get(j);
-              j++;
-            }
-            String[] docIDHashes = getUnindexableDocumentIdentifiers(descriptions,connectionName,outputConnectionName);
-            j = 0;
-            while (j < docIDHashes.length)
-            {
-              String docIDHash = docIDHashes[j++];
-              String key = makeCompositeID(docIDHash,connectionName);
-              allowedDocIds.put(key,docIDHash);
-            }
-          }
-        }
+          IResultSet set = database.performQuery(query,list,null,null,n,null);
 
-        // Now, assemble a result, and change the state of the records accordingly
-        // First thing to do is order by document hash, so we reduce the risk of deadlock.
-        String[] compositeIDArray = new String[documentIDMap.size()];
-        i = 0;
-        iter = documentIDMap.keySet().iterator();
-        while (iter.hasNext())
-        {
-          compositeIDArray[i++] = (String)iter.next();
-        }
-        
-        java.util.Arrays.sort(compositeIDArray);
-        
-        DocumentDescription[] rval = new DocumentDescription[documentIDMap.size()];
-        boolean[] rvalBoolean = new boolean[documentIDMap.size()];
-        i = 0;
-        while (i < compositeIDArray.length)
-        {
-          String compositeDocID = compositeIDArray[i];
-          DocumentDescription dd = (DocumentDescription)documentIDMap.get(compositeDocID);
-          // Determine whether we can delete it from the index or not
-          rvalBoolean[i] = (allowedDocIds.get(compositeDocID) != null);
-          // Set the record status to "being cleaned" and return it
-          rval[i++] = dd;
-          jobQueue.updateActiveRecord(dd.getID(),((Integer)statusMap.get(compositeDocID)).intValue());
-        }
-
-        TrackerClass.notePrecommit();
-        database.performCommit();
-        TrackerClass.noteCommit();
-        
-        return new DocumentSetAndFlags(rval, rvalBoolean);
-
-      }
-      catch (ManifoldCFException e)
-      {
-        database.signalRollback();
-        TrackerClass.noteRollback();
-        if (e.getErrorCode() == e.DATABASE_TRANSACTION_ABORT)
-        {
           if (Logging.perf.isDebugEnabled())
-            Logging.perf.debug("Aborted transaction finding docs to expire: "+e.getMessage());
-          sleepAmt = getRandomAmount();
-          continue;
+            Logging.perf.debug(" Expiring "+Integer.toString(set.getRowCount())+" documents");
+
+          // To avoid deadlock, we want to update the document id hashes in order.  This means reading into a structure I can sort by docid hash,
+          // before updating any rows in jobqueue.
+          HashMap connectionNameMap = new HashMap();
+          HashMap documentIDMap = new HashMap();
+          Map statusMap = new HashMap();
+
+          int i = 0;
+          while (i < set.getRowCount())
+          {
+            IResultRow row = set.getRow(i);
+            Long jobID = (Long)row.getValue(jobQueue.jobIDField);
+            String documentIDHash = (String)row.getValue(jobQueue.docHashField);
+            String documentID = (String)row.getValue(jobQueue.docIDField);
+            int status = jobQueue.stringToStatus(row.getValue(jobQueue.statusField).toString());
+            Long failTimeValue = (Long)row.getValue(jobQueue.failTimeField);
+            Long failCountValue = (Long)row.getValue(jobQueue.failCountField);
+            // Failtime is probably not useful in this context, but we'll bring it along for completeness
+            long failTime;
+            if (failTimeValue == null)
+              failTime = -1L;
+            else
+              failTime = failTimeValue.longValue();
+            int failCount;
+            if (failCountValue == null)
+              failCount = 0;
+            else
+              failCount = (int)failCountValue.longValue();
+            IJobDescription jobDesc = load(jobID);
+            String connectionName = jobDesc.getConnectionName();
+            String outputConnectionName = jobDesc.getOutputConnectionName();
+            DocumentDescription dd = new DocumentDescription((Long)row.getValue(jobQueue.idField),
+              jobID,documentIDHash,documentID,failTime,failCount);
+            String compositeDocumentID = makeCompositeID(documentIDHash,connectionName);
+            documentIDMap.put(compositeDocumentID,dd);
+            statusMap.put(compositeDocumentID,new Integer(status));
+            Map y = (Map)connectionNameMap.get(connectionName);
+            if (y == null)
+            {
+              y = new HashMap();
+              connectionNameMap.put(connectionName,y);
+            }
+            ArrayList x = (ArrayList)y.get(outputConnectionName);
+            if (x == null)
+            {
+              // New entry needed
+              x = new ArrayList();
+              y.put(outputConnectionName,x);
+            }
+            x.add(dd);
+            i++;
+          }
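+          // statusMap remembers each record's original queue status; it is passed to updateActiveRecord()
+          // below when each returned record is marked active for expiration processing.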
+
+          // For each bin, obtain a filtered answer, and enter all answers into a hash table.
+          // We'll then scan the result again to look up the right descriptions for return,
+          // and flag which ones may also be removed from the index.
+          HashMap allowedDocIds = new HashMap();
+          Iterator iter = connectionNameMap.keySet().iterator();
+          while (iter.hasNext())
+          {
+            String connectionName = (String)iter.next();
+            Map y = (Map)connectionNameMap.get(connectionName);
+            Iterator outputIter = y.keySet().iterator();
+            while (outputIter.hasNext())
+            {
+              String outputConnectionName = (String)outputIter.next();
+              ArrayList x = (ArrayList)y.get(outputConnectionName);
+              // Do the filter query
+              DocumentDescription[] descriptions = new DocumentDescription[x.size()];
+              int j = 0;
+              while (j < descriptions.length)
+              {
+                descriptions[j] = (DocumentDescription)x.get(j);
+                j++;
+              }
+              String[] docIDHashes = getUnindexableDocumentIdentifiers(descriptions,connectionName,outputConnectionName);
+              j = 0;
+              while (j < docIDHashes.length)
+              {
+                String docIDHash = docIDHashes[j++];
+                String key = makeCompositeID(docIDHash,connectionName);
+                allowedDocIds.put(key,docIDHash);
+              }
+            }
+          }
+
+          // Now, assemble a result, and change the state of the records accordingly
+          // First thing to do is order by document hash, so we reduce the risk of deadlock.
+          String[] compositeIDArray = new String[documentIDMap.size()];
+          i = 0;
+          iter = documentIDMap.keySet().iterator();
+          while (iter.hasNext())
+          {
+            compositeIDArray[i++] = (String)iter.next();
+          }
+          
+          java.util.Arrays.sort(compositeIDArray);
+          
+          DocumentDescription[] rval = new DocumentDescription[documentIDMap.size()];
+          boolean[] rvalBoolean = new boolean[documentIDMap.size()];
+          i = 0;
+          while (i < compositeIDArray.length)
+          {
+            String compositeDocID = compositeIDArray[i];
+            DocumentDescription dd = (DocumentDescription)documentIDMap.get(compositeDocID);
+            // Determine whether we can delete it from the index or not
+            rvalBoolean[i] = (allowedDocIds.get(compositeDocID) != null);
+            // Set the record status to "active" and return it
+            rval[i++] = dd;
+            jobQueue.updateActiveRecord(dd.getID(),((Integer)statusMap.get(compositeDocID)).intValue(),processID);
+          }
+
+          TrackerClass.notePrecommit();
+          database.performCommit();
+          TrackerClass.noteCommit();
+          
+          return new DocumentSetAndFlags(rval, rvalBoolean);
+
         }
-        throw e;
-      }
-      catch (Error e)
-      {
-        database.signalRollback();
-        TrackerClass.noteRollback();
-        throw e;
+        catch (ManifoldCFException e)
+        {
+          database.signalRollback();
+          TrackerClass.noteRollback();
+          if (e.getErrorCode() == e.DATABASE_TRANSACTION_ABORT)
+          {
+            if (Logging.perf.isDebugEnabled())
+              Logging.perf.debug("Aborted transaction finding docs to expire: "+e.getMessage());
+            sleepAmt = getRandomAmount();
+            continue;
+          }
+          throw e;
+        }
+        catch (Error e)
+        {
+          database.signalRollback();
+          TrackerClass.noteRollback();
+          throw e;
+        }
+        finally
+        {
+          database.endTransaction();
+        }
       }
       finally
       {
-        database.endTransaction();
+        lockManager.leaveWriteLock(expireStufferLock);
         sleepFor(sleepAmt);
       }
-
     }
   }
 
@@ -1883,6 +2265,7 @@
   * pertaining to the document's handling (e.g. whether it should be refetched if the version
   * has not changed).
   * This method also marks the documents whose descriptions have be returned as "being processed".
+  *@param processID is the current process ID.
   *@param n is the maximum number of records desired.
   *@param currentTime is the current time; some fetches do not occur until a specific time.
   *@param interval is the number of milliseconds that this set of documents should represent (for throttling).
@@ -1894,7 +2277,9 @@
   * to being overcommitted.
   *@return the array of document descriptions to fetch and process.
   */
-  public DocumentDescription[] getNextDocuments(int n, long currentTime, long interval,
+  @Override
+  public DocumentDescription[] getNextDocuments(String processID,
+    int n, long currentTime, long interval,
     BlockingDocuments blockingDocuments, PerformanceStatistics statistics,
     DepthStatistics scanRecord)
     throws ManifoldCFException
@@ -2096,7 +2481,7 @@
       if (jobs.hasPriorityJobs(currentPriority))
       {
         Long currentPriorityValue = new Long((long)currentPriority);
-        fetchAndProcessDocuments(answers,currentTimeValue,currentPriorityValue,vList,connections);
+        fetchAndProcessDocuments(answers,currentTimeValue,currentPriorityValue,vList,connections,processID);
         isDone = !vList.checkContinue();
       }
       currentPriority++;
@@ -2173,12 +2558,13 @@
 
   /** Fetch and process documents matching the passed-in criteria */
   protected void fetchAndProcessDocuments(ArrayList answers, Long currentTimeValue, Long currentPriorityValue,
-    ThrottleLimit vList, IRepositoryConnection[] connections)
+    ThrottleLimit vList, IRepositoryConnection[] connections, String processID)
     throws ManifoldCFException
   {
 
     // Note well: This query does not do "FOR UPDATE".  The reason is that only one thread can possibly change the document's state to active.
     // When FOR UPDATE was included, deadlock conditions were common because of the complexity of this query.
+    // So, instead, as part of CONNECTORS-781, I've introduced a write lock for the pertinent section.
 
     ArrayList list = new ArrayList();
 
@@ -2246,141 +2632,148 @@
     // at the connector factory level to make sure these requests are properly ordered.
 
     String[] orderingKeys = new String[connections.length];
-    String[] classNames = new String[connections.length];
-    ConfigParams[] configParams = new ConfigParams[connections.length];
-    int[] maxConnections = new int[connections.length];
     int k = 0;
     while (k < connections.length)
     {
       IRepositoryConnection connection = connections[k];
       orderingKeys[k] = connection.getName();
-      classNames[k] = connection.getClassName();
-      configParams[k] = connection.getConfigParams();
-      maxConnections[k] = connection.getMaxConnections();
       k++;
     }
-    IRepositoryConnector[] connectors = RepositoryConnectorFactory.grabMultiple(threadContext,orderingKeys,classNames,configParams,maxConnections);
-    try
+    
+    // Never sleep with a resource locked!
+    while (true)
     {
-      // Hand the connectors off to the ThrottleLimit instance
-      k = 0;
-      while (k < connections.length)
-      {
-        vList.addConnectionName(connections[k].getName(),connectors[k]);
-        k++;
-      }
+      long sleepAmt = 0L;
 
-      // Now we can tack the limit onto the query.  Before this point, remainingDocuments would be crap
-      int limitValue = vList.getRemainingDocuments();
-      sb.append(database.constructOffsetLimitClause(0,limitValue,true));
-
-      if (Logging.perf.isDebugEnabled())
+      // Write lock ensures that only one thread cluster-wide can be doing this at a given time, so FOR UPDATE is unneeded.
+      lockManager.enterWriteLock(stufferLock);
+      try
       {
-        Logging.perf.debug("Queuing documents from time "+currentTimeValue.toString()+" job priority "+currentPriorityValue.toString()+
-          " (up to "+Integer.toString(vList.getRemainingDocuments())+" documents)");
-      }
-
-      while (true)
-      {
-        long sleepAmt = 0L;
-        database.beginTransaction();
+    
+        IRepositoryConnector[] connectors = repositoryConnectorPool.grabMultiple(orderingKeys,connections);
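+        // Connector instances are borrowed from the pool here and handed back in the finally clause below.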
         try
         {
-          IResultSet set = database.performQuery(sb.toString(),list,null,null,-1,vList);
+          // Hand the connectors off to the ThrottleLimit instance
+          k = 0;
+          while (k < connections.length)
+          {
+            vList.addConnectionName(connections[k].getName(),connectors[k]);
+            k++;
+          }
+
+          // Now we can tack the limit onto the query.  Before this point, remainingDocuments would not yet be meaningful
+          int limitValue = vList.getRemainingDocuments();
+          sb.append(database.constructOffsetLimitClause(0,limitValue,true));
 
           if (Logging.perf.isDebugEnabled())
-            Logging.perf.debug(" Queuing "+Integer.toString(set.getRowCount())+" documents");
-
-          // To avoid deadlock, we want to update the document id hashes in order.  This means reading into a structure I can sort by docid hash,
-          // before updating any rows in jobqueue.
-          String[] docIDHashes = new String[set.getRowCount()];
-          Map storageMap = new HashMap();
-          Map statusMap = new HashMap();
-
-          int i = 0;
-          while (i < set.getRowCount())
           {
-            IResultRow row = set.getRow(i);
-            Long id = (Long)row.getValue(jobQueue.idField);
-            Long jobID = (Long)row.getValue(jobQueue.jobIDField);
-            String docIDHash = (String)row.getValue(jobQueue.docHashField);
-            String docID = (String)row.getValue(jobQueue.docIDField);
-            int status = jobQueue.stringToStatus(row.getValue(jobQueue.statusField).toString());
-            Long failTimeValue = (Long)row.getValue(jobQueue.failTimeField);
-            Long failCountValue = (Long)row.getValue(jobQueue.failCountField);
-            long failTime;
-            if (failTimeValue == null)
-              failTime = -1L;
-            else
-              failTime = failTimeValue.longValue();
-            int failCount;
-            if (failCountValue == null)
-              failCount = -1;
-            else
-              failCount = (int)failCountValue.longValue();
-
-            DocumentDescription dd = new DocumentDescription(id,jobID,docIDHash,docID,failTime,failCount);
-            docIDHashes[i] = docIDHash + ":" + jobID;
-            storageMap.put(docIDHashes[i],dd);
-            statusMap.put(docIDHashes[i],new Integer(status));
-            if (Logging.scheduling.isDebugEnabled())
-            {
-              Double docPriority = (Double)row.getValue(jobQueue.docPriorityField);
-              Logging.scheduling.debug("Stuffing document '"+docID+"' that has priority "+docPriority.toString()+" onto active list");
-            }
-            i++;
+            Logging.perf.debug("Queuing documents from time "+currentTimeValue.toString()+" job priority "+currentPriorityValue.toString()+
+              " (up to "+Integer.toString(vList.getRemainingDocuments())+" documents)");
           }
 
-          // No duplicates are possible here
-          java.util.Arrays.sort(docIDHashes);
-
-          i = 0;
-          while (i < docIDHashes.length)
+          database.beginTransaction();
+          try
           {
-            String docIDHash = docIDHashes[i];
-            DocumentDescription dd = (DocumentDescription)storageMap.get(docIDHash);
-            Long id = dd.getID();
-            int status = ((Integer)statusMap.get(docIDHash)).intValue();
+            IResultSet set = database.performQuery(sb.toString(),list,null,null,-1,vList);
 
-            // Set status to "ACTIVE".
-            jobQueue.updateActiveRecord(id,status);
-
-            answers.add(dd);
-
-            i++;
-          }
-          TrackerClass.notePrecommit();
-          database.performCommit();
-          TrackerClass.noteCommit();
-          break;
-        }
-        catch (ManifoldCFException e)
-        {
-          database.signalRollback();
-          if (e.getErrorCode() == e.DATABASE_TRANSACTION_ABORT)
-          {
             if (Logging.perf.isDebugEnabled())
-              Logging.perf.debug("Aborted transaction finding docs to queue: "+e.getMessage());
-            sleepAmt = getRandomAmount();
-            continue;
+              Logging.perf.debug(" Queuing "+Integer.toString(set.getRowCount())+" documents");
+
+            // To avoid deadlock, we want to update the document id hashes in order.  This means reading into a structure I can sort by docid hash,
+            // before updating any rows in jobqueue.
+            String[] docIDHashes = new String[set.getRowCount()];
+            Map storageMap = new HashMap();
+            Map statusMap = new HashMap();
+
+            int i = 0;
+            while (i < set.getRowCount())
+            {
+              IResultRow row = set.getRow(i);
+              Long id = (Long)row.getValue(jobQueue.idField);
+              Long jobID = (Long)row.getValue(jobQueue.jobIDField);
+              String docIDHash = (String)row.getValue(jobQueue.docHashField);
+              String docID = (String)row.getValue(jobQueue.docIDField);
+              int status = jobQueue.stringToStatus(row.getValue(jobQueue.statusField).toString());
+              Long failTimeValue = (Long)row.getValue(jobQueue.failTimeField);
+              Long failCountValue = (Long)row.getValue(jobQueue.failCountField);
+              long failTime;
+              if (failTimeValue == null)
+                failTime = -1L;
+              else
+                failTime = failTimeValue.longValue();
+              int failCount;
+              if (failCountValue == null)
+                failCount = -1;
+              else
+                failCount = (int)failCountValue.longValue();
+
+              DocumentDescription dd = new DocumentDescription(id,jobID,docIDHash,docID,failTime,failCount);
+              docIDHashes[i] = docIDHash + ":" + jobID;
+              storageMap.put(docIDHashes[i],dd);
+              statusMap.put(docIDHashes[i],new Integer(status));
+              if (Logging.scheduling.isDebugEnabled())
+              {
+                Double docPriority = (Double)row.getValue(jobQueue.docPriorityField);
+                Logging.scheduling.debug("Stuffing document '"+docID+"' that has priority "+docPriority.toString()+" onto active list");
+              }
+              i++;
+            }
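+            // Each key combines the document hash with the job ID, so entries for the same document queued
+            // under different jobs remain distinct.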
+
+            // No duplicates are possible here
+            java.util.Arrays.sort(docIDHashes);
+
+            i = 0;
+            while (i < docIDHashes.length)
+            {
+              String docIDHash = docIDHashes[i];
+              DocumentDescription dd = (DocumentDescription)storageMap.get(docIDHash);
+              Long id = dd.getID();
+              int status = ((Integer)statusMap.get(docIDHash)).intValue();
+
+              // Set status to "ACTIVE".
+              jobQueue.updateActiveRecord(id,status,processID);
+
+              answers.add(dd);
+
+              i++;
+            }
+            TrackerClass.notePrecommit();
+            database.performCommit();
+            TrackerClass.noteCommit();
+            break;
           }
-          throw e;
-        }
-        catch (Error e)
-        {
-          database.signalRollback();
-          throw e;
+          catch (ManifoldCFException e)
+          {
+            database.signalRollback();
+            if (e.getErrorCode() == e.DATABASE_TRANSACTION_ABORT)
+            {
+              if (Logging.perf.isDebugEnabled())
+                Logging.perf.debug("Aborted transaction finding docs to queue: "+e.getMessage());
+              sleepAmt = getRandomAmount();
+              continue;
+            }
+            throw e;
+          }
+          catch (Error e)
+          {
+            database.signalRollback();
+            throw e;
+          }
+          finally
+          {
+            database.endTransaction();
+          }
         }
         finally
         {
-          database.endTransaction();
-          sleepFor(sleepAmt);
+          repositoryConnectorPool.releaseMultiple(connections,connectors);
         }
       }
-    }
-    finally
-    {
-      RepositoryConnectorFactory.releaseMultiple(connectors);
+      finally
+      {
+        lockManager.leaveWriteLock(stufferLock);
+        sleepFor(sleepAmt);
+      }
     }
   }
 
@@ -3020,7 +3413,7 @@
         i = 0;
         while (i < ids.length)
         {
-          jobQueue.setStatus(ids[i],jobQueue.STATUS_PENDINGPURGATORY,executeTimesNew[i],actionsNew[i],-1L,-1);
+          jobQueue.setRequeuedStatus(ids[i],executeTimesNew[i],actionsNew[i],-1L,-1);
           i++;
         }
 
@@ -3147,7 +3540,7 @@
         i = 0;
         while (i < ids.length)
         {
-          jobQueue.setStatus(ids[i],jobQueue.STATUS_PENDINGPURGATORY,executeTimes[i],actions[i],(failTimes==null)?-1L:failTimes[i],(failCounts==null)?-1:failCounts[i]);
+          jobQueue.setRequeuedStatus(ids[i],executeTimes[i],actions[i],(failTimes==null)?-1L:failTimes[i],(failCounts==null)?-1:failCounts[i]);
           i++;
         }
 
@@ -3451,6 +3844,7 @@
   * This method is called during job startup, when the queue is being loaded.
   * A set of document references is passed to this method, which updates the status of the document
   * in the specified job's queue, according to specific state rules.
+  *@param processID is the current process ID.
   *@param jobID is the job identifier.
   *@param legalLinkTypes is the set of legal link types that this connector generates.
   *@param docIDs are the local document identifiers.
@@ -3459,26 +3853,25 @@
   *@param currentTime is the current time in milliseconds since epoch.
   *@param documentPriorities are the document priorities corresponding to the document identifiers.
   *@param prereqEventNames are the events that must be completed before each document can be processed.
-  *@return true if the priority value(s) were used, false otherwise.
   */
-  public boolean[] addDocumentsInitial(Long jobID, String[] legalLinkTypes,
+  @Override
+  public void addDocumentsInitial(String processID, Long jobID, String[] legalLinkTypes,
     String[] docIDHashes, String[] docIDs, boolean overrideSchedule,
-    int hopcountMethod, long currentTime, double[] documentPriorities,
+    int hopcountMethod, long currentTime, IPriorityCalculator[] documentPriorities,
     String[][] prereqEventNames)
     throws ManifoldCFException
   {
     if (docIDHashes.length == 0)
-      return new boolean[0];
+      return;
 
     // The document identifiers need to be sorted in a consistent fashion to reduce deadlock, and have duplicates removed, before going ahead.
     // But, the documentPriorities and the return booleans need to correspond to the initial array.  So, after we come up with
     // our internal order, we need to construct a map that takes an original index and maps it to the reduced, reordered index.
     String[] reorderedDocIDHashes = eliminateDuplicates(docIDHashes);
     HashMap reorderMap = buildReorderMap(docIDHashes,reorderedDocIDHashes);
-    double[] reorderedDocumentPriorities = new double[reorderedDocIDHashes.length];
+    IPriorityCalculator[] reorderedDocumentPriorities = new IPriorityCalculator[reorderedDocIDHashes.length];
     String[][] reorderedDocumentPrerequisites = new String[reorderedDocIDHashes.length][];
     String[] reorderedDocumentIdentifiers = new String[reorderedDocIDHashes.length];
-    boolean[] rval = new boolean[docIDHashes.length];
     int i = 0;
     while (i < docIDHashes.length)
     {
@@ -3492,7 +3885,6 @@
           reorderedDocumentPrerequisites[newPosition.intValue()] = null;
         reorderedDocumentIdentifiers[newPosition.intValue()] = docIDs[i];
       }
-      rval[i] = false;
       i++;
     }
 
@@ -3517,12 +3909,11 @@
           " initial docs and hopcounts for job "+jobID.toString());
 
         // Go through document id's one at a time, in order - mainly to prevent deadlock as much as possible.  Search for any existing row in jobqueue first (for update)
-        boolean[] reorderedRval = new boolean[reorderedDocIDHashes.length];
         int z = 0;
         while (z < reorderedDocIDHashes.length)
         {
           String docIDHash = reorderedDocIDHashes[z];
-          double docPriority = reorderedDocumentPriorities[z];
+          IPriorityCalculator docPriority = reorderedDocumentPriorities[z];
           String docID = reorderedDocumentIdentifiers[z];
           String[] docPrereqs = reorderedDocumentPrerequisites[z];
 
@@ -3541,7 +3932,6 @@
 
           IResultSet set = database.performQuery(sb.toString(),list,null,null);
 
-          boolean priorityUsed;
           long executeTime = overrideSchedule?0L:-1L;
 
           if (set.getRowCount() > 0)
@@ -3554,16 +3944,15 @@
             int status = jobQueue.stringToStatus((String)row.getValue(jobQueue.statusField));
             Long checkTimeValue = (Long)row.getValue(jobQueue.checkTimeField);
 
-            priorityUsed = jobQueue.updateExistingRecordInitial(rowID,status,checkTimeValue,executeTime,currentTime,docPriority,docPrereqs);
+            jobQueue.updateExistingRecordInitial(rowID,status,checkTimeValue,executeTime,currentTime,docPriority,docPrereqs,processID);
           }
           else
           {
             // Not found.  Attempt an insert instead.  This may fail due to constraints, but if this happens, the whole transaction will be retried.
-            jobQueue.insertNewRecordInitial(jobID,docIDHash,docID,docPriority,executeTime,currentTime,docPrereqs);
-            priorityUsed = true;
+            jobQueue.insertNewRecordInitial(jobID,docIDHash,docID,docPriority,executeTime,currentTime,docPrereqs,processID);
           }
 
-          reorderedRval[z++] = priorityUsed;
+          z++;
         }
 
         if (Logging.perf.isDebugEnabled())
@@ -3571,7 +3960,7 @@
           " initial docs for job "+jobID.toString());
 
         if (legalLinkTypes.length > 0)
-          hopCount.recordSeedReferences(jobID,legalLinkTypes,reorderedDocIDHashes,hopcountMethod);
+          hopCount.recordSeedReferences(jobID,legalLinkTypes,reorderedDocIDHashes,hopcountMethod,processID);
 
         TrackerClass.notePrecommit();
         database.performCommit();
@@ -3581,17 +3970,7 @@
           Logging.perf.debug("Took "+new Long(System.currentTimeMillis()-startTime).toString()+" ms to add "+Integer.toString(reorderedDocIDHashes.length)+
           " initial docs and hopcounts for job "+jobID.toString());
 
-        // Rejigger to correspond with calling order
-        i = 0;
-        while (i < docIDs.length)
-        {
-          Integer finalPosition = (Integer)reorderMap.get(new Integer(i));
-          if (finalPosition != null)
-            rval[i] = reorderedRval[finalPosition.intValue()];
-          i++;
-        }
-
-        return rval;
+        return;
       }
       catch (ManifoldCFException e)
       {
@@ -3625,12 +4004,15 @@
   * This method is called during job startup, when the queue is being loaded, to list documents that
   * were NOT included by calling addDocumentsInitial().  Documents listed here are simply designed to
   * enable the framework to get rid of old, invalid seeds.  They are not queued for processing.
+  *@param processID is the current process ID.
   *@param jobID is the job identifier.
   *@param legalLinkTypes is the set of legal link types that this connector generates.
   *@param docIDHashes are the local document identifier hashes.
   *@param hopcountMethod is either accurate, nodelete, or neverdelete.
   */
-  public void addRemainingDocumentsInitial(Long jobID, String[] legalLinkTypes, String[] docIDHashes,
+  @Override
+  public void addRemainingDocumentsInitial(String processID,
+    Long jobID, String[] legalLinkTypes, String[] docIDHashes,
     int hopcountMethod)
     throws ManifoldCFException
   {
@@ -3658,9 +4040,9 @@
           Logging.perf.debug("Waited "+new Long(System.currentTimeMillis()-startTime).toString()+" ms to start adding "+Integer.toString(reorderedDocIDHashes.length)+
           " remaining docs and hopcounts for job "+jobID.toString());
 
-        jobQueue.addRemainingDocumentsInitial(jobID,reorderedDocIDHashes);
+        jobQueue.addRemainingDocumentsInitial(jobID,reorderedDocIDHashes,processID);
         if (legalLinkTypes.length > 0)
-          hopCount.recordSeedReferences(jobID,legalLinkTypes,reorderedDocIDHashes,hopcountMethod);
+          hopCount.recordSeedReferences(jobID,legalLinkTypes,reorderedDocIDHashes,hopcountMethod,processID);
 
         database.performCommit();
         
@@ -3968,10 +4350,12 @@
   * This method is called during document processing, when a set of document references are discovered.
   * The document references are passed to this method, which updates the status of the document(s)
   * in the specified job's queue, according to specific state rules.
+  *@param processID is the process ID.
   *@param jobID is the job identifier.
   *@param legalLinkTypes is the set of legal link types that this connector generates.
   *@param docIDHashes are the local document identifier hashes.
-  *@param parentIdentifierHash is the optional parent identifier hash of this document.  Pass null if none.
+  *@param parentIdentifierHash is the optional parent identifier hash of this document.  Pass null if none.
+  *       MUST be present if carrydown information is supplied.
   *@param relationshipType is the optional link type between this document and its parent.  Pass null if there
   *       is no relationship with a parent.
   *@param hopcountMethod is the desired method for managing hopcounts.
@@ -3981,18 +4365,19 @@
   *@param currentTime is the time in milliseconds since epoch that will be recorded for this operation.
   *@param documentPriorities are the desired document priorities for the documents.
   *@param prereqEventNames are the events that must be completed before a document can be queued.
-  *@return an array of boolean values indicating whether or not the passed-in priority value was used or not for each doc id (true if used).
   */
-  public boolean[] addDocuments(Long jobID, String[] legalLinkTypes,
+  @Override
+  public void addDocuments(String processID,
+    Long jobID, String[] legalLinkTypes,
     String[] docIDHashes, String[] docIDs,
     String parentIdentifierHash, String relationshipType,
     int hopcountMethod, String[][] dataNames, Object[][][] dataValues,
-    long currentTime, double[] documentPriorities,
+    long currentTime, IPriorityCalculator[] documentPriorities,
     String[][] prereqEventNames)
     throws ManifoldCFException
   {
     if (docIDs.length == 0)
-      return new boolean[0];
+      return;
 
     // Sort the id hashes and eliminate duplicates.  This will help avoid deadlock conditions.
     // However, we also need to keep the carrydown data in synch, so track that around as well, and merge if there are
@@ -4049,7 +4434,7 @@
 
     String[] reorderedDocIDHashes = eliminateDuplicates(docIDHashes);
     HashMap reorderMap = buildReorderMap(docIDHashes,reorderedDocIDHashes);
-    double[] reorderedDocumentPriorities = new double[reorderedDocIDHashes.length];
+    IPriorityCalculator[] reorderedDocumentPriorities = new IPriorityCalculator[reorderedDocIDHashes.length];
     String[][] reorderedDocumentPrerequisites = new String[reorderedDocIDHashes.length][];
     String[] reorderedDocumentIdentifiers = new String[reorderedDocIDHashes.length];
     boolean[] rval = new boolean[docIDHashes.length];
@@ -4147,8 +4532,6 @@
               
           IResultSet set = database.performQuery(sb.toString(),list,null,null);
 
-          boolean priorityUsed;
-
           if (set.getRowCount() > 0)
           {
             // Found a row, and it is now locked.
@@ -4170,15 +4553,12 @@
         }
 
         // Update all the carrydown data at once, for greatest efficiency.
-        boolean[] carrydownChangesSeen = carryDown.recordCarrydownDataMultiple(jobID,parentIdentifierHash,reorderedDocIDHashes,dataNames,dataHashValues,dataValues);
+        boolean[] carrydownChangesSeen = carryDown.recordCarrydownDataMultiple(jobID,parentIdentifierHash,reorderedDocIDHashes,dataNames,dataHashValues,dataValues,processID);
 
         // Same with hopcount.
         boolean[] hopcountChangesSeen = null;
         if (parentIdentifierHash != null && relationshipType != null)
-          hopcountChangesSeen = hopCount.recordReferences(jobID,legalLinkTypes,parentIdentifierHash,reorderedDocIDHashes,relationshipType,hopcountMethod);
-
-        // Loop through the document id's again, and perform updates where needed
-        boolean[] reorderedRval = new boolean[reorderedDocIDHashes.length];
+          hopcountChangesSeen = hopCount.recordReferences(jobID,legalLinkTypes,parentIdentifierHash,reorderedDocIDHashes,relationshipType,hopcountMethod,processID);
 
         boolean reactivateRemovedHopcountRecords = false;
         
@@ -4186,16 +4566,13 @@
         {
           String docIDHash = reorderedDocIDHashes[z];
           JobqueueRecord jr = (JobqueueRecord)existingRows.get(docIDHash);
-          if (jr == null)
-            // It was an insert
-            reorderedRval[z] = true;
-          else
+          if (jr != null)
           {
             // It was an existing row; do the update logic
             // The hopcountChangesSeen array describes whether each reference is a new one.  This
             // helps us determine whether we're going to need to "flip" HOPCOUNTREMOVED documents
             // to the PENDING state.  If the new link ended in an existing record, THEN we need to flip them all!
-            reorderedRval[z] = jobQueue.updateExistingRecord(jr.getRecordID(),jr.getStatus(),jr.getCheckTimeValue(),
+            jobQueue.updateExistingRecord(jr.getRecordID(),jr.getStatus(),jr.getCheckTimeValue(),
               0L,currentTime,carrydownChangesSeen[z] || (hopcountChangesSeen!=null && hopcountChangesSeen[z]),
               reorderedDocumentPriorities[z],reorderedDocumentPrerequisites[z]);
             // Signal if we need to perform the flip
@@ -4215,16 +4592,7 @@
           Logging.perf.debug("Took "+new Long(System.currentTimeMillis()-startTime).toString()+" ms to add "+Integer.toString(reorderedDocIDHashes.length)+
           " docs and hopcounts for job "+jobID.toString()+" parent identifier hash "+parentIdentifierHash);
 
-        i = 0;
-        while (i < docIDHashes.length)
-        {
-          Integer finalPosition = (Integer)reorderMap.get(new Integer(i));
-          if (finalPosition != null)
-            rval[i] = reorderedRval[finalPosition.intValue()];
-          i++;
-        }
-
-        return rval;
+        return;
       }
       catch (ManifoldCFException e)
       {
@@ -4240,6 +4608,12 @@
         }
         throw e;
       }
+      catch (RuntimeException e)
+      {
+        database.signalRollback();
+        TrackerClass.noteRollback();
+        throw e;
+      }
       catch (Error e)
       {
         database.signalRollback();
@@ -4259,10 +4633,12 @@
   * This method is called during document processing, when a document reference is discovered.
   * The document reference is passed to this method, which updates the status of the document
   * in the specified job's queue, according to specific state rules.
+  *@param processID is the process ID.
   *@param jobID is the job identifier.
   *@param legalLinkTypes is the set of legal link types that this connector generates.
   *@param docIDHash is the local document identifier hash value.
   *@param parentIdentifierHash is the optional parent identifier hash of this document.  Pass null if none.
+  *       MUST be present if carrydown information is supplied.
   *@param relationshipType is the optional link type between this document and its parent.  Pass null if there
   *       is no relationship with a parent.
   *@param hopcountMethod is the desired method for managing hopcounts.
@@ -4271,18 +4647,19 @@
   *@param currentTime is the time in milliseconds since epoch that will be recorded for this operation.
   *@param priority is the desired document priority for the document.
   *@param prereqEventNames are the events that must be completed before the document can be processed.
-  *@return true if the priority value was used, false otherwise.
   */
-  public boolean addDocument(Long jobID, String[] legalLinkTypes, String docIDHash, String docID,
+  @Override
+  public void addDocument(String processID,
+    Long jobID, String[] legalLinkTypes, String docIDHash, String docID,
     String parentIdentifierHash, String relationshipType,
     int hopcountMethod, String[] dataNames, Object[][] dataValues,
-    long currentTime, double priority, String[] prereqEventNames)
+    long currentTime, IPriorityCalculator priority, String[] prereqEventNames)
     throws ManifoldCFException
   {
-    return addDocuments(jobID,legalLinkTypes,
+    addDocuments(processID,jobID,legalLinkTypes,
       new String[]{docIDHash},new String[]{docID},
       parentIdentifierHash,relationshipType,hopcountMethod,new String[][]{dataNames},
-      new Object[][][]{dataValues},currentTime,new double[]{priority},new String[][]{prereqEventNames})[0];
+      new Object[][][]{dataValues},currentTime,new IPriorityCalculator[]{priority},new String[][]{prereqEventNames});
   }
 
   /** Complete adding child documents to the queue, for a set of documents.
@@ -4294,6 +4671,7 @@
   *@return the set of documents for which carrydown data was changed by this operation.  These documents are likely
   *  to be requeued as a result of the change.
   */
+  @Override
   public DocumentDescription[] finishDocuments(Long jobID, String[] legalLinkTypes, String[] parentIdentifierHashes, int hopcountMethod)
     throws ManifoldCFException
   {
@@ -4505,15 +4883,17 @@
   }
 
   /** Begin an event sequence.
+  *@param processID is the current process ID.
   *@param eventName is the name of the event.
   *@return true if the event could be created, or false if it's already there.
   */
-  public boolean beginEventSequence(String eventName)
+  @Override
+  public boolean beginEventSequence(String processID, String eventName)
     throws ManifoldCFException
   {
     try
     {
-      eventManager.createEvent(eventName);
+      eventManager.createEvent(eventName,processID);
       return true;
     }
     catch (ManifoldCFException e)
@@ -4527,6 +4907,7 @@
   /** Complete an event sequence.
   *@param eventName is the name of the event.
   */
+  @Override
   public void completeEventSequence(String eventName)
     throws ManifoldCFException
   {
@@ -4539,13 +4920,13 @@
   * extent that if one is *already* being processed, it will need to be done over again.
   *@param documentDescriptions is the set of description objects for the documents that have had their parent carrydown information changed.
   *@param docPriorities are the document priorities to assign to the documents, if needed.
-  *@return a flag for each document priority, true if it was used, false otherwise.
   */
-  public boolean[] carrydownChangeDocumentMultiple(DocumentDescription[] documentDescriptions, long currentTime, double[] docPriorities)
+  @Override
+  public void carrydownChangeDocumentMultiple(DocumentDescription[] documentDescriptions, long currentTime, IPriorityCalculator[] docPriorities)
     throws ManifoldCFException
   {
     if (documentDescriptions.length == 0)
-      return new boolean[0];
+      return;
 
     // Order the updates by document hash, to prevent deadlock as much as possible.
 
@@ -4564,8 +4945,6 @@
     // Sort the hashes
     java.util.Arrays.sort(docIDHashes);
 
-    boolean[] rval = new boolean[docIDHashes.length];
-
     // Enter transaction and prepare to look up document states in dochash order
     while (true)
     {
@@ -4620,13 +4999,10 @@
           int originalIndex = ((Integer)docHashMap.get(docIDHash)).intValue();
 
           JobqueueRecord jr = (JobqueueRecord)existingRows.get(docIDHash);
-          if (jr == null)
-            // It wasn't found, so the doc priority wasn't used.
-            rval[originalIndex] = false;
-          else
+          if (jr != null)
             // It was an existing row; do the update logic; use the 'carrydown changes' flag = true all the time.
-            rval[originalIndex] = jobQueue.updateExistingRecord(jr.getRecordID(),jr.getStatus(),jr.getCheckTimeValue(),
-            0L,currentTime,true,docPriorities[originalIndex],null);
+            jobQueue.updateExistingRecord(jr.getRecordID(),jr.getStatus(),jr.getCheckTimeValue(),
+              0L,currentTime,true,docPriorities[originalIndex],null);
           j++;
         }
         database.performCommit();
@@ -4655,7 +5031,6 @@
         sleepFor(sleepAmt);
       }
     }
-    return rval;
   }
 
   /** Requeue a document because of carrydown changes.
@@ -4663,12 +5038,12 @@
   * extent that if it is *already* being processed, it will need to be done over again.
   *@param documentDescription is the description object for the document that has had its parent carrydown information changed.
   *@param docPriority is the document priority to assign to the document, if needed.
-  *@return a flag for the document priority, true if it was used, false otherwise.
   */
-  public boolean carrydownChangeDocument(DocumentDescription documentDescription, long currentTime, double docPriority)
+  @Override
+  public void carrydownChangeDocument(DocumentDescription documentDescription, long currentTime, IPriorityCalculator docPriority)
     throws ManifoldCFException
   {
-    return carrydownChangeDocumentMultiple(new DocumentDescription[]{documentDescription},currentTime,new double[]{docPriority})[0];
+    carrydownChangeDocumentMultiple(new DocumentDescription[]{documentDescription},currentTime,new IPriorityCalculator[]{docPriority});
   }
 
   /** Sleep a random amount of time after a transaction abort.
@@ -5456,10 +5831,13 @@
     boolean requestMinimum)
     throws ManifoldCFException
   {
+
     // (1) If the connector has MODEL_ADD_CHANGE_DELETE, then
     // we let the connector run the show; there's no purge phase, and therefore the
     // documents are left in a COMPLETED state if they don't show up in the list
-    // of seeds that require the attention of the connector.
+    // of seeds that require the attention of the connector.  However, we do need to
+    // preload the queue with all the existing documents, if there was any change to the
+    // specification information (which will mean that fromBeginningOfTime is set).
     //
     // (2) If the connector has MODEL_ALL, then it's a full crawl no matter what, so
     // we do a full scan initialization.
@@ -5467,16 +5845,26 @@
     // (3) If the connector has some other model, we look at the start time.  A start
     // time of 0 implies a full scan, while any other start time implies an incremental
     // scan.
-
+    
+    // Always reset document schedules for those documents already pending!
+    jobQueue.resetPendingDocumentSchedules(jobID);
+    
     // Complete connector model is told everything, so no delete phase.
     if (connectorModel == IRepositoryConnector.MODEL_ADD_CHANGE_DELETE)
+    {
+      if (fromBeginningOfTime)
+        queueAllExisting(jobID,legalLinkTypes);
       return;
+    }
     
     // If the connector model is complete via chaining, then we just need to make
     // sure discovery works to queue the changes.
     if (connectorModel == IRepositoryConnector.MODEL_CHAINED_ADD_CHANGE_DELETE)
     {
-      jobQueue.preparePartialScan(jobID);
+      if (fromBeginningOfTime)
+        queueAllExisting(jobID,legalLinkTypes);
+      else
+        jobQueue.preparePartialScan(jobID);
       return;
     }
     
@@ -5498,6 +5886,58 @@
       jobQueue.prepareIncrementalScan(jobID);
   }
 
+  /** Queue all existing.
+  *@param jobID is the job id.
+  *@param legalLinkTypes are the link types allowed for the job.
+  */
+  protected void queueAllExisting(Long jobID, String[] legalLinkTypes)
+    throws ManifoldCFException
+  {
+    while (true)
+    {
+      long sleepAmt = 0L;
+      database.beginTransaction();
+      try
+      {
+        if (legalLinkTypes.length > 0)
+        {
+          jobQueue.reactivateHopcountRemovedRecords(jobID);
+        }
+
+        jobQueue.queueAllExisting(jobID);
+        TrackerClass.notePrecommit();
+        database.performCommit();
+        TrackerClass.noteCommit();
+        break;
+      }
+      catch (ManifoldCFException e)
+      {
+        database.signalRollback();
+        TrackerClass.noteRollback();
+        if (e.getErrorCode() == e.DATABASE_TRANSACTION_ABORT)
+        {
+          if (Logging.perf.isDebugEnabled())
+            Logging.perf.debug("Aborted transaction during queueAllExisting: "+e.getMessage());
+          sleepAmt = getRandomAmount();
+          continue;
+        }
+        throw e;
+      }
+      catch (Error e)
+      {
+        database.signalRollback();
+        TrackerClass.noteRollback();
+        throw e;
+      }
+      finally
+      {
+        database.endTransaction();
+        sleepFor(sleepAmt);
+      }
+    }
+
+  }
+  
   /** Prepare for a full scan.
   *@param jobID is the job id.
   *@param legalLinkTypes are the link types allowed for the job.
@@ -5831,10 +6271,12 @@
   }
 
   /** Get the list of jobs that are ready for seeding.
+  *@param processID is the current process ID.
   *@return jobs that are active and are running in adaptive mode.  These will be seeded
   * based on what the connector says should be added to the queue.
   */
-  public JobSeedingRecord[] getJobsReadyForSeeding(long currentTime)
+  @Override
+  public JobSeedingRecord[] getJobsReadyForSeeding(String processID, long currentTime)
     throws ManifoldCFException
   {
     while (true)
@@ -5882,7 +6324,7 @@
 
           // Mark status of job as "active/seeding".  Special status is needed so that abort
           // will not complete until seeding is completed.
-          jobs.writeStatus(jobID,jobs.STATUS_ACTIVESEEDING,reseedTime);
+          jobs.writeTransientStatus(jobID,jobs.STATUS_ACTIVESEEDING,reseedTime,processID);
           if (Logging.jobs.isDebugEnabled())
           {
             Logging.jobs.debug("Marked job "+jobID+" for seeding");
@@ -5920,9 +6362,11 @@
   }
 
   /** Get the list of jobs that are ready for deletion.
+  *@param processID is the current process ID.
   *@return jobs that were in the "readyfordelete" state.
   */
-  public JobDeleteRecord[] getJobsReadyForDelete()
+  @Override
+  public JobDeleteRecord[] getJobsReadyForDelete(String processID)
     throws ManifoldCFException
   {
     while (true)
@@ -5950,7 +6394,7 @@
           Long jobID = (Long)row.getValue(jobs.idField);
 
           // Mark status of job as "starting delete"
-          jobs.writeStatus(jobID,jobs.STATUS_DELETESTARTINGUP);
+          jobs.writeTransientStatus(jobID,jobs.STATUS_DELETESTARTINGUP,processID);
           if (Logging.jobs.isDebugEnabled())
           {
             Logging.jobs.debug("Marked job "+jobID+" for delete startup");
@@ -5988,9 +6432,11 @@
   }
 
   /** Get the list of jobs that are ready for startup.
+  *@param processID is the current process ID.
   *@return jobs that were in the "readyforstartup" state.  These will be marked as being in the "starting up" state.
   */
-  public JobStartRecord[] getJobsReadyForStartup()
+  @Override
+  public JobStartRecord[] getJobsReadyForStartup(String processID)
     throws ManifoldCFException
   {
     while (true)
@@ -6031,7 +6477,7 @@
             synchTime = x.longValue();
 
           // Mark status of job as "starting"
-          jobs.writeStatus(jobID,requestMinimum?jobs.STATUS_STARTINGUPMINIMAL:jobs.STATUS_STARTINGUP);
+          jobs.writeTransientStatus(jobID,requestMinimum?jobs.STATUS_STARTINGUPMINIMAL:jobs.STATUS_STARTINGUP,processID);
           if (Logging.jobs.isDebugEnabled())
           {
             Logging.jobs.debug("Marked job "+jobID+" for startup");
@@ -6168,7 +6614,7 @@
             Logging.jobs.debug("Setting job "+jobID+" back to 'ReadyForDelete' state");
 
           // Set the state of the job back to "ReadyForDelete"
-          jobs.writeStatus(jobID,jobs.STATUS_READYFORDELETE);
+          jobs.writePermanentStatus(jobID,jobs.STATUS_READYFORDELETE);
           break;
         default:
           throw new ManifoldCFException("Unexpected job status: "+Integer.toString(status));
@@ -6236,7 +6682,7 @@
             Logging.jobs.debug("Setting job "+jobID+" back to 'ReadyForNotify' state");
 
           // Set the state of the job back to "ReadyForNotify"
-          jobs.writeStatus(jobID,jobs.STATUS_READYFORNOTIFY);
+          jobs.writePermanentStatus(jobID,jobs.STATUS_READYFORNOTIFY);
           break;
         default:
           throw new ManifoldCFException("Unexpected job status: "+Integer.toString(status));
@@ -6272,6 +6718,7 @@
   /** Reset a starting job back to "ready for startup" state.
   *@param jobID is the job id.
   */
+  @Override
   public void resetStartupJob(Long jobID)
     throws ManifoldCFException
   {
@@ -6303,30 +6750,30 @@
             Logging.jobs.debug("Setting job "+jobID+" back to 'ReadyForStartup' state");
 
           // Set the state of the job back to "ReadyForStartup"
-          jobs.writeStatus(jobID,jobs.STATUS_READYFORSTARTUP);
+          jobs.writePermanentStatus(jobID,jobs.STATUS_READYFORSTARTUP);
           break;
         case Jobs.STATUS_STARTINGUPMINIMAL:
           if (Logging.jobs.isDebugEnabled())
             Logging.jobs.debug("Setting job "+jobID+" back to 'ReadyForStartupMinimal' state");
 
           // Set the state of the job back to "ReadyForStartupMinimal"
-          jobs.writeStatus(jobID,jobs.STATUS_READYFORSTARTUPMINIMAL);
+          jobs.writePermanentStatus(jobID,jobs.STATUS_READYFORSTARTUPMINIMAL);
           break;
         case Jobs.STATUS_ABORTINGSTARTINGUP:
         case Jobs.STATUS_ABORTINGSTARTINGUPMINIMAL:
           if (Logging.jobs.isDebugEnabled())
             Logging.jobs.debug("Setting job "+jobID+" to 'Aborting' state");
-          jobs.writeStatus(jobID,jobs.STATUS_ABORTING);
+          jobs.writePermanentStatus(jobID,jobs.STATUS_ABORTING);
           break;
         case Jobs.STATUS_ABORTINGSTARTINGUPFORRESTART:
           if (Logging.jobs.isDebugEnabled())
             Logging.jobs.debug("Setting job "+jobID+" to 'AbortingForRestart' state");
-          jobs.writeStatus(jobID,jobs.STATUS_ABORTINGFORRESTART);
+          jobs.writePermanentStatus(jobID,jobs.STATUS_ABORTINGFORRESTART);
           break;
         case Jobs.STATUS_ABORTINGSTARTINGUPFORRESTARTMINIMAL:
           if (Logging.jobs.isDebugEnabled())
             Logging.jobs.debug("Setting job "+jobID+" to 'AbortingForRestartMinimal' state");
-          jobs.writeStatus(jobID,jobs.STATUS_ABORTINGFORRESTARTMINIMAL);
+          jobs.writePermanentStatus(jobID,jobs.STATUS_ABORTINGFORRESTARTMINIMAL);
           break;
 
         case Jobs.STATUS_READYFORSTARTUP:
@@ -6400,56 +6847,56 @@
             Logging.jobs.debug("Setting job "+jobID+" back to 'Active_Uninstalled' state");
 
           // Set the state of the job back to "Active_Uninstalled"
-          jobs.writeStatus(jobID,jobs.STATUS_ACTIVE_UNINSTALLED);
+          jobs.writePermanentStatus(jobID,jobs.STATUS_ACTIVE_UNINSTALLED);
           break;
         case Jobs.STATUS_ACTIVESEEDING_NOOUTPUT:
           if (Logging.jobs.isDebugEnabled())
             Logging.jobs.debug("Setting job "+jobID+" back to 'Active_NoOutput' state");
 
           // Set the state of the job back to "Active_NoOutput"
-          jobs.writeStatus(jobID,jobs.STATUS_ACTIVE_NOOUTPUT);
+          jobs.writePermanentStatus(jobID,jobs.STATUS_ACTIVE_NOOUTPUT);
           break;
         case Jobs.STATUS_ACTIVESEEDING_NEITHER:
           if (Logging.jobs.isDebugEnabled())
             Logging.jobs.debug("Setting job "+jobID+" back to 'Active_Neither' state");
 
           // Set the state of the job back to "Active_Neither"
-          jobs.writeStatus(jobID,jobs.STATUS_ACTIVE_NEITHER);
+          jobs.writePermanentStatus(jobID,jobs.STATUS_ACTIVE_NEITHER);
           break;
         case Jobs.STATUS_ACTIVESEEDING:
           if (Logging.jobs.isDebugEnabled())
             Logging.jobs.debug("Setting job "+jobID+" back to 'Active' state");
 
           // Set the state of the job back to "Active"
-          jobs.writeStatus(jobID,jobs.STATUS_ACTIVE);
+          jobs.writePermanentStatus(jobID,jobs.STATUS_ACTIVE);
           break;
         case Jobs.STATUS_ACTIVEWAITSEEDING:
           if (Logging.jobs.isDebugEnabled())
             Logging.jobs.debug("Setting job "+jobID+" back to 'ActiveWait' state");
 
           // Set the state of the job back to "ActiveWait"
-          jobs.writeStatus(jobID,jobs.STATUS_ACTIVEWAIT);
+          jobs.writePermanentStatus(jobID,jobs.STATUS_ACTIVEWAIT);
           break;
         case Jobs.STATUS_PAUSEDSEEDING:
           if (Logging.jobs.isDebugEnabled())
             Logging.jobs.debug("Setting job "+jobID+" back to 'Paused' state");
 
           // Set the state of the job back to "Paused"
-          jobs.writeStatus(jobID,jobs.STATUS_PAUSED);
+          jobs.writePermanentStatus(jobID,jobs.STATUS_PAUSED);
           break;
         case Jobs.STATUS_PAUSEDWAITSEEDING:
           if (Logging.jobs.isDebugEnabled())
             Logging.jobs.debug("Setting job "+jobID+" back to 'PausedWait' state");
 
           // Set the state of the job back to "PausedWait"
-          jobs.writeStatus(jobID,jobs.STATUS_PAUSEDWAIT);
+          jobs.writePermanentStatus(jobID,jobs.STATUS_PAUSEDWAIT);
           break;
         case Jobs.STATUS_ABORTINGSEEDING:
           if (Logging.jobs.isDebugEnabled())
             Logging.jobs.debug("Setting job "+jobID+" back to 'Aborting' state");
 
           // Set the state of the job back to "Aborting"
-          jobs.writeStatus(jobID,jobs.STATUS_ABORTING);
+          jobs.writePermanentStatus(jobID,jobs.STATUS_ABORTING);
           break;
 
         case Jobs.STATUS_ABORTINGFORRESTARTSEEDING:
@@ -6457,7 +6904,7 @@
             Logging.jobs.debug("Setting job "+jobID+" back to 'AbortingForRestart' state");
 
           // Set the state of the job back to "AbortingForRestart"
-          jobs.writeStatus(jobID,jobs.STATUS_ABORTINGFORRESTART);
+          jobs.writePermanentStatus(jobID,jobs.STATUS_ABORTINGFORRESTART);
           break;
 
         case Jobs.STATUS_ABORTINGFORRESTARTSEEDINGMINIMAL:
@@ -6465,7 +6912,7 @@
             Logging.jobs.debug("Setting job "+jobID+" back to 'AbortingForRestartMinimal' state");
 
           // Set the state of the job back to "AbortingForRestartMinimal"
-          jobs.writeStatus(jobID,jobs.STATUS_ABORTINGFORRESTARTMINIMAL);
+          jobs.writePermanentStatus(jobID,jobs.STATUS_ABORTINGFORRESTARTMINIMAL);
           break;
 
         case Jobs.STATUS_ABORTING:
@@ -6684,7 +7131,7 @@
             continue;
 
           // Mark status of job as "finishing"
-          jobs.writeStatus(jobID,jobs.STATUS_SHUTTINGDOWN);
+          jobs.writePermanentStatus(jobID,jobs.STATUS_SHUTTINGDOWN);
           if (Logging.jobs.isDebugEnabled())
           {
             Logging.jobs.debug("Marked job "+jobID+" for shutdown");
@@ -6720,9 +7167,11 @@
   }
 
   /** Find the list of jobs that need to have their connectors notified of job completion.
+  *@param processID is the process ID.
   *@return the ID's of jobs that need their output connectors notified in order to become inactive.
   */
-  public JobNotifyRecord[] getJobsReadyForInactivity()
+  @Override
+  public JobNotifyRecord[] getJobsReadyForInactivity(String processID)
     throws ManifoldCFException
   {
     while (true)
@@ -6749,7 +7198,7 @@
           IResultRow row = set.getRow(i);
           Long jobID = (Long)row.getValue(jobs.idField);
           // Mark status of job as "notifying of completion"
-          jobs.writeStatus(jobID,jobs.STATUS_NOTIFYINGOFCOMPLETION);
+          jobs.writeTransientStatus(jobID,jobs.STATUS_NOTIFYINGOFCOMPLETION,processID);
           if (Logging.jobs.isDebugEnabled())
           {
             Logging.jobs.debug("Found job "+jobID+" in need of notification");
@@ -7000,6 +7449,7 @@
   /** Get the status of a job.
   *@return the status object for the specified job.
   */
+  @Override
   public JobStatus getStatus(Long jobID)
     throws ManifoldCFException
   {
@@ -7009,6 +7459,7 @@
   /** Get a list of all jobs, and their status information.
   *@return an ordered array of job status objects.
   */
+  @Override
   public JobStatus[] getAllStatus()
     throws ManifoldCFException
   {
@@ -7018,6 +7469,7 @@
   /** Get a list of running jobs.  This is for status reporting.
   *@return an array of the job status objects.
   */
+  @Override
   public JobStatus[] getRunningJobs()
     throws ManifoldCFException
   {
@@ -7027,6 +7479,7 @@
   /** Get a list of completed jobs, and their statistics.
   *@return an array of the job status objects.
   */
+  @Override
   public JobStatus[] getFinishedJobs()
     throws ManifoldCFException
   {
@@ -7034,22 +7487,16 @@
   }
 
   /** Get the status of a job.
+  *@param jobID is the job ID.
   *@param includeCounts is true if document counts should be included.
   *@return the status object for the specified job.
   */
   public JobStatus getStatus(Long jobID, boolean includeCounts)
     throws ManifoldCFException
   {
-    ArrayList list = new ArrayList();
-    String whereClause = Jobs.idField+"=?";
-    list.add(jobID);
-    JobStatus[] records = makeJobStatus(whereClause,list,includeCounts);
-    if (records.length == 0)
-      return null;
-    return records[0];
+    return getStatus(jobID, includeCounts, Integer.MAX_VALUE);
   }
 
-
   /** Get a list of all jobs, and their status information.
   *@param includeCounts is true if document counts should be included.
   *@return an ordered array of job status objects.
@@ -7057,7 +7504,7 @@
   public JobStatus[] getAllStatus(boolean includeCounts)
     throws ManifoldCFException
   {
-    return makeJobStatus(null,null,includeCounts);
+    return getAllStatus(includeCounts, Integer.MAX_VALUE);
   }
 
   /** Get a list of running jobs.  This is for status reporting.
@@ -7067,6 +7514,57 @@
   public JobStatus[] getRunningJobs(boolean includeCounts)
     throws ManifoldCFException
   {
+    return getRunningJobs(includeCounts, Integer.MAX_VALUE);
+  }
+
+  /** Get a list of completed jobs, and their statistics.
+  *@param includeCounts is true if document counts should be included.
+  *@return an array of the job status objects.
+  */
+  public JobStatus[] getFinishedJobs(boolean includeCounts)
+    throws ManifoldCFException
+  {
+    return getFinishedJobs(includeCounts, Integer.MAX_VALUE);
+  }
+
+  /** Get the status of a job.
+  *@param jobID is the job ID.
+  *@param includeCounts is true if document counts should be included.
+  *@param maxCount is the maximum number of documents we want to count for each status.
+  *@return the status object for the specified job.
+  */
+  @Override
+  public JobStatus getStatus(Long jobID, boolean includeCounts, int maxCount)
+    throws ManifoldCFException
+  {
+    ArrayList list = new ArrayList();
+    String whereClause = Jobs.idField+"=?";
+    list.add(jobID);
+    JobStatus[] records = makeJobStatus(whereClause,list,includeCounts,maxCount);
+    if (records.length == 0)
+      return null;
+    return records[0];
+  }
+
+
+  /** Get a list of all jobs, and their status information.
+  *@param includeCounts is true if document counts should be included.
+  *@param maxCount is the maximum number of documents we want to count for each status.
+  *@return an ordered array of job status objects.
+  */
+  public JobStatus[] getAllStatus(boolean includeCounts, int maxCount)
+    throws ManifoldCFException
+  {
+    return makeJobStatus(null,null,includeCounts,maxCount);
+  }
+
+  /** Get a list of running jobs.  This is for status reporting.
+  *@param includeCounts is true if document counts should be included.
+  *@param maxCount is the maximum number of documents we want to count for each status.
+  *@return an array of the job status objects.
+  */
+  @Override
+  public JobStatus[] getRunningJobs(boolean includeCounts, int maxCount)
+    throws ManifoldCFException
+  {
     ArrayList whereParams = new ArrayList();
     
     String whereClause = database.buildConjunctionClause(whereParams,new ClauseDescription[]{
@@ -7095,14 +7593,16 @@
         Jobs.statusToString(Jobs.STATUS_RESUMINGSEEDING)
         })});
     
-    return makeJobStatus(whereClause,whereParams,includeCounts);
+    return makeJobStatus(whereClause,whereParams,includeCounts,maxCount);
   }
 
   /** Get a list of completed jobs, and their statistics.
   *@param includeCounts is true if document counts should be included.
+  *@param maxCount is the maximum number of documents we want to count for each status.
   *@return an array of the job status objects.
   */
-  public JobStatus[] getFinishedJobs(boolean includeCounts)
+  @Override
+  public JobStatus[] getFinishedJobs(boolean includeCounts, int maxCount)
     throws ManifoldCFException
   {
     StringBuilder sb = new StringBuilder();
@@ -7112,7 +7612,7 @@
       new UnitaryClause(Jobs.statusField,Jobs.statusToString(Jobs.STATUS_INACTIVE))})).append(" AND ")
     .append(Jobs.endTimeField).append(" IS NOT NULL");
       
-    return makeJobStatus(sb.toString(),whereParams,includeCounts);
+    return makeJobStatus(sb.toString(),whereParams,includeCounts,maxCount);
   }
 
   // Protected methods and classes
@@ -7121,7 +7621,7 @@
   *@param whereClause is the where clause for the jobs we are interested in.
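+  *@param maxCount is the maximum number of documents we want to count for each status.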
   *@return the status array.
   */
-  protected JobStatus[] makeJobStatus(String whereClause, ArrayList whereParams, boolean includeCounts)
+  protected JobStatus[] makeJobStatus(String whereClause, ArrayList whereParams, boolean includeCounts, int maxCount)
     throws ManifoldCFException
   {
     IResultSet set = database.performQuery("SELECT t0."+
@@ -7134,130 +7634,48 @@
       " FROM "+jobs.getTableName()+" t0 "+((whereClause==null)?"":(" WHERE "+whereClause))+" ORDER BY "+Jobs.descriptionField+" ASC",
       whereParams,null,null);
 
-    IResultSet set2 = null;
-    IResultSet set3 = null;
-    IResultSet set4 = null;
+    // Build hashes for set2 and set3
+    Map<Long,Long> set2Hash = new HashMap<Long,Long>();
+    Map<Long,Long> set3Hash = new HashMap<Long,Long>();
+    Map<Long,Long> set4Hash = new HashMap<Long,Long>();
+    Map<Long,Boolean> set2Exact = new HashMap<Long,Boolean>();
+    Map<Long,Boolean> set3Exact = new HashMap<Long,Boolean>();
+    Map<Long,Boolean> set4Exact = new HashMap<Long,Boolean>();
     
     if (includeCounts)
     {
-      StringBuilder sb = new StringBuilder("SELECT ");
-      ArrayList list = new ArrayList();
-      
-      sb.append(JobQueue.jobIDField).append(",")
-        .append(database.constructCountClause(JobQueue.docHashField)).append(" AS doccount")
-        .append(" FROM ").append(jobQueue.getTableName()).append(" t1");
-      
-      if (whereClause != null)
+      // If we are counting all of them anyway, do this via GROUP BY, since that will be the fastest.
+      // Otherwise, fire off individual limited queries, one at a time.
+      if (maxCount == Integer.MAX_VALUE)
       {
-        sb.append(" WHERE EXISTS(SELECT 'x' FROM ").append(jobs.getTableName()).append(" t0 WHERE ")
-          .append(database.buildConjunctionClause(list,new ClauseDescription[]{
-            new JoinClause("t0."+Jobs.idField,"t1."+JobQueue.jobIDField)})).append(" AND ")
-          .append(whereClause)
-          .append(")");
-        list.addAll(whereParams);
+        buildCountsUsingGroupBy(whereClause,whereParams,set2Hash,set3Hash,set4Hash,set2Exact,set3Exact,set4Exact);
       }
-      
-      sb.append(" GROUP BY ").append(JobQueue.jobIDField);
-      
-      set2 = database.performQuery(sb.toString(),list,null,null);
-
-      sb = new StringBuilder("SELECT ");
-      list.clear();
-      
-      sb.append(JobQueue.jobIDField).append(",")
-        .append(database.constructCountClause(JobQueue.docHashField)).append(" AS doccount")
-        .append(" FROM ").append(jobQueue.getTableName()).append(" t1 WHERE ")
-        .append(database.buildConjunctionClause(list,new ClauseDescription[]{
-          new MultiClause(JobQueue.statusField,new Object[]{
-            JobQueue.statusToString(JobQueue.STATUS_ACTIVE),
-            JobQueue.statusToString(JobQueue.STATUS_ACTIVENEEDRESCAN),
-            JobQueue.statusToString(JobQueue.STATUS_PENDING),
-            JobQueue.statusToString(JobQueue.STATUS_ACTIVEPURGATORY),
-            JobQueue.statusToString(JobQueue.STATUS_ACTIVENEEDRESCANPURGATORY),
-            JobQueue.statusToString(JobQueue.STATUS_PENDINGPURGATORY)})}));
-      if (whereClause != null)
+      else
       {
-        sb.append(" AND EXISTS(SELECT 'x' FROM ").append(jobs.getTableName()).append(" t0 WHERE ")
-          .append(database.buildConjunctionClause(list,new ClauseDescription[]{
-            new JoinClause("t0."+Jobs.idField,"t1."+JobQueue.jobIDField)})).append(" AND ")
-          .append(whereClause)
-          .append(")");
-        if (whereParams != null)
-          list.addAll(whereParams);
+        // Check if the total matching jobqueue rows exceeds the limit.  If not, we can still use the cheaper query.
+        StringBuilder sb = new StringBuilder("SELECT ");
+        ArrayList list = new ArrayList();
+            
+        sb.append(database.constructCountClause(JobQueue.docHashField)).append(" AS doccount")
+          .append(" FROM ").append(jobQueue.getTableName()).append(" t1");
+        addWhereClause(sb,list,whereClause,whereParams,false);
+        sb.append(" ").append(database.constructOffsetLimitClause(0,maxCount+1,false));
+        IResultSet countResult = database.performQuery(sb.toString(),list,null,null);
+        if (countResult.getRowCount() > 0 && ((Long)countResult.getRow(0).getValue("doccount")).longValue() > maxCount)
+        {
+          // Too many items in queue; do it the hard way
+          buildCountsUsingIndividualQueries(whereClause,whereParams,maxCount,set2Hash,set3Hash,set4Hash,set2Exact,set3Exact,set4Exact);
+        }
+        else
+        {
+          // Cheap way should still work.
+          buildCountsUsingGroupBy(whereClause,whereParams,set2Hash,set3Hash,set4Hash,set2Exact,set3Exact,set4Exact);
+        }
       }
-      sb.append(" GROUP BY ").append(JobQueue.jobIDField);
-      
-      set3 = database.performQuery(sb.toString(),list,null,null);
-
-      sb = new StringBuilder("SELECT ");
-      list.clear();
-      
-      sb.append(JobQueue.jobIDField).append(",")
-        .append(database.constructCountClause(JobQueue.docHashField)).append(" AS doccount")
-        .append(" FROM ").append(jobQueue.getTableName()).append(" t1 WHERE ")
-        .append(database.buildConjunctionClause(list,new ClauseDescription[]{
-          new MultiClause(JobQueue.statusField,new Object[]{
-            JobQueue.statusToString(JobQueue.STATUS_COMPLETE),
-            JobQueue.statusToString(JobQueue.STATUS_UNCHANGED),
-            JobQueue.statusToString(JobQueue.STATUS_PURGATORY),
-            JobQueue.statusToString(JobQueue.STATUS_ACTIVEPURGATORY),
-            JobQueue.statusToString(JobQueue.STATUS_ACTIVENEEDRESCANPURGATORY),
-            JobQueue.statusToString(JobQueue.STATUS_PENDINGPURGATORY)})}));
-      
-      if (whereClause != null)
-      {
-        sb.append(" AND EXISTS(SELECT 'x' FROM ").append(jobs.getTableName()).append(" t0 WHERE ")
-          .append(database.buildConjunctionClause(list,new ClauseDescription[]{
-            new JoinClause("t0."+Jobs.idField,"t1."+JobQueue.jobIDField)})).append(" AND ")
-          .append(whereClause)
-          .append(")");
-
-        if (whereParams != null)
-          list.addAll(whereParams);
-      }
-      
-      sb.append(" GROUP BY ").append(JobQueue.jobIDField);
-      
-      set4 = database.performQuery(sb.toString(),list,null,null);
     }
     
-    int i;
-    
-    // Build hashes for set2 and set3
-    HashMap set2Hash = new HashMap();
-    if (set2 != null)
-    {
-      i = 0;
-      while (i < set2.getRowCount())
-      {
-        IResultRow row = set2.getRow(i++);
-        set2Hash.put(row.getValue(JobQueue.jobIDField),row.getValue("doccount"));
-      }
-    }
-    HashMap set3Hash = new HashMap();
-    if (set3 != null)
-    {
-      i = 0;
-      while (i < set3.getRowCount())
-      {
-        IResultRow row = set3.getRow(i++);
-        set3Hash.put(row.getValue(JobQueue.jobIDField),row.getValue("doccount"));
-      }
-    }
-    HashMap set4Hash = new HashMap();
-    if (set4 != null)
-    {
-      i = 0;
-      while (i < set4.getRowCount())
-      {
-        IResultRow row = set4.getRow(i++);
-        set4Hash.put(row.getValue(JobQueue.jobIDField),row.getValue("doccount"));
-      }
-    }
-
     JobStatus[] rval = new JobStatus[set.getRowCount()];
-    i = 0;
-    while (i < rval.length)
+    for (int i = 0; i < rval.length; i++)
     {
       IResultRow row = set.getRow(i);
       Long jobID = (Long)row.getValue(Jobs.idField);
@@ -7362,18 +7780,234 @@
         break;
       }
 
-      Long set2Value = (Long)set2Hash.get(jobID);
-      Long set3Value = (Long)set3Hash.get(jobID);
-      Long set4Value = (Long)set4Hash.get(jobID);
-
-      rval[i++] = new JobStatus(jobID.toString(),description,rstatus,((set2Value==null)?0L:set2Value.longValue()),
+      Long set2Value = set2Hash.get(jobID);
+      Long set3Value = set3Hash.get(jobID);
+      Long set4Value = set4Hash.get(jobID);
+      Boolean set2ExactValue = set2Exact.get(jobID);
+      Boolean set3ExactValue = set3Exact.get(jobID);
+      Boolean set4ExactValue = set4Exact.get(jobID);
+      
+      rval[i] = new JobStatus(jobID.toString(),description,rstatus,((set2Value==null)?0L:set2Value.longValue()),
         ((set3Value==null)?0L:set3Value.longValue()),
         ((set4Value==null)?0L:set4Value.longValue()),
+        ((set2ExactValue==null)?true:set2ExactValue.booleanValue()),
+        ((set3ExactValue==null)?true:set3ExactValue.booleanValue()),
+        ((set4ExactValue==null)?true:set4ExactValue.booleanValue()),
         startTime,endTime,errorText);
     }
     return rval;
   }
 
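+  /** Build a clause that matches jobqueue rows in an 'outstanding' state (pending or active, in any variant).
+  */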
+  protected static ClauseDescription buildOutstandingClause()
+    throws ManifoldCFException
+  {
+    return new MultiClause(JobQueue.statusField,new Object[]{
+    JobQueue.statusToString(JobQueue.STATUS_ACTIVE),
+    JobQueue.statusToString(JobQueue.STATUS_ACTIVENEEDRESCAN),
+    JobQueue.statusToString(JobQueue.STATUS_PENDING),
+    JobQueue.statusToString(JobQueue.STATUS_ACTIVEPURGATORY),
+    JobQueue.statusToString(JobQueue.STATUS_ACTIVENEEDRESCANPURGATORY),
+    JobQueue.statusToString(JobQueue.STATUS_PENDINGPURGATORY)});
+  }
+    
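+  /** Build a clause that matches jobqueue rows for documents that have been processed at least once.
+  */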
+  protected static ClauseDescription buildProcessedClause()
+    throws ManifoldCFException
+  {
+    return new MultiClause(JobQueue.statusField,new Object[]{
+    JobQueue.statusToString(JobQueue.STATUS_COMPLETE),
+    JobQueue.statusToString(JobQueue.STATUS_UNCHANGED),
+    JobQueue.statusToString(JobQueue.STATUS_PURGATORY),
+    JobQueue.statusToString(JobQueue.STATUS_ACTIVEPURGATORY),
+    JobQueue.statusToString(JobQueue.STATUS_ACTIVENEEDRESCANPURGATORY),
+    JobQueue.statusToString(JobQueue.STATUS_PENDINGPURGATORY)});
+  }
+
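+  /** Count documents per job by firing a separate, limited query for each job and each count type.
+  * This is used when the queue is too large for the GROUP BY approach; each stored count is capped at
+  * maxCount, and the corresponding 'exact' flag is set to false whenever the cap is reached.
+  */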
+  protected void buildCountsUsingIndividualQueries(String whereClause, ArrayList whereParams, int maxCount,
+    Map<Long,Long> set2Hash, Map<Long,Long> set3Hash, Map<Long,Long> set4Hash,
+    Map<Long,Boolean> set2Exact, Map<Long,Boolean> set3Exact, Map<Long,Boolean> set4Exact)
+    throws ManifoldCFException
+  {
+    // Fire off an individual query with a limit for each job
+    
+    // First, get the list of jobs that we are interested in.
+    StringBuilder sb = new StringBuilder("SELECT ");
+    ArrayList list = new ArrayList();
+
+    sb.append(Jobs.idField).append(" FROM ").append(jobs.getTableName()).append(" t0");
+    if (whereClause != null)
+    {
+      sb.append(" WHERE ")
+        .append(whereClause);
+      if (whereParams != null)
+        list.addAll(whereParams);
+    }
+    
+    IResultSet jobSet = database.performQuery(sb.toString(),list,null,null);
+
+    // Scan the set of jobs
+    for (int i = 0; i < jobSet.getRowCount(); i++)
+    {
+      IResultRow row = jobSet.getRow(i);
+      Long jobID = (Long)row.getValue(Jobs.idField);
+      
+      // Now, for each job, fire off a separate, limited query for each count we care about
+      sb = new StringBuilder("SELECT ");
+      list.clear();
+      sb.append(database.constructCountClause(JobQueue.docHashField)).append(" AS doccount")
+        .append(" FROM ").append(jobQueue.getTableName()).append(" WHERE ");
+      sb.append(database.buildConjunctionClause(list,new ClauseDescription[]{new UnitaryClause(JobQueue.jobIDField,jobID)}));
+      sb.append(" ").append(database.constructOffsetLimitClause(0,maxCount+1,false));
+      
+      IResultSet totalSet = database.performQuery(sb.toString(),list,null,null);
+      if (totalSet.getRowCount() > 0)
+      {
+        long rowCount = ((Long)totalSet.getRow(0).getValue("doccount")).longValue();
+        if (rowCount > maxCount)
+        {
+          set2Hash.put(jobID,new Long(maxCount));
+          set2Exact.put(jobID,new Boolean(false));
+        }
+        else
+        {
+          set2Hash.put(jobID,new Long(rowCount));
+          set2Exact.put(jobID,new Boolean(true));
+        }
+      }
+          
+      sb = new StringBuilder("SELECT ");
+      list.clear();
+      sb.append(database.constructCountClause(JobQueue.docHashField)).append(" AS doccount")
+        .append(" FROM ").append(jobQueue.getTableName()).append(" WHERE ");
+      sb.append(database.buildConjunctionClause(list,new ClauseDescription[]{new UnitaryClause(JobQueue.jobIDField,jobID)}));
+      sb.append(" AND ");
+      sb.append(database.buildConjunctionClause(list,new ClauseDescription[]{buildOutstandingClause()}));
+      sb.append(" ").append(database.constructOffsetLimitClause(0,maxCount+1,false));
+      
+      IResultSet outstandingSet = database.performQuery(sb.toString(),list,null,null);
+      if (outstandingSet.getRowCount() > 0)
+      {
+        long rowCount = ((Long)outstandingSet.getRow(0).getValue("doccount")).longValue();
+        if (rowCount > maxCount)
+        {
+          set3Hash.put(jobID,new Long(maxCount));
+          set3Exact.put(jobID,new Boolean(false));
+        }
+        else
+        {
+          set3Hash.put(jobID,new Long(rowCount));
+          set3Exact.put(jobID,new Boolean(true));
+        }
+      }
+
+      sb = new StringBuilder("SELECT ");
+      list.clear();
+      sb.append(database.constructCountClause(JobQueue.docHashField)).append(" AS doccount")
+        .append(" FROM ").append(jobQueue.getTableName()).append(" WHERE ");
+      sb.append(database.buildConjunctionClause(list,new ClauseDescription[]{new UnitaryClause(JobQueue.jobIDField,jobID)}));
+      sb.append(" AND ");
+      sb.append(database.buildConjunctionClause(list,new ClauseDescription[]{buildProcessedClause()}));
+      sb.append(" ").append(database.constructOffsetLimitClause(0,maxCount+1,false));
+      
+      IResultSet processedSet = database.performQuery(sb.toString(),list,null,null);
+      if (processedSet.getRowCount() > 0)
+      {
+        long rowCount = ((Long)processedSet.getRow(0).getValue("doccount")).longValue();
+        if (rowCount > maxCount)
+        {
+          set4Hash.put(jobID,new Long(maxCount));
+          set4Exact.put(jobID,new Boolean(false));
+        }
+        else
+        {
+          set4Hash.put(jobID,new Long(rowCount));
+          set4Exact.put(jobID,new Boolean(true));
+        }
+      }
+    }
+  }
+
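+  /** Count documents per job using one GROUP BY query per count type.  All counts produced this way are exact.
+  */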
+  protected void buildCountsUsingGroupBy(String whereClause, ArrayList whereParams,
+    Map<Long,Long> set2Hash, Map<Long,Long> set3Hash, Map<Long,Long> set4Hash,
+    Map<Long,Boolean> set2Exact, Map<Long,Boolean> set3Exact, Map<Long,Boolean> set4Exact)
+    throws ManifoldCFException
+  {
+    StringBuilder sb = new StringBuilder("SELECT ");
+    ArrayList list = new ArrayList();
+        
+    sb.append(JobQueue.jobIDField).append(",")
+      .append(database.constructCountClause(JobQueue.docHashField)).append(" AS doccount")
+      .append(" FROM ").append(jobQueue.getTableName()).append(" t1");
+    addWhereClause(sb,list,whereClause,whereParams,false);
+    sb.append(" GROUP BY ").append(JobQueue.jobIDField);
+    
+    IResultSet set2 = database.performQuery(sb.toString(),list,null,null);
+
+    sb = new StringBuilder("SELECT ");
+    list.clear();
+        
+    sb.append(JobQueue.jobIDField).append(",")
+      .append(database.constructCountClause(JobQueue.docHashField)).append(" AS doccount")
+      .append(" FROM ").append(jobQueue.getTableName()).append(" t1 WHERE ")
+      .append(database.buildConjunctionClause(list,new ClauseDescription[]{buildOutstandingClause()}));
+    addWhereClause(sb,list,whereClause,whereParams,true);
+    sb.append(" GROUP BY ").append(JobQueue.jobIDField);
+        
+    IResultSet set3 = database.performQuery(sb.toString(),list,null,null);
+
+    sb = new StringBuilder("SELECT ");
+    list.clear();
+        
+    sb.append(JobQueue.jobIDField).append(",")
+      .append(database.constructCountClause(JobQueue.docHashField)).append(" AS doccount")
+      .append(" FROM ").append(jobQueue.getTableName()).append(" t1 WHERE ")
+      .append(database.buildConjunctionClause(list,new ClauseDescription[]{buildProcessedClause()}));
+    addWhereClause(sb,list,whereClause,whereParams,true);
+    sb.append(" GROUP BY ").append(JobQueue.jobIDField);
+        
+    IResultSet set4 = database.performQuery(sb.toString(),list,null,null);
+        
+    for (int j = 0; j < set2.getRowCount(); j++)
+    {
+      IResultRow row = set2.getRow(j);
+      Long jobID = (Long)row.getValue(JobQueue.jobIDField);
+      set2Hash.put(jobID,(Long)row.getValue("doccount"));
+      set2Exact.put(jobID,new Boolean(true));
+    }
+    for (int j = 0; j < set3.getRowCount(); j++)
+    {
+      IResultRow row = set3.getRow(j);
+      Long jobID = (Long)row.getValue(JobQueue.jobIDField);
+      set3Hash.put(jobID,(Long)row.getValue("doccount"));
+      set3Exact.put(jobID,new Boolean(true));
+    }
+    for (int j = 0; j < set4.getRowCount(); j++)
+    {
+      IResultRow row = set4.getRow(j);
+      Long jobID = (Long)row.getValue(JobQueue.jobIDField);
+      set4Hash.put(jobID,(Long)row.getValue("doccount"));
+      set4Exact.put(jobID,new Boolean(true));
+    }
+  }
+
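+  /** Append an EXISTS subclause restricting jobqueue rows (t1) to jobs (t0) that match the given where clause.
+  *@param wherePresent is true if a WHERE clause has already been started, in which case AND is used instead of WHERE.
+  */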
+  protected void addWhereClause(StringBuilder sb, ArrayList list, String whereClause, ArrayList whereParams, boolean wherePresent)
+  {
+    if (whereClause != null)
+    {
+      if (wherePresent)
+        sb.append(" AND");
+      else
+        sb.append(" WHERE");
+      
+      sb.append(" EXISTS(SELECT 'x' FROM ").append(jobs.getTableName()).append(" t0 WHERE ")
+        .append(database.buildConjunctionClause(list,new ClauseDescription[]{
+          new JoinClause("t0."+Jobs.idField,"t1."+JobQueue.jobIDField)})).append(" AND ")
+        .append(whereClause)
+        .append(")");
+      if (whereParams != null)
+        list.addAll(whereParams);
+    }
+  }
+  
   // These methods generate reports for direct display in the UI.
 
   /** Run a 'document status' report.
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/jobs/JobQueue.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/jobs/JobQueue.java
index d395392..a5e6c24 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/jobs/JobQueue.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/jobs/JobQueue.java
@@ -43,6 +43,8 @@
  * <tr><td>docpriority</td><td>FLOAT</td><td></td></tr>
  * <tr><td>priorityset</td><td>BIGINT</td><td></td></tr>
  * <tr><td>checkaction</td><td>CHAR(1)</td><td></td></tr>
+ * <tr><td>processid</td><td>VARCHAR(16)</td><td></td></tr>
+ * <tr><td>seedingprocessid</td><td>VARCHAR(16)</td><td></td></tr>
  * </table>
  * <br><br>
  * 
@@ -111,7 +113,9 @@
   public static final String docPriorityField = "docpriority";
   public static final String prioritySetField = "priorityset";
   public static final String checkActionField = "checkaction";
-
+  public static final String processIDField = "processid";
+  public static final String seedingProcessIDField = "seedingprocessid";
+  
   public static final double noDocPriorityValue = 1e9;
   public static final Double nullDocPriority = new Double(noDocPriorityValue + 1.0);
   
@@ -200,6 +204,8 @@
         map.put(docPriorityField,new ColumnDescription("FLOAT",false,true,null,null,false));
         map.put(prioritySetField,new ColumnDescription("BIGINT",false,true,null,null,false));
         map.put(checkActionField,new ColumnDescription("CHAR(1)",false,true,null,null,false));
+        map.put(processIDField,new ColumnDescription("VARCHAR(16)",false,true,null,null,false));
+        map.put(seedingProcessIDField,new ColumnDescription("VARCHAR(16)",false,true,null,null,false));
         performCreate(map,null);
       }
       else
@@ -208,6 +214,22 @@
         Map map = new HashMap();
         map.put(docPriorityField,nullDocPriority);
         performUpdate(map,"WHERE "+docPriorityField+" IS NULL",null,null);
+        
+        // Also, add processIDField
+        if (existing.get(processIDField) == null)
+        {
+          Map insertMap = new HashMap();
+          insertMap.put(processIDField,new ColumnDescription("VARCHAR(16)",false,true,null,null,false));
+          performAlter(insertMap,null,null,null);
+        }
+        
+        // Add seedingProcessID field too
+        if (existing.get(seedingProcessIDField) == null)
+        {
+          Map insertMap = new HashMap();
+          insertMap.put(seedingProcessIDField,new ColumnDescription("VARCHAR(16)",false,true,null,null,false));
+          performAlter(insertMap,null,null,null);
+        }
       }
 
       // Secondary table installation
@@ -332,6 +354,72 @@
   /** Restart.
   * This method should be called at initial startup time.  It resets the status of all documents to something
   * reasonable, so the jobs can be restarted and work properly to completion.
+  *@param processID is the processID to clean up after.
+  */
+  public void restart(String processID)
+    throws ManifoldCFException
+  {
+    // Map ACTIVE back to PENDING.
+    HashMap map = new HashMap();
+    map.put(statusField,statusToString(STATUS_PENDING));
+    map.put(processIDField,null);
+    ArrayList list = new ArrayList();
+    String query = buildConjunctionClause(list,new ClauseDescription[]{
+      new MultiClause(statusField,new Object[]{
+        statusToString(STATUS_ACTIVE),
+        statusToString(STATUS_ACTIVENEEDRESCAN)}),
+      new UnitaryClause(processIDField,processID)});
+    performUpdate(map,"WHERE "+query,list,null);
+
+    // Map ACTIVEPURGATORY to PENDINGPURGATORY
+    map.put(statusField,statusToString(STATUS_PENDINGPURGATORY));
+    map.put(processIDField,null);
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new MultiClause(statusField,new Object[]{
+        statusToString(STATUS_ACTIVEPURGATORY),
+        statusToString(STATUS_ACTIVENEEDRESCANPURGATORY)}),
+      new UnitaryClause(processIDField,processID)});
+    performUpdate(map,"WHERE "+query,list,null);
+
+    // Map BEINGDELETED to ELIGIBLEFORDELETE
+    map.put(statusField,statusToString(STATUS_ELIGIBLEFORDELETE));
+    map.put(processIDField,null);
+    map.put(checkTimeField,new Long(0L));
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_BEINGDELETED)),
+      new UnitaryClause(processIDField,processID)});
+    performUpdate(map,"WHERE "+query,list,null);
+
+    // Map BEINGCLEANED to PURGATORY
+    map.put(statusField,statusToString(STATUS_PURGATORY));
+    map.put(processIDField,null);
+    map.put(checkTimeField,new Long(0L));
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_BEINGCLEANED)),
+      new UnitaryClause(processIDField,processID)});
+    performUpdate(map,"WHERE "+query,list,null);
+
+    // Map newseed fields to seed
+    map.clear();
+    map.put(isSeedField,seedstatusToString(SEEDSTATUS_SEED));
+    map.put(seedingProcessIDField,null);
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(isSeedField,seedstatusToString(SEEDSTATUS_NEWSEED)),
+      new UnitaryClause(seedingProcessIDField,processID)});
+    performUpdate(map,"WHERE "+query,list,null);
+
+    // Reindex the jobqueue table, since we've probably made lots of bad tuples doing the above operations.
+    reindexTable();
+    unconditionallyAnalyzeTables();
+
+    TrackerClass.noteGlobalChange("Restart");
+  }
+
+  /** Cleanup after all processIDs.
   */
   public void restart()
     throws ManifoldCFException
@@ -339,6 +427,7 @@
     // Map ACTIVE back to PENDING.
     HashMap map = new HashMap();
     map.put(statusField,statusToString(STATUS_PENDING));
+    map.put(processIDField,null);
     ArrayList list = new ArrayList();
     String query = buildConjunctionClause(list,new ClauseDescription[]{
       new MultiClause(statusField,new Object[]{
@@ -348,6 +437,7 @@
 
     // Map ACTIVEPURGATORY to PENDINGPURGATORY
     map.put(statusField,statusToString(STATUS_PENDINGPURGATORY));
+    map.put(processIDField,null);
     list.clear();
     query = buildConjunctionClause(list,new ClauseDescription[]{
       new MultiClause(statusField,new Object[]{
@@ -357,6 +447,7 @@
 
     // Map BEINGDELETED to ELIGIBLEFORDELETE
     map.put(statusField,statusToString(STATUS_ELIGIBLEFORDELETE));
+    map.put(processIDField,null);
     map.put(checkTimeField,new Long(0L));
     list.clear();
     query = buildConjunctionClause(list,new ClauseDescription[]{
@@ -365,6 +456,7 @@
 
     // Map BEINGCLEANED to PURGATORY
     map.put(statusField,statusToString(STATUS_PURGATORY));
+    map.put(processIDField,null);
     map.put(checkTimeField,new Long(0L));
     list.clear();
     query = buildConjunctionClause(list,new ClauseDescription[]{
@@ -374,26 +466,34 @@
     // Map newseed fields to seed
     map.clear();
     map.put(isSeedField,seedstatusToString(SEEDSTATUS_SEED));
+    map.put(seedingProcessIDField,null);
     list.clear();
     query = buildConjunctionClause(list,new ClauseDescription[]{
       new UnitaryClause(isSeedField,seedstatusToString(SEEDSTATUS_NEWSEED))});
     performUpdate(map,"WHERE "+query,list,null);
 
-    // Clear out all failtime fields (since we obviously haven't been retrying whilst we were not
-    // running)
-    map.clear();
-    map.put(failTimeField,null);
-    list.clear();
-    query = buildConjunctionClause(list,new ClauseDescription[]{
-      new NullCheckClause(failTimeField,false)});
-    performUpdate(map,"WHERE "+query,list,null);
     // Reindex the jobqueue table, since we've probably made lots of bad tuples doing the above operations.
     reindexTable();
     unconditionallyAnalyzeTables();
 
-    TrackerClass.noteGlobalChange("Restart");
+    TrackerClass.noteGlobalChange("Restart cluster");
   }
-
+  
+  /** Restart for the entire cluster.
+  */
+  public void restartCluster()
+    throws ManifoldCFException
+  {
+    // Clear out all failtime fields (since we obviously haven't been retrying whilst we were not
+    // running)
+    HashMap map = new HashMap();
+    map.put(failTimeField,null);
+    ArrayList list = new ArrayList();
+    String query = buildConjunctionClause(list,new ClauseDescription[]{
+      new NullCheckClause(failTimeField,false)});
+    performUpdate(map,"WHERE "+query,list,null);
+  }
+  
   /** Flip all records for a job that have status HOPCOUNTREMOVED back to PENDING.
   * NOTE: We need to actually schedule these!!!  so the following can't really work.  ???
   */
@@ -409,21 +509,11 @@
       new UnitaryClause(jobIDField,jobID),
       new UnitaryClause(statusField,statusToString(STATUS_HOPCOUNTREMOVED))});
     performUpdate(map,"WHERE "+query,list,null);
+    unconditionallyAnalyzeTables();
     
     TrackerClass.noteJobChange(jobID,"Map HOPCOUNTREMOVED to PENDING");
   }
 
-  /** Delete all records for a job that have status HOPCOUNTREMOVED.
-  */
-  public void deleteHopcountRemovedRecords(Long jobID)
-    throws ManifoldCFException
-  {
-    ArrayList list = new ArrayList();
-    String query = buildConjunctionClause(list,new ClauseDescription[]{
-      new UnitaryClause(jobIDField,jobID),
-      new UnitaryClause(statusField,statusToString(STATUS_HOPCOUNTREMOVED))});
-    performDelete("WHERE "+query,list,null);
-  }
 
   /** Clear the failtimes for all documents associated with a job.
   * This method is called when the system detects that a significant delaying event has occurred,
@@ -446,63 +536,75 @@
   * This will get called if something went wrong that could have screwed up the
   * status of a worker thread.  The threads all die/end, and this method
   * resets any active documents back to the right state (waiting for stuffing).
+  *@param processID is the current processID.
   */
-  public void resetDocumentWorkerStatus()
+  public void resetDocumentWorkerStatus(String processID)
     throws ManifoldCFException
   {
     // Map ACTIVE back to PENDING.
     HashMap map = new HashMap();
     map.put(statusField,statusToString(STATUS_PENDING));
+    map.put(processIDField,null);
     ArrayList list = new ArrayList();
     String query = buildConjunctionClause(list,new ClauseDescription[]{
       new MultiClause(statusField,new Object[]{
         statusToString(STATUS_ACTIVE),
-        statusToString(STATUS_ACTIVENEEDRESCAN)})});
+        statusToString(STATUS_ACTIVENEEDRESCAN)}),
+      new UnitaryClause(processIDField,processID)});
     performUpdate(map,"WHERE "+query,list,null);
 
     // Map ACTIVEPURGATORY to PENDINGPURGATORY
     map.put(statusField,statusToString(STATUS_PENDINGPURGATORY));
+    map.put(processIDField,null);
     list.clear();
     query = buildConjunctionClause(list,new ClauseDescription[]{
       new MultiClause(statusField,new Object[]{
         statusToString(STATUS_ACTIVEPURGATORY),
-        statusToString(STATUS_ACTIVENEEDRESCANPURGATORY)})});
+        statusToString(STATUS_ACTIVENEEDRESCANPURGATORY)}),
+      new UnitaryClause(processIDField,processID)});
     performUpdate(map,"WHERE "+query,list,null);
+    unconditionallyAnalyzeTables();
         
     TrackerClass.noteGlobalChange("Reset document worker status");
   }
 
   /** Reset doc delete worker status.
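+  *@param processID is the current processID.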
   */
-  public void resetDocDeleteWorkerStatus()
+  public void resetDocDeleteWorkerStatus(String processID)
     throws ManifoldCFException
   {
     HashMap map = new HashMap();
     // Map BEINGDELETED to ELIGIBLEFORDELETE
     map.put(statusField,statusToString(STATUS_ELIGIBLEFORDELETE));
+    map.put(processIDField,null);
     map.put(checkTimeField,new Long(0L));
     ArrayList list = new ArrayList();
     String query = buildConjunctionClause(list,new ClauseDescription[]{
-      new UnitaryClause(statusField,statusToString(STATUS_BEINGDELETED))});
+      new UnitaryClause(statusField,statusToString(STATUS_BEINGDELETED)),
+      new UnitaryClause(processIDField,processID)});
     performUpdate(map,"WHERE "+query,list,null);
-      
+    unconditionallyAnalyzeTables();
+
     TrackerClass.noteGlobalChange("Reset document delete worker status");
   }
 
   /** Reset doc cleaning worker status.
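+  *@param processID is the current processID.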
   */
-  public void resetDocCleanupWorkerStatus()
+  public void resetDocCleanupWorkerStatus(String processID)
     throws ManifoldCFException
   {
     HashMap map = new HashMap();
     // Map BEINGCLEANED to PURGATORY
     map.put(statusField,statusToString(STATUS_PURGATORY));
+    map.put(processIDField,null);
     map.put(checkTimeField,new Long(0L));
     ArrayList list = new ArrayList();
     String query = buildConjunctionClause(list,new ClauseDescription[]{
-      new UnitaryClause(statusField,statusToString(STATUS_BEINGCLEANED))});
+      new UnitaryClause(statusField,statusToString(STATUS_BEINGCLEANED)),
+      new UnitaryClause(processIDField,processID)});
     performUpdate(map,"WHERE "+query,list,null);
-      
+    unconditionallyAnalyzeTables();
+
     TrackerClass.noteGlobalChange("Reset document cleanup worker status");
   }
 
@@ -569,16 +671,21 @@
   public void prepareFullScan(Long jobID)
     throws ManifoldCFException
   {
-    // Delete PENDING entries
+    // Delete PENDING and HOPCOUNTREMOVED entries (they are treated the same)
     ArrayList list = new ArrayList();
-    list.add(jobID);
-    list.add(statusToString(STATUS_PENDING));
     // Clean out prereqevents table first
-    prereqEventManager.deleteRows(getTableName()+" t0","t0."+idField,"t0."+jobIDField+"=? AND t0."+statusField+"=?",list);
-    list.clear();
     String query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause("t0."+jobIDField,jobID),
+      new MultiClause("t0."+statusField,new Object[]{
+        statusToString(STATUS_PENDING),
+        statusToString(STATUS_HOPCOUNTREMOVED)})});
+    prereqEventManager.deleteRows(getTableName()+" t0","t0."+idField,query,list);
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
       new UnitaryClause(jobIDField,jobID),
-      new UnitaryClause(statusField,statusToString(STATUS_PENDING))});
+      new MultiClause(statusField,new Object[]{
+        statusToString(STATUS_PENDING),
+        statusToString(STATUS_HOPCOUNTREMOVED)})});
     performDelete("WHERE "+query,list,null);
 
     // Turn PENDINGPURGATORY and COMPLETED into PURGATORY.
@@ -610,6 +717,53 @@
     TrackerClass.noteJobChange(jobID,"Prepare full scan");
   }
 
+  /** Reset the schedule for all PENDING and PENDINGPURGATORY entries.
+  *@param jobID is the job identifier.
+  */
+  public void resetPendingDocumentSchedules(Long jobID)
+    throws ManifoldCFException
+  {
+    HashMap map = new HashMap();
+    // Do not reset priorities here!  They should all be blank at this point.
+    map.put(checkTimeField,new Long(0L));
+    map.put(checkActionField,actionToString(ACTION_RESCAN));
+    map.put(failTimeField,null);
+    map.put(failCountField,null);
+    ArrayList list = new ArrayList();
+    String query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(jobIDField,jobID),
+      new MultiClause(statusField,new Object[]{
+        statusToString(STATUS_PENDINGPURGATORY),
+        statusToString(STATUS_PENDING)})});
+    performUpdate(map,"WHERE "+query,list,null);
+    noteModifications(0,1,0);
+  }
+  
+  /** For ADD_CHANGE_DELETE jobs whose specifications have changed,
+  * every existing document must be reconsidered, so requeue them all.
+  *@param jobID is the job identifier.
+  */
+  public void queueAllExisting(Long jobID)
+    throws ManifoldCFException
+  {
+    // Map COMPLETE to PENDINGPURGATORY
+    HashMap map = new HashMap();
+    map.put(statusField,statusToString(STATUS_PENDINGPURGATORY));
+    // Do not reset priorities here!  They should all be blank at this point.
+    map.put(checkTimeField,new Long(0L));
+    map.put(checkActionField,actionToString(ACTION_RESCAN));
+    map.put(failTimeField,null);
+    map.put(failCountField,null);
+    ArrayList list = new ArrayList();
+    String query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(jobIDField,jobID),
+      new UnitaryClause(statusField,statusToString(STATUS_COMPLETE))});
+    performUpdate(map,"WHERE "+query,list,null);
+    noteModifications(0,1,0);
+    // Do an analyze; otherwise the query plans will be poor right off the bat
+    unconditionallyAnalyzeTables();
+  }
+    
   /** Prepare for a "partial" job.  This is called ONLY when the job is inactive.
   *
   * This method maps all COMPLETE entries to UNCHANGED.  The purpose is to
@@ -686,7 +840,8 @@
       list.add(identifiers[i].getID());
       i++;
     }
-    doDeletes(list);
+    if (list.size() > 0)
+      doDeletes(list);
     noteModifications(0,0,identifiers.length);
   }
 
@@ -725,12 +880,12 @@
   }
 
   /** Write out a document priority */
-  public void writeDocPriority(long currentTime, Long rowID, double priority)
+  public void writeDocPriority(long currentTime, Long rowID, IPriorityCalculator priority)
     throws ManifoldCFException
   {
     HashMap map = new HashMap();
     map.put(prioritySetField,new Long(currentTime));
-    map.put(docPriorityField,new Double(priority));
+    map.put(docPriorityField,new Double(priority.getDocumentPriority()));
     ArrayList list = new ArrayList();
     String query = buildConjunctionClause(list,new ClauseDescription[]{
       new UnitaryClause(idField,rowID)});
@@ -787,6 +942,7 @@
     }
 
     map.put(statusField,statusToString(newStatus));
+    map.put(processIDField,null);
     map.put(checkTimeField,checkTimeValue);
     map.put(checkActionField,actionFieldValue);
     map.put(failTimeField,null);
@@ -795,7 +951,7 @@
     String query = buildConjunctionClause(list,new ClauseDescription[]{
       new UnitaryClause(idField,recID)});
     performUpdate(map,"WHERE "+query,list,null);
-      
+    noteModifications(0,1,0);
     TrackerClass.noteRecordChange(recID, newStatus, "Note completion");
   }
 
@@ -837,6 +993,7 @@
     }
 
     map.put(statusField,statusToString(newStatus));
+    map.put(processIDField,null);
     map.put(checkTimeField,checkTimeValue);
     map.put(checkActionField,actionFieldValue);
     map.put(failTimeField,null);
@@ -845,6 +1002,7 @@
     String query = buildConjunctionClause(list,new ClauseDescription[]{
       new UnitaryClause(idField,recID)});
     performUpdate(map,"WHERE "+query,list,null);
+    noteModifications(0,1,0);
     TrackerClass.noteRecordChange(recID, newStatus, "Update or hopcount remove");
     return rval;
   }
@@ -853,7 +1011,7 @@
   *@param id is the job queue id.
   *@param currentStatus is the current status
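+  *@param processID is the current processID.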
   */
-  public void updateActiveRecord(Long id, int currentStatus)
+  public void updateActiveRecord(Long id, int currentStatus, String processID)
     throws ManifoldCFException
   {
     int newStatus;
@@ -872,6 +1030,7 @@
 
     HashMap map = new HashMap();
     map.put(statusField,statusToString(newStatus));
+    map.put(processIDField,processID);
     ArrayList list = new ArrayList();
     String query = buildConjunctionClause(list,new ClauseDescription[]{
       new UnitaryClause(idField,id)});
@@ -881,17 +1040,16 @@
   }
 
   /** Set the status on a record, including check time and priority.
-  * The status set MUST be a PENDING or PENDINGPURGATORY status.
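+  * The status set is always PENDINGPURGATORY.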
   *@param id is the job queue id.
-  *@param status is the desired status
   *@param checkTime is the check time.
   */
-  public void setStatus(Long id, int status,
+  public void setRequeuedStatus(Long id,
     Long checkTime, int action, long failTime, int failCount)
     throws ManifoldCFException
   {
     HashMap map = new HashMap();
-    map.put(statusField,statusToString(status));
+    map.put(statusField,statusToString(STATUS_PENDINGPURGATORY));
+    map.put(processIDField,null);
     map.put(checkTimeField,checkTime);
     map.put(checkActionField,actionToString(action));
     if (failTime == -1L)
@@ -909,16 +1067,17 @@
       new UnitaryClause(idField,id)});
     performUpdate(map,"WHERE "+query,list,null);
     noteModifications(0,1,0);
-    TrackerClass.noteRecordChange(id, status, "Set status");
+    TrackerClass.noteRecordChange(id, STATUS_PENDINGPURGATORY, "Set requeued status");
   }
 
   /** Set the status of a document to "being deleted".
   */
-  public void setDeletingStatus(Long id)
+  public void setDeletingStatus(Long id, String processID)
     throws ManifoldCFException
   {
     HashMap map = new HashMap();
     map.put(statusField,statusToString(STATUS_BEINGDELETED));
+    map.put(processIDField,processID);
     ArrayList list = new ArrayList();
     String query = buildConjunctionClause(list,new ClauseDescription[]{
       new UnitaryClause(idField,id)});
@@ -933,6 +1092,7 @@
   {
     HashMap map = new HashMap();
     map.put(statusField,statusToString(STATUS_ELIGIBLEFORDELETE));
+    map.put(processIDField,null);
     map.put(checkTimeField,new Long(checkTime));
     map.put(checkActionField,null);
     map.put(failTimeField,null);
@@ -947,11 +1107,12 @@
 
   /** Set the status of a document to "being cleaned".
   */
-  public void setCleaningStatus(Long id)
+  public void setCleaningStatus(Long id, String processID)
     throws ManifoldCFException
   {
     HashMap map = new HashMap();
     map.put(statusField,statusToString(STATUS_BEINGCLEANED));
+    map.put(processIDField,processID);
     ArrayList list = new ArrayList();
     String query = buildConjunctionClause(list,new ClauseDescription[]{
       new UnitaryClause(idField,id)});
@@ -966,6 +1127,7 @@
   {
     HashMap map = new HashMap();
     map.put(statusField,statusToString(STATUS_PURGATORY));
+    map.put(processIDField,null);
     map.put(checkTimeField,new Long(checkTime));
     map.put(checkActionField,null);
     map.put(failTimeField,null);
@@ -1037,18 +1199,18 @@
   /** Update an existing record (as the result of an initial add).
   * The record is presumed to exist and have been locked, via "FOR UPDATE".
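+  *@param processID is the current processID.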
   */
-  public boolean updateExistingRecordInitial(Long recordID, int currentStatus, Long checkTimeValue,
-    long desiredExecuteTime, long currentTime, double desiredPriority, String[] prereqEvents)
+  public void updateExistingRecordInitial(Long recordID, int currentStatus, Long checkTimeValue,
+    long desiredExecuteTime, long currentTime, IPriorityCalculator desiredPriority, String[] prereqEvents,
+    String processID)
     throws ManifoldCFException
   {
     // The general rule here is:
     // If doesn't exist, make a PENDING entry.
     // If PENDING, keep it as PENDING.
     // If COMPLETE, make a PENDING entry.
     // If PURGATORY, make a PENDINGPURGATORY entry.
     // Leave everything else alone and do nothing.
 
-    boolean rval = false;
     HashMap map = new HashMap();
     switch (currentStatus)
     {
@@ -1076,9 +1238,8 @@
       map.put(failTimeField,null);
       map.put(failCountField,null);
       // Update the doc priority.
-      map.put(docPriorityField,new Double(desiredPriority));
+      map.put(docPriorityField,new Double(desiredPriority.getDocumentPriority()));
       map.put(prioritySetField,new Long(currentTime));
-      rval = true;
       break;
 
     case STATUS_PENDING:
@@ -1116,6 +1277,7 @@
 
     }
     map.put(isSeedField,seedstatusToString(SEEDSTATUS_NEWSEED));
+    map.put(seedingProcessIDField,processID);
     // Delete any existing prereqevent entries first
     prereqEventManager.deleteRows(recordID);
     ArrayList list = new ArrayList();
@@ -1125,7 +1287,6 @@
     // Insert prereqevent entries, if any
     prereqEventManager.addRows(recordID,prereqEvents);
     noteModifications(0,1,0);
-    return rval;
   }
 
   /** Insert a new record into the jobqueue table (as part of adding an initial reference).
@@ -1134,8 +1295,8 @@
   *@param docHash is the hash of the local document identifier.
   *@param docID is the local document identifier.
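+  *@param processID is the current processID.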
   */
-  public void insertNewRecordInitial(Long jobID, String docHash, String docID, double desiredDocPriority,
-    long desiredExecuteTime, long currentTime, String[] prereqEvents)
+  public void insertNewRecordInitial(Long jobID, String docHash, String docID, IPriorityCalculator desiredDocPriority,
+    long desiredExecuteTime, long currentTime, String[] prereqEvents, String processID)
     throws ManifoldCFException
   {
     // No prerequisites should be possible at this point.
@@ -1152,8 +1313,9 @@
     map.put(docIDField,docID);
     map.put(statusField,statusToString(STATUS_PENDING));
     map.put(isSeedField,seedstatusToString(SEEDSTATUS_NEWSEED));
+    map.put(seedingProcessIDField,processID);
     // Set the document priority
-    map.put(docPriorityField,new Double(desiredDocPriority));
+    map.put(docPriorityField,new Double(desiredDocPriority.getDocumentPriority()));
     map.put(prioritySetField,new Long(currentTime));
     performInsert(map,null);
     prereqEventManager.addRows(recordID,prereqEvents);
@@ -1164,7 +1326,7 @@
   /** Note the remaining documents that do NOT need to be queued.  These are noted so that the
   * doneDocumentsInitial() method does not clean up seeds from previous runs wrongly.
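+  *@param processID is the current processID.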
   */
-  public void addRemainingDocumentsInitial(Long jobID, String[] docIDHashes)
+  public void addRemainingDocumentsInitial(Long jobID, String[] docIDHashes, String processID)
     throws ManifoldCFException
   {
     if (docIDHashes.length == 0)
@@ -1215,7 +1377,7 @@
     {
       if (k == maxClause)
       {
-        updateRemainingDocuments(list);
+        updateRemainingDocuments(list,processID);
         k = 0;
         list.clear();
       }
@@ -1223,7 +1385,7 @@
       k++;
     }
     if (k > 0)
-      updateRemainingDocuments(list);
+      updateRemainingDocuments(list,processID);
     noteModifications(0,docIDHashes.length,0);
   }
 
@@ -1265,11 +1427,12 @@
   }
   
   /** Update the specified set of documents to be "NEWSEED" */
-  protected void updateRemainingDocuments(ArrayList list)
+  protected void updateRemainingDocuments(ArrayList list, String processID)
     throws ManifoldCFException
   {
     HashMap map = new HashMap();
     map.put(isSeedField,seedstatusToString(SEEDSTATUS_NEWSEED));
+    map.put(seedingProcessIDField,processID);
     ArrayList newList = new ArrayList();
     String query = buildConjunctionClause(newList,new ClauseDescription[]{
       new MultiClause(idField,list)});
@@ -1302,6 +1465,7 @@
       new UnitaryClause(isSeedField,seedstatusToString(SEEDSTATUS_SEED)),
       new UnitaryClause(jobIDField,jobID)});
     map.put(isSeedField,seedstatusToString(SEEDSTATUS_SEED));
+    map.put(seedingProcessIDField,null);
     performUpdate(map,"WHERE "+query,list,null);
   }
 
@@ -1331,14 +1495,12 @@
 
   /** Update an existing record (as the result of a reference add).
   * The record is presumed to exist and have been locked, via "FOR UPDATE".
-  *@return true if the document priority slot has been retained, false if freed.
   */
-  public boolean updateExistingRecord(Long recordID, int currentStatus, Long checkTimeValue,
+  public void updateExistingRecord(Long recordID, int currentStatus, Long checkTimeValue,
     long desiredExecuteTime, long currentTime, boolean otherChangesSeen,
-    double desiredPriority, String[] prereqEvents)
+    IPriorityCalculator desiredPriority, String[] prereqEvents)
     throws ManifoldCFException
   {
-    boolean rval = false;
     HashMap map = new HashMap();
     switch (currentStatus)
     {
@@ -1352,9 +1514,8 @@
       map.put(failTimeField,null);
       map.put(failCountField,null);
       // Going into pending: set the docpriority.
-      map.put(docPriorityField,new Double(desiredPriority));
+      map.put(docPriorityField,new Double(desiredPriority.getDocumentPriority()));
       map.put(prioritySetField,new Long(currentTime));
-      rval = true;
       break;
     case STATUS_COMPLETE:
     case STATUS_BEINGCLEANED:
@@ -1370,17 +1531,16 @@
         map.put(failTimeField,null);
         map.put(failCountField,null);
         // Going into pending: set the docpriority.
-        map.put(docPriorityField,new Double(desiredPriority));
+        map.put(docPriorityField,new Double(desiredPriority.getDocumentPriority()));
         map.put(prioritySetField,new Long(currentTime));
-        rval = true;
         break;
       }
-      return rval;
+      return;
     case STATUS_ACTIVENEEDRESCAN:
     case STATUS_ACTIVENEEDRESCANPURGATORY:
       // Document is in the queue, but already needs a rescan for prior reasons.
       // We're done.
-      return rval;
+      return;
     case STATUS_ACTIVE:
       // Document is in the queue.
       // The problem here is that we have no idea when the document is actually being worked on; we only find out when the document is actually *done*.
@@ -1400,12 +1560,11 @@
         map.put(failTimeField,null);
         map.put(failCountField,null);
         // Going into pending: set the docpriority.
-        map.put(docPriorityField,new Double(desiredPriority));
+        map.put(docPriorityField,new Double(desiredPriority.getDocumentPriority()));
         map.put(prioritySetField,new Long(currentTime));
-        rval = true;
         break;
       }
-      return rval;
+      return;
     case STATUS_ACTIVEPURGATORY:
       // Document is in the queue.
       // The problem here is that we have no idea when the document is actually being worked on; we only find out when the document is actually *done*.
@@ -1425,12 +1584,11 @@
         map.put(failTimeField,null);
         map.put(failCountField,null);
         // Going into pending: set the docpriority.
-        map.put(docPriorityField,new Double(desiredPriority));
+        map.put(docPriorityField,new Double(desiredPriority.getDocumentPriority()));
         map.put(prioritySetField,new Long(currentTime));
-        rval = true;
         break;
       }
-      return rval;
+      return;
     case STATUS_PENDING:
       // Document is already waiting to be processed.
       // Bump up the schedule, if called for.  Otherwise, just leave it alone.
@@ -1439,7 +1597,7 @@
       {
         long currentExecuteTime = cv.longValue();
         if (currentExecuteTime <= desiredExecuteTime)
-          return rval;
+          return;
       }
       map.put(checkTimeField,new Long(desiredExecuteTime));
       map.put(checkActionField,actionToString(ACTION_RESCAN));
@@ -1453,7 +1611,7 @@
       // Also, leave doc priority alone
       // Fall through...
     default:
-      return rval;
+      return;
     }
     prereqEventManager.deleteRows(recordID);
     ArrayList list = new ArrayList();
@@ -1462,13 +1620,13 @@
     performUpdate(map,"WHERE "+query,list,null);
     prereqEventManager.addRows(recordID,prereqEvents);
     noteModifications(0,1,0);
-    return rval;
+    return;
   }
 
   /** Insert a new record into the jobqueue table (as part of adding a child reference).
   *
   */
-  public void insertNewRecord(Long jobID, String docIDHash, String docID, double desiredDocPriority, long desiredExecuteTime,
+  public void insertNewRecord(Long jobID, String docIDHash, String docID, IPriorityCalculator desiredDocPriority, long desiredExecuteTime,
     long currentTime, String[] prereqEvents)
     throws ManifoldCFException
   {
@@ -1482,7 +1640,7 @@
     map.put(docIDField,docID);
     map.put(statusField,statusToString(STATUS_PENDING));
     // Be sure to set the priority also
-    map.put(docPriorityField,new Double(desiredDocPriority));
+    map.put(docPriorityField,new Double(desiredDocPriority.getDocumentPriority()));
     map.put(prioritySetField,new Long(currentTime));
     performInsert(map,null);
     prereqEventManager.addRows(recordID,prereqEvents);
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/jobs/Jobs.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/jobs/Jobs.java
index 418017f..c36f582 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/jobs/Jobs.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/jobs/Jobs.java
@@ -21,6 +21,7 @@
 import org.apache.manifoldcf.core.interfaces.*;
 import org.apache.manifoldcf.agents.interfaces.*;
 import org.apache.manifoldcf.crawler.interfaces.*;
+import org.apache.manifoldcf.crawler.system.ManifoldCF;
 import org.apache.manifoldcf.crawler.interfaces.CacheKeyFactory;
 import java.util.*;
 
@@ -52,6 +53,7 @@
  * <tr><td>reseedinterval</td><td>BIGINT</td><td></td></tr>
  * <tr><td>reseedtime</td><td>BIGINT</td><td></td></tr>
  * <tr><td>hopcountmode</td><td>CHAR(1)</td><td></td></tr>
+ * <tr><td>processid</td><td>VARCHAR(16)</td><td></td></tr>
  * </table>
  * <br><br>
  * 
@@ -181,12 +183,13 @@
   public final static String reseedTimeField = "reseedtime";
   /** For a job whose connector supports hopcounts, this describes how those hopcounts are handled. */
   public final static String hopcountModeField = "hopcountmode";
-
+  /** Process id field, for keeping track of which process owns transient state */
+  public final static String processIDField = "processid";
+  
   protected static Map statusMap;
   protected static Map typeMap;
   protected static Map startMap;
   protected static Map hopmodeMap;
-
   static
   {
     statusMap = new HashMap();
@@ -254,6 +257,36 @@
     hopmodeMap.put("V",new Integer(HOPCOUNT_NEVERDELETE));
   }
 
+  /*
+  protected static Set<Integer> transientStates;
+  static
+  {
+    transientStates = new HashSet<Integer>();
+    transientStates.add(new Integer(STATUS_DELETESTARTINGUP));
+    transientStates.add(new Integer(STATUS_NOTIFYINGOFCOMPLETION));
+    transientStates.add(new Integer(STATUS_STARTINGUP));
+    transientStates.add(new Integer(STATUS_ABORTINGSTARTINGUP));
+    transientStates.add(new Integer(STATUS_STARTINGUPMINIMAL));
+    transientStates.add(new Integer(STATUS_ABORTINGSTARTINGUPMINIMAL));
+    transientStates.add(new Integer(STATUS_ABORTINGSTARTINGUPFORRESTART));
+    transientStates.add(new Integer(STATUS_ABORTINGSTARTINGUPFORRESTARTMINIMAL));
+    transientStates.add(new Integer(STATUS_ACTIVESEEDING));
+    transientStates.add(new Integer(STATUS_PAUSINGSEEDING));
+    transientStates.add(new Integer(STATUS_ACTIVEWAITINGSEEDING));
+    transientStates.add(new Integer(STATUS_PAUSINGWAITINGSEEDING));
+    transientStates.add(new Integer(STATUS_RESUMINGSEEDING));
+    transientStates.add(new Integer(STATUS_ABORTINGSEEDING));
+    transientStates.add(new Integer(STATUS_ABORTINGFORRESTARTSEEDING));
+    transientStates.add(new Integer(STATUS_ABORTINGFORRESTARTSEEDINGMINIMAL));
+    transientStates.add(new Integer(STATUS_PAUSEDSEEDING));
+    transientStates.add(new Integer(STATUS_ACTIVEWAITSEEDING));
+    transientStates.add(new Integer(STATUS_PAUSEDWAITSEEDING));
+    transientStates.add(new Integer(STATUS_ACTIVESEEDING_UNINSTALLED));
+    transientStates.add(new Integer(STATUS_ACTIVESEEDING_NOOUTPUT));
+    transientStates.add(new Integer(STATUS_ACTIVESEEDING_NEITHER));
+  }
+  */
+  
   // Local variables
   protected ICacheManager cacheManager;
   protected ScheduleManager scheduleManager;
@@ -316,11 +349,18 @@
         map.put(reseedIntervalField,new ColumnDescription("BIGINT",false,true,null,null,false));
         map.put(reseedTimeField,new ColumnDescription("BIGINT",false,true,null,null,false));
         map.put(hopcountModeField,new ColumnDescription("CHAR(1)",false,true,null,null,false));
+        map.put(processIDField,new ColumnDescription("VARCHAR(16)",false,true,null,null,false));
         performCreate(map,null);
       }
       else
       {
         // Do any needed upgrades
+        if (existing.get(processIDField) == null)
+        {
+          Map insertMap = new HashMap();
+          insertMap.put(processIDField,new ColumnDescription("VARCHAR(16)",false,true,null,null,false));
+          performAlter(insertMap,null,null,null);
+        }
       }
 
       // Handle related tables
@@ -330,6 +370,7 @@
 
       // Index management
       IndexDescription statusIndex = new IndexDescription(false,new String[]{statusField,idField,priorityField});
+      IndexDescription statusProcessIndex = new IndexDescription(false,new String[]{statusField,processIDField});
       IndexDescription connectionIndex = new IndexDescription(false,new String[]{connectionNameField});
       IndexDescription outputIndex = new IndexDescription(false,new String[]{outputNameField});
 
@@ -343,6 +384,8 @@
 
         if (statusIndex != null && id.equals(statusIndex))
           statusIndex = null;
+        else if (statusProcessIndex != null && id.equals(statusProcessIndex))
+          statusProcessIndex = null;
         else if (connectionIndex != null && id.equals(connectionIndex))
           connectionIndex = null;
         else if (outputIndex != null && id.equals(outputIndex))
@@ -355,6 +398,8 @@
       // Add the ones we didn't find
       if (statusIndex != null)
         performAddIndex(null,statusIndex);
+      if (statusProcessIndex != null)
+        performAddIndex(null,statusProcessIndex);
       if (connectionIndex != null)
         performAddIndex(null,connectionIndex);
       if (outputIndex != null)
@@ -752,6 +797,33 @@
                   isSame = false;
               }
 
+              if (isSame)
+              {
+                // Compare hopcount filter criteria.
+                Map filterRows = hopFilterManager.readRows(id);
+                Map newFilterRows = jobDescription.getHopCountFilters();
+                if (filterRows.size() != newFilterRows.size())
+                  isSame = false;
+                else
+                {
+                  for (String linkType : (Collection<String>)filterRows.keySet())
+                  {
+                    Long oldCount = (Long)filterRows.get(linkType);
+                    Long newCount = (Long)newFilterRows.get(linkType);
+                    if (oldCount == null || newCount == null)
+                    {
+                      isSame = false;
+                      break;
+                    }
+                    else if (oldCount.longValue() != newCount.longValue())
+                    {
+                      isSame = false;
+                      break;
+                    }
+                  }
+                }
+              }
+              
               if (!isSame)
                 values.put(lastCheckTimeField,null);
 
@@ -820,152 +892,328 @@
   }
 
   /** This method is called on a restart.
+  *@param processID is the processID of the process that is restarting.
+  */
+  public void restart(String processID)
+    throws ManifoldCFException
+  {
+    StringSet invKey = new StringSet(getJobStatusKey());
+    ArrayList list = new ArrayList();
+    HashMap map = new HashMap();
+    String query;
+      
+    // Starting up delete goes back to just being ready for delete
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_DELETESTARTINGUP)),
+      new UnitaryClause(processIDField,processID)});
+    map.put(statusField,statusToString(STATUS_READYFORDELETE));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
+
+    // Notifying of completion goes back to just being ready for notify
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_NOTIFYINGOFCOMPLETION)),
+      new UnitaryClause(processIDField,processID)});
+    map.put(statusField,statusToString(STATUS_READYFORNOTIFY));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
+
+    // Starting up or aborting starting up goes back to just being ready
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new MultiClause(statusField,new Object[]{
+        statusToString(STATUS_STARTINGUP),
+        statusToString(STATUS_ABORTINGSTARTINGUP)}),
+      new UnitaryClause(processIDField,processID)});
+    map.put(statusField,statusToString(STATUS_READYFORSTARTUP));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
+
+    // Starting up or aborting starting up goes back to just being ready
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new MultiClause(statusField,new Object[]{
+        statusToString(STATUS_STARTINGUPMINIMAL),
+        statusToString(STATUS_ABORTINGSTARTINGUPMINIMAL)}),
+      new UnitaryClause(processIDField,processID)});
+    map.put(statusField,statusToString(STATUS_READYFORSTARTUPMINIMAL));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
+
+    // Aborting starting up for restart state goes to ABORTINGFORRESTART
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_ABORTINGSTARTINGUPFORRESTART)),
+      new UnitaryClause(processIDField,processID)});
+    map.put(statusField,statusToString(STATUS_ABORTINGFORRESTART));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
+
+    // Aborting starting up for restart state goes to ABORTINGFORRESTART
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_ABORTINGSTARTINGUPFORRESTARTMINIMAL)),
+      new UnitaryClause(processIDField,processID)});
+    map.put(statusField,statusToString(STATUS_ABORTINGFORRESTARTMINIMAL));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
+
+    // All seeding values return to pre-seeding values
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_ACTIVESEEDING)),
+      new UnitaryClause(processIDField,processID)});
+    map.put(statusField,statusToString(STATUS_ACTIVE));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_PAUSINGSEEDING)),
+      new UnitaryClause(processIDField,processID)});
+    map.put(statusField,statusToString(STATUS_PAUSING));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_ACTIVEWAITINGSEEDING)),
+      new UnitaryClause(processIDField,processID)});
+    map.put(statusField,statusToString(STATUS_ACTIVEWAITING));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_PAUSINGWAITINGSEEDING)),
+      new UnitaryClause(processIDField,processID)});
+    map.put(statusField,statusToString(STATUS_PAUSINGWAITING));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_RESUMINGSEEDING)),
+      new UnitaryClause(processIDField,processID)});
+    map.put(statusField,statusToString(STATUS_RESUMING));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_ABORTINGSEEDING)),
+      new UnitaryClause(processIDField,processID)});
+    map.put(statusField,statusToString(STATUS_ABORTING));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_ABORTINGFORRESTARTSEEDING)),
+      new UnitaryClause(processIDField,processID)});
+    map.put(statusField,statusToString(STATUS_ABORTINGFORRESTART));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_ABORTINGFORRESTARTSEEDINGMINIMAL)),
+      new UnitaryClause(processIDField,processID)});
+    map.put(statusField,statusToString(STATUS_ABORTINGFORRESTARTMINIMAL));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_PAUSEDSEEDING)),
+      new UnitaryClause(processIDField,processID)});
+    map.put(statusField,statusToString(STATUS_PAUSED));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_ACTIVEWAITSEEDING)),
+      new UnitaryClause(processIDField,processID)});
+    map.put(statusField,statusToString(STATUS_ACTIVEWAIT));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_PAUSEDWAITSEEDING)),
+      new UnitaryClause(processIDField,processID)});
+    map.put(statusField,statusToString(STATUS_PAUSEDWAIT));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_ACTIVESEEDING_UNINSTALLED)),
+      new UnitaryClause(processIDField,processID)});
+    map.put(statusField,statusToString(STATUS_ACTIVE_UNINSTALLED));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_ACTIVESEEDING_NOOUTPUT)),
+      new UnitaryClause(processIDField,processID)});
+    map.put(statusField,statusToString(STATUS_ACTIVE_NOOUTPUT));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_ACTIVESEEDING_NEITHER)),
+      new UnitaryClause(processIDField,processID)});
+    map.put(statusField,statusToString(STATUS_ACTIVE_NEITHER));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
+
+  }
+
+  /** Clean up after all process IDs.
   */
   public void restart()
     throws ManifoldCFException
   {
-    beginTransaction();
-    try
-    {
-      StringSet invKey = new StringSet(getJobStatusKey());
-      ArrayList list = new ArrayList();
-      HashMap map = new HashMap();
-      String query;
+    StringSet invKey = new StringSet(getJobStatusKey());
+    ArrayList list = new ArrayList();
+    HashMap map = new HashMap();
+    String query;
       
-      // Starting up delete goes back to just being ready for delete
-      query = buildConjunctionClause(list,new ClauseDescription[]{
-        new UnitaryClause(statusField,statusToString(STATUS_DELETESTARTINGUP))});
-      map.put(statusField,statusToString(STATUS_READYFORDELETE));
-      performUpdate(map,"WHERE "+query,list,invKey);
+    // Starting up delete goes back to just being ready for delete
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_DELETESTARTINGUP))});
+    map.put(statusField,statusToString(STATUS_READYFORDELETE));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
 
-      // Notifying of completion goes back to just being ready for notify
-      list.clear();
-      query = buildConjunctionClause(list,new ClauseDescription[]{
-        new UnitaryClause(statusField,statusToString(STATUS_NOTIFYINGOFCOMPLETION))});
-      map.put(statusField,statusToString(STATUS_READYFORNOTIFY));
-      performUpdate(map,"WHERE "+query,list,invKey);
+    // Notifying of completion goes back to just being ready for notify
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_NOTIFYINGOFCOMPLETION))});
+    map.put(statusField,statusToString(STATUS_READYFORNOTIFY));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
 
-      // Starting up or aborting starting up goes back to just being ready
-      list.clear();
-      query = buildConjunctionClause(list,new ClauseDescription[]{
-        new MultiClause(statusField,new Object[]{
-          statusToString(STATUS_STARTINGUP),
-          statusToString(STATUS_ABORTINGSTARTINGUP)})});
-      map.put(statusField,statusToString(STATUS_READYFORSTARTUP));
-      performUpdate(map,"WHERE "+query,list,invKey);
+    // Starting up or aborting starting up goes back to just being ready
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new MultiClause(statusField,new Object[]{
+        statusToString(STATUS_STARTINGUP),
+        statusToString(STATUS_ABORTINGSTARTINGUP)})});
+    map.put(statusField,statusToString(STATUS_READYFORSTARTUP));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
 
-      // Starting up or aborting starting up goes back to just being ready
-      list.clear();
-      query = buildConjunctionClause(list,new ClauseDescription[]{
-        new MultiClause(statusField,new Object[]{
-          statusToString(STATUS_STARTINGUPMINIMAL),
-          statusToString(STATUS_ABORTINGSTARTINGUPMINIMAL)})});
-      map.put(statusField,statusToString(STATUS_READYFORSTARTUPMINIMAL));
-      performUpdate(map,"WHERE "+query,list,invKey);
+    // Starting up or aborting starting up goes back to just being ready
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new MultiClause(statusField,new Object[]{
+        statusToString(STATUS_STARTINGUPMINIMAL),
+        statusToString(STATUS_ABORTINGSTARTINGUPMINIMAL)})});
+    map.put(statusField,statusToString(STATUS_READYFORSTARTUPMINIMAL));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
 
-      // Aborting starting up for restart state goes to ABORTINGFORRESTART
-      list.clear();
-      query = buildConjunctionClause(list,new ClauseDescription[]{
-        new UnitaryClause(statusField,statusToString(STATUS_ABORTINGSTARTINGUPFORRESTART))});
-      map.put(statusField,statusToString(STATUS_ABORTINGFORRESTART));
-      performUpdate(map,"WHERE "+query,list,invKey);
+    // Aborting starting up for restart state goes to ABORTINGFORRESTART
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_ABORTINGSTARTINGUPFORRESTART))});
+    map.put(statusField,statusToString(STATUS_ABORTINGFORRESTART));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
 
-      // Aborting starting up for restart state goes to ABORTINGFORRESTART
-      list.clear();
-      query = buildConjunctionClause(list,new ClauseDescription[]{
-        new UnitaryClause(statusField,statusToString(STATUS_ABORTINGSTARTINGUPFORRESTARTMINIMAL))});
-      map.put(statusField,statusToString(STATUS_ABORTINGFORRESTARTMINIMAL));
-      performUpdate(map,"WHERE "+query,list,invKey);
+    // Aborting starting up for restart state goes to ABORTINGFORRESTART
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_ABORTINGSTARTINGUPFORRESTARTMINIMAL))});
+    map.put(statusField,statusToString(STATUS_ABORTINGFORRESTARTMINIMAL));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
 
-      // All seeding values return to pre-seeding values
-      list.clear();
-      query = buildConjunctionClause(list,new ClauseDescription[]{
-        new UnitaryClause(statusField,statusToString(STATUS_ACTIVESEEDING))});
-      map.put(statusField,statusToString(STATUS_ACTIVE));
-      performUpdate(map,"WHERE "+query,list,invKey);
-      list.clear();
-      query = buildConjunctionClause(list,new ClauseDescription[]{
-        new UnitaryClause(statusField,statusToString(STATUS_PAUSINGSEEDING))});
-      map.put(statusField,statusToString(STATUS_PAUSING));
-      performUpdate(map,"WHERE "+query,list,invKey);
-      list.clear();
-      query = buildConjunctionClause(list,new ClauseDescription[]{
-        new UnitaryClause(statusField,statusToString(STATUS_ACTIVEWAITINGSEEDING))});
-      map.put(statusField,statusToString(STATUS_ACTIVEWAITING));
-      performUpdate(map,"WHERE "+query,list,invKey);
-      list.clear();
-      query = buildConjunctionClause(list,new ClauseDescription[]{
-        new UnitaryClause(statusField,statusToString(STATUS_PAUSINGWAITINGSEEDING))});
-      map.put(statusField,statusToString(STATUS_PAUSINGWAITING));
-      performUpdate(map,"WHERE "+query,list,invKey);
-      list.clear();
-      query = buildConjunctionClause(list,new ClauseDescription[]{
-        new UnitaryClause(statusField,statusToString(STATUS_RESUMINGSEEDING))});
-      map.put(statusField,statusToString(STATUS_RESUMING));
-      performUpdate(map,"WHERE "+query,list,invKey);
-      list.clear();
-      query = buildConjunctionClause(list,new ClauseDescription[]{
-        new UnitaryClause(statusField,statusToString(STATUS_ABORTINGSEEDING))});
-      map.put(statusField,statusToString(STATUS_ABORTING));
-      performUpdate(map,"WHERE "+query,list,invKey);
-      list.clear();
-      query = buildConjunctionClause(list,new ClauseDescription[]{
-        new UnitaryClause(statusField,statusToString(STATUS_ABORTINGFORRESTARTSEEDING))});
-      map.put(statusField,statusToString(STATUS_ABORTINGFORRESTART));
-      performUpdate(map,"WHERE "+query,list,invKey);
-      list.clear();
-      query = buildConjunctionClause(list,new ClauseDescription[]{
-        new UnitaryClause(statusField,statusToString(STATUS_ABORTINGFORRESTARTSEEDINGMINIMAL))});
-      map.put(statusField,statusToString(STATUS_ABORTINGFORRESTARTMINIMAL));
-      performUpdate(map,"WHERE "+query,list,invKey);
-      list.clear();
-      query = buildConjunctionClause(list,new ClauseDescription[]{
-        new UnitaryClause(statusField,statusToString(STATUS_PAUSEDSEEDING))});
-      map.put(statusField,statusToString(STATUS_PAUSED));
-      performUpdate(map,"WHERE "+query,list,invKey);
-      list.clear();
-      query = buildConjunctionClause(list,new ClauseDescription[]{
-        new UnitaryClause(statusField,statusToString(STATUS_ACTIVEWAITSEEDING))});
-      map.put(statusField,statusToString(STATUS_ACTIVEWAIT));
-      performUpdate(map,"WHERE "+query,list,invKey);
-      list.clear();
-      query = buildConjunctionClause(list,new ClauseDescription[]{
-        new UnitaryClause(statusField,statusToString(STATUS_PAUSEDWAITSEEDING))});
-      map.put(statusField,statusToString(STATUS_PAUSEDWAIT));
-      performUpdate(map,"WHERE "+query,list,invKey);
-      list.clear();
-      query = buildConjunctionClause(list,new ClauseDescription[]{
-        new UnitaryClause(statusField,statusToString(STATUS_ACTIVESEEDING_UNINSTALLED))});
-      map.put(statusField,statusToString(STATUS_ACTIVE_UNINSTALLED));
-      performUpdate(map,"WHERE "+query,list,invKey);
-      list.clear();
-      query = buildConjunctionClause(list,new ClauseDescription[]{
-        new UnitaryClause(statusField,statusToString(STATUS_ACTIVESEEDING_NOOUTPUT))});
-      map.put(statusField,statusToString(STATUS_ACTIVE_NOOUTPUT));
-      performUpdate(map,"WHERE "+query,list,invKey);
-      list.clear();
-      query = buildConjunctionClause(list,new ClauseDescription[]{
-        new UnitaryClause(statusField,statusToString(STATUS_ACTIVESEEDING_NEITHER))});
-      map.put(statusField,statusToString(STATUS_ACTIVE_NEITHER));
-      performUpdate(map,"WHERE "+query,list,invKey);
+    // All seeding values return to pre-seeding values
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_ACTIVESEEDING))});
+    map.put(statusField,statusToString(STATUS_ACTIVE));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_PAUSINGSEEDING))});
+    map.put(statusField,statusToString(STATUS_PAUSING));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_ACTIVEWAITINGSEEDING))});
+    map.put(statusField,statusToString(STATUS_ACTIVEWAITING));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_PAUSINGWAITINGSEEDING))});
+    map.put(statusField,statusToString(STATUS_PAUSINGWAITING));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_RESUMINGSEEDING))});
+    map.put(statusField,statusToString(STATUS_RESUMING));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_ABORTINGSEEDING))});
+    map.put(statusField,statusToString(STATUS_ABORTING));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_ABORTINGFORRESTARTSEEDING))});
+    map.put(statusField,statusToString(STATUS_ABORTINGFORRESTART));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_ABORTINGFORRESTARTSEEDINGMINIMAL))});
+    map.put(statusField,statusToString(STATUS_ABORTINGFORRESTARTMINIMAL));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_PAUSEDSEEDING))});
+    map.put(statusField,statusToString(STATUS_PAUSED));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_ACTIVEWAITSEEDING))});
+    map.put(statusField,statusToString(STATUS_ACTIVEWAIT));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_PAUSEDWAITSEEDING))});
+    map.put(statusField,statusToString(STATUS_PAUSEDWAIT));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_ACTIVESEEDING_UNINSTALLED))});
+    map.put(statusField,statusToString(STATUS_ACTIVE_UNINSTALLED));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_ACTIVESEEDING_NOOUTPUT))});
+    map.put(statusField,statusToString(STATUS_ACTIVE_NOOUTPUT));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_ACTIVESEEDING_NEITHER))});
+    map.put(statusField,statusToString(STATUS_ACTIVE_NEITHER));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
 
-      // No need to do anything to the queue; it looks like it can take care of
-      // itself.
-    }
-    catch (ManifoldCFException e)
-    {
-      signalRollback();
-      throw e;
-    }
-    catch (Error e)
-    {
-      signalRollback();
-      throw e;
-    }
-    finally
-    {
-      endTransaction();
-    }
+  }
+
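+  /** Restart for the entire cluster.
+  */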
+  public void restartCluster()
+    throws ManifoldCFException
+  {
+    // Does nothing
   }
 
   /** Signal to a job that its underlying output connector has gone away.
@@ -1201,7 +1449,7 @@
 
   /** Reset delete startup worker thread status.
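+  *@param processID is the current processID.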
   */
-  public void resetDeleteStartupWorkerStatus()
+  public void resetDeleteStartupWorkerStatus(String processID)
     throws ManifoldCFException
   {
     // This handles everything that the delete startup thread would resolve.
@@ -1209,15 +1457,17 @@
     ArrayList list = new ArrayList();
     HashMap map = new HashMap();
     String query = buildConjunctionClause(list,new ClauseDescription[]{
-      new UnitaryClause(statusField,statusToString(STATUS_DELETESTARTINGUP))});
+      new UnitaryClause(statusField,statusToString(STATUS_DELETESTARTINGUP)),
+      new UnitaryClause(processIDField,processID)});
     map.put(statusField,statusToString(STATUS_READYFORDELETE));
+    map.put(processIDField,null);
     performUpdate(map,"WHERE "+query,list,new StringSet(getJobStatusKey()));
 
   }
   
   /** Reset notification worker thread status.
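+  *@param processID is the current processID.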
   */
-  public void resetNotificationWorkerStatus()
+  public void resetNotificationWorkerStatus(String processID)
     throws ManifoldCFException
   {
     // This resets everything that the job notification thread would resolve.
@@ -1225,15 +1475,17 @@
     ArrayList list = new ArrayList();
     HashMap map = new HashMap();
     String query = buildConjunctionClause(list,new ClauseDescription[]{
-      new UnitaryClause(statusField,statusToString(STATUS_NOTIFYINGOFCOMPLETION))});
+      new UnitaryClause(statusField,statusToString(STATUS_NOTIFYINGOFCOMPLETION)),
+      new UnitaryClause(processIDField,processID)});
     map.put(statusField,statusToString(STATUS_READYFORNOTIFY));
+    map.put(processIDField,null);
     performUpdate(map,"WHERE "+query,list,new StringSet(getJobStatusKey()));
 
   }
   
   /** Reset startup worker thread status.
   */
-  public void resetStartupWorkerStatus()
+  public void resetStartupWorkerStatus(String processID)
     throws ManifoldCFException
   {
     // We have to handle all states that the startup thread would resolve, and change them to something appropriate.
@@ -1244,137 +1496,157 @@
 
     list.clear();
     query = buildConjunctionClause(list,new ClauseDescription[]{
-      new UnitaryClause(statusField,statusToString(STATUS_STARTINGUP))});
+      new UnitaryClause(statusField,statusToString(STATUS_STARTINGUP)),
+      new UnitaryClause(processIDField,processID)});
     map.put(statusField,statusToString(STATUS_READYFORSTARTUP));
+    map.put(processIDField,null);
     performUpdate(map,"WHERE "+query,list,new StringSet(getJobStatusKey()));
 
     list.clear();
     query = buildConjunctionClause(list,new ClauseDescription[]{
-      new UnitaryClause(statusField,statusToString(STATUS_STARTINGUPMINIMAL))});
+      new UnitaryClause(statusField,statusToString(STATUS_STARTINGUPMINIMAL)),
+      new UnitaryClause(processIDField,processID)});
     map.put(statusField,statusToString(STATUS_READYFORSTARTUPMINIMAL));
+    map.put(processIDField,null);
     performUpdate(map,"WHERE "+query,list,new StringSet(getJobStatusKey()));
 
     list.clear();
     query = buildConjunctionClause(list,new ClauseDescription[]{
       new MultiClause(statusField,new Object[]{
         statusToString(STATUS_ABORTINGSTARTINGUP),
-        statusToString(STATUS_ABORTINGSTARTINGUPMINIMAL)})});
+        statusToString(STATUS_ABORTINGSTARTINGUPMINIMAL)}),
+      new UnitaryClause(processIDField,processID)});
     map.put(statusField,statusToString(STATUS_ABORTING));
+    map.put(processIDField,null);
     performUpdate(map,"WHERE "+query,list,new StringSet(getJobStatusKey()));
 
     list.clear();
     query = buildConjunctionClause(list,new ClauseDescription[]{
-      new UnitaryClause(statusField,statusToString(STATUS_ABORTINGSTARTINGUPFORRESTART))});
+      new UnitaryClause(statusField,statusToString(STATUS_ABORTINGSTARTINGUPFORRESTART)),
+      new UnitaryClause(processIDField,processID)});
     map.put(statusField,statusToString(STATUS_ABORTINGFORRESTART));
+    map.put(processIDField,null);
     performUpdate(map,"WHERE "+query,list,new StringSet(getJobStatusKey()));
 
     list.clear();
     query = buildConjunctionClause(list,new ClauseDescription[]{
-      new UnitaryClause(statusField,statusToString(STATUS_ABORTINGSTARTINGUPFORRESTARTMINIMAL))});
+      new UnitaryClause(statusField,statusToString(STATUS_ABORTINGSTARTINGUPFORRESTARTMINIMAL)),
+      new UnitaryClause(processIDField,processID)});
     map.put(statusField,statusToString(STATUS_ABORTINGFORRESTARTMINIMAL));
+    map.put(processIDField,null);
     performUpdate(map,"WHERE "+query,list,new StringSet(getJobStatusKey()));
 
   }
 
   /** Reset as part of restoring seeding worker threads.
   */
-  public void resetSeedingWorkerStatus()
+  public void resetSeedingWorkerStatus(String processID)
     throws ManifoldCFException
   {
-    beginTransaction();
-    try
-    {
-      StringSet invKey = new StringSet(getJobStatusKey());
-      ArrayList list = new ArrayList();
-      HashMap map = new HashMap();
-      String query;
-      // All seeding values return to pre-seeding values
-      list.clear();
-      query = buildConjunctionClause(list,new ClauseDescription[]{
-        new UnitaryClause(statusField,statusToString(STATUS_ACTIVESEEDING))});
-      map.put(statusField,statusToString(STATUS_ACTIVE));
-      performUpdate(map,"WHERE "+query,list,invKey);
-      list.clear();
-      query = buildConjunctionClause(list,new ClauseDescription[]{
-        new UnitaryClause(statusField,statusToString(STATUS_PAUSINGSEEDING))});
-      map.put(statusField,statusToString(STATUS_PAUSING));
-      performUpdate(map,"WHERE "+query,list,invKey);
-      list.clear();
-      query = buildConjunctionClause(list,new ClauseDescription[]{
-        new UnitaryClause(statusField,statusToString(STATUS_ACTIVEWAITINGSEEDING))});
-      map.put(statusField,statusToString(STATUS_ACTIVEWAITING));
-      performUpdate(map,"WHERE "+query,list,invKey);
-      list.clear();
-      query = buildConjunctionClause(list,new ClauseDescription[]{
-        new UnitaryClause(statusField,statusToString(STATUS_PAUSINGWAITINGSEEDING))});
-      map.put(statusField,statusToString(STATUS_PAUSINGWAITING));
-      performUpdate(map,"WHERE "+query,list,invKey);
-      list.clear();
-      query = buildConjunctionClause(list,new ClauseDescription[]{
-        new UnitaryClause(statusField,statusToString(STATUS_RESUMINGSEEDING))});
-      map.put(statusField,statusToString(STATUS_RESUMING));
-      performUpdate(map,"WHERE "+query,list,invKey);
-      list.clear();
-      query = buildConjunctionClause(list,new ClauseDescription[]{
-        new UnitaryClause(statusField,statusToString(STATUS_ABORTINGSEEDING))});
-      map.put(statusField,statusToString(STATUS_ABORTING));
-      performUpdate(map,"WHERE "+query,list,invKey);
-      list.clear();
-      query = buildConjunctionClause(list,new ClauseDescription[]{
-        new UnitaryClause(statusField,statusToString(STATUS_ABORTINGFORRESTARTSEEDING))});
-      map.put(statusField,statusToString(STATUS_ABORTINGFORRESTART));
-      performUpdate(map,"WHERE "+query,list,invKey);
-      list.clear();
-      query = buildConjunctionClause(list,new ClauseDescription[]{
-        new UnitaryClause(statusField,statusToString(STATUS_ABORTINGFORRESTARTSEEDINGMINIMAL))});
-      map.put(statusField,statusToString(STATUS_ABORTINGFORRESTARTMINIMAL));
-      performUpdate(map,"WHERE "+query,list,invKey);
-      list.clear();
-      query = buildConjunctionClause(list,new ClauseDescription[]{
-        new UnitaryClause(statusField,statusToString(STATUS_PAUSEDSEEDING))});
-      map.put(statusField,statusToString(STATUS_PAUSED));
-      performUpdate(map,"WHERE "+query,list,invKey);
-      list.clear();
-      query = buildConjunctionClause(list,new ClauseDescription[]{
-        new UnitaryClause(statusField,statusToString(STATUS_ACTIVEWAITSEEDING))});
-      map.put(statusField,statusToString(STATUS_ACTIVEWAIT));
-      performUpdate(map,"WHERE "+query,list,invKey);
-      list.clear();
-      query = buildConjunctionClause(list,new ClauseDescription[]{
-        new UnitaryClause(statusField,statusToString(STATUS_PAUSEDWAITSEEDING))});
-      map.put(statusField,statusToString(STATUS_PAUSEDWAIT));
-      performUpdate(map,"WHERE "+query,list,invKey);
-      list.clear();
-      query = buildConjunctionClause(list,new ClauseDescription[]{
-        new UnitaryClause(statusField,statusToString(STATUS_ACTIVESEEDING_UNINSTALLED))});
-      map.put(statusField,statusToString(STATUS_ACTIVE_UNINSTALLED));
-      performUpdate(map,"WHERE "+query,list,invKey);
-      list.clear();
-      query = buildConjunctionClause(list,new ClauseDescription[]{
-        new UnitaryClause(statusField,statusToString(STATUS_ACTIVESEEDING_NOOUTPUT))});
-      map.put(statusField,statusToString(STATUS_ACTIVE_NOOUTPUT));
-      performUpdate(map,"WHERE "+query,list,invKey);
-      list.clear();
-      query = buildConjunctionClause(list,new ClauseDescription[]{
-        new UnitaryClause(statusField,statusToString(STATUS_ACTIVESEEDING_NEITHER))});
-      map.put(statusField,statusToString(STATUS_ACTIVE_NEITHER));
-      performUpdate(map,"WHERE "+query,list,invKey);
+    StringSet invKey = new StringSet(getJobStatusKey());
+    ArrayList list = new ArrayList();
+    HashMap map = new HashMap();
+    String query;
+    // All seeding values return to pre-seeding values
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_ACTIVESEEDING)),
+      new UnitaryClause(processIDField,processID)});
+    map.put(statusField,statusToString(STATUS_ACTIVE));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_PAUSINGSEEDING)),
+      new UnitaryClause(processIDField,processID)});
+    map.put(statusField,statusToString(STATUS_PAUSING));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_ACTIVEWAITINGSEEDING)),
+      new UnitaryClause(processIDField,processID)});
+    map.put(statusField,statusToString(STATUS_ACTIVEWAITING));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_PAUSINGWAITINGSEEDING)),
+      new UnitaryClause(processIDField,processID)});
+    map.put(statusField,statusToString(STATUS_PAUSINGWAITING));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_RESUMINGSEEDING)),
+      new UnitaryClause(processIDField,processID)});
+    map.put(statusField,statusToString(STATUS_RESUMING));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_ABORTINGSEEDING)),
+      new UnitaryClause(processIDField,processID)});
+    map.put(statusField,statusToString(STATUS_ABORTING));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_ABORTINGFORRESTARTSEEDING)),
+      new UnitaryClause(processIDField,processID)});
+    map.put(statusField,statusToString(STATUS_ABORTINGFORRESTART));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_ABORTINGFORRESTARTSEEDINGMINIMAL)),
+      new UnitaryClause(processIDField,processID)});
+    map.put(statusField,statusToString(STATUS_ABORTINGFORRESTARTMINIMAL));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_PAUSEDSEEDING)),
+      new UnitaryClause(processIDField,processID)});
+    map.put(statusField,statusToString(STATUS_PAUSED));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_ACTIVEWAITSEEDING)),
+      new UnitaryClause(processIDField,processID)});
+    map.put(statusField,statusToString(STATUS_ACTIVEWAIT));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_PAUSEDWAITSEEDING)),
+      new UnitaryClause(processIDField,processID)});
+    map.put(statusField,statusToString(STATUS_PAUSEDWAIT));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_ACTIVESEEDING_UNINSTALLED)),
+      new UnitaryClause(processIDField,processID)});
+    map.put(statusField,statusToString(STATUS_ACTIVE_UNINSTALLED));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_ACTIVESEEDING_NOOUTPUT)),
+      new UnitaryClause(processIDField,processID)});
+    map.put(statusField,statusToString(STATUS_ACTIVE_NOOUTPUT));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
+    list.clear();
+    query = buildConjunctionClause(list,new ClauseDescription[]{
+      new UnitaryClause(statusField,statusToString(STATUS_ACTIVESEEDING_NEITHER)),
+      new UnitaryClause(processIDField,processID)});
+    map.put(statusField,statusToString(STATUS_ACTIVE_NEITHER));
+    map.put(processIDField,null);
+    performUpdate(map,"WHERE "+query,list,invKey);
 
-    }
-    catch (ManifoldCFException e)
-    {
-      signalRollback();
-      throw e;
-    }
-    catch (Error e)
-    {
-      signalRollback();
-      throw e;
-    }
-    finally
-    {
-      endTransaction();
-    }
   }
 
 
@@ -1443,6 +1715,7 @@
 
       HashMap map = new HashMap();
       map.put(statusField,statusToString(newStatus));
+      map.put(processIDField,null);
       performUpdate(map,"WHERE "+query,list,new StringSet(getJobStatusKey()));
     }
     catch (ManifoldCFException e)
@@ -1957,7 +2230,29 @@
   *@param status is the desired status.
   *@param reseedTime is the reseed time.
   */
-  public void writeStatus(Long jobID, int status, Long reseedTime)
+  public void writeTransientStatus(Long jobID, int status, Long reseedTime, String processID)
+    throws ManifoldCFException
+  {
+    writeStatus(jobID, status, reseedTime, processID);
+  }
+
+  /** Update a job's status, and its reseed time.
+  *@param jobID is the job id.
+  *@param status is the desired status.
+  *@param reseedTime is the reseed time.
+  */
+  public void writePermanentStatus(Long jobID, int status, Long reseedTime)
+    throws ManifoldCFException
+  {
+    writeStatus(jobID, status, reseedTime, null);
+  }
+
+  /** Update a job's status, and its reseed time.
+  *@param jobID is the job id.
+  *@param status is the desired status.
+  *@param reseedTime is the reseed time.
+  *@param processID is the process ID claiming the status, or null if the status is permanent.
+  */
+  protected void writeStatus(Long jobID, int status, Long reseedTime, String processID)
     throws ManifoldCFException
   {
     ArrayList list = new ArrayList();
@@ -1965,6 +2260,7 @@
       new UnitaryClause(idField,jobID)});
     HashMap map = new HashMap();
     map.put(statusField,statusToString(status));
+    map.put(processIDField,processID);
     map.put(reseedTimeField,reseedTime);
     performUpdate(map,"WHERE "+query,list,new StringSet(getJobStatusKey()));
   }
@@ -1973,7 +2269,27 @@
   *@param jobID is the job id.
   *@param status is the desired status.
   */
-  public void writeStatus(Long jobID, int status)
+  public void writeTransientStatus(Long jobID, int status, String processID)
+    throws ManifoldCFException
+  {
+    writeStatus(jobID, status, processID);
+  }
+  
+  /** Update a job's status.
+  *@param jobID is the job id.
+  *@param status is the desired status.
+  */
+  public void writePermanentStatus(Long jobID, int status)
+    throws ManifoldCFException
+  {
+    writeStatus(jobID, status, null);
+  }
+
+  /** Update a job's status.
+  *@param jobID is the job id.
+  *@param status is the desired status.
+  *@param processID is the process ID claiming the status, or null if the status is permanent.
+  */
+  protected void writeStatus(Long jobID, int status, String processID)
     throws ManifoldCFException
   {
     ArrayList list = new ArrayList();
@@ -1981,6 +2297,7 @@
       new UnitaryClause(idField,jobID)});
     HashMap map = new HashMap();
     map.put(statusField,statusToString(status));
+    map.put(processIDField,processID);
     performUpdate(map,"WHERE "+query,list,new StringSet(getJobStatusKey()));
   }
 
@@ -2194,6 +2511,7 @@
       new UnitaryClause(idField,jobID)});
     HashMap map = new HashMap();
     map.put(statusField,statusToString(STATUS_INACTIVE));
+    map.put(processIDField,null);
     // Leave everything else around from the abort/finish.
     performUpdate(map,"WHERE "+query,list,new StringSet(getJobStatusKey()));
   }
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/jobs/PrereqEventManager.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/jobs/PrereqEventManager.java
index 29ab77b..3e11eef 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/jobs/PrereqEventManager.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/jobs/PrereqEventManager.java
@@ -78,6 +78,8 @@
 
       // Index management
       IndexDescription ownerIndex = new IndexDescription(false,new String[]{ownerField});
+      // eventNameIndex was proposed by the PostgreSQL team, but it did not help, so it has been removed.
+      //IndexDescription eventNameIndex = new IndexDescription(false,new String[]{eventNameField});
 
       // Get rid of indexes that shouldn't be there
       Map indexes = getTableIndexes(null,null);
@@ -89,6 +91,8 @@
 
         if (ownerIndex != null && id.equals(ownerIndex))
           ownerIndex = null;
+        //else if (eventNameIndex != null && id.equals(eventNameIndex))
+        //  eventNameIndex = null;
         else if (indexName.indexOf("_pkey") == -1)
           // This index shouldn't be here; drop it
           performRemoveIndex(indexName);
@@ -97,6 +101,8 @@
       // Add the ones we didn't find
       if (ownerIndex != null)
         performAddIndex(null,ownerIndex);
+      //if (eventNameIndex != null)
+      //  performAddIndex(null,eventNameIndex);
 
       break;
     }
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/repository/RepositoryConnectionManager.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/repository/RepositoryConnectionManager.java
index 0485387..18f887b 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/repository/RepositoryConnectionManager.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/repository/RepositoryConnectionManager.java
@@ -57,9 +57,12 @@
   protected final static String nameField = "connectionname";     // Changed this to work around constraint bug in postgresql
   protected final static String descriptionField = "description";
   protected final static String classNameField = "classname";
-  protected final static String authorityNameField = "authorityname";
   protected final static String maxCountField = "maxcount";
   protected final static String configField = "configxml";
+  protected final static String groupNameField = "groupname";
+
+  // Discontinued fields
+  protected final static String authorityNameField = "authorityname";
 
   protected static Random random = new Random();
 
@@ -93,7 +96,7 @@
     throws ManifoldCFException
   {
     // First, get the authority manager table name and name column
-    IAuthorityConnectionManager authMgr = AuthorityConnectionManagerFactory.make(threadContext);
+    IAuthorityGroupManager authMgr = AuthorityGroupManagerFactory.make(threadContext);
 
     // Always use a loop, and no transaction, as we may need to retry due to upgrade
     while (true)
@@ -106,15 +109,63 @@
         map.put(nameField,new ColumnDescription("VARCHAR(32)",true,false,null,null,false));
         map.put(descriptionField,new ColumnDescription("VARCHAR(255)",false,true,null,null,false));
         map.put(classNameField,new ColumnDescription("VARCHAR(255)",false,false,null,null,false));
-        map.put(authorityNameField,new ColumnDescription("VARCHAR(32)",false,true,
-          authMgr.getTableName(),authMgr.getAuthorityNameColumn(),false));
+        map.put(groupNameField,new ColumnDescription("VARCHAR(32)",false,true,
+          authMgr.getTableName(),authMgr.getGroupNameColumn(),false));
         map.put(maxCountField,new ColumnDescription("BIGINT",false,false,null,null,false));
         map.put(configField,new ColumnDescription("LONGTEXT",false,true,null,null,false));
         performCreate(map,null);
       }
       else
       {
-        // Upgrade code would go here, if needed.
+        // Upgrade code
+        ColumnDescription cd = (ColumnDescription)existing.get(groupNameField);
+        if (cd == null)
+        {
+          Map addMap = new HashMap();
+          addMap.put(groupNameField,new ColumnDescription("VARCHAR(32)",false,true,
+            authMgr.getTableName(),authMgr.getGroupNameColumn(),false));
+          performAlter(addMap,null,null,null);
+        }
+        // Get rid of the authorityName field.  When we do this, we need to copy its value into the
+        // group name field, first creating any groups that don't yet exist.
+        cd = (ColumnDescription)existing.get(authorityNameField);
+        if (cd != null)
+        {
+          ArrayList params = new ArrayList();
+          IResultSet set = performQuery("SELECT "+nameField+","+authorityNameField+" FROM "+getTableName(),null,null,null);
+          for (int i = 0 ; i < set.getRowCount() ; i++)
+          {
+            IResultRow row = set.getRow(i);
+            String repoName = (String)row.getValue(nameField);
+            String authName = (String)row.getValue(authorityNameField);
+            if (authName != null && authName.length() > 0)
+            {
+              // Attempt to create a matching auth group.  This will fail if the group
+              // already exists
+              IAuthorityGroup grp = authMgr.create();
+              grp.setName(authName);
+              try
+              {
+                authMgr.save(grp);
+              }
+              catch (ManifoldCFException e)
+              {
+                if (e.getErrorCode() == ManifoldCFException.INTERRUPTED)
+                  throw e;
+                // Fall through; the row exists already
+              }
+              Map<String,String> map = new HashMap<String,String>();
+              map.put(groupNameField,authName);
+              params.clear();
+              String query = buildConjunctionClause(params,new ClauseDescription[]{
+                new UnitaryClause(nameField,repoName)});
+              performUpdate(map," WHERE "+query,params,null);
+            }
+          }
+          List<String> deleteList = new ArrayList<String>();
+          deleteList.add(authorityNameField);
+          performAlter(null,null,deleteList,null);
+        }
       }
 
       // Install dependent tables.
@@ -122,7 +173,7 @@
       throttleSpecManager.install(getTableName(),nameField);
 
       // Index management
-      IndexDescription authorityIndex = new IndexDescription(false,new String[]{authorityNameField});
+      IndexDescription authorityIndex = new IndexDescription(false,new String[]{groupNameField});
       IndexDescription classIndex = new IndexDescription(false,new String[]{classNameField});
       
       // Get rid of indexes that shouldn't be there
@@ -368,7 +419,7 @@
             HashMap values = new HashMap();
             values.put(descriptionField,object.getDescription());
             values.put(classNameField,object.getClassName());
-            values.put(authorityNameField,object.getACLAuthority());
+            values.put(groupNameField,object.getACLAuthority());
             values.put(maxCountField,new Long((long)object.getMaxConnections()));
             String configXML = object.getConfigParams().toXML();
             values.put(configField,configXML);
@@ -477,7 +528,7 @@
           throw new ManifoldCFException("Can't delete repository connection '"+name+"': existing jobs refer to it");
         ManifoldCF.noteConfigurationChange();
         throttleSpecManager.deleteRows(name);
-        historyManager.deleteOwner(name,null);
+        historyManager.deleteOwner(name);
         ArrayList params = new ArrayList();
         String query = buildConjunctionClause(params,new ClauseDescription[]{
           new UnitaryClause(nameField,name)});
@@ -507,10 +558,11 @@
   }
 
   /** Return true if the specified authority name is referenced.
-  *@param authorityName is the authority name.
+  *@param groupName is the group name.
   *@return true if referenced, false otherwise.
   */
-  public boolean isReferenced(String authorityName)
+  @Override
+  public boolean isGroupReferenced(String groupName)
     throws ManifoldCFException
   {
     StringSetBuffer ssb = new StringSetBuffer();
@@ -518,7 +570,7 @@
     StringSet localCacheKeys = new StringSet(ssb);
     ArrayList params = new ArrayList();
     String query = buildConjunctionClause(params,new ClauseDescription[]{
-      new UnitaryClause(authorityNameField,authorityName)});
+      new UnitaryClause(groupNameField,groupName)});
     IResultSet set = performQuery("SELECT "+nameField+" FROM "+getTableName()+" WHERE "+query,params,
       localCacheKeys,null);
     return set.getRowCount() > 0;
@@ -528,6 +580,7 @@
   *@param className is the class name of the connector.
   *@return the repository connections that use that connector.
   */
+  @Override
   public String[] findConnectionsForConnector(String className)
     throws ManifoldCFException
   {
@@ -555,6 +608,7 @@
   *@param name is the name of the connection to check.
   *@return true if the underlying connector is registered.
   */
+  @Override
   public boolean checkConnectorExists(String name)
     throws ManifoldCFException
   {
@@ -596,6 +650,7 @@
   /** Return the name column.
   *@return the name column.
   */
+  @Override
   public String getConnectionNameColumn()
   {
     return nameField;
@@ -604,7 +659,26 @@
 
   // Reporting and analysis related
 
-
+  /** Delete history rows related to a specific connection, upon user request.
+  *@param connectionName is the connection whose history records should be removed.
+  */
+  @Override
+  public void cleanUpHistoryData(String connectionName)
+    throws ManifoldCFException
+  {
+    historyManager.deleteOwner(connectionName);
+  }
+  
+  /** Delete history rows older than a specified timestamp.
+  *@param timeCutoff is the cutoff timestamp; history rows older than this are deleted.
+  */
+  @Override
+  public void cleanUpHistoryData(long timeCutoff)
+    throws ManifoldCFException
+  {
+    historyManager.deleteOldRows(timeCutoff);
+  }
+  
   /** Record time-stamped information about the activity of the connection.  This information can originate from
   * either the connector or from the framework.  The reason it is here is that it is viewed as 'belonging' to an
   * individual connection, and is segregated accordingly.
@@ -867,7 +941,7 @@
       rc.setName(name);
       rc.setDescription((String)row.getValue(descriptionField));
       rc.setClassName((String)row.getValue(classNameField));
-      rc.setACLAuthority((String)row.getValue(authorityNameField));
+      rc.setACLAuthority((String)row.getValue(groupNameField));
       rc.setMaxConnections((int)((Long)row.getValue(maxCountField)).longValue());
       String xml = (String)row.getValue(configField);
       if (xml != null && xml.length() > 0)
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/repository/RepositoryHistoryManager.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/repository/RepositoryHistoryManager.java
index 82f05d6..51f152b 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/repository/RepositoryHistoryManager.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/repository/RepositoryHistoryManager.java
@@ -161,15 +161,27 @@
   *@param owner is the name of the owner.
   *@param invKeys are the invalidation keys.
   */
-  public void deleteOwner(String owner, StringSet invKeys)
+  public void deleteOwner(String owner)
     throws ManifoldCFException
   {
     ArrayList params = new ArrayList();
     String query = buildConjunctionClause(params,new ClauseDescription[]{
       new UnitaryClause(ownerNameField,owner)});
-    performDelete("WHERE "+query,params,invKeys);
+    performDelete("WHERE "+query,params,null);
   }
 
+  /** Delete records older than a specified time.
+  *@param timeCutoff is the cutoff time; records with earlier start times are removed.
+  */
+  public void deleteOldRows(long timeCutoff)
+    throws ManifoldCFException
+  {
+    ArrayList params = new ArrayList();
+    String query = buildConjunctionClause(params,new ClauseDescription[]{
+      new UnitaryClause(startTimeField,"<",new Long(timeCutoff))});
+    performDelete("WHERE "+query,params,null);
+  }
+  
   /** Add row to table, and reanalyze if necessary.
   */
   public Long addRow(String connectionName, long startTime, long endTime, long dataSize, String activityType,
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/repositoryconnectorpool/RepositoryConnectorPool.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/repositoryconnectorpool/RepositoryConnectorPool.java
new file mode 100644
index 0000000..2c7858b
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/repositoryconnectorpool/RepositoryConnectorPool.java
@@ -0,0 +1,179 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.crawler.repositoryconnectorpool;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.crawler.interfaces.*;
+
+import java.util.*;
+import java.io.*;
+
+/** An implementation of IRepositoryConnectorPool.
+* Coordination and allocation among cluster members are managed within.
+* These objects are thread-local, so do not share them among threads.
+*/
+public class RepositoryConnectorPool implements IRepositoryConnectorPool
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  /** Local connector pool */
+  protected final static LocalPool localPool = new LocalPool();
+
+  // This implementation is a place-holder for the real one, which will likely fold in the pooling code
+  // as we strip it out of RepositoryConnectorFactory.
+
+  /** Thread context */
+  protected final IThreadContext threadContext;
+  
+  /** Constructor */
+  public RepositoryConnectorPool(IThreadContext threadContext)
+    throws ManifoldCFException
+  {
+    this.threadContext = threadContext;
+  }
+  
+  /** Get multiple repository connectors, all at once.  Do this in a particular order
+  * so that any connector exhaustion will not cause a deadlock.
+  *@param orderingKeys are the keys which determine in what order the connectors are obtained.
+  *@param repositoryConnections are the connections to use to build the connector instances.
+  */
+  @Override
+  public IRepositoryConnector[] grabMultiple(String[] orderingKeys, IRepositoryConnection[] repositoryConnections)
+    throws ManifoldCFException
+  {
+    // For now, use the RepositoryConnectorFactory method.  This will require us to extract info
+    // from each repository connection, however.
+    String[] connectionNames = new String[repositoryConnections.length];
+    String[] classNames = new String[repositoryConnections.length];
+    ConfigParams[] configInfos = new ConfigParams[repositoryConnections.length];
+    int[] maxPoolSizes = new int[repositoryConnections.length];
+    
+    for (int i = 0; i < repositoryConnections.length; i++)
+    {
+      connectionNames[i] = repositoryConnections[i].getName();
+      classNames[i] = repositoryConnections[i].getClassName();
+      configInfos[i] = repositoryConnections[i].getConfigParams();
+      maxPoolSizes[i] = repositoryConnections[i].getMaxConnections();
+    }
+    return localPool.grabMultiple(threadContext,
+      orderingKeys, connectionNames, classNames, configInfos, maxPoolSizes);
+  }
+
+  /** Get a repository connector.
+  * The connector is specified by a repository connection object.
+  *@param repositoryConnection is the repository connection to base the connector instance on.
+  */
+  @Override
+  public IRepositoryConnector grab(IRepositoryConnection repositoryConnection)
+    throws ManifoldCFException
+  {
+    return localPool.grab(threadContext, repositoryConnection.getName(), repositoryConnection.getClassName(),
+      repositoryConnection.getConfigParams(), repositoryConnection.getMaxConnections());
+  }
+
+  /** Release multiple repository connectors.
+  *@param connections are the connections describing the instances to release.
+  *@param connectors are the connector instances to release.
+  */
+  @Override
+  public void releaseMultiple(IRepositoryConnection[] connections, IRepositoryConnector[] connectors)
+    throws ManifoldCFException
+  {
+    String[] connectionNames = new String[connections.length];
+    for (int i = 0; i < connections.length; i++)
+    {
+      connectionNames[i] = connections[i].getName();
+    }
+    localPool.releaseMultiple(threadContext, connectionNames, connectors);
+  }
+
+  /** Release a repository connector.
+  *@param connection is the connection describing the instance to release.
+  *@param connector is the connector to release.
+  */
+  @Override
+  public void release(IRepositoryConnection connection, IRepositoryConnector connector)
+    throws ManifoldCFException
+  {
+    localPool.release(threadContext, connection.getName(), connector);
+  }
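
For illustration, a minimal sketch of the grab/release discipline a caller of this pool is expected to follow. The class and method names below are hypothetical, the package locations are assumed from the imports in this file, and the assumption that grab() can return null when the connector class is not installed follows the existing factory behavior:

import org.apache.manifoldcf.core.interfaces.ManifoldCFException;
import org.apache.manifoldcf.crawler.interfaces.IRepositoryConnection;
import org.apache.manifoldcf.crawler.interfaces.IRepositoryConnector;
import org.apache.manifoldcf.crawler.interfaces.IRepositoryConnectorPool;

public class ConnectorPoolUsageSketch
{
  /** Grab a connector instance for a connection, use it, and always release it. */
  public static void useConnector(IRepositoryConnectorPool pool, IRepositoryConnection connection)
    throws ManifoldCFException
  {
    IRepositoryConnector connector = pool.grab(connection);
    if (connector == null)
      return;  // connector class presumably not installed
    try
    {
      // ... perform connector work here ...
    }
    finally
    {
      // Hand the instance back so other threads and cluster members can use it.
      pool.release(connection, connector);
    }
  }
}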
+
+  /** Idle notification for inactive repository connector handles.
+  * This method polls all inactive handles.
+  */
+  @Override
+  public void pollAllConnectors()
+    throws ManifoldCFException
+  {
+    localPool.pollAllConnectors(threadContext);
+  }
+
+  /** Flush only those connector handles that are currently unused.
+  */
+  @Override
+  public void flushUnusedConnectors()
+    throws ManifoldCFException
+  {
+    localPool.flushUnusedConnectors(threadContext);
+  }
+
+  /** Clean up all open repository connector handles.
+  * This method is called when the connector pool needs to be flushed,
+  * to free resources.
+  */
+  @Override
+  public void closeAllConnectors()
+    throws ManifoldCFException
+  {
+    localPool.closeAllConnectors(threadContext);
+  }
+
+  /** Actual static repository connector pool */
+  protected static class LocalPool extends org.apache.manifoldcf.core.connectorpool.ConnectorPool<IRepositoryConnector>
+  {
+    public LocalPool()
+    {
+      super("_REPOSITORYCONNECTORPOOL_");
+    }
+    
+    @Override
+    protected boolean isInstalled(IThreadContext tc, String className)
+      throws ManifoldCFException
+    {
+      IConnectorManager connectorManager = ConnectorManagerFactory.make(tc);
+      return connectorManager.isInstalled(className);
+    }
+
+    @Override
+    protected boolean isConnectionNameValid(IThreadContext tc, String connectionName)
+      throws ManifoldCFException
+    {
+      IRepositoryConnectionManager connectionManager = RepositoryConnectionManagerFactory.make(tc);
+      return connectionManager.load(connectionName) != null;
+    }
+
+    public IRepositoryConnector[] grabMultiple(IThreadContext tc, String[] orderingKeys, String[] connectionNames, String[] classNames, ConfigParams[] configInfos, int[] maxPoolSizes)
+      throws ManifoldCFException
+    {
+      return grabMultiple(tc,IRepositoryConnector.class,orderingKeys,connectionNames,classNames,configInfos,maxPoolSizes);
+    }
+
+  }
+
+}
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/reprioritizationtracker/ReprioritizationTracker.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/reprioritizationtracker/ReprioritizationTracker.java
new file mode 100644
index 0000000..66b482d
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/reprioritizationtracker/ReprioritizationTracker.java
@@ -0,0 +1,551 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.crawler.reprioritizationtracker;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+import org.apache.manifoldcf.crawler.interfaces.*;
+import org.apache.manifoldcf.crawler.system.Logging;
+import java.util.*;
+import java.io.*;
+
+/** This class tracks cluster-wide reprioritization operations.
+* These operations are driven forward by whatever thread needs them and,
+* if that process dies, are completed by the threads that clean up
+* after the original process.
+*/
+public class ReprioritizationTracker implements IReprioritizationTracker
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  protected final static String trackerWriteLock = "_REPR_TRACKER_LOCK_";
+  protected final static String trackerProcessIDResource = "_REPR_TRACKER_PID_";
+  protected final static String trackerReproIDResource = "_REPR_TRACKER_RID_";
+  protected final static String trackerTimestampResource = "_REPR_TIMESTAMP_";
+  protected final static String trackerMinimumDepthResource = "_REPR_MINDEPTH_";
+  
+  /** Lock manager */
+  protected final ILockManager lockManager;
+  protected final IBinManager binManager;
+
+  /** Preload requests */
+  protected final Map<String,PreloadRequest> preloadRequests = new HashMap<String,PreloadRequest>();
+  /** Preloaded values */
+  protected final Map<String,PreloadedValues> preloadedValues = new HashMap<String,PreloadedValues>();
+    
+  /** Constructor.
+  */
+  public ReprioritizationTracker(IThreadContext threadContext)
+    throws ManifoldCFException
+  {
+    lockManager = LockManagerFactory.make(threadContext);
+    binManager = BinManagerFactory.make(threadContext);
+  }
+  
+  /** Start a reprioritization activity.
+  *@param prioritizationTime is the timestamp of the prioritization.
+  *@param processID is the process ID of the process performing/waiting for the prioritization
+  * to complete.
+  *@param reproID is the reprioritization thread ID.
+  */
+  @Override
+  public void startReprioritization(long prioritizationTime, String processID, String reproID)
+    throws ManifoldCFException
+  {
+    lockManager.enterWriteLock(trackerWriteLock);
+    try
+    {
+      Long currentTime = readTime();
+      String currentProcessID = readProcessID();
+      if (currentTime != null && currentProcessID != null)
+      {
+        // Already a reprioritization in progress.
+        if (prioritizationTime <= currentTime.longValue())
+          return;
+      }
+      writeTime(new Long(prioritizationTime));
+      writeProcessID(processID);
+      writeReproID(reproID);
+      try
+      {
+        binManager.reset();
+      }
+      catch (Throwable e)
+      {
+        writeTime(null);
+        writeProcessID(null);
+        writeReproID(null);
+        if (e instanceof Error)
+          throw (Error)e;
+        else if (e instanceof RuntimeException)
+          throw (RuntimeException)e;
+        else if (e instanceof ManifoldCFException)
+          throw (ManifoldCFException)e;
+        else
+          throw new RuntimeException("Unknown exception: "+e.getClass().getName()+": "+e.getMessage(),e);
+      }
+      writeMinimumDepth(0.0);
+    }
+    finally
+    {
+      lockManager.leaveWriteLock(trackerWriteLock);
+    }
+  }
+  
+  
+  /** Retrieve the current reprioritization time stamp.  This should be obtained before
+  * performing any prioritization steps.
+  *@return the current prioritization timestamp, or null if no prioritization is in effect.
+  */
+  @Override
+  public Long checkReprioritizationInProgress()
+    throws ManifoldCFException
+  {
+    lockManager.enterWriteLock(trackerWriteLock);
+    try
+    {
+      Long currentTime = readTime();
+      String currentProcessID = readProcessID();
+      String currentReproID = readReproID();
+      if (currentTime == null || currentProcessID == null || currentReproID == null)
+        return null;
+      return currentTime;
+    }
+    finally
+    {
+      lockManager.leaveWriteLock(trackerWriteLock);
+    }
+  }
+
+  /** Complete a reprioritization activity.  Prioritization will be marked as complete
+  * only if the reproID matches the one that started the current reprioritization.
+  *@param reproID is the reprioritization thread ID of the process completing the prioritization.
+  */
+  @Override
+  public void doneReprioritization(String reproID)
+    throws ManifoldCFException
+  {
+    lockManager.enterWriteLock(trackerWriteLock);
+    try
+    {
+      Long currentTime = readTime();
+      String currentProcessID = readProcessID();
+      String currentReproID = readReproID();
+      if (currentTime != null && currentProcessID != null && currentReproID != null && currentReproID.equals(reproID))
+      {
+        // Null out the fields
+        writeTime(null);
+        writeProcessID(null);
+        writeReproID(null);
+      }
+    }
+    finally
+    {
+      lockManager.leaveWriteLock(trackerWriteLock);
+    }
+  }
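
A minimal sketch, under the same package assumptions as above, of the start/done sequence this tracker is designed for; the reproID here is just an illustrative unique token:

import java.util.UUID;

import org.apache.manifoldcf.core.interfaces.ManifoldCFException;
import org.apache.manifoldcf.crawler.interfaces.IReprioritizationTracker;

public class ReprioritizationFlowSketch
{
  /** Drive one reprioritization pass forward, always signaling completion for our reproID. */
  public static void reprioritize(IReprioritizationTracker tracker, String processID)
    throws ManifoldCFException
  {
    String reproID = UUID.randomUUID().toString();
    // Announce the operation; if a reprioritization with the same or a newer timestamp is
    // already in progress, this call is a no-op.
    tracker.startReprioritization(System.currentTimeMillis(), processID, reproID);
    try
    {
      // ... recompute document priorities here ...
    }
    finally
    {
      // Only takes effect if our reproID still owns the current reprioritization.
      tracker.doneReprioritization(reproID);
    }
  }
}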
+  
+  /** Check if the specified processID is the one performing reprioritization.
+  *@param processID is the process ID to check.
+  *@return the repro ID if the specified process is performing the current reprioritization, or null otherwise.
+  */
+  @Override
+  public String isSpecifiedProcessReprioritizing(String processID)
+    throws ManifoldCFException
+  {
+    lockManager.enterWriteLock(trackerWriteLock);
+    try
+    {
+      Long currentTime = readTime();
+      String currentProcessID = readProcessID();
+      String currentReproID = readReproID();
+      if (currentTime != null && currentProcessID != null && currentReproID != null && currentProcessID.equals(processID))
+        return currentReproID;
+      return null;
+    }
+    finally
+    {
+      lockManager.leaveWriteLock(trackerWriteLock);
+    }
+  }
+  
+  /** Assess the current minimum depth.
+  * This method is called to provide information about the priorities of the documents currently being
+  * queued.  It is suboptimal to assign document priorities fundamentally higher than this value,
+  * because new documents would then be preferentially queued, and the goal of distributing documents
+  * across bins would not be adequately met.
+  *@param binNamesSet is the current set of priorities we see on the queuing operation.
+  */
+  @Override
+  public void assessMinimumDepth(Double[] binNamesSet)
+    throws ManifoldCFException
+  {
+    double newMinPriority = Double.MAX_VALUE;
+    for (Double binValue : binNamesSet)
+    {
+      if (binValue.doubleValue() < newMinPriority)
+        newMinPriority = binValue.doubleValue();
+    }
+
+    if (newMinPriority != Double.MAX_VALUE)
+    {
+
+      lockManager.enterWriteLock(trackerWriteLock);
+      try
+      {
+        Long reproTime = readTime();
+        String processID = readProcessID();
+        if (reproTime == null || processID == null)
+        {
+          double currentMinimumDepth = readMinimumDepth();
+
+          // Convert minPriority to minDepth.
+          // Note that this calculation does not take into account anything having to do with connection rates, throttling,
+          // or other adjustment factors.  It allows us only to obtain the "raw" minimum depth: the depth without any
+          // adjustments.
+          double newMinDepth = Math.exp(newMinPriority)-1.0;
+
+          if (newMinDepth > currentMinimumDepth)
+          {
+            writeMinimumDepth(newMinDepth);
+            if (Logging.scheduling.isDebugEnabled())
+              Logging.scheduling.debug("Setting new minimum depth value to "+new Double(currentMinimumDepth).toString());
+          }
+          else
+          {
+            if (newMinDepth < currentMinimumDepth && Logging.scheduling.isDebugEnabled())
+              Logging.scheduling.debug("Minimum depth value seems to have been set too high too early! currentMin = "+new Double(currentMinimumDepth).toString()+"; queue value = "+new Double(newMinDepth).toString());
+          }
+        }
+      }
+      finally
+      {
+        lockManager.leaveWriteLock(trackerWriteLock);
+      }
+    }
+  }
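
The conversion above recovers a "raw" depth from a priority via exp(priority) - 1, which suggests priorities are assigned as ln(depth + 1). A small self-contained illustration of that relationship (the numbers are examples only):

public class DepthConversionSketch
{
  /** Inverse of priority = ln(depth + 1): recover the raw depth from a priority value. */
  public static double priorityToRawDepth(double priority)
  {
    return Math.exp(priority) - 1.0;
  }

  public static void main(String[] args)
  {
    // A minimum queued priority of 0.7 corresponds to a raw minimum depth of about 1.01;
    // a priority of 0.0 corresponds to a depth of 0.0.
    System.out.println(priorityToRawDepth(0.7));
    System.out.println(priorityToRawDepth(0.0));
  }
}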
+
+  /** Retrieve current minimum depth.
+  *@return the current minimum depth to use.
+  */
+  @Override
+  public double getMinimumDepth()
+    throws ManifoldCFException
+  {
+    lockManager.enterReadLock(trackerWriteLock);
+    try
+    {
+      return readMinimumDepth();
+    }
+    finally
+    {
+      lockManager.leaveReadLock(trackerWriteLock);
+    }
+  }
+  
+  /** Note preload amounts.
+  */
+  @Override
+  public void addPreloadRequest(String binName, double weightedMinimumDepth)
+  {
+    PreloadRequest pr = preloadRequests.get(binName);
+    if (pr == null)
+    {
+      pr = new PreloadRequest(weightedMinimumDepth);
+      preloadRequests.put(binName,pr);
+    }
+    else
+      pr.updateRequest(weightedMinimumDepth);
+  }
+  
+  
+  /** Preload bin values.  Call this OUTSIDE of a transaction.
+  */
+  @Override
+  public void preloadBinValues()
+    throws ManifoldCFException
+  {
+    for (String binName : preloadRequests.keySet())
+    {
+      PreloadRequest pr = preloadRequests.get(binName);
+      double[] newValues = binManager.getIncrementBinValuesInTransaction(binName, pr.getWeightedMinimumDepth(), pr.getRequestCount());
+      PreloadedValues pv = new PreloadedValues(newValues);
+      preloadedValues.put(binName,pv);
+    }
+    preloadRequests.clear();
+  }
+  
+  /** Clear any preload requests.
+  */
+  @Override
+  public void clearPreloadRequests()
+  {
+    preloadRequests.clear();
+  }
+  
+  /** Clear remaining preloaded values.
+  */
+  @Override
+  public void clearPreloadedValues()
+  {
+    preloadedValues.clear();
+  }
+
+  /** Get a bin value.
+  *@param binName is the bin name.
+  *@param weightedMinimumDepth is the minimum depth to use.
+  *@return the bin value.
+  */
+  @Override
+  public double getIncrementBinValue(String binName, double weightedMinimumDepth)
+    throws ManifoldCFException
+  {
+    PreloadedValues pv = preloadedValues.get(binName);
+    if (pv != null)
+    {
+      Double rval = pv.getNextValue();
+      if (rval != null)
+        return rval.doubleValue();
+    }
+    return binManager.getIncrementBinValues(binName, weightedMinimumDepth,1)[0];
+  }
+  
+  // Protected methods
+  
+  /** Read time.
+  *@return the time, or null if none.
+  */
+  protected Long readTime()
+    throws ManifoldCFException
+  {
+    byte[] timeData = lockManager.readData(trackerTimestampResource);
+    if (timeData == null || timeData.length != 8)
+      return null;
+    
+    // Reassemble the long from its 8 little-endian bytes.
+    long rval = (((long)timeData[0]) & 0xffL) +
+      ((((long)timeData[1]) << 8) & 0xff00L) +
+      ((((long)timeData[2]) << 16) & 0xff0000L) +
+      ((((long)timeData[3]) << 24) & 0xff000000L) +
+      ((((long)timeData[4]) << 32) & 0xff00000000L) +
+      ((((long)timeData[5]) << 40) & 0xff0000000000L) +
+      ((((long)timeData[6]) << 48) & 0xff000000000000L) +
+      ((((long)timeData[7]) << 56) & 0xff00000000000000L);
+    
+    return new Long(rval);
+  }
+  
+  /** Write time.
+  *@param time is the time to write.
+  */
+  protected void writeTime(Long timeValue)
+    throws ManifoldCFException
+  {
+    if (timeValue == null)
+      lockManager.writeData(trackerTimestampResource, null);
+    else
+    {
+      long time = timeValue.longValue();
+      byte[] timeData = new byte[8];
+      timeData[0] = (byte)(time & 0xffL);
+      timeData[1] = (byte)((time >> 8) & 0xffL);
+      timeData[2] = (byte)((time >> 16) & 0xffL);
+      timeData[3] = (byte)((time >> 24) & 0xffL);
+      timeData[4] = (byte)((time >> 32) & 0xffL);
+      timeData[5] = (byte)((time >> 40) & 0xffL);
+      timeData[6] = (byte)((time >> 48) & 0xffL);
+      timeData[7] = (byte)((time >> 56) & 0xffL);
+      lockManager.writeData(trackerTimestampResource, timeData);
+    }
+  }
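
The time value above is packed into eight little-endian bytes by hand. For illustration only, a round-trip check of that encoding using java.nio (this sketch assumes nothing beyond the standard library):

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class TimeEncodingSketch
{
  public static void main(String[] args)
  {
    long time = System.currentTimeMillis();
    // Pack the value little-endian, as the hand-rolled code above does, then decode it again.
    byte[] data = ByteBuffer.allocate(8).order(ByteOrder.LITTLE_ENDIAN).putLong(time).array();
    long decoded = ByteBuffer.wrap(data).order(ByteOrder.LITTLE_ENDIAN).getLong();
    System.out.println(time == decoded);  // expected: true
  }
}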
+  
+  /** Read process ID.
+  *@return processID, or null if none.
+  */
+  protected String readProcessID()
+    throws ManifoldCFException
+  {
+    byte[] processIDData = lockManager.readData(trackerProcessIDResource);
+    if (processIDData == null)
+      return null;
+    try
+    {
+      return new String(processIDData, "utf-8");
+    }
+    catch (UnsupportedEncodingException e)
+    {
+      throw new RuntimeException(e.getMessage(),e);
+    }
+  }
+  
+  /** Write process ID.
+  *@param processID is the process ID to write.
+  */
+  protected void writeProcessID(String processID)
+    throws ManifoldCFException
+  {
+    if (processID == null)
+      lockManager.writeData(trackerProcessIDResource, null);
+    else
+    {
+      try
+      {
+        byte[] processIDData = processID.getBytes("utf-8");
+        lockManager.writeData(trackerProcessIDResource, processIDData);
+      }
+      catch (UnsupportedEncodingException e)
+      {
+        throw new RuntimeException(e.getMessage(),e);
+      }
+    }
+  }
+
+  /** Read reprioritization ID.
+  *@return reproID, or null if none.
+  */
+  protected String readReproID()
+    throws ManifoldCFException
+  {
+    byte[] reproIDData = lockManager.readData(trackerReproIDResource);
+    if (reproIDData == null)
+      return null;
+    try
+    {
+      return new String(reproIDData, "utf-8");
+    }
+    catch (UnsupportedEncodingException e)
+    {
+      throw new RuntimeException(e.getMessage(),e);
+    }
+  }
+  
+  /** Write repro ID.
+  *@param reproID is the repro ID to write.
+  */
+  protected void writeReproID(String reproID)
+    throws ManifoldCFException
+  {
+    if (reproID == null)
+      lockManager.writeData(trackerReproIDResource, null);
+    else
+    {
+      try
+      {
+        byte[] reproIDData = reproID.getBytes("utf-8");
+        lockManager.writeData(trackerReproIDResource, reproIDData);
+      }
+      catch (UnsupportedEncodingException e)
+      {
+        throw new RuntimeException(e.getMessage(),e);
+      }
+    }
+  }
+
+  /** Read minimum depth.
+  *@return the minimum depth.
+  */
+  protected double readMinimumDepth()
+    throws ManifoldCFException
+  {
+    byte[] data = lockManager.readData(trackerMinimumDepthResource);
+    if (data == null || data.length != 8)
+      return 0.0;
+    // Reassemble the long from its 8 little-endian bytes.
+    long dataLong = (((long)data[0]) & 0xffL) +
+      ((((long)data[1]) << 8) & 0xff00L) +
+      ((((long)data[2]) << 16) & 0xff0000L) +
+      ((((long)data[3]) << 24) & 0xff000000L) +
+      ((((long)data[4]) << 32) & 0xff00000000L) +
+      ((((long)data[5]) << 40) & 0xff0000000000L) +
+      ((((long)data[6]) << 48) & 0xff000000000000L) +
+      ((((long)data[7]) << 56) & 0xff00000000000000L);
+
+    return Double.longBitsToDouble(dataLong);
+  }
+  
+  /** Write minimum depth.
+  *@param depth is the minimum depth.
+  */
+  protected void writeMinimumDepth(double depth)
+    throws ManifoldCFException
+  {
+    long dataLong = Double.doubleToLongBits(depth);
+    byte[] data = new byte[8];
+    data[0] = (byte)(dataLong & 0xffL);
+    data[1] = (byte)((dataLong >> 8) & 0xffL);
+    data[2] = (byte)((dataLong >> 16) & 0xffL);
+    data[3] = (byte)((dataLong >> 24) & 0xffL);
+    data[4] = (byte)((dataLong >> 32) & 0xffL);
+    data[5] = (byte)((dataLong >> 40) & 0xffL);
+    data[6] = (byte)((dataLong >> 48) & 0xffL);
+    data[7] = (byte)((dataLong >> 56) & 0xffL);
+    lockManager.writeData(trackerMinimumDepthResource,data);
+  }
+  
+  /** A preload request */
+  protected static class PreloadRequest
+  {
+    protected double weightedMinimumDepth;
+    protected int requestCount;
+    
+    public PreloadRequest(double weightedMinimumDepth)
+    {
+      this.weightedMinimumDepth = weightedMinimumDepth;
+      this.requestCount = 1;
+    }
+    
+    public void updateRequest(double weightedMinimumDepth)
+    {
+      if (this.weightedMinimumDepth < weightedMinimumDepth)
+        this.weightedMinimumDepth = weightedMinimumDepth;
+      requestCount++;
+    }
+    
+    public double getWeightedMinimumDepth()
+    {
+      return weightedMinimumDepth;
+    }
+    
+    public int getRequestCount()
+    {
+      return requestCount;
+    }
+  }
+  
+  /** A set of preloaded values */
+  protected static class PreloadedValues
+  {
+    protected double[] values;
+    protected int valueIndex;
+    
+    public PreloadedValues(double[] values)
+    {
+      this.values = values;
+      this.valueIndex = 0;
+    }
+    
+    public Double getNextValue()
+    {
+      if (valueIndex == values.length)
+        return null;
+      return new Double(values[valueIndex++]);
+    }
+  }
+  
+
+}
+
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/CrawlerAgent.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/CrawlerAgent.java
index 80de13b..54fc172 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/CrawlerAgent.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/CrawlerAgent.java
@@ -21,6 +21,7 @@
 import org.apache.manifoldcf.core.interfaces.*;
 import org.apache.manifoldcf.agents.interfaces.*;
 import org.apache.manifoldcf.crawler.interfaces.*;
+import java.util.*;
 
 /** This is the main agent class for the crawler.
 */
@@ -28,20 +29,86 @@
 {
   public static final String _rcsid = "@(#)$Id: CrawlerAgent.java 988245 2010-08-23 18:39:35Z kwright $";
 
-  protected IThreadContext threadContext;
+  // Thread objects.
+  // These get filled in as threads are created.
+  protected JobStartThread jobStartThread = null;
+  protected StufferThread stufferThread = null;
+  protected FinisherThread finisherThread = null;
+  protected JobNotificationThread notificationThread = null;
+  protected StartupThread startupThread = null;
+  protected StartDeleteThread startDeleteThread = null;
+  protected JobDeleteThread jobDeleteThread = null;
+  protected WorkerThread[] workerThreads = null;
+  protected ExpireStufferThread expireStufferThread = null;
+  protected ExpireThread[] expireThreads = null;
+  protected DocumentDeleteStufferThread deleteStufferThread = null;
+  protected DocumentDeleteThread[] deleteThreads = null;
+  protected DocumentCleanupStufferThread cleanupStufferThread = null;
+  protected DocumentCleanupThread[] cleanupThreads = null;
+  protected JobResetThread jobResetThread = null;
+  protected SeedingThread seedingThread = null;
+  protected IdleCleanupThread idleCleanupThread = null;
+  protected SetPriorityThread setPriorityThread = null;
+  protected HistoryCleanupThread historyCleanupThread = null;
+
+  // Reset managers
+  /** Worker thread pool reset manager */
+  protected WorkerResetManager workerResetManager = null;
+  /** Delete thread pool reset manager */
+  protected DocDeleteResetManager docDeleteResetManager = null;
+  /** Cleanup thread pool reset manager */
+  protected DocCleanupResetManager docCleanupResetManager = null;
+
+  // Number of worker threads
+  protected int numWorkerThreads = 0;
+  // Number of delete threads
+  protected int numDeleteThreads = 0;
+  // Number of cleanup threads
+  protected int numCleanupThreads = 0;
+  // Number of expiration threads
+  protected int numExpireThreads = 0;
+  // Factor for low water level in queueing
+  protected float lowWaterFactor = 5.0f;
+  // Factor in amount to stuff
+  protected float stuffAmtFactor = 0.5f;
+
+  /** Process identifier for this agent */
+  protected String processID = null;
 
   /** Constructor.
-  *@param threadContext is the thread context.
   */
-  public CrawlerAgent(IThreadContext threadContext)
+  public CrawlerAgent()
     throws ManifoldCFException
   {
-    this.threadContext = threadContext;
+  }
+
+  /** Initialize agent environment.
+  * This is called before any of the other operations are called, and is meant to ensure that
+  * the environment is properly initialized.
+  */
+  public void initialize(IThreadContext threadContext)
+    throws ManifoldCFException
+  {
+    org.apache.manifoldcf.authorities.system.ManifoldCF.localInitialize(threadContext);
+    org.apache.manifoldcf.crawler.system.ManifoldCF.localInitialize(threadContext);
+  }
+  
+  /** Tear down agent environment.
+  * This is called after all the other operations are completed, and is meant to allow
+  * environment resources to be freed.
+  */
+  public void cleanUp(IThreadContext threadContext)
+    throws ManifoldCFException
+  {
+    org.apache.manifoldcf.crawler.system.ManifoldCF.localCleanup(threadContext);
+    org.apache.manifoldcf.authorities.system.ManifoldCF.localCleanup(threadContext);
   }
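+
+  // Lifecycle sketch (an assumption pieced together from the javadoc above, not an
+  // authoritative contract): an agents process calls initialize() first and cleanUp()
+  // last; startAgent()/stopAgent() bracket the period during which crawler threads run,
+  // while install()/deinstall() manage the database tables independently of that cycle.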
 
   /** Install agent.  This usually installs the agent's database tables etc.
   */
-  public void install()
+  @Override
+  public void install(IThreadContext threadContext)
     throws ManifoldCFException
   {
     // Install the system tables for the crawler.
@@ -50,79 +117,603 @@
 
   /** Uninstall agent.  This must clean up everything the agent is responsible for.
   */
-  public void deinstall()
+  @Override
+  public void deinstall(IThreadContext threadContext)
     throws ManifoldCFException
   {
     ManifoldCF.deinstallSystemTables(threadContext);
   }
 
+  /** Called ONLY when no other active services of this kind are running.  Meant to be
+  * used after the cluster has been down for an indeterminate period of time.
+  */
+  @Override
+  public void clusterInit(IThreadContext threadContext)
+    throws ManifoldCFException
+  {
+    IJobManager jobManager = JobManagerFactory.make(threadContext);
+    jobManager.prepareForClusterStart();
+  }
+
+  /** Cleanup after ALL agents processes.
+  * Call this method to clean up dangling persistent state when a cluster is just starting
+  * to come up.  This method CANNOT be called when there are any active agents
+  * processes at all.
+*@param currentProcessID is the current process ID.
+  */
+  @Override
+  public void cleanUpAllAgentData(IThreadContext threadContext, String currentProcessID)
+    throws ManifoldCFException
+  {
+    IJobManager jobManager = JobManagerFactory.make(threadContext);
+    jobManager.cleanupProcessData();
+    // What kind of reprioritization should be done here?
+    // Answer: since we basically keep everything in the database now, the only kind of reprioritization we need
+    // to take care of are dangling ones that won't get done because the process that was doing them went
+    // away.  BUT: somebody may have blown away lock info, in which case we won't know anything at all.
+    // So we do everything in that case.
+    ManifoldCF.resetAllDocumentPriorities(threadContext,System.currentTimeMillis(),currentProcessID);
+  }
+  
+  /** Cleanup after agents process.
+  * Call this method to clean up dangling persistent state after an agent has been stopped.
+  * This method CANNOT be called while that agent is still active, but it can
+  * be called at any time and by any process in order to guarantee that a terminated
+  * agent does not block other agents from completing their tasks.
+  *@param currentProcessID is the current process ID.
+  *@param cleanupProcessID is the process ID of the agent to clean up after.
+  */
+  @Override
+  public void cleanUpAgentData(IThreadContext threadContext, String currentProcessID, String cleanupProcessID)
+    throws ManifoldCFException
+  {
+    IJobManager jobManager = JobManagerFactory.make(threadContext);
+    jobManager.cleanupProcessData(cleanupProcessID);
+    IReprioritizationTracker rt = ReprioritizationTrackerFactory.make(threadContext);
+    String reproID = rt.isSpecifiedProcessReprioritizing(cleanupProcessID);
+    if (reproID != null)
+    {
+      // We have to take over the prioritization for the process, which apparently died
+      // in the middle.
+      IRepositoryConnectionManager connectionManager = RepositoryConnectionManagerFactory.make(threadContext);
+
+      // Reprioritize all documents in the jobqueue, 10000 at a time
+
+      Map<String,IRepositoryConnection> connectionMap = new HashMap<String,IRepositoryConnection>();
+      Map<Long,IJobDescription> jobDescriptionMap = new HashMap<Long,IJobDescription>();
+      
+      // Do the 'not yet processed' documents only.  Documents that are queued for reprocessing will be assigned
+      // new priorities.  Already processed documents won't.  This guarantees that our bins are appropriate for current thread
+      // activity.
+      // In order for this to be the correct functionality, ALL reseeding and requeuing operations MUST reset the associated document
+      // priorities.
+      while (true)
+      {
+        long startTime = System.currentTimeMillis();
+
+        Long currentTimeValue = rt.checkReprioritizationInProgress();
+        if (currentTimeValue == null)
+        {
+          // Some other process or thread superseded us.
+          return;
+        }
+        long updateTime = currentTimeValue.longValue();
+        
+        DocumentDescription[] docs = jobManager.getNextNotYetProcessedReprioritizationDocuments(updateTime, 10000);
+        if (docs.length == 0)
+          break;
+
+        // Calculate new priorities for all these documents
+        ManifoldCF.writeDocumentPriorities(threadContext,docs,connectionMap,jobDescriptionMap,updateTime);
+
+        Logging.threads.debug("Reprioritized "+Integer.toString(docs.length)+" not-yet-processed documents in "+new Long(System.currentTimeMillis()-startTime)+" ms");
+      }
+      
+      rt.doneReprioritization(reproID);
+    }
+  }
+
   /** Start the agent.  This method should spin up the agent threads, and
   * then return.
   */
-  public void startAgent()
+  @Override
+  public void startAgent(IThreadContext threadContext, String processID)
     throws ManifoldCFException
   {
-    org.apache.manifoldcf.authorities.system.ManifoldCF.localInitialize();
-    org.apache.manifoldcf.crawler.system.ManifoldCF.localInitialize();
-    ManifoldCF.startSystem(threadContext);
+    this.processID = processID;
+    startSystem(threadContext);
   }
 
-  /** Stop the agent.  This should shut down the agent threads.
+  /** Stop the agent.  This should shut down the agent threads etc.
   */
-  public void stopAgent()
+  @Override
+  public void stopAgent(IThreadContext threadContext)
     throws ManifoldCFException
   {
-    ManifoldCF.stopSystem(threadContext);
-    org.apache.manifoldcf.crawler.system.ManifoldCF.localCleanup();
-    org.apache.manifoldcf.authorities.system.ManifoldCF.localCleanup();
+    stopSystem(threadContext);
   }
 
   /** Request permission from agent to delete an output connection.
   *@param connName is the name of the output connection.
   *@return true if the connection is in use, false otherwise.
   */
-  public boolean isOutputConnectionInUse(String connName)
+  @Override
+  public boolean isOutputConnectionInUse(IThreadContext threadContext, String connName)
     throws ManifoldCFException
   {
-    return ManifoldCF.isOutputConnectionInUse(threadContext,connName);
+    // Check with job manager.
+    IJobManager jobManager = JobManagerFactory.make(threadContext);
+    return jobManager.checkIfOutputReference(connName);
   }
 
   /** Note the deregistration of a set of output connections.
   *@param connectionNames are the names of the connections being deregistered.
   */
-  public void noteOutputConnectorDeregistration(String[] connectionNames)
+  @Override
+  public void noteOutputConnectorDeregistration(IThreadContext threadContext, String[] connectionNames)
     throws ManifoldCFException
   {
-    ManifoldCF.noteOutputConnectorDeregistration(threadContext,connectionNames);
+    // Notify job manager
+    IJobManager jobManager = JobManagerFactory.make(threadContext);
+    jobManager.noteOutputConnectorDeregistration(connectionNames);
   }
 
   /** Note the registration of a set of output connections.
   *@param connectionNames are the names of the connections being registered.
   */
-  public void noteOutputConnectorRegistration(String[] connectionNames)
+  @Override
+  public void noteOutputConnectorRegistration(IThreadContext threadContext, String[] connectionNames)
     throws ManifoldCFException
   {
-    ManifoldCF.noteOutputConnectorRegistration(threadContext,connectionNames);
+    // Notify job manager
+    IJobManager jobManager = JobManagerFactory.make(threadContext);
+    jobManager.noteOutputConnectorRegistration(connectionNames);
   }
 
   /** Note a change in configuration for an output connection.
   *@param connectionName is the name of the connections being changed.
   */
-  public void noteOutputConnectionChange(String connectionName)
+  @Override
+  public void noteOutputConnectionChange(IThreadContext threadContext, String connectionName)
     throws ManifoldCFException
   {
-    ManifoldCF.noteOutputConnectionChange(threadContext,connectionName);
+    // Notify job manager
+    IJobManager jobManager = JobManagerFactory.make(threadContext);
+    jobManager.noteOutputConnectionChange(connectionName);
   }
 
-  /** Signal that an output connection needs to be "redone".  This means that all documents sent to that output connection must be sent again,
-  * and the history as to their status must be forgotten.
-  *@param connectionName is the name of the connection being signalled.
+  /** Start everything.
   */
-  public void signalOutputConnectionRedo(String connectionName)
+  public void startSystem(IThreadContext threadContext)
     throws ManifoldCFException
   {
-    ManifoldCF.signalOutputConnectionRedo(threadContext,connectionName);
+    Logging.root.info("Starting up pull-agent...");
+    // Now, start all the threads
+    numWorkerThreads = ManifoldCF.getMaxWorkerThreads(threadContext);
+    if (numWorkerThreads < 1 || numWorkerThreads > 300)
+      throw new ManifoldCFException("Illegal value for the number of worker threads");
+    numDeleteThreads = ManifoldCF.getMaxDeleteThreads(threadContext);
+    numCleanupThreads = ManifoldCF.getMaxCleanupThreads(threadContext);
+    numExpireThreads = ManifoldCF.getMaxExpireThreads(threadContext);
+    if (numDeleteThreads < 1 || numDeleteThreads > 300)
+      throw new ManifoldCFException("Illegal value for the number of delete threads");
+    if (numCleanupThreads < 1 || numCleanupThreads > 300)
+      throw new ManifoldCFException("Illegal value for the number of cleanup threads");
+    if (numExpireThreads < 1 || numExpireThreads > 300)
+      throw new ManifoldCFException("Illegal value for the number of expire threads");
+    lowWaterFactor = (float)LockManagerFactory.getDoubleProperty(threadContext,ManifoldCF.lowWaterFactorProperty,5.0);
+    if (lowWaterFactor < 1.0 || lowWaterFactor > 1000.0)
+      throw new ManifoldCFException("Illegal value for the low water factor");
+    stuffAmtFactor = (float)LockManagerFactory.getDoubleProperty(threadContext,ManifoldCF.stuffAmtFactorProperty,2.0);
+    if (stuffAmtFactor < 0.1 || stuffAmtFactor > 1000.0)
+      throw new ManifoldCFException("Illegal value for the stuffing amount factor");
+
+
+    // Create the threads and objects.  This MUST be completed before there is any chance of "stopSystem" getting called.
+
+    QueueTracker queueTracker = new QueueTracker();
+
+
+    DocumentQueue documentQueue = new DocumentQueue();
+    DocumentDeleteQueue documentDeleteQueue = new DocumentDeleteQueue();
+    DocumentCleanupQueue documentCleanupQueue = new DocumentCleanupQueue();
+    DocumentCleanupQueue expireQueue = new DocumentCleanupQueue();
+
+    BlockingDocuments blockingDocuments = new BlockingDocuments();
+
+    workerResetManager = new WorkerResetManager(documentQueue,expireQueue,processID);
+    docDeleteResetManager = new DocDeleteResetManager(documentDeleteQueue,processID);
+    docCleanupResetManager = new DocCleanupResetManager(documentCleanupQueue,processID);
+
+    jobStartThread = new JobStartThread(processID);
+    startupThread = new StartupThread(new StartupResetManager(processID),processID);
+    startDeleteThread = new StartDeleteThread(new DeleteStartupResetManager(processID),processID);
+    finisherThread = new FinisherThread(processID);
+    notificationThread = new JobNotificationThread(new NotificationResetManager(processID),processID);
+    jobDeleteThread = new JobDeleteThread(processID);
+    stufferThread = new StufferThread(documentQueue,numWorkerThreads,workerResetManager,queueTracker,blockingDocuments,lowWaterFactor,stuffAmtFactor,processID);
+    expireStufferThread = new ExpireStufferThread(expireQueue,numExpireThreads,workerResetManager,processID);
+    setPriorityThread = new SetPriorityThread(numWorkerThreads,blockingDocuments,processID);
+    historyCleanupThread = new HistoryCleanupThread(processID);
+
+    workerThreads = new WorkerThread[numWorkerThreads];
+    int i = 0;
+    while (i < numWorkerThreads)
+    {
+      workerThreads[i] = new WorkerThread(Integer.toString(i),documentQueue,workerResetManager,queueTracker,processID);
+      i++;
+    }
+
+    expireThreads = new ExpireThread[numExpireThreads];
+    i = 0;
+    while (i < numExpireThreads)
+    {
+      expireThreads[i] = new ExpireThread(Integer.toString(i),expireQueue,workerResetManager,processID);
+      i++;
+    }
+
+    deleteStufferThread = new DocumentDeleteStufferThread(documentDeleteQueue,numDeleteThreads,docDeleteResetManager,processID);
+    deleteThreads = new DocumentDeleteThread[numDeleteThreads];
+    i = 0;
+    while (i < numDeleteThreads)
+    {
+      deleteThreads[i] = new DocumentDeleteThread(Integer.toString(i),documentDeleteQueue,docDeleteResetManager,processID);
+      i++;
+    }
+      
+    cleanupStufferThread = new DocumentCleanupStufferThread(documentCleanupQueue,numCleanupThreads,docCleanupResetManager,processID);
+    cleanupThreads = new DocumentCleanupThread[numCleanupThreads];
+    i = 0;
+    while (i < numCleanupThreads)
+    {
+      cleanupThreads[i] = new DocumentCleanupThread(Integer.toString(i),documentCleanupQueue,docCleanupResetManager,processID);
+      i++;
+    }
+
+    jobResetThread = new JobResetThread(processID);
+    seedingThread = new SeedingThread(new SeedingResetManager(processID),processID);
+    idleCleanupThread = new IdleCleanupThread(processID);
+
+    // Start all the threads
+    jobStartThread.start();
+    startupThread.start();
+    startDeleteThread.start();
+    finisherThread.start();
+    notificationThread.start();
+    jobDeleteThread.start();
+    stufferThread.start();
+    expireStufferThread.start();
+    setPriorityThread.start();
+    historyCleanupThread.start();
+
+    i = 0;
+    while (i < numWorkerThreads)
+    {
+      workerThreads[i].start();
+      i++;
+    }
+
+    i = 0;
+    while (i < numExpireThreads)
+    {
+      expireThreads[i].start();
+      i++;
+    }
+
+    cleanupStufferThread.start();
+    i = 0;
+    while (i < numCleanupThreads)
+    {
+      cleanupThreads[i].start();
+      i++;
+    }
+
+    deleteStufferThread.start();
+    i = 0;
+    while (i < numDeleteThreads)
+    {
+      deleteThreads[i].start();
+      i++;
+    }
+
+    jobResetThread.start();
+    seedingThread.start();
+    idleCleanupThread.start();
+
+    Logging.root.info("Pull-agent started");
   }
 
+  /** Stop the system.
+  */
+  public void stopSystem(IThreadContext threadContext)
+    throws ManifoldCFException
+  {
+    Logging.root.info("Shutting down pull-agent...");
+    while (jobDeleteThread != null || startupThread != null || startDeleteThread != null ||
+      jobStartThread != null || stufferThread != null ||
+      finisherThread != null || notificationThread != null || workerThreads != null || expireStufferThread != null || expireThreads != null ||
+      deleteStufferThread != null || deleteThreads != null ||
+      cleanupStufferThread != null || cleanupThreads != null ||
+      jobResetThread != null || seedingThread != null || idleCleanupThread != null || setPriorityThread != null || historyCleanupThread != null)
+    {
+      // Send an interrupt to all threads that are still there.
+      // In theory, this only needs to be done once.  In practice, I have seen cases where the thread loses track of the fact that it has been
+      // interrupted (which may be a JVM bug - who knows?), but in any case there's no harm in doing it again.
+      if (historyCleanupThread != null)
+      {
+        historyCleanupThread.interrupt();
+      }
+      if (setPriorityThread != null)
+      {
+        setPriorityThread.interrupt();
+      }
+      if (jobStartThread != null)
+      {
+        jobStartThread.interrupt();
+      }
+      if (jobDeleteThread != null)
+      {
+        jobDeleteThread.interrupt();
+      }
+      if (startupThread != null)
+      {
+        startupThread.interrupt();
+      }
+      if (startDeleteThread != null)
+      {
+        startDeleteThread.interrupt();
+      }
+      if (stufferThread != null)
+      {
+        stufferThread.interrupt();
+      }
+      if (expireStufferThread != null)
+      {
+        expireStufferThread.interrupt();
+      }
+      if (finisherThread != null)
+      {
+        finisherThread.interrupt();
+      }
+      if (notificationThread != null)
+      {
+        notificationThread.interrupt();
+      }
+      if (workerThreads != null)
+      {
+        int i = 0;
+        while (i < workerThreads.length)
+        {
+          Thread workerThread = workerThreads[i++];
+          if (workerThread != null)
+            workerThread.interrupt();
+        }
+      }
+      if (expireThreads != null)
+      {
+        int i = 0;
+        while (i < expireThreads.length)
+        {
+          Thread expireThread = expireThreads[i++];
+          if (expireThread != null)
+            expireThread.interrupt();
+        }
+      }
+      if (cleanupStufferThread != null)
+      {
+        cleanupStufferThread.interrupt();
+      }
+      if (cleanupThreads != null)
+      {
+        int i = 0;
+        while (i < cleanupThreads.length)
+        {
+          Thread cleanupThread = cleanupThreads[i++];
+          if (cleanupThread != null)
+            cleanupThread.interrupt();
+        }
+      }
+      if (deleteStufferThread != null)
+      {
+        deleteStufferThread.interrupt();
+      }
+      if (deleteThreads != null)
+      {
+        int i = 0;
+        while (i < deleteThreads.length)
+        {
+          Thread deleteThread = deleteThreads[i++];
+          if (deleteThread != null)
+            deleteThread.interrupt();
+        }
+      }
+      if (jobResetThread != null)
+      {
+        jobResetThread.interrupt();
+      }
+      if (seedingThread != null)
+      {
+        seedingThread.interrupt();
+      }
+      if (idleCleanupThread != null)
+      {
+        idleCleanupThread.interrupt();
+      }
+
+      // Now, wait for all threads to die.
+      try
+      {
+        ManifoldCF.sleep(1000L);
+      }
+      catch (InterruptedException e)
+      {
+      }
+
+      // Check to see which died.
+      if (historyCleanupThread != null)
+      {
+        if (!historyCleanupThread.isAlive())
+          historyCleanupThread = null;
+      }
+      if (setPriorityThread != null)
+      {
+        if (!setPriorityThread.isAlive())
+          setPriorityThread = null;
+      }
+      if (jobDeleteThread != null)
+      {
+        if (!jobDeleteThread.isAlive())
+          jobDeleteThread = null;
+      }
+      if (startupThread != null)
+      {
+        if (!startupThread.isAlive())
+          startupThread = null;
+      }
+      if (startDeleteThread != null)
+      {
+        if (!startDeleteThread.isAlive())
+          startDeleteThread = null;
+      }
+      if (jobStartThread != null)
+      {
+        if (!jobStartThread.isAlive())
+          jobStartThread = null;
+      }
+      if (stufferThread != null)
+      {
+        if (!stufferThread.isAlive())
+          stufferThread = null;
+      }
+      if (expireStufferThread != null)
+      {
+        if (!expireStufferThread.isAlive())
+          expireStufferThread = null;
+      }
+      if (finisherThread != null)
+      {
+        if (!finisherThread.isAlive())
+          finisherThread = null;
+      }
+      if (notificationThread != null)
+      {
+        if (!notificationThread.isAlive())
+          notificationThread = null;
+      }
+      if (workerThreads != null)
+      {
+        int i = 0;
+        boolean isAlive = false;
+        while (i < workerThreads.length)
+        {
+          Thread workerThread = workerThreads[i];
+          if (workerThread != null)
+          {
+            if (!workerThread.isAlive())
+              workerThreads[i] = null;
+            else
+              isAlive = true;
+          }
+          i++;
+        }
+        if (!isAlive)
+          workerThreads = null;
+      }
+
+      if (expireThreads != null)
+      {
+        int i = 0;
+        boolean isAlive = false;
+        while (i < expireThreads.length)
+        {
+          Thread expireThread = expireThreads[i];
+          if (expireThread != null)
+          {
+            if (!expireThread.isAlive())
+              expireThreads[i] = null;
+            else
+              isAlive = true;
+          }
+          i++;
+        }
+        if (!isAlive)
+          expireThreads = null;
+      }
+      
+      if (cleanupStufferThread != null)
+      {
+        if (!cleanupStufferThread.isAlive())
+          cleanupStufferThread = null;
+      }
+      if (cleanupThreads != null)
+      {
+        int i = 0;
+        boolean isAlive = false;
+        while (i < cleanupThreads.length)
+        {
+          Thread cleanupThread = cleanupThreads[i];
+          if (cleanupThread != null)
+          {
+            if (!cleanupThread.isAlive())
+              cleanupThreads[i] = null;
+            else
+              isAlive = true;
+          }
+          i++;
+        }
+        if (!isAlive)
+          cleanupThreads = null;
+      }
+
+      if (deleteStufferThread != null)
+      {
+        if (!deleteStufferThread.isAlive())
+          deleteStufferThread = null;
+      }
+      if (deleteThreads != null)
+      {
+        int i = 0;
+        boolean isAlive = false;
+        while (i < deleteThreads.length)
+        {
+          Thread deleteThread = deleteThreads[i];
+          if (deleteThread != null)
+          {
+            if (!deleteThread.isAlive())
+              deleteThreads[i] = null;
+            else
+              isAlive = true;
+          }
+          i++;
+        }
+        if (!isAlive)
+          deleteThreads = null;
+      }
+      if (jobResetThread != null)
+      {
+        if (!jobResetThread.isAlive())
+          jobResetThread = null;
+      }
+      if (seedingThread != null)
+      {
+        if (!seedingThread.isAlive())
+          seedingThread = null;
+      }
+      if (idleCleanupThread != null)
+      {
+        if (!idleCleanupThread.isAlive())
+          idleCleanupThread = null;
+      }
+    }
+
+    // Threads are down; release connectors
+    RepositoryConnectorPoolFactory.make(threadContext).flushUnusedConnectors();
+    numWorkerThreads = 0;
+    numDeleteThreads = 0;
+    numCleanupThreads = 0;
+    numExpireThreads = 0;
+    Logging.root.info("Pull-agent successfully shut down");
+  }
+  
+
 }
 
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/DeleteStartupResetManager.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/DeleteStartupResetManager.java
new file mode 100644
index 0000000..276668c
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/DeleteStartupResetManager.java
@@ -0,0 +1,57 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.crawler.system;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+import org.apache.manifoldcf.crawler.interfaces.*;
+import org.apache.manifoldcf.crawler.system.Logging;
+import java.util.*;
+import java.lang.reflect.*;
+
+/** Class which handles reset for the delete startup thread pool (of which there's
+* typically only one member).  The reset action here
+* is to move the status of jobs back from "delete startup" to normal.
+*/
+public class DeleteStartupResetManager extends ResetManager
+{
+
+  /** Constructor. */
+  public DeleteStartupResetManager(String processID)
+  {
+    super(processID);
+  }
+
+  /** Reset */
+  @Override
+  protected void performResetLogic(IThreadContext tc, String processID)
+    throws ManifoldCFException
+  {
+    IJobManager jobManager = JobManagerFactory.make(tc);
+    jobManager.resetDeleteStartupWorkerStatus(processID);
+  }
+    
+  /** Do the wakeup logic.
+  */
+  @Override
+  protected void performWakeupLogic()
+  {
+  }
+
+}
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/DocCleanupResetManager.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/DocCleanupResetManager.java
index e18a0c6..fc303a4 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/DocCleanupResetManager.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/DocCleanupResetManager.java
@@ -31,26 +31,28 @@
 {
   public static final String _rcsid = "@(#)$Id$";
 
-  protected DocumentCleanupQueue ddq;
+  protected final DocumentCleanupQueue ddq;
 
   /** Constructor. */
-  public DocCleanupResetManager(DocumentCleanupQueue ddq)
+  public DocCleanupResetManager(DocumentCleanupQueue ddq, String processID)
   {
-    super();
+    super(processID);
     this.ddq = ddq;
   }
 
   /** Reset */
-  protected void performResetLogic(IThreadContext tc)
+  @Override
+  protected void performResetLogic(IThreadContext tc, String processID)
     throws ManifoldCFException
   {
     IJobManager jobManager = JobManagerFactory.make(tc);
-    jobManager.resetDocCleanupWorkerStatus();
+    jobManager.resetDocCleanupWorkerStatus(processID);
     ddq.clear();
   }
   
   /** Do the wakeup logic.
   */
+  @Override
   protected void performWakeupLogic()
   {
     ddq.reset();
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/DocDeleteResetManager.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/DocDeleteResetManager.java
index fec993a..d11f280 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/DocDeleteResetManager.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/DocDeleteResetManager.java
@@ -31,26 +31,28 @@
 {
   public static final String _rcsid = "@(#)$Id: DocDeleteResetManager.java 988245 2010-08-23 18:39:35Z kwright $";
 
-  protected DocumentDeleteQueue ddq;
+  protected final DocumentDeleteQueue ddq;
 
   /** Constructor. */
-  public DocDeleteResetManager(DocumentDeleteQueue ddq)
+  public DocDeleteResetManager(DocumentDeleteQueue ddq, String processID)
   {
-    super();
+    super(processID);
     this.ddq = ddq;
   }
 
   /** Reset */
-  protected void performResetLogic(IThreadContext tc)
+  @Override
+  protected void performResetLogic(IThreadContext tc, String processID)
     throws ManifoldCFException
   {
     IJobManager jobManager = JobManagerFactory.make(tc);
-    jobManager.resetDocDeleteWorkerStatus();
+    jobManager.resetDocDeleteWorkerStatus(processID);
     ddq.clear();
   }
 
   /** Do the wakeup logic.
   */
+  @Override
   protected void performWakeupLogic()
   {
     ddq.reset();
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/DocumentCleanupQueue.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/DocumentCleanupQueue.java
index 3bb663b..60791ac 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/DocumentCleanupQueue.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/DocumentCleanupQueue.java
@@ -32,7 +32,9 @@
   public static final String _rcsid = "@(#)$Id$";
 
   // Since the queue has a maximum size, an ArrayList is a fine way to keep it
-  protected ArrayList queue = new ArrayList();
+  protected final List<DocumentCleanupSet> queue = new ArrayList<DocumentCleanupSet>();
+  // This flag gets set to 'true' if the queue is being cleared due to a reset
+  protected boolean resetFlag = false;
 
   /** Constructor.
   */
@@ -46,6 +48,7 @@
   {
     synchronized (queue)
     {
+      resetFlag = true;
       queue.notifyAll();
     }
   }
@@ -57,6 +60,7 @@
     synchronized (queue)
     {
       queue.clear();
+      resetFlag = false;
     }
   }
 
@@ -96,14 +100,19 @@
     synchronized (queue)
     {
       // If queue is empty, go to sleep
-      while (queue.size() == 0)
+      if (resetFlag)
+        return null;
+        
+      while (queue.size() == 0 && !resetFlag)
         queue.wait();
+      
       // If we've been awakened, there's either an entry to grab, or we've been
       // awakened because it's time to reset.
-      if (queue.size() == 0)
-        return null;
+      if (resetFlag)
+        return null;
+      
       // If we've been awakened, there's an entry to grab
-      DocumentCleanupSet dd = (DocumentCleanupSet)queue.remove(queue.size()-1);
+      DocumentCleanupSet dd = queue.remove(queue.size()-1);
       return dd;
     }
   }
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/DocumentCleanupStufferThread.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/DocumentCleanupStufferThread.java
index 21e3e87..e52408c 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/DocumentCleanupStufferThread.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/DocumentCleanupStufferThread.java
@@ -36,24 +36,27 @@
   public static final String _rcsid = "@(#)$Id$";
 
   // Local data
-  // This is a reference to the static main document queue
-  protected DocumentCleanupQueue documentCleanupQueue;
-  // This is the reset manager
-  protected DocCleanupResetManager resetManager;
-  // This is the number of entries we want to stuff at any one time.
-  int n;
+  /** This is a reference to the static main document queue */
+  protected final DocumentCleanupQueue documentCleanupQueue;
+  /** This is the reset manager */
+  protected final DocCleanupResetManager resetManager;
+  /** This is the number of entries we want to stuff at any one time. */
+  protected final int n;
+  /** Process ID */
+  protected final String processID;
 
   /** Constructor.
   *@param documentCleanupQueue is the document queue we'll be stuffing.
   *@param n is the maximum number of threads that will be doing delete processing.
   */
-  public DocumentCleanupStufferThread(DocumentCleanupQueue documentCleanupQueue, int n, DocCleanupResetManager resetManager)
+  public DocumentCleanupStufferThread(DocumentCleanupQueue documentCleanupQueue, int n, DocCleanupResetManager resetManager, String processID)
     throws ManifoldCFException
   {
     super();
     this.documentCleanupQueue = documentCleanupQueue;
     this.n = n;
     this.resetManager = resetManager;
+    this.processID = processID;
     setName("Document cleanup stuffer thread");
     setDaemon(true);
   }
@@ -102,7 +105,7 @@
           // This method will set the status of the documents in question
           // to "beingcleaned".
 
-          DocumentSetAndFlags documentsToClean = jobManager.getNextCleanableDocuments(deleteChunkSize,currentTime);
+          DocumentSetAndFlags documentsToClean = jobManager.getNextCleanableDocuments(processID,deleteChunkSize,currentTime);
           DocumentDescription[] descs = documentsToClean.getDocumentSet();
           boolean[] removeFromIndex = documentsToClean.getFlags();
           
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/DocumentCleanupThread.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/DocumentCleanupThread.java
index f6aa078..f448d61 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/DocumentCleanupThread.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/DocumentCleanupThread.java
@@ -43,27 +43,28 @@
 {
   public static final String _rcsid = "@(#)$Id$";
 
-
   // Local data
-  protected String id;
-  // This is a reference to the static main document queue
-  protected DocumentCleanupQueue documentCleanupQueue;
+  /** Thread id */
+  protected final String id;
+  /** This is a reference to the static main document queue */
+  protected final DocumentCleanupQueue documentCleanupQueue;
   /** Delete thread pool reset manager */
-  protected DocCleanupResetManager resetManager;
-  /** Queue tracker */
-  protected QueueTracker queueTracker;
+  protected final DocCleanupResetManager resetManager;
+  /** Process ID */
+  protected final String processID;
 
   /** Constructor.
   *@param id is the worker thread id.
   */
-  public DocumentCleanupThread(String id, DocumentCleanupQueue documentCleanupQueue, QueueTracker queueTracker, DocCleanupResetManager resetManager)
+  public DocumentCleanupThread(String id, DocumentCleanupQueue documentCleanupQueue,
+    DocCleanupResetManager resetManager, String processID)
     throws ManifoldCFException
   {
     super();
     this.id = id;
     this.documentCleanupQueue = documentCleanupQueue;
-    this.queueTracker = queueTracker;
     this.resetManager = resetManager;
+    this.processID = processID;
     setName("Document cleanup thread '"+id+"'");
     setDaemon(true);
   }
@@ -79,7 +80,10 @@
       IIncrementalIngester ingester = IncrementalIngesterFactory.make(threadContext);
       IJobManager jobManager = JobManagerFactory.make(threadContext);
       IRepositoryConnectionManager connMgr = RepositoryConnectionManagerFactory.make(threadContext);
+      IReprioritizationTracker rt = ReprioritizationTrackerFactory.make(threadContext);
 
+      IRepositoryConnectorPool repositoryConnectorPool = RepositoryConnectorPoolFactory.make(threadContext);
+      
       // Loop
       while (true)
       {
@@ -142,7 +146,7 @@
             }
 
             // Grab one connection for each connectionName.  If we fail, nothing is lost and retries are possible.
-            IRepositoryConnector connector = RepositoryConnectorFactory.grab(threadContext,connection.getClassName(),connection.getConfigParams(),connection.getMaxConnections());
+            IRepositoryConnector connector = repositoryConnectorPool.grab(connection);
             try
             {
 
@@ -235,7 +239,7 @@
                   DocumentDescription[] requeueCandidates = jobManager.markDocumentCleanedUp(jobID,legalLinkTypes,ddd,hopcountMethod);
                   // Use the common method for doing the requeuing
                   ManifoldCF.requeueDocumentsDueToCarrydown(jobManager,requeueCandidates,
-                    connector,connection,queueTracker,currentTime);
+                    connector,connection,rt,currentTime);
                   // Finally, completed expiration of the document.
                   dqd.setProcessed();
                 }
@@ -245,7 +249,7 @@
             finally
             {
               // Free up the reserved connector instance
-              RepositoryConnectorFactory.release(connector);
+              repositoryConnectorPool.release(connection,connector);
             }
           }
           catch (ManifoldCFException e)
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/DocumentDeleteStufferThread.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/DocumentDeleteStufferThread.java
index 578bd83..4d281d4 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/DocumentDeleteStufferThread.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/DocumentDeleteStufferThread.java
@@ -36,24 +36,27 @@
   public static final String _rcsid = "@(#)$Id: DocumentDeleteStufferThread.java 988245 2010-08-23 18:39:35Z kwright $";
 
   // Local data
-  // This is a reference to the static main document queue
-  protected DocumentDeleteQueue documentDeleteQueue;
-  // This is the reset manager
-  protected DocDeleteResetManager resetManager;
-  // This is the number of entries we want to stuff at any one time.
-  int n;
-
+  /** This is a reference to the static main document queue */
+  protected final DocumentDeleteQueue documentDeleteQueue;
+  /** This is the reset manager */
+  protected final DocDeleteResetManager resetManager;
+  /** This is the number of entries we want to stuff at any one time. */
+  protected final int n;
+  /** Process ID */
+  protected final String processID;
+  
   /** Constructor.
   *@param documentDeleteQueue is the document queue we'll be stuffing.
   *@param n is the maximum number of threads that will be doing delete processing.
   */
-  public DocumentDeleteStufferThread(DocumentDeleteQueue documentDeleteQueue, int n, DocDeleteResetManager resetManager)
+  public DocumentDeleteStufferThread(DocumentDeleteQueue documentDeleteQueue, int n, DocDeleteResetManager resetManager, String processID)
     throws ManifoldCFException
   {
     super();
     this.documentDeleteQueue = documentDeleteQueue;
     this.n = n;
     this.resetManager = resetManager;
+    this.processID = processID;
     setName("Document delete stuffer thread");
     setDaemon(true);
   }
@@ -102,7 +105,7 @@
           // This method will set the status of the documents in question
           // to "beingdeleted".
 
-          DocumentDescription[] descs = jobManager.getNextDeletableDocuments(deleteChunkSize,currentTime);
+          DocumentDescription[] descs = jobManager.getNextDeletableDocuments(processID,deleteChunkSize,currentTime);
 
           // If there are no chunks at all, then we can sleep for a while.
           // The theory is that we need to allow stuff to accumulate.
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/DocumentDeleteThread.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/DocumentDeleteThread.java
index 3a7bcfe..8f284e2 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/DocumentDeleteThread.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/DocumentDeleteThread.java
@@ -44,22 +44,26 @@
 
 
   // Local data
-  protected String id;
-  // This is a reference to the static main document queue
-  protected DocumentDeleteQueue documentDeleteQueue;
+  /** Thread ID */
+  protected final String id;
+  /** This is a reference to the static main document queue */
+  protected final DocumentDeleteQueue documentDeleteQueue;
   /** Delete thread pool reset manager */
-  protected DocDeleteResetManager resetManager;
+  protected final DocDeleteResetManager resetManager;
+  /** Process ID */
+  protected final String processID;
 
   /** Constructor.
   *@param id is the worker thread id.
   */
-  public DocumentDeleteThread(String id, DocumentDeleteQueue documentDeleteQueue, DocDeleteResetManager resetManager)
+  public DocumentDeleteThread(String id, DocumentDeleteQueue documentDeleteQueue, DocDeleteResetManager resetManager, String processID)
     throws ManifoldCFException
   {
     super();
     this.id = id;
     this.documentDeleteQueue = documentDeleteQueue;
     this.resetManager = resetManager;
+    this.processID = processID;
     setName("Document delete thread '"+id+"'");
     setDaemon(true);
   }
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/DocumentQueue.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/DocumentQueue.java
index 4e222b0..66a2a81 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/DocumentQueue.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/DocumentQueue.java
@@ -32,7 +32,7 @@
   public static final String _rcsid = "@(#)$Id: DocumentQueue.java 988245 2010-08-23 18:39:35Z kwright $";
 
   // Since the queue has a maximum size, an ArrayList is a fine way to keep it
-  protected ArrayList queue = new ArrayList();
+  protected final List<QueuedDocumentSet> queue = new ArrayList<QueuedDocumentSet>();
   // This flag gets set to 'true' if the queue is being cleared due to a reset
   protected boolean resetFlag = false;
 
@@ -111,7 +111,7 @@
 
       // If we've been awakened, there's either an entry to grab, or we've been
       // awakened because it's time to reset.
-      if (queue.size() == 0)
+      if (resetFlag)
         return null;
 
       // Go through all the documents and pick the one with the best rating
@@ -120,7 +120,7 @@
       double bestRating = Double.NEGATIVE_INFINITY;
       while (i < queue.size())
       {
-        QueuedDocumentSet dd = (QueuedDocumentSet)queue.get(i);
+        QueuedDocumentSet dd = queue.get(i);
         // Evaluate each document's bins.  These will be saved in the QueuedDocumentSet.
         double rating = dd.calculateAssignmentRating(overlapCalculator);
         if (bestIndex == -1 || rating > bestRating)
@@ -131,7 +131,7 @@
         i++;
       }
       // Pull off the best one.  DON'T REORDER!!
-      QueuedDocumentSet rval = (QueuedDocumentSet)queue.remove(bestIndex);
+      QueuedDocumentSet rval = queue.remove(bestIndex);
       return rval;
     }
   }
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/ExpireStufferThread.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/ExpireStufferThread.java
index 38cb83d..f2d6546 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/ExpireStufferThread.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/ExpireStufferThread.java
@@ -33,25 +33,28 @@
   public static final String _rcsid = "@(#)$Id: ExpireStufferThread.java 988245 2010-08-23 18:39:35Z kwright $";
 
   // Local data
-  // This is a reference to the static main document expiration queue
-  protected DocumentCleanupQueue documentQueue;
+  /** This is a reference to the static main document expiration queue */
+  protected final DocumentCleanupQueue documentQueue;
   /** Worker thread pool reset manager */
-  protected WorkerResetManager resetManager;
-  // This is the number of entries we want to stuff at any one time.
-  protected int n;
-
+  protected final WorkerResetManager resetManager;
+  /** This is the number of entries we want to stuff at any one time. */
+  protected final int n;
+  /** Process ID */
+  protected final String processID;
+  
   /** Constructor.
   *@param documentQueue is the document queue we'll be stuffing.
   *@param n represents the number of threads that will be processing queued stuff, NOT the
   * number of documents to be done at once!
   */
-  public ExpireStufferThread(DocumentCleanupQueue documentQueue, int n, WorkerResetManager resetManager)
+  public ExpireStufferThread(DocumentCleanupQueue documentQueue, int n, WorkerResetManager resetManager, String processID)
     throws ManifoldCFException
   {
     super();
     this.documentQueue = documentQueue;
     this.n = n;
     this.resetManager = resetManager;
+    this.processID = processID;
     setName("Expire stuffer thread");
     setDaemon(true);
     // The priority of this thread is higher than most others.  We want stuffing to proceed even if the machine
@@ -115,7 +118,7 @@
           // The number n passed in here thus cannot be used in a query to limit the number of returned
           // results.  Instead, it must be factored into the limit portion of the query.
           long currentTime = System.currentTimeMillis();
-          DocumentSetAndFlags docsAndFlags = jobManager.getExpiredDocuments(deleteChunkSize,currentTime);
+          DocumentSetAndFlags docsAndFlags = jobManager.getExpiredDocuments(processID,deleteChunkSize,currentTime);
           DocumentDescription[] descs = docsAndFlags.getDocumentSet();
           boolean[] deleteFromIndex = docsAndFlags.getFlags();
           
@@ -131,7 +134,7 @@
           // The theory is that we need to allow stuff to accumulate.
           if (descs.length == 0)
           {
-            ManifoldCF.sleep(60000L);      // 1 minute
+            ManifoldCF.sleep(5000L);      // 5 seconds
             continue;
           }
 
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/ExpireThread.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/ExpireThread.java
index 47c29e8..7f8e3e4 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/ExpireThread.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/ExpireThread.java
@@ -33,25 +33,26 @@
 
 
   // Local data
-  protected String id;
-  // This is a reference to the static main document queue
-  protected DocumentCleanupQueue documentQueue;
+  /** Thread id */
+  protected final String id;
+  /** This is a reference to the static main document queue */
+  protected final DocumentCleanupQueue documentQueue;
   /** Worker thread pool reset manager */
-  protected WorkerResetManager resetManager;
-  /** Queue tracker */
-  protected QueueTracker queueTracker;
-
+  protected final WorkerResetManager resetManager;
+  /** Process ID */
+  protected final String processID;
+  
   /** Constructor.
   *@param id is the expire thread id.
   */
-  public ExpireThread(String id, DocumentCleanupQueue documentQueue, QueueTracker queueTracker, WorkerResetManager resetManager)
+  public ExpireThread(String id, DocumentCleanupQueue documentQueue, WorkerResetManager resetManager, String processID)
     throws ManifoldCFException
   {
     super();
     this.id = id;
     this.documentQueue = documentQueue;
     this.resetManager = resetManager;
-    this.queueTracker = queueTracker;
+    this.processID = processID;
     setName("Expiration thread '"+id+"'");
     setDaemon(true);
 
@@ -69,7 +70,10 @@
       IIncrementalIngester ingester = IncrementalIngesterFactory.make(threadContext);
       IJobManager jobManager = JobManagerFactory.make(threadContext);
       IRepositoryConnectionManager connMgr = RepositoryConnectionManagerFactory.make(threadContext);
+      IReprioritizationTracker rt = ReprioritizationTrackerFactory.make(threadContext);
 
+      IRepositoryConnectorPool repositoryConnectorPool = RepositoryConnectorPoolFactory.make(threadContext);
+      
       // Loop
       while (true)
       {
@@ -134,7 +138,7 @@
             }
 
             // Grab one connection for the connectionName.  If we fail, nothing is lost and retries are possible.
-            IRepositoryConnector connector = RepositoryConnectorFactory.grab(threadContext,connection.getClassName(),connection.getConfigParams(),connection.getMaxConnections());
+            IRepositoryConnector connector = repositoryConnectorPool.grab(connection);
             try
             {
 
@@ -238,7 +242,7 @@
                   DocumentDescription[] requeueCandidates = jobManager.markDocumentExpired(jobID,legalLinkTypes,ddd,hopcountMethod);
                   // Use the common method for doing the requeuing
                   ManifoldCF.requeueDocumentsDueToCarrydown(jobManager,requeueCandidates,
-                    connector,connection,queueTracker,currentTime);
+                    connector,connection,rt,currentTime);
                   // Finally, completed expiration of the document.
                   dqd.setProcessed();
                 }
@@ -248,7 +252,7 @@
             finally
             {
               // Free up the reserved connector instance
-              RepositoryConnectorFactory.release(connector);
+              repositoryConnectorPool.release(connection,connector);
             }
           }
           catch (ManifoldCFException e)
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/FinisherThread.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/FinisherThread.java
index b584073..ff2134e 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/FinisherThread.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/FinisherThread.java
@@ -32,13 +32,16 @@
   public static final String _rcsid = "@(#)$Id: FinisherThread.java 991295 2010-08-31 19:12:14Z kwright $";
 
   // Local data
+  /** Process ID */
+  protected final String processID;
 
   /** Constructor.
   */
-  public FinisherThread()
+  public FinisherThread(String processID)
     throws ManifoldCFException
   {
     super();
+    this.processID = processID;
     setName("Finisher thread");
     setDaemon(true);
   }
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/HistoryCleanupThread.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/HistoryCleanupThread.java
new file mode 100644
index 0000000..652d7b3
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/HistoryCleanupThread.java
@@ -0,0 +1,136 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.crawler.system;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.crawler.interfaces.*;
+import java.util.*;
+import java.lang.reflect.*;
+
+/** This class describes the thread that cleans up history records.
+* It fires infrequently and removes history records older than a configuration-determined cutoff.
+*/
+public class HistoryCleanupThread extends Thread
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  protected static final String historyCleanupIntervalProperty = "org.apache.manifoldcf.crawler.historycleanupinterval";
+  
+  // Local data
+  /** Process ID */
+  protected final String processID;
+
+  /** Constructor.
+  */
+  public HistoryCleanupThread(String processID)
+    throws ManifoldCFException
+  {
+    super();
+    this.processID = processID;
+    setName("History cleanup thread");
+    setDaemon(true);
+  }
+
+  public void run()
+  {
+    try
+    {
+      // Create a thread context object.
+      IThreadContext threadContext = ThreadContextFactory.make();
+      IRepositoryConnectionManager connectionManager = RepositoryConnectionManagerFactory.make(threadContext);
+      // Default zero value means we never clean up, which is the backwards-compatible behavior
+      long historyCleanupInterval = LockManagerFactory.getLongProperty(threadContext, historyCleanupIntervalProperty, 0L);
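+      // Example (illustrative value only, not part of this change): setting
+      //   org.apache.manifoldcf.crawler.historycleanupinterval=2592000000
+      // keeps roughly 30 days of history (30 * 24 * 60 * 60 * 1000 ms); leaving the
+      // property unset, or set to 0, preserves the old never-clean-up behavior.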
+      // Loop
+      while (true)
+      {
+        if (Thread.currentThread().isInterrupted())
+          break;
+
+        // Do another try/catch around everything in the loop
+        try
+        {
+          // Get current time
+          long currentTime = System.currentTimeMillis();
+          // Log it
+          if (Logging.threads.isDebugEnabled())
+            Logging.threads.debug("History cleanup thread - removing old history at "+new Long(currentTime).toString());
+          if (historyCleanupInterval > 0L && historyCleanupInterval < currentTime)
+            connectionManager.cleanUpHistoryData(currentTime - historyCleanupInterval);
+          else
+            Logging.threads.debug(" History cleanup thread did nothing because cleanup disabled");
+          // Loop around again, after resting a while
+          ManifoldCF.sleep(60L * 60L * 1000L);
+        }
+        catch (ManifoldCFException e)
+        {
+          if (e.getErrorCode() == ManifoldCFException.INTERRUPTED)
+            break;
+
+          if (e.getErrorCode() == ManifoldCFException.DATABASE_CONNECTION_ERROR)
+          {
+            Logging.threads.error("History thread aborting and restarting due to database connection reset: "+e.getMessage(),e);
+            try
+            {
+              // Give the database a chance to catch up/wake up
+              ManifoldCF.sleep(10000L);
+            }
+            catch (InterruptedException se)
+            {
+              break;
+            }
+            continue;
+          }
+
+          // Log it, but keep the thread alive
+          Logging.threads.error("Exception tossed: "+e.getMessage(),e);
+
+          if (e.getErrorCode() == ManifoldCFException.SETUP_ERROR)
+          {
+            // Shut the whole system down!
+            System.exit(1);
+          }
+        }
+        catch (InterruptedException e)
+        {
+          // We're supposed to quit
+          break;
+        }
+        catch (OutOfMemoryError e)
+        {
+          System.err.println("agents process ran out of memory - shutting down");
+          e.printStackTrace(System.err);
+          System.exit(-200);
+        }
+        catch (Throwable e)
+        {
+          // A more severe error - but stay alive
+          Logging.threads.fatal("Error tossed: "+e.getMessage(),e);
+        }
+      }
+    }
+    catch (Throwable e)
+    {
+      // Severe error on initialization
+      System.err.println("agents process could not start - shutting down");
+      Logging.threads.fatal("HistoryCleanupThread initialization error tossed: "+e.getMessage(),e);
+      System.exit(-300);
+    }
+  }
+
+}
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/IdleCleanupThread.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/IdleCleanupThread.java
index cfa6db7..65b897c 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/IdleCleanupThread.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/IdleCleanupThread.java
@@ -33,14 +33,16 @@
   public static final String _rcsid = "@(#)$Id: IdleCleanupThread.java 988245 2010-08-23 18:39:35Z kwright $";
 
   // Local data
-
+  /** Process ID */
+  protected final String processID;
 
   /** Constructor.
   */
-  public IdleCleanupThread()
+  public IdleCleanupThread(String processID)
     throws ManifoldCFException
   {
     super();
+    this.processID = processID;
     setName("Idle cleanup thread");
     setDaemon(true);
   }
@@ -55,6 +57,8 @@
       // Get the cache handle.
       ICacheManager cacheManager = CacheManagerFactory.make(threadContext);
       
+      IRepositoryConnectorPool repositoryConnectorPool = RepositoryConnectorPoolFactory.make(threadContext);
+      
       // Loop
       while (true)
       {
@@ -62,12 +66,11 @@
         try
         {
           // Do the cleanup
-          RepositoryConnectorFactory.pollAllConnectors(threadContext);
-          OutputConnectorFactory.pollAllConnectors(threadContext);
+          repositoryConnectorPool.pollAllConnectors();
           cacheManager.expireObjects(System.currentTimeMillis());
           
           // Sleep for the retry interval.
-          ManifoldCF.sleep(15000L);
+          ManifoldCF.sleep(5000L);
         }
         catch (ManifoldCFException e)
         {
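
The idle-cleanup change above replaces the static factory polling calls with a call on a repository connector pool obtained from a factory. A minimal self-contained sketch of what such a pool looks like, with grab/release, periodic polling, and shutdown; the names are illustrative only and do not reflect the ManifoldCF pool interfaces:

    import java.util.ArrayDeque;
    import java.util.Deque;

    // Illustrative connector with the lifecycle hooks a pool needs.
    class Connector {
      void poll()  { /* keep idle sessions alive */ }
      void check() { /* e.g. verify the target service is reachable */ }
      void close() { /* free sockets, sessions, handles */ }
    }

    // Minimal pool for one connection definition: hands out at most maxInstances
    // connector instances, re-using idle ones.
    class ConnectorPool {
      private final int maxInstances;
      private final Deque<Connector> idle = new ArrayDeque<>();
      private int outstanding = 0;

      ConnectorPool(int maxInstances) { this.maxInstances = maxInstances; }

      // Hand out an idle instance, creating one while under the limit, else wait for a release.
      public synchronized Connector grab() throws InterruptedException {
        while (idle.isEmpty() && outstanding >= maxInstances)
          wait();
        outstanding++;
        return idle.isEmpty() ? new Connector() : idle.pop();
      }

      // Return an instance to the pool and wake any waiters.
      public synchronized void release(Connector connector) {
        outstanding--;
        idle.push(connector);
        notifyAll();
      }

      // What an idle-cleanup pass would call periodically.
      public synchronized void pollAllConnectors() {
        for (Connector c : idle)
          c.poll();
      }

      // What shutdown would call once all threads are down.
      public synchronized void closeAllConnectors() {
        while (!idle.isEmpty())
          idle.pop().close();
      }
    }

    public class ConnectorPoolSketch {
      public static void main(String[] args) throws InterruptedException {
        ConnectorPool pool = new ConnectorPool(2);
        Connector connector = pool.grab();
        try {
          connector.check();         // use the instance
        } finally {
          pool.release(connector);   // always release, mirroring the try/finally usage in the threads
        }
        pool.pollAllConnectors();
        pool.closeAllConnectors();
      }
    }
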
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/JobDeleteThread.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/JobDeleteThread.java
index b9071f2..5416ef9 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/JobDeleteThread.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/JobDeleteThread.java
@@ -35,13 +35,16 @@
   public static final String _rcsid = "@(#)$Id: JobDeleteThread.java 988245 2010-08-23 18:39:35Z kwright $";
 
   // Local data
-
+  /** Process ID */
+  protected final String processID;
+  
   /** Constructor.
   */
-  public JobDeleteThread()
+  public JobDeleteThread(String processID)
     throws ManifoldCFException
   {
     super();
+    this.processID = processID;
     setName("Job delete thread");
     setDaemon(true);
   }
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/JobNotificationThread.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/JobNotificationThread.java
index 0a775e4..96b892b 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/JobNotificationThread.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/JobNotificationThread.java
@@ -33,14 +33,18 @@
   public static final String _rcsid = "@(#)$Id: JobNotificationThread.java 998081 2010-09-17 11:33:15Z kwright $";
 
   /** Notification reset manager */
-  protected static NotificationResetManager resetManager = new NotificationResetManager();
-
+  protected final NotificationResetManager resetManager;
+  /** Process ID */
+  protected final String processID;
+  
   /** Constructor.
   */
-  public JobNotificationThread()
+  public JobNotificationThread(NotificationResetManager resetManager, String processID)
     throws ManifoldCFException
   {
     super();
+    this.resetManager = resetManager;
+    this.processID = processID;
     setName("Job notification thread");
     setDaemon(true);
   }
@@ -57,6 +61,8 @@
       IOutputConnectionManager connectionManager = OutputConnectionManagerFactory.make(threadContext);
       IRepositoryConnectionManager repositoryConnectionManager = RepositoryConnectionManagerFactory.make(threadContext);
 
+      IOutputConnectorPool outputConnectorPool = OutputConnectorPoolFactory.make(threadContext);
+      
       // Loop
       while (true)
       {
@@ -66,7 +72,7 @@
           // Before we begin, conditionally reset
           resetManager.waitForReset(threadContext);
 
-          JobNotifyRecord[] jobsNeedingNotification = jobManager.getJobsReadyForInactivity();
+          JobNotifyRecord[] jobsNeedingNotification = jobManager.getJobsReadyForInactivity(processID);
           try
           {
             HashMap connectionNames = new HashMap();
@@ -104,7 +110,7 @@
               if (connection != null)
               {
                 // Grab an appropriate connection instance
-                IOutputConnector connector = OutputConnectorFactory.grab(threadContext,connection.getClassName(),connection.getConfigParams(),connection.getMaxConnections());
+                IOutputConnector connector = outputConnectorPool.grab(connection);
                 if (connector != null)
                 {
                   try
@@ -134,7 +140,7 @@
                   }
                   finally
                   {
-                    OutputConnectorFactory.release(connector);
+                    outputConnectorPool.release(connection,connector);
                   }
                 }
               }
@@ -329,33 +335,4 @@
 
   }
   
-  /** Class which handles reset for seeding thread pool (of which there's
-  * typically only one member).  The reset action here
-  * is to move the status of jobs back from "seeding" to normal.
-  */
-  protected static class NotificationResetManager extends ResetManager
-  {
-
-    /** Constructor. */
-    public NotificationResetManager()
-    {
-      super();
-    }
-
-    /** Reset */
-    protected void performResetLogic(IThreadContext tc)
-      throws ManifoldCFException
-    {
-      IJobManager jobManager = JobManagerFactory.make(tc);
-      jobManager.resetNotificationWorkerStatus();
-    }
-
-    /** Do the wakeup logic.
-    */
-    protected void performWakeupLogic()
-    {
-    }
-
-  }
-
 }
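
The NotificationResetManager inner class is removed here because the reset manager is now injected through the constructor rather than held as a static inner-class instance, but the hook pattern is unchanged: before each pass the thread calls waitForReset(), and if a reset has been signalled the subclass's reset logic runs before work resumes. A simplified single-flag sketch of that pattern (an approximation of the idea only, not the ManifoldCF ResetManager contract):

    // Simplified reset-manager sketch: noteEvent() marks that a reset is needed (for example
    // after a database connection loss); the next thread entering waitForReset() runs the
    // subclass hooks before continuing.
    abstract class ResetManagerSketch {
      private boolean resetRequested = false;

      // Called when an external event invalidates in-progress state.
      public synchronized void noteEvent() {
        resetRequested = true;
      }

      // Called by a worker thread at the top of its loop.
      public synchronized void waitForReset() throws Exception {
        if (resetRequested) {
          performResetLogic();   // put persistent state back into a known condition
          performWakeupLogic();  // clear or wake any in-memory queues
          resetRequested = false;
        }
      }

      // Subclass hooks, mirroring performResetLogic()/performWakeupLogic() above.
      protected abstract void performResetLogic() throws Exception;
      protected abstract void performWakeupLogic();
    }

    class NotificationResetSketch extends ResetManagerSketch {
      @Override protected void performResetLogic() {
        // The real thread resets notification worker status through the job manager here.
        System.err.println("Resetting notification worker state");
      }
      @Override protected void performWakeupLogic() {
        // Nothing to wake for the notification thread.
      }
    }
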
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/JobResetThread.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/JobResetThread.java
index 1e21eba..58dd9c2 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/JobResetThread.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/JobResetThread.java
@@ -33,17 +33,18 @@
   public static final String _rcsid = "@(#)$Id: JobResetThread.java 991295 2010-08-31 19:12:14Z kwright $";
 
   // Local data
-  protected QueueTracker queueTracker;
+  /** Process ID */
+  protected final String processID;
 
   /** Constructor.
   */
-  public JobResetThread(QueueTracker queueTracker)
+  public JobResetThread(String processID)
     throws ManifoldCFException
   {
     super();
     setName("Job reset thread");
     setDaemon(true);
-    this.queueTracker = queueTracker;
+    this.processID = processID;
   }
 
   public void run()
@@ -105,7 +106,7 @@
           {
             Logging.threads.debug("Job reset thread reprioritizing documents...");
 
-            ManifoldCF.resetAllDocumentPriorities(threadContext,queueTracker,currentTime);
+            ManifoldCF.resetAllDocumentPriorities(threadContext,currentTime,processID);
             
             Logging.threads.debug("Job reset thread done reprioritizing documents.");
 
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/JobStartThread.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/JobStartThread.java
index d378114..07b0b08 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/JobStartThread.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/JobStartThread.java
@@ -30,12 +30,16 @@
 {
   public static final String _rcsid = "@(#)$Id: JobStartThread.java 988245 2010-08-23 18:39:35Z kwright $";
 
+  /** Process ID */
+  protected final String processID;
+  
   /** Constructor.
   */
-  public JobStartThread()
+  public JobStartThread(String processID)
     throws ManifoldCFException
   {
     super();
+    this.processID = processID;
     setName("Job start thread");
     setDaemon(true);
   }
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/ManifoldCF.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/ManifoldCF.java
index df81108..288a287 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/ManifoldCF.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/ManifoldCF.java
@@ -51,49 +51,6 @@
 
   // Initialization flag.
   protected static boolean crawlerInitialized = false;
-  
-  // Thread objects.
-  // These get filled in as threads are created.
-  protected static InitializationThread initializationThread = null;
-  protected static JobStartThread jobStartThread = null;
-  protected static StufferThread stufferThread = null;
-  protected static FinisherThread finisherThread = null;
-  protected static JobNotificationThread notificationThread = null;
-  protected static StartupThread startupThread = null;
-  protected static StartDeleteThread startDeleteThread = null;
-  protected static JobDeleteThread jobDeleteThread = null;
-  protected static WorkerThread[] workerThreads = null;
-  protected static ExpireStufferThread expireStufferThread = null;
-  protected static ExpireThread[] expireThreads = null;
-  protected static DocumentDeleteStufferThread deleteStufferThread = null;
-  protected static DocumentDeleteThread[] deleteThreads = null;
-  protected static DocumentCleanupStufferThread cleanupStufferThread = null;
-  protected static DocumentCleanupThread[] cleanupThreads = null;
-  protected static JobResetThread jobResetThread = null;
-  protected static SeedingThread seedingThread = null;
-  protected static IdleCleanupThread idleCleanupThread = null;
-  protected static SetPriorityThread setPriorityThread = null;
-
-  // Reset managers
-  /** Worker thread pool reset manager */
-  protected static WorkerResetManager workerResetManager = null;
-  /** Delete thread pool reset manager */
-  protected static DocDeleteResetManager docDeleteResetManager = null;
-  /** Cleanup thread pool reset manager */
-  protected static DocCleanupResetManager docCleanupResetManager = null;
-
-  // Number of worker threads
-  protected static int numWorkerThreads = 0;
-  // Number of delete threads
-  protected static int numDeleteThreads = 0;
-  // Number of cleanup threads
-  protected static int numCleanupThreads = 0;
-  // Number of expiration threads
-  protected static int numExpireThreads = 0;
-  // Factor for low water level in queueing
-  protected static float lowWaterFactor = 5.0f;
-  // Factor in amount to stuff
-  protected static float stuffAmtFactor = 0.5f;
 
   // Properties
   protected static final String workerThreadCountProperty = "org.apache.manifoldcf.crawler.threads";
@@ -105,35 +62,33 @@
   protected static final String connectorsConfigurationFileProperty = "org.apache.manifoldcf.connectorsconfigurationfile";
   protected static final String databaseSuperuserNameProperty = "org.apache.manifoldcf.dbsuperusername";
   protected static final String databaseSuperuserPasswordProperty = "org.apache.manifoldcf.dbsuperuserpassword";
-  protected static final String salt = "org.apache.manifoldcf.salt";
+  protected static final String saltProperty = "org.apache.manifoldcf.salt";
 
-  /** This object is used to make sure the initialization sequence is atomic.  Shutdown cannot occur until the system is in a known state. */
-  protected static Integer startupLock = new Integer(0);
   
   /** Initialize environment.
   */
-  public static void initializeEnvironment()
+  public static void initializeEnvironment(IThreadContext tc)
     throws ManifoldCFException
   {
     synchronized (initializeFlagLock)
     {
-      org.apache.manifoldcf.agents.system.ManifoldCF.initializeEnvironment();
-      org.apache.manifoldcf.authorities.system.ManifoldCF.localInitialize();
-      org.apache.manifoldcf.crawler.system.ManifoldCF.localInitialize();
+      org.apache.manifoldcf.agents.system.ManifoldCF.initializeEnvironment(tc);
+      org.apache.manifoldcf.authorities.system.ManifoldCF.localInitialize(tc);
+      org.apache.manifoldcf.crawler.system.ManifoldCF.localInitialize(tc);
     }
   }
 
-  public static void cleanUpEnvironment()
+  public static void cleanUpEnvironment(IThreadContext tc)
   {
     synchronized (initializeFlagLock)
     {
-      org.apache.manifoldcf.authorities.system.ManifoldCF.localCleanup();
-      org.apache.manifoldcf.crawler.system.ManifoldCF.localCleanup();
-      org.apache.manifoldcf.agents.system.ManifoldCF.cleanUpEnvironment();
+      org.apache.manifoldcf.authorities.system.ManifoldCF.localCleanup(tc);
+      org.apache.manifoldcf.crawler.system.ManifoldCF.localCleanup(tc);
+      org.apache.manifoldcf.agents.system.ManifoldCF.cleanUpEnvironment(tc);
     }
   }
   
-  public static void localInitialize()
+  public static void localInitialize(IThreadContext tc)
     throws ManifoldCFException
   {
     synchronized (initializeFlagLock)
@@ -143,13 +98,22 @@
         return;
       
       Logging.initializeLoggers();
-      Logging.setLogLevels();
+      Logging.setLogLevels(tc);
       crawlerInitialized = true;
     }
   }
   
-  public static void localCleanup()
+  public static void localCleanup(IThreadContext tc)
   {
+    try
+    {
+      RepositoryConnectorPoolFactory.make(tc).closeAllConnectors();
+    }
+    catch (ManifoldCFException e)
+    {
+      if (Logging.root != null)
+        Logging.root.warn("Exception tossed on repository connector pool cleanup: "+e.getMessage(),e);
+    }
   }
   
   /** Create system database using superuser properties from properties.xml.
@@ -158,12 +122,8 @@
     throws ManifoldCFException
   {
     // Get the specified superuser name and password, in case this isn't Derby we're using
-    String superuserName = getProperty(databaseSuperuserNameProperty);
-    if (superuserName == null)
-      superuserName = "";
-    String superuserPassword = getProperty(databaseSuperuserPasswordProperty);
-    if (superuserPassword == null)
-      superuserPassword = "";
+    String superuserName = LockManagerFactory.getStringProperty(threadContext, databaseSuperuserNameProperty, "");
+    String superuserPassword = LockManagerFactory.getStringProperty(threadContext, databaseSuperuserPasswordProperty, "");
     createSystemDatabase(threadContext,superuserName,superuserPassword);
   }
   
@@ -234,11 +194,14 @@
   }
 
   // Connectors configuration file
+  protected static final String NODE_AUTHORIZATIONDOMAIN = "authorizationdomain";
   protected static final String NODE_OUTPUTCONNECTOR = "outputconnector";
+  protected static final String NODE_MAPPINGCONNECTOR = "mappingconnector";
   protected static final String NODE_AUTHORITYCONNECTOR = "authorityconnector";
   protected static final String NODE_REPOSITORYCONNECTOR = "repositoryconnector";
   protected static final String ATTRIBUTE_NAME = "name";
   protected static final String ATTRIBUTE_CLASS = "class";
+  protected static final String ATTRIBUTE_DOMAIN = "domain";
   
   /** Unregister all connectors which don't match a specified connector list.
   */
@@ -248,20 +211,35 @@
     // Create a map of class name and description, so we can compare what we can find
     // against what we want.
     Map<String,String> desiredOutputConnectors = new HashMap<String,String>();
+    Map<String,String> desiredMappingConnectors = new HashMap<String,String>();
     Map<String,String> desiredAuthorityConnectors = new HashMap<String,String>();
     Map<String,String> desiredRepositoryConnectors = new HashMap<String,String>();
 
+    Map<String,String> desiredDomains = new HashMap<String,String>();
+
     if (c != null)
     {
       for (int i = 0; i < c.getChildCount(); i++)
       {
         ConfigurationNode cn = c.findChild(i);
-        if (cn.getType().equals(NODE_OUTPUTCONNECTOR))
+        if (cn.getType().equals(NODE_AUTHORIZATIONDOMAIN))
+        {
+          String domainName = cn.getAttributeValue(ATTRIBUTE_DOMAIN);
+          String name = cn.getAttributeValue(ATTRIBUTE_NAME);
+          desiredDomains.put(domainName,name);
+        }
+        else if (cn.getType().equals(NODE_OUTPUTCONNECTOR))
         {
           String name = cn.getAttributeValue(ATTRIBUTE_NAME);
           String className = cn.getAttributeValue(ATTRIBUTE_CLASS);
           desiredOutputConnectors.put(className,name);
         }
+        else if (cn.getType().equals(NODE_MAPPINGCONNECTOR))
+        {
+          String name = cn.getAttributeValue(ATTRIBUTE_NAME);
+          String className = cn.getAttributeValue(ATTRIBUTE_CLASS);
+          desiredMappingConnectors.put(className,name);
+        }
         else if (cn.getType().equals(NODE_AUTHORITYCONNECTOR))
         {
           String name = cn.getAttributeValue(ATTRIBUTE_NAME);
@@ -283,6 +261,23 @@
       ManifoldCF.getMasterDatabaseUsername(),
       ManifoldCF.getMasterDatabasePassword());
 
+    // Domains...
+    {
+      IAuthorizationDomainManager mgr = AuthorizationDomainManagerFactory.make(tc);
+      IResultSet domains = mgr.getDomains();
+      for (int i = 0; i < domains.getRowCount(); i++)
+      {
+        IResultRow row = domains.getRow(i);
+        String domainName = (String)row.getValue("domainname");
+        String description = (String)row.getValue("description");
+        if (desiredDomains.get(domainName) == null || !desiredDomains.get(domainName).equals(description))
+        {
+          mgr.unregisterDomain(domainName);
+        }
+      }
+      System.err.println("Successfully unregistered all domains");
+    }
+    
     // Output connectors...
     {
       IOutputConnectorManager mgr = OutputConnectorManagerFactory.make(tc);
@@ -325,7 +320,25 @@
       }
       System.err.println("Successfully unregistered all output connectors");
     }
-      
+
+    // Mapping connectors...
+    {
+      IMappingConnectorManager mgr = MappingConnectorManagerFactory.make(tc);
+      IResultSet classNames = mgr.getConnectors();
+      int i = 0;
+      while (i < classNames.getRowCount())
+      {
+        IResultRow row = classNames.getRow(i++);
+        String className = (String)row.getValue("classname");
+        String description = (String)row.getValue("description");
+        if (desiredMappingConnectors.get(className) == null || !desiredMappingConnectors.get(className).equals(description))
+        {
+          mgr.unregisterConnector(className);
+        }
+      }
+      System.err.println("Successfully unregistered all mapping connectors");
+    }
+
     // Authority connectors...
     {
       IAuthorityConnectorManager mgr = AuthorityConnectorManagerFactory.make(tc);
@@ -408,7 +421,14 @@
       while (i < c.getChildCount())
       {
         ConfigurationNode cn = c.findChild(i++);
-        if (cn.getType().equals(NODE_OUTPUTCONNECTOR))
+        if (cn.getType().equals(NODE_AUTHORIZATIONDOMAIN))
+        {
+          String domainName = cn.getAttributeValue(ATTRIBUTE_DOMAIN);
+          String name = cn.getAttributeValue(ATTRIBUTE_NAME);
+          IAuthorizationDomainManager mgr = AuthorizationDomainManagerFactory.make(tc);
+          mgr.registerDomain(name,domainName);
+        }
+        else if (cn.getType().equals(NODE_OUTPUTCONNECTOR))
         {
           String name = cn.getAttributeValue(ATTRIBUTE_NAME);
           String className = cn.getAttributeValue(ATTRIBUTE_CLASS);
@@ -450,6 +470,14 @@
           mgr.registerConnector(name,className);
           System.err.println("Successfully registered authority connector '"+className+"'");
         }
+        else if (cn.getType().equals(NODE_MAPPINGCONNECTOR))
+        {
+          String name = cn.getAttributeValue(ATTRIBUTE_NAME);
+          String className = cn.getAttributeValue(ATTRIBUTE_CLASS);
+          IMappingConnectorManager mgr = MappingConnectorManagerFactory.make(tc);
+          mgr.registerConnector(name,className);
+          System.err.println("Successfully registered mapping connector '"+className+"'");
+        }
         else if (cn.getType().equals(NODE_REPOSITORYCONNECTOR))
         {
           String name = cn.getAttributeValue(ATTRIBUTE_NAME);
@@ -500,10 +528,12 @@
     IConnectorManager repConnMgr = ConnectorManagerFactory.make(threadcontext);
     IRepositoryConnectionManager repCon = RepositoryConnectionManagerFactory.make(threadcontext);
     IJobManager jobManager = JobManagerFactory.make(threadcontext);
+    IBinManager binManager = BinManagerFactory.make(threadcontext);
     org.apache.manifoldcf.authorities.system.ManifoldCF.installSystemTables(threadcontext);
     repConnMgr.install();
     repCon.install();
     jobManager.install();
+    binManager.install();
   }
 
   /** Uninstall all the crawler system tables.
@@ -512,554 +542,17 @@
   public static void deinstallSystemTables(IThreadContext threadcontext)
     throws ManifoldCFException
   {
-    ManifoldCFException se = null;
-
     IConnectorManager repConnMgr = ConnectorManagerFactory.make(threadcontext);
     IRepositoryConnectionManager repCon = RepositoryConnectionManagerFactory.make(threadcontext);
     IJobManager jobManager = JobManagerFactory.make(threadcontext);
+    IBinManager binManager = BinManagerFactory.make(threadcontext);
+    binManager.deinstall();
     jobManager.deinstall();
     repCon.deinstall();
     repConnMgr.deinstall();
     org.apache.manifoldcf.authorities.system.ManifoldCF.deinstallSystemTables(threadcontext);
-    if (se != null)
-      throw se;
   }
 
-
-  /** Start everything.
-  */
-  public static void startSystem(IThreadContext threadContext)
-    throws ManifoldCFException
-  {
-    Logging.root.info("Starting up pull-agent...");
-    synchronized (startupLock)
-    {
-      // Now, start all the threads
-      String maxThreads = getProperty(workerThreadCountProperty);
-      if (maxThreads == null)
-        maxThreads = "100";
-      numWorkerThreads = new Integer(maxThreads).intValue();
-      if (numWorkerThreads < 1 || numWorkerThreads > 300)
-        throw new ManifoldCFException("Illegal value for the number of worker threads");
-      String maxDeleteThreads = getProperty(deleteThreadCountProperty);
-      if (maxDeleteThreads == null)
-        maxDeleteThreads = "10";
-      String maxCleanupThreads = getProperty(cleanupThreadCountProperty);
-      if (maxCleanupThreads == null)
-        maxCleanupThreads = "10";
-      String maxExpireThreads = getProperty(expireThreadCountProperty);
-      if (maxExpireThreads == null)
-        maxExpireThreads = "10";
-      numDeleteThreads = new Integer(maxDeleteThreads).intValue();
-      if (numDeleteThreads < 1 || numDeleteThreads > 300)
-        throw new ManifoldCFException("Illegal value for the number of delete threads");
-      numCleanupThreads = new Integer(maxCleanupThreads).intValue();
-      if (numCleanupThreads < 1 || numCleanupThreads > 300)
-        throw new ManifoldCFException("Illegal value for the number of cleanup threads");
-      numExpireThreads = new Integer(maxExpireThreads).intValue();
-      if (numExpireThreads < 1 || numExpireThreads > 300)
-        throw new ManifoldCFException("Illegal value for the number of expire threads");
-      String lowWaterFactorString = getProperty(lowWaterFactorProperty);
-      if (lowWaterFactorString == null)
-        lowWaterFactorString = "5";
-      lowWaterFactor = new Float(lowWaterFactorString).floatValue();
-      if (lowWaterFactor < 1.0 || lowWaterFactor > 1000.0)
-        throw new ManifoldCFException("Illegal value for the low water factor");
-      String stuffAmtFactorString = getProperty(stuffAmtFactorProperty);
-      if (stuffAmtFactorString == null)
-        stuffAmtFactorString = "2";
-      stuffAmtFactor = new Float(stuffAmtFactorString).floatValue();
-      if (stuffAmtFactor < 0.1 || stuffAmtFactor > 1000.0)
-        throw new ManifoldCFException("Illegal value for the stuffing amount factor");
-
-
-      // Create the threads and objects.  This MUST be completed before there is any chance of "shutdownSystem" getting called.
-
-      QueueTracker queueTracker = new QueueTracker();
-
-
-      DocumentQueue documentQueue = new DocumentQueue();
-      DocumentDeleteQueue documentDeleteQueue = new DocumentDeleteQueue();
-      DocumentCleanupQueue documentCleanupQueue = new DocumentCleanupQueue();
-      DocumentCleanupQueue expireQueue = new DocumentCleanupQueue();
-
-      BlockingDocuments blockingDocuments = new BlockingDocuments();
-
-      workerResetManager = new WorkerResetManager(documentQueue,expireQueue);
-      docDeleteResetManager = new DocDeleteResetManager(documentDeleteQueue);
-      docCleanupResetManager = new DocCleanupResetManager(documentCleanupQueue);
-
-      jobStartThread = new JobStartThread();
-      startupThread = new StartupThread(queueTracker);
-      startDeleteThread = new StartDeleteThread();
-      finisherThread = new FinisherThread();
-      notificationThread = new JobNotificationThread();
-      jobDeleteThread = new JobDeleteThread();
-      stufferThread = new StufferThread(documentQueue,numWorkerThreads,workerResetManager,queueTracker,blockingDocuments,lowWaterFactor,stuffAmtFactor);
-      expireStufferThread = new ExpireStufferThread(expireQueue,numExpireThreads,workerResetManager);
-      setPriorityThread = new SetPriorityThread(queueTracker,numWorkerThreads,blockingDocuments);
-
-      workerThreads = new WorkerThread[numWorkerThreads];
-      int i = 0;
-      while (i < numWorkerThreads)
-      {
-        workerThreads[i] = new WorkerThread(Integer.toString(i),documentQueue,workerResetManager,queueTracker);
-        i++;
-      }
-
-      expireThreads = new ExpireThread[numExpireThreads];
-      i = 0;
-      while (i < numExpireThreads)
-      {
-        expireThreads[i] = new ExpireThread(Integer.toString(i),expireQueue,queueTracker,workerResetManager);
-        i++;
-      }
-
-      deleteStufferThread = new DocumentDeleteStufferThread(documentDeleteQueue,numDeleteThreads,docDeleteResetManager);
-      deleteThreads = new DocumentDeleteThread[numDeleteThreads];
-      i = 0;
-      while (i < numDeleteThreads)
-      {
-        deleteThreads[i] = new DocumentDeleteThread(Integer.toString(i),documentDeleteQueue,docDeleteResetManager);
-        i++;
-      }
-      
-      cleanupStufferThread = new DocumentCleanupStufferThread(documentCleanupQueue,numCleanupThreads,docCleanupResetManager);
-      cleanupThreads = new DocumentCleanupThread[numCleanupThreads];
-      i = 0;
-      while (i < numCleanupThreads)
-      {
-        cleanupThreads[i] = new DocumentCleanupThread(Integer.toString(i),documentCleanupQueue,queueTracker,docCleanupResetManager);
-        i++;
-      }
-
-      jobResetThread = new JobResetThread(queueTracker);
-      seedingThread = new SeedingThread(queueTracker);
-      idleCleanupThread = new IdleCleanupThread();
-
-      initializationThread = new InitializationThread(queueTracker);
-      // Start the initialization thread.  This does the initialization work and starts all the other threads when that's done.  It then exits.
-      initializationThread.start();
-    }
-    Logging.root.info("Pull-agent started");
-  }
-
-  protected static class InitializationThread extends Thread
-  {
-
-    protected final QueueTracker queueTracker;
-
-    public InitializationThread(QueueTracker queueTracker)
-    {
-      super();
-      this.queueTracker = queueTracker;
-      setName("Initialization thread");
-      setDaemon(true);
-    }
-
-    public void run()
-    {
-      int i;
-
-      // Initialize the database
-      try
-      {
-        IThreadContext threadContext = ThreadContextFactory.make();
-
-        // First, get a job manager
-        IJobManager jobManager = JobManagerFactory.make(threadContext);
-        IRepositoryConnectionManager mgr = RepositoryConnectionManagerFactory.make(threadContext);
-
-        Logging.threads.debug("Agents process starting initialization...");
-
-        // Call the database to get it ready
-        jobManager.prepareForStart();
-
-        Logging.threads.debug("Agents process reprioritizing documents...");
-
-        HashMap connectionMap = new HashMap();
-        HashMap jobDescriptionMap = new HashMap();
-        // Reprioritize all documents in the jobqueue, 1000 at a time
-        long currentTime = System.currentTimeMillis();
-
-        // Do the 'not yet processed' documents only.  Documents that are queued for reprocessing will be assigned
-        // new priorities.  Already processed documents won't.  This guarantees that our bins are appropriate for current thread
-        // activity.
-        // In order for this to be the correct functionality, ALL reseeding and requeuing operations MUST reset the associated document
-        // priorities.
-        while (true)
-        {
-          long startTime = System.currentTimeMillis();
-
-          DocumentDescription[] docs = jobManager.getNextNotYetProcessedReprioritizationDocuments(currentTime, 10000);
-          if (docs.length == 0)
-            break;
-
-          // Calculate new priorities for all these documents
-          writeDocumentPriorities(threadContext,mgr,jobManager,docs,connectionMap,jobDescriptionMap,queueTracker,currentTime);
-
-          Logging.threads.debug("Reprioritized "+Integer.toString(docs.length)+" not-yet-processed documents in "+new Long(System.currentTimeMillis()-startTime)+" ms");
-        }
-
-        Logging.threads.debug("Agents process initialization complete!");
-
-        // Start all the threads
-        jobStartThread.start();
-        startupThread.start();
-        startDeleteThread.start();
-        finisherThread.start();
-        notificationThread.start();
-        jobDeleteThread.start();
-        stufferThread.start();
-        expireStufferThread.start();
-        setPriorityThread.start();
-
-        i = 0;
-        while (i < numWorkerThreads)
-        {
-          workerThreads[i].start();
-          i++;
-        }
-
-        i = 0;
-        while (i < numExpireThreads)
-        {
-          expireThreads[i].start();
-          i++;
-        }
-
-        cleanupStufferThread.start();
-        i = 0;
-        while (i < numCleanupThreads)
-        {
-          cleanupThreads[i].start();
-          i++;
-        }
-
-        deleteStufferThread.start();
-        i = 0;
-        while (i < numDeleteThreads)
-        {
-          deleteThreads[i].start();
-          i++;
-        }
-
-        jobResetThread.start();
-        seedingThread.start();
-        idleCleanupThread.start();
-        // exit!
-      }
-      catch (Throwable e)
-      {
-        // Severe error on initialization
-        if (e instanceof ManifoldCFException)
-        {
-          // Deal with interrupted exception gracefully, because it means somebody is trying to shut us down before we got started.
-          if (((ManifoldCFException)e).getErrorCode() == ManifoldCFException.INTERRUPTED)
-            return;
-        }
-        System.err.println("agents process could not start - shutting down");
-        Logging.threads.fatal("Startup initialization error tossed: "+e.getMessage(),e);
-        System.exit(-300);
-      }
-    }
-  }
-
-  /** Stop the system.
-  */
-  public static void stopSystem(IThreadContext threadContext)
-    throws ManifoldCFException
-  {
-    Logging.root.info("Shutting down pull-agent...");
-    synchronized (startupLock)
-    {
-      while (initializationThread != null || jobDeleteThread != null || startupThread != null || startDeleteThread != null ||
-        jobStartThread != null || stufferThread != null ||
-        finisherThread != null || notificationThread != null || workerThreads != null || expireStufferThread != null || expireThreads != null ||
-        deleteStufferThread != null || deleteThreads != null ||
-        cleanupStufferThread != null || cleanupThreads != null ||
-        jobResetThread != null || seedingThread != null || idleCleanupThread != null || setPriorityThread != null)
-      {
-        // Send an interrupt to all threads that are still there.
-        // In theory, this only needs to be done once.  In practice, I have seen cases where the thread loses track of the fact that it has been
-        // interrupted (which may be a JVM bug - who knows?), but in any case there's no harm in doing it again.
-        if (initializationThread != null)
-        {
-          initializationThread.interrupt();
-        }
-        if (setPriorityThread != null)
-        {
-          setPriorityThread.interrupt();
-        }
-        if (jobStartThread != null)
-        {
-          jobStartThread.interrupt();
-        }
-        if (jobDeleteThread != null)
-        {
-          jobDeleteThread.interrupt();
-        }
-        if (startupThread != null)
-        {
-          startupThread.interrupt();
-        }
-        if (startDeleteThread != null)
-        {
-          startDeleteThread.interrupt();
-        }
-        if (stufferThread != null)
-        {
-          stufferThread.interrupt();
-        }
-        if (expireStufferThread != null)
-        {
-          expireStufferThread.interrupt();
-        }
-        if (finisherThread != null)
-        {
-          finisherThread.interrupt();
-        }
-        if (notificationThread != null)
-        {
-          notificationThread.interrupt();
-        }
-        if (workerThreads != null)
-        {
-          int i = 0;
-          while (i < workerThreads.length)
-          {
-            Thread workerThread = workerThreads[i++];
-            if (workerThread != null)
-              workerThread.interrupt();
-          }
-        }
-        if (expireThreads != null)
-        {
-          int i = 0;
-          while (i < expireThreads.length)
-          {
-            Thread expireThread = expireThreads[i++];
-            if (expireThread != null)
-              expireThread.interrupt();
-          }
-        }
-        if (cleanupStufferThread != null)
-        {
-          cleanupStufferThread.interrupt();
-        }
-        if (cleanupThreads != null)
-        {
-          int i = 0;
-          while (i < cleanupThreads.length)
-          {
-            Thread cleanupThread = cleanupThreads[i++];
-            if (cleanupThread != null)
-              cleanupThread.interrupt();
-          }
-        }
-        if (deleteStufferThread != null)
-        {
-          deleteStufferThread.interrupt();
-        }
-        if (deleteThreads != null)
-        {
-          int i = 0;
-          while (i < deleteThreads.length)
-          {
-            Thread deleteThread = deleteThreads[i++];
-            if (deleteThread != null)
-              deleteThread.interrupt();
-          }
-        }
-        if (jobResetThread != null)
-        {
-          jobResetThread.interrupt();
-        }
-        if (seedingThread != null)
-        {
-          seedingThread.interrupt();
-        }
-        if (idleCleanupThread != null)
-        {
-          idleCleanupThread.interrupt();
-        }
-
-        // Now, wait for all threads to die.
-        try
-        {
-          ManifoldCF.sleep(1000L);
-        }
-        catch (InterruptedException e)
-        {
-        }
-
-        // Check to see which died.
-        if (initializationThread != null)
-        {
-          if (!initializationThread.isAlive())
-            initializationThread = null;
-        }
-        if (setPriorityThread != null)
-        {
-          if (!setPriorityThread.isAlive())
-            setPriorityThread = null;
-        }
-        if (jobDeleteThread != null)
-        {
-          if (!jobDeleteThread.isAlive())
-            jobDeleteThread = null;
-        }
-        if (startupThread != null)
-        {
-          if (!startupThread.isAlive())
-            startupThread = null;
-        }
-        if (startDeleteThread != null)
-        {
-          if (!startDeleteThread.isAlive())
-            startDeleteThread = null;
-        }
-        if (jobStartThread != null)
-        {
-          if (!jobStartThread.isAlive())
-            jobStartThread = null;
-        }
-        if (stufferThread != null)
-        {
-          if (!stufferThread.isAlive())
-            stufferThread = null;
-        }
-        if (expireStufferThread != null)
-        {
-          if (!expireStufferThread.isAlive())
-            expireStufferThread = null;
-        }
-        if (finisherThread != null)
-        {
-          if (!finisherThread.isAlive())
-            finisherThread = null;
-        }
-        if (notificationThread != null)
-        {
-          if (!notificationThread.isAlive())
-            notificationThread = null;
-        }
-        if (workerThreads != null)
-        {
-          int i = 0;
-          boolean isAlive = false;
-          while (i < workerThreads.length)
-          {
-            Thread workerThread = workerThreads[i];
-            if (workerThread != null)
-            {
-              if (!workerThread.isAlive())
-                workerThreads[i] = null;
-              else
-                isAlive = true;
-            }
-            i++;
-          }
-          if (!isAlive)
-            workerThreads = null;
-        }
-
-        if (expireThreads != null)
-        {
-          int i = 0;
-          boolean isAlive = false;
-          while (i < expireThreads.length)
-          {
-            Thread expireThread = expireThreads[i];
-            if (expireThread != null)
-            {
-              if (!expireThread.isAlive())
-                expireThreads[i] = null;
-              else
-                isAlive = true;
-            }
-            i++;
-          }
-          if (!isAlive)
-            expireThreads = null;
-        }
-
-        if (cleanupStufferThread != null)
-        {
-          if (!cleanupStufferThread.isAlive())
-            cleanupStufferThread = null;
-        }
-        if (cleanupThreads != null)
-        {
-          int i = 0;
-          boolean isAlive = false;
-          while (i < cleanupThreads.length)
-          {
-            Thread cleanupThread = cleanupThreads[i];
-            if (cleanupThread != null)
-            {
-              if (!cleanupThread.isAlive())
-                cleanupThreads[i] = null;
-              else
-                isAlive = true;
-            }
-            i++;
-          }
-          if (!isAlive)
-            cleanupThreads = null;
-        }
-
-        if (deleteStufferThread != null)
-        {
-          if (!deleteStufferThread.isAlive())
-            deleteStufferThread = null;
-        }
-        if (deleteThreads != null)
-        {
-          int i = 0;
-          boolean isAlive = false;
-          while (i < deleteThreads.length)
-          {
-            Thread deleteThread = deleteThreads[i];
-            if (deleteThread != null)
-            {
-              if (!deleteThread.isAlive())
-                deleteThreads[i] = null;
-              else
-                isAlive = true;
-            }
-            i++;
-          }
-          if (!isAlive)
-            deleteThreads = null;
-        }
-        if (jobResetThread != null)
-        {
-          if (!jobResetThread.isAlive())
-            jobResetThread = null;
-        }
-        if (seedingThread != null)
-        {
-          if (!seedingThread.isAlive())
-            seedingThread = null;
-        }
-        if (idleCleanupThread != null)
-        {
-          if (!idleCleanupThread.isAlive())
-            idleCleanupThread = null;
-        }
-      }
-
-      // Threads are down; release connectors
-      RepositoryConnectorFactory.closeAllConnectors(threadContext);
-      numWorkerThreads = 0;
-      numDeleteThreads = 0;
-      numExpireThreads = 0;
-    }
-    Logging.root.info("Pull-agent successfully shut down");
-  }
-  
-
   /** Atomically export the crawler configuration */
   public static void exportConfiguration(IThreadContext threadContext, String exportFilename, String passCode)
     throws ManifoldCFException
@@ -1072,7 +565,9 @@
       ManifoldCF.getMasterDatabasePassword());
     // Also create the following managers, which will handle the actual details of writing configuration data
     IOutputConnectionManager outputManager = OutputConnectionManagerFactory.make(threadContext);
+    IAuthorityGroupManager groupManager = AuthorityGroupManagerFactory.make(threadContext);
     IRepositoryConnectionManager connManager = RepositoryConnectionManagerFactory.make(threadContext);
+    IMappingConnectionManager mappingManager = MappingConnectionManagerFactory.make(threadContext);
     IAuthorityConnectionManager authManager = AuthorityConnectionManagerFactory.make(threadContext);
     IJobManager jobManager = JobManagerFactory.make(threadContext);
 
@@ -1104,7 +599,7 @@
           Cipher cipher = null; 
           try
           {
-            cipher = getCipher(Cipher.ENCRYPT_MODE, passCode, iv);
+            cipher = getCipher(threadContext, Cipher.ENCRYPT_MODE, passCode, iv);
           }
           catch (GeneralSecurityException gse)
           {
@@ -1134,6 +629,16 @@
             outputManager.exportConfiguration(zos);
             zos.closeEntry();
 
+            java.util.zip.ZipEntry groupEntry = new java.util.zip.ZipEntry("groups");
+            zos.putNextEntry(groupEntry);
+            groupManager.exportConfiguration(zos);
+            zos.closeEntry();
+
+            java.util.zip.ZipEntry mappingEntry = new java.util.zip.ZipEntry("mappings");
+            zos.putNextEntry(mappingEntry);
+            mappingManager.exportConfiguration(zos);
+            zos.closeEntry();
+
             java.util.zip.ZipEntry authEntry = new java.util.zip.ZipEntry("authorities");
             zos.putNextEntry(authEntry);
             authManager.exportConfiguration(zos);
@@ -1200,7 +705,9 @@
       ManifoldCF.getMasterDatabasePassword());
     // Also create the following managers, which will handle the actual details of reading configuration data
     IOutputConnectionManager outputManager = OutputConnectionManagerFactory.make(threadContext);
+    IAuthorityGroupManager groupManager = AuthorityGroupManagerFactory.make(threadContext);
     IRepositoryConnectionManager connManager = RepositoryConnectionManagerFactory.make(threadContext);
+    IMappingConnectionManager mappingManager = MappingConnectionManagerFactory.make(threadContext);
     IAuthorityConnectionManager authManager = AuthorityConnectionManagerFactory.make(threadContext);
     IJobManager jobManager = JobManagerFactory.make(threadContext);
 
@@ -1225,7 +732,7 @@
           Cipher cipher = null; 
           try
           {
-            cipher = getCipher(Cipher.DECRYPT_MODE, passCode, iv);
+            cipher = getCipher(threadContext, Cipher.DECRYPT_MODE, passCode, iv);
           }
           catch (GeneralSecurityException gse)
           {
@@ -1256,6 +763,10 @@
               String name = z.getName();
               if (name.equals("outputs"))
                 outputManager.importConfiguration(zis);
+              else if (name.equals("groups"))
+                groupManager.importConfiguration(zis);
+              else if (name.equals("mappings"))
+                mappingManager.importConfiguration(zis);
               else if (name.equals("authorities"))
                 authManager.importConfiguration(zis);
               else if (name.equals("connections"))
@@ -1309,34 +820,46 @@
 
   /** Get the maximum number of worker threads.
   */
-  public static int getMaxWorkerThreads()
+  public static int getMaxWorkerThreads(IThreadContext threadContext)
+    throws ManifoldCFException
   {
-    return numWorkerThreads;
+    return LockManagerFactory.getIntProperty(threadContext,workerThreadCountProperty,100);
   }
 
   /** Get the maximum number of delete threads.
   */
-  public static int getMaxDeleteThreads()
+  public static int getMaxDeleteThreads(IThreadContext threadContext)
+    throws ManifoldCFException
   {
-    return numDeleteThreads;
+    return LockManagerFactory.getIntProperty(threadContext,deleteThreadCountProperty,10);
   }
 
   /** Get the maximum number of expire threads.
   */
-  public static int getMaxExpireThreads()
+  public static int getMaxExpireThreads(IThreadContext threadContext)
+    throws ManifoldCFException
   {
-    return numExpireThreads;
+    return LockManagerFactory.getIntProperty(threadContext,expireThreadCountProperty,10);
   }
 
+  /** Get the maximum number of cleanup threads.
+  */
+  public static int getMaxCleanupThreads(IThreadContext threadContext)
+    throws ManifoldCFException
+  {
+    return LockManagerFactory.getIntProperty(threadContext,cleanupThreadCountProperty,10);
+  }
+  
   /** Requeue documents due to carrydown.
   */
-  public static void requeueDocumentsDueToCarrydown(IJobManager jobManager, DocumentDescription[] requeueCandidates,
-    IRepositoryConnector connector, IRepositoryConnection connection, QueueTracker queueTracker, long currentTime)
+  public static void requeueDocumentsDueToCarrydown(IJobManager jobManager,
+    DocumentDescription[] requeueCandidates,
+    IRepositoryConnector connector, IRepositoryConnection connection, IReprioritizationTracker rt, long currentTime)
     throws ManifoldCFException
   {
     // A list of document descriptions from finishDocuments() above represents those documents that may need to be requeued, for the
     // reason that carrydown information for those documents has changed.  In order to requeue, we need to calculate document priorities, however.
-    double[] docPriorities = new double[requeueCandidates.length];
+    IPriorityCalculator[] docPriorities = new IPriorityCalculator[requeueCandidates.length];
     String[][] binNames = new String[requeueCandidates.length][];
     int q = 0;
     while (q < requeueCandidates.length)
@@ -1344,27 +867,12 @@
       DocumentDescription dd = requeueCandidates[q];
       String[] bins = calculateBins(connector,dd.getDocumentIdentifier());
       binNames[q] = bins;
-      docPriorities[q] = queueTracker.calculatePriority(bins,connection);
-      if (Logging.scheduling.isDebugEnabled())
-        Logging.scheduling.debug("Document '"+dd.getDocumentIdentifier()+" given priority "+new Double(docPriorities[q]).toString());
+      docPriorities[q] = new PriorityCalculator(rt,connection,bins);
       q++;
     }
 
     // Now, requeue the documents with the new priorities
-    boolean[] trackerNote = jobManager.carrydownChangeDocumentMultiple(requeueCandidates,currentTime,docPriorities);
-
-    // Free the unused priorities.
-    // Inform queuetracker about what we used and what we didn't
-    q = 0;
-    while (q < trackerNote.length)
-    {
-      if (trackerNote[q] == false)
-      {
-        String[] bins = binNames[q];
-        queueTracker.notePriorityNotUsed(bins,connection,docPriorities[q]);
-      }
-      q++;
-    }
+    jobManager.carrydownChangeDocumentMultiple(requeueCandidates,currentTime,docPriorities);
   }
 
   /** Stuff colons so we can't have conflicts. */
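
In the requeueDocumentsDueToCarrydown change above, the job manager now receives IPriorityCalculator objects instead of precomputed double priorities, so the actual value can be computed later by whoever writes the queue rows, and the old "note unused priority" bookkeeping goes away. A minimal sketch of that deferral, with illustrative names and functions in place of the ManifoldCF interfaces:

    import java.util.ArrayList;
    import java.util.List;

    // Illustrative stand-in for IPriorityCalculator: the priority is computed when asked for,
    // not when the requeue list is built.
    interface PriorityCalculatorSketch {
      double getPriority();
    }

    public class DeferredPrioritySketch {
      public static void main(String[] args) {
        String[] documents = { "docA", "docB", "docC" };

        // Build calculators up front; no priority values exist yet.
        List<PriorityCalculatorSketch> priorities = new ArrayList<>();
        for (String doc : documents) {
          String[] bins = calculateBins(doc);
          priorities.add(() -> calculatePriority(bins));
        }

        // Later, the queue writer pulls the values at the last moment.
        for (int i = 0; i < documents.length; i++)
          System.out.println(documents[i] + " -> " + priorities.get(i).getPriority());
      }

      // Hypothetical bin assignment: here, just the first character of the identifier.
      static String[] calculateBins(String documentIdentifier) {
        return new String[] { documentIdentifier.substring(0, 1) };
      }

      // Hypothetical priority function over bins.
      static double calculatePriority(String[] bins) {
        return 1.0 / (1 + Math.abs(bins[0].hashCode() % 7));
      }
    }
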
@@ -1411,74 +919,83 @@
   /** Reset all (active) document priorities.  This operation may occur due to various externally-triggered
   * events, such a job abort, pause, resume, wait, or unwait.
   */
-  public static void resetAllDocumentPriorities(IThreadContext threadContext, QueueTracker queueTracker, long currentTime)
+  public static void resetAllDocumentPriorities(IThreadContext threadContext, long currentTime, String processID)
     throws ManifoldCFException
   {
+    ILockManager lockManager = LockManagerFactory.make(threadContext);
     IJobManager jobManager = JobManagerFactory.make(threadContext);
     IRepositoryConnectionManager connectionManager = RepositoryConnectionManagerFactory.make(threadContext);
+    IReprioritizationTracker rt = ReprioritizationTrackerFactory.make(threadContext);
+
+    String reproID = IDFactory.make(threadContext);
+
+    rt.startReprioritization(System.currentTimeMillis(),processID,reproID);
+    // Reprioritize all documents in the jobqueue, 1000 at a time
+
+    Map<String,IRepositoryConnection> connectionMap = new HashMap<String,IRepositoryConnection>();
+    Map<Long,IJobDescription> jobDescriptionMap = new HashMap<Long,IJobDescription>();
     
-    // Reset the queue tracker
-    queueTracker.beginReset();
-    // Perform the reprioritization, for all active documents in active jobs.  During this time,
-    // it is safe to have other threads assign new priorities to documents, but it is NOT safe
-    // for other threads to attempt to change the minimum priority level.  The queuetracker object
-    // will therefore block that from occurring, until the reset is complete.
-    try
+    // Do the 'not yet processed' documents only.  Documents that are queued for reprocessing will be assigned
+    // new priorities.  Already processed documents won't.  This guarantees that our bins are appropriate for current thread
+    // activity.
+    // In order for this to be the correct functionality, ALL reseeding and requeuing operations MUST reset the associated document
+    // priorities.
+    while (true)
     {
-      // Reprioritize all documents in the jobqueue, 1000 at a time
+      long startTime = System.currentTimeMillis();
 
-      HashMap connectionMap = new HashMap();
-      HashMap jobDescriptionMap = new HashMap();
-
-      // Do the 'not yet processed' documents only.  Documents that are queued for reprocessing will be assigned
-      // new priorities.  Already processed documents won't.  This guarantees that our bins are appropriate for current thread
-      // activity.
-      // In order for this to be the correct functionality, ALL reseeding and requeuing operations MUST reset the associated document
-      // priorities.
-      while (true)
+      Long currentTimeValue = rt.checkReprioritizationInProgress();
+      if (currentTimeValue == null)
       {
-        long startTime = System.currentTimeMillis();
-
-        DocumentDescription[] docs = jobManager.getNextNotYetProcessedReprioritizationDocuments(currentTime, 10000);
-        if (docs.length == 0)
-          break;
-
-        // Calculate new priorities for all these documents
-        writeDocumentPriorities(threadContext,connectionManager,jobManager,docs,connectionMap,jobDescriptionMap,queueTracker,currentTime);
-
-        Logging.threads.debug("Reprioritized "+Integer.toString(docs.length)+" not-yet-processed documents in "+new Long(System.currentTimeMillis()-startTime)+" ms");
+        // Some other process or thread superseded us.
+        return;
       }
+      long updateTime = currentTimeValue.longValue();
+      
+      DocumentDescription[] docs = jobManager.getNextNotYetProcessedReprioritizationDocuments(updateTime, 10000);
+      if (docs.length == 0)
+        break;
+
+      // Calculate new priorities for all these documents
+      writeDocumentPriorities(threadContext,docs,connectionMap,jobDescriptionMap,updateTime);
+
+      Logging.threads.debug("Reprioritized "+Integer.toString(docs.length)+" not-yet-processed documents in "+new Long(System.currentTimeMillis()-startTime)+" ms");
     }
-    finally
-    {
-      queueTracker.endReset();
-    }
+    
+    rt.doneReprioritization(reproID);
   }
   
   /** Write a set of document priorities, based on the current queue tracker.
   */
-  public static void writeDocumentPriorities(IThreadContext threadContext, IRepositoryConnectionManager mgr, IJobManager jobManager, DocumentDescription[] descs, HashMap connectionMap, HashMap jobDescriptionMap, QueueTracker queueTracker, long currentTime)
+  public static void writeDocumentPriorities(IThreadContext threadContext, DocumentDescription[] descs,
+    Map<String,IRepositoryConnection> connectionMap, Map<Long,IJobDescription> jobDescriptionMap,
+    long currentTime)
     throws ManifoldCFException
   {
+    IRepositoryConnectorPool repositoryConnectorPool = RepositoryConnectorPoolFactory.make(threadContext);
+    IRepositoryConnectionManager mgr = RepositoryConnectionManagerFactory.make(threadContext);
+    IJobManager jobManager = JobManagerFactory.make(threadContext);
+    IReprioritizationTracker rt = ReprioritizationTrackerFactory.make(threadContext);
+    
     if (Logging.scheduling.isDebugEnabled())
       Logging.scheduling.debug("Reprioritizing "+Integer.toString(descs.length)+" documents");
 
 
-    double[] priorities = new double[descs.length];
+    IPriorityCalculator[] priorities = new IPriorityCalculator[descs.length];
 
     // Go through the documents and calculate the priorities
     int i = 0;
     while (i < descs.length)
     {
       DocumentDescription dd = descs[i];
-      IJobDescription job = (IJobDescription)jobDescriptionMap.get(dd.getJobID());
+      IJobDescription job = jobDescriptionMap.get(dd.getJobID());
       if (job == null)
       {
         job = jobManager.load(dd.getJobID(),true);
         jobDescriptionMap.put(dd.getJobID(),job);
       }
       String connectionName = job.getConnectionName();
-      IRepositoryConnection connection = (IRepositoryConnection)connectionMap.get(connectionName);
+      IRepositoryConnection connection = connectionMap.get(connectionName);
       if (connection == null)
       {
         connection = mgr.load(connectionName);
@@ -1487,10 +1004,7 @@
 
       String[] binNames;
       // Grab a connector handle
-      IRepositoryConnector connector = RepositoryConnectorFactory.grab(threadContext,
-        connection.getClassName(),
-        connection.getConfigParams(),
-        connection.getMaxConnections());
+      IRepositoryConnector connector = repositoryConnectorPool.grab(connection);
       try
       {
         if (connector == null)
@@ -1501,12 +1015,10 @@
       }
       finally
       {
-        RepositoryConnectorFactory.release(connector);
+        repositoryConnectorPool.release(connection,connector);
       }
 
-      priorities[i] = queueTracker.calculatePriority(binNames,connection);
-      if (Logging.scheduling.isDebugEnabled())
-        Logging.scheduling.debug("Document '"+dd.getDocumentIdentifier()+"' given priority "+new Double(priorities[i]).toString());
+      priorities[i] = new PriorityCalculator(rt,connection,binNames);
 
       i++;
     }
@@ -1517,55 +1029,6 @@
 
   }
 
-  /** Request permission from agent to delete an output connection.
-  *@param threadContext is the thread context.
-  *@param connName is the name of the output connection.
-  *@return true if the connection is in use, false otherwise.
-  */
-  public static boolean isOutputConnectionInUse(IThreadContext threadContext, String connName)
-    throws ManifoldCFException
-  {
-    // Check with job manager.
-    IJobManager jobManager = JobManagerFactory.make(threadContext);
-    return jobManager.checkIfOutputReference(connName);
-  }
-
-  /** Note the deregistration of a set of output connections.
-  *@param threadContext is the thread context.
-  *@param connectionNames are the names of the connections being deregistered.
-  */
-  public static void noteOutputConnectorDeregistration(IThreadContext threadContext, String[] connectionNames)
-    throws ManifoldCFException
-  {
-    // Notify job manager
-    IJobManager jobManager = JobManagerFactory.make(threadContext);
-    jobManager.noteOutputConnectorDeregistration(connectionNames);
-  }
-
-  /** Note the registration of a set of output connections.
-  *@param threadContext is the thread context.
-  *@param connectionNames are the names of the connections being registered.
-  */
-  public static void noteOutputConnectorRegistration(IThreadContext threadContext, String[] connectionNames)
-    throws ManifoldCFException
-  {
-    // Notify job manager
-    IJobManager jobManager = JobManagerFactory.make(threadContext);
-    jobManager.noteOutputConnectorRegistration(connectionNames);
-  }
-
-  /** Note the change in configuration of an output connection.
-  *@param threadContext is the thread context.
-  *@param connectionName is the output connection name.
-  */
-  public static void noteOutputConnectionChange(IThreadContext threadContext, String connectionName)
-    throws ManifoldCFException
-  {
-    // Notify job manager
-    IJobManager jobManager = JobManagerFactory.make(threadContext);
-    jobManager.noteOutputConnectionChange(connectionName);
-  }
-  
   /** Qualify output activity name.
   *@param outputActivityName is the name of the output activity.
   *@param outputConnectionName is the corresponding name of the output connection.
@@ -1618,10 +1081,10 @@
   
   private static final int IV_LENGTH = 16;
   
-  private static Cipher getCipher(final int mode, final String passCode, final byte[] iv) throws GeneralSecurityException,
+  private static Cipher getCipher(IThreadContext threadContext, final int mode, final String passCode, final byte[] iv) throws GeneralSecurityException,
     ManifoldCFException
   {
-    final String saltValue = getProperty(salt);
+    final String saltValue = LockManagerFactory.getProperty(threadContext, saltProperty);
 
     if (saltValue == null || saltValue.length() == 0)
       throw new ManifoldCFException("Missing required SALT value");
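
getCipher now reads the salt through the lock manager's property service; together with the pass code and the 16-byte IV it determines the key material. The exact key-derivation parameters are not visible in this hunk, so the following is only a typical construction (PBKDF2 key derivation feeding AES/CBC), not necessarily what ManifoldCF uses:

    import java.nio.charset.StandardCharsets;
    import java.security.GeneralSecurityException;
    import java.security.SecureRandom;
    import javax.crypto.Cipher;
    import javax.crypto.SecretKeyFactory;
    import javax.crypto.spec.IvParameterSpec;
    import javax.crypto.spec.PBEKeySpec;
    import javax.crypto.spec.SecretKeySpec;

    public class CipherSketch {
      private static final int IV_LENGTH = 16;

      // Derive an AES key from the pass code and salt, then build a cipher for the given mode and IV.
      static Cipher getCipher(int mode, String passCode, String saltValue, byte[] iv)
        throws GeneralSecurityException {
        SecretKeyFactory factory = SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256");
        PBEKeySpec keySpec = new PBEKeySpec(passCode.toCharArray(),
          saltValue.getBytes(StandardCharsets.UTF_8), 65536, 256);
        SecretKeySpec key = new SecretKeySpec(factory.generateSecret(keySpec).getEncoded(), "AES");
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(mode, key, new IvParameterSpec(iv));
        return cipher;
      }

      public static void main(String[] args) throws GeneralSecurityException {
        byte[] iv = new byte[IV_LENGTH];
        new SecureRandom().nextBytes(iv);       // a fresh IV per export, stored alongside the data
        Cipher encrypt = getCipher(Cipher.ENCRYPT_MODE, "pass", "per-install salt", iv);
        byte[] sealed = encrypt.doFinal("configuration".getBytes(StandardCharsets.UTF_8));
        Cipher decrypt = getCipher(Cipher.DECRYPT_MODE, "pass", "per-install salt", iv);
        System.out.println(new String(decrypt.doFinal(sealed), StandardCharsets.UTF_8));
      }
    }
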
@@ -1642,12 +1105,16 @@
   
   protected static final String API_JOBNODE = "job";
   protected static final String API_JOBSTATUSNODE = "jobstatus";
+  protected static final String API_AUTHORIZATIONDOMAINNODE = "authorizationdomain";
+  protected static final String API_AUTHORITYGROUPNODE = "authoritygroup";
   protected static final String API_REPOSITORYCONNECTORNODE = "repositoryconnector";
   protected static final String API_OUTPUTCONNECTORNODE = "outputconnector";
   protected static final String API_AUTHORITYCONNECTORNODE = "authorityconnector";
+  protected static final String API_MAPPINGCONNECTORNODE = "mappingconnector";
   protected static final String API_REPOSITORYCONNECTIONNODE = "repositoryconnection";
   protected static final String API_OUTPUTCONNECTIONNODE = "outputconnection";
   protected static final String API_AUTHORITYCONNECTIONNODE = "authorityconnection";
+  protected static final String API_MAPPINGCONNECTIONNODE = "mappingconnection";
   protected static final String API_CHECKRESULTNODE = "check_result";
   protected static final String API_JOBIDNODE = "job_id";
   protected static final String API_CONNECTIONNAMENODE = "connection_name";
@@ -1661,6 +1128,10 @@
   protected static final String CONNECTORNODE_DESCRIPTION = "description";
   protected static final String CONNECTORNODE_CLASSNAME = "class_name";
   
+  // Authorization domain nodes
+  protected static final String AUTHORIZATIONDOMAINNODE_DESCRIPTION = "description";
+  protected static final String AUTHORIZATIONDOMAINNODE_DOMAINNAME = "domain_name";
+  
   /** Decode path element.
   * Path elements in the API world cannot have "/" characters, or they become impossible to parse.  This method undoes
   * escaping that prevents "/" from appearing.
@@ -1755,6 +1226,7 @@
   {
     try
     {
+      IOutputConnectorPool outputConnectorPool = OutputConnectorPoolFactory.make(tc);
       IOutputConnectionManager connectionManager = OutputConnectionManagerFactory.make(tc);
       IOutputConnection connection = connectionManager.load(connectionName);
       if (connection == null)
@@ -1765,7 +1237,7 @@
           
       String results;
       // Grab a connection handle, and call the test method
-      IOutputConnector connector = OutputConnectorFactory.grab(tc,connection.getClassName(),connection.getConfigParams(),connection.getMaxConnections());
+      IOutputConnector connector = outputConnectorPool.grab(connection);
       try
       {
         results = connector.check();
@@ -1776,7 +1248,7 @@
       }
       finally
       {
-        OutputConnectorFactory.release(connector);
+        outputConnectorPool.release(connection,connector);
       }
           
       ConfigurationNode response = new ConfigurationNode(API_CHECKRESULTNODE);
@@ -1796,6 +1268,7 @@
   {
     try
     {
+      IAuthorityConnectorPool authorityConnectorPool = AuthorityConnectorPoolFactory.make(tc);
       IAuthorityConnectionManager connectionManager = AuthorityConnectionManagerFactory.make(tc);
       IAuthorityConnection connection = connectionManager.load(connectionName);
       if (connection == null)
@@ -1806,7 +1279,7 @@
           
       String results;
       // Grab a connection handle, and call the test method
-      IAuthorityConnector connector = AuthorityConnectorFactory.grab(tc,connection.getClassName(),connection.getConfigParams(),connection.getMaxConnections());
+      IAuthorityConnector connector = authorityConnectorPool.grab(connection);
       try
       {
         results = connector.check();
@@ -1817,7 +1290,49 @@
       }
       finally
       {
-        AuthorityConnectorFactory.release(connector);
+        authorityConnectorPool.release(connection,connector);
+      }
+          
+      ConfigurationNode response = new ConfigurationNode(API_CHECKRESULTNODE);
+      response.setValue(results);
+      output.addChild(output.getChildCount(),response);
+    }
+    catch (ManifoldCFException e)
+    {
+      createErrorNode(output,e);
+    }
+    return READRESULT_FOUND;
+  }
+
+  /** Read a mapping connection status */
+  protected static int apiReadMappingConnectionStatus(IThreadContext tc, Configuration output, String connectionName)
+    throws ManifoldCFException
+  {
+    try
+    {
+      IMappingConnectorPool mappingConnectorPool = MappingConnectorPoolFactory.make(tc);
+      IMappingConnectionManager connectionManager = MappingConnectionManagerFactory.make(tc);
+      IMappingConnection connection = connectionManager.load(connectionName);
+      if (connection == null)
+      {
+        createErrorNode(output,"Connection '"+connectionName+"' does not exist");
+        return READRESULT_NOTFOUND;
+      }
+          
+      String results;
+      // Grab a connection handle, and call the test method
+      IMappingConnector connector = mappingConnectorPool.grab(connection);
+      try
+      {
+        results = connector.check();
+      }
+      catch (ManifoldCFException e)
+      {
+        results = e.getMessage();
+      }
+      finally
+      {
+        mappingConnectorPool.release(connection,connector);
       }
           
       ConfigurationNode response = new ConfigurationNode(API_CHECKRESULTNODE);
@@ -1837,6 +1352,7 @@
   {
     try
     {
+      IRepositoryConnectorPool repositoryConnectorPool = RepositoryConnectorPoolFactory.make(tc);
       IRepositoryConnectionManager connectionManager = RepositoryConnectionManagerFactory.make(tc);
       IRepositoryConnection connection = connectionManager.load(connectionName);
       if (connection == null)
@@ -1847,7 +1363,7 @@
           
       String results;
       // Grab a connection handle, and call the test method
-      IRepositoryConnector connector = RepositoryConnectorFactory.grab(tc,connection.getClassName(),connection.getConfigParams(),connection.getMaxConnections());
+      IRepositoryConnector connector = repositoryConnectorPool.grab(connection);
       try
       {
         results = connector.check();
@@ -1858,7 +1374,7 @@
       }
       finally
       {
-        RepositoryConnectorFactory.release(connector);
+        repositoryConnectorPool.release(connection,connector);
       }
           
       ConfigurationNode response = new ConfigurationNode(API_CHECKRESULTNODE);
@@ -1872,12 +1388,14 @@
     return READRESULT_FOUND;
   }
   
+  
   /** Read an output connection's info */
   protected static int apiReadOutputConnectionInfo(IThreadContext tc, Configuration output, String connectionName, String command)
     throws ManifoldCFException
   {
     try
     {
+      IOutputConnectorPool outputConnectorPool = OutputConnectorPoolFactory.make(tc);
       IOutputConnectionManager connectionManager = OutputConnectionManagerFactory.make(tc);
       IOutputConnection connection = connectionManager.load(connectionName);
       if (connection == null)
@@ -1887,14 +1405,14 @@
       }
 
       // Grab a connection handle, and call the test method
-      IOutputConnector connector = OutputConnectorFactory.grab(tc,connection.getClassName(),connection.getConfigParams(),connection.getMaxConnections());
+      IOutputConnector connector = outputConnectorPool.grab(connection);
       try
       {
         return connector.requestInfo(output,command)?READRESULT_FOUND:READRESULT_NOTFOUND;
       }
       finally
       {
-        OutputConnectorFactory.release(connector);
+        outputConnectorPool.release(connection,connector);
       }
     }
     catch (ManifoldCFException e)
@@ -1910,6 +1428,7 @@
   {
     try
     {
+      IRepositoryConnectorPool repositoryConnectorPool = RepositoryConnectorPoolFactory.make(tc);
       IRepositoryConnectionManager connectionManager = RepositoryConnectionManagerFactory.make(tc);
       IRepositoryConnection connection = connectionManager.load(connectionName);
       if (connection == null)
@@ -1919,14 +1438,14 @@
       }
 
       // Grab a connection handle, and call the test method
-      IRepositoryConnector connector = RepositoryConnectorFactory.grab(tc,connection.getClassName(),connection.getConfigParams(),connection.getMaxConnections());
+      IRepositoryConnector connector = repositoryConnectorPool.grab(connection);
       try
       {
         return connector.requestInfo(output,command)?READRESULT_FOUND:READRESULT_NOTFOUND;
       }
       finally
       {
-        RepositoryConnectorFactory.release(connector);
+        repositoryConnectorPool.release(connection,connector);
       }
     }
     catch (ManifoldCFException e)
@@ -1937,13 +1456,47 @@
   }
 
   /** Get api job statuses */
-  protected static int apiReadJobStatuses(IThreadContext tc, Configuration output)
+  protected static int apiReadJobStatuses(IThreadContext tc, Configuration output, Map<String,List<String>> queryParameters)
+    throws ManifoldCFException
+  {
+    if (queryParameters == null)
+      queryParameters = new HashMap<String,List<String>>();
+    int maxCount;
+    List<String> maxCountList = queryParameters.get("maxcount");
+    if (maxCountList == null || maxCountList.size() == 0)
+      maxCount = Integer.MAX_VALUE;
+    else if (maxCountList.size() > 1)
+      throw new ManifoldCFException("Multiple values for maxcount parameter");
+    else
+      maxCount = new Integer(maxCountList.get(0)).intValue();
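+    // Assumption based on the getAllStatus(true,maxCount) call below and the *_EXACT flags emitted by
+    // formatJobStatus: maxcount appears to bound how many documents are counted exactly; beyond that the
+    // returned counts are estimates and the corresponding *_EXACT flag is false.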
+      
+    try
+    {
+      IJobManager jobManager = JobManagerFactory.make(tc);
+      JobStatus[] jobStatuses = jobManager.getAllStatus(true,maxCount);
+      int i = 0;
+      while (i < jobStatuses.length)
+      {
+        ConfigurationNode jobStatusNode = new ConfigurationNode(API_JOBSTATUSNODE);
+        formatJobStatus(jobStatusNode,jobStatuses[i++]);
+        output.addChild(output.getChildCount(),jobStatusNode);
+      }
+    }
+    catch (ManifoldCFException e)
+    {
+      createErrorNode(output,e);
+    }
+    return READRESULT_FOUND;
+  }
+
+  /** Get api job statuses, without document counts */
+  protected static int apiReadJobStatusesNoCounts(IThreadContext tc, Configuration output)
     throws ManifoldCFException
   {
     try
     {
       IJobManager jobManager = JobManagerFactory.make(tc);
-      JobStatus[] jobStatuses = jobManager.getAllStatus();
+      JobStatus[] jobStatuses = jobManager.getAllStatus(false);
       int i = 0;
       while (i < jobStatuses.length)
       {
@@ -1960,13 +1513,24 @@
   }
   
   /** Get api job status */
-  protected static int apiReadJobStatus(IThreadContext tc, Configuration output, Long jobID)
+  protected static int apiReadJobStatus(IThreadContext tc, Configuration output, Long jobID, Map<String,List<String>> queryParameters)
     throws ManifoldCFException
   {
+    if (queryParameters == null)
+      queryParameters = new HashMap<String,List<String>>();
+    int maxCount;
+    List<String> maxCountList = queryParameters.get("maxcount");
+    if (maxCountList == null || maxCountList.size() == 0)
+      maxCount = Integer.MAX_VALUE;
+    else if (maxCountList.size() > 1)
+      throw new ManifoldCFException("Multiple values for maxcount parameter");
+    else
+      maxCount = new Integer(maxCountList.get(0)).intValue();
+
     try
     {
       IJobManager jobManager = JobManagerFactory.make(tc);
-      JobStatus status = jobManager.getStatus(jobID);
+      JobStatus status = jobManager.getStatus(jobID,true,maxCount);
       if (status != null)
       {
         ConfigurationNode jobStatusNode = new ConfigurationNode(API_JOBSTATUSNODE);
@@ -2003,6 +1567,57 @@
     return READRESULT_FOUND;
   }
   
+  /** Get authority groups */
+  protected static int apiReadAuthorityGroups(IThreadContext tc, Configuration output)
+    throws ManifoldCFException
+  {
+    try
+    {
+      IAuthorityGroupManager groupManager = AuthorityGroupManagerFactory.make(tc);
+      IAuthorityGroup[] groups = groupManager.getAllGroups();
+      int i = 0;
+      while (i < groups.length)
+      {
+        ConfigurationNode groupNode = new ConfigurationNode(API_AUTHORITYGROUPNODE);
+        formatAuthorityGroup(groupNode,groups[i++]);
+        output.addChild(output.getChildCount(),groupNode);
+      }
+    }
+    catch (ManifoldCFException e)
+    {
+      createErrorNode(output,e);
+    }
+    return READRESULT_FOUND;
+  }
+  
+  /** Read authority group */
+  protected static int apiReadAuthorityGroup(IThreadContext tc, Configuration output, String groupName)
+    throws ManifoldCFException
+  {
+    try
+    {
+      IAuthorityGroupManager groupManager = AuthorityGroupManagerFactory.make(tc);
+      IAuthorityGroup group = groupManager.load(groupName);
+      if (group != null)
+      {
+        // Fill the return object with authority group information
+        ConfigurationNode groupNode = new ConfigurationNode(API_AUTHORITYGROUPNODE);
+        formatAuthorityGroup(groupNode,group);
+        output.addChild(output.getChildCount(),groupNode);
+      }
+      else
+      {
+        createErrorNode(output,"Authority group '"+groupName+"' does not exist.");
+        return READRESULT_NOTFOUND;
+      }
+    }
+    catch (ManifoldCFException e)
+    {
+      createErrorNode(output,e);
+    }
+    return READRESULT_FOUND;
+  }
+
   /** Get output connections */
   protected static int apiReadOutputConnections(IThreadContext tc, Configuration output)
     throws ManifoldCFException
@@ -2077,6 +1692,29 @@
     return READRESULT_FOUND;
   }
 
+  /** Get mapping connections */
+  protected static int apiReadMappingConnections(IThreadContext tc, Configuration output)
+    throws ManifoldCFException
+  {
+    try
+    {
+      IMappingConnectionManager connManager = MappingConnectionManagerFactory.make(tc);
+      IMappingConnection[] connections = connManager.getAllConnections();
+      int i = 0;
+      while (i < connections.length)
+      {
+        ConfigurationNode connectionNode = new ConfigurationNode(API_MAPPINGCONNECTIONNODE);
+        formatMappingConnection(connectionNode,connections[i++]);
+        output.addChild(output.getChildCount(),connectionNode);
+      }
+    }
+    catch (ManifoldCFException e)
+    {
+      createErrorNode(output,e);
+    }
+    return READRESULT_FOUND;
+  }
+
   /** Read authority connection */
   protected static int apiReadAuthorityConnection(IThreadContext tc, Configuration output, String connectionName)
     throws ManifoldCFException
@@ -2105,6 +1743,34 @@
     return READRESULT_FOUND;
   }
 
+  /** Read mapping connection */
+  protected static int apiReadMappingConnection(IThreadContext tc, Configuration output, String connectionName)
+    throws ManifoldCFException
+  {
+    try
+    {
+      IMappingConnectionManager connectionManager = MappingConnectionManagerFactory.make(tc);
+      IMappingConnection connection = connectionManager.load(connectionName);
+      if (connection != null)
+      {
+        // Fill the return object with mapping connection information
+        ConfigurationNode connectionNode = new ConfigurationNode(API_MAPPINGCONNECTIONNODE);
+        formatMappingConnection(connectionNode,connection);
+        output.addChild(output.getChildCount(),connectionNode);
+      }
+      else
+      {
+        createErrorNode(output,"Mapping connection '"+connectionName+"' does not exist.");
+        return READRESULT_NOTFOUND;
+      }
+    }
+    catch (ManifoldCFException e)
+    {
+      createErrorNode(output,e);
+    }
+    return READRESULT_FOUND;
+  }
+
   /** Get repository connections */
   protected static int apiReadRepositoryConnections(IThreadContext tc, Configuration output)
     throws ManifoldCFException
@@ -2230,6 +1896,81 @@
     return READRESULT_FOUND;
   }
 
+  /** List mapping connectors */
+  protected static int apiReadMappingConnectors(IThreadContext tc, Configuration output)
+    throws ManifoldCFException
+  {
+    // List registered mapping connectors
+    try
+    {
+      IMappingConnectorManager manager = MappingConnectorManagerFactory.make(tc);
+      IResultSet resultSet = manager.getConnectors();
+      int j = 0;
+      while (j < resultSet.getRowCount())
+      {
+        IResultRow row = resultSet.getRow(j++);
+        ConfigurationNode child = new ConfigurationNode(API_MAPPINGCONNECTORNODE);
+        String description = (String)row.getValue("description");
+        String className = (String)row.getValue("classname");
+        ConfigurationNode node;
+        if (description != null)
+        {
+          node = new ConfigurationNode(CONNECTORNODE_DESCRIPTION);
+          node.setValue(description);
+          child.addChild(child.getChildCount(),node);
+        }
+        node = new ConfigurationNode(CONNECTORNODE_CLASSNAME);
+        node.setValue(className);
+        child.addChild(child.getChildCount(),node);
+
+        output.addChild(output.getChildCount(),child);
+      }
+    }
+    catch (ManifoldCFException e)
+    {
+      createErrorNode(output,e);
+    }
+    return READRESULT_FOUND;
+  }
+
+  /** List authorization domains */
+  protected static int apiReadAuthorizationDomains(IThreadContext tc, Configuration output)
+    throws ManifoldCFException
+  {
+    // List registered authorization domains
+    try
+    {
+      IAuthorizationDomainManager manager = AuthorizationDomainManagerFactory.make(tc);
+      IResultSet resultSet = manager.getDomains();
+      int j = 0;
+      while (j < resultSet.getRowCount())
+      {
+        IResultRow row = resultSet.getRow(j++);
+        ConfigurationNode child = new ConfigurationNode(API_AUTHORIZATIONDOMAINNODE);
+        String description = (String)row.getValue("description");
+        String domainName = (String)row.getValue("domainname");
+        ConfigurationNode node;
+        if (description != null)
+        {
+          node = new ConfigurationNode(AUTHORIZATIONDOMAINNODE_DESCRIPTION);
+          node.setValue(description);
+          child.addChild(child.getChildCount(),node);
+        }
+        node = new ConfigurationNode(AUTHORIZATIONDOMAINNODE_DOMAINNAME);
+        node.setValue(domainName);
+        child.addChild(child.getChildCount(),node);
+
+        output.addChild(output.getChildCount(),child);
+      }
+    }
+    catch (ManifoldCFException e)
+    {
+      createErrorNode(output,e);
+    }
+    return READRESULT_FOUND;
+
+  }
+  
   /** List repository connectors */
   protected static int apiReadRepositoryConnectors(IThreadContext tc, Configuration output)
     throws ManifoldCFException
@@ -2940,6 +2681,10 @@
       {
         return apiReadOutputConnectionStatus(tc,output,connectionName);
       }
+      else if (connectionType.equals("mappingconnections"))
+      {
+        return apiReadMappingConnectionStatus(tc,output,connectionName);
+      }
       else if (connectionType.equals("authorityconnections"))
       {
         return apiReadAuthorityConnectionStatus(tc,output,connectionName);
@@ -2991,18 +2736,31 @@
     }
     else if (path.equals("jobstatuses"))
     {
-      return apiReadJobStatuses(tc,output);
+      return apiReadJobStatuses(tc,output,queryParameters);
     }
     else if (path.startsWith("jobstatuses/"))
     {
       Long jobID = new Long(path.substring("jobstatuses/".length()));
-      return apiReadJobStatus(tc,output,jobID);
+      return apiReadJobStatus(tc,output,jobID,queryParameters);
+    }
+    else if (path.equals("jobstatusesnocounts"))
+    {
+      return apiReadJobStatusesNoCounts(tc,output);
     }
     else if (path.startsWith("jobstatusesnocounts/"))
     {
       Long jobID = new Long(path.substring("jobstatusesnocounts/".length()));
       return apiReadJobStatusNoCounts(tc,output,jobID);
     }
+    else if (path.equals("authoritygroups"))
+    {
+      return apiReadAuthorityGroups(tc,output);
+    }
+    else if (path.startsWith("authoritygroups/"))
+    {
+      String groupName = decodeAPIPathElement(path.substring("authoritygroups/".length()));
+      return apiReadAuthorityGroup(tc,output,groupName);
+    }
     else if (path.equals("outputconnections"))
     {
       return apiReadOutputConnections(tc,output);
@@ -3012,6 +2770,15 @@
       String connectionName = decodeAPIPathElement(path.substring("outputconnections/".length()));
       return apiReadOutputConnection(tc,output,connectionName);
     }
+    else if (path.equals("mappingconnections"))
+    {
+      return apiReadMappingConnections(tc,output);
+    }
+    else if (path.startsWith("mappingconnections/"))
+    {
+      String connectionName = decodeAPIPathElement(path.substring("mappingconnections/".length()));
+      return apiReadMappingConnection(tc,output,connectionName);
+    }
     else if (path.equals("authorityconnections"))
     {
       return apiReadAuthorityConnections(tc,output);
@@ -3034,6 +2801,10 @@
     {
       return apiReadOutputConnectors(tc,output);
     }
+    else if (path.equals("mappingconnectors"))
+    {
+      return apiReadMappingConnectors(tc,output);
+    }
     else if (path.equals("authorityconnectors"))
     {
       return apiReadAuthorityConnectors(tc,output);
@@ -3041,7 +2812,11 @@
     else if (path.equals("repositoryconnectors"))
     {
       return apiReadRepositoryConnectors(tc,output);
-    }   
+    }
+    else if (path.equals("authorizationdomains"))
+    {
+      return apiReadAuthorizationDomains(tc,output);
+    }
     else
     {
       createErrorNode(output,"Unrecognized resource.");
@@ -3247,6 +3022,41 @@
     return WRITERESULT_FOUND;
   }
 
+  /** Write authority group.
+  */
+  protected static int apiWriteAuthorityGroup(IThreadContext tc, Configuration output, Configuration input, String groupName)
+    throws ManifoldCFException
+  {
+    ConfigurationNode groupNode = findConfigurationNode(input,API_AUTHORITYGROUPNODE);
+    if (groupNode == null)
+      throw new ManifoldCFException("Input argument must have '"+API_AUTHORITYGROUPNODE+"' field");
+      
+    // Turn the configuration node into an AuthorityGroup
+    org.apache.manifoldcf.authorities.authgroups.AuthorityGroup authorityGroup = new org.apache.manifoldcf.authorities.authgroups.AuthorityGroup();
+    processAuthorityGroup(authorityGroup,groupNode);
+      
+    if (authorityGroup.getName() == null)
+      authorityGroup.setName(groupName);
+    else
+    {
+      if (!authorityGroup.getName().equals(groupName))
+        throw new ManifoldCFException("Authority group name in path and in object must agree");
+    }
+      
+    try
+    {
+      // Save the connection.
+      IAuthorityGroupManager groupManager = AuthorityGroupManagerFactory.make(tc);
+      if (groupManager.save(authorityGroup))
+        return WRITERESULT_CREATED;
+    }
+    catch (ManifoldCFException e)
+    {
+      createErrorNode(output,e);
+    }
+    return WRITERESULT_FOUND;
+  }
+
   /** Write output connection.
   */
   protected static int apiWriteOutputConnection(IThreadContext tc, Configuration output, Configuration input, String connectionName)
@@ -3317,6 +3127,41 @@
     return WRITERESULT_FOUND;
   }
   
+  /** Write mapping connection.
+  */
+  protected static int apiWriteMappingConnection(IThreadContext tc, Configuration output, Configuration input, String connectionName)
+    throws ManifoldCFException
+  {
+    ConfigurationNode connectionNode = findConfigurationNode(input,API_MAPPINGCONNECTIONNODE);
+    if (connectionNode == null)
+      throw new ManifoldCFException("Input argument must have '"+API_MAPPINGCONNECTIONNODE+"' field");
+      
+    // Turn the configuration node into a MappingConnection
+    org.apache.manifoldcf.authorities.mapping.MappingConnection mappingConnection = new org.apache.manifoldcf.authorities.mapping.MappingConnection();
+    processMappingConnection(mappingConnection,connectionNode);
+      
+    if (mappingConnection.getName() == null)
+      mappingConnection.setName(connectionName);
+    else
+    {
+      if (!mappingConnection.getName().equals(connectionName))
+        throw new ManifoldCFException("Connection name in path and in object must agree");
+    }
+      
+    try
+    {
+      // Save the connection.
+      IMappingConnectionManager connectionManager = MappingConnectionManagerFactory.make(tc);
+      if (connectionManager.save(mappingConnection))
+        return WRITERESULT_CREATED;
+    }
+    catch (ManifoldCFException e)
+    {
+      createErrorNode(output,e);
+    }
+    return WRITERESULT_FOUND;
+  }
+
   /** Write repository connection.
   */
   protected static int apiWriteRepositoryConnection(IThreadContext tc, Configuration output, Configuration input, String connectionName)
@@ -3419,11 +3264,21 @@
       Long jobID = new Long(path.substring("jobs/".length()));
       return apiWriteJob(tc,output,input,jobID);
     }
+    else if (path.startsWith("authoritygroups/"))
+    {
+      String groupName = decodeAPIPathElement(path.substring("authoritygroups/".length()));
+      return apiWriteAuthorityGroup(tc,output,input,groupName);
+    }
     else if (path.startsWith("outputconnections/"))
     {
       String connectionName = decodeAPIPathElement(path.substring("outputconnections/".length()));
       return apiWriteOutputConnection(tc,output,input,connectionName);
     }
+    else if (path.startsWith("mappingconnections/"))
+    {
+      String connectionName = decodeAPIPathElement(path.substring("mappingconnections/".length()));
+      return apiWriteMappingConnection(tc,output,input,connectionName);
+    }
     else if (path.startsWith("authorityconnections/"))
     {
       String connectionName = decodeAPIPathElement(path.substring("authorityconnections/".length()));
@@ -3485,6 +3340,23 @@
     return DELETERESULT_FOUND;
   }
   
+  /** Delete authority group.
+  */
+  protected static int apiDeleteAuthorityGroup(IThreadContext tc, Configuration output, String groupName)
+    throws ManifoldCFException
+  {
+    try
+    {
+      IAuthorityGroupManager groupManager = AuthorityGroupManagerFactory.make(tc);
+      groupManager.delete(groupName);
+    }
+    catch (ManifoldCFException e)
+    {
+      createErrorNode(output,e);
+    }
+    return DELETERESULT_FOUND;
+  }
+
   /** Delete output connection.
   */
   protected static int apiDeleteOutputConnection(IThreadContext tc, Configuration output, String connectionName)
@@ -3550,6 +3422,11 @@
       Long jobID = new Long(path.substring("jobs/".length()));
       return apiDeleteJob(tc,output,jobID);
     }
+    else if (path.startsWith("authoritygroups/"))
+    {
+      String groupName = decodeAPIPathElement(path.substring("authoritygroups/".length()));
+      return apiDeleteAuthorityGroup(tc,output,groupName);
+    }
     else if (path.startsWith("outputconnections/"))
     {
       String connectionName = decodeAPIPathElement(path.substring("outputconnections/".length()));
@@ -3764,7 +3641,7 @@
           {
             requestMinimum = scheduleField.getValue().equals("true");
           }
-          if (fieldType.equals(JOBNODE_TIMEZONE))
+          else if (fieldType.equals(JOBNODE_TIMEZONE))
           {
             timezone = scheduleField.getValue();
           }
@@ -4139,6 +4016,9 @@
   protected static final String JOBSTATUSNODE_DOCUMENTSINQUEUE = "documents_in_queue";
   protected static final String JOBSTATUSNODE_DOCUMENTSOUTSTANDING = "documents_outstanding";
   protected static final String JOBSTATUSNODE_DOCUMENTSPROCESSED = "documents_processed";
+  protected static final String JOBSTATUSNODE_QUEUEEXACT = "queue_exact";
+  protected static final String JOBSTATUSNODE_OUTSTANDINGEXACT = "outstanding_exact";
+  protected static final String JOBSTATUSNODE_PROCESSEDEXACT = "processed_exact";
   
   /** Format a job status.
   */
@@ -4197,6 +4077,19 @@
     child.setValue(new Long(jobStatus.getDocumentsProcessed()).toString());
     jobStatusNode.addChild(jobStatusNode.getChildCount(),child);
 
+    // Exact flags
+    child = new ConfigurationNode(JOBSTATUSNODE_QUEUEEXACT);
+    child.setValue(new Boolean(jobStatus.getQueueCountExact()).toString());
+    jobStatusNode.addChild(jobStatusNode.getChildCount(),child);
+
+    child = new ConfigurationNode(JOBSTATUSNODE_OUTSTANDINGEXACT);
+    child.setValue(new Boolean(jobStatus.getOutstandingCountExact()).toString());
+    jobStatusNode.addChild(jobStatusNode.getChildCount(),child);
+
+    child = new ConfigurationNode(JOBSTATUSNODE_PROCESSEDEXACT);
+    child.setValue(new Boolean(jobStatus.getProcessedCountExact()).toString());
+    jobStatusNode.addChild(jobStatusNode.getChildCount(),child);
+
   }
 
   protected static String statusMap(int status)
@@ -4240,6 +4133,73 @@
 
   // End of jobstatus API support.
   
+  // Authority group API
+  
+  protected static final String AUTHGROUPNODE_ISNEW = "isnew";
+  protected static final String AUTHGROUPNODE_NAME = "name";
+  protected static final String AUTHGROUPNODE_DESCRIPTION = "description";
+  
+  // Authority group API support.
+  
+  /** Convert input hierarchy into an AuthorityGroup object.
+  */
+  protected static void processAuthorityGroup(org.apache.manifoldcf.authorities.authgroups.AuthorityGroup group, ConfigurationNode groupNode)
+    throws ManifoldCFException
+  {
+    // Walk through the node's children
+    int i = 0;
+    while (i < groupNode.getChildCount())
+    {
+      ConfigurationNode child = groupNode.findChild(i++);
+      String childType = child.getType();
+      if (childType.equals(AUTHGROUPNODE_ISNEW))
+      {
+        if (child.getValue() == null)
+          throw new ManifoldCFException("Authority group isnew node requires a value");
+        group.setIsNew(child.getValue().equals("true"));
+      }
+      else if (childType.equals(AUTHGROUPNODE_NAME))
+      {
+        if (child.getValue() == null)
+          throw new ManifoldCFException("Authority group name node requires a value");
+        group.setName(child.getValue());
+      }
+      else if (childType.equals(AUTHGROUPNODE_DESCRIPTION))
+      {
+        if (child.getValue() == null)
+          throw new ManifoldCFException("Authority group description node requires a value");
+        group.setDescription(child.getValue());
+      }
+      else
+        throw new ManifoldCFException("Unrecognized authority group field: '"+childType+"'");
+    }
+
+  }
+  
+  /** Format an authority group.
+  */
+  protected static void formatAuthorityGroup(ConfigurationNode groupNode, IAuthorityGroup group)
+  {
+    ConfigurationNode child;
+    int j;
+
+    child = new ConfigurationNode(AUTHGROUPNODE_ISNEW);
+    child.setValue(group.getIsNew()?"true":"false");
+    groupNode.addChild(groupNode.getChildCount(),child);
+
+    child = new ConfigurationNode(AUTHGROUPNODE_NAME);
+    child.setValue(group.getName());
+    groupNode.addChild(groupNode.getChildCount(),child);
+
+    if (group.getDescription() != null)
+    {
+      child = new ConfigurationNode(AUTHGROUPNODE_DESCRIPTION);
+      child.setValue(group.getDescription());
+      groupNode.addChild(groupNode.getChildCount(),child);
+    }
+    
+  }
+
   // Connection API
   
   protected static final String CONNECTIONNODE_ISNEW = "isnew";
@@ -4247,12 +4207,15 @@
   protected static final String CONNECTIONNODE_CLASSNAME = "class_name";
   protected static final String CONNECTIONNODE_MAXCONNECTIONS = "max_connections";
   protected static final String CONNECTIONNODE_DESCRIPTION = "description";
+  protected static final String CONNECTIONNODE_PREREQUISITE = "prerequisite";
   protected static final String CONNECTIONNODE_CONFIGURATION = "configuration";
   protected static final String CONNECTIONNODE_ACLAUTHORITY = "acl_authority";
   protected static final String CONNECTIONNODE_THROTTLE = "throttle";
   protected static final String CONNECTIONNODE_MATCH = "match";
   protected static final String CONNECTIONNODE_MATCHDESCRIPTION = "match_description";
   protected static final String CONNECTIONNODE_RATE = "rate";
+  protected static final String CONNECTIONNODE_AUTHDOMAIN = "authdomain";
+  protected static final String CONNECTIONNODE_AUTHGROUP = "authgroup";
   
   // Output connection API support.
   
@@ -4410,6 +4373,24 @@
           throw new ManifoldCFException("Error parsing max connections: "+e.getMessage(),e);
         }
       }
+      else if (childType.equals(CONNECTIONNODE_PREREQUISITE))
+      {
+        if (child.getValue() == null)
+          throw new ManifoldCFException("Connection prerequisite node requires a value");
+        connection.setPrerequisiteMapping(child.getValue());
+      }
+      else if (childType.equals(CONNECTIONNODE_AUTHDOMAIN))
+      {
+        if (child.getValue() == null)
+          throw new ManifoldCFException("Connection authdomain node requires a value");
+        connection.setAuthDomain(child.getValue());
+      }
+      else if (childType.equals(CONNECTIONNODE_AUTHGROUP))
+      {
+        if (child.getValue() == null)
+          throw new ManifoldCFException("Connection authgroup node requires a value");
+        connection.setAuthGroup(child.getValue());
+      }
       else if (childType.equals(CONNECTIONNODE_DESCRIPTION))
       {
         if (child.getValue() == null)
@@ -4435,7 +4416,8 @@
       throw new ManifoldCFException("Missing connection field: '"+CONNECTIONNODE_CLASSNAME+"'");
 
   }
-  
+
+
   /** Format an authority connection.
   */
   protected static void formatAuthorityConnection(ConfigurationNode connectionNode, IAuthorityConnection connection)
@@ -4459,6 +4441,149 @@
     child.setValue(Integer.toString(connection.getMaxConnections()));
     connectionNode.addChild(connectionNode.getChildCount(),child);
 
+    if (connection.getPrerequisiteMapping() != null)
+    {
+      child = new ConfigurationNode(CONNECTIONNODE_PREREQUISITE);
+      child.setValue(connection.getPrerequisiteMapping());
+      connectionNode.addChild(connectionNode.getChildCount(),child);
+    }
+
+    if (connection.getDescription() != null)
+    {
+      child = new ConfigurationNode(CONNECTIONNODE_DESCRIPTION);
+      child.setValue(connection.getDescription());
+      connectionNode.addChild(connectionNode.getChildCount(),child);
+    }
+    
+    if (connection.getAuthDomain() != null)
+    {
+      child = new ConfigurationNode(CONNECTIONNODE_AUTHDOMAIN);
+      child.setValue(connection.getAuthDomain());
+      connectionNode.addChild(connectionNode.getChildCount(),child);
+    }
+    
+    child = new ConfigurationNode(CONNECTIONNODE_AUTHGROUP);
+    child.setValue(connection.getAuthGroup());
+    connectionNode.addChild(connectionNode.getChildCount(),child);
+
+    ConfigParams cp = connection.getConfigParams();
+    child = new ConfigurationNode(CONNECTIONNODE_CONFIGURATION);
+    j = 0;
+    while (j < cp.getChildCount())
+    {
+      ConfigurationNode cn = cp.findChild(j++);
+      child.addChild(child.getChildCount(),cn);
+    }
+    connectionNode.addChild(connectionNode.getChildCount(),child);
+
+  }
+
+  // Mapping connection API methods
+  
+  /** Convert input hierarchy into a MappingConnection object.
+  */
+  protected static void processMappingConnection(org.apache.manifoldcf.authorities.mapping.MappingConnection connection, ConfigurationNode connectionNode)
+    throws ManifoldCFException
+  {
+    // Walk through the node's children
+    int i = 0;
+    while (i < connectionNode.getChildCount())
+    {
+      ConfigurationNode child = connectionNode.findChild(i++);
+      String childType = child.getType();
+      if (childType.equals(CONNECTIONNODE_ISNEW))
+      {
+        if (child.getValue() == null)
+          throw new ManifoldCFException("Connection isnew node requires a value");
+        connection.setIsNew(child.getValue().equals("true"));
+      }
+      else if (childType.equals(CONNECTIONNODE_NAME))
+      {
+        if (child.getValue() == null)
+          throw new ManifoldCFException("Connection name node requires a value");
+        connection.setName(child.getValue());
+      }
+      else if (childType.equals(CONNECTIONNODE_CLASSNAME))
+      {
+        if (child.getValue() == null)
+          throw new ManifoldCFException("Connection classname node requires a value");
+        connection.setClassName(child.getValue());
+      }
+      else if (childType.equals(CONNECTIONNODE_MAXCONNECTIONS))
+      {
+        if (child.getValue() == null)
+          throw new ManifoldCFException("Connection maxconnections node requires a value");
+        try
+        {
+          connection.setMaxConnections(Integer.parseInt(child.getValue()));
+        }
+        catch (NumberFormatException e)
+        {
+          throw new ManifoldCFException("Error parsing max connections: "+e.getMessage(),e);
+        }
+      }
+      else if (childType.equals(CONNECTIONNODE_PREREQUISITE))
+      {
+        if (child.getValue() == null)
+          throw new ManifoldCFException("Connection prerequisite node requires a value");
+        connection.setPrerequisiteMapping(child.getValue());
+      }
+      else if (childType.equals(CONNECTIONNODE_DESCRIPTION))
+      {
+        if (child.getValue() == null)
+          throw new ManifoldCFException("Connection description node requires a value");
+        connection.setDescription(child.getValue());
+      }
+      else if (childType.equals(CONNECTIONNODE_CONFIGURATION))
+      {
+        // Get the connection's configuration, clear out the children, and copy new ones from the child.
+        ConfigParams cp = connection.getConfigParams();
+        cp.clearChildren();
+        int j = 0;
+        while (j < child.getChildCount())
+        {
+          ConfigurationNode cn = child.findChild(j++);
+          cp.addChild(cp.getChildCount(),new ConfigNode(cn));
+        }
+      }
+      else
+        throw new ManifoldCFException("Unrecognized mapping connection field: '"+childType+"'");
+    }
+    if (connection.getClassName() == null)
+      throw new ManifoldCFException("Missing connection field: '"+CONNECTIONNODE_CLASSNAME+"'");
+
+  }
+
+  /** Format a mapping connection.
+  */
+  protected static void formatMappingConnection(ConfigurationNode connectionNode, IMappingConnection connection)
+  {
+    ConfigurationNode child;
+    int j;
+    
+    child = new ConfigurationNode(CONNECTIONNODE_ISNEW);
+    child.setValue(connection.getIsNew()?"true":"false");
+    connectionNode.addChild(connectionNode.getChildCount(),child);
+
+    child = new ConfigurationNode(CONNECTIONNODE_NAME);
+    child.setValue(connection.getName());
+    connectionNode.addChild(connectionNode.getChildCount(),child);
+
+    child = new ConfigurationNode(CONNECTIONNODE_CLASSNAME);
+    child.setValue(connection.getClassName());
+    connectionNode.addChild(connectionNode.getChildCount(),child);
+
+    child = new ConfigurationNode(CONNECTIONNODE_MAXCONNECTIONS);
+    child.setValue(Integer.toString(connection.getMaxConnections()));
+    connectionNode.addChild(connectionNode.getChildCount(),child);
+
+    if (connection.getPrerequisiteMapping() != null)
+    {
+      child = new ConfigurationNode(CONNECTIONNODE_PREREQUISITE);
+      child.setValue(connection.getPrerequisiteMapping());
+      connectionNode.addChild(connectionNode.getChildCount(),child);
+    }
+    
     if (connection.getDescription() != null)
     {
       child = new ConfigurationNode(CONNECTIONNODE_DESCRIPTION);
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/NotificationResetManager.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/NotificationResetManager.java
new file mode 100644
index 0000000..77062e8
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/NotificationResetManager.java
@@ -0,0 +1,57 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.crawler.system;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+import org.apache.manifoldcf.crawler.interfaces.*;
+import org.apache.manifoldcf.crawler.system.Logging;
+import java.util.*;
+import java.lang.reflect.*;
+
+/** Class which handles reset for the notification thread pool (of which there's
+* typically only one member).  The reset action here
+* is to reset the notification worker status of jobs.
+*/
+public class NotificationResetManager extends ResetManager
+{
+
+  /** Constructor. */
+  public NotificationResetManager(String processID)
+  {
+    super(processID);
+  }
+
+  /** Reset */
+  @Override
+  protected void performResetLogic(IThreadContext tc, String processID)
+    throws ManifoldCFException
+  {
+    IJobManager jobManager = JobManagerFactory.make(tc);
+    jobManager.resetNotificationWorkerStatus(processID);
+  }
+
+  /** Do the wakeup logic.
+  */
+  @Override
+  protected void performWakeupLogic()
+  {
+  }
+
+}
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/PriorityCalculator.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/PriorityCalculator.java
new file mode 100644
index 0000000..4f71bbf
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/PriorityCalculator.java
@@ -0,0 +1,268 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.crawler.system;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+import org.apache.manifoldcf.crawler.interfaces.*;
+import java.util.*;
+import java.util.regex.*;
+
+/** This class calculates a document priority given all the required inputs.
+* It is not thread safe, but calls classes that are (e.g. the reprioritization tracker).
+*/
+public class PriorityCalculator implements IPriorityCalculator
+{
+  public static final String _rcsid = "@(#)$Id$";
+
+  /** This is a made-up constant, originally based on 100 documents/second (T = 10 ms), but adjusted downward
+  * to 50 ms per fetch (20 documents/second) as a result of experimentation and testing.  It is the value described as "T" below.
+  */
+  private final static double minMsPerFetch = 50.0;
+
+  protected final IRepositoryConnection connection;
+  protected final String[] binNames;
+  protected final IReprioritizationTracker rt;
+  
+  protected final double[] binCountScaleFactors;
+  protected final double[] weightedMinimumDepths;
+  
+  protected Double cachedValue = null;
+  
+  /** Constructor. */
+  public PriorityCalculator(IReprioritizationTracker rt, IRepositoryConnection connection, String[] documentBins)
+    throws ManifoldCFException
+  {
+    this.connection = connection;
+    this.binNames = documentBins;
+    this.rt = rt;
+    
+    // Now, precompute the weightedMinimumDepths etc; we'll need it whether we preload or not.
+    
+    // For each bin, we will be calculating the bin count scale factor, which is what we multiply the bincount by to adjust for the
+    // throttling on that bin.
+    binCountScaleFactors = new double[binNames.length];
+    weightedMinimumDepths = new double[binNames.length];
+
+    // NOTE: We must be sure to adjust the return value by the factor calculated due to performance; a slower throttle rate
+    // should yield a lower priority.  In theory it should be possible to calculate an adjusted priority pretty exactly,
+    // on the basis that the fetch rates of two distinct bins should grant priorities such that:
+    //
+    //  (n documents) / (the rate of fetch (docs/millisecond) of the first bin) = milliseconds for the first bin
+    //
+    //  should equal:
+    //
+    //  (m documents) / (the rate of fetch of the second bin) = milliseconds for the second bin
+    //
+    // ... and then assigning priorities so that after a given number of document priorities are assigned from the first bin, the
+    // corresponding (*m/n) number of document priorities would get assigned for the second bin.
+    //
+    // Suppose the maximum fetch rate for the document is F fetches per millisecond.  If the document priority assigned for the Bth
+    // bin member is -log(1/(1+B)) for a document fetched with no throttling whatsoever,
+    // then we want the priority to be -log(1/(1+k)) for a throttled bin, where k is chosen so that:
+    // k = B * ((T + 1/F)/T) = B * (1 + 1/TF)
+    // ... where T is the time taken to fetch a single document that has no throttling at all.
+    // For the purposes of this exercise, T is taken to be the minMsPerFetch constant above (50 ms per fetch, i.e. 20 docs/sec).
+    //
+    // Basically, for F = 0, k should be infinity, and for F = infinity, k should be B.
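+    //
+    // As a worked example (values illustrative): with T = minMsPerFetch = 50 ms and a maximum fetch rate
+    // F = 0.002 fetches/ms (2 docs/sec), the scale factor is 1 + 1/(T*F) = 1 + 1/0.1 = 11, so a bin count
+    // of B is treated as 11*B when the priority is computed.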
+
+    // First, calculate the document's max fetch rate, in fetches per millisecond.  This will be used to adjust the priority, and
+    // also when resetting the bin counts.
+    double[] maxFetchRates = calculateMaxFetchRates(binNames,connection);
+
+    // Before calculating priority, calculate some factors that will allow us to determine the proper starting value for a bin.
+    double currentMinimumDepth = rt.getMinimumDepth();
+
+    // First thing to do is to reset the bin values based on the current minimum.
+    for (int i = 0; i < binNames.length; i++)
+    {
+      String binName = binNames[i];
+      // Remember, maxFetchRate is in fetches per ms.
+      double maxFetchRate = maxFetchRates[i];
+
+      // Calculate (and save for later) the scale factor for this bin.
+      double binCountScaleFactor;
+      if (maxFetchRate == 0.0)
+        binCountScaleFactor = Double.POSITIVE_INFINITY;
+      else
+        binCountScaleFactor = 1.0 + 1.0 / (minMsPerFetch * maxFetchRate);
+      binCountScaleFactors[i] = binCountScaleFactor;
+      weightedMinimumDepths[i] = currentMinimumDepth / binCountScaleFactor;
+    }
+    
+  }
+  
+  /** Log a preload request for this priority value.
+  */
+  public void makePreloadRequest()
+  {
+    for (int i = 0; i < binNames.length; i++)
+    {
+      String binName = binNames[i];
+      rt.addPreloadRequest(binName, weightedMinimumDepths[i]);
+    }
+
+  }
+
+  /** Calculate a document priority value.  Priorities are reversed, and in log space, so that
+  * zero (0.0) is considered the highest possible priority, and larger priority values are considered lower in actual priority.
+  * The bins and connection supplied to the constructor determine the result: more highly throttled connections are given
+  * less favorable priority.
+  *@return the priority value, based on recent history.  Also updates statistics atomically.
+  */
+  @Override
+  public double getDocumentPriority()
+    throws ManifoldCFException
+  {
+    if (cachedValue != null)
+      return cachedValue.doubleValue();
+
+    double highestAdjustedCount = 0.0;
+    // Find the bin with the largest effective count, and use that for the document's priority.
+    // (This of course assumes that the slowest throttle is the one that wins.)
+    for (int i = 0; i < binNames.length; i++)
+    {
+      String binName = binNames[i];
+      double binCountScaleFactor = binCountScaleFactors[i];
+      double weightedMinimumDepth = weightedMinimumDepths[i];
+
+      double thisCount = rt.getIncrementBinValue(binName,weightedMinimumDepth);
+      double adjustedCount;
+      // Use the scale factor already calculated above to yield a priority that is adjusted for the fetch rate.
+      if (binCountScaleFactor == Double.POSITIVE_INFINITY)
+        adjustedCount = Double.POSITIVE_INFINITY;
+      else
+        adjustedCount = thisCount * binCountScaleFactor;
+      if (adjustedCount > highestAdjustedCount)
+        highestAdjustedCount = adjustedCount;
+    }
+    
+    // Calculate the proper log value
+    double returnValue;
+    
+    if (highestAdjustedCount == Double.POSITIVE_INFINITY)
+      returnValue = Double.POSITIVE_INFINITY;
+    else
+      returnValue = Math.log(1.0 + highestAdjustedCount);
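+    // (So an adjusted count of 0 maps to priority 0.0, the most favorable value, and larger adjusted
+    // counts map to monotonically larger, i.e. less favorable, priorities.)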
+
+    if (Logging.scheduling.isDebugEnabled())
+    {
+      StringBuilder sb = new StringBuilder();
+      int k = 0;
+      while (k < binNames.length)
+      {
+        sb.append(binNames[k++]).append(" ");
+      }
+      Logging.scheduling.debug("Document with bins ["+sb.toString()+"] given priority value "+new Double(returnValue).toString());
+    }
+
+    cachedValue = new Double(returnValue);
+
+    return returnValue;
+  }
+
+
+  /** Calculate the maximum fetch rate for a given set of bins for a given connection.
+  * This is used to adjust the final priority of a document.
+  */
+  protected static double[] calculateMaxFetchRates(String[] binNames, IRepositoryConnection connection)
+  {
+    ThrottleLimits tl = new ThrottleLimits(connection);
+    return tl.getMaximumRates(binNames);
+  }
+
+  /** This class represents the throttle limits out of the connection specification */
+  protected static class ThrottleLimits
+  {
+    protected List<ThrottleLimitSpec> specs = new ArrayList<ThrottleLimitSpec>();
+
+    public ThrottleLimits(IRepositoryConnection connection)
+    {
+      String[] throttles = connection.getThrottles();
+      int i = 0;
+      while (i < throttles.length)
+      {
+        try
+        {
+          specs.add(new ThrottleLimitSpec(throttles[i],(double)connection.getThrottleValue(throttles[i])));
+        }
+        catch (PatternSyntaxException e)
+        {
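+          // A throttle whose bin-match expression fails to compile is skipped; it contributes no rate limit.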
+        }
+        i++;
+      }
+    }
+
+    public double[] getMaximumRates(String[] binNames)
+    {
+      double[] rval = new double[binNames.length];
+      for (int j = 0 ; j < binNames.length ; j++)
+      {
+        String binName = binNames[j];
+        double maxRate = Double.POSITIVE_INFINITY;
+        for (ThrottleLimitSpec spec : specs)
+        {
+          Pattern p = spec.getRegexp();
+          Matcher m = p.matcher(binName);
+          if (m.find())
+          {
+            double rate = spec.getMaxRate();
+            // The direction of this inequality reflects the fact that the throttling is conservative when more rules are present.
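+            // For example, if two matching rules allow 0.01 and 0.005 fetches/ms, the slower 0.005 limit wins.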
+            if (rate < maxRate)
+              maxRate = rate;
+          }
+        }
+        rval[j] = maxRate;
+      }
+      return rval;
+    }
+
+  }
+
+  /** This is a class which describes an individual throttle limit, in fetches per millisecond. */
+  protected static class ThrottleLimitSpec
+  {
+    /** Regexp */
+    protected final Pattern regexp;
+    /** The fetch limit for all bins matching that regexp, in fetches per millisecond */
+    protected final double maxRate;
+
+    /** Constructor */
+    public ThrottleLimitSpec(String regexp, double maxRate)
+      throws PatternSyntaxException
+    {
+      this.regexp = Pattern.compile(regexp);
+      this.maxRate = maxRate;
+    }
+
+    /** Get the regexp. */
+    public Pattern getRegexp()
+    {
+      return regexp;
+    }
+
+    /** Get the max rate */
+    public double getMaxRate()
+    {
+      return maxRate;
+    }
+  }
+
+}
+
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/ResetManager.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/ResetManager.java
index bc09001..2a809f5 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/ResetManager.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/ResetManager.java
@@ -37,17 +37,21 @@
 {
   public static final String _rcsid = "@(#)$Id: ResetManager.java 988245 2010-08-23 18:39:35Z kwright $";
 
+  /** Process ID */
+  protected final String processID;
+  
   /** Boolean which describes whether an event requiring reset has occurred. */
-  protected boolean resetRequired = false;
+  protected volatile boolean resetRequired = false;
   /** This is the count of the threads that care about this resource. */
   protected int involvedThreadCount = 0;
   /** This is the number of threads that are waiting for the reset. */
-  protected int waitingThreads = 0;
+  protected volatile int waitingThreads = 0;
 
   /** Constructor.
   */
-  public ResetManager()
+  public ResetManager(String processID)
   {
+    this.processID = processID;
   }
 
   /** Register a thread with this reset manager.
@@ -59,9 +63,13 @@
 
   /** Note a resettable event.
   */
-  public synchronized void noteEvent()
+  public void noteEvent()
   {
-    resetRequired = true;
+    //System.out.println(this + " Event noted; involvedThreadCount = "+involvedThreadCount);
+    synchronized (this)
+    {
+      resetRequired = true;
+    }
     performWakeupLogic();
   }
 
@@ -77,12 +85,14 @@
   {
     if (resetRequired == false)
       return false;
+
     waitingThreads++;
 
     // Check if this is the "Prince Charming" thread, who will wake up
     // all the others.
     if (waitingThreads == involvedThreadCount)
     {
+      //System.out.println(this + " Prince Charming thread found!");
       // Kick off reset, and wake everyone up
       // There's a question of what to do if the reset fails.
       // Right now, my notion is that we throw the exception
@@ -90,7 +100,7 @@
       // is tracked.
       try
       {
-        performResetLogic(tc);
+        performResetLogic(tc, processID);
       }
       finally
       {
@@ -104,6 +114,7 @@
       return true;
     }
 
+    //System.out.println(this + " Waiting threads = "+waitingThreads+"; going to sleep");
     // Just go to sleep until kicked.
     wait();
     // If we were awakened, it's because reset was fired.
@@ -112,7 +123,7 @@
 
   /** Do the reset logic.
   */
-  protected abstract void performResetLogic(IThreadContext tc)
+  protected abstract void performResetLogic(IThreadContext tc, String processID)
     throws ManifoldCFException;
 
   /** Do the wakeup logic.
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/SeedingActivity.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/SeedingActivity.java
index e2e6dd0..f858b4f 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/SeedingActivity.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/SeedingActivity.java
@@ -36,33 +36,37 @@
   protected static final int MAX_COUNT = 100;
 
   // Variables
-  protected String connectionName;
-  protected IRepositoryConnectionManager connManager;
-  protected IJobManager jobManager;
-  protected QueueTracker queueTracker;
-  protected IRepositoryConnection connection;
-  protected IRepositoryConnector connector;
-  protected Long jobID;
-  protected String[] legalLinkTypes;
-  protected boolean overrideSchedule;
-  protected int hopcountMethod;
-  protected String[] documentHashList = new String[MAX_COUNT];
-  protected String[] documentList = new String[MAX_COUNT];
-  protected String[][] documentPrereqList = new String[MAX_COUNT][];
+  protected final String processID;
+  protected final String connectionName;
+  protected final IRepositoryConnectionManager connManager;
+  protected final IJobManager jobManager;
+  protected final IReprioritizationTracker rt;
+  protected final IRepositoryConnection connection;
+  protected final IRepositoryConnector connector;
+  protected final Long jobID;
+  protected final String[] legalLinkTypes;
+  protected final boolean overrideSchedule;
+  protected final int hopcountMethod;
+  
+  protected final String[] documentHashList = new String[MAX_COUNT];
+  protected final String[] documentList = new String[MAX_COUNT];
+  protected final String[][] documentPrereqList = new String[MAX_COUNT][];
   protected int documentCount = 0;
-  protected String[] remainingDocumentHashList = new String[MAX_COUNT];
+  protected final String[] remainingDocumentHashList = new String[MAX_COUNT];
   protected int remainingDocumentCount = 0;
 
   /** Constructor.
   */
-  public SeedingActivity(String connectionName, IRepositoryConnectionManager connManager, IJobManager jobManager,
-    QueueTracker queueTracker, IRepositoryConnection connection, IRepositoryConnector connector,
-    Long jobID, String[] legalLinkTypes, boolean overrideSchedule, int hopcountMethod)
+  public SeedingActivity(String connectionName, IRepositoryConnectionManager connManager,
+    IJobManager jobManager,
+    IReprioritizationTracker rt, IRepositoryConnection connection, IRepositoryConnector connector,
+    Long jobID, String[] legalLinkTypes, boolean overrideSchedule, int hopcountMethod, String processID)
   {
+    this.processID = processID;
     this.connectionName = connectionName;
     this.connManager = connManager;
     this.jobManager = jobManager;
-    this.queueTracker = queueTracker;
+    this.rt = rt;
     this.connection = connection;
     this.connector = connector;
     this.jobID = jobID;
@@ -139,7 +143,7 @@
     if (remainingDocumentCount == MAX_COUNT)
     {
       // Flush the remaining documents
-      jobManager.addRemainingDocumentsInitial(jobID,legalLinkTypes,remainingDocumentHashList,hopcountMethod);
+      jobManager.addRemainingDocumentsInitial(processID,jobID,legalLinkTypes,remainingDocumentHashList,hopcountMethod);
       remainingDocumentCount = 0;
     }
     remainingDocumentHashList[remainingDocumentCount++] = ManifoldCF.hash(documentIdentifier);
@@ -174,7 +178,7 @@
         documents[i] = remainingDocumentHashList[i];
         i++;
       }
-      jobManager.addRemainingDocumentsInitial(jobID,legalLinkTypes,documents,hopcountMethod);
+      jobManager.addRemainingDocumentsInitial(processID,jobID,legalLinkTypes,documents,hopcountMethod);
       remainingDocumentCount = 0;
     }
 
@@ -212,38 +216,22 @@
   {
     // First, prioritize the documents using the queue tracker
     long prioritizationTime = System.currentTimeMillis();
-    double[] docPriorities = new double[docIDHashes.length];
-    String[][] binNames = new String[docIDHashes.length][];
+    IPriorityCalculator[] docPriorities = new IPriorityCalculator[docIDHashes.length];
 
     int i = 0;
     while (i < docIDHashes.length)
     {
       // Calculate desired document priority based on current queuetracker status.
       String[] bins = connector.getBinNames(docIDs[i]);
-
-      binNames[i] = bins;
-      docPriorities[i] = queueTracker.calculatePriority(bins,connection);
-      if (Logging.scheduling.isDebugEnabled())
-        Logging.scheduling.debug("Giving document '"+docIDs[i]+"' priority "+new Double(docPriorities[i]).toString());
+      docPriorities[i] = new PriorityCalculator(rt,connection,bins);
 
       i++;
     }
 
-    boolean[] trackerNote = jobManager.addDocumentsInitial(jobID,legalLinkTypes,docIDHashes,docIDs,overrideSchedule,hopcountMethod,
+    jobManager.addDocumentsInitial(processID,
+      jobID,legalLinkTypes,docIDHashes,docIDs,overrideSchedule,hopcountMethod,
       prioritizationTime,docPriorities,prereqEventNames);
 
-    // Inform queuetracker about what we used and what we didn't
-    int j = 0;
-    while (j < trackerNote.length)
-    {
-      if (trackerNote[j] == false)
-      {
-        String[] bins = binNames[j];
-        queueTracker.notePriorityNotUsed(bins,connection,docPriorities[j]);
-      }
-      j++;
-    }
-
   }
 
   /** Check whether current job is still active.
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/SeedingResetManager.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/SeedingResetManager.java
new file mode 100644
index 0000000..b2723b2
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/SeedingResetManager.java
@@ -0,0 +1,57 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.crawler.system;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+import org.apache.manifoldcf.crawler.interfaces.*;
+import org.apache.manifoldcf.crawler.system.Logging;
+import java.util.*;
+import java.lang.reflect.*;
+
+/** Class which handles reset for seeding thread pool (of which there's
+* typically only one member).  The reset action here
+* is to move the status of jobs back from "seeding" to normal.
+*/
+public class SeedingResetManager extends ResetManager
+{
+
+  /** Constructor. */
+  public SeedingResetManager(String processID)
+  {
+    super(processID);
+  }
+
+  /** Reset */
+  @Override
+  protected void performResetLogic(IThreadContext tc, String processID)
+    throws ManifoldCFException
+  {
+    IJobManager jobManager = JobManagerFactory.make(tc);
+    jobManager.resetSeedingWorkerStatus(processID);
+  }
+    
+  /** Do the wakeup logic.
+  */
+  @Override
+  protected void performWakeupLogic()
+  {
+  }
+
+}

diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/SeedingThread.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/SeedingThread.java
index 6147c58..ea84e62 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/SeedingThread.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/SeedingThread.java
@@ -35,24 +35,25 @@
 {
   public static final String _rcsid = "@(#)$Id: SeedingThread.java 988245 2010-08-23 18:39:35Z kwright $";
 
-  /** Worker thread pool reset manager */
-  protected static SeedingResetManager resetManager = new SeedingResetManager();
-
   // Local data
-  protected QueueTracker queueTracker;
+  /** Seeding reset manager */
+  protected final SeedingResetManager resetManager;
+  /** Process ID */
+  protected final String processID;
 
   /** The number of documents that are added to the queue per transaction */
   protected final static int MAX_COUNT = 100;
 
   /** Constructor.
   */
-  public SeedingThread(QueueTracker queueTracker)
+  public SeedingThread(SeedingResetManager resetManager, String processID)
     throws ManifoldCFException
   {
     super();
     setName("Seeding thread");
     setDaemon(true);
-    this.queueTracker = queueTracker;
+    this.resetManager = resetManager;
+    this.processID = processID;
   }
 
   public void run()
@@ -65,12 +66,10 @@
       IThreadContext threadContext = ThreadContextFactory.make();
       IJobManager jobManager = JobManagerFactory.make(threadContext);
       IRepositoryConnectionManager connectionMgr = RepositoryConnectionManagerFactory.make(threadContext);
+      IReprioritizationTracker rt = ReprioritizationTrackerFactory.make(threadContext);
 
-      IDBInterface database = DBInterfaceFactory.make(threadContext,
-        ManifoldCF.getMasterDatabaseName(),
-        ManifoldCF.getMasterDatabaseUsername(),
-        ManifoldCF.getMasterDatabasePassword());
-
+      IRepositoryConnectorPool repositoryConnectorPool = RepositoryConnectorPoolFactory.make(threadContext);
+      
       String[] identifiers = new String[MAX_COUNT];
       // Loop
       while (true)
@@ -91,7 +90,7 @@
           Logging.threads.debug("Seeding thread woke up");
 
           // Grab active, adaptive jobs (and set their state to xxxSEEDING as a side effect)
-          JobSeedingRecord[] seedJobs = jobManager.getJobsReadyForSeeding(currentTime);
+          JobSeedingRecord[] seedJobs = jobManager.getJobsReadyForSeeding(processID,currentTime);
 
           // Process these jobs, and do the seeding.  The seeding is based on what came back
           // in the job start record for sync time.  If there's an interruption, we just go on
@@ -124,17 +123,12 @@
                 int hopcountMethod = jobDescription.getHopcountMode();
 
                 IRepositoryConnection connection = connectionMgr.load(jobDescription.getConnectionName());
-                IRepositoryConnector connector = RepositoryConnectorFactory.grab(threadContext,
-                  connection.getClassName(),
-                  connection.getConfigParams(),
-                  connection.getMaxConnections());
+                IRepositoryConnector connector = repositoryConnectorPool.grab(connection);
                 // Null will come back if the connector instance could not be obtained, so just skip in that case.
                 if (connector == null)
                   continue;
                 try
                 {
-
-
                   // Get the number of link types.
                   String[] legalLinkTypes = connector.getRelationshipTypes();
 
@@ -143,8 +137,9 @@
                   try
                   {
 
-                    SeedingActivity activity = new SeedingActivity(connection.getName(),connectionMgr,jobManager,queueTracker,
-                      connection,connector,jobID,legalLinkTypes,false,hopcountMethod);
+                    SeedingActivity activity = new SeedingActivity(connection.getName(),connectionMgr,
+                      jobManager,rt,
+                      connection,connector,jobID,legalLinkTypes,false,hopcountMethod,processID);
 
                     if (Logging.threads.isDebugEnabled())
                       Logging.threads.debug("Seeding thread: Getting seeds for job "+jobID.toString());
@@ -161,7 +156,7 @@
                   catch (ServiceInterruption e)
                   {
                     // Note the service interruption
-                    Logging.threads.error("Service interruption for job "+jobID,e);
+                    Logging.threads.warn("Service interruption for job "+jobID,e);
                     long retryInterval = e.getRetryTime() - currentTime;
                     if (retryInterval >= 0L && retryInterval < waitTime)
                       waitTime = retryInterval;
@@ -171,7 +166,7 @@
                 }
                 finally
                 {
-                  RepositoryConnectorFactory.release(connector);
+                  repositoryConnectorPool.release(connection,connector);
                 }
 
 
@@ -287,33 +282,4 @@
     }
   }
 
-  /** Class which handles reset for seeding thread pool (of which there's
-  * typically only one member).  The reset action here
-  * is to move the status of jobs back from "seeding" to normal.
-  */
-  protected static class SeedingResetManager extends ResetManager
-  {
-
-    /** Constructor. */
-    public SeedingResetManager()
-    {
-      super();
-    }
-
-    /** Reset */
-    protected void performResetLogic(IThreadContext tc)
-      throws ManifoldCFException
-    {
-      IJobManager jobManager = JobManagerFactory.make(tc);
-      jobManager.resetSeedingWorkerStatus();
-    }
-    
-    /** Do the wakeup logic.
-    */
-    protected void performWakeupLogic()
-    {
-    }
-
-  }
-
 }
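
For orientation, a minimal sketch (not part of this patch) of how the now-standalone SeedingResetManager and the revised SeedingThread constructor might be wired together by whatever code creates the crawler threads. Only the two constructors and resetSeedingWorkerStatus(processID) come from the patch; the class name SeedingWiringSketch, the startSeeding method, and where the processID value comes from are purely illustrative assumptions:

import org.apache.manifoldcf.core.interfaces.ManifoldCFException;
import org.apache.manifoldcf.crawler.system.SeedingResetManager;
import org.apache.manifoldcf.crawler.system.SeedingThread;

public class SeedingWiringSketch
{
  public static void startSeeding(String processID)
    throws ManifoldCFException
  {
    // One reset manager per process; it carries the process ID so that its reset
    // logic (resetSeedingWorkerStatus) only touches work owned by this process.
    SeedingResetManager seedingResetManager = new SeedingResetManager(processID);
    // The thread no longer holds a static reset manager or a QueueTracker; the
    // reset manager and the process ID are injected explicitly.
    SeedingThread seedingThread = new SeedingThread(seedingResetManager, processID);
    seedingThread.start();
  }
}
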
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/SetPriorityThread.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/SetPriorityThread.java
index b5797bb..85078c1 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/SetPriorityThread.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/SetPriorityThread.java
@@ -38,22 +38,21 @@
   public static final String _rcsid = "@(#)$Id: SetPriorityThread.java 988245 2010-08-23 18:39:35Z kwright $";
 
   // Local data
-  // This is the queue tracker object.
-  protected QueueTracker queueTracker;
-  // This is the number of documents per cycle
-  protected int cycleCount;
-  // The blocking documents object
-  protected BlockingDocuments blockingDocuments;
+  /** This is the number of documents per cycle */
+  protected final int cycleCount;
+  /** The blocking documents object */
+  protected final BlockingDocuments blockingDocuments;
+  /** Process ID */
+  protected final String processID;
 
   /** Constructor.
-  *@param qt is the queue tracker object.
   */
-  public SetPriorityThread(QueueTracker qt, int workerThreadCount, BlockingDocuments blockingDocuments)
+  public SetPriorityThread(int workerThreadCount, BlockingDocuments blockingDocuments, String processID)
     throws ManifoldCFException
   {
     super();
-    this.queueTracker = qt;
     this.blockingDocuments = blockingDocuments;
+    this.processID = processID;
     cycleCount = workerThreadCount * 10;
     setName("Set priority thread");
     setDaemon(true);
@@ -69,7 +68,7 @@
       IThreadContext threadContext = ThreadContextFactory.make();
       IRepositoryConnectionManager mgr = RepositoryConnectionManagerFactory.make(threadContext);
       IJobManager jobManager = JobManagerFactory.make(threadContext);
-
+      
       Logging.threads.debug("Set priority thread coming up");
 
       // Job description map (local) - designed to improve performance.
@@ -126,7 +125,8 @@
             DocumentDescription desc = blockingDocuments.getBlockingDocument();
             if (desc != null)
             {
-              ManifoldCF.writeDocumentPriorities(threadContext,mgr,jobManager,new DocumentDescription[]{desc},connectionMap,jobDescriptionMap,queueTracker,currentTime);
+              ManifoldCF.writeDocumentPriorities(threadContext,
+                new DocumentDescription[]{desc},connectionMap,jobDescriptionMap,currentTime);
               processedCount++;
               continue;
             }
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/StartDeleteThread.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/StartDeleteThread.java
index b70888c..05b49cb 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/StartDeleteThread.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/StartDeleteThread.java
@@ -33,14 +33,18 @@
   public static final String _rcsid = "@(#)$Id$";
 
   /** Delete startup reset manager */
-  protected static DeleteStartupResetManager resetManager = new DeleteStartupResetManager();
-
+  protected final DeleteStartupResetManager resetManager;
+  /** Process ID */
+  protected final String processID;
+  
   /** Constructor.
   */
-  public StartDeleteThread()
+  public StartDeleteThread(DeleteStartupResetManager resetManager, String processID)
     throws ManifoldCFException
   {
     super();
+    this.resetManager = resetManager;
+    this.processID = processID;
     setName("Delete startup thread");
     setDaemon(true);
   }
@@ -80,7 +84,7 @@
 
           // See if there are any starting jobs.
           // Note: Since this following call changes the job state, we must be careful to reset it on any kind of failure.
-          JobDeleteRecord[] deleteJobs = jobManager.getJobsReadyForDelete();
+          JobDeleteRecord[] deleteJobs = jobManager.getJobsReadyForDelete(processID);
           try
           {
 
@@ -208,33 +212,4 @@
     }
   }
 
-  /** Class which handles reset for seeding thread pool (of which there's
-  * typically only one member).  The reset action here
-  * is to move the status of jobs back from "seeding" to normal.
-  */
-  protected static class DeleteStartupResetManager extends ResetManager
-  {
-
-    /** Constructor. */
-    public DeleteStartupResetManager()
-    {
-      super();
-    }
-
-    /** Reset */
-    protected void performResetLogic(IThreadContext tc)
-      throws ManifoldCFException
-    {
-      IJobManager jobManager = JobManagerFactory.make(tc);
-      jobManager.resetDeleteStartupWorkerStatus();
-    }
-    
-    /** Do the wakeup logic.
-    */
-    protected void performWakeupLogic()
-    {
-    }
-
-  }
-
 }
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/StartupResetManager.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/StartupResetManager.java
new file mode 100644
index 0000000..55e0e12
--- /dev/null
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/StartupResetManager.java
@@ -0,0 +1,57 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.crawler.system;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+import org.apache.manifoldcf.crawler.interfaces.*;
+import org.apache.manifoldcf.crawler.system.Logging;
+import java.util.*;
+import java.lang.reflect.*;
+
+/** Class which handles reset for the startup thread pool (of which there's
+* typically only one member).  The reset action here
+* is to move the status of jobs back from "starting up" to normal.
+*/
+public class StartupResetManager extends ResetManager
+{
+
+  /** Constructor. */
+  public StartupResetManager(String processID)
+  {
+    super(processID);
+  }
+
+  /** Reset */
+  @Override
+  protected void performResetLogic(IThreadContext tc, String processID)
+    throws ManifoldCFException
+  {
+    IJobManager jobManager = JobManagerFactory.make(tc);
+    jobManager.resetStartupWorkerStatus(processID);
+  }
+    
+  /** Do the wakeup logic.
+  */
+  @Override
+  protected void performWakeupLogic()
+  {
+  }
+
+}
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/StartupThread.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/StartupThread.java
index 2fbfa16..7b86288 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/StartupThread.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/StartupThread.java
@@ -32,21 +32,22 @@
 {
   public static final String _rcsid = "@(#)$Id: StartupThread.java 988245 2010-08-23 18:39:35Z kwright $";
 
-  /** Worker thread pool reset manager */
-  protected static StartupResetManager resetManager = new StartupResetManager();
-
   // Local data
-  protected QueueTracker queueTracker;
-
+  /** Process ID */
+  protected final String processID;
+  /** Reset manager */
+  protected final StartupResetManager resetManager;
+  
   /** Constructor.
   */
-  public StartupThread(QueueTracker queueTracker)
+  public StartupThread(StartupResetManager resetManager, String processID)
     throws ManifoldCFException
   {
     super();
     setName("Startup thread");
     setDaemon(true);
-    this.queueTracker = queueTracker;
+    this.resetManager = resetManager;
+    this.processID = processID;
   }
 
   public void run()
@@ -59,12 +60,10 @@
       IThreadContext threadContext = ThreadContextFactory.make();
       IJobManager jobManager = JobManagerFactory.make(threadContext);
       IRepositoryConnectionManager connectionMgr = RepositoryConnectionManagerFactory.make(threadContext);
+      IReprioritizationTracker rt = ReprioritizationTrackerFactory.make(threadContext);
 
-      IDBInterface database = DBInterfaceFactory.make(threadContext,
-        ManifoldCF.getMasterDatabaseName(),
-        ManifoldCF.getMasterDatabaseUsername(),
-        ManifoldCF.getMasterDatabasePassword());
-
+      IRepositoryConnectorPool repositoryConnectorPool = RepositoryConnectorPoolFactory.make(threadContext);
+      
       // Loop
       while (true)
       {
@@ -84,7 +83,7 @@
 
           // See if there are any starting jobs.
           // Note: Since this following call changes the job state, we must be careful to reset it on any kind of failure.
-          JobStartRecord[] startupJobs = jobManager.getJobsReadyForStartup();
+          JobStartRecord[] startupJobs = jobManager.getJobsReadyForStartup(processID);
           try
           {
 
@@ -115,22 +114,19 @@
                 int hopcountMethod = jobDescription.getHopcountMode();
 
                 IRepositoryConnection connection = connectionMgr.load(jobDescription.getConnectionName());
-                IRepositoryConnector connector = RepositoryConnectorFactory.grab(threadContext,
-                  connection.getClassName(),
-                  connection.getConfigParams(),
-                  connection.getMaxConnections());
+                IRepositoryConnector connector = repositoryConnectorPool.grab(connection);
 
                 // If the attempt to grab a connector instance failed, don't start the job, of course.
                 if (connector == null)
                   continue;
 
-                // Only now record the fact that we are trying to start the job.
-                connectionMgr.recordHistory(jobDescription.getConnectionName(),
-                  null,connectionMgr.ACTIVITY_JOBSTART,null,
-                  jobID.toString()+"("+jobDescription.getDescription()+")",null,null,null);
-
                 try
                 {
+                  // Only now record the fact that we are trying to start the job.
+                  connectionMgr.recordHistory(jobDescription.getConnectionName(),
+                    null,connectionMgr.ACTIVITY_JOBSTART,null,
+                    jobID.toString()+"("+jobDescription.getDescription()+")",null,null,null);
+
                   int model = connector.getConnectorModel();
                   // Get the number of link types.
                   String[] legalLinkTypes = connector.getRelationshipTypes();
@@ -147,8 +143,9 @@
 
                   try
                   {
-                    SeedingActivity activity = new SeedingActivity(connection.getName(),connectionMgr,jobManager,queueTracker,
-                      connection,connector,jobID,legalLinkTypes,true,hopcountMethod);
+                    SeedingActivity activity = new SeedingActivity(connection.getName(),connectionMgr,
+                      jobManager,rt,
+                      connection,connector,jobID,legalLinkTypes,true,hopcountMethod,processID);
 
                     if (Logging.threads.isDebugEnabled())
                       Logging.threads.debug("Adding initial seed documents for job "+jobID.toString()+"...");
@@ -172,7 +169,7 @@
                 }
                 finally
                 {
-                  RepositoryConnectorFactory.release(connector);
+                  repositoryConnectorPool.release(connection,connector);
                 }
 
                 // Start this job!
@@ -279,33 +276,4 @@
     }
   }
 
-  /** Class which handles reset for seeding thread pool (of which there's
-  * typically only one member).  The reset action here
-  * is to move the status of jobs back from "seeding" to normal.
-  */
-  protected static class StartupResetManager extends ResetManager
-  {
-
-    /** Constructor. */
-    public StartupResetManager()
-    {
-      super();
-    }
-
-    /** Reset */
-    protected void performResetLogic(IThreadContext tc)
-      throws ManifoldCFException
-    {
-      IJobManager jobManager = JobManagerFactory.make(tc);
-      jobManager.resetStartupWorkerStatus();
-    }
-    
-    /** Do the wakeup logic.
-    */
-    protected void performWakeupLogic()
-    {
-    }
-
-  }
-
 }
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/StufferThread.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/StufferThread.java
index 107f807..185b723 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/StufferThread.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/StufferThread.java
@@ -32,29 +32,38 @@
 {
   public static final String _rcsid = "@(#)$Id: StufferThread.java 988245 2010-08-23 18:39:35Z kwright $";
 
+  /** Name of the write lock which allows us to keep track of the last time ANY stuffer thread stuffed data */
+  protected final static String stufferThreadLockName = "_STUFFERTHREAD_LOCK";
+  /** Name of the datum which contains the last time, in milliseconds since epoch, that any stuffer thread in the cluster
+      successfully fired. */
+  protected final static String stufferThreadLastTimeDatumName = "_STUFFERTHREAD_LASTTIME";
+  
   // Local data
-  // This is a reference to the static main document queue
-  protected DocumentQueue documentQueue;
+  
+  /** This is a reference to the static main document queue */
+  protected final DocumentQueue documentQueue;
   /** Worker thread pool reset manager */
-  protected WorkerResetManager resetManager;
-  // This is the lowest number of entries we want ot stuff at any one time
-  protected int lowestStuffAmt;
-  // This is the number of entries we want to stuff at any one time.
+  protected final WorkerResetManager resetManager;
+  /** This is the lowest number of entries we want to stuff at any one time */
+  protected final int lowestStuffAmt;
+  /** This is the number of entries we want to stuff at any one time. */
   protected int stuffAmt;
-  // This is the low water mark for attempting to restuff
-  protected int lowWaterMark;
-  // This is the queue tracker object.
-  protected QueueTracker queueTracker;
-  // Blocking documents object.
-  protected BlockingDocuments blockingDocuments;
-
+  /** This is the low water mark for attempting to restuff */
+  protected final int lowWaterMark;
+  /** This is the queue tracker object. */
+  protected final QueueTracker queueTracker;
+  /** Blocking documents object. */
+  protected final BlockingDocuments blockingDocuments;
+  /** Process ID */
+  protected final String processID;
+  
   /** Constructor.
   *@param documentQueue is the document queue we'll be stuffing.
   *@param n represents the number of threads that will be processing queued stuff, NOT the
   * number of documents to be done at once!
   */
   public StufferThread(DocumentQueue documentQueue, int n, WorkerResetManager resetManager, QueueTracker qt,
-    BlockingDocuments blockingDocuments, float lowWaterFactor, float stuffSizeFactor)
+    BlockingDocuments blockingDocuments, float lowWaterFactor, float stuffSizeFactor, String processID)
     throws ManifoldCFException
   {
     super();
@@ -65,6 +74,7 @@
     this.resetManager = resetManager;
     this.queueTracker = qt;
     this.blockingDocuments = blockingDocuments;
+    this.processID = processID;
     setName("Stuffer thread");
     setDaemon(true);
     // The priority of this thread is higher than most others.  We want stuffing to proceed even if the machine
@@ -83,13 +93,13 @@
       IRepositoryConnectionManager mgr = RepositoryConnectionManagerFactory.make(threadContext);
       IIncrementalIngester ingester = IncrementalIngesterFactory.make(threadContext);
       IJobManager jobManager = JobManagerFactory.make(threadContext);
+      ILockManager lockManager = LockManagerFactory.make(threadContext);
+      IReprioritizationTracker rt = ReprioritizationTrackerFactory.make(threadContext);
 
+      IRepositoryConnectorPool repositoryConnectorPool = RepositoryConnectorPoolFactory.make(threadContext);
+      
       Logging.threads.debug("Stuffer thread: Low water mark is "+Integer.toString(lowWaterMark)+"; amount per stuffing is "+Integer.toString(stuffAmt));
 
-      // This is used to adjust the number of records returned for jobs
-      // that are throttled.
-      long lastTime = System.currentTimeMillis();
-
       // Hashmap keyed by jobid and containing ArrayLists.
       // This way we can guarantee priority will do the right thing, because the
       // priority is per-job.  We CANNOT guarantee anything about scheduling order, however,
@@ -151,21 +161,40 @@
           // What we want to do is load enough documents to completely fill n queued document sets.
           // The number n passed in here thus cannot be used in a query to limit the number of returned
           // results.  Instead, it must be factored into the limit portion of the query.
+          
+          // Note well: the stuffer code stuffs based on intervals, so it is perfectly OK to
+          // compute the interval for this request AND update the global "last time" even
+          // before actually firing off the query.  The worst that can happen is that, if the query
+          // fails, the interval will be "lost", and thus fewer documents will be stuffed than
+          // otherwise could be.
+          long stuffingStartTime;
+          long stuffingEndTime;
+          lockManager.enterWriteLock(stufferThreadLockName);
+          try
+          {
+            stuffingStartTime = readLastTime(lockManager);
+            stuffingEndTime = System.currentTimeMillis();
+            // Set the last time to be the current time
+            writeLastTime(lockManager,stuffingEndTime);
+          }
+          finally
+          {
+            lockManager.leaveWriteLock(stufferThreadLockName);
+          }
+
+          lastQueueStart = System.currentTimeMillis();
           DepthStatistics depthStatistics = new DepthStatistics();
-          long currentTime = System.currentTimeMillis();
-          lastQueueStart = currentTime;
-          DocumentDescription[] descs = jobManager.getNextDocuments(stuffAmt,currentTime,currentTime-lastTime,
+          DocumentDescription[] descs = jobManager.getNextDocuments(processID,stuffAmt,stuffingEndTime,stuffingEndTime-stuffingStartTime,
             blockingDocuments,queueTracker.getCurrentStatistics(),depthStatistics);
           lastQueueEnd = System.currentTimeMillis();
           lastQueueFullResults = (descs.length == stuffAmt);
+          
+          // Assess what we've done.
+          rt.assessMinimumDepth(depthStatistics.getBins());
 
           if (Thread.currentThread().isInterrupted())
             throw new ManifoldCFException("Interrupted",ManifoldCFException.INTERRUPTED);
 
-          queueTracker.assessMinimumDepth(depthStatistics.getBins());
-
-          // Set the last time to be the current time
-          lastTime = currentTime;
           if (Logging.threads.isDebugEnabled())
           {
             Logging.threads.debug("Stuffer thread: Found "+Integer.toString(descs.length)+" documents to queue");
@@ -245,10 +274,7 @@
             try
             {
               // Grab a connector handle
-              IRepositoryConnector connector = RepositoryConnectorFactory.grab(threadContext,
-                connection.getClassName(),
-                connection.getConfigParams(),
-                connection.getMaxConnections());
+              IRepositoryConnector connector = repositoryConnectorPool.grab(connection);
               if (connector == null)
               {
                 maxDocuments = 1;
@@ -265,7 +291,7 @@
                 }
                 finally
                 {
-                  RepositoryConnectorFactory.release(connector);
+                  repositoryConnectorPool.release(connection,connector);
                 }
               }
             }
@@ -403,4 +429,36 @@
     }
   }
 
+  protected static long readLastTime(ILockManager lockManager)
+    throws ManifoldCFException
+  {
+    byte[] data = lockManager.readData(stufferThreadLastTimeDatumName);
+    if (data == null || data.length != 8)
+      return System.currentTimeMillis();
+    long value = (((long)data[0]) & 0xffL) +
+      ((((long)data[1]) << 8) & 0xff00L) +
+      ((((long)data[2]) << 16) & 0xff0000L) +
+      ((((long)data[3]) << 24) & 0xff000000L) +
+      ((((long)data[4]) << 32) & 0xff00000000L) +
+      ((((long)data[5]) << 40) & 0xff0000000000L) +
+      ((((long)data[6]) << 48) & 0xff000000000000L) +
+      ((((long)data[7]) << 56) & 0xff00000000000000L);
+    return value;
+  }
+
+  protected static void writeLastTime(ILockManager lockManager, long lastTime)
+    throws ManifoldCFException
+  {
+    byte[] data = new byte[8];
+    data[0] = (byte)(lastTime & 0xffL);
+    data[1] = (byte)((lastTime >> 8) & 0xffL);
+    data[2] = (byte)((lastTime >> 16) & 0xffL);
+    data[3] = (byte)((lastTime >> 24) & 0xffL);
+    data[4] = (byte)((lastTime >> 32) & 0xffL);
+    data[5] = (byte)((lastTime >> 40) & 0xffL);
+    data[6] = (byte)((lastTime >> 48) & 0xffL);
+    data[7] = (byte)((lastTime >> 56) & 0xffL);
+    lockManager.writeData(stufferThreadLastTimeDatumName,data);
+  }
+  
 }
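
The byte shifting in readLastTime/writeLastTime above is simply a little-endian encoding of a long into the 8-byte datum stored through ILockManager.readData/writeData. As an aside for clarity (not what the patch uses), the same encoding could be expressed with java.nio.ByteBuffer; the class name LastTimeCodecSketch is illustrative only:

import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class LastTimeCodecSketch
{
  /** Equivalent of writeLastTime's manual packing: least-significant byte first. */
  public static byte[] encode(long lastTime)
  {
    return ByteBuffer.allocate(8).order(ByteOrder.LITTLE_ENDIAN).putLong(lastTime).array();
  }

  /** Equivalent of readLastTime's manual unpacking. */
  public static long decode(byte[] data)
  {
    if (data == null || data.length != 8)
      return System.currentTimeMillis();   // same fallback the patch uses when the datum is absent
    return ByteBuffer.wrap(data).order(ByteOrder.LITTLE_ENDIAN).getLong();
  }
}
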
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/WorkerResetManager.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/WorkerResetManager.java
index 8db49c8..6b5d1d7 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/WorkerResetManager.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/WorkerResetManager.java
@@ -31,30 +31,32 @@
   public static final String _rcsid = "@(#)$Id: WorkerResetManager.java 988245 2010-08-23 18:39:35Z kwright $";
 
   /** The document queue */
-  protected DocumentQueue dq;
+  protected final DocumentQueue dq;
   /** The expiration queue */
-  protected DocumentCleanupQueue eq;
+  protected final DocumentCleanupQueue eq;
 
   /** Constructor. */
-  public WorkerResetManager(DocumentQueue dq, DocumentCleanupQueue eq)
+  public WorkerResetManager(DocumentQueue dq, DocumentCleanupQueue eq, String processID)
   {
-    super();
+    super(processID);
     this.dq = dq;
     this.eq = eq;
   }
 
   /** Reset */
-  protected void performResetLogic(IThreadContext tc)
+  @Override
+  protected void performResetLogic(IThreadContext tc, String processID)
     throws ManifoldCFException
   {
     IJobManager jobManager = JobManagerFactory.make(tc);
-    jobManager.resetDocumentWorkerStatus();
+    jobManager.resetDocumentWorkerStatus(processID);
     dq.clear();
     eq.clear();
   }
   
   /** Do the wakeup logic.
   */
+  @Override
   protected void performWakeupLogic()
   {
     // Wake up all sleeping worker threads
diff --git a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/WorkerThread.java b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/WorkerThread.java
index c845c05..267ab9f 100644
--- a/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/WorkerThread.java
+++ b/framework/pull-agent/src/main/java/org/apache/manifoldcf/crawler/system/WorkerThread.java
@@ -35,18 +35,21 @@
 
 
   // Local data
-  protected String id;
-  // This is a reference to the static main document queue
-  protected DocumentQueue documentQueue;
+  /** Thread id */
+  protected final String id;
+  /** This is a reference to the static main document queue */
+  protected final DocumentQueue documentQueue;
   /** Worker thread pool reset manager */
-  protected WorkerResetManager resetManager;
+  protected final WorkerResetManager resetManager;
   /** Queue tracker */
-  protected QueueTracker queueTracker;
+  protected final QueueTracker queueTracker;
+  /** Process ID */
+  protected final String processID;
 
   /** Constructor.
   *@param id is the worker thread id.
   */
-  public WorkerThread(String id, DocumentQueue documentQueue, WorkerResetManager resetManager, QueueTracker queueTracker)
+  public WorkerThread(String id, DocumentQueue documentQueue, WorkerResetManager resetManager, QueueTracker queueTracker, String processID)
     throws ManifoldCFException
   {
     super();
@@ -54,6 +57,7 @@
     this.documentQueue = documentQueue;
     this.resetManager = resetManager;
     this.queueTracker = queueTracker;
+    this.processID = processID;
     setName("Worker thread '"+id+"'");
     setDaemon(true);
 
@@ -70,9 +74,13 @@
       IThreadContext threadContext = ThreadContextFactory.make();
       IIncrementalIngester ingester = IncrementalIngesterFactory.make(threadContext);
       IJobManager jobManager = JobManagerFactory.make(threadContext);
+      IBinManager binManager = BinManagerFactory.make(threadContext);
       IRepositoryConnectionManager connMgr = RepositoryConnectionManagerFactory.make(threadContext);
       IOutputConnectionManager outputMgr = OutputConnectionManagerFactory.make(threadContext);
+      IReprioritizationTracker rt = ReprioritizationTrackerFactory.make(threadContext);
 
+      IRepositoryConnectorPool repositoryConnectorPool = RepositoryConnectorPoolFactory.make(threadContext);
+      
       List<DocumentToProcess> fetchList = new ArrayList<DocumentToProcess>();
       Map<String,String> versionMap = new HashMap<String,String>();
       List<QueuedDocument> finishList = new ArrayList<QueuedDocument>();
@@ -251,10 +259,7 @@
               IRepositoryConnector connector = null;
               if (activeDocuments.size() > 0 || hopcountremoveList.size() > 0)
               {
-                connector = RepositoryConnectorFactory.grab(threadContext,
-                  connection.getClassName(),
-                  connection.getConfigParams(),
-                  connection.getMaxConnections());
+                connector = repositoryConnectorPool.grab(connection);
 
                 // If we wind up with a null here, it means that a document got queued for a connector which is now gone.
                 // Basically, what we want to do in that case is to treat this kind of like a service interruption - the document
@@ -306,7 +311,7 @@
                     String outputVersion = ingester.getOutputDescription(outputName,outputSpec);
                       
                     HashMap abortSet = new HashMap();
-                    VersionActivity versionActivity = new VersionActivity(connectionName,connMgr,jobManager,job,ingester,abortSet,outputVersion);
+                    VersionActivity versionActivity = new VersionActivity(processID,connectionName,connMgr,jobManager,job,ingester,abortSet,outputVersion);
 
                     String aclAuthority = connection.getACLAuthority();
                     boolean isDefaultAuthority = (aclAuthority == null || aclAuthority.length() == 0);
@@ -522,7 +527,8 @@
                         }
 
                         // First, make the things we will need for all subsequent steps.
-                        ProcessActivity activity = new ProcessActivity(threadContext,queueTracker,jobManager,ingester,
+                        ProcessActivity activity = new ProcessActivity(processID,
+                          threadContext,rt,jobManager,ingester,
                           currentTime,job,connection,connector,connMgr,legalLinkTypes,ingestLogger,abortSet,outputVersion,newParameterVersion);
                         try
                         {
@@ -564,7 +570,8 @@
                               // "Finish" the documents (removing unneeded carrydown info, etc.)
                               DocumentDescription[] requeueCandidates = jobManager.finishDocuments(job.getID(),legalLinkTypes,processIDHashes,job.getHopcountMode());
 
-                              ManifoldCF.requeueDocumentsDueToCarrydown(jobManager,requeueCandidates,connector,connection,queueTracker,currentTime);
+                              ManifoldCF.requeueDocumentsDueToCarrydown(jobManager,
+                                requeueCandidates,connector,connection,rt,currentTime);
 
                               if (Logging.threads.isDebugEnabled())
                                 Logging.threads.debug("Worker thread done processing "+Integer.toString(processIDs.length)+" documents");
@@ -814,17 +821,17 @@
                   // Now, handle the delete list
                   processDeleteLists(outputName,connector,connection,jobManager,
                     deleteList,ingester,
-                    job.getID(),legalLinkTypes,ingestLogger,job.getHopcountMode(),queueTracker,currentTime);
+                    job.getID(),legalLinkTypes,ingestLogger,job.getHopcountMode(),rt,currentTime);
 
                   // Handle hopcount removal
                   processHopcountRemovalLists(outputName,connector,connection,jobManager,
                     hopcountremoveList,ingester,
-                    job.getID(),legalLinkTypes,ingestLogger,job.getHopcountMode(),queueTracker,currentTime);
+                    job.getID(),legalLinkTypes,ingestLogger,job.getHopcountMode(),rt,currentTime);
 
                 }
                 finally
                 {
-                  RepositoryConnectorFactory.release(connector);
+                  repositoryConnectorPool.release(connection,connector);
                 }
               
               }
@@ -970,17 +977,18 @@
   * documents from the index should they be already present.
   */
   protected static void processHopcountRemovalLists(String outputName, IRepositoryConnector connector,
-    IRepositoryConnection connection, IJobManager jobManager, List<QueuedDocument> hopcountremoveList,
+    IRepositoryConnection connection, IJobManager jobManager,
+    List<QueuedDocument> hopcountremoveList,
     IIncrementalIngester ingester,
     Long jobID, String[] legalLinkTypes, OutputActivity ingestLogger,
-    int hopcountMethod, QueueTracker queueTracker, long currentTime)
+    int hopcountMethod, IReprioritizationTracker rt, long currentTime)
     throws ManifoldCFException
   {
     // Remove from index
     hopcountremoveList = removeFromIndex(outputName,connection.getName(),jobManager,hopcountremoveList,ingester,ingestLogger);
     // Mark as 'hopcountremoved' in the job queue
     processJobQueueHopcountRemovals(hopcountremoveList,connector,connection,
-      jobManager,jobID,legalLinkTypes,hopcountMethod,queueTracker,currentTime);
+      jobManager,jobID,legalLinkTypes,hopcountMethod,rt,currentTime);
   }
 
   /** Clear specified documents out of the job queue and from the appliance.
@@ -991,17 +999,18 @@
   *@param ingesterDeleteList is a list of document id's to delete.
   */
   protected static void processDeleteLists(String outputName, IRepositoryConnector connector,
-    IRepositoryConnection connection, IJobManager jobManager, List<QueuedDocument> deleteList,
+    IRepositoryConnection connection, IJobManager jobManager,
+    List<QueuedDocument> deleteList,
     IIncrementalIngester ingester,
     Long jobID, String[] legalLinkTypes, OutputActivity ingestLogger,
-    int hopcountMethod, QueueTracker queueTracker, long currentTime)
+    int hopcountMethod, IReprioritizationTracker rt, long currentTime)
     throws ManifoldCFException
   {
     // Remove from index
     deleteList = removeFromIndex(outputName,connection.getName(),jobManager,deleteList,ingester,ingestLogger);
     // Delete from the job queue
     processJobQueueDeletions(deleteList,connector,connection,
-      jobManager,jobID,legalLinkTypes,hopcountMethod,queueTracker,currentTime);
+      jobManager,jobID,legalLinkTypes,hopcountMethod,rt,currentTime);
   }
 
   /** Remove a specified set of documents from the index.
@@ -1081,7 +1090,7 @@
   */
   protected static void processJobQueueDeletions(List<QueuedDocument> jobmanagerDeleteList,
     IRepositoryConnector connector, IRepositoryConnection connection, IJobManager jobManager,
-    Long jobID, String[] legalLinkTypes, int hopcountMethod, QueueTracker queueTracker, long currentTime)
+    Long jobID, String[] legalLinkTypes, int hopcountMethod, IReprioritizationTracker rt, long currentTime)
     throws ManifoldCFException
   {
     // Now, do the document queue cleanup for deletions.
@@ -1098,7 +1107,8 @@
       DocumentDescription[] requeueCandidates = jobManager.markDocumentDeletedMultiple(jobID,legalLinkTypes,deleteDescriptions,hopcountMethod);
 
       // Requeue those documents that had carrydown data modifications
-      ManifoldCF.requeueDocumentsDueToCarrydown(jobManager,requeueCandidates,connector,connection,queueTracker,currentTime);
+      ManifoldCF.requeueDocumentsDueToCarrydown(jobManager,
+        requeueCandidates,connector,connection,rt,currentTime);
 
       // Mark all these as done
       for (int i = 0; i < jobmanagerDeleteList.size(); i++)
@@ -1113,7 +1123,7 @@
   */
   protected static void processJobQueueHopcountRemovals(List<QueuedDocument> jobmanagerRemovalList,
     IRepositoryConnector connector, IRepositoryConnection connection, IJobManager jobManager,
-    Long jobID, String[] legalLinkTypes, int hopcountMethod, QueueTracker queueTracker, long currentTime)
+    Long jobID, String[] legalLinkTypes, int hopcountMethod, IReprioritizationTracker rt, long currentTime)
     throws ManifoldCFException
   {
     // Now, do the document queue cleanup for deletions.
@@ -1130,7 +1140,8 @@
       DocumentDescription[] requeueCandidates = jobManager.markDocumentHopcountRemovalMultiple(jobID,legalLinkTypes,removalDescriptions,hopcountMethod);
 
       // Requeue those documents that had carrydown data modifications
-      ManifoldCF.requeueDocumentsDueToCarrydown(jobManager,requeueCandidates,connector,connection,queueTracker,currentTime);
+      ManifoldCF.requeueDocumentsDueToCarrydown(jobManager,
+        requeueCandidates,connector,connection,rt,currentTime);
 
       // Mark all these as done
       for (int i = 0; i < jobmanagerRemovalList.size(); i++)
@@ -1227,21 +1238,22 @@
   */
   protected static class VersionActivity implements IVersionActivity
   {
-    protected String connectionName;
-    protected IRepositoryConnectionManager connMgr;
-    protected IJobManager jobManager;
-    protected Long jobID;
-    protected IJobDescription job;
-    protected IIncrementalIngester ingester;
-    protected HashMap abortSet;
-    protected String outputVersion;
+    protected final String processID;
+    protected final String connectionName;
+    protected final IRepositoryConnectionManager connMgr;
+    protected final IJobManager jobManager;
+    protected final IJobDescription job;
+    protected final IIncrementalIngester ingester;
+    protected final HashMap abortSet;
+    protected final String outputVersion;
 
     /** Constructor.
     */
-    public VersionActivity(String connectionName, IRepositoryConnectionManager connMgr,
+    public VersionActivity(String processID, String connectionName, IRepositoryConnectionManager connMgr,
       IJobManager jobManager, IJobDescription job, IIncrementalIngester ingester, HashMap abortSet,
       String outputVersion)
     {
+      this.processID = processID;
       this.connectionName = connectionName;
       this.connMgr = connMgr;
       this.jobManager = jobManager;
@@ -1360,7 +1372,7 @@
     public boolean beginEventSequence(String eventName)
       throws ManifoldCFException
     {
-      return jobManager.beginEventSequence(eventName);
+      return jobManager.beginEventSequence(processID,eventName);
     }
 
     /** Complete an event sequence.
@@ -1413,7 +1425,7 @@
     */
     public String createJobSpecificString(String simpleString)
     {
-      return ManifoldCF.createJobSpecificString(jobID,simpleString);
+      return ManifoldCF.createJobSpecificString(job.getID(),simpleString);
     }
 
   }
@@ -1423,6 +1435,7 @@
   protected static class ProcessActivity implements IProcessActivity
   {
     // Member variables
+    protected final String processID;
     protected final IThreadContext threadContext;
     protected final IJobManager jobManager;
     protected final IIncrementalIngester ingester;
@@ -1433,7 +1446,7 @@
     protected final IRepositoryConnectionManager connMgr;
     protected final String[] legalLinkTypes;
     protected final OutputActivity ingestLogger;
-    protected final QueueTracker queueTracker;
+    protected final IReprioritizationTracker rt;
     protected final HashMap abortSet;
     protected final String outputVersion;
     protected final String parameterVersion;
@@ -1454,12 +1467,16 @@
     *@param jobManager is the job manager
     *@param ingester is the ingester
     */
-    public ProcessActivity(IThreadContext threadContext, QueueTracker queueTracker, IJobManager jobManager, IIncrementalIngester ingester,
-      long currentTime, IJobDescription job, IRepositoryConnection connection, IRepositoryConnector connector, IRepositoryConnectionManager connMgr,
-      String[] legalLinkTypes, OutputActivity ingestLogger, HashMap abortSet, String outputVersion, String parameterVersion)
+    public ProcessActivity(String processID, IThreadContext threadContext,
+      IReprioritizationTracker rt, IJobManager jobManager,
+      IIncrementalIngester ingester, long currentTime,
+      IJobDescription job, IRepositoryConnection connection, IRepositoryConnector connector,
+      IRepositoryConnectionManager connMgr, String[] legalLinkTypes, OutputActivity ingestLogger,
+      HashMap abortSet, String outputVersion, String parameterVersion)
     {
+      this.processID = processID;
       this.threadContext = threadContext;
-      this.queueTracker = queueTracker;
+      this.rt = rt;
       this.jobManager = jobManager;
       this.ingester = ingester;
       this.currentTime = currentTime;
@@ -1981,16 +1998,15 @@
 
         String[] docidHashes = new String[set.size()];
         String[] docids = new String[set.size()];
-        double[] priorities = new double[set.size()];
-        String[][] binNames = new String[set.size()][];
+        IPriorityCalculator[] priorities = new IPriorityCalculator[set.size()];
         String[][] dataNames = new String[docids.length][];
         Object[][][] dataValues = new Object[docids.length][][];
         String[][] eventNames = new String[docids.length][];
 
         long currentTime = System.currentTimeMillis();
 
-        int j = 0;
-        while (j < docidHashes.length)
+        rt.clearPreloadRequests();
+        for (int j = 0; j < docidHashes.length; j++)
         {
           DocumentReference dr = (DocumentReference)set.get(j);
           docidHashes[j] = dr.getLocalIdentifierHash();
@@ -2001,34 +2017,17 @@
 
           // Calculate desired document priority based on current queuetracker status.
           String[] bins = ManifoldCF.calculateBins(connector,dr.getLocalIdentifier());
-
-
-          binNames[j] = bins;
-          priorities[j] = queueTracker.calculatePriority(bins,connection);
-          if (Logging.scheduling.isDebugEnabled())
-            Logging.scheduling.debug("Assigning '"+docids[j]+"' priority "+new Double(priorities[j]).toString());
-
-          // No longer used; the functionality is folded atomically into calculatePriority above:
-          //queueTracker.notePrioritySet(currentTime,job.getID(),bins,connection);
-
-          j++;
+          PriorityCalculator p = new PriorityCalculator(rt,connection,bins);
+          priorities[j] = p;
+          p.makePreloadRequest();
         }
+        rt.preloadBinValues();
 
-        boolean[] trackerNote = jobManager.addDocuments(job.getID(),legalLinkTypes,docidHashes,docids,db.getParentIdentifierHash(),db.getLinkType(),job.getHopcountMode(),
+        jobManager.addDocuments(processID,
+          job.getID(),legalLinkTypes,docidHashes,docids,db.getParentIdentifierHash(),db.getLinkType(),job.getHopcountMode(),
           dataNames,dataValues,currentTime,priorities,eventNames);
-
-        // Inform queuetracker about what we used and what we didn't
-        j = 0;
-        while (j < trackerNote.length)
-        {
-          if (trackerNote[j] == false)
-          {
-            String[] bins = binNames[j];
-            queueTracker.notePriorityNotUsed(bins,connection,priorities[j]);
-          }
-          j++;
-        }
-
+        
+        rt.clearPreloadedValues();
       }
 
       discard();
@@ -2058,7 +2057,7 @@
     public boolean beginEventSequence(String eventName)
       throws ManifoldCFException
     {
-      return jobManager.beginEventSequence(eventName);
+      return jobManager.beginEventSequence(processID,eventName);
     }
 
     /** Complete an event sequence.
diff --git a/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/BaseDerby.java b/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/BaseDerby.java
index c3607d6..a4084f8 100644
--- a/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/BaseDerby.java
+++ b/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/BaseDerby.java
@@ -116,15 +116,17 @@
     throws Exception
   {
     super.initializeSystem();
-    org.apache.manifoldcf.authorities.system.ManifoldCF.localInitialize();
-    org.apache.manifoldcf.crawler.system.ManifoldCF.localInitialize();
+    IThreadContext tc = ThreadContextFactory.make();
+    org.apache.manifoldcf.authorities.system.ManifoldCF.localInitialize(tc);
+    org.apache.manifoldcf.crawler.system.ManifoldCF.localInitialize(tc);
   }
   
   protected void cleanupSystem()
     throws Exception
   {
-    org.apache.manifoldcf.authorities.system.ManifoldCF.localCleanup();
-    org.apache.manifoldcf.crawler.system.ManifoldCF.localCleanup();
+    IThreadContext tc = ThreadContextFactory.make();
+    org.apache.manifoldcf.authorities.system.ManifoldCF.localCleanup(tc);
+    org.apache.manifoldcf.crawler.system.ManifoldCF.localCleanup(tc);
     super.cleanupSystem();
   }
 
diff --git a/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/BaseHSQLDB.java b/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/BaseHSQLDB.java
index d009c86..a1bab4b 100644
--- a/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/BaseHSQLDB.java
+++ b/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/BaseHSQLDB.java
@@ -114,15 +114,17 @@
     throws Exception
   {
     super.initializeSystem();
-    org.apache.manifoldcf.authorities.system.ManifoldCF.localInitialize();
-    org.apache.manifoldcf.crawler.system.ManifoldCF.localInitialize();
+    IThreadContext tc = ThreadContextFactory.make();
+    org.apache.manifoldcf.authorities.system.ManifoldCF.localInitialize(tc);
+    org.apache.manifoldcf.crawler.system.ManifoldCF.localInitialize(tc);
   }
   
   protected void cleanupSystem()
     throws Exception
   {
-    org.apache.manifoldcf.authorities.system.ManifoldCF.localCleanup();
-    org.apache.manifoldcf.crawler.system.ManifoldCF.localCleanup();
+    IThreadContext tc = ThreadContextFactory.make();
+    org.apache.manifoldcf.authorities.system.ManifoldCF.localCleanup(tc);
+    org.apache.manifoldcf.crawler.system.ManifoldCF.localCleanup(tc);
     super.cleanupSystem();
   }
 
diff --git a/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/BaseHSQLDBext.java b/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/BaseHSQLDBext.java
index d101937..759a2f1 100644
--- a/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/BaseHSQLDBext.java
+++ b/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/BaseHSQLDBext.java
@@ -114,15 +114,17 @@
     throws Exception
   {
     super.initializeSystem();
-    org.apache.manifoldcf.authorities.system.ManifoldCF.localInitialize();
-    org.apache.manifoldcf.crawler.system.ManifoldCF.localInitialize();
+    IThreadContext tc = ThreadContextFactory.make();
+    org.apache.manifoldcf.authorities.system.ManifoldCF.localInitialize(tc);
+    org.apache.manifoldcf.crawler.system.ManifoldCF.localInitialize(tc);
   }
   
   protected void cleanupSystem()
     throws Exception
   {
-    org.apache.manifoldcf.authorities.system.ManifoldCF.localCleanup();
-    org.apache.manifoldcf.crawler.system.ManifoldCF.localCleanup();
+    IThreadContext tc = ThreadContextFactory.make();
+    org.apache.manifoldcf.authorities.system.ManifoldCF.localCleanup(tc);
+    org.apache.manifoldcf.crawler.system.ManifoldCF.localCleanup(tc);
     super.cleanupSystem();
   }
 
diff --git a/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/BaseITDerby.java b/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/BaseITDerby.java
index 76b66f6..82cd2f2 100644
--- a/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/BaseITDerby.java
+++ b/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/BaseITDerby.java
@@ -100,6 +100,7 @@
   /** Construct a command url.
   */
   protected String makeAPIURL(String command)
+    throws Exception
   {
     return mcfInstance.makeAPIURL(command);
   }
diff --git a/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/BaseITHSQLDB.java b/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/BaseITHSQLDB.java
index b83db22..3c98e5f 100644
--- a/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/BaseITHSQLDB.java
+++ b/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/BaseITHSQLDB.java
@@ -44,6 +44,12 @@
     mcfInstance = new ManifoldCFInstance(singleWar);
   }
   
+  public BaseITHSQLDB(boolean singleWar, boolean webapps)
+  {
+    super();
+    mcfInstance = new ManifoldCFInstance(singleWar, webapps);
+  }
+  
   // Basic job support
   
   protected void waitJobInactiveNative(IJobManager jobManager, Long jobID, long maxTime)
@@ -101,6 +107,7 @@
   /** Construct a command url.
   */
   protected String makeAPIURL(String command)
+    throws Exception
   {
     return mcfInstance.makeAPIURL(command);
   }
diff --git a/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/BaseITMySQL.java b/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/BaseITMySQL.java
index 398f690..e784017 100644
--- a/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/BaseITMySQL.java
+++ b/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/BaseITMySQL.java
@@ -101,6 +101,7 @@
   /** Construct a command url.
   */
   protected String makeAPIURL(String command)
+    throws Exception
   {
     return mcfInstance.makeAPIURL(command);
   }
diff --git a/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/BaseITPostgresql.java b/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/BaseITPostgresql.java
index a579526..984f919 100644
--- a/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/BaseITPostgresql.java
+++ b/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/BaseITPostgresql.java
@@ -101,6 +101,7 @@
   /** Construct a command url.
   */
   protected String makeAPIURL(String command)
+    throws Exception
   {
     return mcfInstance.makeAPIURL(command);
   }
diff --git a/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/BaseMySQL.java b/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/BaseMySQL.java
index 97e4eec..cba246c 100644
--- a/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/BaseMySQL.java
+++ b/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/BaseMySQL.java
@@ -114,15 +114,17 @@
     throws Exception
   {
     super.initializeSystem();
-    org.apache.manifoldcf.authorities.system.ManifoldCF.localInitialize();
-    org.apache.manifoldcf.crawler.system.ManifoldCF.localInitialize();
+    IThreadContext tc = ThreadContextFactory.make();
+    org.apache.manifoldcf.authorities.system.ManifoldCF.localInitialize(tc);
+    org.apache.manifoldcf.crawler.system.ManifoldCF.localInitialize(tc);
   }
   
   protected void cleanupSystem()
     throws Exception
   {
-    org.apache.manifoldcf.authorities.system.ManifoldCF.localCleanup();
-    org.apache.manifoldcf.crawler.system.ManifoldCF.localCleanup();
+    IThreadContext tc = ThreadContextFactory.make();
+    org.apache.manifoldcf.authorities.system.ManifoldCF.localCleanup(tc);
+    org.apache.manifoldcf.crawler.system.ManifoldCF.localCleanup(tc);
     super.cleanupSystem();
   }
 
diff --git a/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/BasePostgresql.java b/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/BasePostgresql.java
index fa5424b..ddd03e3 100644
--- a/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/BasePostgresql.java
+++ b/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/BasePostgresql.java
@@ -114,15 +114,17 @@
     throws Exception
   {
     super.initializeSystem();
-    org.apache.manifoldcf.authorities.system.ManifoldCF.localInitialize();
-    org.apache.manifoldcf.crawler.system.ManifoldCF.localInitialize();
+    IThreadContext tc = ThreadContextFactory.make();
+    org.apache.manifoldcf.authorities.system.ManifoldCF.localInitialize(tc);
+    org.apache.manifoldcf.crawler.system.ManifoldCF.localInitialize(tc);
   }
   
   protected void cleanupSystem()
     throws Exception
   {
-    org.apache.manifoldcf.authorities.system.ManifoldCF.localCleanup();
-    org.apache.manifoldcf.crawler.system.ManifoldCF.localCleanup();
+    IThreadContext tc = ThreadContextFactory.make();
+    org.apache.manifoldcf.authorities.system.ManifoldCF.localCleanup(tc);
+    org.apache.manifoldcf.crawler.system.ManifoldCF.localCleanup(tc);
     super.cleanupSystem();
   }
 
diff --git a/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/ManifoldCFInstance.java b/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/ManifoldCFInstance.java
index 34352a4..91e9ee7 100644
--- a/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/ManifoldCFInstance.java
+++ b/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/ManifoldCFInstance.java
@@ -22,6 +22,7 @@
 import org.apache.manifoldcf.agents.interfaces.*;
 import org.apache.manifoldcf.crawler.interfaces.*;
 import org.apache.manifoldcf.crawler.system.ManifoldCF;
+import org.apache.manifoldcf.agents.system.AgentsDaemon;
 
 import java.io.*;
 import java.util.*;
@@ -56,30 +57,58 @@
 /** Tests that run the "agents daemon" should be derived from this */
 public class ManifoldCFInstance
 {
-  public static final String agentShutdownSignal = "agent-process";
-  
-  protected boolean singleWar = false;
-  protected int testPort = 8346;
+  protected final boolean webapps;
+  protected final boolean singleWar;
+  protected final int testPort;
+  protected final String processID;
   
   protected DaemonThread daemonThread = null;
   protected Server server = null;
 
   public ManifoldCFInstance()
   {
+    this("", 8346, false, true);
   }
   
-  public ManifoldCFInstance(boolean singleWar)
+  public ManifoldCFInstance(String processID)
   {
-    this(8346,singleWar);
-  }
-  
-  public ManifoldCFInstance(int testPort)
-  {
-    this(testPort,false);
+    this(processID, 8346, false, true);
   }
 
-  public ManifoldCFInstance(int testPort, boolean singleWar)
+  public ManifoldCFInstance(boolean singleWar)
   {
+    this("", 8346, singleWar, true);
+  }
+  
+  public ManifoldCFInstance(String processID, boolean singleWar)
+  {
+    this(processID, 8346, singleWar, true);
+  }
+
+  public ManifoldCFInstance(boolean singleWar, boolean webapps)
+  {
+    this("", 8346, singleWar, webapps);
+  }
+
+  public ManifoldCFInstance(String processID, boolean singleWar, boolean webapps)
+  {
+    this(processID, 8346, singleWar, webapps);
+  }
+  
+  public ManifoldCFInstance(String processID, int testPort)
+  {
+    this(processID, testPort, false, true);
+  }
+
+  public ManifoldCFInstance(String processID, int testPort, boolean singleWar)
+  {
+    this(processID, testPort, singleWar, true);
+  }
+
+  public ManifoldCFInstance(String processID, int testPort, boolean singleWar, boolean webapps)
+  {
+    this.processID = processID;
+    this.webapps = webapps;
     this.testPort = testPort;
     this.singleWar = singleWar;
   }
@@ -279,11 +308,17 @@
   /** Construct a command url.
   */
   public String makeAPIURL(String command)
+    throws Exception
   {
-    if (singleWar)
-      return "http://localhost:"+Integer.toString(testPort)+"/mcf/api/json/"+command;
+    if (webapps)
+    {
+      if (singleWar)
+        return "http://localhost:"+Integer.toString(testPort)+"/mcf/api/json/"+command;
+      else
+        return "http://localhost:"+Integer.toString(testPort)+"/mcf-api-service/json/"+command;
+    }
     else
-      return "http://localhost:"+Integer.toString(testPort)+"/mcf-api-service/json/"+command;
+      throw new Exception("No API servlet running");
   }
 
   public static String convertToString(HttpResponse httpResponse)
@@ -497,61 +532,71 @@
   public void start()
     throws Exception
   {
-    // Start jetty
-    server = new Server( testPort );    
-    server.setStopAtShutdown( true );
-    // Initialize the servlets
     ContextHandlerCollection contexts = new ContextHandlerCollection();
-    server.setHandler(contexts);
+    if (webapps)
+    {
+      // Start jetty
+      server = new Server( testPort );    
+      server.setStopAtShutdown( true );
+      // Initialize the servlets
+      server.setHandler(contexts);
+    }
 
     if (singleWar)
     {
-      // Start the single combined war
-      String combinedWarPath = "../../framework/build/war-proprietary/mcf-combined-service.war";
-      if (System.getProperty("combinedWarPath") != null)
-        combinedWarPath = System.getProperty("combinedWarPath");
-      
-      // Initialize the servlet
-      WebAppContext lcfCombined = new WebAppContext(combinedWarPath,"/mcf");
-      // This will cause jetty to ignore all of the framework and jdbc jars in the war, which is what we want.
-      lcfCombined.setParentLoaderPriority(true);
-      contexts.addHandler(lcfCombined);
-      server.start();
+      if (webapps)
+      {
+        // Start the single combined war
+        String combinedWarPath = "../../framework/build/war-proprietary/mcf-combined-service.war";
+        if (System.getProperty("combinedWarPath") != null)
+          combinedWarPath = System.getProperty("combinedWarPath");
+        
+        // Initialize the servlet
+        WebAppContext lcfCombined = new WebAppContext(combinedWarPath,"/mcf");
+        // This will cause jetty to ignore all of the framework and jdbc jars in the war, which is what we want.
+        lcfCombined.setParentLoaderPriority(true);
+        contexts.addHandler(lcfCombined);
+        server.start();
+      }
+      else
+        throw new Exception("Can't run singleWar without webapps");
     }
     else
     {
-      String crawlerWarPath = "../../framework/build/war-proprietary/mcf-crawler-ui.war";
-      String authorityserviceWarPath = "../../framework/build/war-proprietary/mcf-authority-service.war";
-      String apiWarPath = "../../framework/build/war-proprietary/mcf-api-service.war";
+      if (webapps)
+      {
+        String crawlerWarPath = "../../framework/build/war-proprietary/mcf-crawler-ui.war";
+        String authorityserviceWarPath = "../../framework/build/war-proprietary/mcf-authority-service.war";
+        String apiWarPath = "../../framework/build/war-proprietary/mcf-api-service.war";
 
-      if (System.getProperty("crawlerWarPath") != null)
-          crawlerWarPath = System.getProperty("crawlerWarPath");
-      if (System.getProperty("authorityserviceWarPath") != null)
-          authorityserviceWarPath = System.getProperty("authorityserviceWarPath");
-      if (System.getProperty("apiWarPath") != null)
-          apiWarPath = System.getProperty("apiWarPath");
+        if (System.getProperty("crawlerWarPath") != null)
+            crawlerWarPath = System.getProperty("crawlerWarPath");
+        if (System.getProperty("authorityserviceWarPath") != null)
+            authorityserviceWarPath = System.getProperty("authorityserviceWarPath");
+        if (System.getProperty("apiWarPath") != null)
+            apiWarPath = System.getProperty("apiWarPath");
 
-      // Initialize the servlets
-      WebAppContext lcfCrawlerUI = new WebAppContext(crawlerWarPath,"/mcf-crawler-ui");
-      // This will cause jetty to ignore all of the framework and jdbc jars in the war, which is what we want.
-      lcfCrawlerUI.setParentLoaderPriority(true);
-      contexts.addHandler(lcfCrawlerUI);
-      WebAppContext lcfAuthorityService = new WebAppContext(authorityserviceWarPath,"/mcf-authority-service");
-      // This will cause jetty to ignore all of the framework and jdbc jars in the war, which is what we want.
-      lcfAuthorityService.setParentLoaderPriority(true);
-      contexts.addHandler(lcfAuthorityService);
-      WebAppContext lcfApi = new WebAppContext(apiWarPath,"/mcf-api-service");
-      lcfApi.setParentLoaderPriority(true);
-      contexts.addHandler(lcfApi);
-      server.start();
+        // Initialize the servlets
+        WebAppContext lcfCrawlerUI = new WebAppContext(crawlerWarPath,"/mcf-crawler-ui");
+        // This will cause jetty to ignore all of the framework and jdbc jars in the war, which is what we want.
+        lcfCrawlerUI.setParentLoaderPriority(true);
+        contexts.addHandler(lcfCrawlerUI);
+        WebAppContext lcfAuthorityService = new WebAppContext(authorityserviceWarPath,"/mcf-authority-service");
+        // This will cause jetty to ignore all of the framework and jdbc jars in the war, which is what we want.
+        lcfAuthorityService.setParentLoaderPriority(true);
+        contexts.addHandler(lcfAuthorityService);
+        WebAppContext lcfApi = new WebAppContext(apiWarPath,"/mcf-api-service");
+        lcfApi.setParentLoaderPriority(true);
+        contexts.addHandler(lcfApi);
+        server.start();
+      }
 
       // If all worked, then we can start the daemon.
       // Clear the agents shutdown signal.
       IThreadContext tc = ThreadContextFactory.make();
-      ILockManager lockManager = LockManagerFactory.make(tc);
-      lockManager.clearGlobalFlag(agentShutdownSignal);
+      AgentsDaemon.clearAgentsShutdownSignal(tc);
 
-      daemonThread = new DaemonThread();
+      daemonThread = new DaemonThread(processID);
       daemonThread.start();
     }
   }
@@ -641,31 +686,48 @@
         }
       }
 
-      if (!singleWar)
+      try
       {
-        // Shut down daemon
-        ILockManager lockManager = LockManagerFactory.make(tc);
-        lockManager.setGlobalFlag(agentShutdownSignal);
-        
-        // Wait for daemon thread to exit.
-        while (true)
-        {
-          if (daemonThread.isAlive())
-          {
-            Thread.sleep(1000L);
-            continue;
-          }
-          break;
-        }
-
-        Exception e = daemonThread.getDaemonException();
-        if (e != null)
+        stopNoCleanup();
+      }
+      catch (Exception e)
+      {
+        if (currentException == null)
           currentException = e;
       }
+    }
+  }
+  
+  public void stopNoCleanup()
+    throws Exception
+  {
+    if (daemonThread != null)
+    {
+      Exception currentException = null;
+      
+      // Shut down this daemon only.  Don't assert the global agents shutdown signal,
+      // since that would stop every instance; interrupt the daemon thread below instead.
+      //AgentsDaemon.assertAgentsShutdownSignal(tc);
         
+      // Wait for daemon thread to exit.
+      while (true)
+      {
+        daemonThread.interrupt();
+        if (daemonThread.isAlive())
+        {
+          Thread.sleep(1000L);
+          continue;
+        }
+        break;
+      }
+
+      Exception e = daemonThread.getDaemonException();
+      if (e != null && !(e instanceof InterruptedException))
+        currentException = e;
+
       if (currentException != null)
         throw currentException;
     }
   }
   
   public void unload()
@@ -683,10 +745,12 @@
   
   protected static class DaemonThread extends Thread
   {
+    protected final String processID;
     protected Exception daemonException = null;
     
-    public DaemonThread()
+    public DaemonThread(String processID)
     {
+      this.processID = processID;
       setName("Daemon thread");
     }
     
@@ -695,27 +759,10 @@
       IThreadContext tc = ThreadContextFactory.make();
       // Now, start the server, and then wait for the shutdown signal.  On shutdown, we have to actually do the cleanup,
       // because the JVM isn't going away.
+      AgentsDaemon ad = new AgentsDaemon(processID);
       try
       {
-        ILockManager lockManager = LockManagerFactory.make(tc);
-        while (true)
-        {
-          // Any shutdown signal yet?
-          if (lockManager.checkGlobalFlag(agentShutdownSignal))
-            break;
-            
-          // Start whatever agents need to be started
-          ManifoldCF.startAgents(tc);
-
-          try
-          {
-            ManifoldCF.sleep(5000);
-          }
-          catch (InterruptedException e)
-          {
-            break;
-          }
-        }
+        ad.runAgents(tc);
       }
       catch (ManifoldCFException e)
       {
@@ -725,7 +772,7 @@
       {
         try
         {
-          ManifoldCF.stopAgents(tc);
+          ad.stopAgents(tc);
         }
         catch (ManifoldCFException e)
         {
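
A minimal sketch of the per-process agents lifecycle that DaemonThread now delegates to, using only the AgentsDaemon calls visible in the hunks above (the process ID "A" and the class name are illustrative, not part of the patch):

    import org.apache.manifoldcf.core.interfaces.*;
    import org.apache.manifoldcf.agents.system.AgentsDaemon;

    public class AgentsLifecycleSketch
    {
      public static void runOnce() throws Exception
      {
        IThreadContext tc = ThreadContextFactory.make();
        // Clear any stale shutdown flag left by a previous run before starting.
        AgentsDaemon.clearAgentsShutdownSignal(tc);
        // One AgentsDaemon per named process; the ID distinguishes cluster members.
        AgentsDaemon ad = new AgentsDaemon("A");
        try
        {
          // Runs this process's agents until interrupted or signalled to shut down.
          ad.runAgents(tc);
        }
        finally
        {
          // Stop whatever agents this process started, even on an abnormal exit.
          ad.stopAgents(tc);
        }
      }
    }
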
diff --git a/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/NullOutputConnector.java b/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/NullOutputConnector.java
new file mode 100644
index 0000000..9a3fe23
--- /dev/null
+++ b/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/NullOutputConnector.java
@@ -0,0 +1,40 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.crawler.tests;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+import org.apache.manifoldcf.crawler.interfaces.*;
+import org.apache.manifoldcf.crawler.system.ManifoldCF;
+
+import java.io.*;
+import java.util.*;
+
+/** Output connector class to be used by scheduling tests.  This connector never expects to see any documents at all,
+* which is fine because it rejects them all. */
+public class NullOutputConnector extends org.apache.manifoldcf.agents.output.BaseOutputConnector
+{
+
+  public NullOutputConnector()
+  {
+    super();
+  }
+  
+
+}
diff --git a/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/SchedulerHSQLDBTest.java b/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/SchedulerHSQLDBTest.java
new file mode 100644
index 0000000..dd08710
--- /dev/null
+++ b/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/SchedulerHSQLDBTest.java
@@ -0,0 +1,140 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.crawler.tests;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+import org.apache.manifoldcf.agents.system.ManifoldCF;
+
+import java.io.*;
+import java.util.*;
+import org.junit.*;
+
+/** This is a test of the scheduler.  If the test succeeds, it is because
+* the scheduler has distributed requests evenly across all bins. */
+public class SchedulerHSQLDBTest extends ConnectorBaseHSQLDB
+{
+  protected final ManifoldCFInstance mcfInstance1;
+  protected final ManifoldCFInstance mcfInstance2;
+  protected SchedulerTester tester;
+
+  public SchedulerHSQLDBTest()
+  {
+    super();
+    mcfInstance1 = new ManifoldCFInstance("A",false,false);
+    mcfInstance2 = new ManifoldCFInstance("B",false,false);
+    tester = new SchedulerTester(mcfInstance1,mcfInstance2);
+  }
+  
+  @Override
+  protected String[] getConnectorClasses()
+  {
+    return new String[]{"org.apache.manifoldcf.crawler.tests.SchedulingRepositoryConnector"};
+  }
+  
+  @Override
+  protected String[] getConnectorNames()
+  {
+    return new String[]{"SchedulingConnector"};
+  }
+
+  @Override
+  protected String[] getOutputClasses()
+  {
+    return new String[]{"org.apache.manifoldcf.crawler.tests.NullOutputConnector"};
+  }
+  
+  @Override
+  protected String[] getOutputNames()
+  {
+    return new String[]{"NullOutput"};
+  }
+
+  @Test
+  public void schedulingTestRun()
+    throws Exception
+  {
+    tester.executeTest();
+  }
+  
+  @Before
+  public void setUp()
+    throws Exception
+  {
+    initializeSystem();
+    try
+    {
+      localReset();
+    }
+    catch (Exception e)
+    {
+      System.out.println("Warning: Preclean failed: "+e.getMessage());
+    }
+    try
+    {
+      localSetUp();
+    }
+    catch (Exception e)
+    {
+      e.printStackTrace();
+      throw e;
+    }
+  }
+  
+  @After
+  public void cleanUp()
+    throws Exception
+  {
+    Exception currentException = null;
+    // Last, shut down the web applications.
+    // If this is done too soon it closes the database before the rest of the cleanup happens.
+    try
+    {
+      mcfInstance1.unload();
+    }
+    catch (Exception e)
+    {
+      if (currentException == null)
+        currentException = e;
+    }
+    try
+    {
+      mcfInstance2.unload();
+    }
+    catch (Exception e)
+    {
+      if (currentException == null)
+        currentException = e;
+    }
+    try
+    {
+      localCleanUp();
+    }
+    catch (Exception e)
+    {
+      e.printStackTrace();
+      throw e;
+    }
+    if (currentException != null)
+      throw currentException;
+    cleanupSystem();
+  }
+  
+
+}
diff --git a/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/SchedulerTester.java b/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/SchedulerTester.java
new file mode 100644
index 0000000..87d8d79
--- /dev/null
+++ b/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/SchedulerTester.java
@@ -0,0 +1,112 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.crawler.tests;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+import org.apache.manifoldcf.crawler.interfaces.*;
+import org.apache.manifoldcf.crawler.system.ManifoldCF;
+
+import java.io.*;
+import java.util.*;
+
+/** Core of the scheduler test: runs a throttled crawl across two agents processes, stops the
+* first process partway through, and checks that the second finishes the job with the expected
+* document count and timing. */
+public class SchedulerTester
+{
+  protected final ManifoldCFInstance instance1;
+  protected final ManifoldCFInstance instance2;
+  
+  public SchedulerTester(ManifoldCFInstance instance1, ManifoldCFInstance instance2)
+  {
+    this.instance1 = instance1;
+    this.instance2 = instance2;
+  }
+  
+  public void executeTest()
+    throws Exception
+  {
+    instance1.start();
+    
+    // Hey, we were able to install the file system connector etc.
+    // Now, create a local test job and run it.
+    IThreadContext tc = ThreadContextFactory.make();
+      
+    // Create a basic file system connection, and save it.
+    IRepositoryConnectionManager mgr = RepositoryConnectionManagerFactory.make(tc);
+    IRepositoryConnection conn = mgr.create();
+    conn.setName("SchedulerTest Connection");
+    conn.setDescription("SchedulerTest Connection");
+    conn.setClassName("org.apache.manifoldcf.crawler.tests.SchedulingRepositoryConnector");
+    conn.setMaxConnections(100);
+    // Now, save
+    mgr.save(conn);
+      
+    // Create a basic null output connection, and save it.
+    IOutputConnectionManager outputMgr = OutputConnectionManagerFactory.make(tc);
+    IOutputConnection outputConn = outputMgr.create();
+    outputConn.setName("Null Connection");
+    outputConn.setDescription("Null Connection");
+    outputConn.setClassName("org.apache.manifoldcf.crawler.tests.NullOutputConnector");
+    outputConn.setMaxConnections(100);
+    // Now, save
+    outputMgr.save(outputConn);
+
+    // Create a job.
+    IJobManager jobManager = JobManagerFactory.make(tc);
+    IJobDescription job = jobManager.createJob();
+    job.setDescription("Test Job");
+    job.setConnectionName("SchedulerTest Connection");
+    job.setOutputConnectionName("Null Connection");
+    job.setType(job.TYPE_SPECIFIED);
+    job.setStartMethod(job.START_DISABLE);
+    job.setHopcountMode(job.HOPCOUNT_ACCURATE);
+      
+    // Save the job.
+    jobManager.save(job);
+
+    // Now, start the job, and wait until it is running.
+    jobManager.manualStart(job.getID());
+    instance1.waitJobRunningNative(jobManager,job.getID(),30000L);
+    
+    // Start the second instance.
+    instance2.start();
+    // Wait long enough for the stuffing etc to take place once
+    Thread.sleep(5000L);
+    // Terminate instance1.  Instance2 should keep going.
+    instance1.stopNoCleanup();
+    
+    // Wait for the job to become inactive.  The time should be at least long enough to handle
+    // 200 documents per bin at 500 ms each (about 100 seconds), but not much more; allow 150 seconds.
+    long startTime = System.currentTimeMillis();
+    instance2.waitJobInactiveNative(jobManager,job.getID(),150000L);
+    long endTime = System.currentTimeMillis();
+    if (jobManager.getStatus(job.getID()).getDocumentsProcessed() != 10+10*200)
+      throw new Exception("Expected 2010 documents, saw "+jobManager.getStatus(job.getID()).getDocumentsProcessed());
+    if (endTime-startTime < 96000L)
+      throw new Exception("Job finished too quickly; throttling clearly failed");
+    System.out.println("Crawl took "+(endTime-startTime)+" milliseconds");
+    
+    // Now, delete the job.
+    jobManager.deleteJob(job.getID());
+    instance2.waitJobDeletedNative(jobManager,job.getID(),120000L);
+
+    // Shut down instance2
+    instance2.stop();
+  }
+}
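
A note on the numbers checked above, using the connector defaults introduced in this patch: 10 seeds each expand into 200 documents, so the job processes 10 + 10*200 = 2010 documents in total; each bin is throttled to one fetch per 500 ms, so draining a bin's 200 documents takes at least 200 * 500 ms = 100 seconds, which is why the test treats anything under 96 seconds as a throttling failure and allows up to 150 seconds for completion.
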
diff --git a/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/SchedulingRepositoryConnector.java b/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/SchedulingRepositoryConnector.java
new file mode 100644
index 0000000..76ead4b
--- /dev/null
+++ b/framework/pull-agent/src/test/java/org/apache/manifoldcf/crawler/tests/SchedulingRepositoryConnector.java
@@ -0,0 +1,143 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.crawler.tests;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+import org.apache.manifoldcf.crawler.interfaces.*;
+import org.apache.manifoldcf.crawler.system.ManifoldCF;
+
+import java.io.*;
+import java.util.*;
+
+/** Connector class to be used by scheduling tests */
+public class SchedulingRepositoryConnector extends org.apache.manifoldcf.crawler.connectors.BaseRepositoryConnector
+{
+  // Throttling: the next time a fetch is allowed, per bin.
+  protected static final Map<String,Long> nextFetchTime = new HashMap<String,Long>();
+
+  public SchedulingRepositoryConnector()
+  {
+  }
+
+  @Override
+  public String[] getBinNames(String documentIdentifier)
+  {
+    int index = documentIdentifier.indexOf("/");
+    return new String[]{documentIdentifier.substring(0,index)};
+  }
+
+  @Override
+  public void addSeedDocuments(ISeedingActivity activities, DocumentSpecification spec,
+    long startTime, long endTime, int jobMode)
+    throws ManifoldCFException, ServiceInterruption
+  {
+    // A seed per domain
+    String numberDomainsString = params.getParameter("numberDomains");
+    if (numberDomainsString == null)
+      numberDomainsString = "10";
+    int numberDomains = Integer.parseInt(numberDomainsString);
+    for (int i = 0; i < numberDomains; i++)
+    {
+      activities.addSeedDocument(Integer.toString(i)+"/",null);
+    }
+    System.out.println("Seeding completed at "+System.currentTimeMillis());
+  }
+  
+  @Override
+  public String[] getDocumentVersions(String[] documentIdentifiers, String[] oldVersions, IVersionActivity activities,
+    DocumentSpecification spec, int jobMode, boolean usesDefaultAuthority)
+    throws ManifoldCFException, ServiceInterruption
+  {
+    String[] rval = new String[documentIdentifiers.length];
+    for (int i = 0; i < rval.length; i++)
+    {
+      rval[i] = "";
+    }
+    return rval;
+  }
+
+  @Override
+  public void processDocuments(String[] documentIdentifiers, String[] versions, IProcessActivity activities,
+    DocumentSpecification spec, boolean[] scanOnly, int jobMode)
+    throws ManifoldCFException, ServiceInterruption
+  {
+    String documentsPerSeedString = params.getParameter("documentsperseed");
+    if (documentsPerSeedString == null)
+      documentsPerSeedString = "200";
+    int documentsPerSeed = Integer.parseInt(documentsPerSeedString);
+    String timePerDocumentString = params.getParameter("timeperdocument");
+    if (timePerDocumentString == null)
+      timePerDocumentString = "500";
+    int timePerDocument = Integer.parseInt(timePerDocumentString);
+
+    // Seeds process instantly; other documents have a throttle based on the bin.
+    for (int i = 0; i < documentIdentifiers.length; i++)
+    {
+      String documentIdentifier = documentIdentifiers[i];
+      if (documentIdentifier.endsWith("/"))
+      {
+        System.out.println("Evaluating seed for "+documentIdentifier+" at "+System.currentTimeMillis());
+        // Seed document.  Add the document IDs
+        for (int j = 0; j < documentsPerSeed; j++)
+        {
+          activities.addDocumentReference(documentIdentifier + Integer.toString(j),documentIdentifier,null,
+            null,null,null);
+        }
+        System.out.println("Done evaluating seed for "+documentIdentifier+" at "+System.currentTimeMillis());
+      }
+      else
+      {
+        if (!scanOnly[i])
+        {
+          System.out.println("Fetching "+documentIdentifier);
+          // Find the bin
+          String bin = documentIdentifier.substring(0,documentIdentifier.indexOf("/"));
+          // For now they are all the same
+          long binTimePerDocument = timePerDocument;
+          long now = System.currentTimeMillis();
+          long whenFetch;
+          synchronized (nextFetchTime)
+          {
+            Long time = nextFetchTime.get(bin);
+            if (time == null)
+              whenFetch = now;
+            else
+              whenFetch = time.longValue();
+            nextFetchTime.put(bin,new Long(whenFetch + binTimePerDocument));
+          }
+          if (whenFetch > now)
+          {
+            System.out.println("Waiting "+(whenFetch-now)+" to fetch "+documentIdentifier);
+            try
+            {
+              ManifoldCF.sleep(whenFetch-now);
+            }
+            catch (InterruptedException e)
+            {
+              throw new ManifoldCFException(e.getMessage(),ManifoldCFException.INTERRUPTED);
+            }
+            System.out.println("Wait complete for "+documentIdentifier);
+          }
+        }
+      }
+    }
+  }
+
+}
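
A minimal standalone sketch of the per-bin throttle implemented in processDocuments above (the class and method names here are illustrative): each bin remembers the next time a fetch is allowed, and every reservation pushes that time forward by a fixed interval, so callers sleep for whatever gap remains.

    import java.util.HashMap;
    import java.util.Map;

    public class BinThrottleSketch
    {
      // Next allowed fetch time per bin, in milliseconds since the epoch.
      private final Map<String,Long> nextFetchTime = new HashMap<String,Long>();
      private final long intervalMs;

      public BinThrottleSketch(long intervalMs)
      {
        this.intervalMs = intervalMs;
      }

      /** Reserve the next fetch slot for a bin and return how long to sleep before using it. */
      public synchronized long reserveSlot(String bin)
      {
        long now = System.currentTimeMillis();
        Long scheduled = nextFetchTime.get(bin);
        long whenFetch = (scheduled == null) ? now : scheduled.longValue();
        // The slot after this one starts a fixed interval later.
        nextFetchTime.put(bin, Long.valueOf(whenFetch + intervalMs));
        return Math.max(0L, whenFetch - now);
      }
    }

With the connector's 500 ms default, 200 reservations against a single bin spread its fetches over roughly 100 seconds, which is the floor the scheduler test measures.
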
diff --git a/framework/script-engine/pom.xml b/framework/script-engine/pom.xml
index f2b801a..4a95f4a 100644
--- a/framework/script-engine/pom.xml
+++ b/framework/script-engine/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.manifoldcf</groupId>
     <artifactId>mcf-framework</artifactId>
-    <version>1.2-SNAPSHOT</version>
+    <version>1.5-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
 
@@ -76,7 +76,7 @@
     <dependency>
       <groupId>org.apache.httpcomponents</groupId>
       <artifactId>httpclient</artifactId>
-      <version>${httpcomponent.version}</version>
+      <version>${httpcomponent.httpclient.version}</version>
     </dependency>
     <dependency>
       <groupId>commons-collections</groupId>
diff --git a/framework/scripts/executecommand.sh b/framework/scripts/executecommand.sh
index f1d9056..61ae124 100755
--- a/framework/scripts/executecommand.sh
+++ b/framework/scripts/executecommand.sh
@@ -35,7 +35,7 @@
             fi
         done
 
-        # Build the options
+        # Build the global options
 	OPTIONS=$(cat "$MCF_HOME"/processes/options.env)
         
         # Build the defines
@@ -47,7 +47,7 @@
             done
         fi
 
-        "$JAVA_HOME/bin/java" $OPTIONS $DEFINES -cp "$CLASSPATH" "$@"
+        "$JAVA_HOME/bin/java" $OPTIONS $DEFINES -cp "$CLASSPATH" $@
         exit $?
         
     else
diff --git a/framework/ui-core/pom.xml b/framework/ui-core/pom.xml
index 79fad7a..e8d2b6c 100644
--- a/framework/ui-core/pom.xml
+++ b/framework/ui-core/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.manifoldcf</groupId>
     <artifactId>mcf-framework</artifactId>
-    <version>1.2-SNAPSHOT</version>
+    <version>1.5-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
 
@@ -28,14 +28,19 @@
   <name>ManifoldCF - Framework - UI Core</name>
 
   <build>
+    <resources>
+      <resource>
+        <directory>src/main/native2ascii</directory>
+      </resource>
+    </resources> 
+
     <plugins>
       <plugin>
         <groupId>org.codehaus.mojo</groupId>
         <artifactId>native2ascii-maven-plugin</artifactId>
-        <version>1.0-alpha-1</version>
+        <version>1.0-beta-1</version>
         <configuration>
-            <dest>target/classes</dest>
-            <src>src/main/native2ascii</src>
+            <workDir>target/classes</workDir>
         </configuration>
         <executions>
             <execution>
@@ -45,7 +50,9 @@
                 </goals>
                 <configuration>
                     <encoding>UTF8</encoding>
-                    <includes>**/*.properties</includes>
+                    <includes>
+                      <include>**/*.properties</include>
+                    </includes>
                 </configuration>
             </execution>
         </executions>
diff --git a/framework/ui-core/src/main/java/org/apache/manifoldcf/ui/beans/AdminProfile.java b/framework/ui-core/src/main/java/org/apache/manifoldcf/ui/beans/AdminProfile.java
index 2506213..77d13c6 100644
--- a/framework/ui-core/src/main/java/org/apache/manifoldcf/ui/beans/AdminProfile.java
+++ b/framework/ui-core/src/main/java/org/apache/manifoldcf/ui/beans/AdminProfile.java
@@ -24,6 +24,7 @@
 
 import org.apache.manifoldcf.core.interfaces.*;
 import org.apache.manifoldcf.core.system.*;
+import org.apache.manifoldcf.ui.passwords.PasswordMapper;
 
 /** The profile object contains an admin user's login information, and helps establish the
 * session model for the application.  This particular bean maintains the user (against
@@ -34,15 +35,18 @@
   public static final String _rcsid = "@(#)$Id: AdminProfile.java 988245 2010-08-23 18:39:35Z kwright $";
 
   /** Time of login */
-  private long loginTime = 0;
+  private long loginTime = -1L;
   /** Logged in user */
   private String userID = null;
-  /** Session identifier */
-  private String sessionIdentifier = null;
   /** Set to "true" if user is logged in. */
   private boolean isLoggedIn = false;
   /** Set to "true" if user can manage users. */
   private boolean manageUsers = false;
+  /** Password mapper */
+  private PasswordMapper passwordMapper = null;
+
+  /** Session identifier */
+  private String sessionIdentifier = null;
 
   /** Constructor.
   */
@@ -61,15 +65,6 @@
     return sessionIdentifier;
   }
 
-  /** Set the admin user id.
-  *@param userID is the ID of the admin user to log in.
-  */
-  public void setUserID(String userID)
-  {
-    sessionCleanup();       // nuke existing stuff (i.e. log out)
-    this.userID = userID;
-  }
-
   /** Get the admin user id.
   *@return the last login user id.
   */
@@ -86,35 +81,36 @@
     return manageUsers;
   }
 
+  /** Log out the current user.
+  */
+  public void logout()
+  {
+    sessionCleanup();
+  }
+
   /** Log on the user, with the already-set user id and company
   * description.
   *@param userPassword is the login password for the user.
   */
-  public void setPassword(String userPassword)
+  public void login(IThreadContext threadContext,
+    String userID, String userPassword)
   {
     sessionCleanup();
     try
     {
       // Check if everything is in place.
-      IThreadContext threadContext = ThreadContextFactory.make();
-      if (userID != null)
+      if (ManifoldCF.verifyLogin(threadContext,userID,userPassword))
       {
-        IDBInterface database = DBInterfaceFactory.make(threadContext,
-          ManifoldCF.getMasterDatabaseName(),
-          ManifoldCF.getMasterDatabaseUsername(),
-          ManifoldCF.getMasterDatabasePassword());
-        // MHL to actually log in (when we figure out what to use as an authority)
-        if (userID.equals("admin") &&  userPassword.equals("admin"))
-        {
-          isLoggedIn = true;
-          loginTime = System.currentTimeMillis();
-          manageUsers = false;
-        }
+        isLoggedIn = true;
+        loginTime = System.currentTimeMillis();
+        this.userID = userID;
+        manageUsers = false;
+        passwordMapper = new PasswordMapper();
       }
     }
     catch (ManifoldCFException e)
     {
-      Logging.misc.fatal("Exception logging in!",e);
+      Logging.misc.fatal("Exception logging in: "+e.getMessage(),e);
     }
   }
 
@@ -143,12 +139,24 @@
     return loginTime;
   }
 
+  /** Get the password mapper object.
+  *@return the password mapper object.
+  */
+  public PasswordMapper getPasswordMapper()
+  {
+    return passwordMapper;
+  }
+  
   // Nuke stuff for security and the garbage
   // collector threads
   private void sessionCleanup()
   {
     // Un-log-in the user
     isLoggedIn = false;
+    userID = null;
+    manageUsers = false;
+    loginTime = -1L;
+    passwordMapper = null;
   }
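
A minimal sketch of how the reworked bean is expected to be driven, using only methods visible in this patch (the credentials are illustrative, and the bean's existing no-argument constructor is assumed): login() verifies the user via ManifoldCF.verifyLogin() and, on success, creates the session PasswordMapper that JspWrapper and MultipartWrapper delegate to; logout() clears it again.

    import org.apache.manifoldcf.core.interfaces.*;
    import org.apache.manifoldcf.ui.beans.AdminProfile;

    public class AdminLoginSketch
    {
      public static void sketch() throws Exception
      {
        IThreadContext tc = ThreadContextFactory.make();
        AdminProfile profile = new AdminProfile();
        profile.login(tc, "admin", "admin");
        // getPasswordMapper() is non-null only after a successful login.
        if (profile.getPasswordMapper() != null)
        {
          // Hand forms a session-scoped key instead of the real password.
          String key = profile.getPasswordMapper().mapPasswordToKey("repository-password");
        }
        profile.logout();
      }
    }
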
 
 
diff --git a/framework/ui-core/src/main/java/org/apache/manifoldcf/ui/jsp/JspWrapper.java b/framework/ui-core/src/main/java/org/apache/manifoldcf/ui/jsp/JspWrapper.java
index d34d897..d1fb733 100644
--- a/framework/ui-core/src/main/java/org/apache/manifoldcf/ui/jsp/JspWrapper.java
+++ b/framework/ui-core/src/main/java/org/apache/manifoldcf/ui/jsp/JspWrapper.java
@@ -19,26 +19,31 @@
 package org.apache.manifoldcf.ui.jsp;
 
 import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.ui.beans.AdminProfile;
 import javax.servlet.jsp.*;
 import java.io.*;
 
 /** This class provides an implementation of IHTTPOutput, which provides output
-* services to connector UI interfaces.
+* services to connector UI interfaces.  More broadly, it provides the services that all
+* connectors will need in order to provide UI components.
 */
 public class JspWrapper implements IHTTPOutput
 {
   public static final String _rcsid = "@(#)$Id: JspWrapper.java 988245 2010-08-23 18:39:35Z kwright $";
 
-  protected JspWriter writer;
+  protected final JspWriter writer;
+  protected final AdminProfile adminProfile;
 
   /** Constructor.
   */
-  public JspWrapper(JspWriter writer)
+  public JspWrapper(JspWriter writer, AdminProfile adminProfile)
   {
     this.writer = writer;
+    this.adminProfile = adminProfile;
   }
 
   /** Flush the stream */
+  @Override
   public void flush()
     throws IOException
   {
@@ -46,6 +51,7 @@
   }
   
   /** Write a newline */
+  @Override
   public void newLine()
     throws IOException
   {
@@ -53,6 +59,7 @@
   }
   
   /** Write a boolean */
+  @Override
   public void print(boolean b)
     throws IOException
   {
@@ -60,6 +67,7 @@
   }
   
   /** Write a char */
+  @Override
   public void print(char c)
     throws IOException
   {
@@ -67,6 +75,7 @@
   }
   
   /** Write an array of chars */
+  @Override
   public void print(char[] c)
     throws IOException
   {
@@ -74,6 +83,7 @@
   }
   
   /** Write a double */
+  @Override
   public void print(double d)
     throws IOException
   {
@@ -81,6 +91,7 @@
   }
   
   /** Write a float */
+  @Override
   public void print(float f)
     throws IOException
   {
@@ -88,6 +99,7 @@
   }
   
   /** Write an int */
+  @Override
   public void print(int i)
     throws IOException
   {
@@ -95,6 +107,7 @@
   }
   
   /** Write a long */
+  @Override
   public void print(long l)
     throws IOException
   {
@@ -102,6 +115,7 @@
   }
   
   /** Write an object */
+  @Override
   public void print(Object o)
     throws IOException
   {
@@ -109,6 +123,7 @@
   }
   
   /** Write a string */
+  @Override
   public void print(String s)
     throws IOException
   {
@@ -116,6 +131,7 @@
   }
   
   /** Write a boolean */
+  @Override
   public void println(boolean b)
     throws IOException
   {
@@ -123,6 +139,7 @@
   }
   
   /** Write a char */
+  @Override
   public void println(char c)
     throws IOException
   {
@@ -130,6 +147,7 @@
   }
   
   /** Write an array of chars */
+  @Override
   public void println(char[] c)
     throws IOException
   {
@@ -137,6 +155,7 @@
   }
   
   /** Write a double */
+  @Override
   public void println(double d)
     throws IOException
   {
@@ -144,6 +163,7 @@
   }
   
   /** Write a float */
+  @Override
   public void println(float f)
     throws IOException
   {
@@ -151,6 +171,7 @@
   }
   
   /** Write an int */
+  @Override
   public void println(int i)
     throws IOException
   {
@@ -158,6 +179,7 @@
   }
   
   /** Write a long */
+  @Override
   public void println(long l)
     throws IOException
   {
@@ -165,6 +187,7 @@
   }
   
   /** Write an object */
+  @Override
   public void println(Object o)
     throws IOException
   {
@@ -172,10 +195,35 @@
   }
   
   /** Write a string */
+  @Override
   public void println(String s)
     throws IOException
   {
     writer.println(s);
   }
 
+  /** Map a password to a unique key.
+  * This method works within a specific given browser session to replace an existing password with
+  * a key which can be used to look up the password at a later time.
+  *@param password is the password.
+  *@return the key.
+  */
+  @Override
+  public String mapPasswordToKey(String password)
+  {
+    return adminProfile.getPasswordMapper().mapPasswordToKey(password);
+  }
+  
+  /** Convert a key, created by mapPasswordToKey, back to the original password, within
+  * the lifetime of the browser session.  If the provided key is not an actual key, it is
+  * assumed to be a new password value and is returned unchanged.
+  *@param key is the key.
+  *@return the password.
+  */
+  @Override
+  public String mapKeyToPassword(String key)
+  {
+    return adminProfile.getPasswordMapper().mapKeyToPassword(key);
+  }
+
 }
diff --git a/framework/ui-core/src/main/java/org/apache/manifoldcf/ui/multipart/MultipartWrapper.java b/framework/ui-core/src/main/java/org/apache/manifoldcf/ui/multipart/MultipartWrapper.java
index a86289c..9153cd8 100644
--- a/framework/ui-core/src/main/java/org/apache/manifoldcf/ui/multipart/MultipartWrapper.java
+++ b/framework/ui-core/src/main/java/org/apache/manifoldcf/ui/multipart/MultipartWrapper.java
@@ -19,6 +19,7 @@
 package org.apache.manifoldcf.ui.multipart;
 
 import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.ui.beans.AdminProfile;
 import org.apache.commons.fileupload.*;
 import javax.servlet.*;
 import javax.servlet.http.*;
@@ -33,6 +34,8 @@
 {
   public static final String _rcsid = "@(#)$Id: MultipartWrapper.java 988245 2010-08-23 18:39:35Z kwright $";
 
+  /** The Admin Profile bean, for password mapping. */
+  protected final AdminProfile adminProfile;
   /** This is the HttpServletRequest object, which will be used for parameters only if
   * the form is not multipart. */
   protected HttpServletRequest request = null;
@@ -41,9 +44,11 @@
 
   /** Constructor.
   */
-  public MultipartWrapper(HttpServletRequest request)
+  public MultipartWrapper(HttpServletRequest request, AdminProfile adminProfile)
     throws ManifoldCFException
   {
+    this.adminProfile = adminProfile;
+
     // Check that we have a file upload request
     boolean isMultipart = FileUpload.isMultipartContent(request);
     if (!isMultipart)
@@ -95,6 +100,7 @@
 
   /** Get multiple parameter values.
   */
+  @Override
   public String[] getParameterValues(String name)
   {
     // Expect multiple items, all strings
@@ -136,6 +142,7 @@
 
   /** Get single parameter value.
   */
+  @Override
   public String getParameter(String name)
   {
     // Get it as a parameter.
@@ -167,6 +174,7 @@
 
   /** Get a file parameter, as a binary input.
   */
+  @Override
   public BinaryInput getBinaryStream(String name)
     throws ManifoldCFException
   {
@@ -205,6 +213,7 @@
 
   /** Get file parameter, as a byte array.
   */
+  @Override
   public byte[] getBinaryBytes(String name)
   {
     if (request != null)
@@ -227,6 +236,7 @@
 
   /** Set a parameter value
   */
+  @Override
   public void setParameter(String name, String value)
   {
     ArrayList values = new ArrayList();
@@ -237,6 +247,7 @@
 
   /** Set an array of parameter values
   */
+  @Override
   public void setParameterValues(String name, String[] values)
   {
     ArrayList valueArray = new ArrayList();
@@ -248,4 +259,30 @@
     variableMap.put(name,valueArray);
   }
 
+  // Password mapping
+  
+  /** Map a password to a unique key.
+  * This method works within a specific given browser session to replace an existing password with
+  * a key which can be used to look up the password at a later time.
+  *@param password is the password.
+  *@return the key.
+  */
+  @Override
+  public String mapPasswordToKey(String password)
+  {
+    return adminProfile.getPasswordMapper().mapPasswordToKey(password);
+  }
+  
+  /** Convert a key, created by mapPasswordToKey, back to the original password, within
+  * the lifetime of the browser session.  If the provided key is not an actual key, it is
+  * assumed to be a new password value and is returned unchanged.
+  *@param key is the key.
+  *@return the password.
+  */
+  @Override
+  public String mapKeyToPassword(String key)
+  {
+    return adminProfile.getPasswordMapper().mapKeyToPassword(key);
+  }
+  
 }
diff --git a/framework/ui-core/src/main/java/org/apache/manifoldcf/ui/passwords/PasswordMapper.java b/framework/ui-core/src/main/java/org/apache/manifoldcf/ui/passwords/PasswordMapper.java
new file mode 100644
index 0000000..2699298
--- /dev/null
+++ b/framework/ui-core/src/main/java/org/apache/manifoldcf/ui/passwords/PasswordMapper.java
@@ -0,0 +1,101 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.ui.passwords;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.core.system.*;
+import java.util.*;
+
+/** This object manages a session-based map of password keys.
+*/
+public class PasswordMapper
+{
+  public static final String _rcsid = "@(#)$Id$";
+  
+  private final String randomPrefix;
+  private final Map<String,Integer> passwordToKey = new HashMap<String,Integer>();
+  private final List<String> passwordList = new ArrayList<String>();
+  
+  /** Constructor */
+  public PasswordMapper()
+  {
+    randomPrefix = generateRandomPrefix();
+  }
+  
+  /** Map a password to a key.
+  *@param password is the password.
+  *@return the key.
+  */
+  public synchronized String mapPasswordToKey(String password)
+  {
+    // Special case for null or empty password
+    if (password == null || password.length() == 0)
+      return password;
+    Integer index = passwordToKey.get(password);
+    if (index == null)
+    {
+      // Need a new key.
+      index = new Integer(passwordList.size());
+      passwordList.add(password);
+      passwordToKey.put(password,index);
+    }
+    return randomPrefix + index;
+  }
+  
+  /** Map a key back to a password.
+  *@param key is the key (or a password, if changed)
+  *@return the password.
+  */
+  public synchronized String mapKeyToPassword(String key)
+  {
+    if (key != null && key.startsWith(randomPrefix))
+    {
+      String intPart = key.substring(randomPrefix.length());
+      try
+      {
+        int index = Integer.parseInt(intPart);
+        if (index < passwordList.size())
+          return passwordList.get(index);
+      }
+      catch (NumberFormatException e)
+      {
+      }
+    }
+    return key;
+  }
+  
+  // Protected methods
+  
+  protected static char[] pickChars = new char[]{'\u0d5d','\u20c4','\u0392','\u1a2b'};
+  
+  /** Generate a random prefix that will not likely collide with any password */
+  protected static String generateRandomPrefix()
+  {
+    Random r = new Random(System.currentTimeMillis());
+    StringBuilder sb = new StringBuilder("_");
+    for (int i = 0; i < 8; i++)
+    {
+      int index = r.nextInt(pickChars.length);
+      sb.append(pickChars[index]);
+    }
+    sb.append("_");
+    return sb.toString();
+  }
+  
+}
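
A minimal usage sketch of the class above (the literal passwords are illustrative): within one browser session the UI substitutes a generated key for the real password and recovers it on postback; a value that is not a known key is treated as a newly entered password and passed through unchanged.

    import org.apache.manifoldcf.ui.passwords.PasswordMapper;

    public class PasswordMapperSketch
    {
      public static void sketch()
      {
        PasswordMapper mapper = new PasswordMapper();
        // The key has the form "_<8 random chars>_<index>"; the real password never reaches the page.
        String key = mapper.mapPasswordToKey("s3cret");
        String original = mapper.mapKeyToPassword(key);     // "s3cret"
        String typed = mapper.mapKeyToPassword("newpass");  // not a known key, returned unchanged
      }
    }
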
diff --git a/framework/ui-core/src/main/native2ascii/org/apache/manifoldcf/ui/i18n/common_en_US.properties b/framework/ui-core/src/main/native2ascii/org/apache/manifoldcf/ui/i18n/common_en_US.properties
index c292c14..8ff7a8f 100644
--- a/framework/ui-core/src/main/native2ascii/org/apache/manifoldcf/ui/i18n/common_en_US.properties
+++ b/framework/ui-core/src/main/native2ascii/org/apache/manifoldcf/ui/i18n/common_en_US.properties
@@ -19,14 +19,15 @@
 editconnection.Name=Name
 editconnection.Type=Type
 editconnection.Throttling=Throttling
-editconnection.Addthrottle
+editconnection.Addthrottle=Add throttle
 editjob.Name=Name
 editjob.Connection=Connection
-editauthority.Name=Name
-editauthority.Type=Type
-editauthority.Throttling=Throttling
 
 viewconnection.NoneGlobalAuthority=None (global authority)
+viewconnection.ClearHistoryAssociatedWithThisConnection=Clear history associated with this connection
+viewconnection.ClearAllRelatedHistory=Clear All Related History
+viewconnection.Thiscommandwillclearallhistoryrelatedto=This command will clear all history related to connection
+viewconnection.period=.
 
 listconnections.GlobalAuthority=None(globalAuthority)
 
@@ -35,14 +36,23 @@
 editjob.am=am
 editjob.pm=pm
 
+index.ApacheManifoldCF=Apache ManifoldCF
 index.WelcomeToApacheManifoldFC=Welcome to Apache ManifoldCF
+index.ApacheManifoldCFLogin=Apache ManifoldCF Login
+index.UserIDColon=User ID:
+index.PasswordColon=Password:
+index.Login=Login
+index.LoginFailed=Login failed!
 
 banner.DocumentIngestion=Document Ingestion
 
+navigation.LogOut=Log Out
 navigation.Outputs=Outputs
 navigation.ListOutputConnections=List Output Connections
 navigation.Authorities=Authorities
+navigation.ListUserMappings=List User Mapping Connections
 navigation.ListAuthorityConnections=List Authority Connections
+navigation.ListAuthorityGroups=List Authority Groups
 navigation.Repositories=Repositories
 navigation.ListRepositoryConnections=List Repository Connections
 navigation.Jobs=Jobs
@@ -59,11 +69,11 @@
 navigation.Miscellaneous=Miscellaneous
 navigation.Locale=en_US
 navigation.Help=Help
-
 navigation.Listoutputconnections=List output connections
 navigation.Listrepositoryconnections=List repository connections
+navigation.Listusermappings=List user mapping connections
 navigation.Listauthorities=List authorities
-navigation.Listrepositorconnections=List repository connections
+navigation.Listauthoritygroups=List authority groups
 navigation.Listjobs=List jobs
 navigation.Managejobs=Manage jobs
 navigation.Documentstatus=Document status
@@ -85,6 +95,7 @@
 listoutputs.Delete=Delete
 listoutputs.DeleteOutputConnection=Delete output connection
 listoutputs.AddAnOutputConnection=Add an output connection
+listoutputs.uninstalled=(uninstalled)
 
 editoutput.ApacheManifoldCFEditOutputConnection=Apache ManifoldCF: Edit Output Connection
 editoutput.EditAnOutputConnection=Edit an Output Connection
@@ -96,8 +107,7 @@
 editoutput.CancelOutputConnectionEditing=Cancel output connection editing
 editoutput.Save=Save
 editoutput.SaveThisOutputConnection=Save this output connection
-editoutput.MaxConnections=Max connections
-editoutput.PerJVMColon=(per JVM):
+editoutput.MaxConnectionsColon=Max connections:
 editoutput.EditOutputConnection=Edit output connection
 editoutput.NameColon=Name:
 editoutput.DescriptionColon=Description:
@@ -121,16 +131,32 @@
 viewoutput.Delete=Delete
 viewoutput.ReIngestAllDocumentsAssociatedWithThisOutputConnection=Re-ingest all documents associated with this output connection
 viewoutput.ReIngestAllAssociatedDocuments=Re-ingest all associated documents
+viewoutput.RemoveAllDocumentsAssociatedWithThisOutputConnection=Remove all document records associated with this output connection
+viewoutput.RemoveAllAssociatedDocuments=Remove all associated documents
 viewoutput.Deleteoutputconnection=Delete output connection
 viewoutput.Thiscommandwillforce=This command will force all documents associated with output\nconnection
 viewoutput.toberecrawled=to be recrawled the next time their associated\n jobs are started.  Do you want to continue?
+viewoutput.Thiscommandwillcause=This command will cause ManifoldCF to lose all current knowledge of documents\n associated with output connection
+viewoutput.tobeforgotten=.  Do you want to continue?
 viewoutput.qmark=?
 viewoutput.uninstalled=(uninstalled)
 viewoutput.Connectorisnotinstalled=Connector is not installed.
 viewoutput.Threwexception=Threw exception:
 
+listgroups.ApacheManifoldCFListAuthorityGroups=Apache ManifoldCF: List Authority Groups
+listgroups.DeleteAuthorityGroup=Delete authority group
+listgroups.ListOfAuthorityGroups=List of Authority Groups
+listgroups.Name=Name
+listgroups.Description=Description
+listgroups.View=View
+listgroups.Edit=Edit
+listgroups.Delete=Delete
+listgroups.AddNewGroup=Add new authority group
+listgroups.AddaNewGroup=Add a new authority group
+
 listauthorities.ApacheManifoldCFListAuthorities=Apache ManifoldCF: List Authorities
 listauthorities.ListOfAuthorityConnections=List of Authority Connections
+listauthorities.DeleteAuthority=Delete authority
 listauthorities.Name=Name
 listauthorities.Description=Description
 listauthorities.AuthorityType=Authority Type
@@ -140,6 +166,34 @@
 listauthorities.View=View
 listauthorities.Edit=Edit
 listauthorities.Delete=Delete
+listauthorities.uninstalled=(uninstalled)
+
+listmappers.ApacheManifoldCFListMappers=Apache ManifoldCF: List User Mappers
+listmappers.ListOfMappingConnections=List of User Mapping Connections
+listmappers.DeleteMapper=Delete user mapper
+listmappers.Name=Name
+listmappers.Description=Description
+listmappers.MapperType=Mapping Type
+listmappers.Max=Max
+listmappers.AddaNewConnection=Add a new connection
+listmappers.AddNewConnection=Add new connection
+listmappers.View=View
+listmappers.Edit=Edit
+listmappers.Delete=Delete
+listmappers.uninstalled=(uninstalled)
+
+editgroup.Name=Name
+editgroup.ApacheManifoldCFEditAuthorityGroup=Apache ManifoldCF: Edit Authority Group
+editgroup.AuthorityGroupMustHaveAName=Authority group must have a name
+editgroup.tab=tab
+editgroup.EditGroup=Edit group
+editgroup.EditAGroup=Edit a group
+editgroup.NameColon=Name:
+editgroup.DescriptionColon=Description:
+editgroup.Save=Save
+editgroup.SaveThisAuthorityGroup=Save this authority group
+editgroup.Cancel=Cancel
+editgroup.CancelAuthorityGroupEditing=Cancel authority group editing
 
 editauthority.ApacheManifoldCFEditAuthority=Apache ManifoldCF: Edit Authority
 editauthority.EditAnAuthority=Edit an Authority
@@ -148,8 +202,7 @@
 editauthority.ConnectionTypeColon=Connection type:
 editauthority.Continue=Continue
 editauthority.ContinueToNextPage=Continue to next page
-editauthority.MaxConnections=Max connections
-editauthority.PerJVMColon=(per JVM):
+editauthority.MaxConnectionsColon=Max connections:
 editauthority.Cancel=Cancel
 editauthority.CancelAuthorityEditing=Cancel authority editing
 editauthority.Save=Save
@@ -158,16 +211,54 @@
 editauthority.TheMaximumNumberOfConnectionsMustBeAValidInteger=The maximum number of connections must be a valid integer
 editauthority.ConnectionMustHaveAName=Connection must have a name
 editauthority.NoAuthorityConnectorsRegistered=No authority connectors registered
-
 editauthority.UNREGISTERED=UNREGISTERED
 editauthority.tab=tab
+editauthority.Name=Name
+editauthority.Type=Type
+editauthority.Throttling=Throttling
+editauthority.EditAuthorityConnection=Edit Authority Connection
+editauthority.Prerequisites=Prerequisites
+editauthority.PrerequisiteUserMappingColon=Prerequisite user mapping:
+editauthority.NoPrerequisites=(No Prerequisites)
+editauthority.AuthorizationDomainColon=Authorization domain:
+editauthority.AuthorityGroupColon=Authority group:
+editauthority.SelectAGroup=--Select a group--
+editauthority.ConnectionMustHaveAGroup=Authority connection must have a group
+editauthority.NoAuthorityGroupsDefinedCreateOneFirst=No authority groups have been defined; create one first
+editauthority.DefaultDomainNone=Default domain (None)
+
+editmapper.ApacheManifoldCFEditMapping=Apache ManifoldCF: Edit Mapping
+editmapper.EditAMapping=Edit a Mapping
+editmapper.NameColon=Name:
+editmapper.DescriptionColon=Description:
+editmapper.ConnectionTypeColon=Connection type:
+editmapper.Continue=Continue
+editmapper.ContinueToNextPage=Continue to next page
+editmapper.MaxConnectionsColon=Max connections:
+editmapper.Cancel=Cancel
+editmapper.CancelMappingEditing=Cancel mapping editing
+editmapper.Save=Save
+editmapper.SaveThisMappingConnection=Save this mapping connection
+editmapper.EditMapping=Edit mapping
+editmapper.TheMaximumNumberOfConnectionsMustBeAValidInteger=The maximum number of connections must be a valid integer
+editmapper.ConnectionMustHaveAName=Connection must have a name
+editmapper.NoMappingConnectorsRegistered=No mapping connectors registered
+editmapper.UNREGISTERED=UNREGISTERED
+editmapper.tab=tab
+editmapper.Name=Name
+editmapper.Type=Type
+editmapper.Throttling=Throttling
+editmapper.EditMappingConnection=Edit Mapping Connection
+editmapper.Prerequisites=Prerequisites
+editmapper.PrerequisiteUserMappingColon=Prerequisite user mapping:
+editmapper.NoPrerequisites=(No Prerequisites)
 
 listconnections.ApacheManifoldCFListConnections=Apache ManifoldCF: List Connections
 listconnections.ListOfRepositoryConnections=List of Repository Connections
 listconnections.Name=Name
 listconnections.Description=Description
 listconnections.ConnectionType=Connection Type
-listconnections.Authority=Authority
+listconnections.AuthorityGroup=Authority Group
 listconnections.Max=Max
 listconnections.AddNewConnection=Add new connection
 listconnections.View=View
@@ -175,17 +266,17 @@
 listconnections.Delete=Delete
 listconnections.DeleteConnection=Delete connection
 listconnections.AddAConnection=Add a connection
+listconnections.uninstalled=(uninstalled)
 
 editconnection.ApacheManifoldCFEditConnection=Apache ManifoldCF: Edit Connection
 editconnection.EditAConnection=Edit a Connection
 editconnection.ConnectionTypeColon=Connection type:
-editconnection.AuthorityColon=Authority:
+editconnection.AuthorityGroupColon=Authority group:
 editconnection.Cancel=Cancel
 editconnection.CancelConnectionEditing=Cancel connection editing
 editconnection.NameColon=Name:
 editconnection.DescriptionColon=Description:
-editconnection.Maxconnections=Max connections
-editconnection.PerJVMColon=(per JVM):
+editconnection.MaxconnectionsColon=Max connections:
 editconnection.ThrottlingColon=Throttling:
 editconnection.Add=Add
 editconnection.BinRegularExpression=Bin regular expression
@@ -216,7 +307,7 @@
 viewconnection.DescriptionColon=Description:
 viewconnection.ConnectionTypeColon=Connection type:
 viewconnection.MaxConnectionsColon=Max connections:
-viewconnection.AuthorityColon=Authority:
+viewconnection.AuthorityGroupColon=Authority group:
 viewconnection.ThrottlingColon=Throttling:
 viewconnection.NoThrottles=No throttles
 viewconnection.ConnectionStatusColon=Connection status:
@@ -523,6 +614,7 @@
 simplereport.PreviousPage=Previous page
 simplereport.NextPage=Next page
 simplereport.Next=Next
+simplereport.Rows=Rows:
 simplereport.RowsPerPage=Rows per page:
 simplereport.PleaseSelectAConnection=Please select a connection
 simplereport.PleaseTryAgainLater=This page is unavailable due to maintenance operations.  Please try again later.
@@ -564,6 +656,7 @@
 maxactivityreport.PreviousPage=Previous page
 maxactivityreport.NextPage=Next page
 maxactivityreport.Next=Next
+maxactivityreport.Rows=Rows:
 maxactivityreport.RowsPerPage=Rows per page:
 maxactivityreport.PleaseSelectAConnection=Please select a connection
 maxactivityreport.PleaseTryAgainLater=This page is unavailable due to maintenance operations.  Please try again later.
@@ -737,6 +830,7 @@
 viewjob.October=October
 viewjob.November=November
 viewjob.December=December
+viewjob.onanydayofthemonth=on any day of the month
 viewjob.onthe1stofthemonth=on the 1st of the month
 viewjob.onthe=on the
 viewjob.ofthemonth=of the month
@@ -754,6 +848,15 @@
 viewjob.Minimal=Minimal
 viewjob.Complete=Complete
 
+viewgroup.ApacheManifoldCFViewGroup=Apache ManifoldCF: View Authority Group
+viewgroup.ViewAuthorityGroup=View Authority Group
+viewgroup.NameColon=Name:
+viewgroup.DescriptionColon=Description:
+viewgroup.EditThisAuthorityGroup=Edit this authority group
+viewgroup.Edit=Edit
+viewgroup.DeleteThisAuthorityGroup=Delete this authority group
+viewgroup.Delete=Delete
+
 viewauthority.ViewAuthorityConnectionStatus=View Authority Connection Status
 viewauthority.NameColon=Name:
 viewauthority.DescriptionColon=Description:
@@ -771,3 +874,27 @@
 viewauthority.uninstalled=(uninstalled)
 viewauthority.Threwexception=Threw exception:
 viewauthority.qmark=?
+viewauthority.PrerequisiteUserMappingColon=Prerequisite user mapping:
+viewauthority.NoPrerequisites=No prerequisites
+viewauthority.AuthorizationDomainColon=Authorization domain:
+viewauthority.AuthorityGroupColon=Authority group:
+
+viewmapper.ApacheManifoldCFViewMappingConnectionStatus=Apache ManifoldCF: View Mapping Connection Status
+viewmapper.DeleteConnection=Delete connection
+viewmapper.ViewMappingConnectionStatus=View Mapping Connection Status
+viewmapper.uninstalled=(uninstalled)
+viewmapper.Connectorisnotinstalled=Connector is not installed.
+viewmapper.Threwexception=Threw exception:
+viewmapper.NameColon=Name:
+viewmapper.DescriptionColon=Description:
+viewmapper.MapperTypeColon=Mapping type:
+viewmapper.MaxConnectionsColon=Max connections:
+viewmapper.ConnectionStatusColon=Connection status:
+viewmapper.Refresh=Refresh
+viewmapper.Edit=Edit
+viewmapper.EditThisMappingConnection=Edit this mapping connection
+viewmapper.Delete=Delete
+viewmapper.DeleteThisMappingConnection=Delete this mapping connection
+viewmapper.qmark=?
+viewmapper.PrerequisiteUserMappingColon=Prerequisite user mapping:
+viewmapper.NoPrerequisites=No prerequisites
diff --git a/framework/ui-core/src/main/native2ascii/org/apache/manifoldcf/ui/i18n/common_ja_JP.properties b/framework/ui-core/src/main/native2ascii/org/apache/manifoldcf/ui/i18n/common_ja_JP.properties
index a70eed6..eb03e1f 100644
--- a/framework/ui-core/src/main/native2ascii/org/apache/manifoldcf/ui/i18n/common_ja_JP.properties
+++ b/framework/ui-core/src/main/native2ascii/org/apache/manifoldcf/ui/i18n/common_ja_JP.properties
@@ -19,14 +19,15 @@
 editconnection.Name=名前
 editconnection.Type=タイプ
 editconnection.Throttling=スロットリング
-editconnection.Addthrottle
+editconnection.Addthrottle=スロットルを追加
 editjob.Name=名前
 editjob.Connection=コネクション
-editauthority.Name=名前
-editauthority.Type=タイプ
-editauthority.Throttling=スロットリング
 
 viewconnection.NoneGlobalAuthority=なし(globalAuthority)
+viewconnection.ClearHistoryAssociatedWithThisConnection=Clear history associated with this connection
+viewconnection.ClearAllRelatedHistory=Clear All Related History
+viewconnection.Thiscommandwillclearallhistoryrelatedto=This command will clear all history related to connection
+viewconnection.period=.
 
 listconnections.GlobalAuthority=なし(globalAuthority)
 
@@ -35,14 +36,23 @@
 editjob.am=午前
 editjob.pm=午後
 
+index.ApacheManifoldCF=Apache ManifoldCF
 index.WelcomeToApacheManifoldFC=Apache ManifoldCFへようこそ
+index.ApacheManifoldCFLogin=Apache ManifoldCF ログイン
+index.UserIDColon=ユーザーID:
+index.PasswordColon=パスワード:
+index.Login=ログイン
+index.LoginFailed=ログインに失敗しました!
 
 banner.DocumentIngestion=コンテンツの読込み
 
+navigation.LogOut=ログアウト
 navigation.Outputs=出力
 navigation.ListOutputConnections=出力コネクション一覧
 navigation.Authorities=権限
+navigation.ListUserMappings=ユーザーマッピングコネクション一覧
 navigation.ListAuthorityConnections=権限コネクション一覧
+navigation.ListAuthorityGroups=権限グループ一覧
 navigation.Repositories=リポジトリ
 navigation.ListRepositoryConnections=リポジトリコネクション一覧
 navigation.Jobs=ジョブ
@@ -59,19 +69,19 @@
 navigation.Miscellaneous=その他
 navigation.Locale=ja_JP
 navigation.Help=ヘルプ
-
-navigation.Listrepositoryconnections=List repository connections
-navigation.Listoutputconnections=List output connections
-navigation.Listauthorities=List authorities
-navigation.Listrepositorconnections=List repository connections
-navigation.Listjobs=List jobs
-navigation.Managejobs=Manage jobs
-navigation.Documentstatus=Document status
-navigation.Queuestatus=Queue status
-navigation.Simplehistory=Simple history
-navigation.Maximumactivity=Maximum activity
-navigation.Maximumbandwidth=Maximum bandwidth
-navigation.Resulthistogram=Result histogram
+navigation.Listrepositoryconnections=リポジトリコネクション一覧
+navigation.Listoutputconnections=出力コネクション一覧
+navigation.Listusermappings=ユーザーマッピングコネクション一覧
+navigation.Listauthorities=権限一覧
+navigation.Listauthoritygroups=権限グループ一覧
+navigation.Listjobs=ジョブ一覧
+navigation.Managejobs=ジョブ管理
+navigation.Documentstatus=コンテンツの状態
+navigation.Queuestatus=キュー状態
+navigation.Simplehistory=履歴レポート
+navigation.Maximumactivity=最大アクティビティ
+navigation.Maximumbandwidth=最大バンド幅
+navigation.Resulthistogram=結果履歴
 
 listoutputs.ApacheManifoldCFListOutputConnections=Apache ManifoldCF:出力コネクション一覧
 listoutputs.ListOfOutputConnections=出力コネクション一覧
@@ -85,6 +95,7 @@
 listoutputs.Delete=削除
 listoutputs.DeleteOutputConnection=出力コネクションを削除
 listoutputs.AddAnOutputConnection=出力コネクションを追加
+listoutputs.uninstalled=(アンインストール)
 
 editoutput.ApacheManifoldCFEditOutputConnection=Apache ManifoldCF:出力コネクションの編集
 editoutput.EditAnOutputConnection=出力コネクションの編集
@@ -96,19 +107,18 @@
 editoutput.CancelOutputConnectionEditing=出力コネクションの編集をキャンセル
 editoutput.Save=保存
 editoutput.SaveThisOutputConnection=出力コネクションを保存
-editoutput.MaxConnections=最大コネクション数
-editoutput.PerJVMColon=(/JVM):
+editoutput.MaxConnectionsColon=最大コネクション数:
 editoutput.EditOutputConnection=出力コネクションを編集
 editoutput.NameColon=名前:
 editoutput.DescriptionColon=説明:
 editoutput.ConnectionMustHaveAName=コネクション名を入力してください
 editoutput.TheMaximumNumberOfConnectionsMustBeAValidInteger=最大コネクション数には整数を入力してください
 
-editoutput.UNREGISTERED=UNREGISTERED
-editoutput.tab=tab
+editoutput.UNREGISTERED=アンインストール
+editoutput.tab=タブ
 
-viewoutput.ApacheManifoldCFViewOutputConnectionStatus=Apache ManifoldCF:出力コネクション状態を表示
-viewoutput.ViewOutputConnectionStatus=出力コネクション状態を表示
+viewoutput.ApacheManifoldCFViewOutputConnectionStatus=Apache ManifoldCF:出力コネクション状態の表示
+viewoutput.ViewOutputConnectionStatus=出力コネクション状態の表示
 viewoutput.NameColon=名前:
 viewoutput.DescriptionColon=説明:
 viewoutput.ConnectionTypeColon=コネクションタイプ:
@@ -121,16 +131,32 @@
 viewoutput.Delete=削除
 viewoutput.ReIngestAllDocumentsAssociatedWithThisOutputConnection=コネクションに指定されているすべてのコンテンツを再読込む
 viewoutput.ReIngestAllAssociatedDocuments=すべてのコンテンツの再読込み
-viewoutput.Deleteoutputconnection=Delete output connection
-viewoutput.Thiscommandwillforce=This command will force all documents associated with output\nconnection
-viewoutput.toberecrawled=to be recrawled the next time their associated\n jobs are started.  Do you want to continue?
-viewoutput.qmark=?
-viewoutput.uninstalled=(uninstalled)
-viewoutput.Connectorisnotinstalled=Connector is not installed.
-viewoutput.Threwexception=Threw exception:
+viewoutput.RemoveAllDocumentsAssociatedWithThisOutputConnection=Remove all document records associated with this output connection
+viewoutput.RemoveAllAssociatedDocuments=Remove all associated documents
+viewoutput.Deleteoutputconnection=出力コネクションを削除
+viewoutput.Thiscommandwillforce=このコマンドを実行すると、出力コネクション
+viewoutput.toberecrawled=に関連付けられているすべてのドキュメントは、次回関連するジョブの開始時に再クロールされます。続行しますか?
+viewoutput.Thiscommandwillcause=This command will cause ManifoldCF to lose all current knowledge of documents\n associated with output connection
+viewoutput.tobeforgotten=.  Do you want to continue?
+viewoutput.qmark=?
+viewoutput.uninstalled=(アンインストール)
+viewoutput.Connectorisnotinstalled=コネクターがインストールされていません
+viewoutput.Threwexception=例外がスローされました:
+
+listgroups.ApacheManifoldCFListAuthorityGroups=Apache ManifoldCF: List Authority Groups
+listgroups.DeleteAuthorityGroup=Delete authority group
+listgroups.ListOfAuthorityGroups=List of Authority Groups
+listgroups.Name=Name
+listgroups.Description=Description
+listgroups.View=View
+listgroups.Edit=Edit
+listgroups.Delete=Delete
+listgroups.AddNewGroup=Add a new group
+listgroups.AddaNewGroup=Add a new group
 
 listauthorities.ApacheManifoldCFListAuthorities=Apache ManifoldCF:権限一覧
 listauthorities.ListOfAuthorityConnections=権限コネクション一覧
+listauthorities.DeleteAuthority=権限を削除
 listauthorities.Name=名前
 listauthorities.Description=説明
 listauthorities.AuthorityType=権限タイプ
@@ -140,6 +166,34 @@
 listauthorities.View=表示
 listauthorities.Edit=編集
 listauthorities.Delete=削除
+listauthorities.uninstalled=(アンインストール)
+
+listmappers.ApacheManifoldCFListMappers=Apache ManifoldCF: ユーザーマッピング一覧
+listmappers.ListOfMappingConnections=ユーザーマッピングコネクション一覧
+listmappers.DeleteMapper=ユーザーマッピングを削除
+listmappers.Name=名前
+listmappers.Description=説明
+listmappers.MapperType=マッピングタイプ
+listmappers.Max=最大値
+listmappers.AddaNewConnection=新しいコネクションを追加
+listmappers.AddNewConnection=新しいコネクションを追加
+listmappers.View=表示
+listmappers.Edit=編集
+listmappers.Delete=削除
+listmappers.uninstalled=(アンインストール)
+
+editgroup.Name=Name
+editgroup.ApacheManifoldCFEditAuthorityGroup=Apache ManifoldCF: Edit Authority Group
+editgroup.AuthorityGroupMustHaveAName=Authority group must have a name
+editgroup.tab=tab
+editgroup.EditGroup=Edit group
+editgroup.EditAGroup=Edit a group
+editgroup.NameColon=Name:
+editgroup.DescriptionColon=Description:
+editgroup.Save=Save
+editgroup.SaveThisAuthorityGroup=Save this authority group
+editgroup.Cancel=Cancel
+editgroup.CancelAuthorityGroupEditing=Cancel authority group editing
 
 editauthority.ApacheManifoldCFEditAuthority=Apache ManifoldCF:権限の編集
 editauthority.EditAnAuthority=権限を編集
@@ -148,26 +202,63 @@
 editauthority.ConnectionTypeColon=コネクションタイプ:
 editauthority.Continue=次へ
 editauthority.ContinueToNextPage=次のページに進む
-editauthority.MaxConnections=最大コネクション数
-editauthority.PerJVMColon=(/JVM):
+editauthority.MaxConnectionsColon=最大コネクション数:
 editauthority.Cancel=キャンセル
 editauthority.CancelAuthorityEditing=権限の編集をキャンセル
 editauthority.Save=保存
-editauthority.SaveThisAuthorityConnection=権限接続を保存
+editauthority.SaveThisAuthorityConnection=権限コネクションを保存
 editauthority.EditAuthority=権限の編集
 editauthority.TheMaximumNumberOfConnectionsMustBeAValidInteger=最大コネクション数には整数を入力してください
 editauthority.ConnectionMustHaveAName=コネクション名を入力してください
 editauthority.NoAuthorityConnectorsRegistered=権限コネクションがありません
+editauthority.UNREGISTERED=アンインストール
+editauthority.tab=タブ
+editauthority.Name=名前
+editauthority.Type=タイプ
+editauthority.Throttling=スロットリング
+editauthority.EditAuthorityConnection=権限コネクションを編集
+editauthority.Prerequisites=条件
+editauthority.PrerequisiteUserMappingColon=前提ユーザーマッピング:
+editauthority.NoPrerequisites=(条件無し)
+editauthority.AuthorizationDomainColon=Authorization domain:
+editauthority.AuthorityGroupColon=Authority group:
+editauthority.SelectAGroup=--Select a group--
+editauthority.ConnectionMustHaveAGroup=Authority connection must have a group
+editauthority.NoAuthorityGroupsDefinedCreateOneFirst=No authority groups have been defined; create one first
+editauthority.DefaultDomainNone=Default domain (None)
 
-editauthority.UNREGISTERED=UNREGISTERED
-editauthority.tab=tab
+editmapper.ApacheManifoldCFEditMapping=Apache ManifoldCF: マッピングの編集
+editmapper.EditAMapping=マッピングを編集
+editmapper.NameColon=名前:
+editmapper.DescriptionColon=説明:
+editmapper.ConnectionTypeColon=コネクションタイプ:
+editmapper.Continue=次へ
+editmapper.ContinueToNextPage=次のページに進む
+editmapper.MaxConnectionsColon=最大コネクション数:
+editmapper.Cancel=キャンセル
+editmapper.CancelMappingEditing=マッピングの編集をキャンセル
+editmapper.Save=保存
+editmapper.SaveThisMappingConnection=マッピングコネクションを保存
+editmapper.EditMapping=マッピングの編集
+editmapper.TheMaximumNumberOfConnectionsMustBeAValidInteger=最大コネクション数には整数を入力してください
+editmapper.ConnectionMustHaveAName=コネクション名を入力してください
+editmapper.NoMappingConnectorsRegistered=マッピングコネクションがありません
+editmapper.UNREGISTERED=アンインストール
+editmapper.tab=タブ
+editmapper.Name=名前
+editmapper.Type=タイプ
+editmapper.Throttling=スロットリング
+editmapper.EditMappingConnection=マッピングコネクションを編集
+editmapper.Prerequisites=条件
+editmapper.PrerequisiteUserMappingColon=前提ユーザーマッピング:
+editmapper.NoPrerequisites=(条件無し)
 
 listconnections.ApacheManifoldCFListConnections=Apache ManifoldCF:コネクション一覧
 listconnections.ListOfRepositoryConnections=リポジトリコネクション一覧
 listconnections.Name=名前
 listconnections.Description=説明
 listconnections.ConnectionType=コネクションタイプ
-listconnections.Authority=権限
+listconnections.AuthorityGroup=権限グループ
 listconnections.Max=最大値
 listconnections.AddNewConnection=新しいコネクションを追加
 listconnections.View=表示
@@ -175,17 +266,17 @@
 listconnections.Delete=削除
 listconnections.DeleteConnection=コネクションを削除:
 listconnections.AddAConnection=コネクションを追加
+listconnections.uninstalled=(アンインストール)
 
 editconnection.ApacheManifoldCFEditConnection=Apache ManifoldCF:コネクションを編集
 editconnection.EditAConnection=コネクションを編集
 editconnection.ConnectionTypeColon=コネクションタイプ:
-editconnection.AuthorityColon=権限:
+editconnection.AuthorityGroupColon=権限グループ:
 editconnection.Cancel=キャンセル
 editconnection.CancelConnectionEditing=コネクションの編集をキャンセル
 editconnection.NameColon=名前:
 editconnection.DescriptionColon=説明:
-editconnection.Maxconnections=最大コネクション数
-editconnection.PerJVMColon=(/JVM):
+editconnection.MaxconnectionsColon=最大コネクション数:
 editconnection.ThrottlingColon=スロットリング:
 editconnection.Add=追加
 editconnection.BinRegularExpression=Bin正規表現
@@ -205,18 +296,18 @@
 editconnection.EditRepositoryConnection=リポジトリコネクションの編集
 editconnection.NoRepositoryConnectorsRegistered=リポジトリコネクションがありません
 
-editconnection.UNREGISTERED=UNREGISTERED
-editconnection.tab=tab
-editconnection.Delete=Delete
-editconnection.Deletethrottle=Delete throttle
+editconnection.UNREGISTERED=アンインストール
+editconnection.tab=タブ
+editconnection.Delete=削除
+editconnection.Deletethrottle=スロットルを削除
 
-viewconnection.ApacheManifoldCFViewRepositoryConnectionStatus=Apache ManifoldCF:リポジトリコネクション状態の表意
-viewconnection.ViewRepositoryConnectionStatus=リポジトリコネクション状態を表意
+viewconnection.ApacheManifoldCFViewRepositoryConnectionStatus=Apache ManifoldCF:リポジトリコネクション状態の表示
+viewconnection.ViewRepositoryConnectionStatus=リポジトリコネクション状態の表示
 viewconnection.NameColon=名前:
 viewconnection.DescriptionColon=説明:
 viewconnection.ConnectionTypeColon=コネクションタイプ:
 viewconnection.MaxConnectionsColon=最大コネクション数:
-viewconnection.AuthorityColon=権限:
+viewconnection.AuthorityGroupColon=権限グループ:
 viewconnection.ThrottlingColon=スロットリング:
 viewconnection.NoThrottles=スロットリングなし
 viewconnection.ConnectionStatusColon=コネクションの状態:
@@ -225,14 +316,14 @@
 viewconnection.Edit=編集
 viewconnection.Delete=削除
 viewconnection.DeleteConnection=コネクションを削除
-viewconnection.Deletethisconnection=Delete this connection
-viewconnection.uninstalled=(uninstalled)
-viewconnection.Connectorisnotinstalled=Connector is not installed.
-viewconnection.Threwexception=Threw exception:
-viewconnection.Binregularexpression=Bin regular expression
-viewconnection.Description=Description
-viewconnection.Maxavgfetches=Max avg fetches/min
-viewconnection.qmark=?
+viewconnection.Deletethisconnection=コネクションを削除
+viewconnection.uninstalled=(アンインストール)
+viewconnection.Connectorisnotinstalled=コネクターがインストールされていません
+viewconnection.Threwexception=例外がスローされました:
+viewconnection.Binregularexpression=Bin正規表現
+viewconnection.Description=説明
+viewconnection.Maxavgfetches=最大平均取得/分
+viewconnection.qmark=?
 
 listjobs.ApacheManifoldCFListJobDescriptions=Apache ManifoldCF:ジョブ一覧
 listjobs.JobList=ジョブ一覧
@@ -241,7 +332,7 @@
 listjobs.RepositoryConnection=リポジトリコネクション
 listjobs.ScheduleType=スケジュールタイプ
 listjobs.AddaNewJob=新しいジョブを追加
-listjobs.Addajob=Add a job
+listjobs.Addajob=ジョブを追加
 listjobs.View=表示
 listjobs.Viewjob=ジョブを表示
 listjobs.Edit=編集
@@ -264,8 +355,8 @@
 editjob.OutputConnectionColon=出力コネクション:
 editjob.NoneSelected=未指定
 editjob.PriorityColon=優先順位:
-editjob.Highest=(最大)
-editjob.Lowest=(最小)
+editjob.Highest=(最大)
+editjob.Lowest=(最小)
 editjob.StartMethodColon=起動方法:
 editjob.RepositoryConnectionColon=リポジトリコネクション:
 editjob.Continue=次へ
@@ -276,10 +367,10 @@
 editjob.ScheduleTypeColon=スケジュールタイプ:
 editjob.RescanDocumentsDynamically=動的にコンテンツを再スキャン
 editjob.ScanEveryDocumentOnce=すべてのコンテンツを1回スキャン
-editjob.RecrawlIntervalIfContinuousColon=再読込み間隔 (継続の場合):
-editjob.minutesBlankInfinity=分 (空白=無限)
-editjob.ExpirationIntervalIfContinuousColon=失効間隔 (継続の場合):
-editjob.ReseedIntervalIfContinuousColon=再シード間隔 (継続の場合):
+editjob.RecrawlIntervalIfContinuousColon=再読込み間隔 (継続の場合):
+editjob.minutesBlankInfinity=分 (空白=無限)
+editjob.ExpirationIntervalIfContinuousColon=失効間隔 (継続の場合):
+editjob.ReseedIntervalIfContinuousColon=再シード間隔 (継続の場合):
 editjob.NoScheduleSpecified=スケジュールが指定されていません
 editjob.ScheduledTimeColon=スケジュール時間:
 editjob.AnyDayOfWeek=すべての曜日
@@ -331,22 +422,22 @@
 editjob.ForcedMetadataColon=強制メタデータ:
 editjob.ParameterName=名前
 editjob.ParameterValue=値
-editjob.Deleteforcedmetadatanumber=Delete forced metadata #
-editjob.Delete=Delete
+editjob.Deleteforcedmetadatanumber=強制メタデータを削除: #
+editjob.Delete=削除
 editjob.NoForcedMetadataSpecified=強制メタデータの指定がありません
-editjob.Add=Add
-editjob.Addforcedmetadata=Add forced metadata
-editjob.ForcedMetadataNameMustNotBeNull=Forced metadata name must not be null
+editjob.Add=追加
+editjob.Addforcedmetadata=強制メタデータを追加
+editjob.ForcedMetadataNameMustNotBeNull=強制メタデータ名にはnullを入力しないでください
 editjob.st=日
 editjob.nd=日
 editjob.rd=日
 editjob.th=日
-editjob.dayofmonth=day of month
+editjob.dayofmonth=月の日
 editjob.JobInvocationColon=ジョブ起動:
-editjob.Minimal=Minimal
-editjob.Complete=Complete
+editjob.Minimal=最小
+editjob.Complete=完全
 
-editjob.tab=tab
+editjob.tab=タブ
 
 showjobstatus.ApacheManifoldCFStatusOfAllJobs=Apache ManifoldCF:すべてのジョブの状態
 showjobstatus.Name=名前
@@ -361,38 +452,38 @@
 showjobstatus.PleaseTryAgainLater=保守処理中です。少々お待ちください。
 showjobstatus.StatusOfJobs=ジョブの状態
 
-showjobstatus.Notyetrun=Not yet run
-showjobstatus.Running=Running
-showjobstatus.Runningnoconnector=Running, no connector
-showjobstatus.Aborting=Aborting
-showjobstatus.Restarting=Restarting
-showjobstatus.Stopping=Stopping
-showjobstatus.Resuming=Resuming
-showjobstatus.Paused=Paused
-showjobstatus.Done=Done
-showjobstatus.Waiting=Waiting
-showjobstatus.Startingup=Starting up
-showjobstatus.Cleaningup=Cleaning up
-showjobstatus.Terminating=Terminating
-showjobstatus.Endnotification=End notification
-showjobstatus.ErrorColon=Error:
-showjobstatus.Unknown=Unknown
-showjobstatus.Notstarted=Not started
-showjobstatus.Aborted=Aborted
-showjobstatus.Neverrun=Never run
-showjobstatus.Start=Start
-showjobstatus.Startminimal=Start minimal
-showjobstatus.Restart=Restart
-showjobstatus.Restartminimal=Restart minimal
-showjobstatus.Pause=Pause
-showjobstatus.Abort=Abort
-showjobstatus.Resume=Resume
-showjobstatus.Startjob=Start job
-showjobstatus.minimally=minimally
-showjobstatus.Restartjob=Restart job
-showjobstatus.Pausejob=Pause job
-showjobstatus.Abortjob=Abort job
-showjobstatus.Resumejob=Resume job
+showjobstatus.Notyetrun=未実行
+showjobstatus.Running=実行中
+showjobstatus.Runningnoconnector=実行中, コネクターはありません
+showjobstatus.Aborting=中断中
+showjobstatus.Restarting=再起動中
+showjobstatus.Stopping=停止中
+showjobstatus.Resuming=再開中
+showjobstatus.Paused=一時停止
+showjobstatus.Done=完了
+showjobstatus.Waiting=待機
+showjobstatus.Startingup=開始中
+showjobstatus.Cleaningup=クリーンアップ中
+showjobstatus.Terminating=終了
+showjobstatus.Endnotification=終了通知
+showjobstatus.ErrorColon=エラー:
+showjobstatus.Unknown=不明
+showjobstatus.Notstarted=開始されていません
+showjobstatus.Aborted=中断されました
+showjobstatus.Neverrun=実行されていません
+showjobstatus.Start=開始
+showjobstatus.Startminimal=最小で開始
+showjobstatus.Restart=再スタート
+showjobstatus.Restartminimal=最小で再スタート
+showjobstatus.Pause=停止
+showjobstatus.Abort=中断
+showjobstatus.Resume=再開
+showjobstatus.Startjob=ジョブを開始
+showjobstatus.minimally=最小
+showjobstatus.Restartjob=ジョブを再スタート
+showjobstatus.Pausejob=ジョブを停止
+showjobstatus.Abortjob=ジョブを中断
+showjobstatus.Resumejob=ジョブを再開
 
 documentstatus.ApacheManifoldCFDocumentStatus=Apache ManifoldCF:コンテンツの状態
 documentstatus.DocumentStatus=コンテンツの状態
@@ -400,7 +491,7 @@
 documentstatus.TimeOffsetFromNowMinutes=今から経過時間(分):
 documentstatus.DocumentState=コンテンツの状態:
 documentstatus.DocumentIdentifierMatch=コンテンツID:
-documentstatus.Go=実効
+documentstatus.Go=実行
 documentstatus.ExecuteThisQuery=クエリーを実行
 documentstatus.Continue=次へ
 documentstatus.PleaseSelectAtLeastOneJob=ジョブを一つ以上選択してください
@@ -523,6 +614,7 @@
 simplereport.PreviousPage=前のページ
 simplereport.NextPage=次へ
 simplereport.Next=次へ
+simplereport.Rows=行:
 simplereport.RowsPerPage=行/ページ:
 simplereport.PleaseSelectAConnection=コネクションを選択してください
 simplereport.PleaseTryAgainLater=保守処理中です。少々お待ちください。
@@ -564,6 +656,7 @@
 maxactivityreport.PreviousPage=前のページ
 maxactivityreport.NextPage=次のページ
 maxactivityreport.Next=次へ
+maxactivityreport.Rows=行:
 maxactivityreport.RowsPerPage=行/ページ:
 maxactivityreport.PleaseSelectAConnection=コネクションを選択してください
 maxactivityreport.PleaseTryAgainLater=保守処理中です。少々お待ちください。
@@ -614,7 +707,7 @@
 maxbandwidthreport.PleaseSelectAConnection=コネクションを選択してください
 maxbandwidthreport.PleaseTryAgainLater=保守処理中です。少々お待ちください。
 maxbandwidthreport.HighestBandwidth2=最大バンド幅
-maxbandwidthreport.Rows=行:
+maxbandwidthreport.Rows=行:
 maxbandwidthreport.Next=次へ
 maxbandwidthreport.EnterALegalNumberForRowsPerPage=正しいページ毎の行数を入力してください
 maxbandwidthreport.EnterALegalIntervalSizeInMinutes=正しい間隔値を入力してください(分)
@@ -701,15 +794,15 @@
 viewjob.StartMethodColon=開始メソッド:
 viewjob.ForcedMetadataColon=強制メタデータ:
 viewjob.NoForcedMetadata=強制メタデータなし
-viewjob.DeleteJobConfirmation=Warning: Deleting this job will remove all\nassociated documents from the index.\nDo you want to proceed?
-viewjob.Notapplicable=Not applicable
-viewjob.Rescandocumentsdynamically=Rescan documents dynamically
-viewjob.Infinity=Infinity
-viewjob.minutes=minutes
-viewjob.Scaneverydocumentonce=Scan every document once
-viewjob.Startatbeginningofschedulewindow=Start at beginning of schedule window
-viewjob.Startinsideschedulewindow=Start inside schedule window
-viewjob.Dontautomaticallystart=Don't automatically start
+viewjob.DeleteJobConfirmation=警告: このジョブを削除すると、関連するドキュメントが全てインデックスから削除されます。\nよろしいですか?
+viewjob.Notapplicable=該当なし
+viewjob.Rescandocumentsdynamically=ドキュメントを動的に再スキャンします
+viewjob.Infinity=無限
+viewjob.minutes=分
+viewjob.Scaneverydocumentonce=一度全てのドキュメントをスキャンする
+viewjob.Startatbeginningofschedulewindow=スケジュール・ウィンドウのはじめにスタートします
+viewjob.Startinsideschedulewindow=スケジュール・ウィンドウの内部でスタートします
+viewjob.Dontautomaticallystart=自動的にスタートしない
 viewjob.Anydayoftheweek=すべての曜日
 viewjob.Sundays=日
 viewjob.Mondays=月
@@ -718,8 +811,8 @@
 viewjob.Thursdays=木
 viewjob.Fridays=金
 viewjob.Saturdays=土
-viewjob.oneveryhour=on every hour
-viewjob.atmidnight=at midnight
+viewjob.oneveryhour=毎時
+viewjob.atmidnight=真夜中に
 viewjob.at=at
 viewjob.am=午前
 viewjob.pm=午後
@@ -738,22 +831,32 @@
 viewjob.October=10月
 viewjob.November=11月
 viewjob.December=12月
+viewjob.onanydayofthemonth=すべての日
 viewjob.onthe1stofthemonth=on the 1st of the month
 viewjob.onthe=on the
 viewjob.ofthemonth=of the month
 viewjob.inyears=in year(s)
-viewjob.Nolimit=No limit
-viewjob.Unlimited=Unlimited
-viewjob.Deleteunreachabledocuments=Delete unreachable documents
-viewjob.Nodeletesfornow=No deletes, for now
-viewjob.Nodeletesforever=No deletes, forever
+viewjob.Nolimit=限界無し
+viewjob.Unlimited=無制限
+viewjob.Deleteunreachabledocuments=入手できないドキュメントを削除します
+viewjob.Nodeletesfornow=削除は今のところありません
+viewjob.Nodeletesforever=削除はありません
 viewjob.st=日
 viewjob.nd=日
 viewjob.rd=日
 viewjob.th=日
-viewjob.JobInvocationColon=Job invocation:
-viewjob.Minimal=Minimal
-viewjob.Complete=Complete
+viewjob.JobInvocationColon=ジョブ起動:
+viewjob.Minimal=最小
+viewjob.Complete=完全
+
+viewgroup.ApacheManifoldCFViewGroup=Apache ManifoldCF: View Authority Group
+viewgroup.ViewAuthorityGroup=View Authority Group
+viewgroup.NameColon=Name:
+viewgroup.DescriptionColon=Description:
+viewgroup.EditThisAuthorityGroup=Edit this authority group
+viewgroup.Edit=Edit
+viewgroup.DeleteThisAuthorityGroup=Delete this authority group
+viewgroup.Delete=Delete
 
 viewauthority.ViewAuthorityConnectionStatus=権限コネクション状態の表示
 viewauthority.NameColon=名前:
@@ -766,9 +869,33 @@
 viewauthority.EditThisAuthorityConnection=権限コネクションを編集
 viewauthority.Delete=削除
 viewauthority.DeleteThisAuthorityConnection=権限コネクションを削除
-viewauthority.ApacheManifoldCFViewAuthorityConnectionStatus=Apache ManifoldCF: 権限コネクション状態の表示
-viewauthority.DeleteConnection=Delete connection
-viewauthority.Connectorisnotinstalled=Connector is not installed.
-viewauthority.uninstalled=(uninstalled)
-viewauthority.Threwexception=Threw exception:
-viewauthority.qmark=?
+viewauthority.ApacheManifoldCFViewAuthorityConnectionStatus=Apache ManifoldCF: 権限コネクション状態の表示
+viewauthority.DeleteConnection=コネクションを削除
+viewauthority.Connectorisnotinstalled=コネクターがインストールされていません
+viewauthority.uninstalled=(アンインストール)
+viewauthority.Threwexception=例外がスローされました:
+viewauthority.qmark=?
+viewauthority.PrerequisiteUserMappingColon=前提ユーザーマッピング:
+viewauthority.NoPrerequisites=条件無し
+viewauthority.AuthorizationDomainColon=Authorization domain:
+viewauthority.AuthorityGroupColon=Authority group:
+
+viewmapper.ApacheManifoldCFViewMappingConnectionStatus=Apache ManifoldCF: マッピングコネクション状態の表示
+viewmapper.DeleteConnection=コネクションを削除
+viewmapper.ViewMappingConnectionStatus=マッピングコネクション状態の表示
+viewmapper.uninstalled=(アンインストール)
+viewmapper.Connectorisnotinstalled=コネクターがインストールされていません
+viewmapper.Threwexception=例外がスローされました:
+viewmapper.NameColon=名前:
+viewmapper.DescriptionColon=説明:
+viewmapper.MapperTypeColon=マッピングタイプ:
+viewmapper.MaxConnectionsColon=最大コネクション:
+viewmapper.ConnectionStatusColon=コネクション状態:
+viewmapper.Refresh=更新
+viewmapper.Edit=編集
+viewmapper.EditThisMappingConnection=マッピングコネクションを編集
+viewmapper.Delete=削除
+viewmapper.DeleteThisMappingConnection=マッピングコネクションを削除
+viewmapper.qmark=?
+viewmapper.PrerequisiteUserMappingColon=前提ユーザーマッピング:
+viewmapper.NoPrerequisites=条件無し
diff --git a/lib-license/LICENSE.txt b/lib-license/LICENSE.txt
index dff8488..dfb9c5e 100644
--- a/lib-license/LICENSE.txt
+++ b/lib-license/LICENSE.txt
@@ -293,6 +293,36 @@
 This product includes a jstl-impl-1.2.jar.
 License: Common Development and Distribution License (CDDL) v1.0 (https://glassfish.dev.java.net/public/CDDLv1.0.html)
 
+This product includes a dropbox-client-1.5.3.jar.
+License: MIT license (http://opensource.org/licenses/MIT).
+
+This product includes a json-simple-1.1.jar.
+License: Apache 2 (http://www.apache.org/licenses/LICENSE-2.0.txt)
+
+This product includes a jackson-core-2.1.3.jar.
+License: Dual license; we choose to distribute under Apache 2 (http://www.apache.org/licenses/LICENSE-2.0.txt)
+
+This product includes a google-api-client-1.14.1-beta.jar.
+License: Apache 2 (http://www.apache.org/licenses/LICENSE-2.0.txt)
+
+This product includes a google-oauth-client-1.14.1-beta.jar.
+License: Apache 2 (http://www.apache.org/licenses/LICENSE-2.0.txt)
+
+This product includes a google-api-services-drive-v2-rev64-1.14.1-beta.jar.
+License: Apache 2 (http://www.apache.org/licenses/LICENSE-2.0.txt)
+
+This product includes a google-http-client-1.14.1-beta.jar.
+License: Apache 2 (http://www.apache.org/licenses/LICENSE-2.0.txt)
+
+This product includes a google-http-client-jackson2-1.14.1-beta.jar.
+License: Apache 2 (http://www.apache.org/licenses/LICENSE-2.0.txt)
+
+This product includes a guava.jar.
+License: Apache 2  (http://www.apache.org/licenses/LICENSE-2.0.txt)
+
 This product may include pdf files that embed IPA-licensed fonts.
 License: IPA Font License Agreement v1.0 (http://ossipedia.ipa.go.jp/ipafont/index.html#LicenseEng)
 
diff --git a/pom.xml b/pom.xml
index bf641b2..bfd50af 100644
--- a/pom.xml
+++ b/pom.xml
@@ -29,7 +29,7 @@
 
   <groupId>org.apache.manifoldcf</groupId>
   <artifactId>mcf-parent</artifactId>
-  <version>1.2-SNAPSHOT</version>
+  <version>1.5-SNAPSHOT</version>
 
   <name>ManifoldCF</name>
   <packaging>pom</packaging>
@@ -40,16 +40,18 @@
     <junit.version>4.8.2</junit.version>
     <postgresql.version>9.1-901.jdbc4</postgresql.version>
     <mysql.version>5.1.18</mysql.version>
-    <hsqldb.version>2.2.9</hsqldb.version>
-    <derby.version>10.8.2.2</derby.version>
+    <hsqldb.version>2.3.1</hsqldb.version>
+    <derby.version>10.10.1.1</derby.version>
     <jetty.version>7.5.4.v20111024</jetty.version>
     <commons-codec.version>1.5</commons-codec.version>
     <commons-io.version>2.1</commons-io.version>
     <commons-logging.version>1.1.1</commons-logging.version>
     <commons-collections.version>3.2.1</commons-collections.version>
     <commons-fileupload.version>1.2.2</commons-fileupload.version>
-    <httpcomponent.version>4.2.4</httpcomponent.version>
-    <solr.version>4.3.0</solr.version>
+    <httpcomponent.httpclient.version>4.2.6</httpcomponent.httpclient.version>
+    <httpcomponent.httpcore.version>4.2.5</httpcomponent.httpcore.version>
+    <httpcomponent.httpmime.version>4.2.6</httpcomponent.httpmime.version>
+    <solr.version>4.6.0</solr.version>
     <commons-el.version>1.0</commons-el.version>
     <commons-lang.version>2.6</commons-lang.version>
     <xalan.version>2.7.1</xalan.version>
@@ -64,6 +66,8 @@
     <json.version>20090211</json.version>
     <velocity.version>1.7</velocity.version>
     <slf4j.version>1.6.6</slf4j.version>
+    <jaxb.version>2.2.6</jaxb.version>
+    <zookeeper.version>3.4.5</zookeeper.version>
   </properties>
 
   <modules>
@@ -90,8 +94,8 @@
         <artifactId>maven-compiler-plugin</artifactId>
         <version>2.3.2</version>
         <configuration>
-          <source>1.6</source>
-          <target>1.6</target>
+          <source>1.7</source>
+          <target>1.7</target>
           <fork>true</fork>
           <meminitial>128m</meminitial>
           <maxmem>512m</maxmem>
diff --git a/site/build.xml b/site/build.xml
index 27ec3b6..0318f1a 100644
--- a/site/build.xml
+++ b/site/build.xml
@@ -274,8 +274,9 @@
       <mkdir dir="fonts"/>
       <!-- http://ossipedia.ipa.go.jp/ipafont/download.html?ruleagreement=%E5%90%8C%E6%84%8F%E3%81%99%E3%82%8B%2FAccept -->
       <!-- get src="http://ossipedia.ipa.go.jp/ipafont/IPAGTTC00303.php" dest="fonts/IPAGTTC00303.zip"/ -->
-      <get src="http://info.openlab.ipa.go.jp/ipafont/fontdata/IPAGTTC00303.zip" dest="fonts"/>
+      <!-- get src="http://info.openlab.ipa.go.jp/ipafont/fontdata/IPAGTTC00303.zip" dest="fonts"/ -->
       <!-- get src="http://ossipedia.ipa.go.jp/ipafont/ipafont/IPAGTTC00303.zip" dest="fonts"/ -->
+      <get src="http://jaist.dl.sourceforge.jp/ipafonts/51867/IPAGTTC00303.zip" dest="fonts"/>
     </target>
     
     <target name="download-dependencies" depends="download-dejavu-fonts,download-ipa-fonts">
diff --git a/site/src/documentation/content/xdocs/en_US/concepts.xml b/site/src/documentation/content/xdocs/en_US/concepts.xml
index 1de9c84..a7b8033 100644
--- a/site/src/documentation/content/xdocs/en_US/concepts.xml
+++ b/site/src/documentation/content/xdocs/en_US/concepts.xml
@@ -53,15 +53,45 @@
       <section>
         <title>ManifoldCF security model</title>
         <p></p>
-        <p>The ManifoldCF security model is based loosely on the standard authorization concepts and hierarchies found in Microsoft's Active Directory.  Active Directory is quite common in the kinds of environments where data repositories exist that are ripe for indexing.  Active Directory's authorization model is also easily used in a general way to represent authorization for a huge variety of third-party content repositories.</p>
+        <p>The ManifoldCF security model is based loosely on the standard authorization concepts and hierarchies found in Microsoft's Active Directory.  Active Directory is quite
+          common in the kinds of environments where data repositories exist that are ripe for indexing.  Active Directory's authorization model is also easily used in a general way to 
+          represent authorization for a huge variety of third-party content repositories.</p>
         <p></p>
-        <p>ManifoldCF defines a concept of an <em>access token</em>.  An access token, to ManifoldCF, is a string which is meaningful only to a specific connector or connectors.  This string describes the ability of a user to view (or not view) some set of documents.  For documents protected by Active Directory itself, an access token would be an Active Directory SID (e.g. "S-1-23-4-1-45").  But, for example, for documents protected by Livelink a wholly different string would be used.</p>
+        <p>ManifoldCF defines a concept of an <em>access token</em>.  An access token, to ManifoldCF, is a string which is meaningful only to a specific connector or
+          connectors.  This string describes the ability of a user to view (or not view) some set of documents.  For documents protected by Active Directory itself, an access token
+          would be an Active Directory SID (e.g. "S-1-23-4-1-45").  But, for example, for documents protected by Livelink a wholly different string would be used.</p>
         <p></p>
-        <p>In the ManifoldCF security model, it is the job of an <em>authority</em> to provide a list of access tokens for a given searching user.  Multiple authorities cooperate in that each one can add to the list of access tokens describing a given user's security.  The resulting access tokens are handed to the search engine as part of every search request, so that the search engine may properly exclude documents that the user is not allowed to see.</p>
+        <p>In the ManifoldCF security model, it is the job of an <em>authority</em> to provide a list of access tokens for a given searching user.  Multiple authorities cooperate
+          in that each one can add to the list of access tokens describing a given user's security.  A user is described in terms of a set of <em>authorization domains</em>
+          and user name tuples.  Any given authority will provide access tokens for a user name corresponding to one authorization domain.  For example,
+          an authority that understands Facebook users would only respond to a Facebook user name.  Access tokens from all applicable authorities are combined into the final list that is handed to the search engine as part of every
+          search request, so that the search engine may properly exclude documents that the user is not allowed to see.</p>
         <p></p>
-        <p>When document indexing is done, therefore, it is the job of the crawler to hand access tokens to the search engine, so that it may categorize the documents properly according to their accessibility.  Note that the access tokens so provided are meaningful only within the space of the governing authority.  Access tokens can be provided as "grant" tokens, or as "deny" tokens.  Finally, there are multiple levels of tokens, which correspond to Active Directory's concepts of "share" security, "directory" security, or "file" security.  (The latter concepts are rarely used except for documents that come from Windows or Samba systems.)</p>
+        <p>When document indexing is done, therefore, it is the job of the crawler to hand access tokens to the search engine, so that it may categorize the documents properly
+          according to their accessibility.  The access tokens the crawler attaches to a document are meaningful only within the space of the governing <em>authority group</em>.  An
+          authority group describes a set of authorities which all can cooperate to provide access tokens for a single given document.  Each authority belongs
+          to exactly one authority group.  Authority groups serve to separate access tokens into different spaces so that they cannot interfere with one another.</p>
         <p></p>
-        <p>Once all these documents and their access tokens are handed to the search engine, it is the search engine's job to enforce security by excluding inappropriate documents from the search results.  For Solr 1.5, this infrastructure has been submitted in jira ticket SOLR-1895, found <a href="https://issues.apache.org/jira/browse/SOLR-1895">here</a>, where you can download a SearchComponent plug-in and simple instructions for setting up your copy of Solr to enforce ManifoldCF's model of document security.  Bear in mind that this plug-in is still not a complete solution, as it requires an authenticated user name to be passed to it from some upstream source, possibly a JAAS authenticator within an application server framework.</p>
+        <p>For example, say that you want to crawl documents from a LiveLink repository, as well as from a Windows shared drive.
+          You will therefore have two kinds of documents that are each secured in an entirely different way.  There is a LiveLink authority connection, which provides LiveLink
+          access tokens, and there is an Active Directory authority connection, which provides Windows access tokens.  Now, you don't want there to be any chance
+          that a LiveLink access token could be confused with an Active Directory SID, so the way to prevent that in ManifoldCF is to create two distinct
+          authority groups, each of which provides access tokens meant for specific kinds of repository documents.  Thus, documents secured by Active
+          Directory SIDs should be indexed against an Active Directory authority group, and documents secured by LiveLink access tokens should be indexed against a LiveLink
+          authority group.  Finally, the Active Directory authority connection should belong to the Active Directory authority group, and the LiveLink authority connection should
+          belong to the LiveLink authority group.</p>
+        <p></p>
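+        <p>One way to picture the effect of authority groups is the following sketch.  The class and
+          method names here are invented purely for illustration, and are not part of the ManifoldCF API;
+          the point is only that a token qualified by one group name can never collide with a token
+          qualified by another:</p>
+        <source><![CDATA[
+// Illustrative sketch only -- hypothetical names, not the ManifoldCF API.
+import java.util.ArrayList;
+import java.util.List;
+
+public class TokenListSketch {
+  /** Qualify each raw token with the name of the authority group it came from. */
+  static List<String> qualify(String groupName, String[] rawTokens) {
+    List<String> qualified = new ArrayList<String>();
+    for (String token : rawTokens) {
+      qualified.add(groupName + ":" + token);
+    }
+    return qualified;
+  }
+
+  public static void main(String[] args) {
+    List<String> searchTokens = new ArrayList<String>();
+    // An Active Directory SID and a LiveLink right can never be confused,
+    // because each is qualified by a different authority group name.
+    searchTokens.addAll(qualify("ADGroup", new String[] {"S-1-23-4-1-45"}));
+    searchTokens.addAll(qualify("LiveLinkGroup", new String[] {"1001", "2002"}));
+    System.out.println(searchTokens);
+  }
+}
+]]></source>
+        <p></p>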
+        <p>In addition to specifying the correct authority group, access tokens can be attached to documents as "grant" tokens, or as "deny" tokens.
+          "Grant" tokens provide access, "deny" tokens restrict it.  "Deny" tokens, if matched, always win over "grant" tokens.
+          And finally, there are multiple levels of tokens, which correspond to Active Directory's concepts
+          of "share" security, specific "directory" security, or "file" security.  (The latter concepts are rarely used except for documents that come from
+          Windows or Samba systems.)  Every level provided must agree that the document is visible before the document can appear in search
+          results.</p>
+        <p></p>
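+        <p>The visibility rule just described can be modeled with a short sketch.  This is hypothetical
+          code, not the actual Solr or ElasticSearch plugin implementation: within each level a matching
+          "deny" token always wins, and every level supplied for the document must grant access before
+          the document can appear in the results:</p>
+        <source><![CDATA[
+// Illustrative model of the visibility rule -- not the actual search engine plugin code.
+import java.util.Arrays;
+import java.util.List;
+
+public class VisibilitySketch {
+  /** One security level ("share", "directory", or "file") with its grant and deny tokens. */
+  static class Level {
+    final List<String> grantTokens;
+    final List<String> denyTokens;
+    Level(List<String> grantTokens, List<String> denyTokens) {
+      this.grantTokens = grantTokens;
+      this.denyTokens = denyTokens;
+    }
+    /** The level grants access only if no deny token matches and at least one grant token does. */
+    boolean allows(List<String> userTokens) {
+      for (String token : userTokens) {
+        if (denyTokens.contains(token)) return false;   // a matching deny token always wins
+      }
+      for (String token : userTokens) {
+        if (grantTokens.contains(token)) return true;
+      }
+      return false;
+    }
+  }
+
+  /** Every level supplied for the document must agree before it can appear in search results. */
+  static boolean documentVisible(List<Level> levels, List<String> userTokens) {
+    for (Level level : levels) {
+      if (!level.allows(userTokens)) return false;
+    }
+    return true;
+  }
+
+  public static void main(String[] args) {
+    Level file = new Level(Arrays.asList("S-1-23-4-1-45"), Arrays.asList("S-1-23-4-9-99"));
+    Level share = new Level(Arrays.asList("S-1-23-4-1-45"), Arrays.asList("S-1-23-4-9-99"));
+    List<String> userTokens = Arrays.asList("S-1-23-4-1-45");
+    System.out.println(documentVisible(Arrays.asList(file, share), userTokens));   // prints: true
+  }
+}
+]]></source>
+        <p></p>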
+        <p>Once all these documents and their access tokens are handed to the search engine, it is the search engine's job to enforce security by excluding inappropriate documents
+          from the search results.  For Solr and for ElasticSearch, this infrastructure has been included in ManifoldCF releases as a Solr plugin (both 3.x and 4.x varieties) and an
+          ElasticSearch plugin.  Bear in mind that these plugins are still not a complete solution, as they require one or more authenticated user
+          names to be passed to them from some upstream source, possibly a JAAS authenticator within an application server framework.</p>
         <p></p>
       </section>
       <section>
@@ -70,20 +100,25 @@
         <section>
           <title>Connectors</title>
           <p></p>
-          <p>ManifoldCF defines three different kinds of connectors.  These are:</p>
+          <p>ManifoldCF defines four different kinds of connectors.  These are:</p>
           <p></p>
           <ul>
+            <li>User mapping connectors</li>
             <li>Authority connectors</li>
             <li>Repository connectors</li>
             <li>Output connectors</li>
           </ul>
           <p></p>
-          <p>All connectors share certain characteristics.  First, they are pooled.  This means that ManifoldCF keeps configured and connected instances of a connector around for a while, and has the ability to limit the total number of such instances to within some upper limit.  Connector implementations have specific methods in them for managing their existence in the pools that ManifoldCF keeps them in.  Second, they are configurable.  The configuration description for a connector is an XML document, whose precise format is determined by the connector implementation.  A configured connector instance is called a <em>connection</em>, by common ManifoldCF convention.</p>
+          <p>All connectors share certain characteristics.  First, they are pooled.  This means that ManifoldCF keeps configured and connected instances of a connector around for
+            a while, and has the ability to limit the total number of such instances to within some upper limit.  Connector implementations have specific methods in them for managing
+            their existence in the pools that ManifoldCF keeps them in.  Second, they are configurable.  The configuration description for a connector is an XML document, whose precise
+            format is determined by the connector implementation.  A configured connector instance is called a <em>connection</em>, by common ManifoldCF convention.</p>
           <p></p>
           <p>The function of each type of connector is described below.</p>
           <p></p>
           <table>
             <tr><th>Connector type</th><th>Function</th></tr>
+            <tr><td>User mapping connector</td><td>Maps a user name to another (equivalent) user name, typically by means of a regular expression mechanism, or by looking the name up in a repository</td></tr>
             <tr><td>Authority connector</td><td>Furnishes a standard way of mapping a user name to access tokens that are meaningful for a given type of repository</td></tr>
             <tr><td>Repository connector</td><td>Fetches documents from a specific kind of repository, such as SharePoint or off the web</td></tr>
             <tr><td>Output connector</td><td>Pushes document ingestion requests and deletion requests to a specific kind of back end search engine or other entity, such as Lucene</td></tr>
@@ -93,17 +128,27 @@
         <section>
           <title>Connections</title>
           <p></p>
-          <p>As described above, a <em>connection</em> is a connector implementation plus connector-specific configuration information.  A user can define a connection of all three types in the crawler UI.</p>
+          <p>As described above, a <em>connection</em> is a connector implementation plus connector-specific configuration information.  A user can define a connection of any of the
+            four types in the crawler UI.</p>
           <p></p>
-          <p>The kind of information included in the configuration data for a connector typically describes the "how", as opposed to the "what".  For example, you'd configure a LiveLink connection by specifying how to talk to the LiveLink server.  You would <strong>not</strong> include information about which documents to select in such a configuration.</p>
+          <p>The kind of information included in the configuration data for a connector typically describes the "how", as opposed to the "what".  For example, you'd configure a
+            LiveLink connection by specifying how to talk to the LiveLink server.  You would <strong>not</strong> include information about which documents to select in such a
+            configuration.</p>
           <p></p>
-          <p>There is one difference between how you define a <em>repository connection</em>, vs. how you would define an <em>authority connection</em> or <em>output connection</em>.  The difference is that you must specify a governing authority connection for your repository connection.  This is because <strong>all</strong> documents ingested by ManifoldCF need to include appropriate access tokens, and those access tokens are specific to the governing authority.</p>
+          <p>There is one difference between how you define a <em>repository connection</em> and how you define an <em>authority connection</em>, <em>output
+            connection</em>, or <em>mapping connection</em>.  The difference is that you must specify a governing authority group for your repository connection.  This is
+            because <strong>all</strong> documents ingested by ManifoldCF need to include appropriate access tokens, and those access tokens are meaningful only within the governing authority group.</p>
+          <p></p>
+          <p>Another difference in how you define an <em>authority connection</em> or <em>mapping connection</em>, vs. other connections, is that you can specify a prerequisite
+            <em>mapping connection</em> that must be applied beforehand.  This means you can chain multiple user mappings in a defined sequence before the authority is
+            invoked.</p>
           <p></p>
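+          <p>For example, here is a simplified sketch, in plain Java rather than the actual mapping connector
+            implementation, of two regular-expression style mappings applied in their defined order before the
+            authority sees the name (the user names and expressions are made up for illustration):</p>
+          <source><![CDATA[
+// Simplified illustration of a user-mapping chain -- not the ManifoldCF mapping connector API.
+public class MappingChainSketch {
+  public static void main(String[] args) {
+    String userName = "jsmith@EXAMPLE.COM";
+    // First mapping: strip the realm suffix.
+    userName = userName.replaceAll("@.*$", "");
+    // Second mapping (which lists the first as its prerequisite): add the Windows
+    // domain form that the downstream authority expects.
+    userName = "EXAMPLE\\" + userName;
+    System.out.println(userName);   // prints: EXAMPLE\jsmith
+  }
+}
+]]></source>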
         </section>
         <section>
           <title>Jobs</title>
           <p></p>
-          <p>A <em>job</em> in ManifoldCF parlance is a description of some kind of synchronization that needs to occur between a specified repository connection and a specified output connection.  A job includes the following:</p>
+          <p>A <em>job</em> in ManifoldCF parlance is a description of some kind of synchronization that needs to occur between a specified repository connection and a specified
+            output connection.  A job includes the following:</p>
           <p></p>
           <ul>
             <li>A verbal description</li>
@@ -114,7 +159,26 @@
             <li>A schedule for when the job will run: either within specified time windows, or on demand</li>
           </ul>
           <p></p>
-          <p>Jobs are allowed to share the same repository connection, and thus they can overlap in the set of documents they describe.  ManifoldCF permits this situation, although when it occurs it is probably an accident.</p>
+          <p>Jobs are allowed to share the same repository connection, and thus they can overlap in the set of documents they describe.  ManifoldCF permits this situation, although 
+            when it occurs it is probably an accident.</p>
+        </section>
+        <section>
+          <title>Authorization domains</title>
+          <p></p>
+          <p>ManifoldCF supports a federated concept of a user.  The same user, for instance, may have one login name for Facebook, another for Windows,
+            and yet another for Google.  We can describe this user as having three different authorization domains: "Facebook", "Windows", and "Google".</p>
+          <p>In ManifoldCF, each authority understands user names or ids from one specific authorization domain.  This allows ManifoldCF to be configured
+            so that access tokens generated from multiple independent sources are amalgamated, even if the incoming user names differ from source to
+            source.</p>
+        </section>
+        <section>
+          <title>Authority groups</title>
+          <p></p>
+          <p>ManifoldCF groups authority connections together in groups, so that multiple authorities can furnish security for a single document.  An authority
+            group is nothing more than a name and a description that is referenced by authority connections that are part of that group, and is referenced also
+            by repository connections that wish to be secured by that group.  For most simple repositories, there is one authority group per authority.  But
+            repositories capable of federated security (e.g. SharePoint with Claim Space support) can use multiple authorities to describe security for a single
+            document.  Authority groups allow configuration of the appropriate many-to-many relationship for this situation.</p>
         </section>
       </section>
     </section>
diff --git a/site/src/documentation/content/xdocs/en_US/end-user-documentation.xml b/site/src/documentation/content/xdocs/en_US/end-user-documentation.xml
index 7f0c009..4f37adf 100644
--- a/site/src/documentation/content/xdocs/en_US/end-user-documentation.xml
+++ b/site/src/documentation/content/xdocs/en_US/end-user-documentation.xml
@@ -34,7 +34,13 @@
             </p>
             <p>The ManifoldCF UI has been tested with Firefox and various incarnations of Internet Explorer.  If you use another browser, there is a small chance that the UI
                   will not work properly.  Please let your system integrator know if you find any browser incompatibility problems.</p>
-            <p>When you do manage to enter the Framework user interface the first time, you should see a screen that looks something like this:</p>
+            <p>When you enter the Framework user interface the first time, you will first be asked to log in:</p>
+            <br/><br/>
+            <figure src="images/en_US/login.PNG" alt="Login Screen" width="80%"/>
+            <br/><br/>
+            <p>Enter the login user name and password for your system.  By default, the user name is "admin" and the password is "admin", although your
+                  system administrator can (and should) change this.  Then, click the "Login" button.  If you entered the correct credentials, you should see a
+                  screen that looks something like this:</p>
             <br/><br/>
             <figure src="images/en_US/welcome-screen.PNG" alt="Welcome Screen" width="80%"/>
             <br/><br/>
@@ -84,69 +90,60 @@
                        If this happens, you will need to correct the problem, by either fixing your infrastructure, or by editing the connection configuration appropriately, before the output connection
                        will work correctly.</p>
             </section>
-            <section id="authorities">
-                <title>Defining Authority Connections</title>
-                <p>The Framework UI's left-side menu contains a link for listing authority connections.  An authority connection is a connection to a system that defines a particular security environment.
-                       For example, if you want to index some documents that are protected by Active Directory, you would need to configure an Active Directory authority connection.</p>
-                <p>You may not need an authority if you do not mind that portions of all the documents you want to index are visible to everyone.  For web, RSS, and Wiki crawling, this might be the
-                       situation.  Most other repositories have some security mechanism, however.</p>
-                <p>You should define your authority connections <b>before</b> setting up your repository connections.  While it is possible to change the relationship between a repository connection
-                       and its authority after-the-fact, in practice such changes may cause many documents to require reindexing.</p>
-                <p>You can create an authority connection by clicking the "List Authority Connections" link in the left-side navigation menu.  When you do this, the
+
+            <section id="groups">
+                <title>Defining Authority Groups</title>
+                <p>The Framework UI's left-side menu contains a link for listing authority groups.  An authority group is a collection of authorities that all cooperate to furnish
+                      security for each document from repositories that you select.  For example, a SharePoint 2010 repository with the Claim Space feature enabled may
+                      contain documents that are authorized by SharePoint itself, by Active Directory, and by others.  Documents from such a SharePoint
+                      repository would therefore refer to an authority group which would have a SharePoint native authority, a SharePoint Active Directory authority,
+                      and other SharePoint claim space authorities as members.  But most of the time, an authority group will consist of a single authority that is appropriate for
+                      the repository the authority group is meant to secure.</p>
+                <p>Since you need to select an authority group when you define an authority connection, you should define your authority groups <b>before</b> setting
+                      up your authority connections.  If you don't have any authority groups defined, you cannot create authority connections at all.  But if you select the
+                      wrong authority group when setting up your authority connection, you can go back later and change your selection.</p>
+                <p>It is also a good idea to define your authority groups before creating any repository connections, since each repository connection will also need to
+                      refer back to an authority group in order to secure documents.  While it is possible to change the relationship between a repository connection
+                       and its authority group after-the-fact, in practice such changes may cause many documents to be reindexed the next time an associated job is run.</p>
+                <p>You can create an authority group by clicking the "List Authority Groups" link in the left-side navigation menu.  When you do this, the
                        following screen will appear:</p>
                 <br/><br/>
-                <figure src="images/en_US/list-authority-connections.PNG" alt="List Authority Connections" width="80%"/>
+                <figure src="images/en_US/list-authority-groups.PNG" alt="List Authority Groups" width="80%"/>
                 <br/><br/>
-                <p>On a freshly created system, there may well be no existing authority connections listed.  If there are already authority connections, they will be listed on this screen, along with links
-                       that allow you to view, edit, or delete them.  To create a new authority connection, click the "Add a new connection" link at the bottom.  The following screen will then appear:</p>
+                <p>If there are already authority groups, they will be listed on this screen, along with links that allow you to view, edit, or delete them.  To create a new
+                      authority group, click the "Add a new authority group" link at the bottom.  The following screen will then appear:</p>
                 <br/><br/>
-                <figure src="images/en_US/add-new-authority-connection-name.PNG" alt="Add New Authority Connection, specify Name" width="80%"/>
+                <figure src="images/en_US/add-new-authority-group-name.PNG" alt="Add New Authority Group, specify Name" width="80%"/>
                 <br/><br/>
-                <p>The tabs across the top each present a different view of your authority connection.  Each tab allows you to edit a different characteristic of that connection.  The exact set of tabs you see
-                       depends on the connection type you choose for the connection.</p>
-                <p>Start by giving your connection a name and a description.  Remember that all authority connection names must be unique, and cannot be changed after the connection is defined.  The name must be
-                       no more than 32 characters long.  The description can be up to 255 characters long.  When you are done, click on the "Type" tab.  The Type tab for the connection will then appear:</p>
-                <br/><br/>
-                <figure src="images/en_US/add-new-authority-connection-type.PNG" alt="Add New Authority Connection, select Type" width="80%"/>
-                <br/><br/>
-                <p>The list of authority connection types in the pulldown box, and what they are each called, is determined by your system integrator.  The configuration tabs for each different kind of authority connection
-                       type are described in separate sections below.</p>
-                <p>After you choose an authority connection type, click the "Continue" button at the bottom of the pane.  You will then see all the tabs appropriate for that kind of connection appear, and a
-                       "Save" button will also appear at the bottom of the pane.  You <b>must</b> click the "Save" button when you are done in order to create your connection.  If you click "Cancel" instead, the new connection
-                       will not be created.  (The same thing will happen if you click on any of the navigation links in the left-hand pane.)</p>
-                <p>Every authority connection has a "Throttling" tab.  The tab looks like this:</p>
-                <br/><br/>
-                <figure src="images/en_US/authority-throttling.PNG" alt="Authority Connection Throttling" width="80%"/>
-                <br/><br/>
-                <p>On this tab, you can specify only one thing: how many open connections are allowed at any given time to the system the authority connection talks with.  This restriction helps prevent
-                       that system from being overloaded, or in some cases exceeding its license limitations.  Conversely, making this number larger allows for smaller average search latency.  The default
-                       value is 10, which may not be optimal for all types of authority connections.  Please refer to the section of the manual describing your authority connection type for more precise
-                       recommendations.
-                </p>
-                <p>Please refer to the section of the manual describing your chosen authority connection type for a description of the tabs appropriate for that connection type.</p>
+                <p>The tabs across the top each present a different view of your authority group.  For authority groups, there is only ever one tab, the "Name" tab.</p>
+                <p>Give your authority group a name and a description.  Remember that all authority group names must be unique, and cannot be changed after the
+                      authority group is defined.  The name must be no more than 32 characters long.  The description can be up to 255 characters long.  When you are
+                      done, click on the "Save" button.  You <b>must</b> click the "Save" button when you are done in order to create or update your authority group.
+                      If you click "Cancel" instead, the new authority group will not be created.  (The same thing will happen if you click on any of the navigation links in
+                      the left-hand pane.)</p>
+                <p>After you save your authority group, a summary screen will be displayed that describes the group, and you can proceed on to create any authority
+                      connections that belong to the authority group, or repository connections that refer to the authority group.</p>
 
-                <p>After you save your connection, a summary screen will be displayed that describes your connection's configuration.  This summary screen contains a line where the connection's status
-                       is displayed.  If you did everything correctly, the message "Connection working" will be displayed as a status.  If there was a problem, you will see a connection-type-specific diagnostic message instead.
-                       If this happens, you will need to correct the problem, by either fixing your infrastructure, or by editing the connection configuration appropriately, before the authority connection
-                       will work correctly.</p>
-                       
             </section>
-            <section id="connectors">
+            
+            <section id="connections">
                 <title>Defining Repository Connections</title>
                 <p>The Framework UI's left-hand menu contains a link for listing repository connections.  A repository connection is a connection to the repository system that contains the documents
                        that you are interested in indexing.</p>
                 <p>All jobs require you to specify a repository connection, because that is where they get their documents from.  It is therefore necessary to create a repository connection before
                        indexing any documents.</p>
-                <p>A repository connection also may have an associated authority connection.  This specified authority determines the security environment in which documents from the repository
-                       connection are placed.  While it is possible to change the specified authority for a repository connection after a crawl has been done, in practice this will require that all documents
-                       associated with that repository connection be reindexed.  Therefore, we recommend that you set up your desired authority connection before defining your repository connection.</p>
+                <p>A repository connection also may have an associated authority group.  This specified authority group determines the security environment in which documents
+                      from the repository connection are placed.  While it is possible to change the specified authority group for a repository connection after a crawl has been done,
+                      in practice this will require that all documents associated with that repository connection be reindexed in order to be searchable by anyone.  Therefore, we recommend
+                      that you set up your desired authority group before defining your repository connection.</p>
                 <p>You can create a repository connection by clicking the "List Repository Connections" link in the left-side navigation menu.  When you do this, the
                        following screen will appear:</p>
                 <br/><br/>
                 <figure src="images/en_US/list-repository-connections.PNG" alt="List Repository Connections" width="80%"/>
                 <br/><br/>
-                <p>On a freshly created system, there may well be no existing repository connections listed.  If there are already repository connections, they will be listed on this screen, along with links
-                       that allow you to view, edit, or delete them.  To create a new repository connection, click the "Add a new connection" link at the bottom.  The following screen will then appear:</p>
+                <p>On a freshly created system, there may well be no existing repository connections listed.  If there are already repository connections, they will be listed on this
+                    screen, along with links that allow you to view, edit, or delete them.  To create a new repository connection, click the "Add a new connection" link at the bottom.
+                    The following screen will then appear:</p>
                 <br/><br/>
                 <figure src="images/en_US/add-new-repository-connection-name.PNG" alt="Add New Repository Connection, specify Name" width="80%"/>
                 <br/><br/>
@@ -157,13 +154,14 @@
                 <br/><br/>
                 <figure src="images/en_US/add-new-repository-connection-type.PNG" alt="Add New Repository Connection, select Type" width="80%"/>
                 <br/><br/>
-                <p>The list of repository connection types in the pulldown box, and what they are each called, is determined by your system integrator.  The configuration tabs for each different kind of repository connection
-                       type are described in separate sections below.</p>
-                <p>You may also at this point select the authority connection to secure all documents fetched from this repository with.  Bear in mind that only some authority connection types are compatible with any
-                       given repository connection types.  Read the details of your desired repository or authority connection type to understand its intentions, and how it is expected to be used.</p>
-                <p>After you choose the desired repository connection type and an authority connection, click the "Continue" button at the bottom of the pane.  You will then see all the tabs appropriate for that kind of connection appear, and a
-                       "Save" button will also appear at the bottom of the pane.  You <b>must</b> click the "Save" button when you are done in order to create or update your connection.  If you click "Cancel" instead, the new connection
-                       will not be created.  (The same thing will happen if you click on any of the navigation links in the left-hand pane.)</p>
+                <p>The list of repository connection types in the pulldown box, and what they are each called, is determined by your system integrator.  The configuration tabs
+                      for each different kind of repository connection type are described in this document in separate sections below.</p>
+                <p>You may also at this point select the authority group used to secure all documents fetched from this repository.  You do not need to define your
+                      authority group's authority connections before doing this step, but you will not be able to search for your documents after indexing them until you do.</p>
+                <p>After you choose the desired repository connection type and an authority group (if desired), click the "Continue" button at the bottom of the pane.  You will
+                      then see all the tabs appropriate for that kind of connection appear, and a "Save" button will also appear at the bottom of the pane.  You <b>must</b> click
+                      the "Save" button when you are done in order to create or update your connection.  If you click "Cancel" instead, the new connection
+                      will not be created.  (The same thing will happen if you click on any of the navigation links in the left-hand pane.)</p>
                 <p>Every repository connection has a "Throttling" tab.  The tab looks like this:</p>
                 <br/><br/>
                 <figure src="images/en_US/repository-throttling.PNG" alt="Repository Connection Throttling" width="80%"/>
@@ -193,6 +191,126 @@
                        will work correctly.</p>
                        
             </section>
+
+            <section id="mappers">
+                <title>Defining User Mapping Connections</title>
+                <p>The Framework UI's left-side menu contains a link for listing user mapping connections.  A user mapping connection is a connection to a system
+                      that understands how to map a user name into a different user name.  For example, if you want to enforce document security using LiveLink, but
+                      you have only an Active Directory user name, you will need to map the Active Directory user name to a corresponding LiveLink one, before finding
+                      access tokens for it using the LiveLink Authority.</p>
+                <p>Not all user mapping connections need to access other systems in order to be useful.  ManifoldCF, for instance, comes with a regular expression
+                      user mapper that manipulates a user name string using regular expressions alone.  Also, user mapping is not needed for many, if not most, authorities.
+                      You will not need any user mapping connections if the authorities that you intend to create can all operate using the same user name, and that user
+                      name is in the form that will be made available to ManifoldCF's authority servlet at search time.</p>
+                <p>You should define your mapping connections <b>before</b> setting up your authority connections.  An authority connection may specify a mapping
+                      connection that precedes it.  For the same reason, it's also convenient to define your mapping connections in the order that you want to process the
+                      user name.  If you don't manage to do this right the first time, though, there is no reason you cannot go back and fix things up.</p>
+                <p>You can create a mapping connection by clicking the "List User Mapping Connections" link in the left-side navigation menu.  When you do this, the
+                       following screen will appear:</p>
+                <br/><br/>
+                <figure src="images/en_US/list-mapping-connections.PNG" alt="List User Mapping Connections" width="80%"/>
+                <br/><br/>
+                <p>On a freshly created system, there may well be no existing mapping connections listed.  If there are already mapping connections, they will be listed on this screen, along with links
+                       that allow you to view, edit, or delete them.  To create a new mapping connection, click the "Add a new connection" link at the bottom.  The following screen will then appear:</p>
+                <br/><br/>
+                <figure src="images/en_US/add-new-mapping-connection-name.PNG" alt="Add New User Mapping Connection, specify Name" width="80%"/>
+                <br/><br/>
+                <p>The tabs across the top each present a different view of your mapping connection.  Each tab allows you to edit a different characteristic of that connection.  The exact set of tabs you see
+                       depends on the connection type you choose for the connection.</p>
+                <p>Start by giving your connection a name and a description.  Remember that all mapping connection names must be unique, and cannot be changed after the connection is defined.  The name must be
+                       no more than 32 characters long.  The description can be up to 255 characters long.  When you are done, click on the "Type" tab.  The Type tab for the connection will then appear:</p>
+                <br/><br/>
+                <figure src="images/en_US/add-new-mapping-connection-type.PNG" alt="Add New User Mapping Connection, select Type" width="80%"/>
+                <br/><br/>
+                <p>The list of mapping connection types in the pulldown box, and what they are each called, is determined by your system integrator.  The configuration tabs for each different kind of
+                       mapping connection type included with ManifoldCF are described in separate sections below.</p>
+                <p>After you choose a mapping connection type, click the "Continue" button at the bottom of the pane.  You will then see all the tabs appropriate for that kind of connection appear, and a
+                       "Save" button will also appear at the bottom of the pane.  You <b>must</b> click the "Save" button when you are done in order to create your connection.  If you click "Cancel" instead,
+                       the new connection will not be created.  (The same thing will happen if you click on any of the navigation links in the left-hand pane.)</p>
+                <p>Every mapping connection has a "Prerequisites" tab.  This tab allows you to specify which mapping connection needs to be run before this one (if any).  The tab looks like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/mapping-prerequisites.PNG" alt="User Mapping Connection Prerequisites" width="80%"/>
+                <br/><br/>
+                <p>Note: It is very important that you do not specify prerequisites in such a way as to create a loop.  To make this easier, ManifoldCF will not display any user mapping connections in the pulldown
+                       which, if selected, would lead to a loop.</p>
+                <p>Every mapping connection has a "Throttling" tab.  The tab looks like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/mapping-throttling.PNG" alt="User Mapping Connection Throttling" width="80%"/>
+                <br/><br/>
+                <p>On this tab, you can specify only one thing: how many open connections are allowed at any given time to the system the mapping connection talks with.  This restriction helps prevent
+                       that system from being overloaded, or in some cases exceeding its license limitations.  Conversely, making this number larger allows for smaller average search latency.  The default
+                       value is 10, which may not be optimal for all types of mapping connections.  Please refer to the section of the manual describing your mapping connection type for more precise
+                       recommendations.
+                </p>
+                <p>Please refer to the section of the manual describing your chosen mapping connection type for a description of the tabs appropriate for that connection type.</p>
+
+                <p>After you save your connection, a summary screen will be displayed that describes your connection's configuration.  This summary screen contains a line where the connection's status
+                       is displayed.  If you did everything correctly, the message "Connection working" will be displayed as a status.  If there was a problem, you will see a connection-type-specific diagnostic message instead.
+                       If this happens, you will need to correct the problem, by either fixing your infrastructure, or by editing the connection configuration appropriately, before the mapping connection
+                       will work correctly.</p>
+                       
+            </section>
+
+            <section id="authorities">
+                <title>Defining Authority Connections</title>
+                <p>The Framework UI's left-side menu contains a link for listing authority connections.  An authority connection is a connection to a system that defines a
+                      particular security environment.  For example, if you want to index some documents that are protected by Active Directory, you would need to configure
+                      an Active Directory authority connection.</p>
+                <p>Bear in mind that only specific authority connection types are compatible with a given repository connection type.  Read the details of your desired
+                      repository type in this document in order to understand how it is designed to be used.  You may not need an authority if you do not mind that portions
+                      of all the documents you want to index are visible to everyone.  For web, RSS, and Wiki crawling, this might be the situation.  Most other repositories
+                      have some native security mechanism, however.</p>
+                <p>You can create an authority connection by clicking the "List Authority Connections" link in the left-side navigation menu.  When you do this, the
+                       following screen will appear:</p>
+                <br/><br/>
+                <figure src="images/en_US/list-authority-connections.PNG" alt="List Authority Connections" width="80%"/>
+                <br/><br/>
+                <p>On a freshly created system, there may well be no existing authority connections listed.  If there are already authority connections, they will be listed on this screen, along with links
+                       that allow you to view, edit, or delete them.  To create a new authority connection, click the "Add a new connection" link at the bottom.  The following screen will then appear:</p>
+                <br/><br/>
+                <figure src="images/en_US/add-new-authority-connection-name.PNG" alt="Add New Authority Connection, specify Name" width="80%"/>
+                <br/><br/>
+                <p>The tabs across the top each present a different view of your authority connection.  Each tab allows you to edit a different characteristic of that connection.  The exact set of tabs you see
+                       depends on the connection type you choose for the connection.</p>
+                <p>Start by giving your connection a name and a description.  Remember that all authority connection names must be unique, and cannot be changed after the connection is defined.  The name must be
+                       no more than 32 characters long.  The description can be up to 255 characters long.  When you are done, click on the "Type" tab.  The Type tab for the connection will then appear:</p>
+                <br/><br/>
+                <figure src="images/en_US/add-new-authority-connection-type.PNG" alt="Add New Authority Connection, select Type" width="80%"/>
+                <br/><br/>
+                <p>The list of authority connection types in the pulldown box, and what they are each called, is determined by your system integrator.  The configuration tabs for
+                      each different kind of authority connection type are described in this document in separate sections below.</p>
+                <p>On this tab, you must also select the authority group that the authority connection you are creating belongs to.  Select the appropriate authority group from the
+                      pulldown.</p>
+                <p>You also have the option of selecting a non-default authorization domain.  An authorization domain describes which of possibly several user identities the
+                      authority connection is associated with.  For example, a single user may have an Active Directory identity, a LiveLink identity, and a FaceBook identity.
+                      Your authority connection will be appropriate to only one of those identities.  The list of specific authorization domains available is determined by your system
+                      integrator.</p>
+                <p>After you choose an authority connection type, the authority group, and optionally the authorization domain, click the "Continue" button at the bottom of the pane.
+                      You will then see all the tabs appropriate for that kind of connection appear, and a "Save" button will also appear at the bottom of the pane.  You <b>must</b>
+                      click the "Save" button when you are done in order to create your connection.  If you click "Cancel" instead, the new connection
+                      will not be created.  (The same thing will happen if you click on any of the navigation links in the left-hand pane.)</p>
+                <p>Every authority connection has a "Prerequisites" tab.  This tab allows you to specify which user mapping connection (if any) needs to be run before this authority connection is consulted.  The tab looks like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/authority-prerequisites.PNG" alt="Authority Connection Prerequisites" width="80%"/>
+                <br/><br/>
+                <p>Every authority connection also has a "Throttling" tab.  The tab looks like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/authority-throttling.PNG" alt="Authority Connection Throttling" width="80%"/>
+                <br/><br/>
+                <p>On this tab, you can specify only one thing: how many open connections are allowed at any given time to the system the authority connection talks with.  This restriction helps prevent
+                       that system from being overloaded, or in some cases exceeding its license limitations.  Conversely, making this number larger allows for smaller average search latency.  The default
+                       value is 10, which may not be optimal for all types of authority connections.  Please refer to the section of the manual describing your authority connection type for more precise
+                       recommendations.
+                </p>
+                <p>Please refer to the section of the manual describing your chosen authority connection type for a description of the tabs appropriate for that connection type.</p>
+
+                <p>After you save your connection, a summary screen will be displayed that describes your connection's configuration.  This summary screen contains a line where the connection's status
+                       is displayed.  If you did everything correctly, the message "Connection working" will be displayed as a status.  If there was a problem, you will see a connection-type-specific diagnostic message instead.
+                       If this happens, you will need to correct the problem, by either fixing your infrastructure, or by editing the connection configuration appropriately, before the authority connection
+                       will work correctly.</p>
+                       
+            </section>
+
             <section id="jobs">
                 <title>Creating Jobs</title>
                 <p>A "job" in ManifoldCF is a description of a set of documents.  The Framework's job is to fetch this set of documents come from a specific repository connection, and
@@ -447,6 +565,144 @@
         
         <section id="outputconnectiontypes">
             <title>Output Connection Types</title>
+
+            <section id="elasticsearchoutputconnector">
+                <title>ElasticSearch Output Connection</title>
+                <p>The ElasticSearch Output Connection allows ManifoldCF to submit documents to an ElasticSearch instance, via the HTTP API. The connector has been designed
+                    to be as easy to use as possible.</p>
+                <p>After creating an ElasticSearch output connection, you have to populate the parameters tab. Fill in the fields according to your ElasticSearch configuration. Each
+                    ElasticSearch output connector instance works with one index. To work with multiple indexes, just create one output connector for each index.</p>
+                <figure src="images/en_US/elasticsearch-connection-parameters.png" alt="ElasticSearch, parameters tab" width="80%"/>
+                <br />
+                <p>The parameters are:</p>
+                <ul>
+                      <li>Server location: A URL that references your ElasticSearch instance. The default value (http://localhost:9200) is valid if your ElasticSearch instance runs
+                          on the same server as the ManifoldCF instance.</li>
+                      <li>Index name: The connector will populate the index defined here.</li>
+                </ul>
+                <br /><p>Once you have created a new job and selected the ElasticSearch output connector, you will see the ElasticSearch tab. This tab lets you:</p>
+                <ul>
+                      <li>Set the maximum size of a document that will be indexed. The value is in bytes. The default value is 16MB.</li>
+                      <li>Specify the allowed MIME types. Note that this does not work with all repository connectors.</li>
+                      <li>Specify the allowed file extensions. Note that this does not work with all repository connectors.</li>
+                </ul>
+                <figure src="images/en_US/elasticsearch-job-parameters.png" alt="ElasticSearch, job parameters" width="80%"/>
+                <p>In the history report you will be able to monitor all the activities. The connector supports three activities: document ingestion (indexing), document deletion, and
+                  index optimization. The targeted index is automatically optimized when the job ends.</p>
+                <figure src="images/en_US/elasticsearch-history-report.png" alt="ElasticSearch, history report" width="80%"/>
+                <p>You may also refer to <a href="http://www.elasticsearch.org/guide">ElasticSearch's user documentation</a>.  Especially important is the
+                       need to configure the ElasticSearch index mapping <em>before</em> you try to index anything.  <strong>If you have not configured the ElasticSearch mapping properly, then the
+                       documents you send to ElasticSearch via ManifoldCF will not be parsed, and once you send a document to the index, you cannot fix this in ElasticSearch
+                       without discarding your index.</strong>  Specifically, you will want a mapping that enables the attachment plug-in, for example something like this:</p>
+                <source>
+{
+  "attachment" :
+  {
+    "properties" :
+    {
+      "file" :
+      {
+        "type" : "attachment",
+        "fields" :
+        {
+          "title" : { "store" : "yes" },
+          "keywords" : { "store" : "yes" },
+          "author" : { "store" : "yes" },
+          "content_type" : {"store" : "yes"},
+          "name" : {"store" : "yes"},
+          "date" : {"store" : "yes"},
+          "file" : { "term_vector":"with_positions_offsets", "store":"yes" }
+        }
+      }
+    }
+  }
+}
+                </source>
+                <p>Obviously, you would want your mapping to have details consistent with your particular indexing task.  You can change the mapping or inspect it using
+                       the <em>curl</em> tool, which you can download from <a href="http://curl.haxx.se">http://curl.haxx.se</a>.  For example, to inspect the mapping
+                       for a version of ElasticSearch running locally on port 9200:</p>
+                <source>
+curl -XGET http://localhost:9200/index/_mapping
+                </source>
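+                <p>Similarly, you may be able to install a mapping such as the one above using <em>curl</em>.  The exact command depends on your ElasticSearch version; assuming a
+                       locally running instance on port 9200, an index named "index", and the mapping above saved in a hypothetical file called <em>mapping.json</em>, something like
+                       the following may work:</p>
+                <source>
+curl -XPUT http://localhost:9200/index/attachment/_mapping -d @mapping.json
+                </source>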
+            </section>
+
+            <section id="filesystemoutputconnector">
+                <title>File System Output Connection</title>
+                <p>The File System output connection type allows ManifoldCF to store documents in a local filesystem, using the conventions established by the
+                    Unix utility called <em>wget</em>.  Documents stored by this connection type will not include any metadata or security information, but instead
+                    consist solely of a binary file.</p>
+                <p>The connection configuration information for the File System output connection type includes no additional tabs.  There is an additional job tab,
+                    however, called "Output Path".  The tab looks like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/filesystem-job-output-path.PNG" alt="File System Specification, Output Path tab" width="80%"/>
+                <br/><br/>
+                <p>Fill in the path you want the connection type to use to write the documents to.  Then, click the "Save" button.</p>
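+                <p>For example, on a Unix-like system the output path might be a directory such as the following (a purely hypothetical location; any directory writable by the
+                    ManifoldCF agents process should work):</p>
+                <source>
+/var/manifoldcf/output
+                </source>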
+            </section>
+
+            <section id="hdfsoutputconnector">
+                <title>HDFS Output Connection</title>
+                <p>The HDFS output connection type allows ManifoldCF to store documents in HDFS, using the conventions established by the
+                    Unix utility called <em>wget</em>.  Documents stored by this connection type will not include any metadata or security information, but instead
+                    consist solely of a binary file.</p>
+                <p>The connection configuration information for the HDFS output connection type includes one additional tab: the "Server" tab.  This tab looks like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/hdfs-configure-server.PNG" alt="HDFS Output Configuration, Server tab" width="80%"/>
+                <br/><br/>
+                <p>Fill in the name node URI and the user name.  Both are required.</p>
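+                <p>The name node URI is the standard HDFS URI of your cluster's name node.  For example, assuming a hypothetical name node host of namenode.example.com listening
+                    on port 9000, the URI might look like this:</p>
+                <source>
+hdfs://namenode.example.com:9000
+                </source>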
+                <p>For the HDFS output connection type, there is an additional job tab called "Output Path".  The tab looks like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/hdfs-job-output-path.PNG" alt="HDFS Output Specification, Output Path tab" width="80%"/>
+                <br/><br/>
+                <p>Fill in the path you want the connection type to use to write the documents to.  Then, click the "Save" button.</p>
+            </section>
+
+            
+            <section id="gtsoutputconnector">
+                <title>MetaCarta GTS Output Connection</title>
+                <p>The MetaCarta GTS output connection type is designed to allow ManifoldCF to submit documents to an appropriate MetaCarta GTS search
+                       appliance, via the appliance's HTTP Ingestion API.</p>
+                <p>The connection type implicitly understands that GTS can only handle text, HTML, XML, RTF, PDF, and Microsoft Office documents.  All other document types will be
+                       considered to be unindexable.  This helps prevent jobs based on a GTS-type output connection from fetching data that is large, but of no particular relevance.</p>
+                <p>When you configure a job to use a GTS-type output connection, two additional tabs will be presented to the user: "Collections" and "Document Templates".  These
+                       tabs allow per-job specification of these GTS-specific features.</p>
+                <p>More here later</p>
+            </section>
+            
+            <section id="nulloutputconnector">
+                <title>Null Output Connection</title>
+                <p>The null output connection type is meant primarily to function as an aid for people writing repository connection types.  It is not expected to be useful in practice.</p>
+                <p>The null output connection type simply logs indexing and deletion requests, and does nothing else.  It does not have any special configuration tabs, nor does it
+                       contribute tabs to jobs defined that use it.</p>
+            </section>
+
+            <section id="opensearchserveroutputconnector">
+                <title>OpenSearchServer Output Connection</title>
+                <p>The OpenSearchServer Output Connection allows ManifoldCF to submit documents to an OpenSearchServer instance, via the XML over HTTP API. The connector has been designed
+                    to be as easy to use as possible.</p>
+                <p>After creating an OpenSearchServer output connection, you have to populate the parameters tab. Fill in the fields according to your OpenSearchServer configuration. Each
+                    OpenSearchServer output connector instance works with one index. To work with multiple indexes, just create one output connector for each index.</p>
+                <figure src="images/en_US/opensearchserver-connection-parameters.PNG" alt="OpenSearchServer, parameters tab" width="80%"/>
+                <p>The parameters are:</p><br/>
+                <ul>
+                      <li>Server location: A URL that references your OpenSearchServer instance. The default value (http://localhost:8080) is valid if your OpenSearchServer instance runs
+                          on the same server as the ManifoldCF instance.</li>
+                      <li>Index name: The connector will populate the index defined here.</li>
+                      <li>User name and API Key: The credentials required to connect to the OpenSearchServer instance. These can be left empty if no user has been created. The next figure shows
+                          where to find the user's information in the OpenSearchServer user interface.</li>
+                </ul>
+                <figure src="images/en_US/opensearchserver-user.PNG" alt="OpenSearchServer, user configuration" width="80%"/>
+                <p>Once you have created a new job and selected the OpenSearchServer output connector, you will see the OpenSearchServer tab. This tab lets you:</p><br/>
+                <ul>
+                      <li>Set the maximum size of a document that will be indexed. The value is in bytes. The default value is 16MB.</li>
+                      <li>Specify the allowed MIME types. Note that this does not work with all repository connectors.</li>
+                      <li>Specify the allowed file extensions. Note that this does not work with all repository connectors.</li>
+                </ul>
+                <figure src="images/en_US/opensearchserver-job-parameters.PNG" alt="OpenSearchServer, job parameters" width="80%"/>
+                <p>In the history report you will be able to monitor all the activities. The connector supports three activities: document ingestion (indexing), document deletion, and
+                    index optimization. The targeted index is automatically optimized when the job ends.</p>
+                <figure src="images/en_US/opensearchserver-history-report.PNG" alt="OpenSearchServer, history report" width="80%"/>
+                <p>You may also refer to the <a href="http://www.open-search-server.com/documentation">OpenSearchServer user documentation</a>.</p>
+            </section>
             
             <section id="solroutputconnector">
                 <title>Solr Output Connection</title>
@@ -518,89 +774,44 @@
                 <p>Add a new mapping by filling in the "source" with the name of the metadata item from the repository, and "target" as the name of the output field in
                        Solr, and click the "Add" button.  Leaving the "target" field blank will result in all metadata items of that name not being sent to Solr.</p>
             </section>
-            
-            <section id="osssoutputconnector">
-            	<title>OpenSearchServer Output Connection</title>
-            	<p>The OpenSearchServer Output Connection allow ManifoldCF to submit documents to an OpenSearchServer instance, via the XML over HTTP API. The connector has been designed
-            	to be as easy to use as possible.</p>
-            	<p>After creating an OpenSearchServer ouput connection, you have to populate the parameters tab. Fill in the fields according your OpenSearchServer configuration. Each
-            	OpenSearchServer output connector instance works with one index. To work with muliple indexes, just create one output connector for each index.</p>
-            	<figure src="images/en_US/opensearchserver-connection-parameters.PNG" alt="OpenSearchServer, parameters tab" width="80%"/>
-            	<p>The parameters are:</p><br/>
-            	<ul>
-            		<li>Server location: An URL that references your OpenSearchServer instance. The default value (http://localhost:8080) is valid if your OpenSearchServer instance runs
-            		on the same server than the ManifoldCF instance.</li>
-            		<li>Index name: The connector will populate the index defined here.</li>
-            		<li>User name and API Key: The credentials required to connect to the OpenSearchServer instance. It can be left empty if no user has been created. The next figure shows
-            		where to find the user's informations in the OpenSearchServer user interface.</li>
-            	</ul>
-            	<figure src="images/en_US/opensearchserver-user.PNG" alt="OpenSearchServer, user configuration" width="80%"/>
-            	<p>Once you created a new job, having selected the OpenSearchServer output connector, you will have the OpenSearchServer tab. This tab let you:</p><br/>
-            	<ul>
-            		<li>Fix the maximum size of a document before deciding to index it. The value is in bytes. The default value is 16MB.</li>
-            		<li>The allowed mime types. Warning it does not work with all repository connectors.</li>
-            		<li>The allowed file extensions. Warning it does not work with all repository connectors.</li>
-            	</ul>
-            	<figure src="images/en_US/opensearchserver-job-parameters.PNG" alt="OpenSearchServer, job parameters" width="80%"/>
-            	<p>In the history report you will be able to monitor all the activites. The connector supports three activites: Document ingestion (Indexation), document deletion and
-            	   index optimization. The targeted index is automatically optimized when the job is ending.</p>
-            	<figure src="images/en_US/opensearchserver-history-report.PNG" alt="OpenSearchServer, history report" width="80%"/>
-             	<p>You may also refer to the <a href="http://www.open-search-server.com/documentation">OpenSearchServer's user documentation</a>.</p>
-            </section>
-            
-            <section id="esssoutputconnector">
-            	<title>ElasticSearch Output Connection</title>
-            	<p>The ElasticSearch Output Connection allow ManifoldCF to submit documents to an ElasticSearch instance, via the XML over HTTP API. The connector has been designed
-            	to be as easy to use as possible.</p>
-            	<p>After creating an ElasticSearch ouput connection, you have to populate the parameters tab. Fill in the fields according your ElasticSearch configuration. Each
-            	ElasticSearch output connector instance works with one index. To work with multiple indexes, just create one output connector for each index.</p>
-            	<figure src="images/en_US/elasticsearch-connection-parameters.png" alt="ElasticSearch, parameters tab" width="80%"/>
-            	<br />
-            	<p>The parameters are:</p>
-            	<ul>
-            		<li>Server location: An URL that references your ElasticSearch instance. The default value (http://localhost:9200) is valid if your ElasticSearch instance runs
-            		on the same server than the ManifoldCF instance.</li>
-            		<li>Index name: The connector will populate the index defined here.</li>
-            	</ul>
-            	<br /><p>Once you created a new job, having selected the ElasticSearch output connector, you will have the ElasticSearch tab. This tab let you:</p>
-            	<ul>
-            		<li>Fix the maximum size of a document before deciding to index it. The value is in bytes. The default value is 16MB.</li>
-            		<li>The allowed mime types. Warning it does not work with all repository connectors.</li>
-            		<li>The allowed file extensions. Warning it does not work with all repository connectors.</li>
-            	</ul>
-            	<figure src="images/en_US/elasticsearch-job-parameters.png" alt="ElasticSearch, job parameters" width="80%"/>
-            	<p>In the history report you will be able to monitor all the activites. The connector supports three activites: Document ingestion (Indexation), document deletion and
-            	   index optimization. The targeted index is automatically optimized when the job is ending.</p>
-            	<figure src="images/en_US/elasticsearch-history-report.png" alt="ElasticSearch, history report" width="80%"/>
-             	<p>You may also refer to <a href="http://www.elasticsearch.org/guide">ElasticSearch's user documentation</a>.</p>
-            </section>
-            
-            <section id="gtsoutputconnector">
-                <title>MetaCarta GTS Output Connection</title>
-                <p>The MetaCarta GTS output connection type is designed to allow ManifoldCF to submit documents to an appropriate MetaCarta GTS search
-                       appliance, via the appliance's HTTP Ingestion API.</p>
-                <p>The connection type implicitly understands that GTS can only handle text, HTML, XML, RTF, PDF, and Microsoft Office documents.  All other document types will be
-                       considered to be unindexable.  This helps prevent jobs based on a GTS-type output connection from fetching data that is large, but of no particular relevance.</p>
-                <p>When you configure a job to use a GTS-type output connection, two additional tabs will be presented to the user: "Collections" and "Document Templates".  These
-                       tabs allow per-job specification of these GTS-specific features.</p>
-                <p>More here later</p>
-            </section>
-            
-            <section id="nulloutputconnector">
-                <title>Null Output Connection</title>
-                <p>The null output connection type is meant primarily to function as an aid for people writing repository connection types.  It is not expected to be useful in practice.</p>
-                <p>The null output connection type simply logs indexing and deletion requests, and does nothing else.  It does not have any special configuration tabs, nor does it
-                       contribute tabs to jobs defined that use it.</p>
-            </section>
-            
+
+
         </section>
-        
+
+        <section id="mappingconnectiontypes">
+            <title>User Mapping Connection Types</title>
+            
+            <section id="regexpmapper">
+                <title>Regular Expression User Mapping Connection</title>
+                <p>The Regular Expression user mapping connection type is very helpful for rote user name conversions of all sorts.  For example, it can easily be configured to map the standard "user@domain" form
+                       of an Active Directory user name to (say) a LiveLink equivalent, e.g. "domain\user".  Since many repositories establish such rote conversions, the Regular Expression user mapping connection
+                       type is often all that you will ever need.</p>
+                <br/>
+                <p>A Regular Expression user mapping connection type has one special tab in the user mapping connection editing screen: "User Mapping".  This
+                       tab looks like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/regexp-mapping-user-mapping.PNG" alt="Regexp User Mapping, User Mapping tab" width="80%"/>
+                <br/><br/>
+                <p>The mapping consists of a match expression, which is a regular expression where parentheses ("(" and ")") mark sections you are interested in, and a
+                       replace string.  The sections marked with parentheses are called "groups" in regular expression parlance.  The replace string consists of constant text plus
+                       substitutions of the groups from the match, perhaps modified.  For example, "$(1)" refers to the first group within the match, while "$(1l)" refers to the first
+                       match group mapped to lower case.  Similarly, "$(1u)" refers to the same characters, but mapped to upper case.</p>
+                <p>For example, a match expression of <code>^(.*)\@([A-Z|a-z|0-9|_|-]*)\.(.*)$</code> with a replace string of <code>$(2)\$(1l)</code> would convert
+                      an Active Directory username of <code>MyUserName@subdomain.domain.com</code> into the user name
+                      <code>subdomain\myusername</code>.</p>
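+                <p>To make the example above concrete, here are the same match expression, replace string, and user names laid out together:</p>
+                <source>
+Match expression:  ^(.*)\@([A-Z|a-z|0-9|_|-]*)\.(.*)$
+Replace string:    $(2)\$(1l)
+Input user name:   MyUserName@subdomain.domain.com
+Mapped user name:  subdomain\myusername
+                </source>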
+                <p>When you are done, click the "Save" button.  When you do, a connection summary and status screen will be presented, which may look something like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/regexp-mapping-status.PNG" alt="Regexp User Mapping Status" width="80%"/>
+                <br/><br/>
+            </section>
+        </section>
+
         <section id="authorityconnectiontypes">
             <title>Authority Connection Types</title>
             
             <section id="adauthority">
                 <title>Active Directory Authority Connection</title>
-                <p>An active directory authority connection is essential for enforcing security for documents from Windows shares, Microsoft SharePoint, and IBM FileNet repositories.
+                <p>An active directory authority connection is essential for enforcing security for documents from Windows shares, Microsoft SharePoint (in ActiveDirectory mode), and IBM FileNet repositories.
                        This connection type needs to be provided with information about how to log into an appropriate Windows domain controller, with a user that has sufficient privileges to
                        be able to look up any user's ID and group relationships.</p>
                 <br/>
@@ -628,6 +839,171 @@
                 <p>Note that in this example, the Active Directory connection is not responding, which is leading to an error status message instead of "Connection working".</p>
             </section>
 
+            <section id="cmisauthority">
+              <title>CMIS Authority Connection</title>
+              <p>A CMIS authority connection is required for enforcing security for documents retrieved from CMIS repositories.</p>
+              <p>The CMIS specification includes the concept of authorities only as they relate to a specific document, so this authority connector is based solely on a regular expression comparator.</p>
+              <p>A CMIS authority connection has the following special tabs you will need to configure: the "Repository" tab and the "User Mapping" tab. The "Repository" tab looks like this:</p>
+              <br/><br/>
+              <figure src="images/en_US/cmis-authority-connection-configuration-repository.png" alt="CMIS Authority, Repository configuration" width="80%"/>
+              <br/><br/>
+              <p>The repository configuration will only be used to track an ID for a specific CMIS repository. No calls will be performed against the CMIS repository.</p>
+              <br/><br/>
+              <p>The second tab that you need to configure is the "User Mapping" tab that allows you to define a regular expression to specify the user mapping.  This tab
+                    predates the addition of user mapping functionality to ManifoldCF.  Please create a user mapping instead.</p>
+              <p>The "User Mapping" tab looks like the following:</p>
+              <br/><br/>
+              <figure src="images/en_US/cmis-authority-connection-configuration-usermapping.png" alt="CMIS Authority, User Mapping configuration" width="80%"/>
+              <br/><br/>
+              <p>The purpose of the "User Mapping" tab is to allow you to map the incoming user name and domain (usually from Active Directory) to its CMIS user equivalent.
+                     The mapping consists of a match expression, which is a regular expression where parentheses ("("
+                     and ")") mark sections you are interested in, and a replace string.  The sections marked with parentheses are called "groups" in regular expression parlance.  The replace string consists of constant text plus
+                     substitutions of the groups from the match, perhaps modified.  For example, "$(1)" refers to the first group within the match, while "$(1l)" refers to the first match group
+                     mapped to lower case.  Similarly, "$(1u)" refers to the same characters, but mapped to upper case.</p>
+              <p>For example, a match expression of <code>^(.*)\@([A-Z|a-z|0-9|_|-]*)\.(.*)$</code> with a replace string of <code>$(2)\$(1l)</code> would convert an
+                   Active Directory username of <code>MyUserName@subdomain.domain.com</code> into the CMIS user name <code>subdomain\myusername</code>.</p>
+              <p>When you are done, click the "Save" button.  You will then see a summary and status for the authority connection:</p>
+              <br/><br/>
+              <figure src="images/en_US/cmis-authority-connection-configuration-save.png" alt="CMIS Authority, saving configuration" width="80%"/>
+              <br/><br/>
+            </section>
+
+            <section id="documentumauthority">
+                <title>EMC Documentum Authority Connection</title>
+                <p>A Documentum authority connection is required for enforcing security for documents retrieved from Documentum repositories.</p>
+                <p>This connection type needs to be provided with information about what Content Server to connect to, and the credentials that should be used to retrieve a user's ACLs from that machine.
+                    In addition, you can also specify whether or not you wish to include auto-generated ACLs in every user's list.  Auto-generated ACLs are created within Documentum for every folder
+                    object.  Because there are often a very large number of folders, including these ACLs can bloat the number of ManifoldCF access tokens returned for a user to tens of thousands, which can negatively
+                    impact performance.  Even more notably, few Documentum installations make any real use of these ACLs in any way.  Since Documentum's ACLs are purely additive (that is, there are no
+                    mechanisms for 'deny' semantics), the impact of a missing ACL is only to block a user from seeing something they otherwise could see.  It is thus safe, and often desirable, to simply ignore the
+                    existence of these auto-generated ACLs.</p>
+                <p>A Documentum authority connection has three special tabs you will need to configure: the "Docbase" tab, the "User Mapping" tab, and the "System ACLs" tab.</p>
+                <p>The "Docbase" tab looks like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/documentum-authority-docbase.PNG" alt="Documentum Authority, Docbase tab" width="80%"/>
+                <br/><br/>
+                <p>Enter the desired Content Server docbase name, and enter the appropriate credentials.  You may leave the "Domain" field blank if the Content Server you specify does not have
+                    Active Directory support enabled.</p>
+                <p>The purpose of the User Mapping tab is to map an incoming user name to the form that Documentum requires.  This tab predates the addition of
+                      user mapping functionality to ManifoldCF, and is thus considered to be deprecated.  Please create a user mapping instead.</p>
+                <p>The "User Mapping" tab looks like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/documentum-authority-user-mapping.PNG" alt="Documentum Authority, User Mapping tab" width="80%"/>
+                <br/><br/>
+                <p>Here you can specify whether the mapping between incoming user names and Content Server user names is case sensitive or case insensitive.  No other mappings
+                    are currently permitted.  Typically, Documentum instances operate in conjunction with Active Directory, such that Documentum user names are either the same as the Active Directory user names,
+                    or are the Active Directory user names mapped to all lower case characters.  You may need to consult with your Documentum system administrator to decide what the correct setting should be for
+                    this option.</p>
+                <p>The "System ACLs" tab looks like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/documentum-authority-system-acls.PNG" alt="Documentum Authority, System ACLs tab" width="80%"/>
+                <br/><br/>
+                <p>Here, you can choose to ignore all auto-generated ACLs associated with a user.  We recommend that you try ignoring such ACLs, and only choose the default if you have
+                    reason to believe that your Documentum content is protected in a significant way by the use of auto-generated ACLs.  You may need to consult with your Documentum system administrator to
+                    decide what the proper setting should be for this option.</p>
+                <p>When you are done, click the "Save" button.  When you do, a connection summary and status screen will be presented:</p>
+                <br/><br/>
+                <figure src="images/en_US/documentum-authority-status.PNG" alt="Documentum Authority Status" width="80%"/>
+                <br/><br/>
+                <p>Pay careful attention to the status, and be prepared to correct any
+                    problems that are displayed.</p>
+            </section>
+
+             <section id="genericauthority">
+              <title>Generic Authority</title>
+              <p>The Generic authority is intended to be used with the Generic connector, and provides access tokens based on a generic API. The idea is that you implement only the API, which is designed
+                  to be fine-grained and as simple as possible while still handling all tasks.</p>
+              <p>The API should be implemented as an XML web page (entry point) returning results based on the GET parameters provided. It may be a simple server script or part of a bigger application.
+                  The API can be secured with HTTP basic authentication.</p>
+              <br/>
+              <p>There are 2 actions:</p>
+              <ul>
+                <li>check</li>
+                <li>auth</li>
+              </ul>
+              <p>The action is passed as the "action" GET parameter to the entry point.</p>
+              <br/><br/>
+              <p><b>[entrypoint]?action=check</b></p>
+              <p>This action should return HTTP status code 200 to indicate that the entry point is working properly. Any content returned is ignored; only the status code matters.</p>
+              <br/><br/>
+			  
+              <p><b>[entrypoint]?action=auth&amp;username=UserName@Domain</b></p>
+              <p>Parameters:</p>
+              <ul>
+                <li>username - the name of the user to resolve and fetch tokens for.</li>
+              </ul>
+              <p>The result should be valid XML of the following form:</p>
+              <source>
+&lt;auth exists="true|false"&gt;
+   &lt;token&gt;token_1&lt;/token&gt;
+   &lt;token&gt;token_2&lt;/token&gt;
+   ...
+&lt;/auth&gt;
+              </source>
+              <p>The <code>exists</code> attribute is required; it indicates whether or not the user is valid.</p>
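+              <p>The entry point can be implemented in any language.  The following is a minimal sketch, in Python, of what such an entry point might look like.
+                  It uses only the Python standard library; the user/token table is hard-coded and purely hypothetical, and a real implementation would consult your
+                  actual identity store and would normally also require HTTP basic authentication:</p>
+              <source>
+# Minimal sketch of a Generic Authority entry point (hypothetical data; adapt to your identity store).
+from http.server import BaseHTTPRequestHandler, HTTPServer
+from urllib.parse import urlparse, parse_qs
+from xml.sax.saxutils import escape
+
+# Hypothetical mapping of user names to access tokens.
+USER_TOKENS = {"jsmith@example.com": ["token_1", "token_2"]}
+
+class EntryPoint(BaseHTTPRequestHandler):
+    def do_GET(self):
+        params = parse_qs(urlparse(self.path).query)
+        action = params.get("action", [""])[0]
+        if action == "check":
+            # Any 200 response tells ManifoldCF that the entry point is alive.
+            self.send_response(200)
+            self.end_headers()
+        elif action == "auth":
+            user = params.get("username", [""])[0]
+            tokens = USER_TOKENS.get(user)
+            exists = "true" if tokens is not None else "false"
+            xml = "&lt;auth exists=\"%s\"&gt;" % exists
+            xml += "".join("&lt;token&gt;%s&lt;/token&gt;" % escape(t) for t in tokens or [])
+            xml += "&lt;/auth&gt;"
+            body = xml.encode("utf-8")
+            self.send_response(200)
+            self.send_header("Content-Type", "text/xml")
+            self.send_header("Content-Length", str(len(body)))
+            self.end_headers()
+            self.wfile.write(body)
+        else:
+            self.send_response(400)
+            self.end_headers()
+
+if __name__ == "__main__":
+    HTTPServer(("0.0.0.0", 8080), EntryPoint).serve_forever()
+              </source>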
+              <br/><br/>
+            </section>
+
+            <section id="jdbcauthority">
+                <title>Generic Database Authority Connection</title>
+                <p>The generic database connection type allows you to generate access tokens from a database table, served by one of the following databases:</p>
+                <br/>
+                <ul>
+                    <li>Postgresql (via a Postgresql JDBC driver)</li>
+                    <li>SQL Server (via the JTDS JDBC driver)</li>
+                    <li>Oracle (via the Oracle JDBC driver)</li>
+                    <li>Sybase (via the JTDS JDBC driver)</li>
+                    <li>MySQL (via the MySQL JDBC driver)</li>
+                </ul>
+                <br/>
+                <p>This connection type <b>cannot</b> be configured to work with databases other than the ones listed above without software changes.  Depending on your particular installation,
+                       some of the above options may not be available.</p>
+                <p>A generic database authority connection has four special tabs on the repository connection editing screen: the "Database Type" tab, the "Server" tab,
+                      the "Credentials" tab, and the "Queries" tab.  The "Database Type" tab looks like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/jdbc-authority-configure-database-type.PNG" alt="Generic Database Authority Connection, Database Type tab" width="80%"/>
+                <br/><br/>
+                <p>Select the kind of database you want to connect to, from the pulldown.</p>
+                <p>Also, select the JDBC access method you want from the access method pulldown.  The access method is provided because the JDBC specification has been
+                    recently clarified, and not all JDBC drivers work the same way as far as resultset column name discovery is concerned.  The "by name" option currently works
+                    with all JDBC drivers in the list except for the MySQL driver.  The "by label" option works for the current MySQL driver, and may work for some of the others as well.  If
+                    the queries you supply for this authority connection do not work correctly, and you see an error message about not being able to find required columns in the
+                    result, you can change your selection on this pulldown and it may correct the problem.</p>
+                <p>The "Server" tab looks like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/jdbc-authority-configure-server.PNG" alt="Generic Database Authority Connection, Server tab" width="80%"/>
+                <br/><br/>
+                <p>Here you have a choice.  <strong>Either</strong> you can choose to specify the database host and port, and the database name or instance name,
+                      <strong>or</strong> you can provide a raw JDBC connection string that is appropriate for the database type you have chosen.  This latter option
+                      is provided because many JDBC drivers, such as Oracle's, now can connect to an entire cluster of Oracle servers if you specify the appropriate
+                      connection description string.</p>
+                <p>If you choose the second option, just consult your JDBC driver's documentation and supply your string.  If there is anything entered in the raw connection
+                      string field at all, it will take precedence over the database host and database name fields.</p>
+                <p>If you choose the first option, the server name and port must be provided in the "Database host and port" field.  For example, for Oracle, the standard
+                      Oracle installation uses port 1521, so you would enter something like "my-oracle-server:1521" for this field.  Postgresql uses port 5432 by default, so
+                      "my-postgresql-server:5432" would be required.  SQL Server's standard port is 1433, so use "my-sql-server:1433".</p>
+                <p>The service name or instance name field describes which instance and database to connect to.  For Oracle or Postgresql, provide just the database name.
+                      For SQL Server, use "my-instance-name/my-database-name".  For SQL Server using the default instance, use just the database name.</p>
+                <p>The "Credentials" tab is straightforward:</p>
+                <br/><br/>
+                <figure src="images/en_US/jdbc-authority-configure-credentials.PNG" alt="Generic Database Authority Connection, Credentials tab" width="80%"/>
+                <br/><br/>
+                <p>Enter the database user credentials.</p>
+                <p>The "Queries" tab looks like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/jdbc-authority-configure-queries.PNG" alt="Generic Database Authority Connection, Queries tab" width="80%"/>
+                <br/><br/>
+                <p>Here you supply two queries.  The first query looks up the user name to find a user id.  The second query looks up access tokens corresponding to the
+                      user id.  Details of what you supply for these queries will depend on your database schema.</p>
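+                <p>Purely for illustration, here is a hypothetical pair of queries, assuming a <code>users</code> table keyed by login name and a <code>user_tokens</code>
+                      table keyed by user id.  The bracketed placeholders stand for the user name and user id values that the connector substitutes; the exact substitution
+                      syntax and the required result column names are connector-specific, so this sketch only shows the general shape of the two queries:</p>
+                <source>
+-- Look up the user id for the incoming user name
+SELECT id FROM users WHERE login = [user name]
+
+-- Look up the access tokens for that user id
+SELECT token FROM user_tokens WHERE user_id = [user id]
+                </source>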
+                <p>After you click the "Save" button, you will see a connection summary screen, which might look something like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/jdbc-authority-status.PNG" alt="Generic Database Authority Status" width="80%"/>
+                <br/><br/>
+                <p>Note that in this example, the generic database authority connection is not properly authenticated, which is leading to an error status message instead
+                      of "Connection working".</p>
+            </section>
+
+
             <section id="ldapauthority">
                 <title>LDAP Authority Connection</title>
                 <p>An LDAP authority connection can be used to provide document security in situations where there is no native document security
@@ -650,6 +1026,23 @@
                 <figure src="images/en_US/ldap-status.PNG" alt="LDAP Status" width="80%"/>
                 <br/><br/>
                 <p>Note that in this example, the LDAP connection is not responding, which is leading to an error status message instead of "Connection working".</p>
+                <br/><br/>
+				<p>Example configuration for an Active Directory server, used to fetch user groups:</p>
+				<ul>
+				  <li>Server: [xxx.yyy.zzz.ttt]</li>
+				  <li>Port: 389</li>
+				  <li>Server base: [DC=domain,DC=name]</li>
+				  <li>Bind as user: [user@domain.name]</li>
+				  <li>Bind with password: [password for that user]</li>
+				  <li>User search base: CN=Users</li>
+				  <li>User search filter: sAMAccountName={0}</li>
+				  <li>User name attribute: sAMAccountName</li>
+				  <li>Group search base: CN=Users</li>
+				  <li>Group search filter: (member:1.2.840.113556.1.4.1941:={0})</li>
+				  <li>Group name attribute: sAMAccountName</li>
+				  <li>Member attribute is DN: yes (tick the checkbox)</li>
+				</ul>
+				<p>The <code>member:1.2.840.113556.1.4.1941:</code> matching rule gives you a recursive check for nested groups.</p>
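+				<p>With the settings above (and the hypothetical domain <code>domain.name</code>), a lookup for user <code>jsmith</code> effectively searches under
+				  <code>CN=Users,DC=domain,DC=name</code> for entries matching <code>sAMAccountName=jsmith</code>, the <code>{0}</code> token being replaced with the
+				  incoming user name; the user's groups are then found using the recursive group search filter, with <code>{0}</code> replaced by the user's distinguished
+				  name because "Member attribute is DN" is checked.</p>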
             </section>
 
             <section id="livelinkauthority">
@@ -682,7 +1075,9 @@
                 <figure src="images/en_US/livelink-authority-user-mapping.PNG" alt="LiveLink Authority, User Mapping tab" width="80%"/>
                 <br/><br/>
                 <p>The purpose of the "User Mapping" tab is to allow you to map the incoming user name and domain (usually from Active Directory) to its LiveLink equivalent.
-                       The mapping consists of a match expression, which is a regular expression where parentheses ("(" and ")") mark sections you are interested in, and a
+                      This tab predates the addition of the general user mapping functionality, and is provided only for backwards-compatibility reasons.  Please create a regular
+                      expression mapper instead.</p>
+                <p>The mapping consists of a match expression, which is a regular expression where parentheses ("(" and ")") mark sections you are interested in, and a
                        replace string.  The sections marked with parentheses are called "groups" in regular expression parlance.  The replace string consists of constant text plus
                        substitutions of the groups from the match, perhaps modified.  For example, "$(1)" refers to the first group within the match, while "$(1l)" refers to the first
                        match group mapped to lower case.  Similarly, "$(1u)" refers to the same characters, but mapped to upper case.</p>
@@ -703,75 +1098,7 @@
                     not accept connections, which is leading to an error status message instead of "Connection working".</p>
             </section>
             
-            <section id="documentumauthority">
-                <title>EMC Documentum Authority Connection</title>
-                <p>A Documentum authority connection is required for enforcing security for documents retrieved from Documentum repositories.</p>
-                <p>This connection type needs to be provided with information about what Content Server to connect to, and the credentials that should be used to retrieve a user's ACLs from that machine.
-                    In addition, you can also specify whether or not you wish to include auto-generated ACLs in every user's list.  Auto-generated ACLs are created within Documentum for every folder
-                    object.  Because there are often a very large number of folders, including these ACLs can bloat the number of ManifoldCF access tokens returned for a user to tens of thousands, which can negatively
-                    impact perfomance.  Even more notably, few Documentum installations make any real use of these ACLs in any way.  Since Documentum's ACLs are purely additive (that is, there are no
-                    mechanisms for 'deny' semantics), the impact of a missing ACLs is only to block a user from seeing something they otherwise could see.  It is thus safe, and often desirable, to simply ignore the
-                    existence of these auto-generated ACLs.</p>
-                <p>A Documentum authority connection has three special tabs you will need to configure: the "Docbase" tab, the "User Mapping" tab, and the "System ACLs" tab.</p>
-                <p>The "Docbase" tab looks like this:</p>
-                <br/><br/>
-                <figure src="images/en_US/documentum-authority-docbase.PNG" alt="Documentum Authority, Docbase tab" width="80%"/>
-                <br/><br/>
-                <p>Enter the desired Content Server docbase name, and enter the appropriate credentials.  You may leave the "Domain" field blank if the Content Server you specify does not have
-                    Active Directory support enabled.</p>
-                <p>The "User Mapping" tab looks like this:</p>
-                <br/><br/>
-                <figure src="images/en_US/documentum-authority-user-mapping.PNG" alt="Documentum Authority, User Mapping tab" width="80%"/>
-                <br/><br/>
-                <p>Here you can specify whether the mapping between incoming user names and Content Server user names is case sensitive or case insensitive.  No other mappings
-                    are currently permitted.  Typically, Documentum instances operate in conjunction with Active Directory, such that Documentum user names are either the same as the Active Directory user names,
-                    or are the Active Directory user names mapped to all lower case characters.  You may need to consult with your Documentum system administrator to decide what the correct setting should be for
-                    this option.</p>
-                <p>The "System ACLs" tab looks like this:</p>
-                <br/><br/>
-                <figure src="images/en_US/documentum-authority-system-acls.PNG" alt="Documentum Authority, System ACLs tab" width="80%"/>
-                <br/><br/>
-                <p>Here, you can choose to ignore all auto-generated ACLs associated with a user.  We recommend that you try ignoring such ACLs, and only choose the default if you have
-                    reason to believe that your Documentum content is protected in a significant way by the use of auto-generated ACLs.  Your may need to consult with your Documentum system administrator to
-                    decide what the proper setting should be for this option.</p>
-                <p>When you are done, click the "Save" button.  When you do, a connection summary and status screen will be presented:</p>
-                <br/><br/>
-                <figure src="images/en_US/documentum-authority-status.PNG" alt="Documentum Authority Status" width="80%"/>
-                <br/><br/>
-                <p>Pay careful attention to the status, and be prepared to correct any
-                    problems that are displayed.</p>
-            </section>
-            
-            <section id="memexauthority">
-                <title>Memex Patriarch Authority Connection</title>
-                <p>A Memex authority connection is required for enforcing security for documents retrieved from Memex repositories.</p>
-                <p>This connection type needs to be provided with information about what Memex Server to connect to, and what user mapping to perform.
-                    Also needed are the Memex credentials that should be used to retrieve a user's permissions from the Memex server.</p>
-                <p>A Memex authority connection has the following special tabs you will need to configure: the "Memex Server" tab, and the "User Mapping" tab.  The "Memex Server" tab looks like this:</p>
-                <br/><br/>
-                <figure src="images/en_US/memex-authority-memex-server.PNG" alt="Memex Authority, Memex Server tab" width="80%"/>
-                <br/><br/>
-                <p>You must supply the name of your Memex server, and the connection port, along with the Memex credentials for a user that has sufficient permissions to retrieve Memex user
-                    information.  You must also select the Memex server's character encoding.  If you do not know the encoding, consult your Memex system administrator.</p>
-                <p>The "User Mapping" tab looks like this:</p>
-                <br/><br/>
-                <figure src="images/en_US/memex-authority-user-mapping.PNG" alt="Memex Authority, User Mapping tab" width="80%"/>
-                <br/><br/>
-                <p>The purpose of the "User Mapping" tab is to allow you to map the incoming user name and domain (usually from Active Directory) to its Memex equivalent.
-                       The mapping consists of a match expression, which is a regular expression where parentheses ("("
-                       and ")") mark sections you are interested in, and a replace string.  The sections marked with parentheses are called "groups" in regular expression parlance.  The replace string consists of constant text plus
-                       substitutions of the groups from the match, perhaps modified.  For example, "$(1)" refers to the first group within the match, while "$(1l)" refers to the first match group
-                       mapped to lower case.  Similarly, "$(1u)" refers to the same characters, but mapped to upper case.</p>
-                <p>For example, a match expression of <code>^(.*)\@([A-Z|a-z|0-9|_|-]*)\.(.*)$</code> with a replace string of <code>$(2)\$(1l)</code> would convert an AD username of
-                    <code>MyUserName@subdomain.domain.com</code> into the Memex user name <code>subdomain\myusername</code>.</p>
-                <p>When you are done, click the "Save" button.  You will then see a summary and status for the authority connection:</p>
-                <br/><br/>
-                <figure src="images/en_US/memex-authority-status.PNG" alt="Memex Authority Status" width="80%"/>
-                <br/><br/>
-                <p>We suggest that you examine the status carefully and correct any reported errors before proceeding.  Note that in this example, the Memex server has a license error, which
-                    is leading to an error status message instead of "Connection working".</p>
-            </section>
-            
+
             <section id="meridioauthority">
                 <title>Autonomy Meridio Authority Connection</title>
                 <p>A Meridio authority connection is required for enforcing security for documents retrieved from Meridio repositories.</p>
@@ -829,55 +1156,362 @@
                 <p>If you need specific ManifoldCF logging information, contact your system integrator.</p>
             </section>
             
-            <section id="cmisauthority">
-              <title>CMIS Authority Connection</title>
-              <p>A CMIS authority connection is required for enforcing security for documents retrieved from CMIS repositories.</p>
-              <p>The CMIS specification includes the concept of authorities only depending on a specific document, this authority connector is only based on a regular expression comparator.</p>
-              <p>A CMIS authority connection has the following special tabs you will need to configure: the "Repository" tab and the "User Mapping" tab. The "Repository" tab looks like this:</p>
-              <br/><br/>
-              <figure src="images/en_US/cmis-authority-connection-configuration-repository.png" alt="CMIS Authority, Repository configuration" width="80%"/>
-              <br/><br/>
-              <p>The repository configuration will be only used to track an ID for a specific CMIS repository. No calls will be performed against the CMIS repository.</p>
-              <br/><br/>
-              <p>The second tab that you need to configure is the "User Mapping" tab that allows you to define a regular expression to specify the user mapping.</p>
-              <p>The "User Mapping" tab looks like the following:</p>
-              <br/><br/>
-              <figure src="images/en_US/cmis-authority-connection-configuration-usermapping.png" alt="CMIS Authority, User Mapping configuration" width="80%"/>
-              <br/><br/>
-              <p>The purpose of the "User Mapping" tab is to allow you to map the incoming user name and domain (usually from Active Directory) to its CMIS user equivalent.
-                     The mapping consists of a match expression, which is a regular expression where parentheses ("("
-                     and ")") mark sections you are interested in, and a replace string.  The sections marked with parentheses are called "groups" in regular expression parlance.  The replace string consists of constant text plus
-                     substitutions of the groups from the match, perhaps modified.  For example, "$(1)" refers to the first group within the match, while "$(1l)" refers to the first match group
-                     mapped to lower case.  Similarly, "$(1u)" refers to the same characters, but mapped to upper case.</p>
-              <p>For example, a match expression of <code>^(.*)\@([A-Z|a-z|0-9|_|-]*)\.(.*)$</code> with a replace string of <code>$(2)\$(1l)</code> would convert an
-                   Active Directory username of <code>MyUserName@subdomain.domain.com</code> into the CMIS user name <code>subdomain\myusername</code>.</p>
-              <p>When you are done, click the "Save" button.  You will then see a summary and status for the authority connection:</p>
-              <br/><br/>
-              <figure src="images/en_US/cmis-authority-connection-configuration-save.png" alt="CMIS Authority, saving configuration" width="80%"/>
-              <br/><br/>
+            <section id="sharepointadauthority">
+                <title>Microsoft SharePoint ActiveDirectory Authority Connection</title>
+                <p>A Microsoft SharePoint ActiveDirectory authority connection is meant to furnish access tokens from Active Directory for a SharePoint instance that is configured
+                    to use Claim Space authorization.  It cannot be used in any other situation.</p>
+                <p>The SharePoint ActiveDirectory authority is meant to work in conjunction with a SharePoint Native authority connection, and provides authorization information from one or
+                    more Active Directory domain controllers.  Thus, it is only needed if Active Directory groups are used to furnish access to documents for users in the SharePoint system.</p>
+                <p>Documents must be indexed using a Microsoft SharePoint repository connection where the "Authority type" is specified to be "Native".  If the "Authority type" is
+                    specified to be "Active Directory", then instead you should configure an Active Directory authority connection, described above.</p>
+                <p>This connection type needs to be provided with information about how to log into an appropriate Windows domain controller, with a user that has sufficient privileges to
+                    be able to look up any user's ID and group relationships.</p>
+                <br/>
+                <p>A SharePoint Active Directory authority connection type has two special tabs in the authority connection editing screen: "Domain Controller", and "Cache".  The "Domain Controller"
+                       tab looks like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/sharepointadauthority-configure-dc.PNG" alt="SharePoint AD Configuration, Domain Controller tab" width="80%"/>
+                <br/><br/>
+                <p>As you can see, the SharePoint Active Directory authority allows you to configure multiple connections to different, but presumably related, domain controllers.  The choice of
+                       which domain controller will be accessed is determined by traversing the list of configured domain controllers from top to bottom, and finding the first one that
+                       matches the domain suffix field specified.  Note that a blank value for the domain suffix will match <strong>all</strong> users.</p>
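+                <p>For example, a hypothetical rule whose domain suffix is <code>sales.example.com</code> would handle users such as <code>jdoe@sales.example.com</code>, while a rule with a blank
+                       suffix placed at the end of the list would act as a catch-all for all remaining users.</p>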
+                <p>To add a domain controller to the end of the list, fill in the requested values.  Note that the "Administrative user name" field usually requires no domain suffix, but
+                       depending on the details of how the domain controller is configured, may sometimes only accept the "name@domain" format.  When you have completed your
+                       entry, click the "Add to end" button to add the domain controller rule to the end of the list.  Later, when other domain controllers are present in the list, you can
+                       click a different button at an appropriate spot to insert the domain controller record into the list where you want it to go.</p>
+                <p>The SharePoint Active Directory authority connection type also has a "Cache" tab, for managing the caching of individual user responses:</p>
+                <br/><br/>
+                <figure src="images/en_US/sharepointadauthority-configure-cache.PNG" alt="SharePoint AD Configuration, Cache tab" width="80%"/>
+                <br/><br/>
+                <p>Here you can control how many individual users will be cached, and for how long.</p>
+                <p>When you are done, click the "Save" button.  When you do, a connection summary and status screen will be presented, which may look something like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/sharepointadauthority-status.PNG" alt="SharePoint AD Status" width="80%"/>
+                <br/><br/>
+                <p>Note that in this example, the SharePoint Active Directory connection is not responding, which is leading to an error status message instead of "Connection working".</p>
+
+            </section>
+
+            <section id="sharepointnativeauthority">
+                <title>Microsoft SharePoint Native Authority Connection</title>
+                <p>A Microsoft SharePoint Native authority connection is meant to furnish access tokens from the same SharePoint instance that the documents are coming from.
+                    You should use this authority type whenever you are trying to secure documents using a SharePoint repository connection that is configured to use the "Native" 
+                    authority type.</p>
+                <p>If your SharePoint instance is configured to use the Claim Space authorization model, you may combine a SharePoint Native authority connection with other
+                    SharePoint authority types, such as the SharePoint ActiveDirectory authority type, to furnish complete authorization support.  However, if Claim Space is not
+                    configured, the SharePoint Native authority connection is the only authority type you should need to use.</p>
+                <p>A SharePoint authority connection has two special tabs on the authority connection editing screen: the "Server" tab, and the "Cache" tab.
+                    The "Server" tab looks like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/sharepointnativeauthority-configure-server.PNG" alt="SharePoint Native Authority, Server tab" width="80%"/>
+                <br/><br/>
+                <p>Select your SharePoint server version from the pulldown.  If you do not select the correct server version, your documents may either be indexed with
+                    insufficient security protection, or you may not be able to index any documents.  Check with your SharePoint system administrator if you are not sure
+                    what to select.</p>
+                <p>SharePoint uses a web URL model for addressing sites, subsites, libraries, and files.  The best way to figure out how to set up a SharePoint connection 
+                    type is therefore to start with your web browser, and visit the topmost root of the site you wish to crawl.  Then, record the URL you see in your browser.</p>
+                <p>Select the server protocol, and enter the server name and port, based on what you recorded from the URL for your SharePoint site.  For the "Site path"
+                    field, type in the portion of the root site URL that includes everything after the server and port, except for the final "aspx" file.  For example, if the SharePoint
+                    URL is "http://myserver:81/sites/somewhere/index.aspx", the site path would be "/sites/somewhere".</p>
+                <p>The SharePoint credentials are, of course, what you used to log into your root site.  The SharePoint connection type always requires the user name to be
+                    in the form "domain\user".</p>
+                <p>If your SharePoint server is using SSL, you will need to supply enough certificates for the connection's trust store so that the SharePoint server's SSL
+                    server certificate can be validated.  This typically consists of either the server certificate, or the certificate from the authority that signed the server certificate.
+                    Browse to the local file containing the certificate, and click the "Add" button.</p>
+                <p>The "Cache" tab looks like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/sharepointnativeauthority-configure-cache.PNG" alt="SharePoint Native Authority, Cache tab" width="80%"/>
+                <br/><br/>
+                <p>Fill in the desired caching parameters.</p>
+                <p>After you click the "Save" button, you will see a connection summary screen, which might look something like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/sharepointnativeauthority-status.PNG" alt="SharePoint Native Status" width="80%"/>
+                <br/><br/>
+                <p>Note that in this example, the SharePoint connection is not actually referencing a SharePoint instance, which is leading to an error status message instead of
+                    "Connection working".</p>
             </section>
             
-        </section>
+            
+       </section>
         
         <section id="repositoryconnectiontypes">
             <title>Repository Connection Types</title>
 
+            <section id="alfrescorepository">
+              <title>Alfresco Repository Connection</title>
+              <p>The Alfresco Repository Connection type allows you to index content from an Alfresco repository.</p>
+              <p>This connector is compatible with any Alfresco version (2.x, 3.x and 4.x).</p>
+              <p>This connection type has no support for any kind of document security, except for hand-entered access tokens provided on a per-job basis.</p>
+              <br/>
+              <p>An Alfresco connection has the following configuration parameters on the repository connection editing screen:</p>
+              <br/><br/>
+              <figure src="images/en_US/alfresco-repository-connection-configuration.png" alt="Alfresco Repository Connection, configuration parameters" width="80%"/>
+              <br/><br/>
+              <p>Enter the correct username, password and the endpoint to reference the Alfresco document server services.</p>
+              <p>You can also set a specific socket timeout to tune the connector for your own Alfresco repository settings.</p>
+              <p>If you have a Multi-Tenancy environment configured in the repository, be sure to also set the tenant domain parameter.</p>
+              <p>The endpoint consists of the HTTP protocol, hostname, port and the context path of the Alfresco Web Services API exposed by the Alfresco server.</p>
+              <p>If you have not changed the default context path of the Alfresco webapp, you should have an endpoint address similar to the following:</p>
+              <br/><br/>
+              <p><code>http://HOSTNAME:PORT/alfresco/api</code></p>
+              <br/><br/>
+              <p>After you click the "Save" button, you will see a connection summary screen, which might look something like this:</p>
+              <br/><br/>
+              <figure src="images/en_US/alfresco-repository-connection-configuration-save.png" alt="Alfresco Repository Connection, saving configuration" width="80%"/>
+              <br/><br/>
+              <p>When you configure a job to use the Alfresco repository connection an additional tab is presented. This is the "Lucene Query" tab:</p>
+              <br/><br/>
+              <figure src="images/en_US/alfresco-repository-connection-job-lucenequery.png" alt="Alfresco Repository Connection, Lucene Query" width="80%"/>
+              <br/><br/>
+              <p>The Lucene Query tab allows you to specify the query based on the Lucene Query Language to get all the result documents that need to be ingested.</p>
+              <p>Please note that if the Lucene query is left empty, the connector will ingest all the content in the Alfresco repository under the Company Home.</p>
+              <p>Please also note that, during the ingestion process, for each result the Alfresco connector finds: if the node is a folder (that is, its type defines a child association), the connector ingests all the children of that folder; otherwise it ingests the document directly (the node must have d:content as one of its properties).</p>
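+              <p>For example, a hypothetical Lucene query such as the following (assuming the standard Company Home layout) would restrict the crawl to content nodes stored within Share sites:</p>
+              <p><code>+PATH:"/app:company_home/st:sites//*" +TYPE:"cm:content"</code></p>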
+              <p>When you are done, and you click the "Save" button, you will see a summary page looking something like this:</p>
+              <br/><br/>
+              <figure src="images/en_US/alfresco-repository-connection-job-save.png" alt="Alfresco Repository Connection, saving job" width="80%"/>
+              <br/><br/>
+            </section>
+
+            <section id="cmisrepository">
+              <title>CMIS Repository Connection</title>
+              <p>The CMIS Repository Connection type allows you to index content from any CMIS-compliant repository.</p>
+              <p>By default, each CMIS connection manages a single CMIS repository.  This means that if you have multiple CMIS repositories exposed by a single
+                  endpoint, you need to create a specific connection for each CMIS repository.</p>
+              <p>CMIS repository documents are typically secured by using the CMIS Authority Connection type.  This authority type, however, does not have access
+                    to user groups, since there is no such functionality in the CMIS specification at this time.  As a result, most people only use the CMIS connection type
+                    in an unsecured manner.</p>
+              <br/>
+              <p>A CMIS connection has the following configuration parameters on the repository connection editing screen:</p>
+              <br/><br/>
+              <figure src="images/en_US/cmis-repository-connection-configuration.png" alt="CMIS Repository Connection, configuration parameters" width="80%"/>
+              <br/><br/>
+              <p>Select the correct CMIS binding protocol (AtomPub or Web Services) and enter the correct username, password and the endpoint to reference the CMIS document server services.</p>
+              <p>The endpoint consists of the HTTP protocol, hostname, port and the context path of the CMIS service exposed by the CMIS server:</p>
+              <br/><br/>
+              <p><code>http://HOSTNAME:PORT/CMIS_CONTEXT_PATH</code></p>
+              <br/><br/>
+              <p>Optionally, you can provide the repository ID to select one of the exposed CMIS repositories; if this parameter is left blank, the CMIS connector will use the first CMIS repository exposed by the CMIS server.</p>
+              <br/>
+              <p>Note that, in a CMIS system, each binding protocol has its own context path, which means that the endpoints are different.</p>
+              <p>For example, the endpoint of the AtomPub binding exposed by the current version of the InMemory Server provided by the OpenCMIS framework is the following:</p>
+              <p><code>http://localhost:8080/chemistry-opencmis-server-inmemory-war-0.5.0-SNAPSHOT/atom</code></p>
+              <br/><br/>
+              <p>The Web Services binding is exposed using a different endpoint:</p>
+              <p><code>http://localhost:8080/chemistry-opencmis-server-inmemory-war-0.5.0-SNAPSHOT/services/RepositoryService</code></p>
+              <br/><br/>
+              <p>After you click the "Save" button, you will see a connection summary screen, which might look something like this:</p>
+              <br/><br/>
+              <figure src="images/en_US/cmis-repository-connection-configuration-save.png" alt="CMIS Repository Connection, saving configuration" width="80%"/>
+              <br/><br/>
+              <p>When you configure a job to use the CMIS repository connection an additional tab is presented. This is the "CMIS Query" tab:</p>
+              <br/><br/>
+              <figure src="images/en_US/cmis-repository-connection-job-cmisquery.png" alt="CMIS Repository Connection, CMIS Query" width="80%"/>
+              <br/><br/>
+              <p>The CMIS Query tab allows you to specify the query based on the CMIS Query Language to get all the result documents that need to be ingested.</p>
+              <p>Note that, during the ingestion process, for each result the CMIS connector finds: if the node is a folder (that is, its baseType is cmis:folder), the connector ingests all the children of that folder; otherwise it ingests the document directly (its baseType must be cmis:document).</p>
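+              <p>For example, a hypothetical CMIS query that selects all documents whose names end in ".pdf" would look like this:</p>
+              <p><code>SELECT * FROM cmis:document WHERE cmis:name LIKE '%.pdf'</code></p>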
+              <p>When you are done, and you click the "Save" button, you will see a summary page looking something like this:</p>
+              <br/><br/>
+              <figure src="images/en_US/cmis-repository-connection-job-save.png" alt="CMIS Repository Connection, saving job" width="80%"/>
+              <br/><br/>
+            </section>
+            
+            <section id="documentumrepository">
+                <title>EMC Documentum Repository Connection</title>
+                <p>The EMC Documentum connection type allows you to index content from a Documentum Content Server instance.  A single connection allows you
+                    to reach all documents contained on a single Content Server instance.  Multiple connections are therefore required to reach documents from multiple Content Server instances.</p>
+                <p>For each Content Server instance, the Documentum connection type allows you to index any Documentum content that is of type dm_document, or is derived from dm_document.
+                    Compound documents are handled as well, but only by means of the component documents that make them up.  No other Documentum construct can be indexed at this time.</p>
+                <p>Documents described by Documentum connections are typically secured by a Documentum authority.  If you have not yet created a Documentum authority, but would like your
+                    documents to be secured, please follow the directions in the section titled "EMC Documentum Authority Connection".</p>
+                <p>A Documentum connection has the following special tabs: "Docbase", and "Webtop".  The "Docbase" tab allows you to select a Content Server to connect to, and also to provide
+                    appropriate credentials.  The "Webtop" tab describes the location of a Webtop server that will be used to display the documents from that Content Server, after they have been indexed.</p>
+                <p>The "Docbase" tab looks like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/documentum-docbase.PNG" alt="Documentum Connection, Docbase tab" width="80%"/>
+                <br/><br/>
+                <p>Enter the Content Server Docbase instance name, and provide your credentials.  You may leave the "Domain" field blank, if the Content Server instance does not have AD integration enabled.</p>
+                <p>The "Webtop" tab looks like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/documentum-webtop.PNG" alt="Documentum Connection, Webtop tab" width="80%"/>
+                <br/><br/>
+                <p>Enter the components of the base URL of the Webtop instance you want to use for serving the documents.  Remember that this information will only be used to construct
+                    a URL to the document to allow user inspection; it will not be used for any crawling activities.</p>
+                <p>When you are done, click the "Save" button.  When you do, a connection summary and status screen will be presented:</p>
+                <br/><br/>
+                <figure src="images/en_US/documentum-status.PNG" alt="Documentum Connection Status" width="80%"/>
+                <br/><br/>
+                <p>Pay careful attention to the status, and be prepared to correct any
+                    problems that are displayed.</p>
+                <p></p>
+                <p>A job created to use a Documentum connection has the following additional tabs associated with it: "Paths", "Document Types", "Content Types", "Security", and "Path Metadata".</p>
+                <p>The "Paths" tab allows you to construct the paths within Documentum that you want to scan for content.  If no paths are selected, all content will be considered eligible.</p>
+                <p>The "Document Types" tab allows you to select what document types you want to index.  Only document types that are derived from dm_document, which are flagged by the system administrator
+                    as being "indexable", will be presented for your selection.  On this tab also, for each document type you index, you may choose to include specific metadata for documents of that type, or you can
+                    check the "All metadata" checkbox to include all metadata associated with documents of that type.</p>
+                <p>The "Content Types" tab allows you to select which Documentum mime-types are to be included in the document set.  Check the types you want to include, and uncheck the types you want to
+                    exclude.</p>
+                <p>The "Security" tab allows you to disable or enable Documentum security for the documents described by this job.  You can turn off native Documentum security by clicking the "Disable" radio button.
+                    If you do this, you may also enter your own access tokens, which will be applied to all documents described by the job.  The form of the access tokens you enter will depend on the governing
+                    authority connection type.  Click the "Add" button to add each access token.</p>
+                <p>The "Path Metadata" tab allows you to send each document's path information as metadata to the index.  To enable this feature, enter, in the "Path attribute name" field, the name of the metadata
+                    attribute that should receive this information.  Then, add the rules you want to the list of rules.  Each rule has a match expression, which is a regular expression where
+                    parentheses ("(" and ")") mark sections you are interested in.  These sections are called "groups" in regular expression parlance.  The replace string consists of constant text plus
+                    substitutions of the groups from the match, perhaps modified.  For example, "$(1)" refers to the first group within the match, while "$(1l)" refers to the first match group
+                    mapped to lower case.  Similarly, "$(1u)" refers to the same characters, but mapped to upper case.</p>
+                <p>For example, suppose you had a rule which had ".*/(.*)/(.*)/.*" as a match expression, and "$(1) $(2)" as the replace string.  If presented with the path
+                    <code>Project/Folder_1/Folder_2/Filename</code>, it would output the string <code>Folder_1 Folder_2</code>.</p>
+                <p>If more than one rule is present, the rules are all executed in sequence.  That is, the output of the first rule is modified by the second rule, etc.</p>
+            </section>
+            
+            <section id="dropboxrepository">
+              <title>Dropbox Repository Connection</title>
+              <p>The Dropbox Repository Connection type allows you to index content from <a href="https://www.dropbox.com/home">Dropbox</a>.</p>
+              <p>Each Dropbox Connection manages access to a single dropbox repository. This means that if you have multiple dropbox repositories (i.e. different users),
+                you need to create a specific connection for each dropbox repository and provide the associated authentication information.</p>
+              <p>This connection type has no support for any kind of document security, except for hand-entered access tokens provided on a per-job basis.</p>
+              <br/>
+              <p>A Dropbox connection has the following configuration parameters on the repository connection editing screen:</p>
+              <br/><br/>
+              <figure src="images/en_US/dropbox-repository-connection-configuration.PNG" alt="Dropbox Repository Connection, configuration parameters" width="80%"/>
+              <br/><br/>
+              <p>As you can see, four pieces of information are needed to create a successful connection. The application key and secret are provided by Dropbox 
+                when you register your application for a development license. This is typically done through the application developer <a href="https://www.dropbox.com/developers/apps">Dropbox website</a>.</p>
+              <br/><br/>
+              <figure src="images/en_US/dropbox-repository-create-application.PNG" alt="Dropbox create application" width="80%"/>
+              <br/><br/>
+              <p>For our purposes, we need to select "Core" as the application type, since we will be using REST services to communicate with Dropbox. We also select
+                "full access". This merits a small discussion. Typically, an application which wants to store and retrieve information does so from an application-specific
+                folder. In this case, we assume that the user wants access to their files as they are, and does not want to copy them into a ManifoldCF-specific folder. As a result,
+                we have selected full access instead of "App folder".</p>
+              <br/><br/>
+              <figure src="images/en_US/dropbox-repository-application-secret-passwords.PNG" alt="Dropbox get key and secret passwords" width="80%"/>
+              <br/><br/>
+              <p>Afterwards, we can see the app key and app secret, which are the first two pieces of information requested by the connector.</p>
+              <p>Each user must then confirm that your application is allowed to access their Dropbox. This is done through a standard OAuth
+                approach. After providing your application key and secret, the user is directed to a Dropbox website asking whether they wish to grant permission to your
+                application. After they accept the request, Dropbox provides a client key and secret. These are the last two pieces of information needed for the Dropbox
+                connector. This process is covered in depth at the <a href="https://www.dropbox.com/developers/core/authentication">Dropbox website</a>,
+                which shows examples of how to generate the two needed client tokens.</p>
+              <br/>
+              <br/><br/>
+              <p>After you click the "Save" button, you will see a connection summary screen, which might look something like this:</p>
+              <br/><br/>
+              <figure src="images/en_US/dropbox-repository-connection-configuration-save.PNG" alt="Dropbox Repository Connection, saving configuration" width="80%"/>
+              <br/><br/>
+              <p>When you configure a job to use the Dropbox repository connection an additional tab is presented. This is the "Dropbox Folder to Index" tab:</p>
+              <br/><br/>
+              <figure src="images/en_US/dropbox-repository-connection-job-dropbox-folder-to-index.PNG" alt="Dropbox Repository Connection, Dropbox Folder to Index" width="80%"/>
+              <br/><br/>
+              <p>The Dropbox Folder to Index tab allows you to specify the directory that the Dropbox connector will index. Dropbox uses Unix-style paths, with
+                "/" indicating the root path (and thus the entire Dropbox). For example, if you want to index just the Photos directory, you would specify "/Photos".</p>
+              <p>Note that, during the ingestion process, for each result the Dropbox connector finds: if the node is a folder, the connector ingests all the children of that folder;
+                otherwise it ingests the document directly.</p>
+              <p>When you are done, and you click the "Save" button, you will see a summary page looking something like this:</p>
+              <br/><br/>
+              <figure src="images/en_US/dropbox-repository-connection-job-save.PNG" alt="Dropbox Repository Connection, saving job" width="80%"/>
+              <br/><br/>
+            </section>
+
+
+            <section id="emailrepository">
+                <title>Individual Email Repository Connection</title>
+                <p>The Individual Email connection type allows you to index content from a single email account, using IMAP, IMAP-SSL, POP3, or
+                    POP3-SSL email protocols.  Multiple connections are required to support multiple email accounts.</p>
+                <p>This connection type provides no support at this time for securing email documents.</p>
+                <p>An Email connection has the following special tabs: "Server" and "URL".  The "Server" tab allows you to describe a server and
+                    email account, while the "URL" tab allows you to describe a URL template that will be used to form the URL for individual emails that
+                    are crawled.</p>
+                <p>The "Server" tab looks like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/email-configure-server.PNG" alt="Email Connection, Server tab" width="80%"/>
+                <br/><br/>
+                <p>Select an email protocol, and type in the name of the email host.  Also type in the user name and password.  If the port differs from the
+                    default for the selected protocol, you may enter a port as well.</p>
+                <p>The "URL" tab looks like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/email-configure-url.PNG" alt="Email Connection, URL tab" width="80%"/>
+                <br/><br/>
+                <p>Enter a URL template to be used for generating the URL for each individual email.  Use the special substitution tokens "$(FOLDERNAME)"
+                    and "$(MESSAGEID)" to include the individual message's folder name and message ID in the URL, in url-encoded form.</p>
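+                <p>For example, for a hypothetical webmail viewer running at <code>mail.example.com</code>, a template such as
+                    <code>http://mail.example.com/view?folder=$(FOLDERNAME)&amp;msgid=$(MESSAGEID)</code> might be appropriate.</p>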
+                <p>After you click the "Save" button, you will see a connection summary screen, which might look something like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/email-status.PNG" alt="Email Connection Status" width="80%"/>
+                <br/><br/>
+                <p>Note that in this example, the email connection cannot reach the server, which is leading to an error status message instead of
+                    "Connection working".</p>
+                <p></p>
+                <p>When you configure a job to use a repository connection of the email type, two additional tabs are presented.  These are, in order, "Metadata", and "Filter".</p>
+                <p>The "Metadata" tab looks something like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/email-job-metadata.PNG" alt="Email Job, Metadata tab" width="80%"/>
+                <br/><br/>
+                <p>Select any of the checkboxes to include that metadata in the crawl.</p>
+                <p></p>
+                <p>The "Filter" tab looks something like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/email-job-filter.PNG" alt="Email Job, Filter tab" width="80%"/>
+                <br/><br/>
+                <p>Using the pulldown, select one or more folders to include.  Then, if you wish to further limit the documents included by applying search criteria,
+                    you may select a field, type in a search value, and click the "Add" button.  Note that <strong>all</strong> the fields must match for the
+                    email to be included in the crawl.</p>
+            </section>
+
+            <section id="filenetrepository">
+                <title>IBM FileNet P8 Repository Connection</title>
+                <p>The IBM FileNet P8 connection type allows you to index content from a FileNet P8 server instance.  A connection allows you to reach all files
+                      kept on that server.  Multiple connections are required to support multiple servers.</p>
+                <p>This connection type secures documents using the Active Directory authority.  If you have not yet created an Active Directory authority, but would like your
+                    documents to be secured, please follow the directions in the section titled "Active Directory Authority Connection".</p>
+                <p>A FileNet connection has the following special tabs: "Server", "Object Store", "Document URL", and "Credentials".  The "Server" tab allows you to
+                      connect to a specific FileNet P8 Server, while the "Object store" tab allows you to specify the desired FileNet object store.  The "Document URL"
+                      tab allows you to set up the parameters of each indexed document's URL, while the "Credentials" tab allows you to specify the credentials to use
+                      to access the FileNet object store.</p>
+                <p>The "Server" tab looks like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/filenet-configure-server.PNG" alt="FileNet Connection, Server tab" width="80%"/>
+                <br/><br/>
+                <p>Select the appropriate protocol, and provide the server name, port, and service location.</p>
+                <p>The "Object Store" tab looks like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/filenet-configure-objectstore.PNG" alt="FileNet Connection, Object Store tab" width="80%"/>
+                <br/><br/>
+                <p>Type in the name of the FileNet domain you want to connect to, and the name of the FileNet object store within that domain.</p>
+                <p>The "Document URL" tab looks like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/filenet-configure-documenturl.PNG" alt="FileNet Connection, Document URL tab" width="80%"/>
+                <br/><br/>
+                <p>This tab allows you to specify the base URL that will be used to construct the URL by which each indexed document is presented to the user.  Select the
+                      protocol, and type in the host name, the port, and the location.</p>
+                <p>The "Credentials" tab looks like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/filenet-configure-credentials.PNG" alt="FileNet Connection, Credentials tab" width="80%"/>
+                <br/><br/>
+                <p>Type in the FileNet user ID and password to allow the FileNet connection type access to the FileNet repository.</p>
+                <p>When you are done filling in the connection information, click the "Save" button.  You should see something like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/filenet-status.PNG" alt="FileNet Connection Status" width="80%"/>
+                <br/><br/>
+                <p>More here later</p>
+            </section>
+            
             <section id="filesystemrepository">
-                <title>Generic File System Repository Connection</title>
-                <p>The generic file system repository connection type was developed primarily as an example, demonstration, and testing tool, although it can potentially be useful for indexing local
-                       files that exist on the same machine that ManifoldCF is running on.  Bear in mind that there is no support in this connection type for any kind of
-                       security, and the options are somewhat limited.</p>
-                <p>The file system repository connection type provides no configuration tabs beyond the standard ones.  However, please consider setting a "Maximum connections per
+                <title>Generic WGET-Compatible File System Repository Connection</title>
+                <p>The generic file system repository connection type was developed in part as an example, demonstration, and testing tool, which reads simple
+                       files in directory paths, and partly as ManifoldCF support for the Unix utility called <em>wget</em>.  In the latter mode, the File System Repository Connector
+                       will parse file names that were created by <em>wget</em>, or by the wget-compatible File System Output Connector, and turn these back
+                       into full URLs pointing to external web content.</p>
+                <p>This connection type has no support for any kind of document security, except for hand-entered access tokens provided on a per-job basis.</p>
+                <p>The File System repository connection type provides no configuration tabs beyond the standard ones.  However, please consider setting a "Maximum connections per
                        JVM" value on the "Throttling" tab to at least one per worker thread, or 30, for best performance.</p>
                 <p>Jobs created using a file-system-type repository connection
-                       have two tabs in addition to the standard repertoire: the "Hop Filters" tab, and the "Paths" tab.</p>
+                       have two tabs in addition to the standard repertoire: the "Hop Filters" tab, and the "Repository Paths" tab.</p>
                 <p>The "Hop Filters" tab allows you to restrict the document set by the number of child hops from the path root.  While this is not terribly interesting in the case of a file
                        system, the same basic functionality is also used in the Web connection type, where it is a more important feature.  The file system connection type gives you a way to see
                        how this feature works, in a more predictable environment:</p>
                 <br/><br/>
                 <figure src="images/en_US/filesystem-job-hopcount.PNG" alt="File System Connection, Hop Filters tab" width="80%"/>
                 <br/><br/>
-                <p>In the case of the file system connection type, there is only one variety of relationship between documents, which is called a "child" relationship.  If you want to
+                <p>In the case of the File System connection type, there is only one variety of relationship between documents, which is called a "child" relationship.  If you want to
                        restrict the document set by how far away a document is from the path root, enter the maximum allowed number of hops in the text box.  Leaving the box blank
                        indicates that no such filtering will take place.</p>
                 <p>On this same tab, you can tell the Framework what to do should there be changes in the distance from the root to a document.  The choice "Delete unreachable
@@ -885,18 +1519,523 @@
                        expensive bookkeeping, however, so you also have the option of  ignoring such changes.  There are two varieties of this latter option - you can ignore the changes
                        for now, with the option of turning back on the aggressive bookkeeping at a later time, or you can decide not to ever allow changes to propagate, in which case
                        the Framework will discard the necessary bookkeeping information permanently.</p>
-                <p>The "Paths" tab looks like this:</p>
+                <p>The "Repository Paths" tab looks like this:</p>
                 <br/><br/>
-                <figure src="images/en_US/filesystem-job-paths.PNG" alt="File System Connection, Paths tab" width="80%"/>
+                <figure src="images/en_US/filesystem-job-paths.PNG" alt="File System Connection, Repository Paths tab" width="80%"/>
                 <br/><br/>
-                <p>This tab allows you to type in a set of paths which function as the roots of the crawl.  For each desired path, type in the path and click the "Add" button to add it to
-                       the list.  The form of the path you type in obviously needs to be meaningful for the operating system the Framework is running on.</p>
+                <p>This tab allows you to type in a set of paths which function as the roots of the crawl.  For each desired path, type in the path, select whether the root should
+                       behave as a WGET repository or not, and click the "Add" button to add it to the list.  The form of the path you type in obviously needs to be meaningful
+                       for the operating system the Framework is running on.</p>
                 <p>Each root path has a set of rules which determines whether a document is included or not in the set for the job.  Once you have added the root path to the list, you
                        may then add rules to it.  Each rule has a match expression, an indication of whether the rule is intended to match files or directories, and an action (include or exclude).
                        Rules are evaluated from top to bottom, and the first rule that matches the file name is the one that is chosen.  To add a rule, select the desired pulldowns, type in 
                        a match file specification (e.g. "*.txt"), and click the "Add" button.</p>
             </section>
+
+            <section id="genericconnector">
+              <title>Generic Connector</title>
+              <p>The Generic connector allows you to index any content source that implements the API specification described below. The idea is that you only need to implement the API,
+                    which is designed to be fine-grained and as simple as possible while still supporting document indexing.</p>
+              <p>The API should be implemented as an XML web page (the entry point) that returns results based on the GET parameters provided. It may be a simple server script or part of a larger application.
+                    The API can be secured with HTTP basic authentication.</p>
+              <br/>
+              <p>There are 4 actions:</p>
+              <ul>
+                    <li>check</li>
+                    <li>seed</li>
+                    <li>items</li>
+                    <li>item</li>
+              </ul>
+              <p>The action is passed as the "action" GET parameter to the entry point.</p>
+              <br/><br/>
+              <p><b>[entrypoint]?action=check</b></p>
+              <p>This should return HTTP status code 200 to indicate that the entry point is working properly. Any content returned will be ignored; only the status code matters.</p>
+              <br/><br/>
+			  
+              <p><b>[entrypoint]?action=seed&amp;startDate=YYYY-MM-DDTHH:mm:ssZ&amp;endDate=YYYY-MM-DDTHH:mm:ssZ</b></p>
+              <p>Parameters:</p>
+              <ul>
+                    <li>startDate - the start of the time frame that should be applied to the returned seeds. On the first run this parameter will not be provided, meaning that all documents should be returned.</li>
+                    <li>endDate - the end of time frame. Always provided.</li>
+              </ul>
+              <p><code>startDate</code> and <code>endDate</code> parameters are encoded as <code>YYYY-MM-DD'T'HH:mm:ss'Z'</code>. Result should be valid XML of form:</p>
+              <source>
+&lt;seeds&gt;
+   &lt;seed id="document_id_1" /&gt;
+   &lt;seed id="document_id_2" /&gt;
+   ...
+&lt;/seeds&gt;
+              </source>
+              <p>The <code>id</code> attributes are required.</p>
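+              <p>For example, assuming a hypothetical entry point at "http://repository.example.com/mcf-api.php", an incremental seeding request might look like this:</p>
+              <p><code>http://repository.example.com/mcf-api.php?action=seed&amp;startDate=2013-01-01T00:00:00Z&amp;endDate=2013-02-01T00:00:00Z</code></p>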
+              <br/><br/>
+
+              <p><b>[entrypoint]?action=items&amp;id[]=document_id_1&amp;id[]=document_id_2</b></p>
+              <p>Parameters:</p>
+              <ul>
+                    <li>id[] - array of document IDs that should be returned</li>
+              </ul>
+              <p>Result should be valid XML of form:</p>
+              <source>
+&lt;items&gt;
+   &lt;item id="document_id_1"&gt;
+      &lt;url&gt;[http://document_uri]&lt;/url&gt;
+      &lt;version&gt;[document_version]&lt;/version&gt;
+      &lt;created&gt;2013-11-11T21:00:00Z&lt;/created&gt;
+      &lt;updated&gt;2013-11-11T21:00:00Z&lt;/updated&gt;
+      &lt;filename&gt;filename.ext&lt;/filename&gt;
+      &lt;mimetype&gt;mime/type&lt;/mimetype&gt;
+      &lt;metadata&gt;
+         &lt;meta name="meta_name_1"&gt;meta_value_1&lt;/meta&gt;
+         &lt;meta name="meta_name_2"&gt;meta_value_2&lt;/meta&gt;
+         ...
+      &lt;/metadata&gt;
+      &lt;auth&gt;
+         &lt;token&gt;auth_token_1&lt;/token&gt;
+         &lt;token&gt;auth_token_2&lt;/token&gt;
+         ...
+      &lt;/auth&gt;
+      &lt;related&gt;
+         &lt;id&gt;other_document_id_1&lt;/id&gt;
+         &lt;id&gt;other_document_id_2&lt;/id&gt;
+         ...
+      &lt;/related&gt;
+      &lt;content&gt;Document content&lt;/content&gt;
+   &lt;/item&gt;
+   ...
+&lt;/items&gt;
+              </source>
+              <p><code>id</code>, <code>url</code>, and <code>version</code> are required; the rest are optional.</p>
+              <p>If the <code>auth</code> tag is provided, the document will be treated as non-public, with the defined access tokens; if it is omitted, the document will be public.</p>
+              <p>If the <code>content</code> tag is omitted, the connector will ask for the document content in a separate <code>action=item</code> API call.</p>
+              <p>You may provide related document IDs when the document repository is a graph or a tree. The related documents will also be indexed. If you use relations,
+                  seeding does not have to return all documents, only starting points; the rest of the documents will be fetched by following relations.</p>
+              <br/><br/>
+
+              <p><b>[entrypoint]?action=item&amp;id=document_id</b></p>
+              <p>Parameters:</p>
+              <ul>
+                    <li>id - requested document ID</li>
+              </ul>
+              <p>Result should be the document content. It does not have to be XML - you may return binary data (PDF, DOC, etc.) that represents the document.</p>
+              <br/><br/>
+              <p>You may provide custom parameters by defining them in the job specification. All defined parameters will be sent as additional GET parameters with every API call.</p>
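+              <p>For example, if a hypothetical custom parameter named "apikey" with the value "12345" is defined in the job specification, a check call would look like this:</p>
+              <p><code>[entrypoint]?action=check&amp;apikey=12345</code></p>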
+              <p>You may override the provided auth tokens and define forced tokens in the job specification. If you change the security model to "forced" and do not provide any tokens, all documents will be public.</p>
+              <br/><br/>
+            </section>
+
+            <section id="jdbcrepository">
+                <title>Generic Database Repository Connection</title>
+                <p>The generic database connection type allows you to index content from a database table, served by one of the following databases:</p>
+                <br/>
+                <ul>
+                    <li>Postgresql (via a Postgresql JDBC driver)</li>
+                    <li>SQL Server (via the JTDS JDBC driver)</li>
+                    <li>Oracle (via the Oracle JDBC driver)</li>
+                    <li>Sybase (via the JTDS JDBC driver)</li>
+                    <li>MySQL (via the MySQL JDBC driver)</li>
+                </ul>
+                <br/>
+                <p>This connection type <b>cannot</b> be configured to work with other databases than the ones listed above without software changes.  Depending on your particular installation,
+                       some of the above options may not be available.</p>
+                <p>The generic database connection type currently has no per-document notion of security.  It is possible to set document security for all documents specified by a
+                       given job.  Since this form of security requires you to know what the actual access tokens are, you must have detailed knowledge of the authority connection you
+                       intend to use, and what sorts of access tokens it produces.</p>
+                <p>A generic database connection has three special tabs on the repository connection editing screen: the "Database Type" tab, the "Server" tab, and the
+                       "Credentials" tab.  The "Database Type" tab looks like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/jdbc-configure-database-type.PNG" alt="Generic Database Connection, Database Type tab" width="80%"/>
+                <br/><br/>
+                <p>Select the kind of database you want to connect to, from the pulldown.</p>
+                <p>Also, select the JDBC access method you want from the access method pulldown.  The access method is provided because the JDBC specification has been
+                    recently clarified, and not all JDBC drivers work the same way as far as resultset column name discovery is concerned.  The "by name" option currently works
+                    with all JDBC drivers in the list except for the MySQL driver.  The "by label" option works for the current MySQL driver, and may work for some of the others as well.  If
+                    the queries you supply for your generic database jobs do not work correctly, and you see an error message about not being able to find required columns in the
+                    result, you can change your selection on this pulldown and it may correct the problem.</p>
+                <p>The "Server" tab looks like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/jdbc-configure-server.PNG" alt="Generic Database Connection, Server tab" width="80%"/>
+                <br/><br/>
+                <p>Here you have a choice.  <strong>Either</strong> you can choose to specify the database host and port, and the database name or instance name,
+                      <strong>or</strong> you can provide a raw JDBC connection string that is appropriate for the database type you have chosen.  This latter option
+                      is provided because many JDBC drivers, such as Oracle's, now can connect to an entire cluster of Oracle servers if you specify the appropriate
+                      connection description string.</p>
+                <p>If you choose the second option, just consult your JDBC driver's documentation and supply your string.  If there is anything entered in the raw connection
+                      string field at all, it will take precedence over the database host and database name fields.</p>
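+                <p>For example, a raw connection string for an Oracle cluster might be an Oracle connect descriptor such as the following (the host names and service name here are purely
+                      hypothetical; consult your JDBC driver's documentation for the exact form it expects):</p>
+                <p><code>(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=oracle-node-1)(PORT=1521))(ADDRESS=(PROTOCOL=TCP)(HOST=oracle-node-2)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=my-service)))</code></p>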
+                <p>If you choose the first option, the server name and port must be provided in the "Database host and port" field.  For example, for Oracle, the standard
+                      Oracle installation uses port 1521, so you would enter something like, "my-oracle-server:1521" for this field.  Postgresql uses port 5432 by default, so
+                      "my-postgresql-server:5432" would be required.  SQL Server's standard port is 1433, so use "my-sql-server:1433".</p>
+                <p>The service name or instance name field describes which instance and database to connect to.  For Oracle or Postgresql, provide just the database name.
+                      For SQL Server, use "my-instance-name/my-database-name".  For SQL Server using the default instance, use just the database name.</p>
+                <p>The "Credentials" tab is straightforward:</p>
+                <br/><br/>
+                <figure src="images/en_US/jdbc-configure-credentials.PNG" alt="Generic Database Connection, Credentials tab" width="80%"/>
+                <br/><br/>
+                <p>Enter the database user credentials.</p>
+                <p>After you click the "Save" button, you will see a connection summary screen, which might look something like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/jdbc-status.PNG" alt="Generic Database Status" width="80%"/>
+                <br/><br/>
+                <p>Note that in this example, the generic database connection is not properly authenticated, which is leading to an error status message instead of "Connection working".</p>
+                <p></p>
+                <p>When you configure a job to use a repository connection of the generic database type, two additional tabs are presented.  These are, in order, "Queries" and "Security".</p>
+                <p>The "Queries" tab looks something like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/jdbc-job-queries.PNG" alt="Generic Database Job, Queries tab" width="80%"/>
+                <br/><br/>
+                <p>You must supply at least two queries.  (A third query is optional.)  The purpose of these queries is to obtain the data needed for the database to be properly crawled.
+                       But in order to write these queries, you must make some decisions first.  Basically, you need to figure out how best to map the constructs within your database
+                       to the requirements of the Framework.  The queries must do the following:</p>
+                <br/>
+                <ul>
+                    <li>Obtain a list of document identifiers corresponding to changes and additions that occurred within a specified time window (see below)</li>
+                    <li>Given a set of document identifiers, find the corresponding version strings (see below)</li>
+                    <li>Given a set of document identifiers and version strings, find information about the document, consisting of the document's data, access URL, and metadata</li>
+                </ul>
+                <br/>
+                <p>The Framework uses a unique document identifier to describe every document within the confines of a defined repository connection.  This document identifier is used
+                       as a primary key to locate information about the document.  When you set up a generic-database-type job, the database you are connecting to must have a similar
+                       concept.  If you pick the wrong thing for a document identifier, at the very least you could find that the crawler runs very slowly.</p>
+                <p>The query that obtains the list of document identifiers representing the changes that occurred over the given time frame must return <b>at least</b> all such changes.  It is
+                        acceptable (although not ideal) for the returned list to be bigger than that.</p>
+                <p>If you want your database connection to function in an incremental manner, you must also come up with the format of a "version string".  This string is used by the 
+                       Framework to determine if a document has changed.  It must change whenever anything that might affect the document's indexing changes.  (It is not a problem if
+                       it changes for other reasons, as long as it fulfills that principal criterion.)</p>
+                <p>The queries you provide get substituted before they are used by the connection.  The example queries, which are present when the queries tab is first opened for a
+                       new job, show many of these substitutions in roughly the manner in which they are intended to be used.  For example, "$(IDCOLUMN)" will substitute a column
+                       name expected by the connection to contain the document identifier into the query.  The list of substitution strings is as follows:</p>
+                <br/>
+                <table>
+                    <tr><td><b>String name</b></td><td><b>Meaning/use</b></td></tr>
+                    <tr><td>IDCOLUMN</td><td>The name of an expected resultset column containing a document identifier</td></tr>
+                    <tr><td>VERSIONCOLUMN</td><td>The name of an expected resultset column containing a version string</td></tr>
+                    <tr><td>URLCOLUMN</td><td>The name of an expected resultset column containing a URL</td></tr>
+                    <tr><td>DATACOLUMN</td><td>The name of an expected resultset column containing document data</td></tr>
+                    <tr><td>STARTTIME</td><td>A query string value containing a start time in milliseconds since epoch</td></tr>
+                    <tr><td>ENDTIME</td><td>A query string value containing an end time in milliseconds since epoch</td></tr>
+                    <tr><td>IDLIST</td><td>A query string value containing a parenthesized list of document identifier values</td></tr>
+                </table>
+                <br/>
+                <p>Use caution when constructing queries that include time-based
+                        components. "$(STARTTIME)" and "$(ENDTIME)" provide
+                        times in milliseconds since epoch. If the modified date field is not
+                        in this unit, the seeding query may not select the desired document
+                        identifiers. You should convert "$(STARTTIME)" and
+                        "$(ENDTIME)" to the appropriate timestamp unit for your system within your query.</p>
+                <p>The following table gives several sample query fragments that can be
+                        used to convert the helper strings "$(STARTTIME)" and
+                        "$(ENDTIME)" into other date and time types. The first column names
+                        the SQL database type that the following query phrase corresponds to,
+                        the second column names the output data type for the query phrase, and
+                        the third gives the query phrase itself using "$(STARTTIME)"
+                        as an example time in milliseconds since epoch. These query phrases
+                        are intended as guidelines for creating an appropriate query phrase in
+                        each language. Each query phrase is designed to work with the most
+                        current version of the database software available at the time this
+                        document was published. If your modified date field is not of
+                        the type given in the second column, the query phrase may not provide
+                        an appropriate output for date comparisons.</p>
+                <br/>
+                <table>
+                    <tr><td><b>Database Type</b></td><td><b>Date Type</b></td><td><b>Sample Query Phrase</b></td></tr>
+                    <tr><td>Oracle</td><td>date</td><td><code>TO_DATE ( '1970/01/01:00:00:00', 'yyyy/mm/dd:hh:mi:ss') + ROUND ($(STARTTIME)/86400000)</code></td></tr>
+                    <tr><td>Oracle</td><td>timestamp</td><td><code>TO_TIMESTAMP('1970-01-01 00:00:00') + interval '$(STARTTIME)/1000' second</code></td></tr>
+                    <tr><td>Postgresql</td><td>timestamp</td><td><code>date '1970-01-01' + interval '$(STARTTIME) milliseconds'</code></td></tr>
+                    <tr><td>MS SQL Server (&gt; 6.5)</td><td>datetime</td><td><code>DATEADD(ms, $(STARTTIME), '19700101')</code></td></tr>
+                    <tr><td>Sybase (10+)</td><td>datetime</td><td><code>DATEADD(ms, $(STARTTIME), '19700101')</code></td></tr>
+                </table>
+                <br/>
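+                <p>For example, a hypothetical seeding query and version query written for Postgresql, against an example table named "mytable" whose "modifieddate" column is a
+                        timestamp, might look like the following (the table and column names are illustrative only):</p>
+                <br/>
+                <p><code>SELECT id AS $(IDCOLUMN) FROM mytable WHERE modifieddate &gt; date '1970-01-01' + interval '$(STARTTIME) milliseconds'
+                  AND modifieddate &lt;= date '1970-01-01' + interval '$(ENDTIME) milliseconds'</code></p>
+                <br/>
+                <p><code>SELECT id AS $(IDCOLUMN), modifieddate AS $(VERSIONCOLUMN) FROM mytable WHERE id IN $(IDLIST)</code></p>
+                <br/>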
+                <p>When you create a job based on a general database connection, the job's queries are initially populated with examples.  These examples should give you a good idea of
+                        what columns your queries should return - in most cases, the only columns you need to return are the ones that appear in the example queries.  However,
+                        for the file data query, you may also return columns that are not specified in the example.  <strong>When you do this, the extra return column values will be passed
+                        to the index as metadata for the document.  The metadata name used will be the corresponding column name from the resultset.</strong></p>
+                <p>For example, the following file data query (written for PostgreSQL) will return documents with the metadata fields "metadata_a" and "metadata_b", in addition to the required primary
+                        document body and URL:</p>
+                <br/>
+                <p><code>SELECT id AS $(IDCOLUMN), characterdata AS $(DATACOLUMN), 'http://mydynamicserver.com?id=' || id AS $(URLCOLUMN), 
+                  publisher AS metadata_a, distributor AS metadata_b FROM mytable WHERE id IN $(IDLIST)</code></p>
+                <br/>
+                <p>There is currently no support in the JDBC connection type for natively handling multi-valued metadata.</p>
+                <p>The "Security" tab simply allows you to add specific access tokens to all documents indexed with a general database job.  In order for you to know what tokens
+                       to add, you must decide with what authority connection these documents will be secured, and understand the form of the access tokens used by that authority connection
+                       type.  This is what the "Security" tab looks like:</p>
+                <br/>
+                <br/>
+                <figure src="images/en_US/jdbc-job-security.PNG" alt="Generic Database Job, Security tab" width="80%"/>
+                <br/><br/>
+                <p>Enter a desired access token, and click the "Add" button.  You may enter multiple access tokens.</p>
+            </section>
+
+            <section id="googledriverepository">
+              <title>Google Drive Repository Connection</title>
+              <p>The Google Drive Repository Connection type allows you to index content from <a href="https://drive.google.com">Google Drives</a>.</p>
+              <p>Each Google Drive Connection manages access to a single drive repository. This means that if you have multiple Google Drives (i.e. different users),
+                you need to create a specific connection for each drive repository and provide the associated authentication information.</p>
+              <p>This connection type has no support for any kind of document security, except for hand-entered access tokens provided on a per-job basis.</p>
+              <br/>
+              <p>A Google Drive connection has the following configuration parameters on the repository connection editing screen:</p>
+              <br/><br/>
+              <figure src="images/en_US/googledrive-repository-connection-configuration.PNG" alt="googledrive Repository Connection, configuration parameters" width="80%"/>
+              <br/><br/>
+              <p>As you can see, there are three pieces of information needed to create a successful connection. The Client ID and Client Secret are provided by
+                Google Drive when you register your application for a development license. This is typically done through
+                the <a href="https://code.google.com/apis/console/b/0/">Google APIs Console</a>.</p>
+              <br/><br/>
+              <figure src="images/en_US/googledrive-repository-setup-1.PNG" alt="googledrive create project" width="80%"/>
+              <br/><br/>
+              <p>Once the project has been created, we must enable the Google Drive API:</p>
+              <br/><br/>
+              <figure src="images/en_US/googledrive-repository-setup-2.PNG" alt="googledrive enable drive api" width="80%"/>
+              <br/><br/>
+              <p>Then, going to the API Access link on the right-hand side, we need to choose to create an OAuth 2.0 client ID:</p>
+              <br/><br/>
+              <figure src="images/en_US/googledrive-repository-setup-3.PNG" alt="googledrive create oauth client" width="80%"/>
+              <br/><br/>
+              <p>After filling in the necessary information, we need to select what type of application we'd like. For our purposes we need to select
+                installed application:</p>
+              <br/><br/>
+              <figure src="images/en_US/googledrive-repository-setup-4.PNG" alt="googledrive create client id" width="80%"/>
+              <br/><br/>
+              <p>Afterwards, we obtain the Client ID and Client Secret that the connector requires (where the red boxes are):</p>
+              <br/><br/>
+              <figure src="images/en_US/googledrive-repository-setup-5.PNG" alt="googledrive client id and secret" width="80%"/>
+              <br/><br/>
+              <p>Now each user must confirm their acceptance of allowing your application to access their Google Drive. This is done through a standard OAuth
+                approach, but it needs to be done beforehand. Once the steps are completed, a long-lived refresh token is presented, which is then used by the connector.
+                For completeness, we present the needed steps below, since they require some manual work.</p>
+              <br/><br/>
+              <ol>
+                  <li>Browse to here: https://accounts.google.com/o/oauth2/auth?scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fdrive.readonly&amp;state=%2Fprofile&amp;redirect_uri=https%3A%2F%2Flocalhost&amp;response_type=code&amp;client_id=&lt;CLIENT_ID&gt;&amp;approval_prompt=force&amp;access_type=offline</li>
+                  <li>This returns a link (after acceptance), where a code is embedded in the URL: https://localhost/?state=/profile&amp;code=&lt;CODE&gt;</li>
+                  <li>Use a tool like <em>curl</em> (<a href="http://curl.haxx.se">http://curl.haxx.se</a>) to perform a POST to "https://accounts.google.com/o/oauth2/token", using the body: grant_type=authorization_code&amp;redirect_uri=https%3A%2F%2Flocalhost&amp;client_secret=&lt;CLIENT_SECRET&gt;&amp;client_id=&lt;CLIENT_ID&gt;&amp;code=&lt;CODE&gt; (see the example after this list)</li>
+                  <li>The response is a JSON document which contains the refresh_token.</li>
+              </ol>
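+              <p>For example, the POST in step 3 might be performed with a <em>curl</em> command like the following, after substituting your own values for the placeholders:</p>
+              <source>
+curl -d "grant_type=authorization_code&amp;redirect_uri=https%3A%2F%2Flocalhost&amp;client_secret=&lt;CLIENT_SECRET&gt;&amp;client_id=&lt;CLIENT_ID&gt;&amp;code=&lt;CODE&gt;" https://accounts.google.com/o/oauth2/token
+              </source>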
+              <br/><br/>
+              <p>After you click the "Save" button, you will see a connection summary screen, which might look something like this:</p>
+              <br/><br/>
+              <figure src="images/en_US/googledrive-repository-connection-configuration-save.PNG" alt="googledrive Repository Connection, saving configuration" width="80%"/>
+              <br/><br/>
+              <p>When you configure a job to use the Google Drive repository connection an additional tab is presented. This is the "Google Drive Seed Query" tab:</p>
+              <br/><br/>
+              <figure src="images/en_US/googledrive-repository-connection-job-googledrive-seed-query.PNG" alt="googledrive Repository Connection, seed query" width="80%"/>
+              <br/><br/>
+              <p>This tab allows you to specify the query which will be used to seed documents for the indexing process. The query language is
+                specified on the <a href="https://developers.google.com/drive/search-parameters">Drive Search Parameters</a> site. Directories
+                which meet the seed query are fully crawled, as the query only applies to seeds. The default query indexes the entire drive. Lastly, native Google
+                documents such as spreadsheets and word documents are exported to PDF and then ingested.</p>
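+              <p>For example, a hypothetical seed query that limits the crawl to PDF files whose titles contain the word "report" might look like this:</p>
+              <p><code>title contains 'report' and mimeType = 'application/pdf'</code></p>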
+            </section>
+
+            <section id="hdfsrepository">
+                <title>HDFS Repository Connection (WGET compatible)</title>
+                <p>The HDFS repository connection operates much like the File System Repository Connection, except it reads data from the Hadoop File System rather than a
+                       local disk.  It, too, is capable of understanding directories written in the manner of the Unix utility called <em>wget</em>.  In the latter mode, the HDFS Repository Connector
+                       will parse file names that were created by <em>wget</em>, or by the wget-compatible HDFS Output Connector, and turn these back
+                       into full URLs pointing to external web content.</p>
+                <p>This connection type has no support for any kind of document security, except for hand-entered access tokens provided on a per-job basis.</p>
+                <p>The HDFS repository connection type has an additional configuration tab above and beyond the standard ones, called "Server".  This is what it looks like:</p>
+                <br/><br/>
+                <figure src="images/en_US/hdfs-repository-configure-server.PNG" alt="HDFS Connection, Server tab" width="80%"/>
+                <br/><br/>
+                <p>Enter the HDFS name node URI, and the user name, and click the "Save" button.</p>
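+                <p>For example, a name node URI for a hypothetical Hadoop cluster might look like this:</p>
+                <p><code>hdfs://namenode.example.com:9000</code></p>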
+                <p>Jobs created using an HDFS repository connection type
+                       have two tabs in addition to the standard repertoire: the "Hop Filters" tab, and the "Repository Paths" tab.</p>
+                <p>The "Hop Filters" tab allows you to restrict the document set by the number of child hops from the path root.  This is what it looks like:</p>
+                <br/><br/>
+                <figure src="images/en_US/hdfs-job-hopcount.PNG" alt="HDFS Connection, Hop Filters tab" width="80%"/>
+                <br/><br/>
+                <p>In the case of the HDFS connection type, there is only one variety of relationship between documents, which is called a "child" relationship.  If you want to
+                       restrict the document set by how far away a document is from the path root, enter the maximum allowed number of hops in the text box.  Leaving the box blank
+                       indicates that no such filtering will take place.</p>
+                <p>On this same tab, you can tell the Framework what to do should there be changes in the distance from the root to a document.  The choice "Delete unreachable
+                       documents" requires the Framework to recalculate the distance to every potentially affected document whenever a change takes place.  This may require
+                       expensive bookkeeping, however, so you also have the option of  ignoring such changes.  There are two varieties of this latter option - you can ignore the changes
+                       for now, with the option of turning back on the aggressive bookkeeping at a later time, or you can decide not to ever allow changes to propagate, in which case
+                       the Framework will discard the necessary bookkeeping information permanently.</p>
+                <p>The "Repository Paths" tab looks like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/hdfs-job-paths.PNG" alt="HDFS Connection, Repository Paths tab" width="80%"/>
+                <br/><br/>
+                <p>This tab allows you to type in a set of paths which function as the roots of the crawl.  For each desired path, type in the path, select whether the root should
+                       behave as a WGET repository or not, and click the "Add" button to add it to the list.</p>
+                <p>Each root path has a set of rules which determines whether a document is included or not in the set for the job.  Once you have added the root path to the list, you
+                       may then add rules to it.  Each rule has a match expression, an indication of whether the rule is intended to match files or directories, and an action (include or exclude).
+                       Rules are evaluated from top to bottom, and the first rule that matches the file name is the one that is chosen.  To add a rule, select the desired pulldowns, type in 
+                       a match file specification (e.g. "*.txt"), and click the "Add" button.</p>
+            </section>
+
+            <section id="jirarepository">
+              <title>Jira Repository Connection</title>
+              <p>The Jira Repository Connection type allows you to index tickets from <a href="http://www.atlassian.com/">Atlassian</a>'s <a href="http://www.atlassian.com/software/jira">Jira</a>.</p>
+              <p>This repository connection type is meant to secure documents in conjunction with the Jira Authority Connection type.  Please read the associated
+                    documentation to configure document security.</p>
+              <p>A Jira connection has the following configuration parameters on the repository connection editing screen:</p>
+              <br/><br/>
+              <figure src="images/en_US/jira-repository-connection-configuration.PNG" alt="jira Repository Connection, configuration parameters" width="80%"/>
+              <br/><br/>
+              <p>As you can see, there are three pieces of information needed to create a successful connection. The Client ID and Client Secret
+                are the user name and password of a Jira account. The JiraUrl is the base endpoint of the particular Jira instance; for example,
+                https://searchbox.atlassian.net is the Jira instance for searchbox, which is hosted at Atlassian.</p>
+              <br/><br/>
+              <p>After you click the "Save" button, you will see a connection summary screen, which might look something like this:</p>
+              <br/><br/>
+              <figure src="images/en_US/jira-repository-connection-configuration-save.PNG" alt="jira Repository Connection, saving configuration" width="80%"/>
+              <br/><br/>
+              <p>When you configure a job to use the Jira repository connection an additional tab is presented. This is the "Seed Query" tab:</p>
+              <br/><br/>
+              <figure src="images/en_US/jira-repository-connection-job-jira-seed-query.PNG" alt="jira Repository Connection, seed query" width="80%"/>
+              <br/><br/>
+              <p>This tab allows you to specify the query which will be used to find documents for the indexing process. The query language is
+                specified on the <a href="https://confluence.atlassian.com/display/JIRA/Advanced+Searching">Jira Advanced Searching</a> site. Tickets
+                which meet the seed query are fully crawled, as the query only applies to seeds.</p>
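+              <p>For example, a hypothetical seed query that limits the crawl to issues in a single project that were updated within the last week might look like this:</p>
+              <p><code>project = MYPROJECT AND updated &gt;= -1w</code></p>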
+            </section>
+
+            <section id="livelinkrepository">
+                <title>OpenText LiveLink Repository Connection</title>
+                <p>The LiveLink connection type allows you to index content from LiveLink repositories.  LiveLink has a rich variety of different document types and metadata,
+                    which include basic documents, as well as compound documents, folders, workspaces, and projects.  A LiveLink connection is able to discover documents
+                    contained within all of these constructs.</p>
+                <p>Documents described by LiveLink connections are typically secured by a LiveLink authority.  If you have not yet created a LiveLink authority, but would
+                    like your documents to be secured, please follow the direction in the section titled "OpenText LiveLink Authority Connection".</p>
+                <p>A LiveLink connection has the following special tabs: "Server", "Document Access", and "Document View".  The "Server" tab allows you to select a
+                    LiveLink server to connect to, and also to provide appropriate credentials.  The "Document Access" tab describes the location of the LiveLink web interface,
+                    relative to the server, that will be used to fetch document content from LiveLink. The "Document View" tab affects how URLs to the fetched documents
+                    are constructed, for viewing results of searches.</p>
+                <p>The "Server" tab looks like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/livelink-connection-server.PNG" alt="LiveLink Connection, Server tab" width="80%"/>
+                <br/><br/>
+                <p>Select the manner you want the connection to use to communicate with LiveLink.  Your options are:</p>
+                <ul>
+                  <li>Internal (native LiveLink protocol)</li>
+                  <li>HTTP (communication with LiveLink through the IIS web server)</li>
+                  <li>HTTPS (communication with LiveLink through IIS using SSL)</li>
+                </ul>
+                <p>Also, you need to enter the name of the desired LiveLink server, the LiveLink port, and the LiveLink server credentials.  If you have selected communication
+                    using HTTP or HTTPS, you must provide a relative CGI path to your LiveLink.  You may also need to provide web server credentials.  Basic authentication
+                    and older forms of NTLM are supported.  In order to use NTLM, specify a non-blank server domain name in the "Server HTTP domain" field, plus a non-
+                    qualified user name and password.  If basic authentication is desired, leave the "Server HTTP domain" field blank, and provide basic auth credentials in the
+                    "Server HTTP NTLM user name" and "Server HTTP NTLM password" fields.  For no web server authentication, leave these fields all blank.</p>
+                <p>For communication using HTTPS, you will also need to upload your authority certificate(s) on the "Server" tab, to tell the connection which certificates to
+                    trust.  Upload your certificate using the browse button, and then click the "Add" button to add it to the trust store.</p>
+                <p>The "Document Access" tab looks like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/livelink-connection-document-access.PNG" alt="LiveLink Connection, Document Access tab" width="80%"/>
+                <br/><br/>
+                <p>The server name is presumed to be the same as is on the "Server" tab.  Select the desired protocol for document retrieval.  If your LiveLink server
+                    is using a non-standard HTTP port for the specified protocol for document retrieval, enter the port number.  If your LiveLink server is using NTLM
+                    authentication to access documents, enter an Active Directory user name, password, and domain.  If your LiveLink is using HTTPS, browse locally
+                    for the appropriate certificate authority certificate, and click "Add" to upload that certificate to the connection's trust store.  (You may also use the server's
+                    certificate, but that is less resilient because the server's certificate may be changed periodically.)</p>
+                <p>The "Document View" tab looks like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/livelink-connection-document-view.PNG" alt="LiveLink Connection, Document View tab" width="80%"/>
+                <br/><br/>
+                <p>If you want each document's view URL to be the same as its access URL, you can leave this tab unchanged.  If you want to direct users to a different
+                    CGI path when they view search results, you can specify that here.</p>
+                <p>When you are done, click the "Save" button.  You will see a summary screen that looks something like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/livelink-connection-status.PNG" alt="LiveLink Connection Status" width="80%"/>
+                <br/><br/>
+                <p>Make note of and correct any reported connection errors.  In this example, the connection has been correctly set up, so the connection status is
+                    "Connection working".</p>
+                <p></p>
+                <p>A job created to use a LiveLink connection has the following additional tabs associated with it: "Paths", "Filters", "Security", and "Metadata".</p>
+                <p>The "Paths" tab allows you to manage a list of LiveLink paths that act as starting points for indexing content:</p>
+                <br/><br/>
+                <figure src="images/en_US/livelink-job-paths.PNG" alt="LiveLink Job, Paths tab" width="80%"/>
+                <br/><br/>
+                <p>Build each path by selecting from the available dropdown, and clicking the "+" button.  When your path is complete, click the "Add" button to add the path
+                    to the list of starting points.</p>
+                <p>The "Filters" tab controls the criteria the LiveLink job will use to include or exclude content.  The filters are basically a list of rules.  Each rule has a
+                    document match field, and a matching action ("Include" or "Exclude").  When a LiveLink connection encounters a document, it evaluates the rules from
+                    top to bottom.  If the rule matches, then it will be included or excluded from the job's document set depending on what you have selected for the
+                    matching action. A rule's match field specifies a character match, where "*" will match any number of characters, and "?" will match any single character.</p>
+                <br/><br/>
+                <figure src="images/en_US/livelink-job-filters.PNG" alt="LiveLink Job, Filters tab" width="80%"/>
+                <br/><br/>
+                <p>Enter the match field value, select the match action, and click the "Add" button to add to the list of filters.</p>
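+                <p>For example, a hypothetical rule with a match field of "*.doc" and a matching action of "Include" would include all documents whose names end in ".doc".</p>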
+                <p>The "Security" tab allows you to disable (or enable) LiveLink security for the documents associated with this job:</p>
+                <br/><br/>
+                <figure src="images/en_US/livelink-job-security.PNG" alt="LiveLink Job, Security tab" width="80%"/>
+                <br/><br/>
+                <p>If you disable security, you can add your own access tokens to all
+                    documents in the job's document set as they are indexed.  The format of the access tokens you would enter depends on the governing authority associated with the
+                    job's repository connection. Enter a token and click the "Add" button to add it to the list.</p>
+                <p>The "Metadata" tab allows you to select what specific metadata values from LiveLink you want to pass to the index:</p>
+                <br/><br/>
+                <figure src="images/en_US/livelink-job-metadata.PNG" alt="LiveLink Job, Metadata tab" width="80%"/>
+                <br/><br/>
+                <p>If you want to pass all available LiveLink metadata to the index, then click the "All metadata" radio button.  Otherwise, you need to build LiveLink
+                    metadata paths and add them to the metadata list. Select the next metadata path segment, and click the appropriate "+" button to add it to the path.
+                    You may add folder information, or a metadata category, at any point.</p>
+                <p>Once you have drilled down to a metadata category, you can select the metadata attributes to include, or check the "All attributes in this category"
+                    checkbox.  When you are done, click the "Add" button to add the metadata attributes that you want to include in the index.</p>
+                <p>You can also use the "Metadata" tab to have the connection send path data along with each document, as a piece of document metadata.  To enable
+                    this feature, enter the name of the metadata attribute you want this information to be sent into the "Path attribute name" field.  Then, add the rules you
+                    want to the list of rules.  Each rule has a match expression, which is a regular expression where parentheses ("(" and ")") mark sections you are
+                    interested in.  These sections are called "groups" in regular expression parlance.  The replace string consists of constant text plus
+                    substitutions of the groups from the match, perhaps modified.  For example, "$(1)" refers to the first group within the match, while "$(1l)" refers to the
+                    first match group mapped to lower case.  Similarly, "$(1u)" refers to the same characters, but mapped to upper case.</p>
+                <p>For example, suppose you had a rule which had ".*/(.*)/(.*)/.*" as a match expression, and "$(1) $(2)" as the replace string.  If presented with the path
+                    <code>Project/Folder_1/Folder_2/Filename</code>, it would output the string <code>Folder_1 Folder_2</code>.</p>
+                <p>If more than one rule is present, the rules are all executed in sequence.  That is, the output of the first rule is modified by the second rule, etc.</p>
+            </section>
             
+            <section id="meridiorepository">
+                <title>Autonomy Meridio Repository Connection</title>
+                <p>An Autonomy Meridio connection allows you to index documents from a set of Meridio servers.  Meridio's architecture allows you to separate services on multiple machines -
+                    e.g. the document services can run on one machine, and the records services can run on another.  A Meridio connection type correspondingly is configured to describe each
+                    Meridio service independently.</p>
+                <p>Documents described by Meridio connections are typically secured by a Meridio authority.  If you have not yet created a Meridio authority, but would like your
+                    documents to be secured, please follow the direction in the section titled "Autonomy Meridio Authority Connection".</p>
+                <p>A Meridio connection has the following special tabs on the repository connection editing screen: the "Document Server" tab, the "Records Server" tab, the "Web Client" tab,
+                    and the "Credentials" tab.  The "Document Server" tab looks like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/meridio-connection-document-server.PNG" alt="Meridio Connection, Document Server tab" width="80%"/>
+                <br/><br/>
+                <p>Select the correct protocol, and enter the correct server name, port, and location to reference the Meridio document server services.  If a proxy is involved, enter the proxy host
+                    and port.  Authenticated proxies are not supported by this connection type at this time.</p>
+                <p>Note that, in the Meridio system, while it is possible that different services run on different servers, this is not typically the case.  The connection type, on the other hand, makes
+                    no assumptions, and permits the most general configuration.</p>
+                <p>The "Records Server" tab looks like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/meridio-connection-records-server.PNG" alt="Meridio Connection, Records Server tab" width="80%"/>
+                <br/><br/>
+                <p>Select the correct protocol, and enter the correct server name, port, and location to reference the Meridio records server services.  If a proxy is involved, enter the proxy host
+                    and port.  Authenticated proxies are not supported by this connection type at this time.</p>
+                <p>Note that, in the Meridio system, while it is possible that different services run on different servers, this is not typically the case.  The connection type, on the other hand, makes
+                    no assumptions, and permits the most general configuration.</p>
+                <p>The "Web Client" tab looks like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/meridio-connection-web-client.PNG" alt="Meridio Connection, Web Client tab" width="80%"/>
+                <br/><br/>
+                <p>The purpose of the Meridio Connection web client tab is to allow the connection to build a useful URL for each document it indexes.
+                    Select the correct protocol, and enter the correct server name, port, and location to reference the Meridio web client service.  No proxy information is required, as no documents
+                    will be fetched from this service.</p>
+                <p>The "Credentials" tab looks like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/meridio-connection-credentials.PNG" alt="Meridio Connection, Credentials tab" width="80%"/>
+                <br/><br/>
+                <p>Enter the Meridio server credentials needed to access the Meridio system.</p>
+                <p>When you are done, click the "Save" button to save the connection.  You will see a summary screen, looking something like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/meridio-connection-status.PNG" alt="Meridio Connection Status" width="80%"/>
+                <br/><br/>
+                <p>Note that in this example, the Meridio connection is not actually correctly configured, which is leading to an error status message instead of "Connection working".</p>
+                <p>Since Meridio uses Windows IIS for authentication, there are many ways in which the configuration of either IIS or the Windows domain under which Meridio runs can affect
+                    the correct functioning of the Meridio connection.  It is beyond the scope of this manual to describe the kinds of analysis and debugging techniques that might be required to diagnose connection
+                    and authentication problems.  If you have trouble, you will almost certainly need to involve your Meridio IT personnel.  Debugging tools may include (but are not limited to):</p>
+                <br/>
+                <ul>
+                    <li>Windows security event logs</li>
+                    <li>ManifoldCF logs (see below)</li>
+                    <li>Packet captures (using a tool such as WireShark)</li>
+                </ul>
+                <br/>
+                <p>If you need specific ManifoldCF logging information, contact your system integrator.</p>
+                <p></p>
+                <p>Jobs based on Meridio connections have the following special tabs: "Search Paths", "Content Types", "Categories", "Data Types", "Security", and "Metadata".</p>
+                <p>More here later</p>
+            </section>
 
             <section id="rssrepository">
                 <title>Generic RSS Repository Connection</title>
@@ -912,6 +2051,7 @@
                 <br/>
                 <p>Many users of the RSS connection type set up their jobs to run continuously, configuring their jobs to never refetch documents, but rather to expire them after some 30 days.
                        This model works reasonably well for news, which is what RSS is often used for.</p>
+                <p>This connection type has no support for any kind of document security, except for hand-entered access tokens provided on a per-job basis.</p>
                 <p>An RSS connection has the following special tabs: "Email", "Robots", "Bandwidth", and "Proxy".  The "Email" tab looks like this:</p>
                 <br/><br/>
                 <figure src="images/en_US/rss-configure-email.PNG" alt="RSS Connection, Email tab" width="80%"/>
@@ -1042,7 +2182,178 @@
                 <br/><br/>
                 <p>Select the mode you want the connection to operate in.</p>
             </section>
-            
+
+            <section id="sharepointrepository">
+                <title>Microsoft SharePoint Repository Connection</title>
+                <p>The Microsoft SharePoint connection type allows you to index documents from a Microsoft SharePoint site.  Bear in mind that a single SharePoint
+                    installation actually represents a set of sites.  Some sites in SharePoint are directly related to others (e.g. they are subsites), while some sites operate
+                    relatively independently of one another.</p>
+                <p>The SharePoint connection type is designed so that one SharePoint repository connection can access all SharePoint sites from a specific root site
+                    through its explicit subsites.  In some very large SharePoint installations it may be desirable to access <b>all</b> SharePoint sites using
+                    a single connection, but the ManifoldCF SharePoint connection type does not yet support that model.  If this functionality is important for you,
+                    contact your system integrator.</p>
+                <p>Documents described by SharePoint connections can be secured in one of two ways.  You can either choose to secure documents using Active
+                    Directory SIDs (in which case you must use the Active Directory authority type), or use native SharePoint groups and users for
+                    authorization.  The latter <strong>must</strong> be used in the following cases:</p>
+                <br/>
+                <ul>
+                  <li>You have native SharePoint groups or users created which do not correspond to Active Directory SIDs</li>
+                  <li>Your SharePoint 2010 is configured to use Claim Space authorization mode</li>
+                  <li>You have Active Directory groups that have more than roughly 1000 members</li>
+                </ul>
+                <br/>
+                <p>In general, native SharePoint authorization is the preferred model, except in legacy situations.  If you choose to use native SharePoint authorization, you
+                    will need to define one or more authorities of type "SharePoint/XXX" associated with the same authority group as your SharePoint connection.  Please read
+                    the sections of this manual that describe how to configure SharePoint/Native and SharePoint/AD authorities.  Bear in mind that SharePoint, when configured
+                    to run in Claim Space mode (available starting in SharePoint 2010), uses a federated authorization model, so you should expect to create more than one authority when
+                    working with a SharePoint Claim Space installation.  If your SharePoint is not using Claim Space, then a single authority of type "SharePoint/Native" is
+                    sufficient.</p>
+                <p>If you wish to use the legacy support for the Active Directory authority, then read the section titled "Active Directory Authority Connection" instead.</p>
+                <p>A SharePoint connection has two special tabs on the repository connection editing screen: the "Server" tab, and the "Authority type" tab.  The "Server"
+                    tab looks like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/sharepoint-configure-server.PNG" alt="SharePoint Connection, Server tab" width="80%"/>
+                <br/><br/>
+                <p>Select your SharePoint server version from the pulldown.  If you do not select the correct server version, your documents may either be indexed with
+                    insufficient security protection, or you may not be able to index any documents.  Check with your SharePoint system administrator if you are not sure
+                    what to select.</p>
+                <p>SharePoint uses a web URL model for addressing sites, subsites, libraries, and files.  The best way to figure out how to set up a SharePoint connection 
+                    type is therefore to start with your web browser, and visit the topmost root of the site you wish to crawl.  Then, record the URL you see in your browser.</p>
+                <p>Select the server protocol, and enter the server name and port, based on what you recorded from the URL for your SharePoint site.  For the "Site path"
+                    field, type in the portion of the root site URL that includes everything after the server and port, except for the final "aspx" file.  For example, if the SharePoint
+                    URL is "http://myserver:81/sites/somewhere/index.asp", the site path would be "/sites/somewhere".</p>
+                <p>The SharePoint credentials are, of course, what you used to log into your root site.  The SharePoint connection type always requires the user name to be
+                    in the form "domain\user".</p>
+                <p>If your SharePoint server is using SSL, you will need to supply enough certificates for the connection's trust store so that the SharePoint server's SSL
+                    server certificate can be validated.  This typically consists of either the server certificate, or the certificate from the authority that signed the server certificate.
+                    Browse to the local file containing the certificate, and click the "Add" button.</p>
+                <p>The SharePoint connection "Authority type" tab allows you to select the authorization model used by the connection.  It looks like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/sharepoint-configure-authoritytype.PNG" alt="SharePoint Connection, Authority type tab" width="80%"/>
+                <br/><br/>
+                <p>Select the authority model you wish to use.</p>
+                <p>After you click the "Save" button, you will see a connection summary screen, which might look something like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/sharepoint-status.PNG" alt="SharePoint Status" width="80%"/>
+                <br/><br/>
+                <p>Note that in this example, the SharePoint connection is not actually referencing a SharePoint instance, which is leading to an error status message instead of
+                    "Connection working".</p>
+                <p>Since SharePoint uses Windows IIS for authentication, there are many ways in which the configuration of either IIS or the Windows domain under which
+                    SharePoint runs can affect the correct functioning of the SharePoint connection.  It is beyond the scope of this manual to describe the kinds of analysis and
+                    debugging techniques that might be required to diagnose connection and authentication problems.  If you have trouble, you will almost certainly need to involve
+                    your SharePoint IT personnel.  Debugging tools may include (but are not limited to):</p>
+                <br/>
+                <ul>
+                    <li>Windows security event logs</li>
+                    <li>ManifoldCF logs (see below)</li>
+                    <li>Packet captures (using a tool such as Wireshark)</li>
+                </ul>
+                <br/>
+                <p>If you need specific ManifoldCF logging information, contact your system integrator.</p>
+                <p></p>
+                <p>When you configure a job to use a repository connection of the SharePoint type, several additional tabs are presented.  These are, in order, "Paths", "Security", and "Metadata".</p>
+                <p>The "Paths" tab allows you to build a list of rules describing the SharePoint content that you want to include in your job.  When the SharePoint connection type encounters a subsite,
+                    library, list, or file, it looks through this list of rules to determine whether to include the subsite, library, list, or file.  The first matching rule will determine what will be done.</p>
+                <p>Each rule consists of a path, a rule type, and an action.  The actions are "Include" and "Exclude".  The rule type tells the connection what kind of SharePoint entity it is allowed to exactly match.  For
+                    example, a "File" rule will only exactly match SharePoint paths that represent files - it cannot exactly match sites or libraries.  The path itself is just a sequence of characters, where the "*" character
+                    has the special meaning of being able to match any number of any kind of characters, and the "?" character matches exactly one character of any kind.</p>
+                <p>The rule matcher extends strict, exact matching by introducing a concept of implicit inclusion rules.  If your rule action is "Include", and you specify (say) a "File" rule, the matcher presumes
+                    implicit inclusion rules for the corresponding site and library.  So, if you create an "Include File" rule that matches (for example) "/MySite/MyLibrary/MyFile", there is an implied "Site Include" rule
+                    for "/MySite", and an implied "Library Include" rule for "/MySite/MyLibrary".  Similarly, if you create a "Library Include" rule, there is an implied "Site Include" rule that corresponds to it.
+                    Note that these shortcuts apply only to "Include" rules - there are no corresponding implied "Exclude" rules.</p>
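+                <p>As a purely illustrative sketch (the site, library, and file names below are hypothetical), suppose a job contained the following two rules, in this order:</p>
+                <br/>
+                <table>
+                    <tr><td><b>Path match</b></td><td><b>Rule type</b></td><td><b>Action</b></td></tr>
+                    <tr><td>/MySite/Drafts</td><td>Library</td><td>Exclude</td></tr>
+                    <tr><td>/MySite/*/*.docx</td><td>File</td><td>Include</td></tr>
+                </table>
+                <br/>
+                <p>Under the matching semantics described above, a file such as "/MySite/Specs/Design.docx" would be included: the second rule matches the file exactly, and its implied "Site Include" and
+                    "Library Include" rules admit "/MySite" and "/MySite/Specs".  A file such as "/MySite/Drafts/Notes.docx" would not be indexed, because the first matching rule for its library,
+                    "/MySite/Drafts", is an exclusion rule, so the connection never descends into that library.</p>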
+                <p>The "Paths" tab allows you to build these rules one at a time, and add them either to the bottom of the list, or insert them into the list of rules at any point.  Either way, you construct the rule
+                    you want to append or insert by first constructing the path, from left to right, using your choice of text and context-dependent pulldowns with existing server path information listed.  This is what the tab
+                    may look like for you.  Bear in mind that if you are using a connection that does not display the status, "Connection working", you may not see the selections you should in these pulldowns:</p>
+                <br/><br/>
+                <figure src="images/en_US/sharepoint-job-paths.PNG" alt="SharePoint Job, Paths tab" width="80%"/>
+                <br/><br/>
+                <p>To build a rule, first build the rule's matching path.  Make an appropriate selection or enter desired text, then click either the "Add Site", "Add Library", "Add List", or "Add Text" button, depending on your choice.
+                    Repeat this process until the path is what you want it to be.  At this point, if the SharePoint connection does not know what kind of entity your path describes, you will need to select the
+                    SharePoint entity type that you want the rule to match also.  Select whether this is an include or exclude rule.  Then, click the "Add New Rule" button, to add your newly-constructed rule
+                    at the end of the list.</p>
+                <p>The "Security" tab allows you to specify whether SharePoint's security model should be applied to this set of documents, or not.  You also have the option of applying some specified set of access
+                    tokens to the documents described by the job.  The tab looks like this:</p>
+                <br/><br/>
+                <figure src="images/en_US/sharepoint-job-security.PNG" alt="SharePoint Job, Security tab" width="80%"/>
+                <br/><br/>
+                <p>Select whether SharePoint security is on or off using the radio buttons provided.  If security is off, you may add access tokens in the text box and click the "Add" button.  The access tokens must
+                    be in the proper form expected by the authority that governs your SharePoint connection for this feature to be useful.</p>
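+                <p>For example, if the authority group governing your SharePoint connection uses the Active Directory authority type, the access tokens are Active Directory SIDs; a purely hypothetical token might
+                    be entered as <code>S-1-5-21-1234567890-1234567890-1234567890-1013</code>.  Your own token values will depend entirely on the governing authority.</p>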
+                <p>The "Metadata" tab allows you to specify what metadata will be included for each document.  The tab is similar to the "Paths" tab, which you may want to review above:</p>
+                <br/><br/>
+                <figure src="images/en_US/sharepoint-job-metadata.PNG" alt="SharePoint Job, Security tab" width="80%"/>
+                <br/><br/>
+                <p>The main difference is that instead of rules that include or exclude individual sites, libraries, lists, or documents, the rules describe inclusion and exclusion of document or list item metadata.  Since metadata is associated
+                    with files and list items, all of the metadata rules are applied only to file paths and list item paths, and there are no such things as "site" or "library" metadata path rules.</p>
+                <p>If an exclusion rule matches a file's path, it means that <b>no</b> metadata from that file will be included at all.  There is no way to individually exclude a single field using an exclusion rule.</p>
+                <p>To build a rule, first build the rule's matching path.  Make an appropriate selection or enter desired text, then click either the "Add Site", "Add Library", "Add List", or "Add Text" button, depending on your choice.
+                    Repeat this process until the path is what you want it to be.  Select whether this is an include or exclude rule.  Either check the box for "Include all metadata", or select the metadata you want to include
+                    from the pulldown.  (The choices of metadata fields you are presented with are determined by which SharePoint library is selected.  If your rule path does not uniquely specify a library, you cannot select individual
+                    fields to include.  You can only select "All metadata".)  Then, click the "Add New Rule" button, to put your newly-constructed rule
+                    at the end of the list.</p>
+                <p>You can also use the "Metadata" tab to have the connection send path data along with each document, as a piece of document metadata.  To enable this feature, enter the name of the metadata
+                    attribute you want this information to be sent into the "Attribute name" field.  Then, add the rules you want to the list of rules.  Each rule has a match expression, which is a regular expression where
+                    parentheses ("(" and ")") mark sections you are interested in.  These sections are called "groups" in regular expression parlance.  The replace string consists of constant text plus
+                    substitutions of the groups from the match, perhaps modified.  For example, "$(1)" refers to the first group within the match, while "$(1l)" refers to the first match group
+                    mapped to lower case.  Similarly, "$(1u)" refers to the same characters, but mapped to upper case.</p>
+                <p>For example, suppose you had a rule which had ".*/(.*)/(.*)/.*" as a match expression, and "$(1) $(2)" as the replace string.  If presented with the path
+                    <code>Project/Folder_1/Folder_2/Filename</code>, it would output the string <code>Folder_1 Folder_2</code>.</p>
+                <p>If more than one rule is present, the rules are all executed in sequence.  That is, the output of the first rule is modified by the second rule, etc.</p>
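+                <p>As a further hypothetical sketch of rule chaining, suppose the first rule above were followed by a second rule with ".* (.*)" as its match expression and "$(1u)" as its replace string.  The first rule
+                    maps <code>Project/Folder_1/Folder_2/Filename</code> to <code>Folder_1 Folder_2</code>, and the second rule then maps that intermediate result to <code>FOLDER_2</code>.</p>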
+                <br/>
+                <p><b>Example: How to index a SharePoint 2010 Document Library</b></p>
+                <p></p>
+                <p>Let's say we want to index a Document Library named Documents.  The following URL displays the contents of the library: http://iknow/Documents/Forms/AllItems.aspx</p>
+                <figure src="images/en_US/documents-library-contents.png" alt="Documents Library Contents"/>
+                <p>Note that there are eight folders and one file in our library.  Some folders have sub-folders, and leaf folders contain files.  The following <b>single</b> Path Rule is sufficient to index <b>all</b> files in the Documents library.</p>
+                <figure src="images/en_US/documents-library-path-rule.png" alt="Documents Library Path Rule"/>
+                <p>If we click Library Tools > Library > Modify View, we will see the complete list of all available metadata.</p>
+                <figure src="images/en_US/documents-library-all-metadata.png" alt="Documents Library All Metadata"/>
+                <p>ManifoldCF's UI also displays all available Document Libraries and their associated metadata.  Using this pulldown, you can select which fields you want to index.</p>
+                <figure src="images/en_US/documents-library-metadata.png" alt="Documents Library Selected Metadata"/>
+                <p>To create the metadata rule below, click the Metadata tab in the Job settings.  Select Documents from the --Select library-- pulldown and click the "Add Library" button.  As soon as you have done this, all available metadata will be listed. 
+                     Enter * in the text box to the right of the Add Text button, and click the Add Text button.  When you have done this, the Path Match becomes /Documents/*.  After this you can multi-select from the list of metadata; 
+                     this action will populate the Fields with CheckoutUser, Created, etc.  Click the Add New Rule button.  This action will add the new rule to your Metadata rules.</p>
+                <figure src="images/en_US/documents-library-metadata-rule.png" alt="Documents Library Metadata Rule"/>
+                <p>Finally click the "Save" button at the bottom of the page. You will see a page looking something like this:</p>
+                <figure src="images/en_US/documents-library-summary.png" alt="Documents Library Summary Page"/>
+                <br/>
+                <p><b>Some Final Notes</b></p>
+                <ul>
+                    <li>If you don't add * to the Path match rule, your selected fields won't be used.  In other words, the Path match rule <b>/Documents</b> won't match the document <i>/Documents/ik_docs/diger/sorular.docx</i>.</li>
+                    <li>We can include all metadata using the checkbox, without selecting from the pulldown list.</li>
+                    <li>If we wanted to index only docx files, our Path match rule would be <b>/Documents/*.docx</b></li>
+                </ul>
+                <br/>
+                
+                <p><b>Example: How to index SharePoint 2010 Lists</b></p>
+                <p></p>
+                <p>Lists are a key part of the architecture of Windows SharePoint Services. A document library is another form of a list, and while it has many similar properties to a standard list, it also includes additional 
+                     functions to enable document uploads, retrieval, and other functions to support document management and collaboration. <a href="http://msdn.microsoft.com/en-us/library/dd490727%28v=office.12%29.aspx">[1]</a> </p>
+                <p>An item added to a document library (and other libraries) must be a file.  You can't have a library without a file.  A list, on the other hand, doesn't have a file; it is just a piece of data, much like a SQL table.</p>
+                <p>Let's say we want to index a List named IKGeneralFAQ.  The following URL displays the contents of the list: http://iknow/Lists/IKGeneralFAQ/AllItems.aspx</p>
+                <figure src="images/en_US/faq-list-contents.png" alt="IKGeneralFAQ List Contents"/>
+                <p>Note that Lists do not have files; a List looks like an Excel spreadsheet.  In the ManifoldCF Job settings, the Path tab displays all available Lists, and lets you select the name of the list you want to index.</p>
+                <figure src="images/en_US/add-list.png" alt="Add List"/>
+                <p>After we select IKGeneralFAQ and hit the Add List button followed by the Save button, we have the following Path Rule:</p>
+                <figure src="images/en_US/faq-list-path-rule.png" alt="IKGeneralFAQ List Path Rule"/>
+                <p>The above <b>single</b> Path Rule is sufficient to index the content of the IKGeneralFAQ List.  Note that, unlike document libraries, we don't need * here.</p> 
+                <p>If we click List Tools > List > Modify View, we will see the complete list of all available metadata.</p>
+                <figure src="images/en_US/faq-list-all-metadata.png" alt="IKGeneralFAQ List All Metadata"/>
+                <p>ManifoldCF's Metadata UI also displays all available Lists and their associated metadata.  Using this pulldown, you can select which fields you want to index.</p>
+                <figure src="images/en_US/faq-list-metadata.png" alt="IKGeneralFAQ List Selected Metadata"/>
+                <p>To create the metadata rule below, click the Metadata tab in the Job settings.  Select IKGeneralFAQ from the --Select list-- pulldown and click the "Add List" button.  As soon as you have done this, all available metadata will be listed. 
+                     After this you can multi-select from the list of metadata; this action will populate the Fields with ID, IKFAQAnswer, IKFAQPage, IKFAQPageID, etc.  Click the Add New Rule button.  This action will add the new rule to your Metadata rules.</p>
+                <figure src="images/en_US/faq-list-metadata-rule.png" alt="IKGeneralFAQ List Metadata Rule"/>
+                <p>Finally click the "Save" button at the bottom of the page. You will see a page looking something like this:</p>
+                <figure src="images/en_US/faq-list-summary.png" alt="IKGeneralFAQ List Summary Page"/>
+                <br/>
+                <p><b>Some Final Notes</b></p>
+                <ul>
+                    <li>Note that, when specifying Metadata rules, the UI automatically adds * to the Path match rule for Lists.  This is not the case for Document Libraries.</li>
+                    <li>We can include all metadata using the checkbox, without selecting from the pulldown list.</li>
+                </ul>
+            </section>
+                        
             <section id="webrepository">
                 <title>Generic Web Repository Connection</title>
                 <p>The Web connection type is effectively a reasonably full-featured web crawler.  It is capable of handling most kinds of authentication (basic, all forms of NTLM,
@@ -1069,6 +2380,7 @@
                        reason, we strongly encourage you to consider using the RSS connection type for all applications where it might reasonably apply.</p>
                 <p>Many users of the Web connection type set up their jobs to run continuously, configuring their jobs to occasionally refetch documents, or to not refetch documents
                        ever, and expire them after some period of time.</p>
+                <p>This connection type has no support for any kind of document security, except for hand-entered access tokens provided on a per-job basis.</p>
                 <p>A Web connection has the following special tabs: "Email", "Robots", "Bandwidth", "Access Credentials", and "Certificates".  The "Email" tab
                        looks like this:</p>
                 <br/><br/>
@@ -1203,16 +2515,21 @@
                     <li>A redirection to a specific URL, as described by a regular expression</li>
                     <li>A page that has a form of a particular name on it, as described by a regular expression</li>
                     <li>A page that has a link on it to a specific target, as described by a regular expression</li>
+                    <li>A page that has specific content on it, as described by a regular expression</li>
                 </ul>
                 <br/>
-                <p>Note that in all three case above that there is an implicit flow through the login sequence that you describe by specifying the pages in the login sequence. For
+                <p>Note that in three of the cases above, there is an implicit flow through the login sequence that you describe by specifying the pages in the login sequence. For
                       example, if upon session timeout you expect to see a redirection to a link, or family of links (remember, it's a regexp, so you can describe that easily), then as part
                       of identifying the redirection as belonging to the login sequence, the web connector also now has a new link to fetch - the redirection link - which is what it does next. The same applies
                       to forms.  If the form name that was specified is found, then the web connector submits that form using values for the form elements that you specify, and using
                       the submission type described in the actual form tag (GET, POST, or multi-part). Any other elements of the form are left in whatever state that the HTML specified;
                       no Javascript is ever evaluated. Thus, if you think a form element's value is being set by Javascript, you have to figure out what it is being set to and enter this
                       value by hand as part of the specification for the "form" type of login page. Typically this amounts to a user name and password.</p>
-                      
+                <p>In the fourth login sequence case, where specific page content is matched to determine that a page belongs in the login sequence, there is no
+                      implicit flow to a subsequent page.  In this case you must supply an <em>override URL</em>,
+                      which describes which page to fetch next in order to continue the login sequence.  In fact, you are allowed to provide an override URL for all four cases above,
+                      but this is only recommended when the web connector would not automatically find the right subsequent page URL on its own.</p>
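+                <p>For example (the site and URLs here are hypothetical), if a protected site signals an expired session by displaying a page containing the text "Please log in again", you might identify that
+                      page with a content regular expression such as <code>Please log in again</code>, and supply an override URL such as <code>http://www.mysite.com/login.html</code> as the next page to fetch
+                      in the login sequence.</p>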
+      
                 <p>To add a session authentication rule, fill in a regular expression describing the site pages that are being protected, and click the "Add" button:</p>
                 <br/><br/>
                 <figure src="images/en_US/web-configure-access-credentials-session.PNG" alt="Web Connection, Access Credentials tab" width="80%"/>
@@ -1332,7 +2649,7 @@
                        Share connection type to convert file names to IRI's.  Instead, the connection always uses a standard canonical form, and expects the search results display system component to know how to properly form
                        the right IRI for the browser or client being used.</p>
                 <p>If you are interested in enforcing security for documents crawled with a Windows Share repository connection, you will need to first configure an authority connection
-                       of the Active Directory type to control access to these documents.</p>
+                       of the Active Directory type to control access to these documents.  The Share/DFS connector type can also be used with the LDAP authority connection type.</p>
                 <p>A Windows Share connection has a single special tab on the repository connection editing screen: the "Server" tab:</p>
                 <br/><br/>
                 <figure src="images/en_US/jcifs-configure-server.PNG" alt="Windows Share Connection, Server tab" width="80%"/>
@@ -1429,7 +2746,8 @@
             <section id="wikirepository">
                 <title>Wiki Repository Connection</title>
                 <p>The Wiki repository connection type allows you to index content from the main space of a Wiki or MediaWiki site.  The connection type uses the Wiki API
-                  in order to fetch content.  Only publicly visible documents will be indexed, and there is thus no need of an authority for Wiki content.</p>
+                  in order to fetch content.  Only publicly visible documents will be indexed, and there is thus typically no need of an authority for Wiki content.</p>
+                <p>This connection type has no support for any kind of document security, except for hand-entered access tokens provided on a per-job basis.</p>
                 <p>A Wiki connection has only one special tab on the repository connection editing screen: the "Server" tab.  The "Server" tab looks like this:</p>
                 <br/><br/>
                 <figure src="images/en_US/wiki-configure-server.PNG" alt="Wiki Connection, Server tab" width="80%"/>
@@ -1439,591 +2757,7 @@
                 <p>When you configure a job to use a repository connection of the Wiki type, no additional tabs are currently presented.</p>
             </section>
             
-            <section id="jdbcrepository">
-                <title>Generic Database Repository Connection</title>
-                <p>The generic database connection type allows you to index content from a database table, served by one of the following databases:</p>
-                <br/>
-                <ul>
-                    <li>Postgresql (via a Postgresql JDBC driver)</li>
-                    <li>SQL Server (via the JTDS JDBC driver)</li>
-                    <li>Oracle (via the Oracle JDBC driver)</li>
-                    <li>Sybase (via the JTDS JDBC driver)</li>
-                    <li>MySQL (via the MySQL JDBC driver)</li>
-                </ul>
-                <br/>
-                <p>This connection type <b>cannot</b> be configured to work with other databases than the ones listed above without software changes.  Depending on your particular installation,
-                       some of the above options may not be available.</p>
-                <p>The generic database connection type currently has no per-document notion of security.  It is possible to set document security for all documents specified by a
-                       given job.  Since this form of security requires you to know what the actual access tokens are, you must have detailed knowledge of the authority connection you
-                       intend to use, and what sorts of access tokens it produces.</p>
-                <p>A generic database connection has three special tabs on the repository connection editing screen: the "Database Type" tab, the "Server" tab, and the
-                       "Credentials" tab.  The "Database Type" tab looks like this:</p>
-                <br/><br/>
-                <figure src="images/en_US/jdbc-configure-database-type.PNG" alt="Generic Database Connection, Database Type tab" width="80%"/>
-                <br/><br/>
-                <p>Select the kind of database you want to connect to, from the pulldown.</p>
-                <p>Also, select the JDBC access method you want from the access method pulldown.  The access method is provided because the JDBC specification has been
-                    recently clarified, and not all JDBC drivers work the same way as far as resultset column name discovery is concerned.  The "by name" option currently works
-                    with all JDBC drivers in the list except for the MySQL driver.  The "by label" works for the current MySQL driver, and may work for some of the others as well.  If
-                    the queries you supply for your generic database jobs do not work correctly, and you see an error message about not being able to find required columns in the
-                    result, you can change your selection on this pulldown and it may correct the problem.</p>
-                <p>The "Server" tab looks like this:</p>
-                <br/><br/>
-                <figure src="images/en_US/jdbc-configure-server.PNG" alt="Generic Database Connection, Server tab" width="80%"/>
-                <br/><br/>
-                <p>The server name and port must be provided in the "Database host and port" field.  For example, for Oracle, the standard Oracle installation uses port 1521, so you would
-                       enter something like, "my-oracle-server:1521" for this field.  Postgresql uses port 5432 by default, so "my-postgresql-server:5432" would be required.  SQL Server's
-                       standard port is 1433, so use "my-sql-server:1433".</p>
-                <p>The service name or instance name field describes which instance and database to connect to.  For Oracle or Postgresql, provide just the database name.  For SQL Server, use
-                       "my-instance-name/my-database-name".  For SQL Server using the default instance, use just the database name.</p>
-                <p>The "Credentials" tab is straightforward:</p>
-                <br/><br/>
-                <figure src="images/en_US/jdbc-configure-credentials.PNG" alt="Generic Database Connection, Credentials tab" width="80%"/>
-                <br/><br/>
-                <p>Enter the database user credentials.</p>
-                <p>After you click the "Save" button, you will see a connection summary screen, which might look something like this:</p>
-                <br/><br/>
-                <figure src="images/en_US/jdbc-status.PNG" alt="Generic Database Status" width="80%"/>
-                <br/><br/>
-                <p>Note that in this example, the generic database connection is not properly authenticated, which is leading to an error status message instead of "Connection working".</p>
-                <p></p>
-                <p>When you configure a job to use a repository connection of the generic database type, several additional tabs are presented.  These are, in order, "Queries", and "Security".</p>
-                <p>The "Queries" tab looks something like this:</p>
-                <br/><br/>
-                <figure src="images/en_US/jdbc-job-queries.PNG" alt="Generic Database Job, Queries tab" width="80%"/>
-                <br/><br/>
-                <p>You must supply at least two queries.  (A third query is optional.)  The purpose of these queries is to obtain the data needed for the database to be properly crawled.
-                       But in order for you to write these queries, you must make some decisions first.  Basically, you need to figure out how best to map the constructs within your database
-                       to the requirements of the Framework.</p>
-                <br/>
-                <ul>
-                    <li>Obtain a list of document identifiers corresponding to changes and additions that occurred within a specified time window (see below)</li>
-                    <li>Given a set of document identifiers, find the corresponding version strings (see below)</li>
-                    <li>Given a set of document identifiers and version strings, find information about the document, consisting of the document's data, access URL, and metadata</li>
-                </ul>
-                <br/>
-                <p>The Framework uses a unique document identifier to describe every document within the confines of a defined repository connection.  This document identifier is used
-                       as a primary key to locate information about the document.  When you set up a generic-database-type job, the database you are connecting to must have a similar
-                       concept.  If you pick the wrong thing for a document identifier, at the very least you could find that the crawler runs very slowly.</p>
-                <p>Obtaining the list of document identifiers that represents the changes that occurred over the given time frame must return <b>at least</b> all such changes.  It is
-                        acceptable (although not ideal) for the returned list to be bigger than that.</p>
-                <p>If you want your database connection to function in an incremental manner, you must also come up with the format of a "version string".  This string is used by the 
-                       Framework to determine if a document has changed.  It must change whenever anything that might affect the document's indexing changes.  (It is not a problem if
-                       it changes for other reasons, as long as it fulfills that principle criteria.)</p>
-                <p>The queries you provide get substituted before they are used by the connection.  The example queries, which are present when the queries tab is first opened for a
-                       new job, show many of these substitutions in roughly the manner in which they are intended to be used.  For example, "$(IDCOLUMN)" will substitute a column
-                       name expected by the connection to contain the document identifier into the query.  The list of substitution strings are as follows:</p>
-                <br/>
-                <table>
-                    <tr><td><b>String name</b></td><td><b>Meaning/use</b></td></tr>
-                    <tr><td>IDCOLUMN</td><td>The name of an expected resultset column containing a document identifier</td></tr>
-                    <tr><td>VERSIONCOLUMN</td><td>The name of an expected resultset column containing a version string</td></tr>
-                    <tr><td>URLCOLUMN</td><td>The name of an expected resultset column containing a URL</td></tr>
-                    <tr><td>DATACOLUMN</td><td>The name of an expected resultset column containing document data</td></tr>
-                    <tr><td>STARTTIME</td><td>A query string value containing a start time in milliseconds since epoch</td></tr>
-                    <tr><td>ENDTIME</td><td>A query string value containing an end time in milliseconds since epoch</td></tr>
-                    <tr><td>IDLIST</td><td>A query string value containing a parenthesized list of document identifier values</td></tr>
-                </table>
-                <br/>
-                <p>Use caution when constructing queries that include time-based
-                        components. "$(STARTTIME)" and "$(ENDTIME)" provide
-                        times in milliseconds since epoch. If the modified date field is not
-                        in this unit, the seeding query may not select the desired document
-                        identifiers. You should convert "$(STARTTIME)" and
-                        "$(ENDTIME)" to the appropriate timestamp unit for your system within your query.</p>
-                <p>The following table gives several sample query fragments that can be
-                        used to convert the helper strings "$(STARTTIME)" and
-                        "$(ENDTIME)" into other date and time types. The first column names
-                        the SQL database type that the following query phrase corresponds to,
-                        the second column names the output data type for the query phrase, and
-                        the third gives the query phrase itself using "$(STARTTIME)"
-                        as an example time in milliseconds since epoch. These query phrases
-                        are intended as guidlines for creating an appropriate query phrase in
-                        each language. Each query phrase is designed to work with the most
-                        current version of the database software available at the time of
-                        publishing for this document. If your modified date field is not of
-                        the type given in the second column, the query phrase may not provide
-                        an appropriate output for date comparisons.</p>
-                <br/>
-                <table>
-                    <tr><td><b>Database Type</b></td><td><b>Date Type</b></td><td><b>Sample Query Phrase</b></td></tr>
-                    <tr><td>Oracle</td><td>date</td><td><code>TO_DATE ( '1970/01/01:00:00:00', 'yyyy/mm/dd:hh:mi:ss') + ROUND ($(STARTTIME)/86400000)</code></td></tr>
-                    <tr><td>Oracle</td><td>timestamp</td><td><code>TO_TIMESTAMP('1970-01-01 00:00:00') + interval '$(STARTTIME)/1000' second</code></td></tr>
-                    <tr><td>Postgres SQL</td><td>timestamp</td><td><code>date '1970-01-01' + interval '$(STARTTIME) milliseconds'</code></td></tr>
-                    <tr><td>MS SQL Server ($>$6.5)</td><td>datetime</td><td><code>DATEADD(ms, $(STARTTIME), '19700101')</code></td></tr>
-                    <tr><td>Sybase (10+)</td><td>datetime</td><td><code>DATEADD(ms, $(STARTTIME), '19700101')</code></td></tr>
-                </table>
-                <br/>
-                <p>When you create a job based on a general database connection, the job's queries are initially populated with examples.  These examples should give you a good idea of
-                        what columns your queries should return - in most cases, the only columns you need to return are the ones that appear in the example queries.  However,
-                        for the file data query, you may also return columns that are not specified in the example.  <strong>When you do this, the extra return column values will be passed
-                        to the index as metadata for the document.  The metadata name used will be the corresponding resultlist column name of the resultset.</strong></p>
-                <p>For example, the following file data query (written for PostgreSQL) will return documents with the metadata fields "metadata_a" and "metadata_b", in addition to the required primary
-                        document body and URL:</p>
-                <br/>
-                <p><code>SELECT id AS $(IDCOLUMN), characterdata AS $(DATACOLUMN), 'http://mydynamicserver.com?id=' || id AS $(URLCOLUMN), 
-                  publisher AS metadata_a, distributor AS metadata_b FROM mytable WHERE id IN $(IDLIST)</code></p>
-                <br/>
-                <p>There is currently no support in the JDBC connection type for natively handling multi-valued metadata.</p>
-                <p>The "Security" tab simply allows you to add specific access tokens to all documents indexed with a general database job.  In order for you to know what tokens
-                       to add, you must decide with what authority connection these documents will be secured, and understand the form of the access tokens used by that authority connection
-                       type.  This is what the "Security" tab looks like:</p>
-                <br/>
-                <br/>
-                <figure src="images/en_US/jdbc-job-security.PNG" alt="Generic Database Job, Security tab" width="80%"/>
-                <br/><br/>
-                <p>Enter a desired access token, and click the "Add" button.  You may enter multiple access tokens.</p>
-            </section>
-
-            <section id="filenetrepository">
-                <title>IBM FileNet P8 Repository Connection</title>
-                <p>More here later</p>
-            </section>
             
-            <section id="documentumrepository">
-                <title>EMC Documentum Repository Connection</title>
-                <p>The EMC Documentum connection type allows you index content from a Documentum Content Server instance.  A single connection allows you
-                    to reach all documents contained on a single Content Server instance.  Multiple connections are therefore required to reach documents from multiple Content Server instances.</p>
-                <p>For each Content Server instance, the Documentum connection type allows you to index any Documentum content that is of type dm_document, or is derived from dm_document.
-                    Compound documents are handled as well, but only by mean of the component documents that make them up.  No other Documentum construct can be indexed at this time.</p>
-                <p>Documents described by Documentum connections are typically secured by a Documentum authority.  If you have not yet created a Documentum authority, but would like your
-                    documents to be secured, please follow the direction in the section titled "EMC Documentum Authority Connection".</p>
-                <p>A Documentum connection has the following special tabs: "Docbase", and "Webtop".  The "Docbase" tab allows you to select a Content Server to connect to, and also to provide
-                    appropriate credentials.  The "Webtop" tab describes the location of a Webtop server that will be used to display the documents from that Content Server, after they have been indexed.</p>
-                <p>The "Docbase" tab looks like this:</p>
-                <br/><br/>
-                <figure src="images/en_US/documentum-docbase.PNG" alt="Documentum Connection, Docbase tab" width="80%"/>
-                <br/><br/>
-                <p>Enter the Content Server Docbase instance name, and provide your credentials.  You may leave the "Domain" field blank, if the Content Server instance does not have AD integration enabled.</p>
-                <p>The "Webtop tab looks like this:</p>
-                <br/><br/>
-                <figure src="images/en_US/documentum-webtop.PNG" alt="Documentum Connection, Docbase tab" width="80%"/>
-                <br/><br/>
-                <p>Enter the components of the base URL of the Webtop instance you want to use for serving the documents.  Remember that this information will only be used to construct
-                    a URL to the document to allow user inspection; it will not be used for any crawling activities.</p>
-                <p>When you are done, click the "Save" button.  When you do, a connection summary and status screen will be presented:</p>
-                <br/><br/>
-                <figure src="images/en_US/documentum-status.PNG" alt="Documentum Connection Status" width="80%"/>
-                <br/><br/>
-                <p>Pay careful attention to the status, and be prepared to correct any
-                    problems that are displayed.</p>
-                <p></p>
-                <p>A job created to use a Documentum connection has the following additional tabs associated with it: "Paths", "Document Types", "Content Types", "Security", and "Path Metadata".</p>
-                <p>The "Paths" tab allows you to construct the paths within Documentum that you want to scan for content.  If no paths are selected, all content will be considered eligible.</p>
-                <p>The "Document Types" tab allows you to select what document types you want to index.  Only document types that are derived from dm_document, which are flagged by the system administrator
-                    as being "indexable", will be presented for your selection.  On this tab also, for each document type you index, you may choose included specific metadata for documents of that type, or you can
-                    check the "All metadata" checkbox to include all metadata associated with documents of that type.</p>
-                <p>The "Content Types" tab allows you to select which Documentum mime-types are to be included in the document set.  Check the types you want to include, and uncheck the types you want to
-                    exclude.</p>
-                <p>The "Security" tab allows you to disable or enable Documentum security for the documents described by this job.  You can turn off native Documentum security by clicking the "Disable" radio button.
-                    If you do this, you may also enter your own access tokens, which will be applied to all documents described by the job.  The form of the access tokens you enter will depend on the governing
-                    authority connection type.  Click the "Add" button to add each access token.</p>
-                <p>The "Path Metadata" tab allows you to send each document's path information as metadata to the index.  To enable this feature, enter the name of the metadata
-                    attribute you want this information to be sent into the "Path attribute name" field.  Then, add the rules you want to the list of rules.  Each rule has a match expression, which is a regular expression where
-                    parentheses ("(" and ")") mark sections you are interested in.  These sections are called "groups" in regular expression parlance.  The replace string consists of constant text plus
-                    substitutions of the groups from the match, perhaps modified.  For example, "$(1)" refers to the first group within the match, while "$(1l)" refers to the first match group
-                    mapped to lower case.  Similarly, "$(1u)" refers to the same characters, but mapped to upper case.</p>
-                <p>For example, suppose you had a rule which had ".*/(.*)/(.*)/.*" as a match expression, and "$(1) $(2)" as the replace string.  If presented with the path
-                    <code>Project/Folder_1/Folder_2/Filename</code>, it would output the string <code>Folder_1 Folder_2</code>.</p>
-                <p>If more than one rule is present, the rules are all executed in sequence.  That is, the output of the first rule is modified by the second rule, etc.</p>
-            </section>
-            
-            <section id="livelinkrepository">
-                <title>OpenText LiveLink Repository Connection</title>
-                <p>The LiveLink connection type allows you to index content from LiveLink repositories.  LiveLink has a rich variety of different document types and metadata,
-                    which include basic documents, as well as compound documents, folders, workspaces, and projects.  A LiveLink connection is able to discover documents
-                    contained within all of these constructs.</p>
-                <p>Documents described by LiveLink connections are typically secured by a LiveLink authority.  If you have not yet created a LiveLink authority, but would
-                    like your documents to be secured, please follow the direction in the section titled "OpenText LiveLink Authority Connection".</p>
-                <p>A LiveLink connection has the following special tabs: "Server", "Document Access", and "Document View".  The "Server" tab allows you to select a
-                    LiveLink server to connect to, and also to provide appropriate credentials.  The "Document Access" tab describes the location of the LiveLink web interface,
-                    relative to the server, that will be used to fetch document content from LiveLink. The "Document View" tab affects how URLs to the fetched documents
-                    are constructed, for viewing results of searches.</p>
-                <p>The "Server" tab looks like this:</p>
-                <br/><br/>
-                <figure src="images/en_US/livelink-connection-server.PNG" alt="LiveLink Connection, Server tab" width="80%"/>
-                <br/><br/>
-                <p>Select the manner you want the connection to use to communicate with LiveLink.  Your options are:</p>
-                <ul>
-                  <li>Internal (native LiveLink protocol)</li>
-                  <li>HTTP (communication with LiveLink through the IIS web server)</li>
-                  <li>HTTPS (communication with LiveLink through IIS using SSL)</li>
-                </ul>
-                <p>Also, you need to enter the name of the desired LiveLink server, the LiveLink port, and the LiveLink server credentials.  If you have selected communication
-                    using HTTP or HTTPS, you must provide a relative CGI path to your LiveLink.  You may also need to provide web server credentials.  Basic authentication
-                    and older forms of NTLM are supported.  In order to use NTLM, specify a non-blank server domain name in the "Server HTTP domain" field, plus a non-
-                    qualified user name and password.  If basic authentication is desired, leave the "Server HTTP domain" field blank, and provide basic auth credentials in the
-                    "Server HTTP NTLM user name" and "Server HTTP NTLM password" fields.  For no web server authentication, leave these fields all blank.</p>
-                <p>For communication using HTTPS, you will also need to upload your authority certificate(s) on the "Server" tab, to tell the connection which certificates to
-                    trust.  Upload your certificate using the browse button, and then click the "Add" button to add it to the trust store.</p>
-                <p>The "Document Access" tab looks like this:</p>
-                <br/><br/>
-                <figure src="images/en_US/livelink-connection-document-access.PNG" alt="LiveLink Connection, Document Access tab" width="80%"/>
-                <br/><br/>
-                <p>The server name is presumed to be the same as is on the "Server" tab.  Select the desired protocol for document retrieval.  If your LiveLink server
-                    is using a non-standard HTTP port for the specified protocol for document retrieval, enter the port number.  If your LiveLink server is using NTLM
-                    authentication to access documents, enter an Active Directory user name, password, and domain.  If your LiveLink is using HTTPS, browse locally
-                    for the appropriate certificate authority certificate, and click "Add" to upload that certificate to the connection's trust store.  (You may also use the server's
-                    certificate, but that is less resilient because the server's certificate may be changed periodically.)</p>
-                <p>The "Document View" tab looks like this:</p>
-                <br/><br/>
-                <figure src="images/en_US/livelink-connection-document-view.PNG" alt="LiveLink Connection, Document Viewtab" width="80%"/>
-                <br/><br/>
-                <p>If you want each document's view URL to be the same as its access URL, you can leave this tab unchanged.  If you want to direct users to a different
-                    CGI path when they view search results, you can specify that here.</p>
-                <p>When you are done, click the "Save" button.  You will see a summary screen that looks something like this:</p>
-                <br/><br/>
-                <figure src="images/en_US/livelink-connection-status.PNG" alt="LiveLink Connection Status" width="80%"/>
-                <br/><br/>
-                <p>Make note of and correct any reported connection errors.  In this example, the connection has been correctly set up, so the connection status is
-                    "Connection working".</p>
-                <p></p>
-                <p>A job created to use a LiveLink connection has the following additional tabs associated with it: "Paths", "Filters", "Security", and "Metadata".</p>
-                <p>The "Paths" tab allows you to manage a list of LiveLink paths that act as starting points for indexing content:</p>
-                <br/><br/>
-                <figure src="images/en_US/livelink-job-paths.PNG" alt="LiveLink Job, Paths tab" width="80%"/>
-                <br/><br/>
-                <p>Build each path by selecting from the available dropdown, and clicking the "+" button.  When your path is complete, click the "Add" button to add the path
-                    to the list of starting points.</p>
-                <p>The "Filters" tab controls the criteria the LiveLink job will use to include or exclude content.  The filters are basically a list of rules.  Each rule has a
-                    document match field, and a matching action ("Include" or "Exclude").  When a LiveLink connection encounters a document, it evaluates the rules from
-                    top to bottom.  If the rule matches, then it will be included or excluded from the job's document set depending on what you have selected for the
-                    matching action. A rule's match field specifies a character match, where "*" will match any number of characters, and "?" will match any single character.</p>
-                <br/><br/>
-                <figure src="images/en_US/livelink-job-filters.PNG" alt="LiveLink Job, Filters tab" width="80%"/>
-                <br/><br/>
-                <p>Enter the match field value, select the match action, and click the "Add" button to add to the list of filters.</p>
-                <p>The "Security" tab allows you to disable (or enable) LiveLink security for the documents associated with this job:</p>
-                <br/><br/>
-                <figure src="images/en_US/livelink-job-security.PNG" alt="LiveLink Job, Security tab" width="80%"/>
-                <br/><br/>
-                <p>If you disable security, you can add your own access tokens to all
-                    documents in the job's document set as they are indexed.  The format of the access tokens you would enter depends on the governing authority associated with the
-                    job's repository connection.  Enter a token and click the "Add" button to add it to the list.</p>
-                <p>The "Metadata" tab allows you to select what specific metadata values from LiveLink you want to pass to the index:</p>
-                <br/><br/>
-                <figure src="images/en_US/livelink-job-metadata.PNG" alt="LiveLink Job, Metadata tab" width="80%"/>
-                <br/><br/>
-                <p>If you want to pass all available LiveLink metadata to the index, then click the "All metadata" radio button.  Otherwise, you need to build LiveLink
-                    metadata paths and add them to the metadata list. Select the next metadata path segment, and click the appropriate "+" button to add it to the path.
-                    You may add folder information, or a metadata category, at any point.</p>
-                <p>Once you have drilled down to a metadata category, you can select the metadata attributes to include, or check the "All attributes in this category"
-                    checkbox.  When you are done, click the "Add" button to add the metadata attributes that you want to include in the index.</p>
-                <p>You can also use the "Metadata" tab to have the connection send path data along with each document, as a piece of document metadata.  To enable
-                    this feature, enter the name of the metadata attribute you want this information to be sent into the "Path attribute name" field.  Then, add the rules you
-                    want to the list of rules.  Each rule has a match expression, which is a regular expression where parentheses ("(" and ")") mark sections you are
-                    interested in.  These sections are called "groups" in regular expression parlance.  The replace string consists of constant text plus
-                    substitutions of the groups from the match, perhaps modified.  For example, "$(1)" refers to the first group within the match, while "$(1l)" refers to the
-                    first match group mapped to lower case.  Similarly, "$(1u)" refers to the same characters, but mapped to upper case.</p>
-                <p>For example, suppose you had a rule which had ".*/(.*)/(.*)/.*" as a match expression, and "$(1) $(2)" as the replace string.  If presented with the path
-                    <code>Project/Folder_1/Folder_2/Filename</code>, it would output the string <code>Folder_1 Folder_2</code>.</p>
-                <p>If more than one rule is present, the rules are all executed in sequence.  That is, the output of the first rule is modified by the second rule, etc.</p>
-            </section>
-            
-            <section id="mexexrepository">
-                <title>Memex Patriarch Repository Connection</title>
-                <p>A Memex Patriarch connection allows you to index documents from a Memex server.</p>
-                <p>Documents described by Memex connections are typically secured by a Memex authority.  If you have not yet created a Memex authority, but would like your
-                    documents to be secured, please follow the directions in the section titled "Memex Patriarch Authority Connection".</p>
-                <p>A Memex connection has the following special tabs on the repository connection editing screen: the "Memex Server" tab, and the "Web Server" tab.  The "Memex Server" tab
-                    looks like this:</p>
-                <br/><br/>
-                <figure src="images/en_US/memex-connection-memex-server.PNG" alt="Memex Connection, Memex Server tab" width="80%"/>
-                <br/><br/>
-                <p>You must supply the name of your Memex server, and the connection port, along with the Memex credentials for a user that has sufficient permissions to retrieve Memex
-                    documents.  You must also select the Memex server's character encoding, and timezone.  If you do not know the encoding or timezone, consult your Memex system administrator.</p>
-                <p>The "Web Server" tab looks like this:</p>
-                <br/><br/>
-                <figure src="images/en_US/memex-connection-web-server.PNG" alt="Memex Connection, Web Server tab" width="80%"/>
-                <br/><br/>
-                <p>Here you must provide information that allows a Memex connection to construct a unique URL for each of your Memex documents.  Select a protocol, and fill in the server name and 
-                    port.</p>
-                <p>When you are done, click the "Save" button.  You should see a status page, something like this:</p>
-                <br/><br/>
-                <figure src="images/en_US/memex-connection-status.PNG" alt="Memex Connection Status" width="80%"/>
-                <br/><br/>
-                <p></p>
-                <p>Jobs based on Memex connections have the following special tabs: "Record Criteria", "Entities", and "Security".</p>
-
-                <p>More here later</p>
-            </section>
-            
-            <section id="meridiorepository">
-                <title>Autonomy Meridio Repository Connection</title>
-                <p>An Autonomy Meridio connection allows you to index documents from a set of Meridio servers.  Meridio's architecture allows you to separate services on multiple machines -
-                    e.g. the document services can run on one machine, and the records services can run on another.  A Meridio connection type correspondingly is configured to describe each
-                    Meridio service independently.</p>
-                <p>Documents described by Meridio connections are typically secured by a Meridio authority.  If you have not yet created a Meridio authority, but would like your
-                    documents to be secured, please follow the directions in the section titled "Autonomy Meridio Authority Connection".</p>
-                <p>A Meridio connection has the following special tabs on the repository connection editing screen: the "Document Server" tab, the "Records Server" tab, the "Web Client" tab,
-                    and the "Credentials" tab.  The "Document Server" tab looks like this:</p>
-                <br/><br/>
-                <figure src="images/en_US/meridio-connection-document-server.PNG" alt="Meridio Connection, Document Server tab" width="80%"/>
-                <br/><br/>
-                <p>Select the correct protocol, and enter the correct server name, port, and location to reference the Meridio document server services.  If a proxy is involved, enter the proxy host
-                    and port.  Authenticated proxies are not supported by this connection type at this time.</p>
-                <p>Note that, in the Meridio system, while it is possible that different services run on different servers, this is not typically the case.  The connection type, on the other hand, makes
-                    no assumptions, and permits the most general configuration.</p>
-                <p>The "Records Server" tab looks like this:</p>
-                <br/><br/>
-                <figure src="images/en_US/meridio-connection-records-server.PNG" alt="Meridio Connection, Records Server tab" width="80%"/>
-                <br/><br/>
-                <p>Select the correct protocol, and enter the correct server name, port, and location to reference the Meridio records server services.  If a proxy is involved, enter the proxy host
-                    and port.  Authenticated proxies are not supported by this connection type at this time.</p>
-                <p>Note that, in the Meridio system, while it is possible that different services run on different servers, this is not typically the case.  The connection type, on the other hand, makes
-                    no assumptions, and permits the most general configuration.</p>
-                <p>The "Web Client" tab looks like this:</p>
-                <br/><br/>
-                <figure src="images/en_US/meridio-connection-web-client.PNG" alt="Meridio Connection, Web Client tab" width="80%"/>
-                <br/><br/>
-                <p>The purpose of the Meridio Connection web client tab is to allow the connection to build a useful URL for each document it indexes.
-                    Select the correct protocol, and enter the correct server name, port, and location to reference the Meridio web client service.  No proxy information is required, as no documents
-                    will be fetched from this service.</p>
-                <p>The "Credentials" tab looks like this:</p>
-                <br/><br/>
-                <figure src="images/en_US/meridio-connection-credentials.PNG" alt="Meridio Connection, Credentials tab" width="80%"/>
-                <br/><br/>
-                <p>Enter the Meridio server credentials needed to access the Meridio system.</p>
-                <p>When you are done, click the "Save" button to save the connection.  You will see a summary screen, looking something like this:</p>
-                <br/><br/>
-                <figure src="images/en_US/meridio-connection-status.PNG" alt="Meridio Connection Status" width="80%"/>
-                <br/><br/>
-                <p>Note that in this example, the Meridio connection is not actually correctly configured, which is leading to an error status message instead of "Connection working".</p>
-                <p>Since Meridio uses Windows IIS for authentication, there are many ways in which the configuration of either IIS or the Windows domain under which Meridio runs can affect
-                    the correct functioning of the Meridio connection.  It is beyond the scope of this manual to describe the kinds of analysis and debugging techniques that might be required to diagnose connection
-                    and authentication problems.  If you have trouble, you will almost certainly need to involve your Meridio IT personnel.  Debugging tools may include (but are not limited to):</p>
-                <br/>
-                <ul>
-                    <li>Windows security event logs</li>
-                    <li>ManifoldCF logs (see below)</li>
-                    <li>Packet captures (using a tool such as WireShark)</li>
-                </ul>
-                <br/>
-                <p>If you need specific ManifoldCF logging information, contact your system integrator.</p>
-                <p></p>
-                <p>Jobs based on Meridio connections have the following special tabs: "Search Paths", "Content Types", "Categories", "Data Types", "Security", and "Metadata".</p>
-                <p>More here later</p>
-            </section>
-            
-            <section id="sharepointrepository">
-                <title>Microsoft SharePoint Repository Connection</title>
-                <p>The Microsoft SharePoint connection type allows you to index documents from a Microsoft SharePoint site.  Bear in mind that a single SharePoint installation actually represents
-                    a set of sites.  Some sites
-                    in SharePoint are directly related to others (e.g. they are subsites), while some sites operate relatively independently of one another.</p>
-                <p>The SharePoint connection type is designed so that one SharePoint repository connection can access all SharePoint sites from a specific root site through
-                    its explicit subsites.  In some very large SharePoint installations it is desirable to access <b>all</b> SharePoint sites using a single connection, but the
-                    ManifoldCF SharePoint connection type does not yet support that model.  If this functionality is important for you, contact your system integrator.</p>
-                <p>SharePoint uses a web URL model for addressing sites, subsites, libraries, and files.  The best way to figure out how to set up a SharePoint connection type is therefore to start
-                    with your web browser,
-                    and visit the root of the site you wish to crawl.  Then, record the URL you see in your browser.</p>
-                <p>Documents described by SharePoint connections are typically secured by an Active Directory authority.  If you have not yet created your Active Directory authority, but would like
-                    your documents to be secured, please follow the directions in the section titled "Active Directory Authority Connection".</p>
-                <p>A SharePoint connection has one special tab on the repository connection editing screen: the "Server" tab, which looks like this:</p>
-                <br/><br/>
-                <figure src="images/en_US/sharepoint-configure-server.PNG" alt="SharePoint Connection, Server tab" width="80%"/>
-                <br/><br/>
-                <p>Select your SharePoint server version from the pulldown.  If you do not select the correct server version, your documents may either be indexed with insufficient security protection,
-                    or you
-                    may not be able to index any documents.  Check with your SharePoint system administrator if you are not sure what to select.</p>
-                <p>Select the server protocol, and enter the server name and port, based on what you recorded from the URL for your SharePoint site.  For the "Site path" field, type in the portion of the
-                    root site URL that includes everything after the server and port, except for the final "aspx" file.  For example, if the SharePoint URL is "http://myserver:81/sites/somewhere/index.aspx",
-                    the site path would be "/sites/somewhere".</p>
-                <p>The SharePoint credentials are, of course, what you used to log into your root site.  The SharePoint connection type always requires the user name to be in the form "domain\user".</p>
-                <p>If your SharePoint server is using SSL, you will need to supply enough certificates for the connection's trust store so that the SharePoint server's SSL server certificate
-                    can be validated.  This typically consists of either the server certificate, or the certificate from the authority that signed the server certificate.  Browse to the local file containing the
-                    certificate, and click the "Add" button.</p>
-                <p>After you click the "Save" button, you will see a connection summary screen, which might look something like this:</p>
-                <br/><br/>
-                <figure src="images/en_US/sharepoint-status.PNG" alt="SharePoint Status" width="80%"/>
-                <br/><br/>
-                <p>Note that in this example, the SharePoint connection is not actually referencing a SharePoint instance, which is leading to an error status message instead of "Connection working".</p>
-                <p>Since SharePoint uses Windows IIS for authentication, there are many ways in which the configuration of either IIS or the Windows domain under which SharePoint runs can affect
-                    the correct functioning of the SharePoint connection.  It is beyond the scope of this manual to describe the kinds of analysis and debugging techniques that might be required to diagnose connection
-                    and authentication problems.  If you have trouble, you will almost certainly need to involve your SharePoint IT personnel.  Debugging tools may include (but are not limited to):</p>
-                <br/>
-                <ul>
-                    <li>Windows security event logs</li>
-                    <li>ManifoldCF logs (see below)</li>
-                    <li>Packet captures (using a tool such as WireShark)</li>
-                </ul>
-                <br/>
-                <p>If you need specific ManifoldCF logging information, contact your system integrator.</p>
-                <p></p>
-                <p>When you configure a job to use a repository connection of the SharePoint type, several additional tabs are presented.  These are, in order, "Paths", "Security", and "Metadata".</p>
-                <p>The "Paths" tab allows you to build a list of rules describing the SharePoint content that you want to include in your job.  When the SharePoint connection type encounters a subsite,
-                    library, list, or file, it looks through this list of rules to determine whether to include the subsite, library, list, or file.  The first matching rule will determine what will be done.</p>
-                <p>Each rule consists of a path, a rule type, and an action.  The actions are "Include" and "Exclude".  The rule type tells the connection what kind of SharePoint entity it is allowed to exactly match.  For
-                    example, a "File" rule will only exactly match SharePoint paths that represent files - it cannot exactly match sites or libraries.  The path itself is just a sequence of characters, where the "*" character
-                    has the special meaning of being able to match any number of any kind of characters, and the "?" character matches exactly one character of any kind.</p>
-                <p>The rule matcher extends strict, exact matching by introducing a concept of implicit inclusion rules.  If your rule action is "Include", and you specify (say) a "File" rule, the matcher presumes
-                    implicit inclusion rules for the corresponding site and library.  So, if you create an "Include File" rule that matches (for example) "/MySite/MyLibrary/MyFile", there is an implied "Site Include" rule
-                    for "/MySite", and an implied "Library Include" rule for "/MySite/MyLibrary".  Similarly, if you create a "Library Include" rule, there is an implied "Site Include" rule that corresponds to it.
-                    Note that these shortcuts apply only to "Include" rules - there are no corresponding implied "Exclude" rules.</p>
-                <p>The "Paths" tab allows you to build these rules one at a time, and add them either to the bottom of the list, or insert them into the list of rules at any point.  Either way, you construct the rule
-                    you want to append or insert by first constructing the path, from left to right, using your choice of text and context-dependent pulldowns with existing server path information listed.  This is what the tab
-                    may look like for you.  Bear in mind that if you are using a connection that does not display the status, "Connection working", you may not see the selections you should in these pulldowns:</p>
-                <br/><br/>
-                <figure src="images/en_US/sharepoint-job-paths.PNG" alt="SharePoint Job, Paths tab" width="80%"/>
-                <br/><br/>
-                <p>To build a rule, first build the rule's matching path.  Make an appropriate selection or enter desired text, then click either the "Add Site", "Add Library", "Add List", or "Add Text" button, depending on your choice.
-                    Repeat this process until the path is what you want it to be.  At this point, if the SharePoint connection does not know what kind of entity your path describes, you will need to select the
-                    SharePoint entity type that you want the rule to match.  Select whether this is an include or exclude rule.  Then, click the "Add New Rule" button to add your newly-constructed rule
-                    at the end of the list.</p>
-                <p>The "Security" tab allows you to specify whether SharePoint's security model should be applied to this set of documents, or not.  You also have the option of applying some specified set of access
-                    tokens to the documents described by the job.  The tab looks like this:</p>
-                <br/><br/>
-                <figure src="images/en_US/sharepoint-job-security.PNG" alt="SharePoint Job, Security tab" width="80%"/>
-                <br/><br/>
-                <p>Select whether SharePoint security is on or off using the radio buttons provided.  If security is off, you may add access tokens in the text box and click the "Add" button.  The access tokens must
-                    be in the proper form expected by the authority that governs your SharePoint connection for this feature to be useful.</p>
-                <p>The "Metadata" tab allows you to specify what metadata will be included for each document.  The tab is similar to the "Paths" tab, which you may want to review above:</p>
-                <br/><br/>
-                <figure src="images/en_US/sharepoint-job-metadata.PNG" alt="SharePoint Job, Metadata tab" width="80%"/>
-                <br/><br/>
-                <p>The main difference is that instead of rules that include or exclude individual sites, libraries, lists, or documents, the rules describe inclusion and exclusion of document or list item metadata.  Since metadata is associated
-                    with files and list items, all of the metadata rules are applied only to file paths and list item paths, and there are no such things as "site" or "library" metadata path rules.</p>
-                <p>If an exclusion rule matches a file's path, it means that <b>no</b> metadata from that file will be included at all.  There is no way to individually exclude a single field using an exclusion rule.</p>
-                <p>To build a rule, first build the rule's matching path.  Make an appropriate selection or enter desired text, then click either the "Add Site", "Add Library", "Add List", or "Add Text" button, depending on your choice.
-                    Repeat this process until the path is what you want it to be.  Select whether this is an include or exclude rule.  Either check the box for "Include all metadata", or select the metadata you want to include
-                    from the pulldown.  (The choices of metadata fields you are presented with are determined by which SharePoint library is selected.  If your rule path does not uniquely specify a library, you cannot select individual
-                    fields to include.  You can only select "All metadata".)  Then, click the "Add New Rule" button, to put your newly-constructed rule
-                    at the end of the list.</p>
-                <p>You can also use the "Metadata" tab to have the connection send path data along with each document, as a piece of document metadata.  To enable this feature, enter the name of the metadata
-                    attribute you want this information to be sent into the "Attribute name" field.  Then, add the rules you want to the list of rules.  Each rule has a match expression, which is a regular expression where
-                    parentheses ("(" and ")") mark sections you are interested in.  These sections are called "groups" in regular expression parlance.  The replace string consists of constant text plus
-                    substitutions of the groups from the match, perhaps modified.  For example, "$(1)" refers to the first group within the match, while "$(1l)" refers to the first match group
-                    mapped to lower case.  Similarly, "$(1u)" refers to the same characters, but mapped to upper case.</p>
-                <p>For example, suppose you had a rule which had ".*/(.*)/(.*)/.*" as a match expression, and "$(1) $(2)" as the replace string.  If presented with the path
-                    <code>Project/Folder_1/Folder_2/Filename</code>, it would output the string <code>Folder_1 Folder_2</code>.</p>
-                <p>If more than one rule is present, the rules are all executed in sequence.  That is, the output of the first rule is modified by the second rule, etc.</p>
-                <br/>
-                <p><b>Example: How to index a SharePoint 2010 Document Library</b></p>
-                <p></p>
-                <p>Let's say we want to index a Document Library named Documents.  The following URL displays the contents of the library: http://iknow/Documents/Forms/AllItems.aspx</p>
-                <figure src="images/en_US/documents-library-contents.png" alt="Documents Library Contents"/>
-                <p>Note that there are eight folders and one file in our library.  Some folders have sub-folders, and leaf folders contain files.  The following <b>single</b> Path Rule is sufficient to index <b>all</b> files in the Documents library.</p>
-                <figure src="images/en_US/documents-library-path-rule.png" alt="Documents Library Path Rule"/>
-                <p>If we click Library Tools > Library > Modify View, we will see the complete list of all available metadata.</p>
-                <figure src="images/en_US/documents-library-all-metadata.png" alt="Documents Library All Metadata"/>
-                <p>ManifoldCF's UI also displays all available Document Libraries and their associated metadata.  Using this pulldown, you can select which fields you want to index.</p>
-                <figure src="images/en_US/documents-library-metadata.png" alt="Documents Library Selected Metadata"/>
-                <p>To create the metadata rule below, click the Metadata tab in the Job settings.  Select Documents from the --Select library-- pulldown and click the "Add Library" button.  As soon as you have done this, all available metadata will be listed.
-                     Enter * in the textbox to the right of the "Add Text" button, and click the "Add Text" button.  When you have done this, the Path Match becomes /Documents/*.  After this you can multi-select metadata from the list,
-                     which populates Fields with CheckoutUser, Created, etc.  Click the "Add New Rule" button to add this new rule to your Metadata rules.</p>
-                <figure src="images/en_US/documents-library-metadata-rule.png" alt="Documents Library Metadata Rule"/>
-                <p>Finally click the "Save" button at the bottom of the page. You will see a page looking something like this:</p>
-                <figure src="images/en_US/documents-library-summary.png" alt="Documents Library Summary Page"/>
-                <br/>
-                <p><b>Some Final Notes</b></p>
-                <ul>
-                    <li>If you don't add * to the Path match rule, your selected fields won't be used.  In other words, the Path match rule <b>/Documents</b> won't match the document <i>/Documents/ik_docs/diger/sorular.docx</i>.</li>
-                    <li>We can include all metadata using the checkbox, without selecting from the pulldown list.</li>
-                    <li>If we were to index only docx files, our Path match rule would be <b>/Documents/*.docx</b>.</li>
-                </ul>
-                <br/>
-                
-                <p><b>Example: How to index SharePoint 2010 Lists</b></p>
-                <p></p>
-                <p>Lists are a key part of the architecture of Windows SharePoint Services. A document library is another form of a list, and while it has many similar properties to a standard list, it also includes additional 
-                     functions to enable document uploads, retrieval, and other functions to support document management and collaboration. <a href="http://msdn.microsoft.com/en-us/library/dd490727%28v=office.12%29.aspx">[1]</a> </p>
-            	<p>An item added to a document library (or any other library) must be a file; you can't have a library item without a file.  A list, on the other hand, doesn't contain files; a list item is just a piece of data, much like a row in a SQL table.</p>
-                <p>Let's say we want to index a List named IKGeneralFAQ.  The following URL displays the contents of the list: http://iknow/Lists/IKGeneralFAQ/AllItems.aspx</p>
-                <figure src="images/en_US/faq-list-contents.png" alt="IKGeneralFAQ List Contents"/>
-                <p>Note that Lists do not have files; a List looks like an Excel spreadsheet.  In the ManifoldCF Job settings, the Paths tab displays all available Lists, and lets you select the name of the list you want to index.</p>
-                <figure src="images/en_US/add-list.png" alt="Add List"/>
-                <p>After we select IKGeneralFAQ, click the "Add List" button, and then click the "Save" button, we have the following Path Rule:</p>
-                <figure src="images/en_US/faq-list-path-rule.png" alt="IKGeneralFAQ List Path Rule"/>
-                <p>The above <b>single</b> Path Rule is sufficient to index the content of the IKGeneralFAQ List.  Note that, unlike document libraries, we don't need * here.</p>
-                <p>If we click List Tools > List > Modify View, we will see the complete list of all available metadata.</p>
-                <figure src="images/en_US/faq-list-all-metadata.png" alt="IKGeneralFAQ List All Metadata"/>
-                <p>ManifoldCF's Metadata UI also displays all available Lists and their associated metadata.  Using this pulldown, you can select which fields you want to index.</p>
-                <figure src="images/en_US/faq-list-metadata.png" alt="IKGeneralFAQ List Selected Metadata"/>
-                <p>To create the metadata rule below, click the Metadata tab in the Job settings.  Select IKGeneralFAQ from the --Select list-- pulldown and click the "Add List" button.  As soon as you have done this, all available metadata will be listed.
-                     After this you can multi-select metadata from the list, which populates Fields with ID, IKFAQAnswer, IKFAQPage, IKFAQPageID, etc.  Click the "Add New Rule" button to add this new rule to your Metadata rules.</p>
-                <figure src="images/en_US/faq-list-metadata-rule.png" alt="IKGeneralFAQ List Metadata Rule"/>
-                <p>Finally click the "Save" button at the bottom of the page. You will see a page looking something like this:</p>
-                <figure src="images/en_US/faq-list-summary.png" alt="IKGeneralFAQ List Summary Page"/>
-                <br/>
-                <p><b>Some Final Notes</b></p>
-                <ul>
-                    <li>Note that, when specifying Metadata rules, the UI automatically adds * to the Path match rule for Lists.  This is not the case with Document Libraries.</li>
-                    <li>We can include all metadata using the checkbox, without selecting from the pulldown list.</li>
-                </ul>
-            </section>
-                        
-            <section id="cmisrepository">
-              <title>CMIS Repository Connection</title>
-              <p>The CMIS Repository Connection type allows you to index content from any CMIS-compliant repository.</p>
-              <p>By default each CMIS connection manages a single CMIS repository.  This means that if you have multiple CMIS repositories exposed by a single endpoint, you need to create a separate connection for each CMIS repository.</p>
-              <br/>
-              <p>A CMIS connection has the following configuration parameters on the repository connection editing screen:</p>
-              <br/><br/>
-              <figure src="images/en_US/cmis-repository-connection-configuration.png" alt="CMIS Repository Connection, configuration parameters" width="80%"/>
-              <br/><br/>
-              <p>Select the correct CMIS binding protocol (AtomPub or Web Services) and enter the correct username, password and the endpoint to reference the CMIS document server services.</p>
-              <p>The endpoint consists of the HTTP protocol, hostname, port and the context path of the CMIS service exposed by the CMIS server:</p>
-              <br/><br/>
-              <p><code>http://HOSTNAME:PORT/CMIS_CONTEXT_PATH</code></p>
-              <br/><br/>
-              <p>Optionally you can provide the repository ID to select one of the exposed CMIS repositories.  If this parameter is left empty, the CMIS connector will use the first CMIS repository exposed by the CMIS server.</p>
-              <br/>
-              <p>Note that, in a CMIS system, each binding protocol has its own context path, which means that the endpoints are different.</p>
-              <p>For example, the endpoint of the AtomPub binding exposed by the current version of the InMemory Server provided by the OpenCMIS framework is the following:</p>
-              <p><code>http://localhost:8080/chemistry-opencmis-server-inmemory-war-0.5.0-SNAPSHOT/atom</code></p>
-              <br/><br/>
-              <p>The Web Services binding is exposed using a different endpoint:</p>
-              <p><code>http://localhost:8080/chemistry-opencmis-server-inmemory-war-0.5.0-SNAPSHOT/services/RepositoryService</code></p>
-              <br/><br/>
-              <p>After you click the "Save" button, you will see a connection summary screen, which might look something like this:</p>
-              <br/><br/>
-              <figure src="images/en_US/cmis-repository-connection-configuration-save.png" alt="CMIS Repository Connection, saving configuration" width="80%"/>
-              <br/><br/>
-              <p>When you configure a job to use the CMIS repository connection an additional tab is presented. This is the "CMIS Query" tab:</p>
-              <br/><br/>
-              <figure src="images/en_US/cmis-repository-connection-job-cmisquery.png" alt="CMIS Repository Connection, CMIS Query" width="80%"/>
-              <br/><br/>
-              <p>The CMIS Query tab allows you to specify a query, written in the CMIS Query Language, that selects all the documents that need to be ingested.</p>
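-              <p>For example, a hypothetical query that selects all documents whose names end in ".pdf" might look like the following (the property and type names are only
-                  an illustration; adjust them to match what your repository exposes):</p>
-              <p><code>SELECT * FROM cmis:document WHERE cmis:name LIKE '%.pdf'</code></p>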
-              <p>Note that, during the ingestion process, for each result the CMIS connector finds: if the result is a folder node (one with cmis:folder as the baseType), the connector ingests all the children of the folder node; otherwise it directly ingests the document (which must have cmis:document as the baseType).</p>
-              <p>When you are done, and you click the "Save" button, you will see a summary page looking something like this:</p>
-              <br/><br/>
-              <figure src="images/en_US/cmis-repository-connection-job-save.png" alt="CMIS Repository Connection, saving job" width="80%"/>
-              <br/><br/>
-            </section>
-            
-            <section id="alfrescorepository">
-              <title>Alfresco Repository Connection</title>
-              <p>The Alfresco Repository Connection type allows you to index content from an Alfresco repository.</p>
-              <p>This connector is compatible with any Alfresco version (2.x, 3.x and 4.x).</p>
-              <br/>
-              <p>An Alfresco connection has the following configuration parameters on the repository connection editing screen:</p>
-              <br/><br/>
-              <figure src="images/en_US/alfresco-repository-connection-configuration.png" alt="Alfresco Repository Connection, configuration parameters" width="80%"/>
-              <br/><br/>
-              <p>Enter the correct username, password and the endpoint to reference the Alfresco document server services.</p>
-              <p>If you have a Multi-Tenancy environment configured in the repository, be sure to also set the tenant domain parameter.</p>
-              <p>The endpoint consists of the HTTP protocol, hostname, port and the context path of the Alfresco Web Services API exposed by the Alfresco server.</p>
-              <p>By default, if you haven't changed the context path of the Alfresco webapp, you should have an endpoint address similar to the following:</p>
-              <br/><br/>
-              <p><code>http://HOSTNAME:PORT/alfresco/api</code></p>
-              <br/><br/>
-              <p>After you click the "Save" button, you will see a connection summary screen, which might look something like this:</p>
-              <br/><br/>
-              <figure src="images/en_US/alfresco-repository-connection-configuration-save.png" alt="Alfresco Repository Connection, saving configuration" width="80%"/>
-              <br/><br/>
-              <p>When you configure a job to use the Alfresco repository connection an additional tab is presented. This is the "Lucene Query" tab:</p>
-              <br/><br/>
-              <figure src="images/en_US/alfresco-repository-connection-job-lucenequery.png" alt="Alfresco Repository Connection, Lucene Query" width="80%"/>
-              <br/><br/>
-              <p>The Lucene Query tab allows you to specify a query, written in the Lucene Query Language, that selects all the documents that need to be ingested.</p>
-              <p>Please note that if the Lucene Query is left empty, the connector will ingest all the content in the Alfresco repository under the Company Home.</p>
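-              <p>For example, a hypothetical query that selects all content nodes under a folder named "MyDocs" below Company Home might look like the following (the folder
-                  name is only an illustration; adjust the path to match your repository):</p>
-              <p><code>PATH:"/app:company_home/cm:MyDocs//*" AND TYPE:"cm:content"</code></p>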
-              <p>Please also note that, during the ingestion process, for each result the Alfresco connector finds: if the result is a folder node (one that has a child association defined for the node type), the connector ingests all the children of the folder node; otherwise it directly ingests the document (a node that has d:content among its properties).</p>
-              <p>When you are done, and you click the "Save" button, you will see a summary page looking something like this:</p>
-              <br/><br/>
-              <figure src="images/en_US/alfresco-repository-connection-job-save.png" alt="Alfresco Repository Connection, saving job" width="80%"/>
-              <br/><br/>
-            </section>
             
         </section>
 
diff --git a/site/src/documentation/content/xdocs/en_US/how-to-build-and-deploy.xml b/site/src/documentation/content/xdocs/en_US/how-to-build-and-deploy.xml
index c8f687a..dab03d8 100644
--- a/site/src/documentation/content/xdocs/en_US/how-to-build-and-deploy.xml
+++ b/site/src/documentation/content/xdocs/en_US/how-to-build-and-deploy.xml
@@ -79,9 +79,14 @@
         <ul>
           <li>CMIS connector</li>
           <li>Documentum connector, built against a Documentum API stub</li>
+          <li>DropBox connector</li>
           <li>FileNet connector, built against a FileNet API stub</li>
-          <li>Filesystem connector</li>
+          <li>WGET-compatible filesystem connector</li>
+          <li>Generic XML repository connector</li>
+          <li>Google Drive connector</li>
+          <li>HDFS connector</li>
           <li>JDBC connector, with just the PostgreSQL jdbc driver</li>
+          <li>Jira connector</li>
           <li>LiveLink connector, built against a LiveLink API stub</li>
           <li>Meridio connector, built against modified Meridio API WSDLs and XSDs</li>
           <li>RSS connector</li>
@@ -108,10 +113,12 @@
           <li>Apache Solr output connector</li>
           <li>OpenSearchServer output connector</li>
           <li>ElasticSearch output connector</li>
+          <li>WGET-compatible filesystem output connector</li>
+          <li>HDFS output connector</li>
           <li>Null output connector</li>
         </ul>
         <p></p>
-        <p>Each individual LGPL and proprietary connector's dependencies and build limitations are described in separate sections below.</p>
+        <p>The dependencies and build limitations of each individual LGPL and proprietary connector are described in separate sections below.</p>
         <p></p>
             
         <section>
@@ -120,7 +127,7 @@
           <p>The Alfresco connector requires the Alfresco Web Services Client provided by Alfresco in order to be built. Place this jar into the directory <em>connectors/alfresco/lib-proprietary</em> before you build.
               This will occur automatically if you execute the ant target "make-deps" from the ManifoldCF root directory.</p>
           <p></p>
-          <p>To run integration tests for the connector you have to copy the alfresco.war including H2 support created by the Maven module test-materials/alfresco-war (using "mvn package" inside the folder)
+          <p>To run integration tests for the connector, you have to copy the alfresco.war (including H2 support) created by the Maven module test-materials/alfresco-4-war (using "mvn package" inside the folder)
               into the <em>connectors/alfresco/test-materials-proprietary</em> folder.  Then use the "ant test" or "mvn integration-test" for the standard build to execute integration tests.</p>
           <p></p>
         </section>
@@ -315,8 +322,10 @@
           <tr><td><em>script-engine</em></td><td>jars and scripts for running the ManifoldCF script interpreter</td></tr>
           <tr><td><em>example</em></td><td>a jetty-based example that runs in a single process (except for any connector-specific processes), excluding all proprietary libraries</td></tr>
           <tr><td><em>example-proprietary</em></td><td>a jetty-based example that runs in a single process (except for any connector-specific processes), including proprietary libraries; not included in binary release</td></tr>
-          <tr><td><em>multiprocess-example</em></td><td>scripts and jars for an example that uses the multiple process model, excluding all proprietary libraries</td></tr>
-          <tr><td><em>multiprocess-example-proprietary</em></td><td>scripts and jars for an example that uses the multiple process model, including proprietary libraries; not included in binary release</td></tr>
+          <tr><td><em>multiprocess-file-example</em></td><td>scripts and jars for an example that uses the multiple process model using file-based synchronization, excluding all proprietary libraries</td></tr>
+          <tr><td><em>multiprocess-file-example-proprietary</em></td><td>scripts and jars for an example that uses the multiple process model using file-based synchronization, including proprietary libraries; not included in binary release</td></tr>
+          <tr><td><em>multiprocess-zk-example</em></td><td>scripts and jars for an example that uses the multiple process model using ZooKeeper-based synchronization, excluding all proprietary libraries</td></tr>
+          <tr><td><em>multiprocess-zk-example-proprietary</em></td><td>scripts and jars for an example that uses the multiple process model using ZooKeeper-based synchronization, including proprietary libraries; not included in binary release</td></tr>
           <tr><td><em>web</em></td><td>app-server deployable web applications (wars), excluding all proprietary libraries</td></tr>
           <tr><td><em>web-proprietary</em></td><td>app-server deployable web applications (wars), including proprietary libraries; not included in binary release</td></tr>
           <tr><td><em>doc</em></td><td>javadocs for framework and all included connectors</td></tr>
@@ -415,15 +424,15 @@
         </section>
 
         <section>
-          <title>Simplified multi-process model</title>
+          <title>Simplified multi-process model using file-based synchronization</title>
           <p></p>
-          <p>ManifoldCF can also be deployed in a simplified multi-process model.  Inside the <em>multiprocess-example</em> directory, you will find everything you need to do this.  (The
-              <em>multiprocess-example-proprietary</em> directory is similar but includes proprietary material and is available only if you build ManifoldCF yourself.)  Below is a list of
+          <p>ManifoldCF can also be deployed in a simplified multi-process model which uses files to synchronize processes.  Inside the <em>multiprocess-file-example</em> directory, you will find everything you need to do this.  (The
+              <em>multiprocess-file-example-proprietary</em> directory is similar but includes proprietary material and is available only if you build ManifoldCF yourself.)  Below is a list of
               what you will find in this directory.</p>
           <p></p>
           <table>
-            <caption>Multiprocess example files and directories</caption>
-            <tr><th><em>multiprocess-example</em> file/directory</th><th>Meaning</th></tr>
+            <caption>File-based multiprocess example files and directories</caption>
+            <tr><th><em>multiprocess-file-example</em> file/directory</th><th>Meaning</th></tr>
             <tr><td><em>web</em></td><td>Web applications that should be deployed on tomcat or the equivalent, plus recommended application server -D switch names and values</td></tr>
             <tr><td><em>processes</em></td><td>classpath jars that should be included in the class path for all non-connector-specific processes, along with -D switches, using the same convention as described for tomcat, above</td></tr>
             <tr><td><em>properties.xml</em></td><td>an example ManifoldCF configuration file, in the right place for the multiprocess script to find it</td></tr>
@@ -433,17 +442,63 @@
             <tr><td><em>start-database[.sh|.bat]</em></td><td>script to start the HSQLDB database</td></tr>
             <tr><td><em>initialize[.sh|.bat]</em></td><td>script to create the database instance, create all database tables, and register connectors</td></tr>
             <tr><td><em>start-webapps[.sh|.bat]</em></td><td>script to start Jetty with the ManifoldCF web applications deployed</td></tr>
-            <tr><td><em>start-agents[.sh|.bat]</em></td><td>script to start the agents process</td></tr>
-            <tr><td><em>stop-agents[.sh|.bat]</em></td><td>script to stop a running agents process cleanly</td></tr>
+            <tr><td><em>start-agents[.sh|.bat]</em></td><td>script to start the (first) agents process</td></tr>
+            <tr><td><em>start-agents-2[.sh|.bat]</em></td><td>script to start a second agents process</td></tr>
+            <tr><td><em>stop-agents[.sh|.bat]</em></td><td>script to stop all running agents processes cleanly</td></tr>
             <tr><td><em>lock-clean[.sh|.bat]</em></td><td>script to clean up dirty locks (run only when all webapps and processes are stopped)</td></tr>
           </table>
           <p></p>
           <section>
             <title>Initializing the database and running</title>
             <p></p>
-            <p>If you run the multiprocess model, after you first start the database (using <em>start-database[.sh|.bat]</em>), you will need to initialize the database before you start the agents process or use the crawler UI.  To do this, all you need to do is
+            <p>If you run the file-based multiprocess model, after you first start the database (using <em>start-database[.sh|.bat]</em>), you will need to initialize the database before you start the agents process or use the crawler UI.  To do this, all you need to do is
                 run the <em>initialize[.sh|.bat]</em> script.  Then, you will need to start the web applications (using <em>start-webapps[.sh|.bat]</em>) and the agents process (using
-                <em>start-agents[.sh|.bat]</em>).</p>
+                <em>start-agents[.sh|.bat]</em>), and optionally the second agents process (using <em>start-agents-2[.sh|.bat]</em>).</p>
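+            <p>For example, on a Unix-like system the command sequence, run from the <em>multiprocess-file-example</em> directory, might look like this (use the
+                corresponding .bat scripts on Windows):</p>
+            <p><code>./start-database.sh</code></p>
+            <p><code>./initialize.sh</code></p>
+            <p><code>./start-webapps.sh</code></p>
+            <p><code>./start-agents.sh</code></p>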
+            <p></p>
+          </section>
+
+        </section>
+
+        <section>
+          <title>Simplified multi-process model using ZooKeeper-based synchronization</title>
+          <p></p>
+          <p>ManifoldCF can be deployed in a simplified multi-process model which uses Apache ZooKeeper to synchronize processes.  Inside the <em>multiprocess-zk-example</em> directory, you will find everything you need to do this.  (The
+              <em>multiprocess-zk-example-proprietary</em> directory is similar but includes proprietary material and is available only if you build ManifoldCF yourself.)  Below is a list of
+              what you will find in this directory.</p>
+          <p></p>
+          <table>
+            <caption>ZooKeeper-based multiprocess example files and directories</caption>
+            <tr><th><em>multiprocess-zk-example</em> file/directory</th><th>Meaning</th></tr>
+            <tr><td><em>web</em></td><td>Web applications that should be deployed on tomcat or the equivalent, plus recommended application server -D switch names and values</td></tr>
+            <tr><td><em>processes</em></td><td>classpath jars that should be included in the class path for all non-connector-specific processes, along with -D switches, using the same convention as described for tomcat, above</td></tr>
+            <tr><td><em>properties.xml</em></td><td>an example ManifoldCF configuration file, in the right place for the multiprocess script to find it</td></tr>
+            <tr><td><em>properties-global.xml</em></td><td>an example ManifoldCF shared configuration file, in the right place for the setglobalproperties script to find it</td></tr>
+            <tr><td><em>logging.ini</em></td><td>an example ManifoldCF logging configuration file, in the right place for the properties.xml to find it</td></tr>
+            <tr><td><em>zookeeper</em></td><td>the example ZooKeeper storage directory, which must be writable in order for ZooKeeper to work</td></tr>
+            <tr><td><em>logs</em></td><td>where the ManifoldCF logs get written to</td></tr>
+            <tr><td><em>runzookeeper[.sh|.bat]</em></td><td>script to run a ZooKeeper server instance</td></tr>
+            <tr><td><em>setglobalproperties[.sh|.bat]</em></td><td>script to initialize ZooKeeper with properties from properties-global.xml</td></tr>
+            <tr><td><em>start-database[.sh|.bat]</em></td><td>script to start the HSQLDB database</td></tr>
+            <tr><td><em>initialize[.sh|.bat]</em></td><td>script to create the database instance, create all database tables, and register connectors</td></tr>
+            <tr><td><em>start-webapps[.sh|.bat]</em></td><td>script to start Jetty with the ManifoldCF web applications deployed</td></tr>
+            <tr><td><em>start-agents[.sh|.bat]</em></td><td>script to start the (first) agents process</td></tr>
+            <tr><td><em>start-agents-2[.sh|.bat]</em></td><td>script to start a second agents process</td></tr>
+            <tr><td><em>stop-agents[.sh|.bat]</em></td><td>script to stop all running agents processes cleanly</td></tr>
+          </table>
+          <p></p>
+          <section>
+            <title>Initializing the database and running</title>
+            <p></p>
+            <p>If you run the ZooKeeper-based multiprocess example, you must perform the following steps; an example command sequence appears after the list:</p>
+            <p></p>
+            <ol>
+              <li>Start ZooKeeper (using the <em>runzookeeper[.sh|.bat]</em> script)</li>
+              <li>Initialize the ManifoldCF shared configuration data (using <em>setglobalproperties[.sh|.bat]</em>)</li>
+              <li>Start the database (using <em>start-database[.sh|.bat]</em>)</li>
+              <li>Initialize the database (using <em>initialize[.sh|.bat]</em>)</li>
+              <li>Start the agents process (using <em>start-agents[.sh|.bat]</em>, and optionally <em>start-agents-2[.sh|.bat]</em>)</li>
+              <li>Start the web applications (using <em>start-webapps[.sh|.bat]</em>)</li>
+            </ol>
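+            <p>For example, on a Unix-like system the command sequence, run from the <em>multiprocess-zk-example</em> directory, might look like this (use the
+                corresponding .bat scripts on Windows; the ZooKeeper server started by the first command must be left running, for example in a separate terminal):</p>
+            <p><code>./runzookeeper.sh</code></p>
+            <p><code>./setglobalproperties.sh</code></p>
+            <p><code>./start-database.sh</code></p>
+            <p><code>./initialize.sh</code></p>
+            <p><code>./start-agents.sh</code></p>
+            <p><code>./start-webapps.sh</code></p>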
             <p></p>
           </section>
 
@@ -476,7 +531,7 @@
             MCF_HOME, which should point to ManifoldCF's home execution directory, where the <em>properties.xml</em> file is found.)</p>
             
           <p></p>
-          <p>The basic steps required to set up and run ManifoldCF in command-driven multi-process mode are as follows:</p>
+          <p>The basic steps required to set up and run ManifoldCF in command-driven file-based multi-process mode are as follows:</p>
           <p></p>
           <ul>
             <li>Install PostgreSQL or MySQL.  The PostgreSQL JDBC driver included with ManifoldCF is known to work with version 9.1, so that version is the currently recommended
@@ -547,6 +602,20 @@
               correct arguments and settings.</p>
             <p></p>
             <table>
+              <tr><th>Authorization Domain Command Class</th><th>Arguments</th><th>Function</th></tr>
+              <tr><td>org.apache.manifoldcf.authorities.RegisterDomain</td><td><em>domainname</em> <em>description</em></td><td>Register an authorization domain</td></tr>
+              <tr><td>org.apache.manifoldcf.authorities.UnRegisterDomain</td><td><em>domainname</em></td><td>Un-register an authorization domain</td></tr>
+            </table>
+            <p></p>
+            <table>
+              <tr><th>User Mapping Command Class</th><th>Arguments</th><th>Function</th></tr>
+              <tr><td>org.apache.manifoldcf.authorities.RegisterMapper</td><td><em>classname</em> <em>description</em></td><td>Register a mapping connector class</td></tr>
+              <tr><td>org.apache.manifoldcf.authorities.UnRegisterMapper</td><td><em>classname</em></td><td>Un-register a mapping connector class</td></tr>
+              <tr><td>org.apache.manifoldcf.authorities.UnRegisterAllMappers</td><td>None</td><td>Un-register all mapping connector classes</td></tr>
+              <tr><td>org.apache.manifoldcf.authorities.SynchronizeMappers</td><td>None</td><td>Un-register all registered mapping connector classes that can't be found</td></tr>
+            </table>
+            <p></p>
+            <table>
               <tr><th>Authority Command Class</th><th>Arguments</th><th>Function</th></tr>
               <tr><td>org.apache.manifoldcf.authorities.RegisterAuthority</td><td><em>classname</em> <em>description</em></td><td>Register an authority connector class</td></tr>
               <tr><td>org.apache.manifoldcf.authorities.UnRegisterAuthority</td><td><em>classname</em></td><td>Un-register an authority connector class</td></tr>
@@ -554,8 +623,8 @@
               <tr><td>org.apache.manifoldcf.authorities.SynchronizeAuthorities</td><td>None</td><td>Un-register all registered authority connector classes that can't be found</td></tr>
             </table>
             <p></p>
-            <p>Remember that you need to include all the jars under <em>multiprocess-example/processes/lib</em> in the classpath whenever you run one of these commands!
-                But, luckily, there are scripts which do this for you.  These can be found in <em>multiprocess-example/processes/executecommand[.sh,.bat]</em>.
+            <p>Remember that you need to include all the jars under <em>multiprocess-file-example/processes/lib</em> in the classpath whenever you run one of these commands!
+                But, luckily, there are scripts which do this for you.  These can be found in <em>multiprocess-file-example/processes/executecommand[.sh|.bat]</em>.
                 The scripts require some environment variables to be set, such as <em>MCF_HOME</em> and <em>JAVA_HOME</em>, and expect the configuration file to be
                 found at <em>MCF_HOME/properties.xml</em>.</p>
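+            <p>For example, a hypothetical invocation that registers an authority connector class might look like the following (the class name and description are placeholders,
+                and you should check the script itself for the exact calling convention; substitute the actual connector class you want to register):</p>
+            <p><code>./executecommand.sh org.apache.manifoldcf.authorities.RegisterAuthority com.example.MyAuthorityConnector "My Authority"</code></p>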
             <p></p>
@@ -608,8 +677,8 @@
         <p>&#60;outputconnector name="<em>pretty_name</em>" class="<em>connector_class</em>"/&#62;</p>
         <p></p>
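+        <p>For example, an entry registering a Solr output connector might look like the following (the class name shown is only illustrative; treat the entries already
+          present in your <em>connectors.xml</em> as the authoritative reference):</p>
+        <p>&#60;outputconnector name="Solr" class="org.apache.manifoldcf.agents.output.solr.SolrConnector"/&#62;</p>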
         <p>The <em>connectors.xml</em> file typically has some connectors commented out - namely the ones build with stubs which require you to supply a
-          third-party library in order for the connector to run.  If you build ManifoldCF yourself, the <em>example-proprietary</em> and <em>multiprocess-example-proprietary</em>
-          directories instead use <em>connectors-proprietary.xml</em>.  The connectors you build against the proprietary libraries you supply will not have their
+          third-party library in order for the connector to run.  If you build ManifoldCF yourself, the <em>example-proprietary</em>, <em>multiprocess-file-example-proprietary</em>,
+          and <em>multiprocess-zk-example-proprietary</em> directories instead use <em>connectors-proprietary.xml</em>.  The connectors you build against the proprietary libraries you supply will not have their
           <em>connectors-proprietary.xml</em> tags commented out.</p>
         <p></p>
       </section>
@@ -687,7 +756,7 @@
             <tr><td>tcpip_socket</td><td>true</td></tr>
             <tr><td>max_connections</td><td>400</td></tr>
             <tr><td>checkpoint_timeout</td><td>900</td></tr>
-            <tr><td>datastyle</td><td>ISO,European</td></tr>
+            <tr><td>datestyle</td><td>ISO,European</td></tr>
             <tr><td>autovacuum</td><td>off</td></tr>
           </table>
           <p></p>
@@ -745,6 +814,8 @@
             when operating without any apparent stalls due to the above issues, Derby is still only about 1/4 as fast as PostgreSQL.  At the moment this limits Derby's utility for
             ManifoldCF to demonstration and testing.</p>
         </section>
+        
+        
       </section>
         
       <section>
@@ -778,6 +849,8 @@
           <table>
             <caption>Property.xml properties</caption>
             <tr><th>Property</th><th>Required?</th><th>Function</th></tr>
+            <tr><td>org.apache.manifoldcf.login.name</td><td>No</td><td>Crawler UI login user ID (defaults to "admin")</td></tr>
+            <tr><td>org.apache.manifoldcf.login.password</td><td>No</td><td>Crawler UI login user password (defaults to "admin")</td></tr>
             <tr><td>org.apache.manifoldcf.crawleruiwarpath</td><td>Yes, for Jetty</td><td>Location of Crawler UI war</td></tr>
             <tr><td>org.apache.manifoldcf.authorityservicewarpath</td><td>Yes, for Jetty</td><td>Location of Authority Service war</td></tr>
             <tr><td>org.apache.manifoldcf.apiservicewarpath</td><td>Yes, for Jetty</td><td>Location of API Service war</td></tr>
@@ -786,6 +859,10 @@
             <tr><td>org.apache.manifoldcf.connectorsconfigurationfile</td><td>No</td><td>Location of connectors.xml file, for QuickStart, so ManifoldCF can register connectors.</td></tr>
             <tr><td>org.apache.manifoldcf.dbsuperusername</td><td>No</td><td>Database superuser name, for QuickStart, so ManifoldCF can create database instance.</td></tr>
             <tr><td>org.apache.manifoldcf.dbsuperuserpassword</td><td>No</td><td>Database superuser password, for QuickStart, so ManifoldCF can create database instance.</td></tr>
+            <tr><td>org.apache.manifoldcf.ui.maxstatuscount</td><td>No</td><td>The maximum number of documents ManifoldCF will try to count for the job status display.  Defaults to 500000.</td></tr>
+            <tr><td>org.apache.manifoldcf.databaseimplementationclass</td><td>No</td><td>Specifies the class to use to implement database access.
+                Default is a built-in PostgreSQL implementation.  Supported choices are: org.apache.manifoldcf.core.database.DBInterfaceDerby,
+                org.apache.manifoldcf.core.database.DBInterfacePostgreSQL, org.apache.manifoldcf.core.database.DBInterfaceHSQLDB</td></tr>
             <tr><td>org.apache.manifoldcf.postgresql.hostname</td><td>No</td><td>PostgreSQL server host name, or localhost if not specified.</td></tr>
             <tr><td>org.apache.manifoldcf.postgresql.port</td><td>No</td><td>PostgreSQL server port, or standard port if not specified.</td></tr>
             <tr><td>org.apache.manifoldcf.postgresql.ssl</td><td>No</td><td>Set to "true" for ssl communication with PostgreSQL.</td></tr>
@@ -796,9 +873,17 @@
             <tr><td>org.apache.manifoldcf.hsqldbdatabaseport</td><td>No</td><td>The HSQLDB remote server port.</td></tr>
             <tr><td>org.apache.manifoldcf.hsqldbdatabaseinstance</td><td>No</td><td>The HSQLDB remote database instance name.</td></tr>
             <tr><td>org.apache.manifoldcf.mysql.server</td><td>No</td><td>The MySQL server name.  Defaults to 'localhost'.</td></tr>
-            <tr><td>org.apache.manifoldcf.lockmanagerclass</td><td>No</td><td>Specifies the class to use to implement synchronization.  Default is a built-in file-based synchronization class.</td></tr>
-            <tr><td>org.apache.manifoldcf.databaseimplementationclass</td><td>No</td><td>Specifies the class to use to implement database access.  Default is a built-in PostgreSQL implementation.  Supported choices are: org.apache.manifoldcf.core.database.DBInterfaceDerby, org.apache.manifoldcf.core.database.DBInterfacePostgreSQL, org.apache.manifoldcf.core.database.DBInterfaceHSQLDB</td></tr>
-            <tr><td>org.apache.manifoldcf.synchdirectory</td><td>Yes, if file-based synchronization class is used</td><td>Specifies the path of a synchronization directory.  All ManifoldCF process owners <strong>must</strong> have read/write privileges to this directory.</td></tr>
+            <tr><td>org.apache.manifoldcf.mysql.client</td><td>No</td><td>The MySQL client property.  Defaults to 'localhost'.  You may want to set this to '%' for a multi-machine setup.</td></tr>
+            <tr><td>org.apache.manifoldcf.lockmanagerclass</td><td>No</td><td>Specifies the class to use to implement synchronization.  Default
+                is either file-based synchronization or in-memory synchronization, using the org.apache.manifoldcf.core.lockmanager.LockManager class.
+                Options include org.apache.manifoldcf.core.lockmanager.BaseLockManager, org.apache.manifoldcf.core.lockmanager.FileLockManager, and
+                org.apache.manifoldcf.core.lockmanager.ZooKeeperLockManager.</td></tr>
+            <tr><td>org.apache.manifoldcf.synchdirectory</td><td>Yes, if file-based synchronization class is specified</td><td>Specifies the path of a
+                synchronization directory.  All ManifoldCF process owners <strong>must</strong> have read/write privileges to this directory.</td></tr>
+            <tr><td>org.apache.manifoldcf.zookeeper.connectstring</td><td>Yes, if ZooKeeper-based synchronization class is specified</td><td>Specifies the ZooKeeper
+                connection string, consisting of comma-separated hostname:port pairs.</td></tr>
+            <tr><td>org.apache.manifoldcf.zookeeper.sessiontimeout</td><td>No</td><td>Specifies the ZooKeeper
+                session timeout, if ZooKeeperLockManager is specified.  Defaults to 2000.</td></tr>
             <tr><td>org.apache.manifoldcf.database.maxhandles</td><td>No</td><td>Specifies the maximum number of database connection handles that will be pooled.  Recommended value is 200.</td></tr>
             <tr><td>org.apache.manifoldcf.database.handletimeout</td><td>No</td><td>Specifies the maximum time a handle is to live before it is presumed dead.  Recommended value is 604800, which is the maximum allowable.</td></tr>
             <tr><td>org.apache.manifoldcf.database.connectiontracking</td><td>No</td><td>True or false.  When "true", will track all allocated database connection handles, and will dump an allocation stack trace when the pool is exhausted.  Useful for diagnosing connection leaks.</td></tr>
@@ -810,6 +895,7 @@
             <tr><td>org.apache.manifoldcf.crawler.expirethreads</td><td>No</td><td>Number of crawler expiration threads created.  Suggest a value of 10.</td></tr>
             <tr><td>org.apache.manifoldcf.crawler.cleanupthreads</td><td>No</td><td>Number of crawler cleanup threads created.  Suggest a value of 10.</td></tr>
             <tr><td>org.apache.manifoldcf.crawler.deletethreads</td><td>No</td><td>Number of crawler delete threads created.  Suggest a value of 10.</td></tr>
+            <tr><td>org.apache.manifoldcf.crawler.historycleanupinterval</td><td>No</td><td>Milliseconds to retain history records.  Default is 0.  Zero means "forever".</td></tr>
             <tr><td>org.apache.manifoldcf.misc</td><td>No</td><td>Miscellaneous debugging output.  Legal values INFO, WARN, or DEBUG.</td></tr>
             <tr><td>org.apache.manifoldcf.db</td><td>No</td><td>Database debugging output.  Legal values INFO, WARN, or DEBUG.</td></tr>
             <tr><td>org.apache.manifoldcf.lock</td><td>No</td><td>Lock management debugging output.  Legal values INFO, WARN, or DEBUG.</td></tr>
@@ -916,7 +1002,47 @@
         <p>In a multi-process setup, the ManifoldCF processes are in any case independent of one another.  You can learn how to programmatically start the agents process by looking at the code
           in the AgentRun command class, as described above.  Similarly, the command classes that register connectors are very small and should be easy to understand.</p>
       </section>
-      
+
+      <section>
+          <title>Integrating ManifoldCF with a search engine</title>
+          <p></p>
+          <p>ManifoldCF's Authority Service is designed to allow maximum flexibility in integrating ManifoldCF security with search engines.  The
+                service receives a user identity (as a set of authorization domain/user name tuples), and produces a set of access tokens.  As a convenience, it also returns a
+                summary of the status of every authority involved in assembling that set of tokens.  A search engine user
+                interface can thus tell the user when the results they are seeing may be incomplete, and why.</p>
+          <p>The Authority Service expects the following arguments, passed as URL arguments and properly URL encoded:</p>
+          <p></p>
+          <table>
+            <caption>Authority Service URL parameters</caption>
+            <tr><th>Authority Service URL parameter</th><th>Meaning</th></tr>
+            <tr><td>username</td><td>the username, if there is only one authorization domain</td></tr>
+            <tr><td>domain</td><td>the optional authorization domain if there is only one authorization domain (defaults to empty string)</td></tr>
+            <tr><td>username_<em>XX</em></td><td>username number <em>XX</em>, where <em>XX</em> is an integer starting at zero</td></tr>
+            <tr><td>domain_<em>XX</em></td><td>authorization domain <em>XX</em>, where <em>XX</em> is an integer starting at zero</td></tr>
+          </table>
+          <p></p>
+          <p>Access tokens and authority statuses are returned in the HTTP response separated by newline characters.  Each line has a prefix
+                as follows:</p>
+          <p></p>
+          <table>
+            <caption>Authority Service response prefixes</caption>
+            <tr><th>Authority Service response prefix</th><th>Meaning</th></tr>
+            <tr><td>TOKEN:</td><td>An access token</td></tr>
+            <tr><td>AUTHORIZED:</td><td>The name of an authority that found the user to be authorized</td></tr>
+            <tr><td>UNREACHABLEAUTHORITY:</td><td>The name of an authority that was found to be unreachable or unusable</td></tr>
+            <tr><td>UNAUTHORIZED:</td><td>The name of an authority that found the user to be unauthorized</td></tr>
+            <tr><td>USERNOTFOUND:</td><td>The name of an authority that could not find the user</td></tr>
+          </table>
+          <p></p>
+          <p>It is important to remember that only the "TOKEN:" lines actually matter for security.  Even when one of the error conditions applies, the set
+                of tokens returned by the Authority Service is still the correct set to use when applying security to the documents being searched.</p>
+          <p>If you choose to deploy a search-engine plugin supplied by the Apache ManifoldCF project (for example, the Solr plugin), you will not need to
+                know any of the above, since part of the plugin's purpose is to communicate with the Authority Service and apply the access tokens that are
+                returned to the search query automatically.  Some plugins, such as the ElasticSearch plugin, are more or less like toolkits, but still hide most
+                of the above from the integrator.  In a more highly customized system, however, you may need to develop your own code which interacts
+                with the Authority Service in order to meet your goals.</p>
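+          <p></p>
+          <p>As an illustrative sketch only, a Java integration might query the service and collect the access tokens along the lines shown below.  The URL
+                used here is an assumption based on a default Quick Start deployment of the authority service web application; adjust the host, port, and
+                servlet path to match your installation.</p>
+          <p></p>
+          <source>
+// Illustrative sketch only; the service URL is an assumption -- adjust it
+// (and add authentication, pooling, error handling, etc.) to suit your deployment.
+import java.io.BufferedReader;
+import java.io.InputStreamReader;
+import java.net.URL;
+import java.net.URLEncoder;
+import java.util.ArrayList;
+import java.util.List;
+
+public class AuthorityServiceClient
+{
+  public static List getAccessTokens(String userName)
+    throws Exception
+  {
+    String url = "http://localhost:8345/mcf-authority-service/UserACLs?username=" +
+      URLEncoder.encode(userName, "UTF-8");
+    List tokens = new ArrayList();
+    BufferedReader reader = new BufferedReader(
+      new InputStreamReader(new URL(url).openStream(), "UTF-8"));
+    try
+    {
+      String line;
+      while ((line = reader.readLine()) != null)
+      {
+        // Only "TOKEN:" lines carry security information; the other prefixes
+        // merely describe the status of the authorities involved.
+        if (line.startsWith("TOKEN:"))
+          tokens.add(line.substring("TOKEN:".length()));
+      }
+    }
+    finally
+    {
+      reader.close();
+    }
+    return tokens;
+  }
+}
+</source>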
+      </section>
+
     </section>
   </body>
 
diff --git a/site/src/documentation/content/xdocs/en_US/included-connectors.xml b/site/src/documentation/content/xdocs/en_US/included-connectors.xml
index 0608c8f..cc79f5d 100644
--- a/site/src/documentation/content/xdocs/en_US/included-connectors.xml
+++ b/site/src/documentation/content/xdocs/en_US/included-connectors.xml
@@ -36,18 +36,22 @@
         <tr><th>System</th><th>Connector Platform</th><th>Server Platform</th><th>Client Version</th><th>Server Version</th></tr>
         <tr><td>Alfresco</td><td>Pure Java</td><td>Various</td><td>Tested using the Alfresco Web Services Client 4.0.b</td><td>Tested with Alfresco 2.x, 3.x and 4.x</td></tr>
         <tr><td>CMIS</td><td>Pure Java</td><td>Various</td><td>CMIS 1.0</td><td>CMIS 1.0</td></tr>
+        <tr><td>Dropbox</td><td>Pure Java</td><td>Various</td><td>1.5.3</td><td>N/A</td></tr>
         <tr><td>File System</td><td>Pure Java</td><td>Win/*NIX</td><td>N/A</td><td>N/A</td></tr>
+        <tr><td>Google Drive</td><td>Pure Java</td><td>Various</td><td>v2-rev64-1.14.1-beta</td><td>N/A</td></tr>
+        <tr><td>HDFS</td><td>Pure Java</td><td>Various</td><td>1.1.2</td><td>1.1.2</td></tr>
         <tr><td>Windows Shares</td><td>Pure Java</td><td> Win, Samba, NetApp, other NAS systems </td><td>N/A</td><td>N/A</td></tr>
         <tr><td>JDBC</td><td> Pure Java </td><td> Various </td><td> Supports JDBC V2, V3, V4; tested with Oracle 10, JTDS 1.2, PostgreSQL 9.1 drivers </td><td> Various </td></tr>
+        <tr><td>Jira</td><td> Pure Java </td><td>Various</td><td>N/A</td><td>5.0-6.1</td></tr>
         <tr><td>RSS</td><td> Pure Java </td><td> N/A </td><td> N/A </td><td>Atom, RSS 2.0, others </td></tr>
         <tr><td>Web</td><td> Pure Java </td><td>N/A</td><td> N/A </td><td>HTML Version 1.0, 1.1, 2.0, Atom, RSS 2.0, others </td></tr>
         <tr><td>Wiki</td><td> Pure Java </td><td>N/A</td><td> N/A </td><td>Wiki version 1.8 and above </td></tr>
         <tr><td>LiveLink (OpenText)</td><td> Pure Java </td><td> Win </td><td> LAPI 9.7.1, 10.2.0 </td><td> Tested with 9.2.0 - 10.2.0 </td></tr>
-        <tr><td>Solr</td><td> Pure Java </td><td> N/A </td><td> N/A</td><td> Tested with Solr 1.4, 3.6.2, 4.0.0, 4.1.0 </td></tr>
+        <tr><td>Solr</td><td> Pure Java </td><td> N/A </td><td> N/A</td><td> Tested with Solr 1.4, 3.6.2, 4.0.0, 4.1.0, 4.2.0, 4.3.0 </td></tr>
         <tr><td>OpenSearchServer</td><td> Pure Java </td><td> N/A </td><td> N/A</td><td> Tested with OpenSearchServer 1.2.1, 1.2.2, 1.2.3 </td></tr>
         <tr><td>ElasticSearch</td><td> Pure Java </td><td> N/A </td><td> N/A</td><td> Tested with ElasticSearch 0.18.3, 0.18.4, 0.18.5, 0.18.6, 0.18.7 </td></tr>
         <tr><td>Documentum (EMC)</td><td> Win, RedHat </td><td> Win, RedHat </td><td> Tested with DFC 5.3 SP5 </td><td> Tested against 5.3, 6.0, and 6.5 servers </td></tr>
-        <tr><td>SharePoint (MSFT)</td><td>Pure Java </td><td>Win</td><td> N/A </td><td> Tested with SharePoint 2003 (2.0), 2007 (3.0), 2010 (4.0) </td></tr>
+        <tr><td>SharePoint (MSFT)</td><td>Pure Java </td><td>Win</td><td> N/A </td><td> Tested with SharePoint 2003 (2.0), 2007 (3.0), 2010 (4.0) (without Claim Space Auth enabled) </td></tr>
         <tr><td>Meridio (Autonomy)</td><td> Pure Java </td><td> Win </td><td> N/A </td><td> Tested with Meridio 4.1, 5.0 </td></tr>
         <tr><td>FileNet (IBM)</td><td>Pure Java</td><td>Win, RedHat</td><td>Tested with P8 V4.1, V4.5</td><td>Tested with P8 V4.1, V4.5</td></tr>
       </table>
diff --git a/site/src/documentation/content/xdocs/en_US/javadoc.xml b/site/src/documentation/content/xdocs/en_US/javadoc.xml
index e75e074..972e0d4 100644
--- a/site/src/documentation/content/xdocs/en_US/javadoc.xml
+++ b/site/src/documentation/content/xdocs/en_US/javadoc.xml
@@ -36,10 +36,14 @@
       <p><a href="../api/alfresco/index.html">Alfresco connector</a></p>
       <p><a href="../api/cmis/index.html">CMIS authority and connector</a></p>
       <p><a href="../api/documentum/index.html">Documentum authority, connector, and support processes</a></p>
+      <p><a href="../api/dropbox/index.html">Dropbox connector</a></p>
       <p><a href="../api/filenet/index.html">FileNet connector and support processes</a></p>
-      <p><a href="../api/filesystem/index.html">File system connector</a></p>
+      <p><a href="../api/filesystem/index.html">File system repository and output connector</a></p>
+      <p><a href="../api/googledrive/index.html">GoogleDrive connector</a></p>
       <p><a href="../api/gts/index.html">qBase GTS output connector</a></p>
+      <p><a href="../api/hdfs/index.html">HDFS repository and output connector</a></p>
       <p><a href="../api/jcifs/index.html">CIFS connector</a></p>
+      <p><a href="../api/jira/index.html">JIRA connector and authority</a></p>
       <p><a href="../api/jdbc/index.html">JDBC connector</a></p>
       <p><a href="../api/livelink/index.html">LiveLink authority and connector</a></p>
       <p><a href="../api/meridio/index.html">Meridio authority and connector</a></p>
@@ -47,6 +51,7 @@
       <p><a href="../api/elasticsearch/index.html">ElasticSearch output connector</a></p>
       <p><a href="../api/nullauthority/index.html">Null authority</a></p>
       <p><a href="../api/nulloutput/index.html">Null output connector</a></p>
+      <p><a href="../api/regexpmapper/index.html">Regular expression mapping connector</a></p>
       <p><a href="../api/rss/index.html">RSS connector</a></p>
       <p><a href="../api/sharepoint/index.html">SharePoint connector</a></p>
       <p><a href="../api/solr/index.html">Solr output connector</a></p>
diff --git a/site/src/documentation/content/xdocs/en_US/performance-tuning.xml b/site/src/documentation/content/xdocs/en_US/performance-tuning.xml
index 63e0a52..2f9f0bc 100644
--- a/site/src/documentation/content/xdocs/en_US/performance-tuning.xml
+++ b/site/src/documentation/content/xdocs/en_US/performance-tuning.xml
@@ -153,7 +153,7 @@
           <tr><td>tcpip_socket</td><td>true</td></tr>
           <tr><td>max_connections</td><td>200</td></tr>
           <tr><td>checkpoint_timeout</td><td>900</td></tr>
-          <tr><td>datastyle</td><td>ISO,European</td></tr>
+          <tr><td>datestyle</td><td>ISO,European</td></tr>
           <tr><td>autovacuum</td><td>off</td></tr>
         </table>
        <p>There are some interesting conclusions, for example the use of solid state drives in the laptop.  Even though addressable memory was reduced to 4 GB, the system processed twice as many documents as the desktop did with slower disks.  The other interesting fact is that the server had lower-performing disks, but 4 times as many processors, and it was twice as fast as the laptop.</p>
diff --git a/site/src/documentation/content/xdocs/en_US/programmatic-operation.xml b/site/src/documentation/content/xdocs/en_US/programmatic-operation.xml
index 87fb638..a656b0b 100644
--- a/site/src/documentation/content/xdocs/en_US/programmatic-operation.xml
+++ b/site/src/documentation/content/xdocs/en_US/programmatic-operation.xml
@@ -79,9 +79,15 @@
           <p></p>
           <table>
             <tr><th>Resource</th><th>Verb</th><th>What it does</th><th>Input format/query args</th><th>Output format</th></tr>
+            <tr><td>authorizationdomains</td><td>GET</td><td>List all registered authorization domains</td><td>N/A</td><td>{"authorizationdomain":[<em>&lt;list_of_authorization_domain_objects&gt;</em>]} <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
             <tr><td>outputconnectors</td><td>GET</td><td>List all registered output connectors</td><td>N/A</td><td>{"outputconnector":[<em>&lt;list_of_output_connector_objects&gt;</em>]} <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
+            <tr><td>mappingconnectors</td><td>GET</td><td>List all registered mapping connectors</td><td>N/A</td><td>{"mappingconnector":[<em>&lt;list_of_mapping_connector_objects&gt;</em>]} <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
             <tr><td>authorityconnectors</td><td>GET</td><td>List all registered authority connectors</td><td>N/A</td><td>{"authorityconnector":[<em>&lt;list_of_authority_connector_objects&gt;</em>]} <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
             <tr><td>repositoryconnectors</td><td>GET</td><td>List all registered repository connectors</td><td>N/A</td><td>{"repositoryconnector":[<em>&lt;list_of_repository_connector_objects&gt;</em>]} <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
+            <tr><td>authoritygroups</td><td>GET</td><td>List all authority groups</td><td>N/A</td><td>{"authoritygroup":[<em>&lt;list_of_authority_group_objects&gt;</em>]} <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
+            <tr><td>authoritygroups/<em>&lt;encoded_group_name&gt;</em></td><td>GET</td><td>Get a specific authority group</td><td>N/A</td><td>{"authoritygroup":<em>&lt;authority_group_object&gt;</em>} <strong>OR</strong> { } <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
+            <tr><td>authoritygroups/<em>&lt;encoded_group_name&gt;</em></td><td>PUT</td><td>Save or create an authority group</td><td>{"authoritygroup":<em>&lt;authority_group_object&gt;</em>}</td><td>{ } <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
+            <tr><td>authoritygroups/<em>&lt;encoded_group_name&gt;</em></td><td>DELETE</td><td>Delete an authority group</td><td>N/A</td><td>{ } <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
             <tr><td>outputconnections</td><td>GET</td><td>List all output connections</td><td>N/A</td><td>{"outputconnection":[<em>&lt;list_of_output_connection_objects&gt;</em>]} <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
             <tr><td>outputconnections/<em>&lt;encoded_connection_name&gt;</em></td><td>GET</td><td>Get a specific output connection</td><td>N/A</td><td>{"outputconnection":<em>&lt;output_connection_object&gt;</em>} <strong>OR</strong> { } <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
             <tr><td>outputconnections/<em>&lt;encoded_connection_name&gt;</em></td><td>PUT</td><td>Save or create an output connection</td><td>{"outputconnection":<em>&lt;output_connection_object&gt;</em>}</td><td>{ } <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
@@ -89,6 +95,11 @@
             <tr><td>status/outputconnections/<em>&lt;encoded_connection_name&gt;</em></td><td>GET</td><td>Check the status of an output connection</td><td>N/A</td><td>{"check_result":<em>&lt;message&gt;</em>} <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
             <tr><td>info/outputconnections/<em>&lt;encoded_connection_name&gt;</em>/<em>&lt;connector_specific_resource&gt;</em></td><td>GET</td><td>Retrieve arbitrary connector-specific resource</td><td>N/A</td><td><em>&lt;response_data&gt;</em> <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>} <strong>OR</strong> {"service_interruption":<em>&lt;error_text&gt;</em>}</td></tr>
             <tr><td>reset/outputconnections/<em>&lt;encoded_connection_name&gt;</em></td><td>PUT</td><td>Forget previous indexing state</td><td>N/A</td><td>{ } <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
+            <tr><td>mappingconnections</td><td>GET</td><td>List all mapping connections</td><td>N/A</td><td>{"mappingconnection":[<em>&lt;list_of_mapping_connection_objects&gt;</em>]} <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
+            <tr><td>mappingconnections/<em>&lt;encoded_connection_name&gt;</em></td><td>GET</td><td>Get a specific mapping connection</td><td>N/A</td><td>{"mappingconnection":<em>&lt;mapping_connection_object&gt;</em>} <strong>OR</strong> { } <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
+            <tr><td>mappingconnections/<em>&lt;encoded_connection_name&gt;</em></td><td>PUT</td><td>Save or create a mapping connection</td><td>{"mappingconnection":<em>&lt;mapping_connection_object&gt;</em>}</td><td>{ } <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
+            <tr><td>mappingconnections/<em>&lt;encoded_connection_name&gt;</em></td><td>DELETE</td><td>Delete a mapping connection</td><td>N/A</td><td>{ } <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
+            <tr><td>status/mappingconnections/<em>&lt;encoded_connection_name&gt;</em></td><td>GET</td><td>Check the status of a mapping connection</td><td>N/A</td><td>{"check_result":<em>&lt;message&gt;</em>} <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
             <tr><td>authorityconnections</td><td>GET</td><td>List all authority connections</td><td>N/A</td><td>{"authorityconnection":[<em>&lt;list_of_authority_connection_objects&gt;</em>]} <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
             <tr><td>authorityconnections/<em>&lt;encoded_connection_name&gt;</em></td><td>GET</td><td>Get a specific authority connection</td><td>N/A</td><td>{"authorityconnection":<em>&lt;authority_connection_object&gt;</em>} <strong>OR</strong> { } <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
             <tr><td>authorityconnections/<em>&lt;encoded_connection_name&gt;</em></td><td>PUT</td><td>Save or create an authority connection</td><td>{"authorityconnection":<em>&lt;authority_connection_object&gt;</em>}</td><td>{ } <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
@@ -105,8 +116,9 @@
             <tr><td>jobs/<em>&lt;job_id&gt;</em></td><td>GET</td><td>Get a specific job definition</td><td>N/A</td><td>{"job":<em>&lt;job_object_&gt;</em>} <strong>OR</strong> { } <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
             <tr><td>jobs/<em>&lt;job_id&gt;</em></td><td>PUT</td><td>Save a job definition</td><td>{"job":<em>&lt;job_object&gt;</em>}</td><td>{ } <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
             <tr><td>jobs/<em>&lt;job_id&gt;</em></td><td>DELETE</td><td>Delete a job definition</td><td>N/A</td><td>{ } <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
-            <tr><td>jobstatuses</td><td>GET</td><td>List all jobs and their status</td><td>N/A</td><td>{"jobstatus":[<em>&lt;list_of_job_status_objects&gt;</em>]} <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
-            <tr><td>jobstatuses/<em>&lt;job_id&gt;</em></td><td>GET</td><td>Get a specific job's status</td><td>N/A</td><td>{"jobstatus":<em>&lt;job_status_object&gt;</em>} <strong>OR</strong> { } <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>} </td></tr>
+            <tr><td>jobstatuses</td><td>GET</td><td>List all jobs and their status</td><td>maxcount=&lt;maximum_documents_to_count&gt;</td><td>{"jobstatus":[<em>&lt;list_of_job_status_objects&gt;</em>]} <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
+            <tr><td>jobstatuses/<em>&lt;job_id&gt;</em></td><td>GET</td><td>Get a specific job's status</td><td>maxcount=&lt;maximum_documents_to_count&gt;</td><td>{"jobstatus":<em>&lt;job_status_object&gt;</em>} <strong>OR</strong> { } <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>} </td></tr>
+            <tr><td>jobstatusesnocounts</td><td>GET</td><td>List all jobs and their status, returning '0' for all counts</td><td>N/A</td><td>{"jobstatus":[<em>&lt;list_of_job_status_objects&gt;</em>]} <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
             <tr><td>jobstatusesnocounts/<em>&lt;job_id&gt;</em></td><td>GET</td><td>Get a specific job's status, returning '0' for all counts</td><td>N/A</td><td>{"jobstatus":<em>&lt;job_status_object&gt;</em>} <strong>OR</strong> { } <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>} </td></tr>
             <tr><td>start/<em>&lt;job_id&gt;</em></td><td>PUT</td><td>Start a specified job manually</td><td>N/A</td><td>{ } <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
             <tr><td>startminimal/<em>&lt;job_id&gt;</em></td><td>PUT</td><td>Start a specified job manually, minimal run requested</td><td>N/A</td><td>{ } <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
@@ -188,6 +200,18 @@
           </table>
         </section>
         <section>
+          <title>Authorization domain objects</title>
+          <p></p>
+          <p>The JSON fields an authorization domain object has are as follows:</p>
+          <p></p>
+          <table>
+            <tr><th>Field</th><th>Meaning</th></tr>
+            <tr><td>"description"</td><td>The optional description of the authorization domain</td></tr>
+            <tr><td>"domain_name"</td><td>The internal name of the authorization domain, i.e. what is sent to the Authority Service</td></tr>
+          </table>
+          <p></p>
+        </section>
+        <section>
           <title>Output connector objects</title>
           <p></p>
           <p>The JSON fields an output connector object has are as follows:</p>
@@ -200,6 +224,18 @@
           <p></p>
         </section>
         <section>
+          <title>Mapping connector objects</title>
+          <p></p>
+          <p>The JSON fields a mapping connector object has are as follows:</p>
+          <p></p>
+          <table>
+            <tr><th>Field</th><th>Meaning</th></tr>
+            <tr><td>"description"</td><td>The optional description of the connector</td></tr>
+            <tr><td>"class_name"</td><td>The class name of the class implementing the connector</td></tr>
+          </table>
+          <p></p>
+        </section>
+        <section>
           <title>Authority connector objects</title>
           <p></p>
           <p>The JSON fields an authority connector object has are as follows:</p>
@@ -224,6 +260,26 @@
           <p></p>
         </section>
         <section>
+          <title>Authority group objects</title>
+          <p></p>
+          <p>Authority group names, when they are part of a URL, should be encoded as follows:</p>
+          <p></p>
+          <ol>
+            <li>All instances of '.' should be replaced by '..'.</li>
+            <li>All instances of '/' should be replaced by '.+'.</li>
+            <li>The URL should be encoded using standard URL utf-8-based %-encoding.</li>
+          </ol>
+          <p></p>
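+          <p>As an illustrative sketch only (the class and method names below are hypothetical, not part of the ManifoldCF API), the encoding rules above
+                might be applied in Java as follows:</p>
+          <p></p>
+          <source>
+import java.net.URLEncoder;
+
+public class NameEncoding
+{
+  // Hypothetical helper illustrating the documented name-encoding rules.
+  public static String encodeName(String name) throws Exception
+  {
+    // Step 1: '.' becomes '..'; step 2: '/' becomes '.+'.
+    String escaped = name.replace(".", "..").replace("/", ".+");
+    // Step 3: standard UTF-8-based URL %-encoding.
+    return URLEncoder.encode(escaped, "UTF-8");
+  }
+
+  public static void main(String[] args) throws Exception
+  {
+    // Prints "My..Group.%2BEast" for the group name "My.Group/East".
+    System.out.println(encodeName("My.Group/East"));
+  }
+}
+</source>
+          <p></p>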
+          <p>The JSON fields an authority group object has are as follows:</p>
+          <p></p>
+          <table>
+            <tr><th>Field</th><th>Meaning</th></tr>
+            <tr><td>"name"</td><td>The unique name of the group</td></tr>
+            <tr><td>"description"</td><td>The description of the group</td></tr>
+          </table>
+          <p></p>
+        </section>
+        <section>
           <title>Output connection objects</title>
           <p></p>
           <p>Output connection names, when they are part of a URL, should be encoded as follows:</p>
@@ -247,6 +303,30 @@
           <p></p>
         </section>
         <section>
+          <title>Mapping connection objects</title>
+          <p></p>
+          <p>Mapping connection names, when they are part of a URL, should be encoded as follows:</p>
+          <p></p>
+          <ol>
+            <li>All instances of '.' should be replaced by '..'.</li>
+            <li>All instances of '/' should be replaced by '.+'.</li>
+            <li>The URL should be encoded using standard URL utf-8-based %-encoding.</li>
+          </ol>
+          <p></p>
+          <p>The JSON fields for a mapping connection object are as follows:</p>
+          <p></p>
+          <table>
+            <tr><th>Field</th><th>Meaning</th></tr>
+            <tr><td>"name"</td><td>The unique name of the connection</td></tr>
+            <tr><td>"description"</td><td>The description of the connection</td></tr>
+            <tr><td>"class_name"</td><td>The java class name of the class implementing the connection</td></tr>
+            <tr><td>"max_connections"</td><td>The total number of outstanding connections allowed to exist at a time</td></tr>
+            <tr><td>"configuration"</td><td>The configuration object for the connection, which is specific to the connection class</td></tr>
+            <tr><td>"prerequisite"</td><td>The mapping connection prerequisite, if any</td></tr>
+          </table>
+          <p></p>
+        </section>
+        <section>
           <title>Authority connection objects</title>
           <p></p>
           <p>Authority connection names, when they are part of a URL, should be encoded as follows:</p>
@@ -266,6 +346,9 @@
             <tr><td>"class_name"</td><td>The java class name of the class implementing the connection</td></tr>
             <tr><td>"max_connections"</td><td>The total number of outstanding connections allowed to exist at a time</td></tr>
             <tr><td>"configuration"</td><td>The configuration object for the connection, which is specific to the connection class</td></tr>
+            <tr><td>"prerequisite"</td><td>The mapping connection prerequisite, if any</td></tr>
+            <tr><td>"authdomain"</td><td>The authorization domain for the authority connection, if any</td></tr>
+            <tr><td>"authgroup"</td><td>The required authority group for the authority connection</td></tr>
           </table>
           <p></p>
         </section>
@@ -289,7 +372,7 @@
             <tr><td>"class_name"</td><td>The java class name of the class implementing the connection</td></tr>
             <tr><td>"max_connections"</td><td>The total number of outstanding connections allowed to exist at a time</td></tr>
             <tr><td>"configuration"</td><td>The configuration object for the connection, which is specific to the connection class</td></tr>
-            <tr><td>"acl_authority"</td><td>The (optional) name of the authority that will enforce security for this connection</td></tr>
+            <tr><td>"acl_authority"</td><td>The (optional) name of the authority group that will enforce security for this connection</td></tr>
             <tr><td>"throttle"</td><td>An array of throttle objects, which control how quickly documents can be requested from this connection</td></tr>
           </table>
           <p></p>
@@ -408,6 +491,8 @@
           <tr><td>org.apache.manifoldcf.authorities.CheckAll</td><td>Check all authorities to be sure they are functioning</td></tr>
           <tr><td>org.apache.manifoldcf.authorities.DefineAuthorityConnection</td><td>Create a new authority connection</td></tr>
           <tr><td>org.apache.manifoldcf.authorities.DeleteAuthorityConnection</td><td>Delete an existing authority connection</td></tr>
+          <tr><td>org.apache.manifoldcf.authorities.DefineMappingConnection</td><td>Create a new mapping connection</td></tr>
+          <tr><td>org.apache.manifoldcf.authorities.DeleteMappingConnection</td><td>Delete an existing mapping connection</td></tr>
           <tr><td>org.apache.manifoldcf.crawler.AbortJob</td><td>Abort a running job</td></tr>
           <tr><td>org.apache.manifoldcf.crawler.AddScheduledTime</td><td>Add a schedule record to a job</td></tr>
           <tr><td>org.apache.manifoldcf.crawler.ChangeJobDocSpec</td><td>Modify a job's specification information</td></tr>
diff --git a/site/src/documentation/content/xdocs/en_US/technical-resources.xml b/site/src/documentation/content/xdocs/en_US/technical-resources.xml
index 8c23090..ee985ba 100644
--- a/site/src/documentation/content/xdocs/en_US/technical-resources.xml
+++ b/site/src/documentation/content/xdocs/en_US/technical-resources.xml
@@ -49,6 +49,7 @@
 	</p> 
 	<ul>
 	    <li><a href="writing-output-connectors.html">How to write an output connector</a></li>
+	    <li><a href="writing-mapping-connectors.html">How to write a user mapping connector</a></li>
 	    <li><a href="writing-authority-connectors.html">How to write an authority connector</a></li>
 	    <li><a href="writing-repository-connectors.html">How to write a repository connector</a></li>
 	</ul>
diff --git a/site/src/documentation/content/xdocs/en_US/writing-mapping-connectors.xml b/site/src/documentation/content/xdocs/en_US/writing-mapping-connectors.xml
new file mode 100644
index 0000000..4b60153
--- /dev/null
+++ b/site/src/documentation/content/xdocs/en_US/writing-mapping-connectors.xml
@@ -0,0 +1,144 @@
+<?xml version="1.0"?>
+
+<!DOCTYPE document PUBLIC "-//APACHE//DTD Documentation V2.0//EN" 
+          "http://forrest.apache.org/dtd/document-v20.dtd">
+
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<document> 
+
+  <header> 
+    <title>Writing user mapping connectors</title> 
+  </header> 
+
+  <body> 
+    <section>
+      <title>Writing a User Mapping Connector</title>
+      <p></p>
+      <p>A user mapping connector allows a user name to be transformed in a manner that depends on the functionality of the connector.  In some cases, no connection to
+            an external repository is required (for example, for simple string transformations), while in others such a connector might consult (say) an
+            LDAP system to look up a specific name.</p>
+      <p></p>
+      <p>A user name is just a string, which is designed to represent a user identity.  Some user names have specific forms - for instance, Active Directory user names are
+            often represented in the form <code>user@domain</code>.  But, most importantly, the exact name used can often depend on the particular system being addressed.</p>
+      <p></p>
+      <p>As is the case with all connectors under the ManifoldCF umbrella, a user mapping connector consists of a single part:</p>
+      <p></p>
+      <ul>
+        <li>A class implementing an interface (in this case, <em>org.apache.manifoldcf.authorities.interfaces.IMappingConnector</em>)</li>
+      </ul>
+      <p></p>
+      <section>
+        <title>Key concepts</title>
+        <p></p>
+        <p>The mapping connector abstraction makes use of, or introduces, the following concepts:</p>
+        <p></p>
+        <table>
+          <tr><th>Concept</th><th>What it is</th></tr>
+          <tr><td>Configuration parameters</td><td>A hierarchical structure, internally represented as an XML document, which describes a specific configuration of a specific mapping connector, i.e. <strong>how</strong> the connector should do its job; see <em>org.apache.manifoldcf.core.interfaces.ConfigParams</em></td></tr>
+          <tr><td>Mapping connection</td><td>A mapping connector instance that has been furnished with configuration data</td></tr>
+          <tr><td>User name</td><td>The name of a user, which is often a Kerberos principal name, e.g. <em>john@apache.org</em></td></tr>
+          <tr><td>Connection management/threading/pooling model</td><td>How an individual mapping connector class instance is managed and used</td></tr>
+        </table>
+        <p></p>
+      </section>
+      <section>
+        <title>Implementing the Mapping Connector class</title>
+        <p></p>
+        <p>A very good place to start is to read the javadoc for the mapping connector interface.  You will note that the javadoc describes the usage and pooling model for a
+              connector class in some detail.  It is very important to understand that model in order to write reliable connectors!  Static variables, for one thing,
+              must be used very carefully, to avoid issues that would be hard to detect with a cursory test.</p>
+        <p></p>
+        <p>The second thing to do is to examine the provided mapping connector implementation.  The only connector presently included (the Regular Expression
+              user mapping connector) demonstrates the sorts of techniques you will need for an effective
+              implementation.  You will also note that it extends a framework-provided mapping connector base class, found at
+              <em>org.apache.manifoldcf.authorities.mappers.BaseMappingConnector</em>.  This base class furnishes some basic bookkeeping logic for managing the
+              connector pool, as well as default implementations of some of the less typical functionality a connector may have.  For example, connectors are allowed to have
+              database tables of their own, which are instantiated when the connector is registered, and are torn down when the connector is removed.  This is, however, not
+              very typical, and the base implementation reflects that.</p>
+        <p></p>
+        <section>
+          <title>Principal methods</title>
+          <p></p>
+          <p>The principal methods an implementer should be concerned with when creating a mapping connector are the following:</p>
+          <p></p>
+          <table>
+            <tr><th>Method</th><th>What it should do</th></tr>
+            <tr><td><strong>mapUser()</strong></td><td>Given an input user name, find the corresponding output user name</td></tr>
+            <tr><td><strong>outputConfigurationHeader()</strong></td><td>Output the head-section part of a mapping connection <em>ConfigParams</em> editing page</td></tr>
+            <tr><td><strong>outputConfigurationBody()</strong></td><td>Output the body-section part of a mapping connection <em>ConfigParams</em> editing page</td></tr>
+            <tr><td><strong>processConfigurationPost()</strong></td><td>Receive and process form data from a mapping connection <em>ConfigParams</em> editing page</td></tr>
+            <tr><td><strong>viewConfiguration()</strong></td><td>Output the viewing HTML for a mapping connection <em>ConfigParams</em> object</td></tr>
+          </table>
+          <p></p>
+          <p>These methods come in two broad classes: (a) functional methods for doing the work of the connector; (b) UI methods for configuring a connection.  Together they
+                do the heavy lifting of your connector.</p>
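+          <p></p>
+          <p>As a heavily simplified and purely illustrative sketch (the exact method signature and the exceptions thrown are assumptions here; consult
+                the <em>IMappingConnector</em> javadoc and the Regular Expression mapping connector source for the real contract), a trivial mapping
+                connector that rewrites every incoming user name onto a fixed domain might look roughly like this:</p>
+          <p></p>
+          <source>
+// Illustrative sketch only; reading the domain from the connection's
+// configuration, connection checking, etc. are all omitted here.
+import org.apache.manifoldcf.core.interfaces.ManifoldCFException;
+import org.apache.manifoldcf.authorities.mappers.BaseMappingConnector;
+
+public class FixedDomainMappingConnector extends BaseMappingConnector
+{
+  protected final static String FIXED_DOMAIN = "example.com";
+
+  public String mapUser(String userName)
+    throws ManifoldCFException
+  {
+    // Strip any existing domain qualifier from the incoming name...
+    int atIndex = userName.indexOf("@");
+    String bareName = (atIndex == -1) ? userName : userName.substring(0, atIndex);
+    // ...and re-qualify it with the fixed domain.
+    return bareName + "@" + FIXED_DOMAIN;
+  }
+}
+</source>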
+          <p></p>
+          <p></p>
+        </section>
+        <section>
+          <title>Notes on connector UI methods</title>
+          <p></p>
+          <p>The crawler UI uses a tabbed layout structure, and thus each of these elements must properly implement the tabbed model.  This means that the "header" methods
+                above must add the desired tab names to a specified array, and the "body" methods must provide appropriate HTML which handles both the case where a tab is
+                displayed and the case where it is not.  Also, it makes sense to use the appropriate CSS definitions, so that the connector UI pages have a look-and-feel similar
+                to the rest of ManifoldCF's crawler UI.  We strongly suggest starting with the UI code of the supplied mapping connector, both for a description of the arguments
+                to each page, and for some decent ideas of ways to organize your connector's UI code.</p>
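+          <p></p>
+          <p>Purely as an illustrative sketch (the parameter lists shown below are assumptions; check the <em>IMappingConnector</em> javadoc and the supplied
+                connector for the real signatures and for better ways of emitting the HTML), the tabbed model boils down to something like this:</p>
+          <p></p>
+          <source>
+// Illustrative sketch only -- signatures and rendering approach are assumptions.
+import java.io.IOException;
+import java.util.List;
+import java.util.Locale;
+import org.apache.manifoldcf.core.interfaces.ConfigParams;
+import org.apache.manifoldcf.core.interfaces.IHTTPOutput;
+import org.apache.manifoldcf.core.interfaces.IThreadContext;
+import org.apache.manifoldcf.core.interfaces.ManifoldCFException;
+import org.apache.manifoldcf.authorities.mappers.BaseMappingConnector;
+
+public class TabbedUISketch extends BaseMappingConnector
+{
+  private final static String TAB_NAME = "Name Mapping";
+
+  public void outputConfigurationHeader(IThreadContext threadContext,
+    IHTTPOutput out, Locale locale, ConfigParams parameters, List tabsArray)
+    throws ManifoldCFException, IOException
+  {
+    // "Header" methods register the tab names this connector contributes.
+    tabsArray.add(TAB_NAME);
+  }
+
+  public void outputConfigurationBody(IThreadContext threadContext,
+    IHTTPOutput out, Locale locale, ConfigParams parameters, String tabName)
+    throws ManifoldCFException, IOException
+  {
+    if (TAB_NAME.equals(tabName))
+    {
+      // Emit the visible form fields for this tab.
+      out.print("... visible HTML for the tab goes here ...\n");
+    }
+    else
+    {
+      // Emit hidden fields so posted values survive when another tab is shown.
+      out.print("... hidden form fields go here ...\n");
+    }
+  }
+}
+</source>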
+          <p></p>
+        </section>
+      </section>
+      <section>
+        <title>Implementation support provided by the framework</title>
+        <p></p>
+        <p>ManifoldCF's framework provides a number of helpful services designed to make the creation of a connector easier.  These services are summarized below.
+              (This is not an exhaustive list, by any means.)</p>
+        <p></p>
+        <ul>
+          <li>Lock management and synchronization (see <em>org.apache.manifoldcf.core.interfaces.LockManagerFactory</em>)</li>
+          <li>Cache management (see <em>org.apache.manifoldcf.core.interfaces.CacheManagerFactory</em>)</li>
+          <li>Local keystore management (see <em>org.apache.manifoldcf.core.interfaces.KeystoreManagerFactory</em>)</li>
+          <li>Database management (see <em>org.apache.manifoldcf.core.interfaces.DBInterfaceFactory</em>)</li>
+        </ul>
+        <p></p>
+        <p>For UI method support, these too are very useful:</p>
+        <p></p>
+        <ul>
+          <li>Multipart form processing (see <em>org.apache.manifoldcf.ui.multipart.MultipartWrapper</em>)</li>
+          <li>HTML encoding (see <em>org.apache.manifoldcf.ui.util.Encoder</em>)</li>
+          <li>HTML formatting (see <em>org.apache.manifoldcf.ui.util.Formatter</em>)</li>
+        </ul>
+        <p></p>
+      </section>
+      <section>
+        <title>DO's and DON'T DO's</title>
+        <p></p>
+        <p>It's always a good idea to make use of an existing infrastructure component, if it's meant for that purpose, rather than inventing your own.  There are, however,
+              some limitations we recommend you adhere to.</p>
+        <p></p>
+        <ul>
+          <li>DO make use of infrastructure components described in the section above</li>
+          <li>DON'T make use of infrastructure components that aren't mentioned, without checking first</li>
+          <li>NEVER write connector code that directly uses framework database tables, other than the ones installed and managed by your connector</li>
+        </ul>
+        <p></p>
+        <p>If you are tempted to violate these rules, it may well mean you don't understand something important.  At the very least, we'd like to know why.  Send email
+              to dev@manifoldcf.apache.org with a description of your problem and how you are tempted to solve it.</p>
+      </section>
+    </section>
+  </body>
+</document>
\ No newline at end of file
diff --git a/site/src/documentation/content/xdocs/ja_JP/end-user-documentation.xml b/site/src/documentation/content/xdocs/ja_JP/end-user-documentation.xml
index 18a8cb6..30870fe 100644
--- a/site/src/documentation/content/xdocs/ja_JP/end-user-documentation.xml
+++ b/site/src/documentation/content/xdocs/ja_JP/end-user-documentation.xml
@@ -356,6 +356,31 @@
                 <br/><br/>
                 <p>新しいマッピングを追加する場合は、項目「ソース」にメタデータ名、「ターゲット」にSolrの出力項目名を入力して「追加」ボタンを押下してください。Solrに送信しない項目の場合は、「ターゲット」を空に設定してください。</p>
             </section>
+
+            <section id="filesystemoutputconnector">
+                <title>ファイルシステム出力コネクション</title>
+                <p>ファイルシステム出力コネクションは、Unixユーティリティの<em>wget</em>のようにローカルファイルシステムに文書を保管することができます。このコネクションタイプによって格納されたドキュメントは、メタデータまたはセキュリティ情報を含んでいませんが、バイナリ·ファイルのみから構成されています。</p>
+                <p>ファイルシステム出力コネクションタイプの接続構成情報には追加のタブを含みません。しかしながら、付加的なJobタブがあり、「出力パス」と呼びます。タブはこのように見えます。</p>
+                <br/><br/>
+                <figure src="images/en_US/filesystem-job-output-path.PNG" alt="File System Specification, Output Path tab" width="80%"/>
+                <br/><br/>
+                <p>ドキュメントを出力したいパスを入力して、「保存」をクリックしてください。</p>
+            </section>
+
+            <section id="hdfsoutputconnector">
+                <title>HDFS出力コネクション</title>
+                <p>HDFS出力コネクションは、Unixユーティリティの<em>wget</em>のようにHDFS(Hadoop Distributed File System)に文書を保管することができます。このコネクションタイプによって格納されたドキュメントは、メタデータまたはセキュリティ情報を含んでいませんが、バイナリ·ファイルのみから構成されています。</p>
+                <p>HDFS出力コネクションタイプのための接続構成情報は「サーバー」タブという追加のタブを1つ含んでいます。このタブはこのように見えます。</p>
+                <br/><br/>
+                <figure src="images/en_US/hdfs-configure-server.PNG" alt="HDFS Output Configuration, Server tab" width="80%"/>
+                <br/><br/>
+                <p>HDFSネームノードのURIおよびHDFSユーザー名に書き入れてください。両方とも必要となります。</p>
+                <p>HDFS出力接続タイプについては、「出力パス」と呼ばれる付加的なJobタブがあります。このタブはこのように見えます。</p>
+                <br/><br/>
+                <figure src="images/en_US/hdfs-job-output-path.PNG" alt="HDFS Output Specification, Output Path tab" width="80%"/>
+                <br/><br/>
+                <p>ドキュメントを出力したいパスを入力して、「保存」をクリックしてください。</p>
+            </section>
             
             <section id="osssoutputconnector">
             	<title>OpenSearchServer出力コネクション</title>
@@ -1081,6 +1106,43 @@
                 <p>1つ以上のルールが存在する場合は、上から実行され、上のルールの結果は下のルールで変更されます。</p>
             </section>
             
+            <section id="dropboxrepository">
+              <title>Dropboxリポジトリコネクション</title>
+              <p>Dropboxリポジトリコネクションは、<a href="https://www.dropbox.com/home">Dropbox</a>の内容をインデクシングすることができます。</p>
+              <p>それぞれのDropboxコネクションは、ひとつのDropboxリポジトリへのアクセスを管理します。これは、たとえば異なるユーザを使って、複数のDropboxを持っている場合、それぞれのDropboxリポジトリに対してひとつずつコネクションを作り、その関連した権限情報を用意する必要があることを意味します。</p>
+              <br/>
+              <p>ひとつのDropboxコネクションは、次のような設定パラメータを、リポジトリコネクションの編集画面に持っています。</p>
+              <br/><br/>
+              <figure src="images/ja_JP/dropbox-repository-connection-configuration.PNG" alt="Dropbox Repository Connection, configuration parameters" width="80%"/>
+              <br/><br/>
+              <p>コネクションに接続するためには、4つの情報が必要です。Application KeyとApplication Secretは、開発ライセンスであなたのアプリケーションを登録した時に、Dropboxから提供されます。これは基本的には、アプリケーション開発者用の<a href="https://www.dropbox.com/developers/apps">Dropbox website</a>を通して行われます。</p>
+              <br/><br/>
+              <figure src="images/ja_JP/dropbox-repository-create-application.PNG" alt="Dropbox create application" width="80%"/>
+              <br/><br/>
+              <p>今回の用途としては、DropboxとコミュニケートするためにRESTサービスを使いますので、アプリケーションタイプとして"Core"を選択する必要があります。また、"full access"を選択します。これには少々議論があります。基本的に、情報を格納したり取得したりするアプリケーションでは、アプリケーションの固有フォルダからフルアクセスします。今回のケースでは、ユーザーがユーザーのファイルをそのままアクセスし、manifoldcfの固有フォルダにコピーしないことを想定しています。結果的には、"App folder"の代わりに"full access"を選択しています。</p>
+              <br/><br/>
+              <figure src="images/ja_JP/dropbox-repository-application-secret-passwords.PNG" alt="Dropbox get key and secret passwords" width="80%"/>
+              <br/><br/>
+              <p>その後、このコネクタで求められる2つの情報、App keyとApp secretを見ることができます。</p>
+              <p>ここで、それぞれのユーザは、ユーザのアプリケーションがDropboxにアクセスできるよう受諾されることを確認しなければなりません。これは普通のOAUTHアプローチを通してなされます。ユーザのアプリケーションのkeyとsecretが提供された後、ユーザは、ユーザのアプリケーションの権限を許可してもらうことを、Dropboxのウェブサイトに対して問い合わせするよう指示されます。彼らがそのリクエストを受諾すると、Dropboxはclient keyとsecretを提供します。このkeyとsecretが、Dropboxコネクタで必要になる最後の2つです。このプロセスの深い内容については、<a href="https://www.dropbox.com/developers/core/authentication">dropbox website</a>で説明されており、どのようにこの2つのclientトークンを生成するかの例が示されています。</p>
+              <br/>
+              <br/><br/>
+              <p>保存ボタンをクリックしたら、コネクションのサマリ画面を見ることになります。これは以下のようなものになります。</p>
+              <br/><br/>
+              <figure src="images/ja_JP/dropbox-repository-connection-configuration-save.PNG" alt="Dropbox Repository Connection, saving configuration" width="80%"/>
+              <br/><br/>
+              <p>Dropboxリポジトリコネクションを使用するジョブを設定した場合は、追加タブが表示されます。これは、"Dropbox Folder to Index" です。</p>
+              <br/><br/>
+              <figure src="images/ja_JP/dropbox-repository-connection-job-dropbox-folder-to-index.PNG" alt="Dropbox Repository Connection, Dropbox Folder to Index" width="80%"/>
+              <br/><br/>
+              <p>このタブでは、Dropboxコネクタがインデクシングするディレクトリを指定することができます。Dropboxはunixスタイルのパスを使います。"/"はルートパスを意味します(したがって全体のDropboxを指定することになります)。たとえば、Photosディレクトリをインデクシングしたい場合は、"/Photos"と指定することになります。</p>
+              <p>注意点は、Dropboxコネクタは取り込み処理中、それぞれの結果について、フォルダの階層を見つけた時、そのフォルダの子供のすべてのフォルダを取り込みしようとします。フォルダでなければ、直接ドキュメントを取り込みしようとします。</p>
+              <p>ジョブの設定が終わったら、保存ボタンをクリックし、サマリ画面を見ます。これは以下のようなものになります。</p>
+              <br/><br/>
+              <figure src="images/ja_JP/dropbox-repository-connection-job-save.PNG" alt="CMIS Repository Connection, saving job" width="80%"/>
+              <br/><br/>
+            </section>
+            
             <section id="livelinkrepository">
                 <title>OpenText LiveLinkリポジトリコネクション</title>
                 <p>OpenText LiveLinkコネクションタイプは、LiveLinkリポジトリからのコンテンツから索引を作成します。LiveLinkには基本ドキュメント、複合ドキュメント、フォルダ、ワークスペース、プロジェクトのような多くのドキュメントタイプがあります。LiveLinkコネクションはこれらのすべてのドキュメント種類のコンテンツを処理することができます。</p>
diff --git a/site/src/documentation/content/xdocs/ja_JP/how-to-build-and-deploy.xml b/site/src/documentation/content/xdocs/ja_JP/how-to-build-and-deploy.xml
index c8f687a..dab03d8 100644
--- a/site/src/documentation/content/xdocs/ja_JP/how-to-build-and-deploy.xml
+++ b/site/src/documentation/content/xdocs/ja_JP/how-to-build-and-deploy.xml
@@ -79,9 +79,14 @@
         <ul>
           <li>CMIS connector</li>
           <li>Documentum connector, built against a Documentum API stub</li>
+          <li>Dropbox connector</li>
           <li>FileNet connector, built against a FileNet API stub</li>
-          <li>Filesystem connector</li>
+          <li>WGET-compatible filesystem connector</li>
+          <li>Generic XML repository connector</li>
+          <li>Google Drive connector</li>
+          <li>HDFS connector</li>
           <li>JDBC connector, with just the PostgreSQL jdbc driver</li>
+          <li>Jira connector</li>
           <li>LiveLink connector, built against a LiveLink API stub</li>
           <li>Meridio connector, built against modified Meridio API WSDLs and XSDs</li>
           <li>RSS connector</li>
@@ -108,10 +113,12 @@
           <li>Apache Solr output connector</li>
           <li>OpenSearchServer output connector</li>
           <li>ElasticSearch output connector</li>
+          <li>WGET-compatible filesystem output connector</li>
+          <li>HDFS output connector</li>
           <li>Null output connector</li>
         </ul>
         <p></p>
-        <p>Each individual LGPL and proprietary connector's dependencies and build limitations are described in separate sections below.</p>
+        <p>The dependencies and build limitations of each individual LGPL and proprietary connector are described in separate sections below.</p>
         <p></p>
             
         <section>
@@ -120,7 +127,7 @@
           <p>The Alfresco connector requires the Alfresco Web Services Client provided by Alfresco in order to be built. Place this jar into the directory <em>connectors/alfresco/lib-proprietary</em> before you build.
               This will occur automatically if you execute the ant target "make-deps" from the ManifoldCF root directory.</p>
           <p></p>
-          <p>To run integration tests for the connector you have to copy the alfresco.war including H2 support created by the Maven module test-materials/alfresco-war (using "mvn package" inside the folder)
+          <p>To run integration tests for the connector you have to copy the alfresco.war including H2 support created by the Maven module test-materials/alfresco-4-war (using "mvn package" inside the folder)
               into the <em>connectors/alfresco/test-materials-proprietary</em> folder.  Then use the "ant test" or "mvn integration-test" for the standard build to execute integration tests.</p>
           <p></p>
         </section>
@@ -315,8 +322,10 @@
           <tr><td><em>script-engine</em></td><td>jars and scripts for running the ManifoldCF script interpreter</td></tr>
           <tr><td><em>example</em></td><td>a jetty-based example that runs in a single process (except for any connector-specific processes), excluding all proprietary libraries</td></tr>
           <tr><td><em>example-proprietary</em></td><td>a jetty-based example that runs in a single process (except for any connector-specific processes), including proprietary libraries; not included in binary release</td></tr>
-          <tr><td><em>multiprocess-example</em></td><td>scripts and jars for an example that uses the multiple process model, excluding all proprietary libraries</td></tr>
-          <tr><td><em>multiprocess-example-proprietary</em></td><td>scripts and jars for an example that uses the multiple process model, including proprietary libraries; not included in binary release</td></tr>
+          <tr><td><em>multiprocess-file-example</em></td><td>scripts and jars for an example that uses the multiple process model using file-based synchronization, excluding all proprietary libraries</td></tr>
+          <tr><td><em>multiprocess-file-example-proprietary</em></td><td>scripts and jars for an example that uses the multiple process model using file-based synchronization, including proprietary libraries; not included in binary release</td></tr>
+          <tr><td><em>multiprocess-zk-example</em></td><td>scripts and jars for an example that uses the multiple process model using ZooKeeper-based synchronization, excluding all proprietary libraries</td></tr>
+          <tr><td><em>multiprocess-zk-example-proprietary</em></td><td>scripts and jars for an example that uses the multiple process model using ZooKeeper-based synchronization, including proprietary libraries; not included in binary release</td></tr>
           <tr><td><em>web</em></td><td>app-server deployable web applications (wars), excluding all proprietary libraries</td></tr>
           <tr><td><em>web-proprietary</em></td><td>app-server deployable web applications (wars), including proprietary libraries; not included in binary release</td></tr>
           <tr><td><em>doc</em></td><td>javadocs for framework and all included connectors</td></tr>
@@ -415,15 +424,15 @@
         </section>
 
         <section>
-          <title>Simplified multi-process model</title>
+          <title>Simplified multi-process model using file-based synchronization</title>
           <p></p>
-          <p>ManifoldCF can also be deployed in a simplified multi-process model.  Inside the <em>multiprocess-example</em> directory, you will find everything you need to do this.  (The
-              <em>multiprocess-example-proprietary</em> directory is similar but includes proprietary material and is available only if you build ManifoldCF yourself.)  Below is a list of
+          <p>ManifoldCF can also be deployed in a simplified multi-process model which uses files to synchronize processes.  Inside the <em>multiprocess-file-example</em> directory, you will find everything you need to do this.  (The
+              <em>multiprocess-file-example-proprietary</em> directory is similar but includes proprietary material and is available only if you build ManifoldCF yourself.)  Below is a list of
               what you will find in this directory.</p>
           <p></p>
           <table>
-            <caption>Multiprocess example files and directories</caption>
-            <tr><th><em>multiprocess-example</em> file/directory</th><th>Meaning</th></tr>
+            <caption>File-based multiprocess example files and directories</caption>
+            <tr><th><em>multiprocess-file-example</em> file/directory</th><th>Meaning</th></tr>
             <tr><td><em>web</em></td><td>Web applications that should be deployed on tomcat or the equivalent, plus recommended application server -D switch names and values</td></tr>
             <tr><td><em>processes</em></td><td>classpath jars that should be included in the class path for all non-connector-specific processes, along with -D switches, using the same convention as described for tomcat, above</td></tr>
             <tr><td><em>properties.xml</em></td><td>an example ManifoldCF configuration file, in the right place for the multiprocess script to find it</td></tr>
@@ -433,17 +442,63 @@
             <tr><td><em>start-database[.sh|.bat]</em></td><td>script to start the HSQLDB database</td></tr>
             <tr><td><em>initialize[.sh|.bat]</em></td><td>script to create the database instance, create all database tables, and register connectors</td></tr>
             <tr><td><em>start-webapps[.sh|.bat]</em></td><td>script to start Jetty with the ManifoldCF web applications deployed</td></tr>
-            <tr><td><em>start-agents[.sh|.bat]</em></td><td>script to start the agents process</td></tr>
-            <tr><td><em>stop-agents[.sh|.bat]</em></td><td>script to stop a running agents process cleanly</td></tr>
+            <tr><td><em>start-agents[.sh|.bat]</em></td><td>script to start the (first) agents process</td></tr>
+            <tr><td><em>start-agents-2[.sh|.bat]</em></td><td>script to start a second agents process</td></tr>
+            <tr><td><em>stop-agents[.sh|.bat]</em></td><td>script to stop all running agents processes cleanly</td></tr>
             <tr><td><em>lock-clean[.sh|.bat]</em></td><td>script to clean up dirty locks (run only when all webapps and processes are stopped)</td></tr>
           </table>
           <p></p>
           <section>
             <title>Initializing the database and running</title>
             <p></p>
-            <p>If you run the multiprocess model, after you first start the database (using <em>start-database[.sh|.bat]</em>), you will need to initialize the database before you start the agents process or use the crawler UI.  To do this, all you need to do is
+            <p>If you run the file-based multiprocess model, after you first start the database (using <em>start-database[.sh|.bat]</em>), you will need to initialize the database before you start the agents process or use the crawler UI.  To do this, all you need to do is
                 run the <em>initialize[.sh|.bat]</em> script.  Then, you will need to start the web applications (using <em>start-webapps[.sh|.bat]</em>) and the agents process (using
-                <em>start-agents[.sh|.bat]</em>).</p>
+                <em>start-agents[.sh|.bat]</em>), and optionally the second agents process (using <em>start-agents-2[.sh|.bat]</em>).</p>
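+            <p>The <em>properties.xml</em> shipped with this example is already set up for file-based synchronization.  As an illustrative sketch only (the value shown is an assumption; use the value shipped with the example), such a setup corresponds to a property of the form:</p>
+            <p><code>&#60;property name="org.apache.manifoldcf.synchdirectory" value="<em>path_to_synch_directory</em>"/&#62;</code></p>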
+            <p></p>
+          </section>
+
+        </section>
+
+        <section>
+          <title>Simplified multi-process model using ZooKeeper-based synchronization</title>
+          <p></p>
+          <p>ManifoldCF can also be deployed in a simplified multi-process model which uses Apache ZooKeeper to synchronize processes.  Inside the <em>multiprocess-zk-example</em> directory, you will find everything you need to do this.  (The
+              <em>multiprocess-zk-example-proprietary</em> directory is similar but includes proprietary material and is available only if you build ManifoldCF yourself.)  Below is a list of
+              what you will find in this directory.</p>
+          <p></p>
+          <table>
+            <caption>ZooKeeper-based multiprocess example files and directories</caption>
+            <tr><th><em>multiprocess-zk-example</em> file/directory</th><th>Meaning</th></tr>
+            <tr><td><em>web</em></td><td>Web applications that should be deployed on tomcat or the equivalent, plus recommended application server -D switch names and values</td></tr>
+            <tr><td><em>processes</em></td><td>classpath jars that should be included in the class path for all non-connector-specific processes, along with -D switches, using the same convention as described for tomcat, above</td></tr>
+            <tr><td><em>properties.xml</em></td><td>an example ManifoldCF configuration file, in the right place for the multiprocess script to find it</td></tr>
+            <tr><td><em>properties-global.xml</em></td><td>an example ManifoldCF shared configuration file, in the right place for the setglobalproperties script to find it</td></tr>
+            <tr><td><em>logging.ini</em></td><td>an example ManifoldCF logging configuration file, in the right place for the properties.xml to find it</td></tr>
+            <tr><td><em>zookeeper</em></td><td>the example ZooKeeper storage directory, which must be writable in order for ZooKeeper to work</td></tr>
+            <tr><td><em>logs</em></td><td>where the ManifoldCF logs get written to</td></tr>
+            <tr><td><em>runzookeeper[.sh|.bat]</em></td><td>script to run a ZooKeeper server instance</td></tr>
+            <tr><td><em>setglobalproperties[.sh|.bat]</em></td><td>script to initialize ZooKeeper with properties from properties-global.xml</td></tr>
+            <tr><td><em>start-database[.sh|.bat]</em></td><td>script to start the HSQLDB database</td></tr>
+            <tr><td><em>initialize[.sh|.bat]</em></td><td>script to create the database instance, create all database tables, and register connectors</td></tr>
+            <tr><td><em>start-webapps[.sh|.bat]</em></td><td>script to start Jetty with the ManifoldCF web applications deployed</td></tr>
+            <tr><td><em>start-agents[.sh|.bat]</em></td><td>script to start the (first) agents process</td></tr>
+            <tr><td><em>start-agents-2[.sh|.bat]</em></td><td>script to start a second agents process</td></tr>
+            <tr><td><em>stop-agents[.sh|.bat]</em></td><td>script to stop all running agents processes cleanly</td></tr>
+          </table>
+          <p></p>
+          <section>
+            <title>Initializing the database and running</title>
+            <p></p>
+            <p>If you run the ZooKeeper-based multiprocess example, you must perform the following steps:</p>
+            <p></p>
+            <ol>
+              <li>Start ZooKeeper (using the <em>runzookeeper[.sh|.bat]</em> script)</li>
+              <li>Initialize the ManifoldCF shared configuration data (using <em>setglobalproperties[.sh|.bat]</em>)</li>
+              <li>Start the database (using <em>start-database[.sh|.bat]</em>)</li>
+              <li>Initialize the database (using <em>initialize[.sh|.bat]</em>)</li>
+              <li>Start the agents process (using <em>start-agents[.sh|.bat]</em>, and optionally <em>start-agents-2[.sh|.bat]</em>)</li>
+              <li>Start the web applications (using <em>start-webapps[.sh|.bat]</em>)</li>
+            </ol>
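+            <p>As an illustrative sketch only (the host and port shown are assumptions for a single local ZooKeeper node; consult the configuration files shipped with the example for the authoritative values), selecting ZooKeeper-based synchronization corresponds to properties of the form:</p>
+            <p><code>&#60;property name="org.apache.manifoldcf.lockmanagerclass" value="org.apache.manifoldcf.core.lockmanager.ZooKeeperLockManager"/&#62;</code></p>
+            <p><code>&#60;property name="org.apache.manifoldcf.zookeeper.connectstring" value="localhost:2181"/&#62;</code></p>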
             <p></p>
           </section>
 
@@ -476,7 +531,7 @@
             MCF_HOME, which should point to ManifoldCF's home execution directory, where the <em>properties.xml</em> file is found.)</p>
             
           <p></p>
-          <p>The basic steps required to set up and run ManifoldCF in command-driven multi-process mode are as follows:</p>
+          <p>The basic steps required to set up and run ManifoldCF in command-driven file-based multi-process mode are as follows:</p>
           <p></p>
           <ul>
             <li>Install PostgreSQL or MySQL.  The PostgreSQL JDBC driver included with ManifoldCF is known to work with version 9.1, so that version is the currently recommended
@@ -547,6 +602,20 @@
               correct arguments and settings.</p>
             <p></p>
             <table>
+              <tr><th>Authorization Domain Command Class</th><th>Arguments</th><th>Function</th></tr>
+              <tr><td>org.apache.manifoldcf.authorities.RegisterDomain</td><td><em>domainname</em> <em>description</em></td><td>Register an authorization domain</td></tr>
+              <tr><td>org.apache.manifoldcf.authorities.UnRegisterDomain</td><td><em>domainname</em></td><td>Un-register an authorization domain</td></tr>
+            </table>
+            <p></p>
+            <table>
+              <tr><th>User Mapping Command Class</th><th>Arguments</th><th>Function</th></tr>
+              <tr><td>org.apache.manifoldcf.authorities.RegisterMapper</td><td><em>classname</em> <em>description</em></td><td>Register a mapping connector class</td></tr>
+              <tr><td>org.apache.manifoldcf.authorities.UnRegisterMapper</td><td><em>classname</em></td><td>Un-register a mapping connector class</td></tr>
+              <tr><td>org.apache.manifoldcf.authorities.UnRegisterAllMappers</td><td>None</td><td>Un-register all mapping connector classes</td></tr>
+              <tr><td>org.apache.manifoldcf.authorities.SynchronizeMappers</td><td>None</td><td>Un-register all registered mapping connector classes that can't be found</td></tr>
+            </table>
+            <p></p>
+            <table>
               <tr><th>Authority Command Class</th><th>Arguments</th><th>Function</th></tr>
               <tr><td>org.apache.manifoldcf.authorities.RegisterAuthority</td><td><em>classname</em> <em>description</em></td><td>Register an authority connector class</td></tr>
               <tr><td>org.apache.manifoldcf.authorities.UnRegisterAuthority</td><td><em>classname</em></td><td>Un-register an authority connector class</td></tr>
@@ -554,8 +623,8 @@
               <tr><td>org.apache.manifoldcf.authorities.SynchronizeAuthorities</td><td>None</td><td>Un-register all registered authority connector classes that can't be found</td></tr>
             </table>
             <p></p>
-            <p>Remember that you need to include all the jars under <em>multiprocess-example/processes/lib</em> in the classpath whenever you run one of these commands!
-                But, luckily, there are scripts which do this for you.  These can be found in <em>multiprocess-example/processes/executecommand[.sh,.bat]</em>.
+            <p>Remember that you need to include all the jars under <em>multiprocess-file-example/processes/lib</em> in the classpath whenever you run one of these commands!
+                But, luckily, there are scripts which do this for you.  These can be found in <em>multiprocess-file-example/processes/executecommand[.sh,.bat]</em>.
                 The scripts require some environment variables to be set, such as <em>MCF_HOME</em> and <em>JAVA_HOME</em>, and expect the configuration file to be
                 found at <em>MCF_HOME/properties.xml</em>.</p>
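+            <p>For example (illustrative only - substitute a real connector class name and description), registering an authority connector class from the command line might look like this:</p>
+            <p><code>executecommand.sh org.apache.manifoldcf.authorities.RegisterAuthority <em>classname</em> "<em>description</em>"</code></p>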
             <p></p>
@@ -608,8 +677,8 @@
         <p>&#60;outputconnector name="<em>pretty_name</em>" class="<em>connector_class</em>"/&#62;</p>
         <p></p>
         <p>The <em>connectors.xml</em> file typically has some connectors commented out - namely the ones built with stubs, which require you to supply a
-          third-party library in order for the connector to run.  If you build ManifoldCF yourself, the <em>example-proprietary</em> and <em>multiprocess-example-proprietary</em>
-          directories instead use <em>connectors-proprietary.xml</em>.  The connectors you build against the proprietary libraries you supply will not have their
+          third-party library in order for the connector to run.  If you build ManifoldCF yourself, the <em>example-proprietary</em>, <em>multiprocess-file-example-proprietary</em>,
+          and <em>multiprocess-zk-example-proprietary</em> directories instead use <em>connectors-proprietary.xml</em>.  The connectors you build against the proprietary libraries you supply will not have their
           <em>connectors-proprietary.xml</em> tags commented out.</p>
         <p></p>
       </section>
@@ -687,7 +756,7 @@
             <tr><td>tcpip_socket</td><td>true</td></tr>
             <tr><td>max_connections</td><td>400</td></tr>
             <tr><td>checkpoint_timeout</td><td>900</td></tr>
-            <tr><td>datastyle</td><td>ISO,European</td></tr>
+            <tr><td>datestyle</td><td>ISO,European</td></tr>
             <tr><td>autovacuum</td><td>off</td></tr>
           </table>
           <p></p>
@@ -745,6 +814,8 @@
             when operating without any apparent stalls due to the above issues, Derby is still only about 1/4 as fast as PostgreSQL.  At the moment this limits Derby's utility for
             ManifoldCF to demonstration and testing.</p>
         </section>
+        
+        
       </section>
         
       <section>
@@ -778,6 +849,8 @@
           <table>
             <caption>Property.xml properties</caption>
             <tr><th>Property</th><th>Required?</th><th>Function</th></tr>
+            <tr><td>org.apache.manifoldcf.login.name</td><td>No</td><td>Crawler UI login user ID (defaults to "admin")</td></tr>
+            <tr><td>org.apache.manifoldcf.login.password</td><td>No</td><td>Crawler UI login user password (defaults to "admin")</td></tr>
             <tr><td>org.apache.manifoldcf.crawleruiwarpath</td><td>Yes, for Jetty</td><td>Location of Crawler UI war</td></tr>
             <tr><td>org.apache.manifoldcf.authorityservicewarpath</td><td>Yes, for Jetty</td><td>Location of Authority Service war</td></tr>
             <tr><td>org.apache.manifoldcf.apiservicewarpath</td><td>Yes, for Jetty</td><td>Location of API Service war</td></tr>
@@ -786,6 +859,10 @@
             <tr><td>org.apache.manifoldcf.connectorsconfigurationfile</td><td>No</td><td>Location of connectors.xml file, for QuickStart, so ManifoldCF can register connectors.</td></tr>
             <tr><td>org.apache.manifoldcf.dbsuperusername</td><td>No</td><td>Database superuser name, for QuickStart, so ManifoldCF can create database instance.</td></tr>
             <tr><td>org.apache.manifoldcf.dbsuperuserpassword</td><td>No</td><td>Database superuser password, for QuickStart, so ManifoldCF can create database instance.</td></tr>
+            <tr><td>org.apache.manifoldcf.ui.maxstatuscount</td><td>No</td><td>The maximum number of documents ManifoldCF will try to count for the job status display.  Defaults to 500000.</td></tr>
+            <tr><td>org.apache.manifoldcf.databaseimplementationclass</td><td>No</td><td>Specifies the class to use to implement database access.
+                Default is a built-in PostgreSQL implementation.  Supported choices are: org.apache.manifoldcf.core.database.DBInterfaceDerby,
+                org.apache.manifoldcf.core.database.DBInterfacePostgreSQL, org.apache.manifoldcf.core.database.DBInterfaceHSQLDB</td></tr>
             <tr><td>org.apache.manifoldcf.postgresql.hostname</td><td>No</td><td>PostgreSQL server host name, or localhost if not specified.</td></tr>
             <tr><td>org.apache.manifoldcf.postgresql.port</td><td>No</td><td>PostgreSQL server port, or standard port if not specified.</td></tr>
             <tr><td>org.apache.manifoldcf.postgresql.ssl</td><td>No</td><td>Set to "true" for ssl communication with PostgreSQL.</td></tr>
@@ -796,9 +873,17 @@
             <tr><td>org.apache.manifoldcf.hsqldbdatabaseport</td><td>No</td><td>The HSQLDB remote server port.</td></tr>
             <tr><td>org.apache.manifoldcf.hsqldbdatabaseinstance</td><td>No</td><td>The HSQLDB remote database instance name.</td></tr>
             <tr><td>org.apache.manifoldcf.mysql.server</td><td>No</td><td>The MySQL server name.  Defaults to 'localhost'.</td></tr>
-            <tr><td>org.apache.manifoldcf.lockmanagerclass</td><td>No</td><td>Specifies the class to use to implement synchronization.  Default is a built-in file-based synchronization class.</td></tr>
-            <tr><td>org.apache.manifoldcf.databaseimplementationclass</td><td>No</td><td>Specifies the class to use to implement database access.  Default is a built-in PostgreSQL implementation.  Supported choices are: org.apache.manifoldcf.core.database.DBInterfaceDerby, org.apache.manifoldcf.core.database.DBInterfacePostgreSQL, org.apache.manifoldcf.core.database.DBInterfaceHSQLDB</td></tr>
-            <tr><td>org.apache.manifoldcf.synchdirectory</td><td>Yes, if file-based synchronization class is used</td><td>Specifies the path of a synchronization directory.  All ManifoldCF process owners <strong>must</strong> have read/write privileges to this directory.</td></tr>
+            <tr><td>org.apache.manifoldcf.mysql.client</td><td>No</td><td>The MySQL client property.  Defaults to 'localhost'.  You may want to set this to '%' for a multi-machine setup.</td></tr>
+            <tr><td>org.apache.manifoldcf.lockmanagerclass</td><td>No</td><td>Specifies the class to use to implement synchronization.  Default
+                is either file-based synchronization or in-memory synchronization, using the org.apache.manifoldcf.core.lockmanager.LockManager class.
+                Options include org.apache.manifoldcf.core.lockmanager.BaseLockManager, org.apache.manifoldcf.core.lockmanager.FileLockManager, and
+                org.apache.manifoldcf.core.lockmanager.ZooKeeperLockManager.</td></tr>
+            <tr><td>org.apache.manifoldcf.synchdirectory</td><td>Yes, if file-based synchronization class is specified</td><td>Specifies the path of a
+                synchronization directory.  All ManifoldCF process owners <strong>must</strong> have read/write privileges to this directory.</td></tr>
+            <tr><td>org.apache.manifoldcf.zookeeper.connectstring</td><td>Yes, if ZooKeeper-based synchronization class is specified</td><td>Specifies the ZooKeeper
+                connection string, consisting of comma-separated hostname:port pairs.</td></tr>
+            <tr><td>org.apache.manifoldcf.zookeeper.sessiontimeout</td><td>No</td><td>Specifies the ZooKeeper
+                session timeout, if ZooKeeperLockManager is specified.  Defaults to 2000.</td></tr>
             <tr><td>org.apache.manifoldcf.database.maxhandles</td><td>No</td><td>Specifies the maximum number of database connection handles that will be pooled.  Recommended value is 200.</td></tr>
             <tr><td>org.apache.manifoldcf.database.handletimeout</td><td>No</td><td>Specifies the maximum time a handle is to live before it is presumed dead.  The recommended value is 604800, which is the maximum allowable.</td></tr>
             <tr><td>org.apache.manifoldcf.database.connectiontracking</td><td>No</td><td>True or false.  When "true", will track all allocated database connection handles, and will dump an allocation stack trace when the pool is exhausted.  Useful for diagnosing connection leaks.</td></tr>
@@ -810,6 +895,7 @@
             <tr><td>org.apache.manifoldcf.crawler.expirethreads</td><td>No</td><td>Number of crawler expiration threads created.  Suggest a value of 10.</td></tr>
             <tr><td>org.apache.manifoldcf.crawler.cleanupthreads</td><td>No</td><td>Number of crawler cleanup threads created.  Suggest a value of 10.</td></tr>
             <tr><td>org.apache.manifoldcf.crawler.deletethreads</td><td>No</td><td>Number of crawler delete threads created.  Suggest a value of 10.</td></tr>
+            <tr><td>org.apache.manifoldcf.crawler.historycleanupinterval</td><td>No</td><td>Milliseconds to retain history records.  Default is 0.  Zero means "forever".</td></tr>
             <tr><td>org.apache.manifoldcf.misc</td><td>No</td><td>Miscellaneous debugging output.  Legal values INFO, WARN, or DEBUG.</td></tr>
             <tr><td>org.apache.manifoldcf.db</td><td>No</td><td>Database debugging output.  Legal values INFO, WARN, or DEBUG.</td></tr>
             <tr><td>org.apache.manifoldcf.lock</td><td>No</td><td>Lock management debugging output.  Legal values INFO, WARN, or DEBUG.</td></tr>
@@ -916,7 +1002,47 @@
        <p>In a multi-process setup, each of the ManifoldCF processes runs independently.  You can learn how to programmatically start the agents process by looking at the code
           in the AgentRun command class, as described above.  Similarly, the command classes that register connectors are very small and should be easy to understand.</p>
       </section>
-      
+
+      <section>
+          <title>Integrating ManifoldCF with a search engine</title>
+          <p></p>
+          <p>ManifoldCF's Authority Service is designed to allow maximum flexibility in integrating ManifoldCF security with search engines.  The
+                service receives a user identity (as a set of authorization domain/user name tuples), and produces a set of tokens.  It also returns a 
+                summary of the status of all authorities that were involved in the assembly of the set of tokens, as a nicety.  A search engine user
+                interface could thus signal the user when the results they are seeing may be incomplete, and why.</p>
+          <p>The Authority Service expects the following arguments, passed as URL query arguments and properly URL-encoded:</p>
+          <p></p>
+          <table>
+            <caption>Authority Service URL parameters</caption>
+            <tr><th>Authority Service URL parameter</th><th>Meaning</th></tr>
+            <tr><td>username</td><td>the username, if there is only one authorization domain</td></tr>
+            <tr><td>domain</td><td>the optional authorization domain if there is only one authorization domain (defaults to empty string)</td></tr>
+            <tr><td>username_<em>XX</em></td><td>username number <em>XX</em>, where <em>XX</em> is an integer starting at zero</td></tr>
+            <tr><td>domain_<em>XX</em></td><td>authorization domain <em>XX</em>, where <em>XX</em> is an integer starting at zero</td></tr>
+          </table>
+          <p></p>
+          <p>Access tokens and authority statuses are returned in the HTTP response separated by newline characters.  Each line has a prefix
+                as follows:</p>
+          <p></p>
+          <table>
+            <caption>Authority Service response prefixes</caption>
+            <tr><th>Authority Service response prefix</th><th>Meaning</th></tr>
+            <tr><td>TOKEN:</td><td>An access token</td></tr>
+            <tr><td>AUTHORIZED:</td><td>The name of an authority that found the user to be authorized</td></tr>
+            <tr><td>UNREACHABLEAUTHORITY:</td><td>The name of an authority that was found to be unreachable or unusable</td></tr>
+            <tr><td>UNAUTHORIZED:</td><td>The name of an authority that found the user to be unauthorized</td></tr>
+            <tr><td>USERNOTFOUND:</td><td>The name of an authority that could not find the user</td></tr>
+          </table>
+          <p></p>
+          <p>It is important to remember that only the "TOKEN:" lines actually matter for security.  Even when one of the error conditions applies, the set
+                of tokens returned by the Authority Service is still the correct set to use for applying security to the documents being searched.</p>
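+          <p>As an illustration (the URL below is an assumption based on the single-process example's default deployment, and the authority name and token are made up), a request and its response might look like this:</p>
+          <p><code>GET http://localhost:8345/mcf-authority-service/UserACLs?username=john@apache.org</code></p>
+          <p><code>AUTHORIZED:MyAuthority</code></p>
+          <p><code>TOKEN:MyAuthority:some-access-token</code></p>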
+          <p>If you choose to deploy a search-engine plugin supplied by the Apache ManifoldCF project (for example, the Solr plugin), you will not need
+                to know any of the above, since part of the plugin's purpose is to communicate with the Authority Service and apply the access tokens that are
+                returned to the search query automatically.  Some plugins, such as the ElasticSearch plugin, are more or less like toolkits, but still hide most
+                of the above from the integrator.  In a more highly customized system, however, you may need to develop your own code which interacts
+                with the Authority Service in order to meet your goals.</p>
+      </section>
+
     </section>
   </body>
 
diff --git a/site/src/documentation/content/xdocs/ja_JP/included-connectors.xml b/site/src/documentation/content/xdocs/ja_JP/included-connectors.xml
index 5e42e67..cd0249c 100644
--- a/site/src/documentation/content/xdocs/ja_JP/included-connectors.xml
+++ b/site/src/documentation/content/xdocs/ja_JP/included-connectors.xml
@@ -35,18 +35,22 @@
         <caption>コネクタの対応表</caption>
         <tr><th>コネクタ名</th><th>コネクタプラットフォーム</th><th>サーバプラットフォーム</th><th>クライアントバージョン</th><th>サーババージョン</th></tr>
         <tr><td>CMIS</td><td>Java</td><td>複数</td><td>CMIS 1.0</td><td>CMIS 1.0</td></tr>
+        <tr><td>DropBox</td><td>Java</td><td>複数</td><td>1.5.3</td><td>N/A</td></tr>
         <tr><td>ファイルシステム</td><td>Java</td><td>Win/*NIX</td><td>N/A</td><td>N/A</td></tr>
+        <tr><td>Google Drive</td><td>Java</td><td>複数</td><td>v2-rev64-1.14.1-beta</td><td>N/A</td></tr>
+        <tr><td>HDFS</td><td>Java</td><td>複数</td><td>1.1.2</td><td>1.1.2</td></tr>
         <tr><td>Windows共有</td><td>Java</td><td> Win, Samba, NetApp,その他のNASシステム</td><td>N/A</td><td>N/A</td></tr>
         <tr><td>JDBC</td><td>Java </td><td>複数</td><td>JDBC V2, V3, V4対応;Oracle 10, JTDS 1.2, Postgresql 9.1ドライバで検証済み</td><td>複数</td></tr>
+        <tr><td>Jira</td><td>Java </td><td>複数</td><td>N/A</td><td>5.0-6.1</td></tr>
         <tr><td>RSS</td><td>Java </td><td> N/A </td><td> N/A </td><td>Atom, RSS 2.0, others </td></tr>
         <tr><td>Web</td><td>Java </td><td>N/A</td><td> N/A </td><td>HTML 1.0, 1.1, 2.0, Atom, RSS 2.0,その他</td></tr>
         <tr><td>Wiki</td><td>Java </td><td>N/A</td><td> N/A </td><td>Wiki 1.8以降</td></tr>
         <tr><td>LiveLink (OpenText)</td><td>Java </td><td> Win </td><td> LAPI 9.7.1 </td><td>9.2.0 - 10.2.0で検証済み</td></tr>
-        <tr><td>Solr</td><td>Java </td><td> N/A </td><td> N/A</td><td>Solr 1.4, 3.6.2, 4.0.0, 4.1.0で検証済み</td></tr>
+        <tr><td>Solr</td><td>Java </td><td> N/A </td><td> N/A</td><td>Solr 1.4, 3.6.2, 4.0.0, 4.1.0, 4.2.0, 4.3.0で検証済み</td></tr>
         <tr><td>OpenSearchServer</td><td>Java </td><td> N/A </td><td> N/A</td><td>OpenSearchServer 1.2.1, 1.2.2, 1.2.3で検証済み</td></tr>
         <tr><td>ElasticSearch</td><td>Java </td><td> N/A </td><td> N/A</td><td>ElasticSearch 0.18.3, 0.18.4, 0.18.5, 0.18.6, 0.18.7で検証済み</td></tr>
         <tr><td>Documentum (EMC)</td><td>Win, RedHat</td><td> Win, RedHat </td><td>DFC 5.3 SP5で検証済み</td><td>5.3, 6.0, and 6.5サーバで検証済み</td></tr>
-        <tr><td>SharePoint (MSFT)</td><td>Java </td><td>Win</td><td> N/A </td><td>SharePoint 2003 (2.0), 2007 (3.0), 2010 (4.0)で検証済み</td></tr>
+        <tr><td>SharePoint (MSFT)</td><td>Java </td><td>Win</td><td> N/A </td><td>SharePoint 2003 (2.0), 2007 (3.0)で検証済み, Claim Space Authなし検証2010 (4.0)</td></tr>
         <tr><td>Meridio (Autonomy)</td><td>Java </td><td> Win </td><td> N/A </td><td>Meridio 4.1, 5.0で検証済み</td></tr>
         <tr><td>FileNet (IBM)</td><td>Java</td><td>Win, RedHat</td><td>P8 V4.1, V4.5で検証済み</td><td>P8 V4.1, V4.5で検証済み</td></tr>
       </table>
diff --git a/site/src/documentation/content/xdocs/ja_JP/javadoc.xml b/site/src/documentation/content/xdocs/ja_JP/javadoc.xml
index 50147de..f4926e0 100644
--- a/site/src/documentation/content/xdocs/ja_JP/javadoc.xml
+++ b/site/src/documentation/content/xdocs/ja_JP/javadoc.xml
@@ -33,19 +33,25 @@
       <p>最新版のManifoldCFのJavadocは以下のリンクから参照することができます:</p>
       <p><a href="../api/framework/index.html">ManifoldCFフレームワーク</a></p>
       <p><a href="../api/activedirectory/index.html">Active Directory権限コネクタ</a></p>
+      <p><a href="../api/alfresco/index.html">Alfrescoコネクタ</a></p>
       <p><a href="../api/cmis/index.html">CMIS権限コネクタ</a></p>
       <p><a href="../api/documentum/index.html">Documentum権限コネクタ</a></p>
+      <p><a href="../api/dropbox/index.html">Dropboxコネクタ</a></p>
       <p><a href="../api/filenet/index.html">FileNetコネクタ</a></p>
-      <p><a href="../api/filesystem/index.html">ファイルシステムコネクタ</a></p>
+      <p><a href="../api/filesystem/index.html">File system repository and output connector</a></p>
+      <p><a href="../api/googledrive/index.html">GoogleDriveコネクタ</a></p>
       <p><a href="../api/gts/index.html">qBase GTS出力コネクタ</a></p>
+      <p><a href="../api/hdfs/index.html">HDFS repository and output connector</a></p>
       <p><a href="../api/jcifs/index.html">CIFSコネクタ</a></p>
       <p><a href="../api/jdbc/index.html">JDBCコネクタ</a></p>
+      <p><a href="../api/jira/index.html">JIRA connector and authority</a></p>
       <p><a href="../api/livelink/index.html">LiveLink権限コネクタ</a></p>
       <p><a href="../api/meridio/index.html">Meridio権限コネクタ</a></p>
       <p><a href="../api/opensearchserver/index.html">OpenSearchServer出力コネクタ</a></p>
       <p><a href="../api/elasticsearch/index.html">ElasticSearch出力コネクタ</a></p>
       <p><a href="../api/nullauthority/index.html">Null権限コネクタ</a></p>
       <p><a href="../api/nulloutput/index.html">Null出力コネクタ</a></p>
+      <p><a href="../api/regexpmapper/index.html">Regular expression mapping connector</a></p>
       <p><a href="../api/rss/index.html">RSSコネクタ</a></p>
       <p><a href="../api/sharepoint/index.html">SharePointコネクタ</a></p>
       <p><a href="../api/solr/index.html">Solr出力コネクタ</a></p>
diff --git a/site/src/documentation/content/xdocs/ja_JP/performance-tuning.xml b/site/src/documentation/content/xdocs/ja_JP/performance-tuning.xml
index 63e0a52..2f9f0bc 100644
--- a/site/src/documentation/content/xdocs/ja_JP/performance-tuning.xml
+++ b/site/src/documentation/content/xdocs/ja_JP/performance-tuning.xml
@@ -153,7 +153,7 @@
           <tr><td>tcpip_socket</td><td>true</td></tr>
           <tr><td>max_connections</td><td>200</td></tr>
           <tr><td>checkpoint_timeout</td><td>900</td></tr>
-          <tr><td>datastyle</td><td>ISO,European</td></tr>
+          <tr><td>datestyle</td><td>ISO,European</td></tr>
           <tr><td>autovacuum</td><td>off</td></tr>
         </table>
        <p>There are some interesting conclusions, for example the use of Solid State Drives for the laptop.  Even though addressable memory was reduced to 4 GB, the system processed twice as many documents as the desktop did with slower disks.  The other interesting fact is that the server had lower-performing disks, but 4 times as many processors, and it was twice as fast as the laptop.</p>
diff --git a/site/src/documentation/content/xdocs/ja_JP/programmatic-operation.xml b/site/src/documentation/content/xdocs/ja_JP/programmatic-operation.xml
index 87fb638..a656b0b 100644
--- a/site/src/documentation/content/xdocs/ja_JP/programmatic-operation.xml
+++ b/site/src/documentation/content/xdocs/ja_JP/programmatic-operation.xml
@@ -79,9 +79,15 @@
           <p></p>
           <table>
             <tr><th>Resource</th><th>Verb</th><th>What it does</th><th>Input format/query args</th><th>Output format</th></tr>
+            <tr><td>authorizationdomains</td><td>GET</td><td>List all registered authorization domains</td><td>N/A</td><td>{"authorizationdomain":[<em>&lt;list_of_authorization_domain_objects&gt;</em>]} <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
             <tr><td>outputconnectors</td><td>GET</td><td>List all registered output connectors</td><td>N/A</td><td>{"outputconnector":[<em>&lt;list_of_output_connector_objects&gt;</em>]} <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
+            <tr><td>mappingconnectors</td><td>GET</td><td>List all registered mapping connectors</td><td>N/A</td><td>{"mappingconnector":[<em>&lt;list_of_mapping_connector_objects&gt;</em>]} <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
             <tr><td>authorityconnectors</td><td>GET</td><td>List all registered authority connectors</td><td>N/A</td><td>{"authorityconnector":[<em>&lt;list_of_authority_connector_objects&gt;</em>]} <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
             <tr><td>repositoryconnectors</td><td>GET</td><td>List all registered repository connectors</td><td>N/A</td><td>{"repositoryconnector":[<em>&lt;list_of_repository_connector_objects&gt;</em>]} <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
+            <tr><td>authoritygroups</td><td>GET</td><td>List all authority groups</td><td>N/A</td><td>{"authoritygroup":[<em>&lt;list_of_authority_group_objects&gt;</em>]} <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
+            <tr><td>authoritygroups/<em>&lt;encoded_group_name&gt;</em></td><td>GET</td><td>Get a specific authority group</td><td>N/A</td><td>{"authoritygroup":<em>&lt;authority_group_object&gt;</em>} <strong>OR</strong> { } <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
+            <tr><td>authoritygroups/<em>&lt;encoded_group_name&gt;</em></td><td>PUT</td><td>Save or create an authority group</td><td>{"authoritygroup":<em>&lt;authority_group_object&gt;</em>}</td><td>{ } <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
+            <tr><td>authoritygroups/<em>&lt;encoded_group_name&gt;</em></td><td>DELETE</td><td>Delete an authority group</td><td>N/A</td><td>{ } <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
             <tr><td>outputconnections</td><td>GET</td><td>List all output connections</td><td>N/A</td><td>{"outputconnection":[<em>&lt;list_of_output_connection_objects&gt;</em>]} <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
             <tr><td>outputconnections/<em>&lt;encoded_connection_name&gt;</em></td><td>GET</td><td>Get a specific output connection</td><td>N/A</td><td>{"outputconnection":<em>&lt;output_connection_object&gt;</em>} <strong>OR</strong> { } <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
             <tr><td>outputconnections/<em>&lt;encoded_connection_name&gt;</em></td><td>PUT</td><td>Save or create an output connection</td><td>{"outputconnection":<em>&lt;output_connection_object&gt;</em>}</td><td>{ } <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
@@ -89,6 +95,11 @@
             <tr><td>status/outputconnections/<em>&lt;encoded_connection_name&gt;</em></td><td>GET</td><td>Check the status of an output connection</td><td>N/A</td><td>{"check_result":<em>&lt;message&gt;</em>} <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
             <tr><td>info/outputconnections/<em>&lt;encoded_connection_name&gt;</em>/<em>&lt;connector_specific_resource&gt;</em></td><td>GET</td><td>Retrieve arbitrary connector-specific resource</td><td>N/A</td><td><em>&lt;response_data&gt;</em> <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>} <strong>OR</strong> {"service_interruption":<em>&lt;error_text&gt;</em>}</td></tr>
             <tr><td>reset/outputconnections/<em>&lt;encoded_connection_name&gt;</em></td><td>PUT</td><td>Forget previous indexing state</td><td>N/A</td><td>{ } <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
+            <tr><td>mappingconnections</td><td>GET</td><td>List all mapping connections</td><td>N/A</td><td>{"mappingconnection":[<em>&lt;list_of_mapping_connection_objects&gt;</em>]} <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
+            <tr><td>mappingconnections/<em>&lt;encoded_connection_name&gt;</em></td><td>GET</td><td>Get a specific mapping connection</td><td>N/A</td><td>{"mappingconnection":<em>&lt;mapping_connection_object&gt;</em>} <strong>OR</strong> { } <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
+            <tr><td>mappingconnections/<em>&lt;encoded_connection_name&gt;</em></td><td>PUT</td><td>Save or create a mapping connection</td><td>{"mappingconnection":<em>&lt;mapping_connection_object&gt;</em>}</td><td>{ } <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
+            <tr><td>mappingconnections/<em>&lt;encoded_connection_name&gt;</em></td><td>DELETE</td><td>Delete a mapping connection</td><td>N/A</td><td>{ } <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
+            <tr><td>status/mappingconnections/<em>&lt;encoded_connection_name&gt;</em></td><td>GET</td><td>Check the status of a mapping connection</td><td>N/A</td><td>{"check_result":<em>&lt;message&gt;</em>} <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
             <tr><td>authorityconnections</td><td>GET</td><td>List all authority connections</td><td>N/A</td><td>{"authorityconnection":[<em>&lt;list_of_authority_connection_objects&gt;</em>]} <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
             <tr><td>authorityconnections/<em>&lt;encoded_connection_name&gt;</em></td><td>GET</td><td>Get a specific authority connection</td><td>N/A</td><td>{"authorityconnection":<em>&lt;authority_connection_object&gt;</em>} <strong>OR</strong> { } <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
             <tr><td>authorityconnections/<em>&lt;encoded_connection_name&gt;</em></td><td>PUT</td><td>Save or create an authority connection</td><td>{"authorityconnection":<em>&lt;authority_connection_object&gt;</em>}</td><td>{ } <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
@@ -105,8 +116,9 @@
             <tr><td>jobs/<em>&lt;job_id&gt;</em></td><td>GET</td><td>Get a specific job definition</td><td>N/A</td><td>{"job":<em>&lt;job_object_&gt;</em>} <strong>OR</strong> { } <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
             <tr><td>jobs/<em>&lt;job_id&gt;</em></td><td>PUT</td><td>Save a job definition</td><td>{"job":<em>&lt;job_object&gt;</em>}</td><td>{ } <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
             <tr><td>jobs/<em>&lt;job_id&gt;</em></td><td>DELETE</td><td>Delete a job definition</td><td>N/A</td><td>{ } <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
-            <tr><td>jobstatuses</td><td>GET</td><td>List all jobs and their status</td><td>N/A</td><td>{"jobstatus":[<em>&lt;list_of_job_status_objects&gt;</em>]} <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
-            <tr><td>jobstatuses/<em>&lt;job_id&gt;</em></td><td>GET</td><td>Get a specific job's status</td><td>N/A</td><td>{"jobstatus":<em>&lt;job_status_object&gt;</em>} <strong>OR</strong> { } <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>} </td></tr>
+            <tr><td>jobstatuses</td><td>GET</td><td>List all jobs and their status</td><td>maxcount=&lt;maximum_documents_to_count&gt;</td><td>{"jobstatus":[<em>&lt;list_of_job_status_objects&gt;</em>]} <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
+            <tr><td>jobstatuses/<em>&lt;job_id&gt;</em></td><td>GET</td><td>Get a specific job's status</td><td>maxcount=&lt;maximum_documents_to_count&gt;</td><td>{"jobstatus":<em>&lt;job_status_object&gt;</em>} <strong>OR</strong> { } <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>} </td></tr>
+            <tr><td>jobstatusesnocounts</td><td>GET</td><td>List all jobs and their status, returning '0' for all counts</td><td>N/A</td><td>{"jobstatus":[<em>&lt;list_of_job_status_objects&gt;</em>]} <strong>OR</strong> { } <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>} </td></tr>
             <tr><td>jobstatusesnocounts/<em>&lt;job_id&gt;</em></td><td>GET</td><td>Get a specific job's status, returning '0' for all counts</td><td>N/A</td><td>{"jobstatus":<em>&lt;job_status_object&gt;</em>} <strong>OR</strong> { } <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>} </td></tr>
             <tr><td>start/<em>&lt;job_id&gt;</em></td><td>PUT</td><td>Start a specified job manually</td><td>N/A</td><td>{ } <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
             <tr><td>startminimal/<em>&lt;job_id&gt;</em></td><td>PUT</td><td>Start a specified job manually, minimal run requested</td><td>N/A</td><td>{ } <strong>OR</strong> {"error":<em>&lt;error_text&gt;</em>}</td></tr>
@@ -188,6 +200,18 @@
           </table>
         </section>
         <section>
+          <title>Authorization domain objects</title>
+          <p></p>
+          <p>The JSON fields an authorization domain object has are as follows:</p>
+          <p></p>
+          <table>
+            <tr><th>Field</th><th>Meaning</th></tr>
+            <tr><td>"description"</td><td>The optional description of the authorization domain</td></tr>
+            <tr><td>"domain_name"</td><td>The internal name of the authorization domain, i.e. what is sent to the Authority Service</td></tr>
+          </table>
+          <p></p>
+        </section>
+        <section>
           <title>Output connector objects</title>
           <p></p>
           <p>The JSON fields an output connector object has are as follows:</p>
@@ -200,6 +224,18 @@
           <p></p>
         </section>
         <section>
+          <title>Mapping connector objects</title>
+          <p></p>
+          <p>The JSON fields a mapping connector object has are as follows:</p>
+          <p></p>
+          <table>
+            <tr><th>Field</th><th>Meaning</th></tr>
+            <tr><td>"description"</td><td>The optional description of the connector</td></tr>
+            <tr><td>"class_name"</td><td>The class name of the class implementing the connector</td></tr>
+          </table>
+          <p></p>
+        </section>
+        <section>
           <title>Authority connector objects</title>
           <p></p>
           <p>The JSON fields an authority connector object has are as follows:</p>
@@ -224,6 +260,26 @@
           <p></p>
         </section>
         <section>
+          <title>Authority group objects</title>
+          <p></p>
+          <p>Authority group names, when they are part of a URL, should be encoded as follows:</p>
+          <p></p>
+          <ol>
+            <li>All instances of '.' should be replaced by '..'.</li>
+            <li>All instances of '/' should be replaced by '.+'.</li>
+            <li>The URL should be encoded using standard URL utf-8-based %-encoding.</li>
+          </ol>
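+          <p>For example, under these rules a group named <code>eng.group/docs</code> would be sent as <code>eng..group.+docs</code> (before %-encoding).</p>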
+          <p></p>
+          <p>The JSON fields an authority group object has are as follows:</p>
+          <p></p>
+          <table>
+            <tr><th>Field</th><th>Meaning</th></tr>
+            <tr><td>"name"</td><td>The unique name of the group</td></tr>
+            <tr><td>"description"</td><td>The description of the group</td></tr>
+          </table>
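+          <p>For example (both values are hypothetical), an authority group object is as simple as <code>{"name":"myGroup","description":"My authority group"}</code>.</p>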
+          <p></p>
+        </section>
+        <section>
           <title>Output connection objects</title>
           <p></p>
           <p>Output connection names, when they are part of a URL, should be encoded as follows:</p>
@@ -247,6 +303,30 @@
           <p></p>
         </section>
         <section>
+          <title>Mapping connection objects</title>
+          <p></p>
+          <p>Mapping connection names, when they are part of a URL, should be encoded as follows:</p>
+          <p></p>
+          <ol>
+            <li>All instances of '.' should be replaced by '..'.</li>
+            <li>All instances of '/' should be replaced by '.+'.</li>
+            <li>The URL should be encoded using standard URL utf-8-based %-encoding.</li>
+          </ol>
+          <p></p>
+          <p>The JSON fields for a mapping connection object are as follows:</p>
+          <p></p>
+          <table>
+            <tr><th>Field</th><th>Meaning</th></tr>
+            <tr><td>"name"</td><td>The unique name of the connection</td></tr>
+            <tr><td>"description"</td><td>The description of the connection</td></tr>
+            <tr><td>"class_name"</td><td>The java class name of the class implementing the connection</td></tr>
+            <tr><td>"max_connections"</td><td>The total number of outstanding connections allowed to exist at a time</td></tr>
+            <tr><td>"configuration"</td><td>The configuration object for the connection, which is specific to the connection class</td></tr>
+            <tr><td>"prerequisite"</td><td>The mapping connection prerequisite, if any</td></tr>
+          </table>
+          <p></p>
+        </section>
+        <section>
           <title>Authority connection objects</title>
           <p></p>
           <p>Authority connection names, when they are part of a URL, should be encoded as follows:</p>
@@ -266,6 +346,9 @@
             <tr><td>"class_name"</td><td>The java class name of the class implementing the connection</td></tr>
             <tr><td>"max_connections"</td><td>The total number of outstanding connections allowed to exist at a time</td></tr>
             <tr><td>"configuration"</td><td>The configuration object for the connection, which is specific to the connection class</td></tr>
+            <tr><td>"prerequisite"</td><td>The mapping connection prerequisite, if any</td></tr>
+            <tr><td>"authdomain"</td><td>The authorization domain for the authority connection, if any</td></tr>
+            <tr><td>"authgroup"</td><td>The required authority group for the authority connection</td></tr>
           </table>
           <p></p>
         </section>
@@ -289,7 +372,7 @@
             <tr><td>"class_name"</td><td>The java class name of the class implementing the connection</td></tr>
             <tr><td>"max_connections"</td><td>The total number of outstanding connections allowed to exist at a time</td></tr>
             <tr><td>"configuration"</td><td>The configuration object for the connection, which is specific to the connection class</td></tr>
-            <tr><td>"acl_authority"</td><td>The (optional) name of the authority that will enforce security for this connection</td></tr>
+            <tr><td>"acl_authority"</td><td>The (optional) name of the authority group that will enforce security for this connection</td></tr>
             <tr><td>"throttle"</td><td>An array of throttle objects, which control how quickly documents can be requested from this connection</td></tr>
           </table>
           <p></p>
@@ -408,6 +491,8 @@
           <tr><td>org.apache.manifoldcf.authorities.CheckAll</td><td>Check all authorities to be sure they are functioning</td></tr>
           <tr><td>org.apache.manifoldcf.authorities.DefineAuthorityConnection</td><td>Create a new authority connection</td></tr>
           <tr><td>org.apache.manifoldcf.authorities.DeleteAuthorityConnection</td><td>Delete an existing authority connection</td></tr>
+          <tr><td>org.apache.manifoldcf.authorities.DefineMappingConnection</td><td>Create a new mapping connection</td></tr>
+          <tr><td>org.apache.manifoldcf.authorities.DeleteMappingConnection</td><td>Delete an existing mapping connection</td></tr>
           <tr><td>org.apache.manifoldcf.crawler.AbortJob</td><td>Abort a running job</td></tr>
           <tr><td>org.apache.manifoldcf.crawler.AddScheduledTime</td><td>Add a schedule record to a job</td></tr>
           <tr><td>org.apache.manifoldcf.crawler.ChangeJobDocSpec</td><td>Modify a job's specification information</td></tr>
diff --git a/site/src/documentation/content/xdocs/ja_JP/technical-resources.xml b/site/src/documentation/content/xdocs/ja_JP/technical-resources.xml
index 6fff5ab..187b038 100644
--- a/site/src/documentation/content/xdocs/ja_JP/technical-resources.xml
+++ b/site/src/documentation/content/xdocs/ja_JP/technical-resources.xml
@@ -49,6 +49,7 @@
 	</p> 
 	<ul>
 	    <li><a href="writing-output-connectors.html">出力コネクションの作成</a></li>
+	    <li><a href="writing-mapping-connectors.html">How to write a user mapping connector</a></li>
 	    <li><a href="writing-authority-connectors.html">権限コネクションの作成</a></li>
 	    <li><a href="writing-repository-connectors.html">リポジトリコネクションの作成</a></li>
 	</ul>
diff --git a/site/src/documentation/content/xdocs/ja_JP/writing-mapping-connectors.xml b/site/src/documentation/content/xdocs/ja_JP/writing-mapping-connectors.xml
new file mode 100644
index 0000000..4b60153
--- /dev/null
+++ b/site/src/documentation/content/xdocs/ja_JP/writing-mapping-connectors.xml
@@ -0,0 +1,144 @@
+<?xml version="1.0"?>
+
+<!DOCTYPE document PUBLIC "-//APACHE//DTD Documentation V2.0//EN" 
+          "http://forrest.apache.org/dtd/document-v20.dtd">
+
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<document> 
+
+  <header> 
+    <title>Writing user mapping connectors</title> 
+  </header> 
+
+  <body> 
+    <section>
+      <title>Writing a User Mapping Connector</title>
+      <p></p>
+      <p>A user mapping connector allows a user name to be transformed in a manner that depends on the functionality of the connector.  In some cases, no connection to
+            an external repository is required (for example, simple string transformations), while in others one might imagine such a connector consulting with (say) an
+            LDAP system to look up a specific name.</p>
+      <p></p>
+      <p>A user name is just a string that represents a user identity.  Some user names have specific forms - for instance, Active Directory user names are
+            often represented in the form <code>user@domain</code>.  But, most importantly, the exact name used can often depend on the particular system being addressed.</p>
+      <p></p>
+      <p>As is the case with all connectors under the ManifoldCF umbrella, a user mapping connector consists of a single part:</p>
+      <p></p>
+      <ul>
+        <li>A class implementing an interface (in this case, <em>org.apache.manifoldcf.authorities.interfaces.IMappingConnector</em>)</li>
+      </ul>
+      <p></p>
+      <section>
+        <title>Key concepts</title>
+        <p></p>
+        <p>The mapping connector abstraction makes use of, or introduces, the following concepts:</p>
+        <p></p>
+        <table>
+          <tr><th>Concept</th><th>What it is</th></tr>
+          <tr><td>Configuration parameters</td><td>A hierarchical structure, internally represented as an XML document, which describes a specific configuration of a specific mapping connector, i.e. <strong>how</strong> the connector should do its job; see <em>org.apache.manifoldcf.core.interfaces.ConfigParams</em></td></tr>
+          <tr><td>Mapping connection</td><td>A mapping connector instance that has been furnished with configuration data</td></tr>
+          <tr><td>User name</td><td>The name of a user, which is often a Kerberos principal name, e.g. <em>john@apache.org</em></td></tr>
+          <tr><td>Connection management/threading/pooling model</td><td>How an individual mapping connector class instance is managed and used</td></tr>
+        </table>
+        <p></p>
+      </section>
+      <section>
+        <title>Implementing the Mapping Connector class</title>
+        <p></p>
+        <p>A very good place to start is to read the javadoc for the mapping connector interface.  You will note that the javadoc describes the usage and pooling model for a
+              connector class pretty thoroughly.  It is very important to understand this model in order to write reliable connectors!  Use of static variables, for one thing,
+              must be done in a very careful way, to avoid issues that would be hard to detect with a cursory test.</p>
+        <p></p>
+        <p>The second thing to do is to examine the provided mapping connector implementation.  The only connector presently included (the Regular Expression
+              user mapping connector) demonstrates some of the sorts of techniques you will need for an effective
+              implementation.  You will also note that it extends a framework-provided mapping connector base class, found at
+              <em>org.apache.manifoldcf.authorities.mappers.BaseMappingConnector</em>.  This base class furnishes some basic bookkeeping logic for managing the
+              connector pool, as well as default implementations of some of the less typical functionality a connector may have.  For example, connectors are allowed to have
+              database tables of their own, which are instantiated when the connector is registered, and are torn down when the connector is removed.  This is, however, not
+              very typical, and the base implementation reflects that.</p>
+        <p></p>
+        <section>
+          <title>Principal methods</title>
+          <p></p>
+          <p>The principal methods an implementer should be concerned with when creating a mapping connector are the following:</p>
+          <p></p>
+          <table>
+            <tr><th>Method</th><th>What it should do</th></tr>
+            <tr><td><strong>mapUser()</strong></td><td>Given an input user name, find the corresponding output user name</td></tr>
+            <tr><td><strong>outputConfigurationHeader()</strong></td><td>Output the head-section part of a mapping connection <em>ConfigParams</em> editing page</td></tr>
+            <tr><td><strong>outputConfigurationBody()</strong></td><td>Output the body-section part of a mapping connection <em>ConfigParams</em> editing page</td></tr>
+            <tr><td><strong>processConfigurationPost()</strong></td><td>Receive and process form data from a mapping connection <em>ConfigParams</em> editing page</td></tr>
+            <tr><td><strong>viewConfiguration()</strong></td><td>Output the viewing HTML for a mapping connection <em>ConfigParams</em> object</td></tr>
+          </table>
+          <p></p>
+          <p>These methods come in two broad classes: (a) functional methods for doing the work of the connector; (b) UI methods for configuring a connection.  Together they
+                do the heavy lifting of your connector.</p>
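+          <p>As a minimal sketch only (the method signature shown should be confirmed against the <em>IMappingConnector</em> javadoc, and the class name and domain below are hypothetical), a trivial mapper might look like this:</p>
+          <source>
+public class MyMappingConnector extends org.apache.manifoldcf.authorities.mappers.BaseMappingConnector
+{
+  // Hypothetical example: normalize the incoming user name to lower case and
+  // qualify it with a fixed domain before handing it to downstream authorities.
+  public String mapUser(String userName)
+    throws org.apache.manifoldcf.core.interfaces.ManifoldCFException
+  {
+    return userName.toLowerCase() + "@example.com";
+  }
+}
+</source>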
+          <p></p>
+          <p></p>
+        </section>
+        <section>
+          <title>Notes on connector UI methods</title>
+          <p></p>
+          <p>The crawler UI uses a tabbed layout structure, and thus each of these elements must properly implement the tabbed model.  This means that the "header" methods 
+                above must add the desired tab names to a specified array, and the "body" methods must provide appropriate HTML which handles both the case where a tab is
+                displayed, and where it is not displayed.  Also, it makes sense to use the appropriate css definitions, so that the connector UI pages have a similar look-and-feel
+                to the rest of ManifoldCF's crawler UI.  We strongly suggest starting with the supplied mapping connector's UI code, both for a description of the arguments
+                to each page, and for some decent ideas of ways to organize your connector's UI code.</p>
+          <p></p>
+        </section>
+      </section>
+      <section>
+        <title>Implementation support provided by the framework</title>
+        <p></p>
+        <p>ManifoldCF's framework provides a number of helpful services designed to make the creation of a connector easier.  These services are summarized below.
+              (This is not an exhaustive list, by any means.)</p>
+        <p></p>
+        <ul>
+          <li>Lock management and synchronization (see <em>org.apache.manifoldcf.core.interfaces.LockManagerFactory</em>)</li>
+          <li>Cache management (see <em>org.apache.manifoldcf.core.interfaces.CacheManagerFactory</em>)</li>
+          <li>Local keystore management (see <em>org.apache.manifoldcf.core.interfaces.KeystoreManagerFactory</em>)</li>
+          <li>Database management (see <em>org.apache.manifoldcf.core.interfaces.DBInterfaceFactory</em>)</li>
+        </ul>
+        <p></p>
+        <p>For UI method support, these too are very useful:</p>
+        <p></p>
+        <ul>
+          <li>Multipart form processing (see <em>org.apache.manifoldcf.ui.multipart.MultipartWrapper</em>)</li>
+          <li>HTML encoding (see <em>org.apache.manifoldcf.ui.util.Encoder</em>)</li>
+          <li>HTML formatting (see <em>org.apache.manifoldcf.ui.util.Formatter</em>)</li>
+        </ul>
+        <p></p>
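+        <p>The sketch below, for example, shows the HTML encoding helper in use.  The helper method names are assumptions based on how the shipped connectors escape values
+              before echoing them into generated pages; the surrounding variable names are purely illustrative:</p>
+        <source>
+// Escape user-supplied values before writing them into generated HTML,
+// so that quotes and markup characters cannot break the page.
+String safeAttributeValue =
+  org.apache.manifoldcf.ui.util.Encoder.attributeEscape(matchString);
+String safeBodyText =
+  org.apache.manifoldcf.ui.util.Encoder.bodyEscape(description);
+</source>
+        <p></p>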
+      </section>
+      <section>
+        <title>DO's and DON'T DO's</title>
+        <p></p>
+        <p>It's always a good idea to make use of an existing infrastructure component, if it's meant for that purpose, rather than inventing your own.  There are, however,
+              some limitations we recommend you adhere to.</p>
+        <p></p>
+        <ul>
+          <li>DO make use of infrastructure components described in the section above</li>
+          <li>DON'T make use of infrastructure components that aren't mentioned, without checking first</li>
+          <li>NEVER write connector code that directly uses framework database tables, other than the ones installed and managed by your connector</li>
+        </ul>
+        <p></p>
+        <p>If you are tempted to violate these rules, it may well mean you don't understand something important.  At the very least, we'd like to know why.  Send email
+              to dev@manifoldcf.apache.org with a description of your problem and how you are tempted to solve it.</p>
+      </section>
+    </section>
+  </body>
+</document>
\ No newline at end of file
diff --git a/site/src/documentation/resources/images/en_US/add-new-authority-connection-type.PNG b/site/src/documentation/resources/images/en_US/add-new-authority-connection-type.PNG
index bffa3b3..3e5f010 100644
--- a/site/src/documentation/resources/images/en_US/add-new-authority-connection-type.PNG
+++ b/site/src/documentation/resources/images/en_US/add-new-authority-connection-type.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/add-new-authority-group-name.PNG b/site/src/documentation/resources/images/en_US/add-new-authority-group-name.PNG
new file mode 100644
index 0000000..0e02f64
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/add-new-authority-group-name.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/add-new-mapping-connection-name.PNG b/site/src/documentation/resources/images/en_US/add-new-mapping-connection-name.PNG
new file mode 100644
index 0000000..1094403
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/add-new-mapping-connection-name.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/add-new-mapping-connection-type.PNG b/site/src/documentation/resources/images/en_US/add-new-mapping-connection-type.PNG
new file mode 100644
index 0000000..a9631c6
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/add-new-mapping-connection-type.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/alfresco-repository-connection-configuration-save.png b/site/src/documentation/resources/images/en_US/alfresco-repository-connection-configuration-save.png
index 81ca954..a1228c9 100644
--- a/site/src/documentation/resources/images/en_US/alfresco-repository-connection-configuration-save.png
+++ b/site/src/documentation/resources/images/en_US/alfresco-repository-connection-configuration-save.png
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/alfresco-repository-connection-configuration.png b/site/src/documentation/resources/images/en_US/alfresco-repository-connection-configuration.png
index 4b0fe02..3c88223 100644
--- a/site/src/documentation/resources/images/en_US/alfresco-repository-connection-configuration.png
+++ b/site/src/documentation/resources/images/en_US/alfresco-repository-connection-configuration.png
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/authority-prerequisites.PNG b/site/src/documentation/resources/images/en_US/authority-prerequisites.PNG
new file mode 100644
index 0000000..b35317f
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/authority-prerequisites.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/dropbox-repository-application-secret-passwords.PNG b/site/src/documentation/resources/images/en_US/dropbox-repository-application-secret-passwords.PNG
new file mode 100644
index 0000000..5a7b728
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/dropbox-repository-application-secret-passwords.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/dropbox-repository-connection-configuration-save.PNG b/site/src/documentation/resources/images/en_US/dropbox-repository-connection-configuration-save.PNG
new file mode 100644
index 0000000..38e9f73
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/dropbox-repository-connection-configuration-save.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/dropbox-repository-connection-configuration.PNG b/site/src/documentation/resources/images/en_US/dropbox-repository-connection-configuration.PNG
new file mode 100644
index 0000000..7c7da3a
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/dropbox-repository-connection-configuration.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/dropbox-repository-connection-job-dropbox-folder-to-index.PNG b/site/src/documentation/resources/images/en_US/dropbox-repository-connection-job-dropbox-folder-to-index.PNG
new file mode 100644
index 0000000..8d1a153
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/dropbox-repository-connection-job-dropbox-folder-to-index.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/dropbox-repository-connection-job-save.PNG b/site/src/documentation/resources/images/en_US/dropbox-repository-connection-job-save.PNG
new file mode 100644
index 0000000..e51e624
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/dropbox-repository-connection-job-save.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/dropbox-repository-create-application.PNG b/site/src/documentation/resources/images/en_US/dropbox-repository-create-application.PNG
new file mode 100644
index 0000000..b33b9b2
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/dropbox-repository-create-application.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/email-configure-server.PNG b/site/src/documentation/resources/images/en_US/email-configure-server.PNG
new file mode 100644
index 0000000..e496897
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/email-configure-server.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/email-configure-url.PNG b/site/src/documentation/resources/images/en_US/email-configure-url.PNG
new file mode 100644
index 0000000..dfe9b71
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/email-configure-url.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/email-job-filter.PNG b/site/src/documentation/resources/images/en_US/email-job-filter.PNG
new file mode 100644
index 0000000..c6b7ce7
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/email-job-filter.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/email-job-metadata.PNG b/site/src/documentation/resources/images/en_US/email-job-metadata.PNG
new file mode 100644
index 0000000..e46c18c
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/email-job-metadata.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/email-status.PNG b/site/src/documentation/resources/images/en_US/email-status.PNG
new file mode 100644
index 0000000..4fba838
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/email-status.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/filenet-configure-credentials.PNG b/site/src/documentation/resources/images/en_US/filenet-configure-credentials.PNG
new file mode 100644
index 0000000..7858522
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/filenet-configure-credentials.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/filenet-configure-documenturl.PNG b/site/src/documentation/resources/images/en_US/filenet-configure-documenturl.PNG
new file mode 100644
index 0000000..d2cbf58
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/filenet-configure-documenturl.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/filenet-configure-objectstore.PNG b/site/src/documentation/resources/images/en_US/filenet-configure-objectstore.PNG
new file mode 100644
index 0000000..47c2450
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/filenet-configure-objectstore.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/filenet-configure-server.PNG b/site/src/documentation/resources/images/en_US/filenet-configure-server.PNG
new file mode 100644
index 0000000..7050140
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/filenet-configure-server.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/filenet-status.PNG b/site/src/documentation/resources/images/en_US/filenet-status.PNG
new file mode 100644
index 0000000..0796dd4
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/filenet-status.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/filesystem-job-output-path.PNG b/site/src/documentation/resources/images/en_US/filesystem-job-output-path.PNG
new file mode 100644
index 0000000..c115708
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/filesystem-job-output-path.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/filesystem-job-paths.PNG b/site/src/documentation/resources/images/en_US/filesystem-job-paths.PNG
index bed8771..7036308 100644
--- a/site/src/documentation/resources/images/en_US/filesystem-job-paths.PNG
+++ b/site/src/documentation/resources/images/en_US/filesystem-job-paths.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/googledrive-repository-connection-configuration-save.PNG b/site/src/documentation/resources/images/en_US/googledrive-repository-connection-configuration-save.PNG
new file mode 100644
index 0000000..86c42b8
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/googledrive-repository-connection-configuration-save.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/googledrive-repository-connection-configuration.PNG b/site/src/documentation/resources/images/en_US/googledrive-repository-connection-configuration.PNG
new file mode 100644
index 0000000..482589e
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/googledrive-repository-connection-configuration.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/googledrive-repository-connection-job-googledrive-seed-query.PNG b/site/src/documentation/resources/images/en_US/googledrive-repository-connection-job-googledrive-seed-query.PNG
new file mode 100644
index 0000000..dc52764
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/googledrive-repository-connection-job-googledrive-seed-query.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/googledrive-repository-setup-1.PNG b/site/src/documentation/resources/images/en_US/googledrive-repository-setup-1.PNG
new file mode 100644
index 0000000..0ac4c2e
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/googledrive-repository-setup-1.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/googledrive-repository-setup-2.PNG b/site/src/documentation/resources/images/en_US/googledrive-repository-setup-2.PNG
new file mode 100644
index 0000000..4800c35
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/googledrive-repository-setup-2.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/googledrive-repository-setup-3.PNG b/site/src/documentation/resources/images/en_US/googledrive-repository-setup-3.PNG
new file mode 100644
index 0000000..c82e350
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/googledrive-repository-setup-3.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/googledrive-repository-setup-4.PNG b/site/src/documentation/resources/images/en_US/googledrive-repository-setup-4.PNG
new file mode 100644
index 0000000..c500993
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/googledrive-repository-setup-4.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/googledrive-repository-setup-5.PNG b/site/src/documentation/resources/images/en_US/googledrive-repository-setup-5.PNG
new file mode 100644
index 0000000..fc79595
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/googledrive-repository-setup-5.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/hdfs-configure-server.PNG b/site/src/documentation/resources/images/en_US/hdfs-configure-server.PNG
new file mode 100644
index 0000000..38c5822
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/hdfs-configure-server.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/hdfs-job-hopcount.PNG b/site/src/documentation/resources/images/en_US/hdfs-job-hopcount.PNG
new file mode 100644
index 0000000..a0e37f6
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/hdfs-job-hopcount.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/hdfs-job-output-path.PNG b/site/src/documentation/resources/images/en_US/hdfs-job-output-path.PNG
new file mode 100644
index 0000000..aff1eed
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/hdfs-job-output-path.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/hdfs-job-paths.PNG b/site/src/documentation/resources/images/en_US/hdfs-job-paths.PNG
new file mode 100644
index 0000000..e186018
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/hdfs-job-paths.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/hdfs-repository-configure-server.PNG b/site/src/documentation/resources/images/en_US/hdfs-repository-configure-server.PNG
new file mode 100644
index 0000000..7b5c1cb
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/hdfs-repository-configure-server.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/jdbc-authority-configure-credentials.PNG b/site/src/documentation/resources/images/en_US/jdbc-authority-configure-credentials.PNG
new file mode 100644
index 0000000..af89d98
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/jdbc-authority-configure-credentials.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/jdbc-authority-configure-database-type.PNG b/site/src/documentation/resources/images/en_US/jdbc-authority-configure-database-type.PNG
new file mode 100644
index 0000000..807a116
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/jdbc-authority-configure-database-type.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/jdbc-authority-configure-queries.PNG b/site/src/documentation/resources/images/en_US/jdbc-authority-configure-queries.PNG
new file mode 100644
index 0000000..0be22a0
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/jdbc-authority-configure-queries.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/jdbc-authority-configure-server.PNG b/site/src/documentation/resources/images/en_US/jdbc-authority-configure-server.PNG
new file mode 100644
index 0000000..1e0cdc9
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/jdbc-authority-configure-server.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/jdbc-authority-status.PNG b/site/src/documentation/resources/images/en_US/jdbc-authority-status.PNG
new file mode 100644
index 0000000..753aeb9
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/jdbc-authority-status.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/jdbc-configure-server.PNG b/site/src/documentation/resources/images/en_US/jdbc-configure-server.PNG
index 0235328..7509780 100644
--- a/site/src/documentation/resources/images/en_US/jdbc-configure-server.PNG
+++ b/site/src/documentation/resources/images/en_US/jdbc-configure-server.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/jdbc-status.PNG b/site/src/documentation/resources/images/en_US/jdbc-status.PNG
index 655d485..f0dc9eb 100644
--- a/site/src/documentation/resources/images/en_US/jdbc-status.PNG
+++ b/site/src/documentation/resources/images/en_US/jdbc-status.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/jira-repository-connection-configuration-save.PNG b/site/src/documentation/resources/images/en_US/jira-repository-connection-configuration-save.PNG
new file mode 100644
index 0000000..1920140
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/jira-repository-connection-configuration-save.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/jira-repository-connection-configuration.PNG b/site/src/documentation/resources/images/en_US/jira-repository-connection-configuration.PNG
new file mode 100644
index 0000000..132e194
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/jira-repository-connection-configuration.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/jira-repository-connection-job-jira-seed-query.PNG b/site/src/documentation/resources/images/en_US/jira-repository-connection-job-jira-seed-query.PNG
new file mode 100644
index 0000000..fef100f
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/jira-repository-connection-job-jira-seed-query.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/list-authority-groups.PNG b/site/src/documentation/resources/images/en_US/list-authority-groups.PNG
new file mode 100644
index 0000000..defcf1a
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/list-authority-groups.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/list-mapping-connections.PNG b/site/src/documentation/resources/images/en_US/list-mapping-connections.PNG
new file mode 100644
index 0000000..de42ba2
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/list-mapping-connections.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/login.PNG b/site/src/documentation/resources/images/en_US/login.PNG
new file mode 100644
index 0000000..8dba553
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/login.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/mapping-prerequisites.PNG b/site/src/documentation/resources/images/en_US/mapping-prerequisites.PNG
new file mode 100644
index 0000000..7cfecdf
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/mapping-prerequisites.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/mapping-throttling.PNG b/site/src/documentation/resources/images/en_US/mapping-throttling.PNG
new file mode 100644
index 0000000..38703c2
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/mapping-throttling.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/regexp-mapping-status.PNG b/site/src/documentation/resources/images/en_US/regexp-mapping-status.PNG
new file mode 100644
index 0000000..035d9a7
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/regexp-mapping-status.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/regexp-mapping-user-mapping.PNG b/site/src/documentation/resources/images/en_US/regexp-mapping-user-mapping.PNG
new file mode 100644
index 0000000..691b9f3
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/regexp-mapping-user-mapping.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/sharepoint-configure-authoritytype.PNG b/site/src/documentation/resources/images/en_US/sharepoint-configure-authoritytype.PNG
new file mode 100644
index 0000000..581e98c
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/sharepoint-configure-authoritytype.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/sharepoint-status.PNG b/site/src/documentation/resources/images/en_US/sharepoint-status.PNG
index b208c08..cc02e18 100644
--- a/site/src/documentation/resources/images/en_US/sharepoint-status.PNG
+++ b/site/src/documentation/resources/images/en_US/sharepoint-status.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/sharepointadauthority-configure-cache.PNG b/site/src/documentation/resources/images/en_US/sharepointadauthority-configure-cache.PNG
new file mode 100644
index 0000000..6e5bc84
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/sharepointadauthority-configure-cache.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/sharepointadauthority-configure-dc.PNG b/site/src/documentation/resources/images/en_US/sharepointadauthority-configure-dc.PNG
new file mode 100644
index 0000000..a9c1cfb
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/sharepointadauthority-configure-dc.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/sharepointadauthority-status.PNG b/site/src/documentation/resources/images/en_US/sharepointadauthority-status.PNG
new file mode 100644
index 0000000..a38a4e2
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/sharepointadauthority-status.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/sharepointnativeauthority-configure-cache.PNG b/site/src/documentation/resources/images/en_US/sharepointnativeauthority-configure-cache.PNG
new file mode 100644
index 0000000..85a5608
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/sharepointnativeauthority-configure-cache.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/sharepointnativeauthority-configure-server.PNG b/site/src/documentation/resources/images/en_US/sharepointnativeauthority-configure-server.PNG
new file mode 100644
index 0000000..322f5b4
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/sharepointnativeauthority-configure-server.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/sharepointnativeauthority-status.PNG b/site/src/documentation/resources/images/en_US/sharepointnativeauthority-status.PNG
new file mode 100644
index 0000000..1c2ceff
--- /dev/null
+++ b/site/src/documentation/resources/images/en_US/sharepointnativeauthority-status.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/web-configure-access-credentials-session-form.PNG b/site/src/documentation/resources/images/en_US/web-configure-access-credentials-session-form.PNG
index 9657cd0..8354e21 100644
--- a/site/src/documentation/resources/images/en_US/web-configure-access-credentials-session-form.PNG
+++ b/site/src/documentation/resources/images/en_US/web-configure-access-credentials-session-form.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/web-configure-access-credentials-session.PNG b/site/src/documentation/resources/images/en_US/web-configure-access-credentials-session.PNG
index ff15273..e147502 100644
--- a/site/src/documentation/resources/images/en_US/web-configure-access-credentials-session.PNG
+++ b/site/src/documentation/resources/images/en_US/web-configure-access-credentials-session.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/web-status.PNG b/site/src/documentation/resources/images/en_US/web-status.PNG
index 932592f..c036aaa 100644
--- a/site/src/documentation/resources/images/en_US/web-status.PNG
+++ b/site/src/documentation/resources/images/en_US/web-status.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/en_US/welcome-screen.PNG b/site/src/documentation/resources/images/en_US/welcome-screen.PNG
index 63b3c39..b1707fd 100644
--- a/site/src/documentation/resources/images/en_US/welcome-screen.PNG
+++ b/site/src/documentation/resources/images/en_US/welcome-screen.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/ja_JP/dropbox-repository-application-secret-passwords.PNG b/site/src/documentation/resources/images/ja_JP/dropbox-repository-application-secret-passwords.PNG
new file mode 100644
index 0000000..5a7b728
--- /dev/null
+++ b/site/src/documentation/resources/images/ja_JP/dropbox-repository-application-secret-passwords.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/ja_JP/dropbox-repository-connection-configuration-save.PNG b/site/src/documentation/resources/images/ja_JP/dropbox-repository-connection-configuration-save.PNG
new file mode 100644
index 0000000..38e9f73
--- /dev/null
+++ b/site/src/documentation/resources/images/ja_JP/dropbox-repository-connection-configuration-save.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/ja_JP/dropbox-repository-connection-configuration.PNG b/site/src/documentation/resources/images/ja_JP/dropbox-repository-connection-configuration.PNG
new file mode 100644
index 0000000..7c7da3a
--- /dev/null
+++ b/site/src/documentation/resources/images/ja_JP/dropbox-repository-connection-configuration.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/ja_JP/dropbox-repository-connection-job-dropbox-folder-to-index.PNG b/site/src/documentation/resources/images/ja_JP/dropbox-repository-connection-job-dropbox-folder-to-index.PNG
new file mode 100644
index 0000000..8d1a153
--- /dev/null
+++ b/site/src/documentation/resources/images/ja_JP/dropbox-repository-connection-job-dropbox-folder-to-index.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/ja_JP/dropbox-repository-connection-job-save.PNG b/site/src/documentation/resources/images/ja_JP/dropbox-repository-connection-job-save.PNG
new file mode 100644
index 0000000..e51e624
--- /dev/null
+++ b/site/src/documentation/resources/images/ja_JP/dropbox-repository-connection-job-save.PNG
Binary files differ
diff --git a/site/src/documentation/resources/images/ja_JP/dropbox-repository-create-application.PNG b/site/src/documentation/resources/images/ja_JP/dropbox-repository-create-application.PNG
new file mode 100644
index 0000000..b33b9b2
--- /dev/null
+++ b/site/src/documentation/resources/images/ja_JP/dropbox-repository-create-application.PNG
Binary files differ
diff --git a/site/src/documentation/skinconf.xml b/site/src/documentation/skinconf.xml
index 50ca3b7..3e6d871 100644
--- a/site/src/documentation/skinconf.xml
+++ b/site/src/documentation/skinconf.xml
@@ -104,6 +104,7 @@
     QBase, MetaCarta, and GTS are trademarks of QBase, Inc.
     Meridio and Autonomy are trademarks of Hewlett Packard, Inc.
     Alfresco is a trademark of Alfresco Software, Inc.
+    Jira is a trademark of Atlassian, Inc.
   </trademark-statement>
 
   <!-- Some skins use this to form a 'breadcrumb trail' of links.
diff --git a/test-materials/alfresco-4-war/jetty/jetty-env.xml b/test-materials/alfresco-4-war/jetty/jetty-env.xml
new file mode 100755
index 0000000..6d01acb
--- /dev/null
+++ b/test-materials/alfresco-4-war/jetty/jetty-env.xml
@@ -0,0 +1,32 @@
+<?xml version="1.0"?>
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+<!DOCTYPE Configure PUBLIC "-//Mort Bay Consulting//DTD Configure//EN" "http://jetty.mortbay.org/configure.dtd">
+
+<Configure class="org.mortbay.jetty.webapp.WebAppContext">
+    <New id="myDataSource"
+         class="org.mortbay.jetty.plus.naming.Resource">
+        <Arg>jdbc/dataSource</Arg>
+        <Arg>
+            <New class="org.h2.jdbcx.JdbcDataSource">
+                <Set name="URL">jdbc:h2:alf_data_jetty/h2_data/alf_jetty</Set>
+                <Set name="User">alfresco</Set>
+                <Set name="Password">alfresco</Set>
+            </New>
+        </Arg>
+    </New>
+</Configure>
diff --git a/test-materials/alfresco-4-war/pom.xml b/test-materials/alfresco-4-war/pom.xml
new file mode 100644
index 0000000..398f93a
--- /dev/null
+++ b/test-materials/alfresco-4-war/pom.xml
@@ -0,0 +1,140 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!-- Licensed to the Apache Software Foundation (ASF) under one or more contributor 
+    license agreements. See the NOTICE file distributed with this work for additional 
+    information regarding copyright ownership. The ASF licenses this file to 
+    You under the Apache License, Version 2.0 (the "License"); you may not use 
+    this file except in compliance with the License. You may obtain a copy of 
+    the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required 
+    by applicable law or agreed to in writing, software distributed under the 
+    License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS 
+    OF ANY KIND, either express or implied. See the License for the specific 
+    language governing permissions and limitations under the License. -->
+    
+<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
+    <modelVersion>4.0.0</modelVersion>
+    <artifactId>alfresco-4-war</artifactId>
+    <version>1.5-SNAPSHOT</version>
+    <name>ManifoldCF - Test Materials - Alfresco WAR</name>
+    <description>Alfresco WAR builder</description>
+    <packaging>war</packaging>
+
+    <parent>
+        <groupId>org.apache.manifoldcf</groupId>
+        <artifactId>mcf-test-materials</artifactId>
+        <version>1.5-SNAPSHOT</version>
+    </parent>
+    
+    <properties>
+        <alfresco.groupId>org.alfresco</alfresco.groupId>
+        <alfresco.version>4.2.c</alfresco.version>
+        <app.log.root.level>WARN</app.log.root.level>
+        <alfresco.data.location>alf_data_dev</alfresco.data.location>
+        <!-- This controls which properties will be picked in src/test/properties for embedded run -->
+        <env>local</env>
+        <alfresco.db.name>alf_jetty</alfresco.db.name>
+        <alfresco.db.url>jdbc:h2:${alfresco.data.location}/h2_data/${alfresco.db.name}</alfresco.db.url>
+        <alfresco.db.driver>org.h2.Driver</alfresco.db.driver>
+        <alfresco.db.username>alfresco</alfresco.db.username>
+        <alfresco.db.password>alfresco</alfresco.db.password>
+        <alfresco.db.hibernate.dialect>org.hibernate.dialect.H2Dialect</alfresco.db.hibernate.dialect>
+    </properties>
+    
+    <!-- Here we realize the connection with the Alfresco selected platform (e.g.version and edition) -->
+   <dependencyManagement>
+     <dependencies>
+          <!-- This will import the dependencyManagement for all artifacts in the selected Alfresco platform
+               (see http://maven.apache.org/guides/introduction/introduction-to-dependency-mechanism.html#Importing_Dependencies)
+               NOTE: You still need to define dependencies in your POM, but you can omit version as it's enforced by this dependencyManagement.
+               NOTE: It defaults to the latest version this SDK pom has been tested with, but alfresco version can/should be overridden in your project's pom   
+           -->
+          <dependency>
+              <groupId>${alfresco.groupId}</groupId>
+              <artifactId>alfresco-platform-distribution</artifactId>
+              <version>${alfresco.version}</version>
+              <type>pom</type>
+              <scope>import</scope>
+          </dependency>
+     </dependencies>
+  </dependencyManagement>
+  
+    <!--
+    No more repos are needed since they will be inherited from the parent POM.
+    This is needed to download the alfresco-platform POM.
+    -->
+    <repositories>
+        <repository>
+            <id>alfresco-artifacts</id>
+            <url>https://artifacts.alfresco.com/nexus/content/groups/public</url>
+        </repository>
+        <repository>
+            <id>alfresco-artifacts-snapshots</id>
+            <url>https://artifacts.alfresco.com/nexus/content/groups/public-snapshots</url>
+        </repository>
+    </repositories>
+
+    <dependencies>
+        <dependency>
+            <groupId>${alfresco.groupId}</groupId>
+            <artifactId>alfresco</artifactId>
+            <type>war</type>
+        </dependency>
+        <dependency>
+		      <groupId>tk.skuro.alfresco</groupId>
+		      <artifactId>h2-support</artifactId>
+		      <version>1.5</version>
+		    </dependency>
+		    <dependency>
+		        <groupId>com.h2database</groupId>
+		        <artifactId>h2</artifactId>
+		        <version>1.3.172</version>
+		    </dependency>
+    </dependencies>
+
+    <build>
+        <finalName>alfresco</finalName>
+        <!--
+      In certain cases we do build-time filtering of the single-sourced
+      alfresco-global.properties
+    -->
+    <filters>
+      <filter>src/main/properties/${env}/alfresco-global.properties</filter>
+    </filters>
+    
+    <resources>
+      <resource>
+        <directory>src/main/properties/${env}</directory>
+        <includes>
+          <include>alfresco-global.properties</include>
+        </includes>
+        <filtering>true</filtering>
+      </resource>
+    </resources>
+    
+        <plugins>
+            <plugin>
+                <artifactId>maven-war-plugin</artifactId>
+                <configuration>
+                    <!--  Here you can control the order of overlay of your (WAR, AMP, etc.) dependencies
+                        | NOTE: At least one WAR dependency must be uncompressed first
+                        | NOTE: In order to have a dependency effectively added to the WAR you need to 
+                        | explicitly mention it in the overlay section.
+                        | NOTE: First-win resource strategy is used by the WAR plugin
+                         -->
+                    <overlays>
+                        <!-- Current project customizations -->
+                        <overlay/>
+                        <!-- The Alfresco WAR -->
+                        <overlay>
+                            <groupId>${alfresco.groupId}</groupId>
+                            <artifactId>alfresco</artifactId>
+                            <type>war</type>
+                            <!-- To allow inclusion of META-INF -->
+                            <excludes/>
+                        </overlay>
+                    </overlays>
+                </configuration>
+            </plugin>
+        </plugins>
+    </build>
+</project>
diff --git a/test-materials/alfresco-4-war/src/main/properties/local/alfresco-global.properties b/test-materials/alfresco-4-war/src/main/properties/local/alfresco-global.properties
new file mode 100644
index 0000000..94712e6
--- /dev/null
+++ b/test-materials/alfresco-4-war/src/main/properties/local/alfresco-global.properties
@@ -0,0 +1,316 @@
+#    Licensed to the Apache Software Foundation (ASF) under one or more
+#    contributor license agreements.  See the NOTICE file distributed with
+#    this work for additional information regarding copyright ownership.
+#    The ASF licenses this file to You under the Apache License, Version 2.0
+#    (the "License"); you may not use this file except in compliance with
+#    the License.  You may obtain a copy of the License at
+#    
+#    http://www.apache.org/licenses/LICENSE-2.0
+#    
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+# RUN TIME PROPERTIES
+# -------------------
+
+# Sample custom content and index data location
+# This will create alf_data Relative to appserver run folder
+# In this default file we take the property from the POM (for compatibility with local Jetty and JBoss deployments) but it can also be edited here.
+
+###############################
+## Common Alfresco Properties #
+###############################
+
+cifs.enabled=false
+ftp.enabled=false
+nfs.enabled=false
+
+dir.root=${alfresco.data.location}
+# Allowed values are: NONE, AUTO, FULL
+index.recovery.mode=NONE
+
+# Fail or not when there are node integrity checker errors
+integrity.failOnError=true
+
+# database connection properties
+# MySQL connection (This is default and requires mysql-connector-java-5.0.3-bin.jar, which ships with the Alfresco server)
+
+db.driver=${alfresco.db.driver}
+db.url=${alfresco.db.url}
+db.username=${alfresco.db.username}
+db.password=${alfresco.db.password}
+
+db.pool.initial=10
+db.pool.max=100
+
+# Dialect is autodetected starting from 3.2
+# H2 dialect
+hibernate.dialect=${alfresco.db.hibernate.dialect}
+
+index.subsystem.name=lucene
+solr.host=localhost
+solr.port=8080
+solr.secureComms=none
+# Setting Solr backup for the future. Tweak this if needed (ideally in other env properties files) 
+solr.backup.alfresco.cronExpression=0 30 2 * * ? 2050  
+solr.backup.archive.cronExpression=0 30 3 * * ? 2050
+solr.backup.alfresco.remoteBackupLocation=${dir.root}/solrBackup/alfresco
+solr.backup.archive.remoteBackupLocation=${dir.root}/solrBackup/archive
+# We are in the local DEV properties file, no need for Solr backup
+solr.backup.alfresco.numberToKeep=0
+solr.backup.archive.numberToKeep=0
+
+# These jobs seem to require Lucene (Unsupported Operation with Solr) so we disable them / set them to a future date
+# See https://forums.alfresco.com/en/viewtopic.php?f=52&t=41597
+# If you want to enable them (and so full WQS functionality), please also set index.subsystem.name=lucene
+wcmqs.dynamicCollectionProcessor.schedule=0 30 2 * * ? 2060
+wcmqs.feedbackProcessor.schedule=0 40 2 * * ? 2060
+wcmqs.publishQueueProcessor.schedule=0 50 2 * * ? 2060
+
+
+#
+# Sample custom content and index data location
+#
+#dir.root=/srv/alfresco/alf_data
+#dir.keystore=${dir.root}/keystore
+
+#
+# Sample database connection properties
+#
+#db.username=alfresco
+#db.password=alfresco
+
+#
+# External locations
+#-------------
+#ooo.exe=soffice
+#ooo.enabled=false
+#jodconverter.officeHome=./OpenOffice.org
+#jodconverter.portNumbers=8101
+#jodconverter.enabled=true
+#img.root=./ImageMagick
+#swf.exe=./bin/pdf2swf
+
+#
+# Property to control whether schema updates are performed automatically.
+# Updates must be enabled during upgrades as, apart from the static upgrade scripts,
+# there are also auto-generated update scripts that will need to be executed.  After
+# upgrading to a new version, this can be disabled.
+#
+#db.schema.update=true
+
+#
+# MySQL connection
+#
+#db.driver=org.gjt.mm.mysql.Driver
+#db.url=jdbc:mysql://localhost/alfresco?useUnicode=yes&characterEncoding=UTF-8
+
+#
+# Oracle connection
+#
+#db.driver=oracle.jdbc.OracleDriver
+#db.url=jdbc:oracle:thin:@localhost:1521:alfresco
+
+#
+# SQLServer connection
+# Requires jTDS driver version 1.2.5 and SNAPSHOT isolation mode
+# Enable TCP protocol on fixed port 1433
+# Prepare the database with:
+# ALTER DATABASE alfresco SET ALLOW_SNAPSHOT_ISOLATION ON; 
+#
+#db.driver=net.sourceforge.jtds.jdbc.Driver
+#db.url=jdbc:jtds:sqlserver://localhost:1433/alfresco
+#db.txn.isolation=4096
+
+#
+# PostgreSQL connection (requires postgresql-8.2-504.jdbc3.jar or equivalent)
+#
+#db.driver=org.postgresql.Driver
+#db.url=jdbc:postgresql://localhost:5432/alfresco
+
+#
+# DB2 connection
+#
+#db.driver=com.ibm.db2.jcc.DB2Driver
+#db.url=jdbc:db2://localhost:50000/alfresco:retrieveMessagesFromServerOnGetMessage=true;
+
+#
+# Index Recovery Mode
+#-------------
+#index.recovery.mode=AUTO
+
+#
+# Outbound Email Configuration
+#-------------
+#mail.host=
+#mail.port=25
+#mail.username=anonymous
+#mail.password=
+#mail.encoding=UTF-8
+#mail.from.default=alfresco@alfresco.org
+#mail.smtp.auth=false
+
+#
+# Alfresco Email Service and Email Server
+#-------------
+
+# Enable/Disable the inbound email service.  The service could be used by processes other than
+# the Email Server (e.g. direct RMI access) so this flag is independent of the Email Service.
+#-------------
+#email.inbound.enabled=true
+
+# Email Server properties 
+#-------------
+#email.server.enabled=true
+#email.server.port=25
+#email.server.domain=alfresco.com
+#email.inbound.unknownUser=anonymous
+
+# A comma separated list of email REGEX patterns of allowed senders.
+# If there are any values in the list then all sender email addresses
+# must match. For example:
+#   .*\@alfresco\.com, .*\@alfresco\.org
+# Allow anyone:
+#-------------
+#email.server.allowed.senders=.*
+
+#
+# The default authentication chain
+# To configure external authentication subsystems see:
+# http://wiki.alfresco.com/wiki/Alfresco_Authentication_Subsystems
+#-------------
+#authentication.chain=alfrescoNtlm1:alfrescoNtlm
+
+#
+# URL Generation Parameters (The ${localname} token is replaced by the local server name)
+#-------------
+#alfresco.context=alfresco
+#alfresco.host=${localname}
+#alfresco.port=8080
+#alfresco.protocol=http
+#
+#share.context=share
+#share.host=${localname}
+#share.port=8080
+#share.protocol=http
+
+#imap.server.enabled=true
+#imap.server.port=143
+#imap.server.host=localhost
+
+# Default value of alfresco.rmi.services.host is 0.0.0.0 which means 'listen on all adapters'.
+# This allows connections to JMX both remotely and locally.
+#
+alfresco.rmi.services.host=0.0.0.0
+
+#
+# RMI service ports for the individual services.
+# These seven services are available remotely.
+#
+# Assign individual ports for each service for best performance 
+# or run several services on the same port. You can even run everything on 50500 if needed.
+#
+# Select 0 to use a random unused port.
+# 
+#avm.rmi.service.port=50501
+#avmsync.rmi.service.port=50502
+#attribute.rmi.service.port=50503
+#authentication.rmi.service.port=50504
+#repo.rmi.service.port=50505
+#action.rmi.service.port=50506
+#wcm-deployment-receiver.rmi.service.port=50507
+#monitor.rmi.service.port=50508
+
+
+# Dialect is autodetected starting from 3.2
+# H2 dialect
+#hibernate.dialect=org.hibernate.dialect.H2Dialect
+
+
+# Property to control whether schema updates are performed automatically.
+# Updates must be enabled during upgrades as, apart from the static upgrade scripts,
+# there are also auto-generated update scripts that will need to be executed.  After
+# upgrading to a new version, this can be disabled.
+#db.schema.update=true
+
+
+# File servers related properties 
+# For local builds we disable CIFS and FTP. Edit the following properties to re-enable them
+smb.server.enabled=false
+smb.server.name=CFS_SHARE_LOCAL
+smb.server.domain=mycompany.com
+smb.server.bindto=127.0.0.1
+smb.tcpip.port=1445
+netbios.session.port=1139
+netbios.name.port=1137
+netbios.datagram.port=1138
+ftp.server.enabled=false
+ftp.port=1121
+ftp.authenticator=alfresco
+
+# This properties file is used to configure LDAP authentication
+# NB: The following LDAP-related properties are read only when the -Denterprise mvn build property is specified
+# Whether to allow silent deletion of users in the Alfresco UI (note: users will then be resynced in the next synchronization)
+ldap.authentication.allowDeleteUser=true
+# LDAP JNDI provider
+ldap.authentication.provider=com.sun.jndi.ldap.LdapCtxFactory
+# URL and protocol of the LDAP server to authenticate against
+ldap.authentication.url=ldap://ldap.mycompany.com:636
+# can be (simple, ssl)
+ldap.authentication.protcol=ssl
+# Credentials with full access to the directory used
+ldap.authentication.adminUser=ou=Admin,ou=Services,o=Company
+ldap.authentication.adminPassword=secret
+# Whether to allow an unauthenticated guest a read-only login
+ldap.authentication.guestLogin.allowed=false
+# Whether users can be created on the fly upon successful external (e.g. LDAP) authentication. Useful to avoid user synchronization when just uid and password are needed for a user
+server.transaction.allow-writes=true
+# Whether user names are case-sensitive
+user.name.caseSensitive=true
+# Whether the synchronization process has to process duplicated users (e.g. synced users and users coming from the sync)
+personService.processDuplicates=true
+# Which action to take when processing duplicates. One of: LEAVE, SPLIT, DELETE
+personService.duplicateMode=DELETE
+# Which of the users (in case of SPLIT duplicates policy) should be considered valid
+personService.lastIsBest=true
+# Whether auto-created users should be considered when processing duplicates
+personService.includeAutoCreated=true
+# The query to find the people to import
+ldap.synchronisation.personQuery=(objectclass=inetOrgPerson)
+# The search base of the query to find people to import
+ldap.synchronisation.personSearchBase=ou=Identities,ou=mycompany,o=com
+# The attribute name on people objects found in LDAP to use as the uid in Alfresco
+ldap.synchronisation.userIdAttributeName=cn
+# The attribute on person objects in LDAP to map to the first name property in Alfresco
+ldap.synchronisation.userFirstNameAttributeName=givenName
+# The attribute on person objects in LDAP to map to the last name property in Alfresco
+ldap.synchronisation.userLastNameAttributeName=sn
+# The attribute on person objects in LDAP to map to the email property in Alfresco
+ldap.synchronisation.userEmailAttributeName=cn
+# The attribute on person objects in LDAP to map to the organizational id  property in Alfresco
+ldap.synchronisation.userOrganizationalIdAttributeName=maildomain
+# The default home folder provider to use for people created via LDAP import
+ldap.synchronisation.defaultHomeFolderProvider=companyHomeFolderProvider
+# The query to find group objects
+ldap.synchronisation.groupQuery=(objectclass=AlfrescoGroup)
+# The search base to use to find group objects
+ldap.synchronisation.groupSearchBase=ou=AlfrescoGroups,ou=mycompany,o=com
+# The attribute on LDAP group objects to map to the gid property in Alfresco
+ldap.synchronisation.groupIdAttributeName=cn
+# The group type in LDAP
+ldap.synchronisation.groupType=AlfrescoGroup
+# The person type in LDAP
+ldap.synchronisation.personType=inetOrgPerson
+# The attribute in LDAP on group objects that defines the DN for its members
+ldap.synchronisation.groupMemberAttributeName=member
+# The cron expression defining when people imports should take place (e.g. every evening at 22:00 hours)
+ldap.synchronisation.import.person.cron=0 0 22 * * ?
+# The cron expression defining when group imports should take place (e.g. every evening at 21:45 hours)
+ldap.synchronisation.import.group.cron=0 45 21 * * ?
+# Should all groups be cleared out at import time?
+# - this is safe as groups are not used in Alfresco for other things (unlike person objects which you should never clear out during an import)
+# - setting this to true means old group definitions will be tidied up.
+ldap.synchronisation.import.group.clearAllChildren=false
+
diff --git a/test-materials/alfresco-war/src/main/webapp/WEB-INF/faces-config-custom.xml b/test-materials/alfresco-4-war/src/main/webapp/WEB-INF/faces-config-custom.xml
similarity index 100%
rename from test-materials/alfresco-war/src/main/webapp/WEB-INF/faces-config-custom.xml
rename to test-materials/alfresco-4-war/src/main/webapp/WEB-INF/faces-config-custom.xml
diff --git a/test-materials/alfresco-war/jetty/jetty-env.xml b/test-materials/alfresco-war/jetty/jetty-env.xml
deleted file mode 100755
index ea64672..0000000
--- a/test-materials/alfresco-war/jetty/jetty-env.xml
+++ /dev/null
@@ -1,32 +0,0 @@
-<?xml version="1.0"?>
-<!--
- Licensed to the Apache Software Foundation (ASF) under one or more
- contributor license agreements.  See the NOTICE file distributed with
- this work for additional information regarding copyright ownership.
- The ASF licenses this file to You under the Apache License, Version 2.0
- (the "License"); you may not use this file except in compliance with
- the License.  You may obtain a copy of the License at
-
-     http://www.apache.org/licenses/LICENSE-2.0
-
- Unless required by applicable law or agreed to in writing, software
- distributed under the License is distributed on an "AS IS" BASIS,
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- See the License for the specific language governing permissions and
- limitations under the License.
--->
-<!DOCTYPE Configure PUBLIC "-//Mort Bay Consulting//DTD Configure//EN" "http://jetty.mortbay.org/configure.dtd">
-
-<Configure class="org.mortbay.jetty.webapp.WebAppContext">
-    <New id="myDataSource"
-         class="org.mortbay.jetty.plus.naming.Resource">
-        <Arg>jdbc/dataSource</Arg>
-        <Arg>
-            <New class="org.h2.jdbcx.JdbcDataSource">
-                <Set name="URL">jdbc:h2:${alfresco.data.location}/h2_data/${alfresco.db.name}</Set>
-                <Set name="User">${alfresco.db.username}</Set>
-                <Set name="Password">${alfresco.db.password}</Set>
-            </New>
-        </Arg>
-    </New>
-</Configure>
diff --git a/test-materials/alfresco-war/pom.xml b/test-materials/alfresco-war/pom.xml
deleted file mode 100644
index 14d979d..0000000
--- a/test-materials/alfresco-war/pom.xml
+++ /dev/null
@@ -1,756 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-	<!--
-		Licensed to the Apache Software Foundation (ASF) under one or more
-		contributor license agreements. See the NOTICE file distributed with
-		this work for additional information regarding copyright ownership.
-		The ASF licenses this file to You under the Apache License, Version
-		2.0 (the "License"); you may not use this file except in compliance
-		with the License. You may obtain a copy of the License at
-		http://www.apache.org/licenses/LICENSE-2.0 Unless required by
-		applicable law or agreed to in writing, software distributed under the
-		License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
-		CONDITIONS OF ANY KIND, either express or implied. See the License for
-		the specific language governing permissions and limitations under the
-		License.
-	-->
-<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
-    <!--
-        | Prerequisites: Mysql installed. Just run mvn clean install -Prun to run Alfresco
-        | How it works:
-        | By default the project is overlayed to the alfresco war which depends upon and deployed as a WAR (local development and testing purposes)
-        | Available properties: 
-        |  -Denv ==> toggles  src/main/properties/<env>/alfresco-global.properties files (default local) 
-        |  -DrestoreVersion=<restoreVersion> - toggles profile "restore" in order to include tools/export/<restoreVersion>/restore/*.acp files in alfresco/extension/restore (default: no restore) 
-        | -Dcustomization.name=<customizationName> - name of the jar artifact containing classes and resources for this extension |
-        | -Dwebapp.name=<extensionName> - name of the WAR artifact to be built (default: mcf-tests-alfresco-{version}.jar ) 
-        |  DEPRECATED: -Denterprise - Includes LDAP configuration as defined in alfresco-global.properties  
-        --> 
-        
-  <parent>
-    <groupId>org.apache.manifoldcf</groupId>
-    <artifactId>mcf-test-materials</artifactId>
-    <version>1.2-SNAPSHOT</version>
-    <relativePath>../pom.xml</relativePath>
-  </parent>
-	<modelVersion>4.0.0</modelVersion>
-	
-	<artifactId>mcf-alfresco-war-test</artifactId>
-	
-	<name>ManifoldCF - Test Materials - Alfresco WAR</name>
-	<packaging>war</packaging>
-	<url />
-	<description>Alfresco WAR builder</description>
-    <!-- oOo SINGLE POINT OF CONFIGURATION FOR COMMON ALFRESCO PROPERTIES oOo -->
-	<properties>
-        <!-- Alfresco version/edition selection -->
-        <alfresco.version>3.4.a</alfresco.version>
-        <alfresco.edition>community</alfresco.edition>
-        <!-- Build environment ==> src/main/properties/<env>/alfresco-global.properties is loaded -->
-        <env>local</env>
-        <!-- Webapp packaged name -->
-        <webapp.name>alfresco</webapp.name>
-        <!--
-            | Empty log dir creates file alfresco.log in the current root folder.
-            | You can also specify a meaningful log directory for the server (add a trailing slash, e.g. '/var/log/alfresco/' ) 
-            | Jetty embedded run logs by default in ${project.basedir}/alfresco.log
-            -->
-        <log.dir />
-        <!-- 
-         | By default the src/main/properties/local/alfresco-global.properties uses the property "alfresco.data.location" to specify where 
-         | alf_data gets created. For env=local you can use this shortcut property below, which gets filtered in the alfresco-global.properties file
-         | DEFAULT: alf_data_jetty relativel to run dir 
-         -->
-        <!-- For env=local DB is also configurable here. Of course keep in sync these two values otherwise you'll get integrity errors. Default Mysql-->
-        <alfresco.data.location>./alf_data_jetty</alfresco.data.location>
-        <alfresco.db.name>alf_jetty</alfresco.db.name>
-        <alfresco.db.url>jdbc:h2:${alfresco.data.location}/h2_data/${alfresco.db.name}</alfresco.db.url>
-        <alfresco.db.driver>org.h2.Driver</alfresco.db.driver>
-        <alfresco.db.username>alfresco</alfresco.db.username>
-        <alfresco.db.password>alfresco</alfresco.db.password>
-        <alfresco.db.hibernate.dialect>org.hibernate.dialect.H2Dialect</alfresco.db.hibernate.dialect>
-       
-       
-        <!-- H2 configuration: To be fixed <alfresco.db.url>jdbc:h2:${alfresco.data.location}/h2_data/${alfresco.db.name}</alfresco.db.url> -->
-        
-        <!-- DEPRECATED -->
-        <desktop.action.package>org.alfresco.filesys.repo.desk</desktop.action.package>
-        <!--
-            | Uncomment this property together with the <scm> section downwards
-            |
-            | <svn.url> https://mycompany.com/repos/my-test-project </svn.url>
-            -->
-        <!--
-            | Uncomment this property together with the maven-release-plugin <plugin><configuration><tagBase /></configuration></plugin> section downwards
-            |  <svn.tags.url>${svn.url}/tags</svn.tags.url>
-            -->
-        <!--
-            These redundancies are due to filtering issues of Maven. See here
-            http://maven.apache.org/plugins/maven-site-plugin/usage.html
-        -->
-        <site_pom_description>${project.description}</site_pom_description>
-        <site_pom_url>${project.organization.url}</site_pom_url>
-        <site_pom_groupId>${project.groupId}</site_pom_groupId>
-        <site_pom_artifactId>${project.artifactId}</site_pom_artifactId>
-        <site_pom_version>${project.version}</site_pom_version>
-        <site_tags_url>${svn.tags.url}</site_tags_url>
-        <site_site_url>${site.url}</site_site_url>
-    </properties>
-	<!--
-		| Alfresco Community dependencies are generally available in ss-public
-		repo. | FIXME: Alfresco enterprise dependencies are only available on
-		SS repo ATM. Alfresco *needs* to deliver their artifacts on (at least)
-		partner repos |
-	-->
-	<repositories>
-		<repository>
-			<id>alfresco-release</id>
-			<url>https://artifacts.alfresco.com/nexus/content/repositories/releases</url>
-		</repository>
-		<repository>
-            <id>alfresco-snapshots</id>
-            <url>https://artifacts.alfresco.com/nexus/content/repositories/snapshots</url>
-           <snapshots>
-                <enabled>true</enabled>
-           </snapshots>
-		</repository>
-        <repository>
-            <id>alfresco-sites</id>
-            <url>https://artifacts.alfresco.com/nexus/content/repositories/alfresco-docs</url>
-        </repository>
-        <repository>
-          <id>clojars</id>
-          <url>http://clojars.org/repo/</url>
-        </repository>
-	</repositories>
-	<pluginRepositories>
-		<pluginRepository>
-			<id>alfresco-release</id>
-			<url>https://artifacts.alfresco.com/nexus/content/repositories/releases</url>
-		</pluginRepository>
-		<pluginRepository>
-			<id>alfresco-snapshots</id>
-			<url>https://artifacts.alfresco.com/nexus/content/repositories/snapshots</url>
-			<snapshots>
-				<enabled>true</enabled>
-			</snapshots>
-		</pluginRepository>
-	</pluginRepositories>
-    
-    <!--
-        | Uncomment the SCM definitions in order for mvn release:perform to actually tag the code <scm>
-        | <developerConnection>scm:svn:${svn.url}</developerConnection>
-        | <url>${svn.url}</url> </scm>
-	    -->
-
-	<!-- Alfresco dependencies -->
-	<dependencies>
-        <!--
-            | Alfresco Dependencies 
-            | NB: These files are not publicly available. Please vote for Alfresco to release them if you care :) 
-            | Jboss alfresco build (e.g. no log4j.properties and fix for myFaces)
-            -->
-		<dependency>
-			<groupId>org.alfresco</groupId>
-			<artifactId>alfresco</artifactId>
-			<version>${alfresco.version}</version>
-			<type>war</type>
-			<classifier>${alfresco.edition}</classifier>
-		</dependency>
-		<!--
-			All provided libs (as contained in the war dependency) but useful for
-			development (e.g. IDE configuration)
-		-->
-
-    <dependency>
-      <groupId>it.sk.alfresco</groupId>
-      <artifactId>h2-support</artifactId>
-      <version>1.1</version>
-    </dependency>
-    <dependency>
-        <groupId>com.h2database</groupId>
-        <artifactId>h2</artifactId>
-        <version>1.3.158</version>
-    </dependency>
-    <dependency>
-			<groupId>org.alfresco</groupId>
-			<artifactId>alfresco-web-client</artifactId>
-			<version>${alfresco.version}</version>
-			<scope>provided</scope>
-			<classifier>${alfresco.edition}</classifier>
-		</dependency>
-		<dependency>
-			<groupId>org.alfresco</groupId>
-			<artifactId>alfresco-core</artifactId>
-			<version>${alfresco.version}</version>
-			<scope>provided</scope>
-			<classifier>${alfresco.edition}</classifier>
-		</dependency>
-		<dependency>
-			<groupId>org.alfresco</groupId>
-			<artifactId>alfresco-repository</artifactId>
-			<version>${alfresco.version}</version>
-			<scope>provided</scope>
-			<classifier>${alfresco.edition}</classifier>
-		</dependency>
-		<dependency>
-			<groupId>org.alfresco</groupId>
-			<artifactId>alfresco-remote-api</artifactId>
-			<version>${alfresco.version}</version>
-			<scope>provided</scope>
-			<classifier>${alfresco.edition}</classifier>
-		</dependency>
-
-		<!--
-			Sample of AMP dependency that will be properly uncompressed in the
-			Alfresco WAR (no more need for AMP): this can be either an AMP built
-			with maven-amp-plugin and deployed on an accessible maven repo or a
-			generally available AMP previously deployed to a repo using mvn
-			deploy:deploy-file 
-            
-            <dependency> 
-                <scope>runtime</scope>
-			    <type>amp</type>
-                <artifactId>recordsmanagement</artifactId>
-			    <version>${alfresco.version}</version>
-			    <groupId>org.alfresco</groupId> 
-            </dependency>
-		-->
-
-		<dependency>
-			<groupId>javax.servlet</groupId>
-			<artifactId>servlet-api</artifactId>
-			<version>2.5</version>
-			<scope>provided</scope>
-		</dependency>
-	</dependencies>
-
-	<build>
-		<finalName>${webapp.name}</finalName>
-		<!--
-			In certain cases we do build time filtering with the single sourcing
-			alfresco-global.properties
-		-->
-		<filters>
-			<filter>src/main/properties/${env}/alfresco-global.properties</filter>
-		</filters>
-		<!--
-			Default profile to build as an Alfresco extension - resources are
-			copied into classpath
-		-->
-		<resources>
-			<!-- By default also no content is restored -->
-			<resource>
-				<directory>src/main/resources</directory>
-				<excludes>
-					<exclude>**/restore-context.xml</exclude>
-					<exclude>**/ldap-*.xml</exclude>
-				</excludes>
-			</resource>
-            <!--
-                | Include application properties file in classpath: this allows Spring contexts to have customization properties available at 
-                | classpath:alfresco-global.properties
-                -->
-			<resource>
-				<directory>src/main/properties/${env}</directory>
-				<includes>
-					<include>alfresco-global.properties</include>
-				</includes>
-				<filtering>true</filtering>
-			</resource>
-			<resource>
-				<directory>src/main/resources</directory>
-				<includes>
-					<include>log4j.properties</include>
-				</includes>
-				<filtering>true</filtering>
-			</resource>
-			<resource>
-				<directory>${project.basedir}/tools/mysql</directory>
-				<includes>
-					<include>*.sql</include>
-				</includes>
-				<filtering>true</filtering>
-				<targetPath>tools/mysql</targetPath>
-			</resource>
-		</resources>
-
-		<plugins>
-			<!-- Needed for cross OS compatibility in acp/zip encoding -->
-			<plugin>
-				<groupId>org.apache.maven.plugins</groupId>
-				<artifactId>maven-resources-plugin</artifactId>
-                <version>2.5</version>
-				<configuration>
-					<encoding>UTF-8</encoding>
-				</configuration>
-			</plugin>
-			<!--
-				useful for eclipse project configuration. Run "mvn eclipse:eclipse"
-				and hit "F5" on the project
-			-->
-			<plugin>
-				<groupId>org.apache.maven.plugins</groupId>
-				<artifactId>maven-eclipse-plugin</artifactId>
-				<version>2.8</version>
-				<!--
-					<configuration> <downloadSources>true</downloadSources>
-					</configuration>
-				-->
-			</plugin>
-			<!-- Add documentation locales here -->
-			<plugin>
-				<artifactId>maven-site-plugin</artifactId>
-                <version>3.0</version>
-				<configuration>
-					<locales>en</locales>
-				</configuration>
-			</plugin>
-			<plugin>
-				<groupId>org.apache.maven.plugins</groupId>
-				<artifactId>maven-dependency-plugin</artifactId>
-				<executions>
-					<execution>
-						<id>unpack-amps</id>
-						<phase>process-resources</phase>
-						<goals>
-							<goal>unpack-dependencies</goal>
-						</goals>
-						<configuration>
-							<includeTypes>amp</includeTypes>
-							<outputDirectory>${project.build.directory}/${webapp.name}</outputDirectory>
-							<excludes>META*</excludes>
-						</configuration>
-					</execution>
-				</executions>
-				<dependencies>
-					<!--
-						This is required to be re-defined explicitly at plugin level as
-						otherwise the 'amp' extension unArchiver won't be available to the
-						maven-dependency-plugin
-					-->
-					<dependency>
-						<groupId>org.alfresco.maven.plugin</groupId>
-						<artifactId>maven-amp-plugin</artifactId>
-						<version>3.0.1</version>
-					</dependency>
-				</dependencies>
-			</plugin>
-			<plugin>
-				<groupId>org.apache.maven.plugins</groupId>
-				<artifactId>maven-war-plugin</artifactId>
-                <version>2.1.1</version>
-				<configuration>
-					<archiveClasses>false</archiveClasses>
-					<webappDirectory>target/${webapp.name}</webappDirectory>
-					<warSourceExcludes>tools/**</warSourceExcludes>
-					<webResources>
-
-					</webResources>
-				</configuration>
-			</plugin>
-			<plugin>
-				<groupId>org.codehaus.cargo</groupId>
-				<artifactId>cargo-maven2-plugin</artifactId>
-				<version>1.1.1</version>
-			</plugin>
-			<plugin>
-				<groupId>org.apache.maven.plugins</groupId>
-				<artifactId>maven-release-plugin</artifactId>
-				<configuration>
-					<!-- useEditMode>true</useEditMode> -->
-					<dryRun>true</dryRun>
-					<preparationGoals>clean package</preparationGoals>
-					<goals>install deploy cargo:undeploy cargo:deploy
-						site:deploy</goals>
-					<!-- <tagBase>${svn.tags.url}</tagBase> -->
-				</configuration>
-			</plugin>
-			<!--
-				Adds support for books PDF and RTF generation for single sourced
-				documentation
-			-->
-			<plugin>
-				<groupId>org.apache.maven.doxia</groupId>
-				<artifactId>doxia-maven-plugin</artifactId>
-				<version>1.2</version>
-				<executions>
-					<execution>
-						<phase>pre-site</phase>
-						<goals>
-							<goal>render-books</goal>
-						</goals>
-					</execution>
-				</executions>
-				<configuration>
-					<!--
-						| Target books dir: within the site so it can be linked and
-						deployed | TODO: Use ${project.target.dir} or some similar property
-						instead of | hard wiring 'target'
-					-->
-					<generatedDocs>target/site/books</generatedDocs>
-					<books>
-						<book>
-							<directory>src/site</directory>
-							<descriptor>src/books/manual.xml</descriptor>
-							<formats>
-								<format>
-									<id>xdoc</id>
-								</format>
-								<format>
-									<id>pdf</id>
-								</format>
-								<format>
-									<id>rtf</id>
-								</format>
-							</formats>
-						</book>
-					</books>
-				</configuration>
-			</plugin>
-		</plugins>
-	</build>
-	<reporting>
-		<plugins>
-			<!-- Targeting 1.6 -->
-			<plugin>
-				<artifactId>maven-compiler-plugin</artifactId>
-                <version>2.3.2</version>
-				<configuration>
-					<source>1.6</source>
-					<target>1.6</target>
-				</configuration>
-			</plugin>
-			<plugin>
-                <artifactId>maven-failsafe-plugin</artifactId>
-                <version>2.12</version>
-            </plugin>
-			<plugin>
-				<artifactId>maven-javadoc-plugin</artifactId>
-                <version>2.8</version>
-			</plugin>
-			<plugin>
-				<groupId>org.codehaus.mojo</groupId>
-				<artifactId>jxr-maven-plugin</artifactId>
-                <version>2.0-beta-1</version>
-			</plugin>
-			<plugin>
-				<artifactId>maven-clover-plugin</artifactId>
-                <version>2.4</version>
-			</plugin>
-			<!--
-				Enable this plugin only after setting SCM connection, otherwise mvn
-				site will fail <plugin> <groupId>org.codehaus.mojo</groupId>
-				<artifactId>changelog-maven-plugin</artifactId> </plugin>
-			-->
-			<plugin>
-				<groupId>org.codehaus.mojo</groupId>
-				<artifactId>taglist-maven-plugin</artifactId>
-                <version>2.4</version>
-			</plugin>
-		</plugins>
-	</reporting>
-	<!--
-		| Configured to deploy on SS public repository ATM. | You'd need a
-		valid uid/pwd in our repo |
-	-->
-	<distributionManagement>
-		<!--
-			| | Enable this repo in case of publicly redistributable artifacts
-			<repository> <id>yourcompany</id>
-			<url>scp://yourcompany/var/maven2</url> </repository>
-			<distributionManagement> <site> <id>yourcompany-site</id>
-			<url>scp://yourcompany/var/maven2-sites</url> </site>
-		-->
-		<!--
-			| | Enable this repo in case of non publicly redistributable
-			artifacts (Sourcesense private repositories via webdav) |
-		-->
-	</distributionManagement>
-
-	<!-- 
-		| Build Profiles
-	-->
-	<profiles>
-		<!--
-			| Profile to automatically restore export files committed under |
-			"tools/export/<restoreVersion>/export_*.[acp,xml]" and the
-			restore-context.xml. | Gets activated automatically by specifying a
-			value for the property | restoreVersion, which maps to the name of the
-			folder. | NB: In order for this to work you *MUST* export your full repo
-			with "export" package name
-		-->
-		<profile>
-			<id>restore</id>
-			<activation>
-				<property>
-					<name>restoreVersion</name>
-				</property>
-			</activation>
-			<build>
-				<defaultGoal>package</defaultGoal>
-				<resources>
-					<resource>
-						<directory>src/main/resources</directory>
-						<includes>
-							<include>**/restore-*.xml</include>
-						</includes>
-					</resource>
-					<resource>
-						<directory>tools/export/${restoreVersion}</directory>
-						<includes>
-							<include>**</include>
-						</includes>
-						<targetPath>alfresco/extension/restore</targetPath>
-					</resource>
-				</resources>
-			</build>
-		</profile>
-		<!--
-			Profile for deploying (only locally, due to
-			http://jira.codehaus.org/browse/CARGO-416) on JBoss.
-			FIXME: Add a <dependencies> override in order to have the JBoss-specific
-			alfresco-*-jboss.war (e.g. no log4j.properties and no log4j jar)
-			substituted as a dependency, and avoid log4j ClassCastExceptions.
-		-->
-		<profile>
-			<id>jboss</id>
-			<!--
-				| By default the src/main/properties/local/alfresco-global.properties
-				| uses the property "alfresco.data.location" to specify where
-				| alf_data gets created, and "alfresco.db.name" for the database name.
-				| For local JBoss deployment the default creation dir (alf_data) is
-				| under the appserver $JBOSS_HOME/bin directory (the location is
-				| specified relative to the run dir).
-				| An empty log dir creates the file alfresco.log in the appserver
-				| default dir. You can also specify a meaningful log directory for
-				| the server (add a trailing slash, e.g. '/var/log/alfresco/').
-				| NB: Remember to grant appropriate permissions on the database you
-				| specify here by running the scripts found in
-				| tools/mysql/[jetty/tomcat/jboss] (after editing them), or those in
-				| target/classes/tools/[db_remove,db_setup].sql, which are already
-				| filtered according to the 'alfresco.db.name' property.
-			-->
-			<properties>
-				<alfresco.data.location>./alf_data</alfresco.data.location>
-				<alfresco.db.name>alf_jboss</alfresco.db.name>
-				<log.dir />
-			</properties>
-			<build>
-				<defaultGoal>cargo:deploy</defaultGoal>
-				<resources>
-					<resource>
-						<directory>src/main/resources</directory>
-						<excludes>
-							<exclude>**/restore-context.xml</exclude>
-							<!--<exclude>**/ldap-*.xml</exclude>-->
-						</excludes>
-					</resource>
-					<resource>
-						<directory>src/main/resources</directory>
-						<includes>
-							<include />
-						</includes>
-						<filtering>true</filtering>
-					</resource>
-				</resources>
-				<plugins>
-					<plugin>
-						<groupId>org.codehaus.cargo</groupId>
-						<artifactId>cargo-maven2-plugin</artifactId>
-						<configuration>
-							<container>
-								<containerId>jboss4x</containerId>
-								<type>remote</type>
-							</container>
-							<configuration>
-								<type>runtime</type>
-								<properties>
-									<cargo.servlet.port>8080</cargo.servlet.port>
-								</properties>
-							</configuration>
-							<deployer>
-								<type>remote</type>
-								<deployables>
-									<deployable>
-										<groupId>${project.groupId}</groupId>
-										<artifactId>${project.artifactId}</artifactId>
-										<type>war</type>
-									</deployable>
-								</deployables>
-							</deployer>
-						</configuration>
-					</plugin>
-					<!-- log4j.properties is excluded from source and dependencies -->
-					<plugin>
-						<groupId>org.apache.maven.plugins</groupId>
-						<artifactId>maven-war-plugin</artifactId>
-                        <version>2.1.1</version>
-						<configuration>
-							<archiveClasses>false</archiveClasses>
-							<webappDirectory>target/${webapp.name}</webappDirectory>
-							<dependentWarExcludes>**/log4j.properties,**/lib/log4j*.jar,log4j.properties</dependentWarExcludes>
-							<warSourceExcludes>**/log4j.properties,WEB-INF/classes/tools</warSourceExcludes>
-						</configuration>
-					</plugin>
-				</plugins>
-			</build>
-		</profile>
-		<!-- 
-			| Profile for deploying on tomcat 5.x
-			|
-			|
-		-->
-		<profile>
-			<id>tomcat</id>
-			<!--
-				| By default the src/main/properties/local/alfresco-global.properties
-				| uses the property "alfresco.data.location" to specify where
-				| alf_data gets created.
-				| For Tomcat deployment the default creation dir (alf_data) is under
-				| the appserver $CATALINA_HOME/bin directory (the location is
-				| specified relative to the run dir), and the db is configurable
-				| likewise.
-				| An empty log dir creates the file alfresco.log in the appserver
-				| default dir. You can also specify a meaningful log directory for
-				| the server (add a trailing slash, e.g. '/var/log/alfresco/').
-				| NB: Remember to grant appropriate permissions on the database you
-				| specify here by running the mysql scripts found in
-				| tools/mysql/[jetty/tomcat/jboss] (properly edited), or those in
-				| target/tools/[db_remove,db_setup].sql, which are already filtered
-				| according to the 'alfresco.db.name' property.
-			-->
-			<properties>
-				<alfresco.data.location>./alf_data</alfresco.data.location>
-				<alfresco.db.name>alf_tomcat</alfresco.db.name>
-				<log.dir />
-			</properties>
-			<build>
-				<defaultGoal>package</defaultGoal>
-				<plugins>
-					<plugin>
-						<groupId>org.codehaus.cargo</groupId>
-						<artifactId>cargo-maven2-plugin</artifactId>
-						<configuration>
-							<container>
-								<containerId>tomcat5x</containerId>
-								<type>remote</type>
-							</container>
-							<!-- Configure here your Tomcat server manager credentials -->
-							<configuration>
-								<type>runtime</type>
-								<properties>
-									<cargo.remote.username>tomcat</cargo.remote.username>
-									<cargo.remote.password>tomcat</cargo.remote.password>
-									<cargo.servlet.port>8080</cargo.servlet.port>
-								</properties>
-							</configuration>
-							<deployer>
-								<type>remote</type>
-								<deployables>
-									<deployable>
-										<artifactId>${project.artifactId}</artifactId>
-										<type>war</type>
-										<properties>
-											<context>/${webapp.name}</context>
-										</properties>
-									</deployable>
-								</deployables>
-							</deployer>
-						</configuration>
-					</plugin>
-				</plugins>
-			</build>
-		</profile>
-		<!-- -Pinitialize : bootstraps the db (only to be used on the 1st run)  -->
-		<profile>
-			<id>initialize</id>
-			<build>
-				<plugins>
-					<!-- Cleans the alf_data folder and logs-->
-					<plugin>
-						<artifactId>maven-clean-plugin</artifactId>
-						<executions>
-							<execution>
-							  <id>clean-execution</id>
-								<phase>generate-resources</phase>
-								<goals>
-									<goal>clean</goal>
-								</goals>
-								<configuration>
-									<filesets>
-										<fileset>
-											<directory>${alfresco.data.location}</directory>
-											<includes>
-												<include>**/*</include>
-											</includes>
-										</fileset>
-										<fileset>
-											<directory>.</directory>
-											<includes>
-												<include>**/*.log</include>
-												<include>*.log</include>
-											</includes>
-										</fileset>
-									</filesets>
-								</configuration>
-							</execution>
-						</executions>
-					</plugin>
-				</plugins>
-			</build>
-		</profile>
-		<profile>
-			<id>run</id>
-			<build>
-                <resources>
-                    <resource>
-                        <directory>jetty</directory>
-                        <includes>
-                            <include>jetty-env.xml</include>
-                        </includes>
-                    </resource>
-                </resources>
-				<plugins>
-                    <plugin>
-                        <groupId>org.apache.maven.plugins</groupId>
-                        <artifactId>maven-war-plugin</artifactId>
-                        <configuration>
-                            <webResources>
-                                <resource>
-                                    <directory>jetty</directory>
-                                    <targetPath>WEB-INF</targetPath>
-                                    <filtering>true</filtering>
-                                </resource>
-                            </webResources>
-                        </configuration>
-                    </plugin>
-                    <plugin>
-                        <groupId>org.mortbay.jetty</groupId>
-						<artifactId>maven-jetty-plugin</artifactId>
-						<version>6.1.21</version>
-						<executions>
-							<!-- Runs jetty when 'integration-test' phase is called -->
-							<execution>
-								<id>it</id>
-								<phase>integration-test</phase>
-								<goals>
-									<goal>run-exploded</goal>
-								</goals>
-								<configuration>
-									<contextPath>/${webapp.name}</contextPath>
-									<webApp>${project.build.directory}/${webapp.name}</webApp>
-									<scanIntervalSeconds>10</scanIntervalSeconds>
-									<connectors>
-										<connector implementation="org.mortbay.jetty.nio.SelectChannelConnector">
-											<port>8080</port>
-											<maxIdleTime>60000</maxIdleTime>
-										</connector>
-									</connectors>
-								</configuration>
-							</execution>
-						</executions>
-					</plugin>
-				</plugins>
-			</build>
-		</profile>
-	</profiles>
-</project>
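
For orientation, the profiles in the POM above map to a handful of typical invocations (the goal names are taken from each profile's defaultGoal and plugin bindings; exact usage may have differed):

    mvn clean package                                      # default local build: env=local, embedded H2 settings
    mvn clean package -Pinitialize                         # additionally wipes ${alfresco.data.location} and *.log before the build
    mvn integration-test -Prun                             # packages the WAR and runs it in embedded Jetty on port 8080
    mvn clean package -DrestoreVersion=testRestoreVersion  # activates the restore profile and bundles the export files
    mvn package cargo:deploy -Ptomcat                      # deploys the WAR to a remote Tomcat 5.x via Cargo
    mvn -Pjboss                                            # defaultGoal cargo:deploy against a local JBoss 4.x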
diff --git a/test-materials/alfresco-war/src/main/properties/local/README-properties.txt b/test-materials/alfresco-war/src/main/properties/local/README-properties.txt
deleted file mode 100644
index ca1531a..0000000
--- a/test-materials/alfresco-war/src/main/properties/local/README-properties.txt
+++ /dev/null
@@ -1,43 +0,0 @@
-#    Licensed to the Apache Software Foundation (ASF) under one or more
-#    contributor license agreements.  See the NOTICE file distributed with
-#    this work for additional information regarding copyright ownership.
-#    The ASF licenses this file to You under the Apache License, Version 2.0
-#    (the "License"); you may not use this file except in compliance with
-#    the License.  You may obtain a copy of the License at
-#    
-#    http://www.apache.org/licenses/LICENSE-2.0
-#    
-#    Unless required by applicable law or agreed to in writing, software
-#    distributed under the License is distributed on an "AS IS" BASIS,
-#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-#    See the License for the specific language governing permissions and
-#    limitations under the License.
-
-Ant/m2 runtime properties management
-------------------------------------
-
-Both build systems comply with the following convention for properties:
-
-- if the -Denv=<yourEnv> property is specified at build time, properties will be 
-  looked up in the folder 
-  
-  src/main/properties/<yourEnv>/application.properties
-  
-  and copied in the classpath under
-  
-  alfresco/extension/application.properties
-  
-- if no "env" system property is specified, the default value env=local will be used
-  
-  
-Buildtime properties management - Note for Ant Users:
------------------------------------------------------
-Here you can also configure buildtime properties, which will be loaded into the Ant
-build context following the same convention described above.
-This is done for tomcat ATM.
-
-Buildtime properties management - Note for Maven Users:
------------------------------------------------------
-You should configure your buildtime properties as suggested by the maven 
-cascading build properties system, i.e. externalizing them from the project 
-by means of the settings.xml file.
\ No newline at end of file
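
Concretely, the env convention described above is driven from the Maven command line; "prod" here is only an illustrative environment name:

    mvn clean package               # no env given: loads src/main/properties/local/alfresco-global.properties
    mvn clean package -Denv=prod    # hypothetical env: would load src/main/properties/prod/alfresco-global.properties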
diff --git a/test-materials/alfresco-war/src/main/properties/local/alfresco-global.properties b/test-materials/alfresco-war/src/main/properties/local/alfresco-global.properties
deleted file mode 100644
index b04863c..0000000
--- a/test-materials/alfresco-war/src/main/properties/local/alfresco-global.properties
+++ /dev/null
@@ -1,143 +0,0 @@
-#    Licensed to the Apache Software Foundation (ASF) under one or more
-#    contributor license agreements.  See the NOTICE file distributed with
-#    this work for additional information regarding copyright ownership.
-#    The ASF licenses this file to You under the Apache License, Version 2.0
-#    (the "License"); you may not use this file except in compliance with
-#    the License.  You may obtain a copy of the License at
-#    
-#    http://www.apache.org/licenses/LICENSE-2.0
-#    
-#    Unless required by applicable law or agreed to in writing, software
-#    distributed under the License is distributed on an "AS IS" BASIS,
-#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-#    See the License for the specific language governing permissions and
-#    limitations under the License.
-
-
-
-# RUN TIME PROPERTIES
-# -------------------
-
-# Sample custom content and index data location
-# This will create alf_data relative to the appserver run folder
-# In this default file we take the property from the POM (for compatibility with local jetty and jboss deployments) but it can also be edited here.
-dir.root=${alfresco.data.location}
-# Allowed values are: NONE, AUTO, FULL
-index.recovery.mode=NONE
-
-# Fail or not when there are node integrity checker errors
-integrity.failOnError=true
-
-# database connection properties
-# Values are filtered in from the POM; the default local build uses H2. A MySQL connection requires mysql-connector-java-5.0.3-bin.jar, which ships with the Alfresco server.
-
-db.driver=${alfresco.db.driver}
-db.url=${alfresco.db.url}
-db.username=${alfresco.db.username}
-db.password=${alfresco.db.password}
-
-db.pool.initial=10
-db.pool.max=100
-
-# Dialect is autodetected starting from 3.2
-# H2 dialect
-hibernate.dialect=${alfresco.db.hibernate.dialect}
-
-
-# Property to control whether schema updates are performed automatically.
-# Updates must be enabled during upgrades as, apart from the static upgrade scripts,
-# there are also auto-generated update scripts that will need to be executed.  After
-# upgrading to a new version, this can be disabled.
-#db.schema.update=true
-
-
-# File servers related properties 
-# For local builds we disable CIFS and FTP. Edit the following property to reenable them
-smb.server.enabled=false
-smb.server.name=CFS_SHARE_LOCAL
-smb.server.domain=mycompany.com
-smb.server.bindto=127.0.0.1
-smb.tcpip.port=1445
-netbios.session.port=1139
-netbios.name.port=1137
-netbios.datagram.port=1138
-ftp.server.enabled=false
-ftp.port=1121
-ftp.authenticator=alfresco
-
-# This properties file is used to configure LDAP authentication
-# NB: The following LDAP related properties are read only in case the -Denterprise mvn build property is specified
-# Whether to allow silent deletion of users in the Alfresco UI (note: users will then be resynced in the next synchronization)
-ldap.authentication.allowDeleteUser=true
-# LDAP JNDI provider
-ldap.authentication.provider=com.sun.jndi.ldap.LdapCtxFactory
-# Url and protocol for LDAP server to carry authentication against 
-ldap.authentication.url=ldap://ldap.mycompany.com:636
-# can be (simple, ssl)
-ldap.authentication.protocol=ssl
-# Credentials with full access to the directory used
-ldap.authentication.adminUser=ou=Admin,ou=Services,o=Company
-ldap.authentication.adminPassword=secret
-# Whether to allow an unauthenticated guest a read-only login
-ldap.authentication.guestLogin.allowed=false
-# Whether users can be created on the fly upon successful external (e.g. LDAP) authentication. Useful to avoid user synchronization in case just uid and pwd are needed for a user
-server.transaction.allow-writes=true
-# Whether user names are case sensitive
-user.name.caseSensitive=true
-# Whether the synchronization process has to process duplicated users (e.g. synced users and users coming from the sync)
-personService.processDuplicates=true
-# Which action to take when processing duplicates. One of: LEAVE, SPLIT, DELETE
-personService.duplicateMode=DELETE
-# Which of the users (in case of SPLIT duplicates policy) should be considered valid
-personService.lastIsBest=true
-# Whether auto-created users should be considered when processing duplicates
-personService.includeAutoCreated=true
-# The query to find the people to import
-ldap.synchronisation.personQuery=(objectclass=inetOrgPerson)
-# The search base of the query to find people to import
-ldap.synchronisation.personSearchBase=ou=Identities,ou=mycompany,o=com
-# The attribute name on people objects found in LDAP to use as the uid in Alfresco
-ldap.synchronisation.userIdAttributeName=cn
-# The attribute on person objects in LDAP to map to the first name property in Alfresco
-ldap.synchronisation.userFirstNameAttributeName=givenName
-# The attribute on person objects in LDAP to map to the last name property in Alfresco
-ldap.synchronisation.userLastNameAttributeName=sn
-# The attribute on person objects in LDAP to map to the email property in Alfresco
-ldap.synchronisation.userEmailAttributeName=cn
-# The attribute on person objects in LDAP to map to the organizational id  property in Alfresco
-ldap.synchronisation.userOrganizationalIdAttributeName=maildomain
-# The default home folder provider to use for people created via LDAP import
-ldap.synchronisation.defaultHomeFolderProvider=companyHomeFolderProvider
-# The query to find group objects
-ldap.synchronisation.groupQuery=(objectclass=AlfrescoGroup)
-# The search base to use to find group objects
-ldap.synchronisation.groupSearchBase=ou=AlfrescoGroups,ou=mycompany,o=com
-# The attribute on LDAP group objects to map to the gid property in Alfresco
-ldap.synchronisation.groupIdAttributeName=cn
-# The group type in LDAP
-ldap.synchronisation.groupType=AlfrescoGroup
-# The person type in LDAP
-ldap.synchronisation.personType=inetOrgPerson
-# The attribute in LDAP on group objects that defines the DN for its members
-ldap.synchronisation.groupMemberAttributeName=member
-# The cron expression defining when people imports should take place (e.g. every evening at 22:00 hours)
-ldap.synchronisation.import.person.cron=0 0 22 * * ?
-# The cron expression defining when group imports should take place (e.g. every evening at 21:45 hours)
-ldap.synchronisation.import.group.cron=0 45 21 * * ?
-# Should all groups be cleared out at import time?
-# - this is safe as groups are not used in Alfresco for other things (unlike person objects which you should never clear out during an import)
-# - setting this to true means old group definitions will be tidied up.
-ldap.synchronisation.import.group.clearAllChildren=false
-
-# BUILD TIME PROPERTIES
-# ---------------------
-
-# NB: This group of properties is only used by ant for tomcat deployment, and may not be maintained. 
-# Appserver to deploy to (tomcat) - used only by ant ATM. 
-# Use $M2_HOME/conf/settings.xml (or ~/.m2/settings.xml) for maven2 appservers username/password
-appserver.dir=/your/appserver/dir
-appserver.host=localhost
-appserver.manager.url=http://${appserver.host}:8080/manager
-appserver.username=tomcat
-appserver.password=tomcat
-
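Assuming the default values from the WAR POM (H2 driver, alf_jetty database, ./alf_data_jetty data location), the filtered copy of this file that ends up on the classpath would resolve roughly to:

    dir.root=./alf_data_jetty
    db.driver=org.h2.Driver
    db.url=jdbc:h2:./alf_data_jetty/h2_data/alf_jetty
    db.username=alfresco
    db.password=alfresco
    hibernate.dialect=org.hibernate.dialect.H2Dialect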
diff --git a/test-materials/alfresco-war/src/main/resources/log4j.properties b/test-materials/alfresco-war/src/main/resources/log4j.properties
deleted file mode 100644
index d318671..0000000
--- a/test-materials/alfresco-war/src/main/resources/log4j.properties
+++ /dev/null
@@ -1,142 +0,0 @@
-#    Licensed to the Apache Software Foundation (ASF) under one or more
-#    contributor license agreements.  See the NOTICE file distributed with
-#    this work for additional information regarding copyright ownership.
-#    The ASF licenses this file to You under the Apache License, Version 2.0
-#    (the "License"); you may not use this file except in compliance with
-#    the License.  You may obtain a copy of the License at
-#    
-#    http://www.apache.org/licenses/LICENSE-2.0
-#    
-#    Unless required by applicable law or agreed to in writing, software
-#    distributed under the License is distributed on an "AS IS" BASIS,
-#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-#    See the License for the specific language governing permissions and
-#    limitations under the License.
-
-
-# Customized alfresco log location
-# Set root logger level to error
-log4j.rootLogger=error, Console, File
-
-
-# All outputs currently set to be a ConsoleAppender.
-log4j.appender.Console=org.apache.log4j.ConsoleAppender
-log4j.appender.Console.layout=org.apache.log4j.PatternLayout
-log4j.appender.Console.layout.ConversionPattern=%d{ABSOLUTE} %-5p [%c{3}] %m%n
-#log4j.appender.Console.layout.ConversionPattern=%d{ABSOLUTE} %-5p [%c] %m%n
-
-log4j.appender.File=org.apache.log4j.DailyRollingFileAppender
-log4j.appender.File.File=${log.dir}alfresco.log
-log4j.appender.File.Append=true
-log4j.appender.File.DatePattern='.'yyyy-MM-dd
-log4j.appender.File.layout=org.apache.log4j.PatternLayout
-log4j.appender.File.layout.ConversionPattern=%d{ABSOLUTE} %-5p [%c] %m%n
-
-#log4j.appender.file=org.apache.log4j.FileAppender
-#log4j.appender.file.File=hibernate.log
-#log4j.appender.file.layout=org.apache.log4j.PatternLayout
-#log4j.appender.file.layout.ConversionPattern=%d{ABSOLUTE} %5p %c{1}:%L - %m%n
-
-
-log4j.logger.org.alfresco.repo.model.filefolder=info
-
-# Hibernate
-log4j.logger.org.hibernate=error
-log4j.logger.org.hibernate.util.JDBCExceptionReporter=fatal
-log4j.logger.org.hibernate.event.def.AbstractFlushingEventListener=fatal
-#log4j.logger.org.hibernate.cache.EhCacheProvider=warn
-log4j.logger.org.hibernate.type=warn
-# log4j.logger.org.hibernate.persister.collection=DEBUG
-
-# Spring
-log4j.logger.org.springframework=warn
-
-# Axis/WSS4J
-log4j.logger.org.apache.axis=info
-log4j.logger.org.apache.ws=info
-
-# MyFaces
-log4j.logger.org.apache.myfaces.util.DebugUtils=info
-log4j.logger.org.apache.myfaces.el.VariableResolverImpl=error
-log4j.logger.org.apache.myfaces.application.jsp.JspViewHandlerImpl=error
-log4j.logger.org.apache.myfaces.taglib=error
-
-# log prepared statement cache activity log4j.logger.org.hibernate.ps.PreparedStatementCache=info
-
-# Alfresco
-log4j.logger.org.alfresco=error
-log4j.logger.org.alfresco.repo.avm=info
-log4j.logger.org.alfresco.config=info
-log4j.logger.org.alfresco.sample=info
-log4j.logger.org.alfresco.web=info
-log4j.logger.org.alfresco.web.scripts=warn
-#log4j.logger.org.alfresco.web.ui.repo.component.UIActions=debug
-#log4j.logger.org.alfresco.web.ui.repo.tag.PageTag=debug
-#log4j.logger.org.alfresco.web.bean.clipboard=debug
-log4j.logger.org.alfresco.repo.webservice=info
-log4j.logger.org.alfresco.service.descriptor.DescriptorService=info
-#log4j.logger.org.alfresco.repo.importer.ImporterBootstrap=info
-#log4j.logger.org.alfresco.web.ui.common.Utils=info
-log4j.logger.org.alfresco.repo.admin.patch.PatchExecuter=info
-log4j.logger.org.alfresco.repo.module.ModuleServiceImpl=info
-log4j.logger.org.alfresco.repo.domain.schema.SchemaBootstrap=info
-log4j.logger.org.alfresco.repo.admin.ConfigurationChecker=info
-log4j.logger.org.alfresco.repo.node.index.FullIndexRecoveryComponent=info
-log4j.logger.org.alfresco.util.OpenOfficeConnectionTester=warn
-log4j.logger.org.alfresco.repo.node.db.hibernate.HibernateNodeDaoServiceImpl=warn
-#log4j.logger.org.alfresco.web.app.DebugPhaseListener=debug
-#log4j.logger.org.alfresco.repo.cache.EhCacheTracerJob=debug
-#log4j.logger.org.alfresco.repo.search.Indexer=debug
-#log4j.logger.org.alfresco.repo.workflow=info
-#log4j.logger.org.alfresco.repo.jscript=DEBUG
-log4j.logger.org.alfresco.repo.jscript.AlfrescoRhinoScriptDebugger=off
-
-# CIFS server debugging
-#log4j.logger.org.alfresco.smb.protocol=debug
-#log4j.logger.org.alfresco.smb.protocol.auth=debug
-#log4j.logger.org.alfresco.acegi=debug
-
-# FTP server debugging
-#log4j.logger.org.alfresco.ftp.protocol=debug
-#log4j.logger.org.alfresco.ftp.server=debug
-
-# WebDAV debugging
-#log4j.logger.org.alfresco.webdav.protocol=debug
-
-# NTLM servlet filters
-#log4j.logger.org.alfresco.web.app.servlet.NTLMAuthenticationFilter=debug
-#log4j.logger.org.alfresco.repo.webdav.auth.NTLMAuthenticationFilter=debug
-
-# Integrity message threshold - if 'failOnViolation' is off, then WARNINGS are generated
-log4j.logger.org.alfresco.repo.node.integrity=ERROR
-
-# New indexer debugging
-#log4j.logger.org.alfresco.repo.search.impl.lucene.index=DEBUG
-
-# Audit debugging
-# log4j.logger.org.alfresco.repo.audit=DEBUG
-# log4j.logger.org.alfresco.repo.audit.model=DEBUG
-
-# Turn off Spring remoting warnings that should really be info or debug.
-log4j.logger.org.springframework.remoting.support=error
-
-# Templating debugging
-# log4j.logger.org.alfresco.web.forms=debug
-# log4j.logger.org.chiba.xml.xforms=debug
-
-# Property sheet and modelling debugging
-# change to error to hide the warnings about missing properties and associations
-log4j.logger.alfresco.missingProperties=warn
-log4j.logger.org.alfresco.web.ui.repo.component.property.UIChildAssociation=warn
-log4j.logger.org.alfresco.web.ui.repo.component.property.UIAssociation=warn
-#log4j.logger.org.alfresco.web.ui.repo.component.property=debug
-#log4j.logger.org.alfresco.repo.dictionary.DictionaryDAO=info
-
-
-# Virtualization Server Registry
-#log4j.logger.org.alfresco.mbeans.VirtServerRegistry=debug
-
-# Link Validation debugging
-#log4j.logger.org.alfresco.linkvalidation.LinkValidationServiceImpl=debug
-#log4j.logger.org.alfresco.linkvalidation.LinkValidationStoreCallbackHandler=debug
-
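The ${log.dir} placeholder above is filled in at build time from the POM's log.dir property; leaving it empty keeps alfresco.log in the working directory, while a value with a trailing slash redirects it, for example:

    # POM setting (illustrative):   <log.dir>/var/log/alfresco/</log.dir>
    # filtered log4j line:          log4j.appender.File.File=/var/log/alfresco/alfresco.log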
diff --git a/test-materials/alfresco-war/src/main/webapp/WEB-INF/README-WEB-INF.txt b/test-materials/alfresco-war/src/main/webapp/WEB-INF/README-WEB-INF.txt
deleted file mode 100644
index 974995c..0000000
--- a/test-materials/alfresco-war/src/main/webapp/WEB-INF/README-WEB-INF.txt
+++ /dev/null
@@ -1,20 +0,0 @@
-#    Licensed to the Apache Software Foundation (ASF) under one or more
-#    contributor license agreements.  See the NOTICE file distributed with
-#    this work for additional information regarding copyright ownership.
-#    The ASF licenses this file to You under the Apache License, Version 2.0
-#    (the "License"); you may not use this file except in compliance with
-#    the License.  You may obtain a copy of the License at
-#    
-#    http://www.apache.org/licenses/LICENSE-2.0
-#    
-#    Unless required by applicable law or agreed to in writing, software
-#    distributed under the License is distributed on an "AS IS" BASIS,
-#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-#    See the License for the specific language governing permissions and
-#    limitations under the License.
-
-Readme for the WEB-INF overlay procedures
------------------------------------------
-- Note:
-The contents of this folder will be overlaid onto the ${webapp.name}/WEB-INF folder. So, for example, a web.xml file put into this folder will overwrite (not "override" via classpath or inheritance rules) the existing web.xml.
-This is useful, for example, for configuring external SSO authentication filters (e.g. NTLMAuthenticationFilter).
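
A minimal illustration of the overlay: a descriptor placed under the webapp source tree replaces, file for file, whatever the Alfresco WAR dependency ships with; the two files are not merged.

    src/main/webapp/
        WEB-INF/
            web.xml    # replaces the web.xml unpacked from the alfresco.war overlay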
diff --git a/test-materials/alfresco-war/tools/export/testRestoreVersion/README-restore.txt b/test-materials/alfresco-war/tools/export/testRestoreVersion/README-restore.txt
deleted file mode 100755
index 486c249..0000000
--- a/test-materials/alfresco-war/tools/export/testRestoreVersion/README-restore.txt
+++ /dev/null
@@ -1,29 +0,0 @@
-#    Licensed to the Apache Software Foundation (ASF) under one or more
-#    contributor license agreements.  See the NOTICE file distributed with
-#    this work for additional information regarding copyright ownership.
-#    The ASF licenses this file to You under the Apache License, Version 2.0
-#    (the "License"); you may not use this file except in compliance with
-#    the License.  You may obtain a copy of the License at
-#    
-#    http://www.apache.org/licenses/LICENSE-2.0
-#    
-#    Unless required by applicable law or agreed to in writing, software
-#    distributed under the License is distributed on an "AS IS" BASIS,
-#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-#    See the License for the specific language governing permissions and
-#    limitations under the License.
-   
-Restore procedure:
-------------------
-(README: Only supported for WAR integrated build ATM)
-
-1. Place here your 6 full repository export files (5 acp + 1 xml), calling the export 
-package "export" (so that your files will be named export_spaces.acp, export_users.acp, etc.) 
-
-2. run your build with the property
-
-mvn clean package -DrestoreVersion=testRestoreVersion 
-
-3. deploy as a war (mvn jboss:deploy)
-
-4. if you had a consistent repository/database you should have your repo fully imported
\ No newline at end of file
diff --git a/test-materials/alfresco-war/tools/m2/m2-bootstrap.sh b/test-materials/alfresco-war/tools/m2/m2-bootstrap.sh
deleted file mode 100755
index 29bc652..0000000
--- a/test-materials/alfresco-war/tools/m2/m2-bootstrap.sh
+++ /dev/null
@@ -1,97 +0,0 @@
-#!/bin/bash
-#
-# Name:   m2-bootstrap.sh
-# Author: g.columbro@sourcesense.com
-#
-# Description:
-# This script is needed *only* in case you don't have Alfresco artifacts available in any public repo,
-# and you can't connect to the Sourcesense public repo.
-# In that case you can manually download the JAR and WAR Alfresco artifacts into $BASE_DIR (1st param)
-# and have them deployed to $TARGET_REPO (2nd param) and $TARGET_REPO_URL (3rd param)
-# by running this script and passing the 5 params on the command line. The 4th param indicates the version,
-# while the 5th one indicates the Alfresco distro (labs vs enterprise) we're going to deploy.
-#
-# Note: 
-# This script works for alfresco > 3.0 artifacts. It must be modified for
-# earlier versions (as share.war won't be present)
-#
-# License:
-# Licensed to the Apache Software Foundation (ASF) under one or more
-# contributor license agreements.  See the NOTICE file distributed with
-# this work for additional information regarding copyright ownership.
-# The ASF licenses this file to You under the Apache License, Version 2.0
-# (the "License"); you may not use this file except in compliance with
-# the License.  You may obtain a copy of the License at
-#    
-# http://www.apache.org/licenses/LICENSE-2.0
-#    
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-#
-# Example run:  
-#				./m2-bootstrap.sh 
-#				/Users/mindthegab/downloads/alfresco/alfresco-labs-war-3a.1032_2 
-#				ss-public 
-#				scp://repository.sourcesense.com/var/www/demo.sourcesense.com/maven2 
-#				3a.1032 
-#				labs
-#
-# Artifacts will be deployed with the following pattern:
-#
-#  org.alfresco:alfresco[-*]:[jar|war]:[VERSION]:[RELEASE]
-#
-# To have this fully working you need to have the following alfresco BASE_DIR layout:
-#
-#   BASE_DIR
-#		|____ alfresco.war
-#		|____ alfresco
-#		|			|__ WEB-INF
-#		|					|__ lib
-#		|						 |___ alfresco-*.jar
-#		|____ share.war
-#		|____ share
-#				|__ WEB-INF
-#						|__ alfresco-*.jar
-#
-# which you can easily obtain by downloading an Alfresco WAR distribution and unpacking both alfresco.war and share.war into folders with the same names
-
-
-# 1st command line param: 
-# directory where jar and war dependencies are stored
-BASE_DIR=$1
-# 2nd command line param: 
-# target repo id (matches in settings)
-TARGET_REPO=$2
-# 3rd command line param: 
-# target repo url
-TARGET_REPO_URL=$3
-# 4th command line param: 
-# Version 
-VERSION=$4
-# 5th command line param: 
-# Release [labs|enterprise]
-RELEASE=$5
-
-echo "Starting Alfresco JARs uploading to repo ${TARGET_REPO} at ${TARGET_REPO_URL}
-
-mvn deploy:deploy-file -Dfile=${BASE_DIR}/alfresco/WEB-INF/lib/alfresco-core.jar  -DrepositoryId=${TARGET_REPO} -DgroupId=org.alfresco -DartifactId=alfresco-core -Dversion=${VERSION} -Durl=${TARGET_REPO_URL} -Dpackaging=jar  -Dclassifier=${RELEASE}
-mvn deploy:deploy-file -Dfile=${BASE_DIR}/alfresco/WEB-INF/lib/alfresco-deployment.jar  -DrepositoryId=${TARGET_REPO} -DgroupId=org.alfresco -DartifactId=alfresco-deployment -Dversion=${VERSION} -Durl=${TARGET_REPO_URL} -Dpackaging=jar -Dclassifier=${RELEASE}
-mvn deploy:deploy-file -Dfile=${BASE_DIR}/alfresco/WEB-INF/lib/alfresco-jlan-embed.jar  -DrepositoryId=${TARGET_REPO} -DgroupId=org.alfresco -DartifactId=alfresco-jlan-embed -Dversion=${VERSION} -Durl=${TARGET_REPO_URL} -Dpackaging=jar -Dclassifier=${RELEASE}
-mvn deploy:deploy-file -Dfile=${BASE_DIR}/alfresco/WEB-INF/lib/alfresco-linkvalidation.jar  -DrepositoryId=${TARGET_REPO} -DgroupId=org.alfresco -DartifactId=alfresco-linkvalidation -Dversion=${VERSION} -Durl=${TARGET_REPO_URL} -Dpackaging=jar -Dclassifier=${RELEASE}
-mvn deploy:deploy-file -Dfile=${BASE_DIR}/alfresco/WEB-INF/lib/alfresco-mbeans.jar  -DrepositoryId=${TARGET_REPO}  -DgroupId=org.alfresco -DartifactId=alfresco-mbeans -Dversion=${VERSION} -Durl=${TARGET_REPO_URL} -Dpackaging=jar -Dclassifier=${RELEASE}
-mvn deploy:deploy-file -Dfile=${BASE_DIR}/alfresco/WEB-INF/lib/alfresco-remote-api.jar  -DrepositoryId=${TARGET_REPO} -DgroupId=org.alfresco -DartifactId=alfresco-remote-api -Dversion=${VERSION} -Durl=${TARGET_REPO_URL} -Dpackaging=jar -Dclassifier=${RELEASE}
-mvn deploy:deploy-file -Dfile=${BASE_DIR}/alfresco/WEB-INF/lib/alfresco-repository.jar  -DrepositoryId=${TARGET_REPO} -DgroupId=org.alfresco -DartifactId=alfresco-repository -Dversion=${VERSION} -Durl=${TARGET_REPO_URL} -Dpackaging=jar -Dclassifier=${RELEASE}
-mvn deploy:deploy-file -Dfile=${BASE_DIR}/alfresco/WEB-INF/lib/alfresco-web-client.jar   -DrepositoryId=${TARGET_REPO} -DgroupId=org.alfresco -DartifactId=alfresco-web-client -Dversion=${VERSION} -Durl=${TARGET_REPO_URL} -Dpackaging=jar -Dclassifier=${RELEASE}
-mvn deploy:deploy-file -Dfile=${BASE_DIR}/alfresco/WEB-INF/lib/alfresco-webscript-framework.jar  -DrepositoryId=${TARGET_REPO} -DgroupId=org.alfresco -DartifactId=alfresco-webscript-framework -Dversion=${VERSION} -Durl=${TARGET_REPO_URL} -Dpackaging=jar -Dclassifier=${RELEASE}
-mvn deploy:deploy-file -Dfile=${BASE_DIR}/share/WEB-INF/lib/alfresco-share.jar  -DrepositoryId=${TARGET_REPO}  -DgroupId=org.alfresco -DartifactId=alfresco-share -Dversion=${VERSION} -Durl=${TARGET_REPO_URL} -Dpackaging=jar -Dclassifier=${RELEASE}
-mvn deploy:deploy-file -Dfile=${BASE_DIR}/share/WEB-INF/lib/alfresco-web-framework.jar  -DrepositoryId=${TARGET_REPO} -DgroupId=org.alfresco -DartifactId=alfresco-web-framework -Dversion=${VERSION} -Durl=${TARGET_REPO_URL} -Dpackaging=jar -Dclassifier=${RELEASE}
-
-echo "Starting Alfresco WARs uploading to repo ${TARGET_REPO} at ${TARGET_REPO_URL}
-
-mvn deploy:deploy-file -Dfile=${BASE_DIR}/alfresco.war  -DrepositoryId=${TARGET_REPO}  -DgroupId=org.alfresco -DartifactId=alfresco -Dversion=${VERSION} -Durl=${TARGET_REPO_URL} -Dpackaging=war -Dclassifier=${RELEASE}
-mvn deploy:deploy-file -Dfile=${BASE_DIR}/share.war  -DrepositoryId=${TARGET_REPO}  -DgroupId=org.alfresco -DartifactId=share -Dversion=${VERSION} -Durl=${TARGET_REPO_URL} -Dpackaging=war -Dclassifier=${RELEASE}
-
-echo "Artifacts uploaded"
diff --git a/test-materials/pom.xml b/test-materials/pom.xml
index af1ea27..df9ab0f 100644
--- a/test-materials/pom.xml
+++ b/test-materials/pom.xml
@@ -14,20 +14,18 @@
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
--->
-
-<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
+--><project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
   <parent>
     <groupId>org.apache.manifoldcf</groupId>
     <artifactId>mcf-parent</artifactId>
-    <version>1.2-SNAPSHOT</version>
+    <version>1.5-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
   <artifactId>mcf-test-materials</artifactId>
   <name>ManifoldCF - Test materials</name>
   <packaging>pom</packaging>
   <modules>
-    <module>alfresco-war</module>
+    <module>alfresco-4-war</module>
   </modules>
   <build>
     <defaultGoal>package</defaultGoal>
diff --git a/tests/activedirectory/src/test/java/org/apache/manifoldcf/activedirectory_tests/NavigationDerbyUI.java b/tests/activedirectory/src/test/java/org/apache/manifoldcf/activedirectory_tests/NavigationDerbyUI.java
index 97f3e8e..4c35d0c 100644
--- a/tests/activedirectory/src/test/java/org/apache/manifoldcf/activedirectory_tests/NavigationDerbyUI.java
+++ b/tests/activedirectory/src/test/java/org/apache/manifoldcf/activedirectory_tests/NavigationDerbyUI.java
@@ -49,9 +49,31 @@
     HTMLTester.Loop loop;
     
     window = testerInstance.openMainWindow("http://localhost:8346/mcf-crawler-ui/index.jsp");
-    
+
+    // Login
+    form = window.findForm(testerInstance.createStringDescription("loginform"));
+    textarea = form.findTextarea(testerInstance.createStringDescription("userID"));
+    textarea.setValue(testerInstance.createStringDescription("admin"));
+    textarea = form.findTextarea(testerInstance.createStringDescription("password"));
+    textarea.setValue(testerInstance.createStringDescription("admin"));
+    button = window.findButton(testerInstance.createStringDescription("Login"));
+    button.click();
+
     // Define an authority connection via the UI
     window = testerInstance.findWindow(null);
+    link = window.findLink(testerInstance.createStringDescription("List authority groups"));
+    link.click();
+    window = testerInstance.findWindow(null);
+    link = window.findLink(testerInstance.createStringDescription("Add new authority group"));
+    link.click();
+    window = testerInstance.findWindow(null);
+    form = window.findForm(testerInstance.createStringDescription("editgroup"));
+    textarea = form.findTextarea(testerInstance.createStringDescription("groupname"));
+    textarea.setValue(testerInstance.createStringDescription("MyAuthorityConnection"));
+    button = window.findButton(testerInstance.createStringDescription("Save this authority group"));
+    button.click();
+
+    window = testerInstance.findWindow(null);
     link = window.findLink(testerInstance.createStringDescription("List authorities"));
     link.click();
     window = testerInstance.findWindow(null);
@@ -69,6 +91,8 @@
     form = window.findForm(testerInstance.createStringDescription("editconnection"));
     selectbox = form.findSelectbox(testerInstance.createStringDescription("classname"));
     selectbox.selectValue(testerInstance.createStringDescription("org.apache.manifoldcf.authorities.authorities.activedirectory.ActiveDirectoryAuthority"));
+    selectbox = form.findSelectbox(testerInstance.createStringDescription("authoritygroup"));
+    selectbox.selectValue(testerInstance.createStringDescription("MyAuthorityConnection"));
     button = window.findButton(testerInstance.createStringDescription("Continue to next page"));
     button.click();
     // Server tab
diff --git a/tests/alfresco/build.xml b/tests/alfresco/build.xml
index 318e91e..7b93cae 100644
--- a/tests/alfresco/build.xml
+++ b/tests/alfresco/build.xml
@@ -27,7 +27,6 @@
             <include name="saaj*.jar"/>	
             <include name="commons-discovery*.jar"/>
             <include name="jaxrpc*.jar"/>
-            <include name="mail*.jar"/>
             <include name="opensaml*.jar"/>
             <include name="wsdl4j*.jar"/>
             <include name="wss4j*.jar"/>
@@ -46,7 +45,7 @@
             <jvmarg value="-DcrawlerWarPath=../../../framework/build/war/mcf-crawler-ui.war"/>
             <jvmarg value="-DauthorityserviceWarPath=../../../framework/build/war/mcf-authority-service.war"/>
             <jvmarg value="-DapiWarPath=../../../framework/build/war/mcf-api-service.war"/>
-            <jvmarg value="-DalfrescoServerWarPath=../../../connectors/alfresco/build/alfresco-war/alfresco.war"/>
+            <jvmarg value="-DalfrescoServerWarPath=../../../connectors/alfresco/build/alfresco-4-war/alfresco.war"/>
             <jvmarg value="-Xms512m"/>
             <jvmarg value="-Xmx1024m"/>
             <jvmarg value="-Xss1024k"/>
diff --git a/tests/alfresco/pom.xml b/tests/alfresco/pom.xml
index 126e2c7..e4bd3b2 100644
--- a/tests/alfresco/pom.xml
+++ b/tests/alfresco/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.manifoldcf</groupId>
     <artifactId>mcf-tests</artifactId>
-    <version>1.2-SNAPSHOT</version>
+    <version>1.5-SNAPSHOT</version>
     <relativePath>../pom.xml</relativePath>
   </parent>
   <modelVersion>4.0.0</modelVersion>
@@ -60,7 +60,7 @@
               <artifactItems>
                 <artifactItem>
                   <groupId>${project.groupId}</groupId>
-                  <artifactId>mcf-alfresco-war-test</artifactId>
+                  <artifactId>alfresco-4-war</artifactId>
                   <version>${project.version}</version>
                   <type>war</type>
                   <overWrite>false</overWrite>
@@ -322,7 +322,7 @@
     <dependency>
       <groupId>org.apache.httpcomponents</groupId>
       <artifactId>httpclient</artifactId>
-      <version>${httpcomponent.version}</version>
+      <version>${httpcomponent.httpclient.version}</version>
     </dependency>
     <dependency>
       <groupId>commons-logging</groupId>
diff --git a/tests/alfresco/src/test/java/org/apache/manifoldcf/alfresco_tests/APISanityIT.java b/tests/alfresco/src/test/java/org/apache/manifoldcf/alfresco_tests/APISanityIT.java
index 6e86e1a..b4f8fbd 100644
--- a/tests/alfresco/src/test/java/org/apache/manifoldcf/alfresco_tests/APISanityIT.java
+++ b/tests/alfresco/src/test/java/org/apache/manifoldcf/alfresco_tests/APISanityIT.java
@@ -68,12 +68,14 @@
   private static final String ALFRESCO_SERVER = "localhost";
   private static final String ALFRESCO_PORT = "9090";
   private static final String ALFRESCO_PATH = "/alfresco/api";
+  private static final int SOCKET_TIMEOUT = 120000;
   private static final String ALFRESCO_ENDPOINT_TEST_SERVER = 
       ALFRESCO_PROTOCOL+"://"+ALFRESCO_SERVER+":"+ALFRESCO_PORT+ALFRESCO_PATH;
   
   private static final Store STORE = new Store(Constants.WORKSPACE_STORE, "SpacesStore");
   
   public Reference getTestFolder() throws RepositoryFault, RemoteException{
+    WebServiceFactory.setTimeoutMilliseconds(SOCKET_TIMEOUT);
     WebServiceFactory.setEndpointAddress(ALFRESCO_ENDPOINT_TEST_SERVER);
     AuthenticationUtils.startSession(ALFRESCO_USERNAME, ALFRESCO_PASSWORD);
     Reference reference = new Reference();
@@ -141,6 +143,7 @@
    */
   public void changeDocument(String name, String newContent) throws RepositoryFault, RemoteException{
     String luceneQuery = StringUtils.replace(ALFRESCO_TEST_QUERY_CHANGE_DOC, REPLACER, name);
+    WebServiceFactory.setTimeoutMilliseconds(SOCKET_TIMEOUT);
     WebServiceFactory.setEndpointAddress(ALFRESCO_ENDPOINT_TEST_SERVER);
     AuthenticationUtils.startSession(ALFRESCO_USERNAME, ALFRESCO_PASSWORD);
     
@@ -171,6 +174,7 @@
   
   public void removeDocument(String name) throws RepositoryFault, RemoteException{
     String luceneQuery = StringUtils.replace(ALFRESCO_TEST_QUERY_CHANGE_DOC, REPLACER, name);
+    WebServiceFactory.setTimeoutMilliseconds(SOCKET_TIMEOUT);
     WebServiceFactory.setEndpointAddress(ALFRESCO_ENDPOINT_TEST_SERVER);
     AuthenticationUtils.startSession(ALFRESCO_USERNAME, ALFRESCO_PASSWORD);
     
@@ -363,6 +367,12 @@
       alfrescoPathNode.setValue(ALFRESCO_PATH);
       child.addChild(child.getChildCount(), alfrescoPathNode);
       
+      //socketTimeout
+      ConfigurationNode socketTimeoutNode = new ConfigurationNode("_PARAMETER_");
+      socketTimeoutNode.setAttribute("name", AlfrescoConfig.SOCKET_TIMEOUT_PARAM);
+      socketTimeoutNode.setValue(String.valueOf(SOCKET_TIMEOUT));
+      child.addChild(child.getChildCount(), socketTimeoutNode);
+      
       connectionObject.addChild(connectionObject.getChildCount(),child);
 
       requestObject = new Configuration();
diff --git a/tests/alfresco/src/test/java/org/apache/manifoldcf/alfresco_tests/BaseDerby.java b/tests/alfresco/src/test/java/org/apache/manifoldcf/alfresco_tests/BaseDerby.java
index 51b6f95..7702ba8 100644
--- a/tests/alfresco/src/test/java/org/apache/manifoldcf/alfresco_tests/BaseDerby.java
+++ b/tests/alfresco/src/test/java/org/apache/manifoldcf/alfresco_tests/BaseDerby.java
@@ -18,6 +18,7 @@
 */
 package org.apache.manifoldcf.alfresco_tests;
 
+import org.eclipse.jetty.security.HashLoginService;
 import org.eclipse.jetty.server.handler.ContextHandlerCollection;
 import org.eclipse.jetty.server.Server;
 import org.eclipse.jetty.webapp.WebAppContext;
@@ -76,6 +77,8 @@
 
     WebAppContext alfrescoServerApi = new WebAppContext(alfrescoServerWarPath,"/alfresco");
     alfrescoServerApi.setParentLoaderPriority(false);
+    HashLoginService dummyLoginService = new HashLoginService("TEST-SECURITY-REALM");
+    alfrescoServerApi.getSecurityHandler().setLoginService(dummyLoginService);
     contexts.addHandler(alfrescoServerApi);
     
     Class h2DataSource = Thread.currentThread().getContextClassLoader().loadClass("org.h2.jdbcx.JdbcDataSource");
diff --git a/tests/alfresco/src/test/java/org/apache/manifoldcf/alfresco_tests/NavigationDerbyUI.java b/tests/alfresco/src/test/java/org/apache/manifoldcf/alfresco_tests/NavigationDerbyUI.java
index 8f719be..0313c07 100644
--- a/tests/alfresco/src/test/java/org/apache/manifoldcf/alfresco_tests/NavigationDerbyUI.java
+++ b/tests/alfresco/src/test/java/org/apache/manifoldcf/alfresco_tests/NavigationDerbyUI.java
@@ -44,6 +44,16 @@
     
     window = testerInstance.openMainWindow("http://localhost:8346/mcf-crawler-ui/index.jsp");
     
+    // Login
+    form = window.findForm(testerInstance.createStringDescription("loginform"));
+    textarea = form.findTextarea(testerInstance.createStringDescription("userID"));
+    textarea.setValue(testerInstance.createStringDescription("admin"));
+    textarea = form.findTextarea(testerInstance.createStringDescription("password"));
+    textarea.setValue(testerInstance.createStringDescription("admin"));
+    button = window.findButton(testerInstance.createStringDescription("Login"));
+    button.click();
+    window = testerInstance.findWindow(null);
+
     // Define an output connection via the UI
     link = window.findLink(testerInstance.createStringDescription("List output connections"));
     link.click();
@@ -122,6 +132,8 @@
     textarea.setValue(testerInstance.createStringDescription("9090"));
     textarea = form.findTextarea(testerInstance.createStringDescription("path"));
     textarea.setValue(testerInstance.createStringDescription("/alfresco/api"));
+    textarea = form.findTextarea(testerInstance.createStringDescription("socketTimeout"));
+    textarea.setValue(testerInstance.createStringDescription("120000"));
     // Go back to the Name tab
     link = window.findLink(testerInstance.createStringDescription("Name tab"));
     link.click();
diff --git a/tests/cmis/pom.xml b/tests/cmis/pom.xml
index fcb9ebc..2915c8b 100644
--- a/tests/cmis/pom.xml
+++ b/tests/cmis/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.manifoldcf</groupId>
     <artifactId>mcf-tests</artifactId>
-    <version>1.2-SNAPSHOT</version>
+    <version>1.5-SNAPSHOT</version>
     <relativePath>../pom.xml</relativePath>
   </parent>
   <modelVersion>4.0.0</modelVersion>
@@ -60,7 +60,7 @@
                 <artifactItem>
                   <groupId>org.apache.chemistry.opencmis</groupId>
                   <artifactId>chemistry-opencmis-server-inmemory</artifactId>
-                  <version>0.8.0</version>
+                  <version>0.9.0</version>
                   <type>war</type>
                   <overWrite>false</overWrite>
                   <destFileName>chemistry-opencmis-server-inmemory.war</destFileName>
@@ -135,7 +135,7 @@
     <dependency>
       <groupId>org.apache.chemistry.opencmis</groupId>
       <artifactId>chemistry-opencmis-server-inmemory</artifactId>
-      <version>0.8.0</version>
+      <version>0.9.0</version>
       <type>war</type>
     </dependency>
     <dependency>
@@ -311,7 +311,7 @@
     <dependency>
       <groupId>org.apache.httpcomponents</groupId>
       <artifactId>httpclient</artifactId>
-      <version>${httpcomponent.version}</version>
+      <version>${httpcomponent.httpclient.version}</version>
     </dependency>
     <dependency>
       <groupId>commons-logging</groupId>
@@ -328,7 +328,7 @@
     <dependency>
       <groupId>org.apache.chemistry.opencmis</groupId>
       <artifactId>chemistry-opencmis-client-impl</artifactId>
-      <version>0.8.0</version>
+      <version>0.9.0</version>
   </dependency>
   <dependency>
       <groupId>commons-lang</groupId>
diff --git a/tests/cmis/src/test/java/org/apache/manifoldcf/cmis_tests/NavigationDerbyUI.java b/tests/cmis/src/test/java/org/apache/manifoldcf/cmis_tests/NavigationDerbyUI.java
index 10f7dbc..1c650ca 100644
--- a/tests/cmis/src/test/java/org/apache/manifoldcf/cmis_tests/NavigationDerbyUI.java
+++ b/tests/cmis/src/test/java/org/apache/manifoldcf/cmis_tests/NavigationDerbyUI.java
@@ -43,7 +43,17 @@
     HTMLTester.Loop loop;
     
     window = testerInstance.openMainWindow("http://localhost:8346/mcf-crawler-ui/index.jsp");
-    
+
+    // Login
+    form = window.findForm(testerInstance.createStringDescription("loginform"));
+    textarea = form.findTextarea(testerInstance.createStringDescription("userID"));
+    textarea.setValue(testerInstance.createStringDescription("admin"));
+    textarea = form.findTextarea(testerInstance.createStringDescription("password"));
+    textarea.setValue(testerInstance.createStringDescription("admin"));
+    button = window.findButton(testerInstance.createStringDescription("Login"));
+    button.click();
+    window = testerInstance.findWindow(null);
+
     // Define an output connection via the UI
     link = window.findLink(testerInstance.createStringDescription("List output connections"));
     link.click();
@@ -122,6 +132,19 @@
     
     // Define an authority connection via the UI
     window = testerInstance.findWindow(null);
+    link = window.findLink(testerInstance.createStringDescription("List authority groups"));
+    link.click();
+    window = testerInstance.findWindow(null);
+    link = window.findLink(testerInstance.createStringDescription("Add new authority group"));
+    link.click();
+    window = testerInstance.findWindow(null);
+    form = window.findForm(testerInstance.createStringDescription("editgroup"));
+    textarea = form.findTextarea(testerInstance.createStringDescription("groupname"));
+    textarea.setValue(testerInstance.createStringDescription("MyAuthorityConnection"));
+    button = window.findButton(testerInstance.createStringDescription("Save this authority group"));
+    button.click();
+
+    window = testerInstance.findWindow(null);
     link = window.findLink(testerInstance.createStringDescription("List authorities"));
     link.click();
     window = testerInstance.findWindow(null);
@@ -139,6 +162,8 @@
     form = window.findForm(testerInstance.createStringDescription("editconnection"));
     selectbox = form.findSelectbox(testerInstance.createStringDescription("classname"));
     selectbox.selectValue(testerInstance.createStringDescription("org.apache.manifoldcf.crawler.connectors.cmis.CmisAuthorityConnector"));
+    selectbox = form.findSelectbox(testerInstance.createStringDescription("authoritygroup"));
+    selectbox.selectValue(testerInstance.createStringDescription("MyAuthorityConnection"));
     button = window.findButton(testerInstance.createStringDescription("Continue to next page"));
     button.click();
     window = testerInstance.findWindow(null);
diff --git a/tests/elasticsearch/pom.xml b/tests/elasticsearch/pom.xml
index 43a3ad6..d1a17a4 100644
--- a/tests/elasticsearch/pom.xml
+++ b/tests/elasticsearch/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.manifoldcf</groupId>
     <artifactId>mcf-tests</artifactId>
-    <version>1.2-SNAPSHOT</version>
+    <version>1.5-SNAPSHOT</version>
     <relativePath>../pom.xml</relativePath>
   </parent>
   <modelVersion>4.0.0</modelVersion>
@@ -69,7 +69,7 @@
                 <artifactItem>
                   <groupId>org.apache.chemistry.opencmis</groupId>
                   <artifactId>chemistry-opencmis-server-inmemory</artifactId>
-                  <version>0.7.0</version>
+                  <version>0.9.0</version>
                   <type>war</type>
                   <overWrite>false</overWrite>
                   <destFileName>chemistry-opencmis-server-inmemory.war</destFileName>
@@ -144,7 +144,7 @@
     <dependency>
       <groupId>org.apache.chemistry.opencmis</groupId>
       <artifactId>chemistry-opencmis-server-inmemory</artifactId>
-      <version>0.7.0</version>
+      <version>0.9.0</version>
       <type>war</type>
     </dependency>
     <dependency>
@@ -326,7 +326,7 @@
     <dependency>
       <groupId>org.apache.httpcomponents</groupId>
       <artifactId>httpclient</artifactId>
-      <version>${httpcomponent.version}</version>
+      <version>${httpcomponent.httpclient.version}</version>
     </dependency>
     <dependency>
       <groupId>commons-logging</groupId>
@@ -343,7 +343,7 @@
     <dependency>
       <groupId>org.apache.chemistry.opencmis</groupId>
       <artifactId>chemistry-opencmis-client-impl</artifactId>
-      <version>0.7.0</version>
+      <version>0.9.0</version>
     </dependency>
     <dependency>
       <groupId>commons-lang</groupId>
diff --git a/tests/elasticsearch/src/test/java/org/apache/manifoldcf/elasticsearch_tests/NavigationDerbyUI.java b/tests/elasticsearch/src/test/java/org/apache/manifoldcf/elasticsearch_tests/NavigationDerbyUI.java
index 016ca8b..1b4325b 100644
--- a/tests/elasticsearch/src/test/java/org/apache/manifoldcf/elasticsearch_tests/NavigationDerbyUI.java
+++ b/tests/elasticsearch/src/test/java/org/apache/manifoldcf/elasticsearch_tests/NavigationDerbyUI.java
@@ -43,7 +43,17 @@
     HTMLTester.Loop loop;
     
     window = testerInstance.openMainWindow("http://localhost:8346/mcf-crawler-ui/index.jsp");
-    
+
+    // Login
+    form = window.findForm(testerInstance.createStringDescription("loginform"));
+    textarea = form.findTextarea(testerInstance.createStringDescription("userID"));
+    textarea.setValue(testerInstance.createStringDescription("admin"));
+    textarea = form.findTextarea(testerInstance.createStringDescription("password"));
+    textarea.setValue(testerInstance.createStringDescription("admin"));
+    button = window.findButton(testerInstance.createStringDescription("Login"));
+    button.click();
+    window = testerInstance.findWindow(null);
+
     // Define an output connection via the UI
     link = window.findLink(testerInstance.createStringDescription("List output connections"));
     link.click();
diff --git a/tests/filesystem/pom.xml b/tests/filesystem/pom.xml
index 24eb25f..3dd4660 100644
--- a/tests/filesystem/pom.xml
+++ b/tests/filesystem/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.manifoldcf</groupId>
     <artifactId>mcf-tests</artifactId>
-    <version>1.2-SNAPSHOT</version>
+    <version>1.5-SNAPSHOT</version>
     <relativePath>../pom.xml</relativePath>
   </parent>
   <modelVersion>4.0.0</modelVersion>
@@ -296,7 +296,7 @@
     <dependency>
       <groupId>org.apache.httpcomponents</groupId>
       <artifactId>httpclient</artifactId>
-      <version>${httpcomponent.version}</version>
+      <version>${httpcomponent.httpclient.version}</version>
     </dependency>
     <dependency>
       <groupId>commons-logging</groupId>
diff --git a/tests/filesystem/src/test/java/org/apache/manifoldcf/filesystem_tests/NavigationUITester.java b/tests/filesystem/src/test/java/org/apache/manifoldcf/filesystem_tests/NavigationUITester.java
index 79ce120..7004d3a 100644
--- a/tests/filesystem/src/test/java/org/apache/manifoldcf/filesystem_tests/NavigationUITester.java
+++ b/tests/filesystem/src/test/java/org/apache/manifoldcf/filesystem_tests/NavigationUITester.java
@@ -57,6 +57,16 @@
     
     window = testerInstance.openMainWindow(startURL);
     
+    // Login
+    form = window.findForm(testerInstance.createStringDescription("loginform"));
+    textarea = form.findTextarea(testerInstance.createStringDescription("userID"));
+    textarea.setValue(testerInstance.createStringDescription("admin"));
+    textarea = form.findTextarea(testerInstance.createStringDescription("password"));
+    textarea.setValue(testerInstance.createStringDescription("admin"));
+    button = window.findButton(testerInstance.createStringDescription("Login"));
+    button.click();
+    window = testerInstance.findWindow(null);
+
     // Define an output connection via the UI
     link = window.findLink(testerInstance.createStringDescription("List output connections"));
     link.click();
@@ -176,7 +186,7 @@
     form = window.findForm(testerInstance.createStringDescription("editjob"));
     radiobutton = form.findRadiobutton(testerInstance.createStringDescription("hopcountmode"),testerInstance.createStringDescription("2"));
     radiobutton.select();
-    link = window.findLink(testerInstance.createStringDescription("Paths tab"));
+    link = window.findLink(testerInstance.createStringDescription("Repository Paths tab"));
     link.click();
     // Add a record to the Paths list
     
diff --git a/tests/gts/src/test/java/org/apache/manifoldcf/gts_tests/NavigationDerbyUI.java b/tests/gts/src/test/java/org/apache/manifoldcf/gts_tests/NavigationDerbyUI.java
index b10d288..d191dc1 100644
--- a/tests/gts/src/test/java/org/apache/manifoldcf/gts_tests/NavigationDerbyUI.java
+++ b/tests/gts/src/test/java/org/apache/manifoldcf/gts_tests/NavigationDerbyUI.java
@@ -50,6 +50,16 @@
     
     window = testerInstance.openMainWindow("http://localhost:8346/mcf-crawler-ui/index.jsp");
     
+    // Login
+    form = window.findForm(testerInstance.createStringDescription("loginform"));
+    textarea = form.findTextarea(testerInstance.createStringDescription("userID"));
+    textarea.setValue(testerInstance.createStringDescription("admin"));
+    textarea = form.findTextarea(testerInstance.createStringDescription("password"));
+    textarea.setValue(testerInstance.createStringDescription("admin"));
+    button = window.findButton(testerInstance.createStringDescription("Login"));
+    button.click();
+    window = testerInstance.findWindow(null);
+
     // Define an output connection via the UI
     link = window.findLink(testerInstance.createStringDescription("List output connections"));
     link.click();
diff --git a/tests/hdfs/build.xml b/tests/hdfs/build.xml
new file mode 100644
index 0000000..2505261
--- /dev/null
+++ b/tests/hdfs/build.xml
@@ -0,0 +1,22 @@
+<!--
+ Licensed to the Apache Software Foundation (ASF) under one or more
+ contributor license agreements.  See the NOTICE file distributed with
+ this work for additional information regarding copyright ownership.
+ The ASF licenses this file to You under the Apache License, Version 2.0
+ (the "License"); you may not use this file except in compliance with
+ the License.  You may obtain a copy of the License at
+
+     http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+-->
+
+<project name="hdfs" default="all">
+
+    <import file="../ino-test-build.xml"/>
+    
+</project>
diff --git a/tests/hdfs/src/test/java/org/apache/manifoldcf/hdfs_tests/BaseUIDerby.java b/tests/hdfs/src/test/java/org/apache/manifoldcf/hdfs_tests/BaseUIDerby.java
new file mode 100644
index 0000000..76505d2
--- /dev/null
+++ b/tests/hdfs/src/test/java/org/apache/manifoldcf/hdfs_tests/BaseUIDerby.java
@@ -0,0 +1,53 @@
+/* $Id: BaseUIDerby.java 1231798 2012-01-15 23:58:22Z kwright $ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.hdfs_tests;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+import org.apache.manifoldcf.crawler.interfaces.*;
+import org.apache.manifoldcf.crawler.system.ManifoldCF;
+
+import java.io.*;
+import java.util.*;
+import org.junit.*;
+
+/** Tests that run the "agents daemon" should be derived from this */
+public class BaseUIDerby extends org.apache.manifoldcf.crawler.tests.ConnectorBaseUIDerby
+{
+  protected String[] getConnectorNames()
+  {
+    return new String[]{"HDFS Repository Connector"};
+  }
+  
+  protected String[] getConnectorClasses()
+  {
+    return new String[]{"org.apache.manifoldcf.crawler.connectors.hdfs.HDFSRepositoryConnector"};
+  }
+  
+  protected String[] getOutputNames()
+  {
+    return new String[]{"Null Output"};
+  }
+  
+  protected String[] getOutputClasses()
+  {
+    return new String[]{"org.apache.manifoldcf.agents.output.nullconnector.NullConnector"};
+  }
+
+}
diff --git a/tests/hdfs/src/test/java/org/apache/manifoldcf/hdfs_tests/NavigationDerbyUI.java b/tests/hdfs/src/test/java/org/apache/manifoldcf/hdfs_tests/NavigationDerbyUI.java
new file mode 100644
index 0000000..2e481ff
--- /dev/null
+++ b/tests/hdfs/src/test/java/org/apache/manifoldcf/hdfs_tests/NavigationDerbyUI.java
@@ -0,0 +1,43 @@
+/* $Id: NavigationDerbyUI.java 1422222 2012-12-15 11:29:02Z kwright $ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.hdfs_tests;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+import org.apache.manifoldcf.crawler.interfaces.*;
+import org.apache.manifoldcf.crawler.system.ManifoldCF;
+
+import java.io.*;
+import java.util.*;
+import org.junit.*;
+
+import org.apache.manifoldcf.core.tests.HTMLTester;
+
+/** Basic UI navigation tests */
+public class NavigationDerbyUI extends BaseUIDerby
+{
+
+  @Test
+  public void createConnectionsAndJob()
+    throws Exception
+  {
+    new NavigationUITester(testerInstance,"http://localhost:8346/mcf-crawler-ui/index.jsp").createConnectionsAndJob();
+  }
+  
+}
diff --git a/tests/hdfs/src/test/java/org/apache/manifoldcf/hdfs_tests/NavigationUITester.java b/tests/hdfs/src/test/java/org/apache/manifoldcf/hdfs_tests/NavigationUITester.java
new file mode 100644
index 0000000..421f7dd
--- /dev/null
+++ b/tests/hdfs/src/test/java/org/apache/manifoldcf/hdfs_tests/NavigationUITester.java
@@ -0,0 +1,245 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.hdfs_tests;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+import org.apache.manifoldcf.crawler.interfaces.*;
+import org.apache.manifoldcf.crawler.system.ManifoldCF;
+
+import java.io.*;
+import java.util.*;
+import org.junit.*;
+
+import org.apache.manifoldcf.core.tests.HTMLTester;
+
+/** Basic UI navigation tests */
+public class NavigationUITester
+{
+  protected final HTMLTester testerInstance;
+  protected final String startURL;
+  
+  public NavigationUITester(HTMLTester tester, String startURL)
+  {
+    this.testerInstance = tester;
+    this.startURL = startURL;
+  }
+  
+  public void createConnectionsAndJob()
+    throws Exception
+  {
+    testerInstance.newTest(Locale.US);
+    
+    HTMLTester.Window window;
+    HTMLTester.Link link;
+    HTMLTester.Form form;
+    HTMLTester.Textarea textarea;
+    HTMLTester.Selectbox selectbox;
+    HTMLTester.Button button;
+    HTMLTester.Radiobutton radiobutton;
+    HTMLTester.Loop loop;
+    
+    window = testerInstance.openMainWindow(startURL);
+    
+    // Login
+    form = window.findForm(testerInstance.createStringDescription("loginform"));
+    textarea = form.findTextarea(testerInstance.createStringDescription("userID"));
+    textarea.setValue(testerInstance.createStringDescription("admin"));
+    textarea = form.findTextarea(testerInstance.createStringDescription("password"));
+    textarea.setValue(testerInstance.createStringDescription("admin"));
+    button = window.findButton(testerInstance.createStringDescription("Login"));
+    button.click();
+    window = testerInstance.findWindow(null);
+
+    // Define an output connection via the UI
+    link = window.findLink(testerInstance.createStringDescription("List output connections"));
+    link.click();
+    window = testerInstance.findWindow(null);
+    link = window.findLink(testerInstance.createStringDescription("Add an output connection"));
+    link.click();
+    // Fill in a name
+    window = testerInstance.findWindow(null);
+    form = window.findForm(testerInstance.createStringDescription("editconnection"));
+    textarea = form.findTextarea(testerInstance.createStringDescription("connname"));
+    textarea.setValue(testerInstance.createStringDescription("MyOutputConnection"));
+    link = window.findLink(testerInstance.createStringDescription("Type tab"));
+    link.click();
+    // Select a type
+    window = testerInstance.findWindow(null);
+    form = window.findForm(testerInstance.createStringDescription("editconnection"));
+    selectbox = form.findSelectbox(testerInstance.createStringDescription("classname"));
+    selectbox.selectValue(testerInstance.createStringDescription("org.apache.manifoldcf.agents.output.nullconnector.NullConnector"));
+    button = window.findButton(testerInstance.createStringDescription("Continue to next page"));
+    button.click();
+    // Visit the Throttling tab
+    window = testerInstance.findWindow(null);
+    link = window.findLink(testerInstance.createStringDescription("Throttling tab"));
+    link.click();
+    // Go back to the Name tab
+    window = testerInstance.findWindow(null);
+    link = window.findLink(testerInstance.createStringDescription("Name tab"));
+    link.click();
+    // Now save the connection.
+    window = testerInstance.findWindow(null);
+    button = window.findButton(testerInstance.createStringDescription("Save this output connection"));
+    button.click();
+    
+    // Define a repository connection via the UI
+    window = testerInstance.findWindow(null);
+    link = window.findLink(testerInstance.createStringDescription("List repository connections"));
+    link.click();
+    window = testerInstance.findWindow(null);
+    link = window.findLink(testerInstance.createStringDescription("Add a connection"));
+    link.click();
+    // Fill in a name
+    window = testerInstance.findWindow(null);
+    form = window.findForm(testerInstance.createStringDescription("editconnection"));
+    textarea = form.findTextarea(testerInstance.createStringDescription("connname"));
+    textarea.setValue(testerInstance.createStringDescription("MyRepositoryConnection"));
+    link = window.findLink(testerInstance.createStringDescription("Type tab"));
+    link.click();
+    // Select a type
+    window = testerInstance.findWindow(null);
+    form = window.findForm(testerInstance.createStringDescription("editconnection"));
+    selectbox = form.findSelectbox(testerInstance.createStringDescription("classname"));
+    selectbox.selectValue(testerInstance.createStringDescription("org.apache.manifoldcf.crawler.connectors.hdfs.HDFSRepositoryConnector"));
+    button = window.findButton(testerInstance.createStringDescription("Continue to next page"));
+    button.click();
+    // Visit the Throttling tab
+    window = testerInstance.findWindow(null);
+    link = window.findLink(testerInstance.createStringDescription("Throttling tab"));
+    link.click();
+    // Go back to the Name tab
+    window = testerInstance.findWindow(null);
+    link = window.findLink(testerInstance.createStringDescription("Name tab"));
+    link.click();
+    // Server tab
+    window = testerInstance.findWindow(null);
+    link = window.findLink(testerInstance.createStringDescription("Server tab"));
+    link.click();
+    window = testerInstance.findWindow(null);
+    form = window.findForm(testerInstance.createStringDescription("editconnection"));
+    textarea = form.findTextarea(testerInstance.createStringDescription("user"));
+    textarea.setValue(testerInstance.createStringDescription("foo"));
+    // Now save the connection.
+    window = testerInstance.findWindow(null);
+    button = window.findButton(testerInstance.createStringDescription("Save this connection"));
+    button.click();
+    
+    // Create a job
+    window = testerInstance.findWindow(null);
+    link = window.findLink(testerInstance.createStringDescription("List jobs"));
+    link.click();
+    // Add a job
+    window = testerInstance.findWindow(null);
+    link = window.findLink(testerInstance.createStringDescription("Add a job"));
+    link.click();
+    // Fill in a name
+    window = testerInstance.findWindow(null);
+    form = window.findForm(testerInstance.createStringDescription("editjob"));
+    textarea = form.findTextarea(testerInstance.createStringDescription("description"));
+    textarea.setValue(testerInstance.createStringDescription("MyJob"));
+    link = window.findLink(testerInstance.createStringDescription("Connection tab"));
+    link.click();
+    // Select the connections
+    window = testerInstance.findWindow(null);
+    form = window.findForm(testerInstance.createStringDescription("editjob"));
+    selectbox = form.findSelectbox(testerInstance.createStringDescription("outputname"));
+    selectbox.selectValue(testerInstance.createStringDescription("MyOutputConnection"));
+    selectbox = form.findSelectbox(testerInstance.createStringDescription("connectionname"));
+    selectbox.selectValue(testerInstance.createStringDescription("MyRepositoryConnection"));
+    button = window.findButton(testerInstance.createStringDescription("Continue to next screen"));
+    button.click();
+    // Visit all the tabs.  Scheduling tab first
+    window = testerInstance.findWindow(null);
+    link = window.findLink(testerInstance.createStringDescription("Scheduling tab"));
+    link.click();
+    window = testerInstance.findWindow(null);
+    form = window.findForm(testerInstance.createStringDescription("editjob"));
+    selectbox = form.findSelectbox(testerInstance.createStringDescription("dayofweek"));
+    selectbox.selectValue(testerInstance.createStringDescription("0"));
+    selectbox = form.findSelectbox(testerInstance.createStringDescription("hourofday"));
+    selectbox.selectValue(testerInstance.createStringDescription("1"));
+    selectbox = form.findSelectbox(testerInstance.createStringDescription("minutesofhour"));
+    selectbox.selectValue(testerInstance.createStringDescription("30"));
+    selectbox = form.findSelectbox(testerInstance.createStringDescription("monthofyear"));
+    selectbox.selectValue(testerInstance.createStringDescription("11"));
+    selectbox = form.findSelectbox(testerInstance.createStringDescription("dayofmonth"));
+    selectbox.selectValue(testerInstance.createStringDescription("none"));
+    textarea = form.findTextarea(testerInstance.createStringDescription("duration"));
+    textarea.setValue(testerInstance.createStringDescription("120"));
+    button = window.findButton(testerInstance.createStringDescription("Add new schedule record"));
+    button.click();
+    // Now, HopFilters
+    window = testerInstance.findWindow(null);
+    link = window.findLink(testerInstance.createStringDescription("Hop Filters tab"));
+    link.click();
+    window = testerInstance.findWindow(null);
+    form = window.findForm(testerInstance.createStringDescription("editjob"));
+    radiobutton = form.findRadiobutton(testerInstance.createStringDescription("hopcountmode"),testerInstance.createStringDescription("2"));
+    radiobutton.select();
+    link = window.findLink(testerInstance.createStringDescription("Repository Paths tab"));
+    link.click();
+    // Add a record to the Paths list
+    
+    // MHL
+    
+    // Save the job
+    window = testerInstance.findWindow(null);
+    button = window.findButton(testerInstance.createStringDescription("Save this job"));
+    button.click();
+
+    // Delete the job
+    window = testerInstance.findWindow(null);
+    HTMLTester.StringDescription jobID = window.findMatch(testerInstance.createStringDescription("<!--jobid=(.*?)-->"),0);
+    testerInstance.printValue(jobID);
+    link = window.findLink(testerInstance.createStringDescription("Delete this job"));
+    link.click();
+    
+    // Wait for the job to go away
+    loop = testerInstance.beginLoop(120);
+    window = testerInstance.findWindow(null);
+    link = window.findLink(testerInstance.createStringDescription("Manage jobs"));
+    link.click();
+    window = testerInstance.findWindow(null);
+    HTMLTester.StringDescription isJobNotPresent = window.isNotPresent(jobID);
+    testerInstance.printValue(isJobNotPresent);
+    loop.breakWhenTrue(isJobNotPresent);
+    loop.endLoop();
+    
+    // Delete the repository connection
+    window = testerInstance.findWindow(null);
+    link = window.findLink(testerInstance.createStringDescription("List repository connections"));
+    link.click();
+    window = testerInstance.findWindow(null);
+    link = window.findLink(testerInstance.createStringDescription("Delete MyRepositoryConnection"));
+    link.click();
+    
+    // Delete the output connection
+    window = testerInstance.findWindow(null);
+    link = window.findLink(testerInstance.createStringDescription("List output connections"));
+    link.click();
+    window = testerInstance.findWindow(null);
+    link = window.findLink(testerInstance.createStringDescription("Delete MyOutputConnection"));
+    link.click();
+    
+    testerInstance.executeTest();
+  }
+  
+}
diff --git a/tests/jcifs/src/test/java/org/apache/manifoldcf/jcifs_tests/NavigationDerbyUI.java b/tests/jcifs/src/test/java/org/apache/manifoldcf/jcifs_tests/NavigationDerbyUI.java
index 83fafdc..b8efb22 100644
--- a/tests/jcifs/src/test/java/org/apache/manifoldcf/jcifs_tests/NavigationDerbyUI.java
+++ b/tests/jcifs/src/test/java/org/apache/manifoldcf/jcifs_tests/NavigationDerbyUI.java
@@ -50,6 +50,16 @@
     
     window = testerInstance.openMainWindow("http://localhost:8346/mcf-crawler-ui/index.jsp");
     
+    // Login
+    form = window.findForm(testerInstance.createStringDescription("loginform"));
+    textarea = form.findTextarea(testerInstance.createStringDescription("userID"));
+    textarea.setValue(testerInstance.createStringDescription("admin"));
+    textarea = form.findTextarea(testerInstance.createStringDescription("password"));
+    textarea.setValue(testerInstance.createStringDescription("admin"));
+    button = window.findButton(testerInstance.createStringDescription("Login"));
+    button.click();
+    window = testerInstance.findWindow(null);
+
     // Define an output connection via the UI
     link = window.findLink(testerInstance.createStringDescription("List output connections"));
     link.click();
diff --git a/tests/jdbc/src/test/java/org/apache/manifoldcf/jdbc_tests/NavigationDerbyUI.java b/tests/jdbc/src/test/java/org/apache/manifoldcf/jdbc_tests/NavigationDerbyUI.java
index 44cb8ae..63cd862 100644
--- a/tests/jdbc/src/test/java/org/apache/manifoldcf/jdbc_tests/NavigationDerbyUI.java
+++ b/tests/jdbc/src/test/java/org/apache/manifoldcf/jdbc_tests/NavigationDerbyUI.java
@@ -50,6 +50,16 @@
     
     window = testerInstance.openMainWindow("http://localhost:8346/mcf-crawler-ui/index.jsp");
     
+    // Login
+    form = window.findForm(testerInstance.createStringDescription("loginform"));
+    textarea = form.findTextarea(testerInstance.createStringDescription("userID"));
+    textarea.setValue(testerInstance.createStringDescription("admin"));
+    textarea = form.findTextarea(testerInstance.createStringDescription("password"));
+    textarea.setValue(testerInstance.createStringDescription("admin"));
+    button = window.findButton(testerInstance.createStringDescription("Login"));
+    button.click();
+    window = testerInstance.findWindow(null);
+
     // Define an output connection via the UI
     link = window.findLink(testerInstance.createStringDescription("List output connections"));
     link.click();
@@ -212,6 +222,18 @@
     
     // Define an authority connection via the UI
     window = testerInstance.findWindow(null);
+    link = window.findLink(testerInstance.createStringDescription("List authority groups"));
+    link.click();
+    window = testerInstance.findWindow(null);
+    link = window.findLink(testerInstance.createStringDescription("Add new authority group"));
+    link.click();
+    window = testerInstance.findWindow(null);
+    form = window.findForm(testerInstance.createStringDescription("editgroup"));
+    textarea = form.findTextarea(testerInstance.createStringDescription("groupname"));
+    textarea.setValue(testerInstance.createStringDescription("MyAuthorityConnection"));
+    button = window.findButton(testerInstance.createStringDescription("Save this authority group"));
+    button.click();
+    window = testerInstance.findWindow(null);
     link = window.findLink(testerInstance.createStringDescription("List authorities"));
     link.click();
     window = testerInstance.findWindow(null);
@@ -229,6 +251,8 @@
     form = window.findForm(testerInstance.createStringDescription("editconnection"));
     selectbox = form.findSelectbox(testerInstance.createStringDescription("classname"));
     selectbox.selectValue(testerInstance.createStringDescription("org.apache.manifoldcf.authorities.authorities.jdbc.JDBCAuthority"));
+    selectbox = form.findSelectbox(testerInstance.createStringDescription("authoritygroup"));
+    selectbox.selectValue(testerInstance.createStringDescription("MyAuthorityConnection"));
     button = window.findButton(testerInstance.createStringDescription("Continue to next page"));
     button.click();
     // Credentials tab
diff --git a/tests/ldap/src/test/java/org/apache/manifoldcf/ldap_tests/NavigationDerbyUI.java b/tests/ldap/src/test/java/org/apache/manifoldcf/ldap_tests/NavigationDerbyUI.java
index 1b549e2..7fdd649 100644
--- a/tests/ldap/src/test/java/org/apache/manifoldcf/ldap_tests/NavigationDerbyUI.java
+++ b/tests/ldap/src/test/java/org/apache/manifoldcf/ldap_tests/NavigationDerbyUI.java
@@ -49,9 +49,32 @@
     HTMLTester.Loop loop;
     
     window = testerInstance.openMainWindow("http://localhost:8346/mcf-crawler-ui/index.jsp");
-    
+
+    // Login
+    form = window.findForm(testerInstance.createStringDescription("loginform"));
+    textarea = form.findTextarea(testerInstance.createStringDescription("userID"));
+    textarea.setValue(testerInstance.createStringDescription("admin"));
+    textarea = form.findTextarea(testerInstance.createStringDescription("password"));
+    textarea.setValue(testerInstance.createStringDescription("admin"));
+    button = window.findButton(testerInstance.createStringDescription("Login"));
+    button.click();
+    window = testerInstance.findWindow(null);
+
     // Define an authority connection via the UI
     window = testerInstance.findWindow(null);
+    link = window.findLink(testerInstance.createStringDescription("List authority groups"));
+    link.click();
+    window = testerInstance.findWindow(null);
+    link = window.findLink(testerInstance.createStringDescription("Add new authority group"));
+    link.click();
+    window = testerInstance.findWindow(null);
+    form = window.findForm(testerInstance.createStringDescription("editgroup"));
+    textarea = form.findTextarea(testerInstance.createStringDescription("groupname"));
+    textarea.setValue(testerInstance.createStringDescription("MyAuthorityConnection"));
+    button = window.findButton(testerInstance.createStringDescription("Save this authority group"));
+    button.click();
+
+    window = testerInstance.findWindow(null);
     link = window.findLink(testerInstance.createStringDescription("List authorities"));
     link.click();
     window = testerInstance.findWindow(null);
@@ -69,6 +92,8 @@
     form = window.findForm(testerInstance.createStringDescription("editconnection"));
     selectbox = form.findSelectbox(testerInstance.createStringDescription("classname"));
     selectbox.selectValue(testerInstance.createStringDescription("org.apache.manifoldcf.authorities.authorities.ldap.LDAPAuthority"));
+    selectbox = form.findSelectbox(testerInstance.createStringDescription("authoritygroup"));
+    selectbox.selectValue(testerInstance.createStringDescription("MyAuthorityConnection"));
     button = window.findButton(testerInstance.createStringDescription("Continue to next page"));
     button.click();
     // Server tab
diff --git a/tests/nullauthority/src/test/java/org/apache/manifoldcf/nullauthority_tests/NavigationDerbyUI.java b/tests/nullauthority/src/test/java/org/apache/manifoldcf/nullauthority_tests/NavigationDerbyUI.java
index 8cfd37a..52b14b7 100644
--- a/tests/nullauthority/src/test/java/org/apache/manifoldcf/nullauthority_tests/NavigationDerbyUI.java
+++ b/tests/nullauthority/src/test/java/org/apache/manifoldcf/nullauthority_tests/NavigationDerbyUI.java
@@ -50,6 +50,16 @@
     
     window = testerInstance.openMainWindow("http://localhost:8346/mcf-crawler-ui/index.jsp");
     
+    // Login
+    form = window.findForm(testerInstance.createStringDescription("loginform"));
+    textarea = form.findTextarea(testerInstance.createStringDescription("userID"));
+    textarea.setValue(testerInstance.createStringDescription("admin"));
+    textarea = form.findTextarea(testerInstance.createStringDescription("password"));
+    textarea.setValue(testerInstance.createStringDescription("admin"));
+    button = window.findButton(testerInstance.createStringDescription("Login"));
+    button.click();
+    window = testerInstance.findWindow(null);
+
     // Define an authority connection via the UI
     link = window.findLink(testerInstance.createStringDescription("List authorities"));
     link.click();
diff --git a/tests/opensearchserver/src/test/java/org/apache/manifoldcf/opensearchserver_tests/NavigationDerbyUI.java b/tests/opensearchserver/src/test/java/org/apache/manifoldcf/opensearchserver_tests/NavigationDerbyUI.java
index 2917a17..418b05e 100644
--- a/tests/opensearchserver/src/test/java/org/apache/manifoldcf/opensearchserver_tests/NavigationDerbyUI.java
+++ b/tests/opensearchserver/src/test/java/org/apache/manifoldcf/opensearchserver_tests/NavigationDerbyUI.java
@@ -50,6 +50,16 @@
     
     window = testerInstance.openMainWindow("http://localhost:8346/mcf-crawler-ui/index.jsp");
     
+    // Login
+    form = window.findForm(testerInstance.createStringDescription("loginform"));
+    textarea = form.findTextarea(testerInstance.createStringDescription("userID"));
+    textarea.setValue(testerInstance.createStringDescription("admin"));
+    textarea = form.findTextarea(testerInstance.createStringDescription("password"));
+    textarea.setValue(testerInstance.createStringDescription("admin"));
+    button = window.findButton(testerInstance.createStringDescription("Login"));
+    button.click();
+    window = testerInstance.findWindow(null);
+
     // Define an output connection via the UI
     link = window.findLink(testerInstance.createStringDescription("List output connections"));
     link.click();
diff --git a/tests/pom.xml b/tests/pom.xml
index 87bfc82..d3d0128 100644
--- a/tests/pom.xml
+++ b/tests/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.manifoldcf</groupId>
     <artifactId>mcf-parent</artifactId>
-    <version>1.2-SNAPSHOT</version>
+    <version>1.5-SNAPSHOT</version>
   </parent>
   <modelVersion>4.0.0</modelVersion>
   <artifactId>mcf-tests</artifactId>
diff --git a/tests/rss/pom.xml b/tests/rss/pom.xml
index b22da1b..23e852e 100644
--- a/tests/rss/pom.xml
+++ b/tests/rss/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.manifoldcf</groupId>
     <artifactId>mcf-tests</artifactId>
-    <version>1.2-SNAPSHOT</version>
+    <version>1.5-SNAPSHOT</version>
     <relativePath>../pom.xml</relativePath>
   </parent>
   <modelVersion>4.0.0</modelVersion>
@@ -290,7 +290,7 @@
     <dependency>
       <groupId>org.apache.httpcomponents</groupId>
       <artifactId>httpclient</artifactId>
-      <version>${httpcomponent.version}</version>
+      <version>${httpcomponent.httpclient.version}</version>
     </dependency>
     <dependency>
       <groupId>commons-logging</groupId>
diff --git a/tests/rss/src/test/java/org/apache/manifoldcf/rss_tests/FlakyDerbyInstance.java b/tests/rss/src/test/java/org/apache/manifoldcf/rss_tests/FlakyDerbyInstance.java
new file mode 100644
index 0000000..274f069
--- /dev/null
+++ b/tests/rss/src/test/java/org/apache/manifoldcf/rss_tests/FlakyDerbyInstance.java
@@ -0,0 +1,75 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.rss_tests;
+
+import org.apache.manifoldcf.core.interfaces.*;
+
+import java.io.*;
+import java.util.*;
+import org.junit.*;
+import java.sql.*;
+import javax.naming.*;
+import javax.sql.*;
+import java.util.concurrent.atomic.*;
+
+/** Derby database interface that can simulate loss of database connectivity, for connection-interruption tests */
+public class FlakyDerbyInstance extends org.apache.manifoldcf.core.database.DBInterfaceDerby
+{
+
+  protected final static AtomicBoolean lostConnection = new AtomicBoolean(false);
+  
+  public FlakyDerbyInstance(IThreadContext tc, String databaseName, String userName, String password)
+    throws ManifoldCFException
+  {
+    super(tc,databaseName,userName,password);
+  }
+
+  public FlakyDerbyInstance(IThreadContext tc, String databaseName)
+    throws ManifoldCFException
+  {
+    super(tc,databaseName);
+  }
+
+  @Override
+  protected IResultSet execute(Connection connection, String query, List params, boolean bResults, int maxResults,
+    ResultSpecification spec, ILimitChecker returnLimit)
+    throws ManifoldCFException
+  {
+    if (!lostConnection.get())
+      return super.execute(connection,query,params,bResults,maxResults,spec,returnLimit);
+    // Simulate a dead connection by throwing a database error
+    try
+    {
+      // Sleep, to limit log noise.
+      Thread.sleep(1000L);
+    }
+    catch (InterruptedException e)
+    {
+      throw new ManifoldCFException(e.getMessage(),ManifoldCFException.INTERRUPTED);
+    }
+    
+    throw new ManifoldCFException("Database error", new Exception("Manufactured db error"), ManifoldCFException.DATABASE_CONNECTION_ERROR);
+  }
+
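+  /** Turn simulated database connectivity on (true) or off (false); while off, queries fail with a database connection error. */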
+  public static void setConnectionWorking(boolean value)
+  {
+    lostConnection.set(!value);
+  }
+  
+}
diff --git a/tests/rss/src/test/java/org/apache/manifoldcf/rss_tests/NavigationDerbyUI.java b/tests/rss/src/test/java/org/apache/manifoldcf/rss_tests/NavigationDerbyUI.java
index 6d96489..0377e64 100644
--- a/tests/rss/src/test/java/org/apache/manifoldcf/rss_tests/NavigationDerbyUI.java
+++ b/tests/rss/src/test/java/org/apache/manifoldcf/rss_tests/NavigationDerbyUI.java
@@ -50,6 +50,16 @@
     
     window = testerInstance.openMainWindow("http://localhost:8346/mcf-crawler-ui/index.jsp");
     
+    // Login
+    form = window.findForm(testerInstance.createStringDescription("loginform"));
+    textarea = form.findTextarea(testerInstance.createStringDescription("userID"));
+    textarea.setValue(testerInstance.createStringDescription("admin"));
+    textarea = form.findTextarea(testerInstance.createStringDescription("password"));
+    textarea.setValue(testerInstance.createStringDescription("admin"));
+    button = window.findButton(testerInstance.createStringDescription("Login"));
+    button.click();
+    window = testerInstance.findWindow(null);
+
     // Define an output connection via the UI
     link = window.findLink(testerInstance.createStringDescription("List output connections"));
     link.click();
diff --git a/tests/rss/src/test/java/org/apache/manifoldcf/rss_tests/RSSFlakyDerbyIT.java b/tests/rss/src/test/java/org/apache/manifoldcf/rss_tests/RSSFlakyDerbyIT.java
new file mode 100644
index 0000000..07ef2ee
--- /dev/null
+++ b/tests/rss/src/test/java/org/apache/manifoldcf/rss_tests/RSSFlakyDerbyIT.java
@@ -0,0 +1,89 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.rss_tests;
+
+import java.io.*;
+import java.util.*;
+import org.junit.*;
+
+/** RSS crawl test that exercises database connection interruption logic */
+public class RSSFlakyDerbyIT extends BaseDerby
+{
+  protected RSSSimpleCrawlTester tester;
+  protected MockRSSService rssService = null;
+  
+  public RSSFlakyDerbyIT()
+  {
+    tester = new RSSSimpleCrawlTester(mcfInstance);
+  }
+  
+  // Set up and tear down the mock RSS service
+  
+  @Before
+  public void createRSSService()
+    throws Exception
+  {
+    rssService = new MockRSSService(10);
+    rssService.start();
+  }
+  
+  @After
+  public void shutdownRSSService()
+    throws Exception
+  {
+    if (rssService != null)
+      rssService.stop();
+  }
+
+  @Test
+  public void simpleCrawl()
+    throws Exception
+  {
+    tester.executeTest(new DBInterruptionNotification());
+  }
+  
+  /** Method to get database implementation class */
+  @Override
+  protected String getDatabaseImplementationClass()
+    throws Exception
+  {
+    return FlakyDerbyInstance.class.getName();
+  }
+
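+  /** Test notification that simulates two database outages (10 seconds each) while the crawl is running. */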
+  protected static class DBInterruptionNotification implements RSSSimpleCrawlTester.TestNotification
+  {
+    public void notifyMe()
+      throws Exception
+    {
+      // Wait 5 seconds, then turn off database access for 10 seconds.  Then, do it again.
+      Thread.sleep(5000L);
+      FlakyDerbyInstance.setConnectionWorking(false);
+      System.out.println("Database connectivity is OFF");
+      Thread.sleep(10000L);
+      FlakyDerbyInstance.setConnectionWorking(true);
+      System.out.println("Database connectivity restored");
+      Thread.sleep(5000L);
+      FlakyDerbyInstance.setConnectionWorking(false);
+      System.out.println("Database connectivity is OFF");
+      Thread.sleep(10000L);
+      FlakyDerbyInstance.setConnectionWorking(true);
+      System.out.println("Database connectivity restored");
+    }
+  }
+}
diff --git a/tests/rss/src/test/java/org/apache/manifoldcf/rss_tests/RSSSimpleCrawlTester.java b/tests/rss/src/test/java/org/apache/manifoldcf/rss_tests/RSSSimpleCrawlTester.java
index f8e13ec..46e605a 100644
--- a/tests/rss/src/test/java/org/apache/manifoldcf/rss_tests/RSSSimpleCrawlTester.java
+++ b/tests/rss/src/test/java/org/apache/manifoldcf/rss_tests/RSSSimpleCrawlTester.java
@@ -41,6 +41,12 @@
   public void executeTest()
     throws Exception
   {
+    executeTest(null);
+  }
+  
+  public void executeTest(TestNotification tn)
+    throws Exception
+  {
     // Hey, we were able to install the file system connector etc.
     // Now, create a local test job and run it.
     IThreadContext tc = ThreadContextFactory.make();
@@ -60,7 +66,7 @@
     cp.setParameter(RSSConfig.PARAMETER_ROBOTSUSAGE,"none");
     // Now, save
     mgr.save(conn);
-      
+    
     // Create a basic null output connection, and save it.
     IOutputConnectionManager outputMgr = OutputConnectionManagerFactory.make(tc);
     IOutputConnection outputConn = outputMgr.create();
@@ -101,6 +107,11 @@
     // Now, start the job, and wait until it completes.
     long startTime = System.currentTimeMillis();
     jobManager.manualStart(job.getID());
+    // If a test notification was provided, invoke it now so it can simulate events during the crawl
+    if (tn != null)
+    {
+      tn.notifyMe();
+    }
     instance.waitJobInactiveNative(jobManager,job.getID(),600000L);
     System.err.println("Crawl required "+new Long(System.currentTimeMillis()-startTime).toString()+" milliseconds");
 
@@ -117,4 +128,9 @@
     // Cleanup is automatic by the base class, so we can feel free to leave jobs and connections lying around.
   }
   
+  /** Callback invoked after the job starts, so a test can simulate events (e.g. database outages) while the crawl runs. */
+  public static interface TestNotification
+  {
+    public void notifyMe()
+      throws Exception;
+  }
 }
diff --git a/tests/sharepoint/pom.xml b/tests/sharepoint/pom.xml
index 1e50569..de5d439 100644
--- a/tests/sharepoint/pom.xml
+++ b/tests/sharepoint/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.manifoldcf</groupId>
     <artifactId>mcf-tests</artifactId>
-    <version>1.2-SNAPSHOT</version>
+    <version>1.5-SNAPSHOT</version>
     <relativePath>../pom.xml</relativePath>
   </parent>
   <modelVersion>4.0.0</modelVersion>
@@ -94,7 +94,7 @@
     <dependency>
       <groupId>org.apache.httpcomponents</groupId>
       <artifactId>httpclient</artifactId>
-      <version>${httpcomponent.version}</version>
+      <version>${httpcomponent.httpclient.version}</version>
     </dependency>
     <dependency>
       <groupId>commons-logging</groupId>
diff --git a/tests/solr/src/test/java/org/apache/manifoldcf/solr_tests/NavigationDerbyUI.java b/tests/solr/src/test/java/org/apache/manifoldcf/solr_tests/NavigationDerbyUI.java
index 0e86be3..ce003a2 100644
--- a/tests/solr/src/test/java/org/apache/manifoldcf/solr_tests/NavigationDerbyUI.java
+++ b/tests/solr/src/test/java/org/apache/manifoldcf/solr_tests/NavigationDerbyUI.java
@@ -50,6 +50,16 @@
     
     window = testerInstance.openMainWindow("http://localhost:8346/mcf-crawler-ui/index.jsp");
     
+    // Login
+    form = window.findForm(testerInstance.createStringDescription("loginform"));
+    textarea = form.findTextarea(testerInstance.createStringDescription("userID"));
+    textarea.setValue(testerInstance.createStringDescription("admin"));
+    textarea = form.findTextarea(testerInstance.createStringDescription("password"));
+    textarea.setValue(testerInstance.createStringDescription("admin"));
+    button = window.findButton(testerInstance.createStringDescription("Login"));
+    button.click();
+    window = testerInstance.findWindow(null);
+
     // Define an output connection via the UI
     link = window.findLink(testerInstance.createStringDescription("List output connections"));
     link.click();
diff --git a/tests/webcrawler/pom.xml b/tests/webcrawler/pom.xml
index 35a7f5b..7a6f6d2 100644
--- a/tests/webcrawler/pom.xml
+++ b/tests/webcrawler/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.manifoldcf</groupId>
     <artifactId>mcf-tests</artifactId>
-    <version>1.2-SNAPSHOT</version>
+    <version>1.5-SNAPSHOT</version>
     <relativePath>../pom.xml</relativePath>
   </parent>
   <modelVersion>4.0.0</modelVersion>
@@ -276,7 +276,7 @@
     <dependency>
       <groupId>org.apache.httpcomponents</groupId>
       <artifactId>httpclient</artifactId>
-      <version>${httpcomponent.version}</version>
+      <version>${httpcomponent.httpclient.version}</version>
     </dependency>
     <dependency>
       <groupId>commons-logging</groupId>
diff --git a/tests/webcrawler/src/test/java/org/apache/manifoldcf/webcrawler_tests/BigCrawlTester.java b/tests/webcrawler/src/test/java/org/apache/manifoldcf/webcrawler_tests/BigCrawlTester.java
index e879763..ecd0acd 100644
--- a/tests/webcrawler/src/test/java/org/apache/manifoldcf/webcrawler_tests/BigCrawlTester.java
+++ b/tests/webcrawler/src/test/java/org/apache/manifoldcf/webcrawler_tests/BigCrawlTester.java
@@ -80,7 +80,8 @@
     job.setType(job.TYPE_SPECIFIED);
     job.setStartMethod(job.START_DISABLE);
     job.setHopcountMode(job.HOPCOUNT_ACCURATE);
-    job.addHopCountFilter("link",new Long(2));
+    // Start with a hop count filter of 1; we will increase it on the second run.
+    job.addHopCountFilter("link",new Long(1));
     //job.addHopCountFilter("redirect",new Long(2));
 
     // Now, set up the document specification.
@@ -120,6 +121,32 @@
     // Check to be sure we actually processed the right number of documents.
     JobStatus status = jobManager.getStatus(job.getID());
     // Four levels deep from 10 site seeds: Each site seed has 1 + 10 + 100 + 1000 = 1111 documents, so 10 has 11110.
+    // First run (hop count filter 1): 1 + 10 = 11 documents per seed, so 10 seeds yield 110.
+    if (status.getDocumentsProcessed() != 110)
+    {
+      System.err.println("Sleeping for database inspection");
+      while (true)
+      {
+        if (1 < 0)
+          break;
+        Thread.sleep(10000L);
+      }
+      throw new ManifoldCFException("Wrong number of documents processed - expected 110, saw "+new Long(status.getDocumentsProcessed()).toString());
+    }
+    
+    // Increase the hopcount filter value
+    job.addHopCountFilter("link",new Long(2));
+    jobManager.save(job);
+    
+    // Run again
+    startTime = System.currentTimeMillis();
+    jobManager.manualStart(job.getID());
+    instance.waitJobInactiveNative(jobManager,job.getID(),220000000L);
+    System.err.println("Second crawl required "+new Long(System.currentTimeMillis()-startTime).toString()+" milliseconds");
+
+    // Check to be sure we actually processed the right number of documents.
+    status = jobManager.getStatus(job.getID());
+    // Second run (hop count filter 2): 1 + 10 + 100 = 111 documents per seed, so 10 seeds yield 1110.
     if (status.getDocumentsProcessed() != 1110)
     {
       System.err.println("Sleeping for database inspection");
@@ -131,7 +158,7 @@
       }
       throw new ManifoldCFException("Wrong number of documents processed - expected 1110, saw "+new Long(status.getDocumentsProcessed()).toString());
     }
-    
+
     // Now, delete the job.
     jobManager.deleteJob(job.getID());
     instance.waitJobDeletedNative(jobManager,job.getID(),18000000L);
diff --git a/tests/webcrawler/src/test/java/org/apache/manifoldcf/webcrawler_tests/MockWebService.java b/tests/webcrawler/src/test/java/org/apache/manifoldcf/webcrawler_tests/MockWebService.java
index 7258c55..69f3a04 100644
--- a/tests/webcrawler/src/test/java/org/apache/manifoldcf/webcrawler_tests/MockWebService.java
+++ b/tests/webcrawler/src/test/java/org/apache/manifoldcf/webcrawler_tests/MockWebService.java
@@ -36,12 +36,17 @@
 {
   Server server;
   WebServlet servlet;
-    
+  
   public MockWebService(int docsPerLevel)
   {
+    this(docsPerLevel, 10, false);
+  }
+  
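+  /** Extended constructor: maxLevels caps how deep child links are generated, and generateBadPages makes odd-numbered documents return a 401 error page rather than a normal document. */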
+  public MockWebService(int docsPerLevel, int maxLevels, boolean generateBadPages)
+  {
     server = new Server(8191);
     server.setThreadPool(new QueuedThreadPool(100));
-    servlet = new WebServlet(docsPerLevel);
+    servlet = new WebServlet(docsPerLevel, maxLevels, generateBadPages);
     ServletContextHandler context = new ServletContextHandler(ServletContextHandler.SESSIONS);
     context.setContextPath("/web");
     server.setHandler(context);
@@ -61,11 +66,15 @@
   
   public static class WebServlet extends HttpServlet
   {
-    int docsPerLevel;
+    final int docsPerLevel;
+    final int maxLevels;
+    final boolean generateBadPages;
     
-    public WebServlet(int docsPerLevel)
+    public WebServlet(int docsPerLevel, int maxLevels, boolean generateBadPages)
     {
       this.docsPerLevel = docsPerLevel;
+      this.maxLevels = maxLevels;
+      this.generateBadPages = generateBadPages;
     }
     
     @Override
@@ -96,7 +105,9 @@
         {
           throw new IOException("Level number must be a number: "+level);
         }
-        
+        if (theLevel >= maxLevels)
+          throw new IOException("Level number too big.");
+
         int theItem;
         try
         {
@@ -119,43 +130,55 @@
           throw new IOException("Doc number too big: "+theItem+" ; level "+theLevel+" ; docsPerLevel "+docsPerLevel);
 
         // Generate the page
-        res.setStatus(HttpServletResponse.SC_OK);
-        res.setContentType("text/html; charset=utf-8");
-        res.getWriter().printf("<html>\n");
-        res.getWriter().printf("  <body>\n");
-
-        res.getWriter().printf("This is doc number "+theItem+" and level number "+theLevel+" in site "+site+"\n");
-
-        // Generate links to all parents
-        int parentLevel = theLevel;
-        int parentItem = theItem;
-        while (parentLevel > 0)
+        if (generateBadPages && (theItem % 2) == 1)
         {
-          parentLevel--;
-          parentItem /= docsPerLevel;
-          generateLink(res,site,parentLevel,parentItem);
-        }
-        
-        // Temporary: Prevent links to children deeper than a certain level; this is to help
-        // the debug process
-        if (theLevel < 9)
-        {
-          // Generate links to direct children
-          for (int i = 0; i < docsPerLevel; i++)
+          // Generate a bad page.  This is a page with a non-200 return code and a body
+          // longer than 1024 characters.
+          res.setStatus(HttpServletResponse.SC_UNAUTHORIZED);
+          res.getWriter().printf("This is the error message for a 401 page.");
+          for (int i = 0; i < 1000; i++)
           {
-            int docNumber = i + theItem * docsPerLevel;
-            generateLink(res,site,theLevel+1,docNumber);
+            res.getWriter().printf(" Error message # "+i);
           }
         }
-        
-        // Generate some limited cross-links to other items at this level
-        for (int i = theItem; i < maxDocsThisLevel && i < theItem + docsPerLevel; i++)
+        else
         {
-          generateLink(res,site,theLevel,i);
+          res.setStatus(HttpServletResponse.SC_OK);
+          res.setContentType("text/html; charset=utf-8");
+          res.getWriter().printf("<html>\n");
+          res.getWriter().printf("  <body>\n");
+
+          res.getWriter().printf("This is doc number "+theItem+" and level number "+theLevel+" in site "+site+"\n");
+
+          // Generate links to all parents
+          int parentLevel = theLevel;
+          int parentItem = theItem;
+          while (parentLevel > 0)
+          {
+            parentLevel--;
+            parentItem /= docsPerLevel;
+            generateLink(res,site,parentLevel,parentItem);
+          }
+          
+          if (theLevel < maxLevels-1)
+          {
+            // Generate links to direct children
+            for (int i = 0; i < docsPerLevel; i++)
+            {
+              int docNumber = i + theItem * docsPerLevel;
+              generateLink(res,site,theLevel+1,docNumber);
+            }
+          }
+          
+          // Generate some limited cross-links to other items at this level
+          for (int i = theItem; i < maxDocsThisLevel && i < theItem + docsPerLevel; i++)
+          {
+            generateLink(res,site,theLevel,i);
+          }
+          
+          res.getWriter().printf("  </body>\n");
+          res.getWriter().printf("</html>\n");
         }
-        
-        res.getWriter().printf("  </body>\n");
-        res.getWriter().printf("</html>\n");
         res.getWriter().flush();
       }
       catch (IOException e)
diff --git a/tests/webcrawler/src/test/java/org/apache/manifoldcf/webcrawler_tests/NavigationDerbyUI.java b/tests/webcrawler/src/test/java/org/apache/manifoldcf/webcrawler_tests/NavigationDerbyUI.java
index cf73c1a..195ab28 100644
--- a/tests/webcrawler/src/test/java/org/apache/manifoldcf/webcrawler_tests/NavigationDerbyUI.java
+++ b/tests/webcrawler/src/test/java/org/apache/manifoldcf/webcrawler_tests/NavigationDerbyUI.java
@@ -50,7 +50,17 @@
     HTMLTester.Loop loop;
     
     window = testerInstance.openMainWindow("http://localhost:8346/mcf-crawler-ui/index.jsp");
-    
+
+    // Login
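+    // The crawler UI presents a login form first; use the admin/admin credentials before navigating further.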
+    form = window.findForm(testerInstance.createStringDescription("loginform"));
+    textarea = form.findTextarea(testerInstance.createStringDescription("userID"));
+    textarea.setValue(testerInstance.createStringDescription("admin"));
+    textarea = form.findTextarea(testerInstance.createStringDescription("password"));
+    textarea.setValue(testerInstance.createStringDescription("admin"));
+    button = window.findButton(testerInstance.createStringDescription("Login"));
+    button.click();
+    window = testerInstance.findWindow(null);
+
     // Define an output connection via the UI
     link = window.findLink(testerInstance.createStringDescription("List output connections"));
     link.click();
diff --git a/tests/webcrawler/src/test/java/org/apache/manifoldcf/webcrawler_tests/ThrottlingDerbyLT.java b/tests/webcrawler/src/test/java/org/apache/manifoldcf/webcrawler_tests/ThrottlingDerbyLT.java
new file mode 100644
index 0000000..2b9b833
--- /dev/null
+++ b/tests/webcrawler/src/test/java/org/apache/manifoldcf/webcrawler_tests/ThrottlingDerbyLT.java
@@ -0,0 +1,61 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.webcrawler_tests;
+
+import java.io.*;
+import java.util.*;
+import org.junit.*;
+
+/** This is a very basic throttling sanity check */
+public class ThrottlingDerbyLT extends BaseDerby
+{
+
+  protected ThrottlingTester tester;
+  protected MockWebService webService = null;
+  
+  public ThrottlingDerbyLT()
+  {
+    tester = new ThrottlingTester(mcfInstance);
+  }
+  
+  // Setup and teardown the mock web service
+  
+  @Before
+  public void createWebService()
+    throws Exception
+  {
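+    // 10 documents per level, 2 levels deep, with bad (401) pages generated for odd-numbered documents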
+    webService = new MockWebService(10,2,true);
+    webService.start();
+  }
+  
+  @After
+  public void shutdownWebService()
+    throws Exception
+  {
+    if (webService != null)
+      webService.stop();
+  }
+
+  @Test
+  public void bigCrawl()
+    throws Exception
+  {
+    tester.executeTest();
+  }
+}
diff --git a/tests/webcrawler/src/test/java/org/apache/manifoldcf/webcrawler_tests/ThrottlingPostgresqlLT.java b/tests/webcrawler/src/test/java/org/apache/manifoldcf/webcrawler_tests/ThrottlingPostgresqlLT.java
new file mode 100644
index 0000000..638b576
--- /dev/null
+++ b/tests/webcrawler/src/test/java/org/apache/manifoldcf/webcrawler_tests/ThrottlingPostgresqlLT.java
@@ -0,0 +1,61 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.webcrawler_tests;
+
+import java.io.*;
+import java.util.*;
+import org.junit.*;
+
+/** This is a very basic throttling sanity check */
+public class ThrottlingPostgresqlLT extends BasePostgresql
+{
+
+  protected ThrottlingTester tester;
+  protected MockWebService webService = null;
+  
+  public ThrottlingPostgresqlLT()
+  {
+    tester = new ThrottlingTester(mcfInstance);
+  }
+  
+  // Setup and teardown the mock web service
+  
+  @Before
+  public void createWebService()
+    throws Exception
+  {
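+    // 10 documents per level, 2 levels deep, with bad (401) pages generated for odd-numbered documents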
+    webService = new MockWebService(10,2,true);
+    webService.start();
+  }
+  
+  @After
+  public void shutdownWebService()
+    throws Exception
+  {
+    if (webService != null)
+      webService.stop();
+  }
+
+  @Test
+  public void bigCrawl()
+    throws Exception
+  {
+    tester.executeTest();
+  }
+}
diff --git a/tests/webcrawler/src/test/java/org/apache/manifoldcf/webcrawler_tests/ThrottlingTester.java b/tests/webcrawler/src/test/java/org/apache/manifoldcf/webcrawler_tests/ThrottlingTester.java
new file mode 100644
index 0000000..418efc7
--- /dev/null
+++ b/tests/webcrawler/src/test/java/org/apache/manifoldcf/webcrawler_tests/ThrottlingTester.java
@@ -0,0 +1,160 @@
+/* $Id$ */
+
+/**
+* Licensed to the Apache Software Foundation (ASF) under one or more
+* contributor license agreements. See the NOTICE file distributed with
+* this work for additional information regarding copyright ownership.
+* The ASF licenses this file to You under the Apache License, Version 2.0
+* (the "License"); you may not use this file except in compliance with
+* the License. You may obtain a copy of the License at
+*
+* http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*/
+package org.apache.manifoldcf.webcrawler_tests;
+
+import org.apache.manifoldcf.core.interfaces.*;
+import org.apache.manifoldcf.agents.interfaces.*;
+import org.apache.manifoldcf.crawler.interfaces.*;
+import org.apache.manifoldcf.crawler.system.ManifoldCF;
+
+import org.apache.manifoldcf.crawler.connectors.webcrawler.WebcrawlerConnector;
+import org.apache.manifoldcf.crawler.connectors.webcrawler.WebcrawlerConfig;
+
+import java.io.*;
+import java.util.*;
+
+/** This is a repeated throttled crawl against the mock web service */
+public class ThrottlingTester
+{
+  protected org.apache.manifoldcf.crawler.tests.ManifoldCFInstance instance;
+  
+  public ThrottlingTester(org.apache.manifoldcf.crawler.tests.ManifoldCFInstance instance)
+  {
+    this.instance = instance;
+  }
+  
+  public void executeTest()
+    throws Exception
+  {
+    // Hey, we were able to install the connector etc.
+    // Now, create a local test job and run it.
+    IThreadContext tc = ThreadContextFactory.make();
+      
+    // Create a basic web connection, and save it.
+    IRepositoryConnectionManager mgr = RepositoryConnectionManagerFactory.make(tc);
+    IRepositoryConnection conn = mgr.create();
+    conn.setName("Web Connection");
+    conn.setDescription("Web Connection");
+    conn.setClassName("org.apache.manifoldcf.crawler.connectors.webcrawler.WebcrawlerConnector");
+    conn.setMaxConnections(100);
+    ConfigParams cp = conn.getConfigParams();
+    
+    cp.setParameter(WebcrawlerConfig.PARAMETER_EMAIL,"someone@somewhere.com");
+    cp.setParameter(WebcrawlerConfig.PARAMETER_ROBOTSUSAGE,"none");
+    
+    // Throttling
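+    // The empty bin regexp is meant to cover every bin: at most 10 connections, 128 KB/second, and 120 fetches per minute.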
+    ConfigurationNode cn = new ConfigurationNode(WebcrawlerConfig.NODE_BINDESC);
+    cn.setAttribute(WebcrawlerConfig.ATTR_BINREGEXP,"");
+    
+    ConfigurationNode con = new ConfigurationNode(WebcrawlerConfig.NODE_MAXCONNECTIONS);
+    con.setAttribute(WebcrawlerConfig.ATTR_VALUE,"10");
+    cn.addChild(cn.getChildCount(),con);
+    
+    ConfigurationNode maxKB = new ConfigurationNode(WebcrawlerConfig.NODE_MAXKBPERSECOND);
+    maxKB.setAttribute(WebcrawlerConfig.ATTR_VALUE,"128");
+    cn.addChild(cn.getChildCount(),maxKB);
+    
+    ConfigurationNode maxFetches = new ConfigurationNode(WebcrawlerConfig.NODE_MAXFETCHESPERMINUTE);
+    maxFetches.setAttribute(WebcrawlerConfig.ATTR_VALUE,"120");
+    cn.addChild(cn.getChildCount(),maxFetches);
+    
+    cp.addChild(cp.getChildCount(),cn);
+    
+    // Now, save
+    mgr.save(conn);
+      
+    // Create a basic null output connection, and save it.
+    IOutputConnectionManager outputMgr = OutputConnectionManagerFactory.make(tc);
+    IOutputConnection outputConn = outputMgr.create();
+    outputConn.setName("Null Connection");
+    outputConn.setDescription("Null Connection");
+    outputConn.setClassName("org.apache.manifoldcf.agents.output.nullconnector.NullConnector");
+    outputConn.setMaxConnections(100);
+    // Now, save
+    outputMgr.save(outputConn);
+
+    // Create a job.
+    IJobManager jobManager = JobManagerFactory.make(tc);
+    IJobDescription job = jobManager.createJob();
+    job.setDescription("Test Job");
+    job.setConnectionName("Web Connection");
+    job.setOutputConnectionName("Null Connection");
+    job.setType(job.TYPE_SPECIFIED);
+    job.setStartMethod(job.START_DISABLE);
+    job.setHopcountMode(job.HOPCOUNT_NEVERDELETE);
+
+    // Now, set up the document specification.
+    DocumentSpecification ds = job.getSpecification();
+    
+    // Set up 50 seeds
+    SpecificationNode sn = new SpecificationNode(WebcrawlerConfig.NODE_SEEDS);
+    StringBuilder sb = new StringBuilder();
+    for (int i = 0 ; i < 50 ; i++)
+    {
+      sb.append("http://localhost:8191/web/gen.php?site="+i+"&level=0&item=0\n");
+    }
+    sn.setValue(sb.toString());
+    ds.addChild(ds.getChildCount(),sn);
+    
+    sn = new SpecificationNode(WebcrawlerConfig.NODE_INCLUDES);
+    sn.setValue(".*\n");
+    ds.addChild(ds.getChildCount(),sn);
+    
+    sn = new SpecificationNode(WebcrawlerConfig.NODE_INCLUDESINDEX);
+    sn.setValue(".*\n");
+    ds.addChild(ds.getChildCount(),sn);
+
+    // Set up the output specification.
+    OutputSpecification os = job.getOutputSpecification();
+    // Null output connections have no output specification, so this is a no-op.
+    
+    // Save the job.
+    jobManager.save(job);
+
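+    // Run the same job repeatedly: each iteration starts it, waits for it to go inactive, and reports the elapsed time and document count.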
+    for (int i = 0; i < 100; i++)
+    {
+      System.err.println("Iteration # "+i);
+      // Now, start the job, and wait until it completes.
+      long startTime = System.currentTimeMillis();
+      jobManager.manualStart(job.getID());
+      try
+      {
+        instance.waitJobInactiveNative(jobManager,job.getID(),900000L);
+      }
+      catch (ManifoldCFException e)
+      {
+        System.err.println("Halting for inspection");
+        Thread.sleep(9000000L);
+        throw e;
+      }
+      System.err.println(" Crawl required "+new Long(System.currentTimeMillis()-startTime).toString()+" milliseconds");
+
+      // Check to be sure we actually processed the right number of documents.
+      JobStatus status = jobManager.getStatus(job.getID());
+      System.err.println(" "+new Long(status.getDocumentsProcessed())+" documents processed");
+    }
+    
+    // Now, delete the job.
+    jobManager.deleteJob(job.getID());
+    instance.waitJobDeletedNative(jobManager,job.getID(),900000L);
+      
+    // Cleanup is automatic by the base class, so we can feel free to leave jobs and connections lying around.
+  }
+  
+}
diff --git a/tests/wiki/pom.xml b/tests/wiki/pom.xml
index 5930f3b..d730d6e 100644
--- a/tests/wiki/pom.xml
+++ b/tests/wiki/pom.xml
@@ -20,7 +20,7 @@
   <parent>
     <groupId>org.apache.manifoldcf</groupId>
     <artifactId>mcf-tests</artifactId>
-    <version>1.2-SNAPSHOT</version>
+    <version>1.5-SNAPSHOT</version>
     <relativePath>../pom.xml</relativePath>
   </parent>
   <modelVersion>4.0.0</modelVersion>
@@ -282,7 +282,7 @@
     <dependency>
       <groupId>org.apache.httpcomponents</groupId>
       <artifactId>httpclient</artifactId>
-      <version>${httpcomponent.version}</version>
+      <version>${httpcomponent.httpclient.version}</version>
     </dependency>
     <dependency>
       <groupId>commons-logging</groupId>
diff --git a/tests/wiki/src/test/java/org/apache/manifoldcf/wiki_tests/NavigationDerbyUI.java b/tests/wiki/src/test/java/org/apache/manifoldcf/wiki_tests/NavigationDerbyUI.java
index f0e8767..03329c3 100644
--- a/tests/wiki/src/test/java/org/apache/manifoldcf/wiki_tests/NavigationDerbyUI.java
+++ b/tests/wiki/src/test/java/org/apache/manifoldcf/wiki_tests/NavigationDerbyUI.java
@@ -80,6 +80,16 @@
     
     window = testerInstance.openMainWindow("http://localhost:8346/mcf-crawler-ui/index.jsp");
     
+    // Login
+    form = window.findForm(testerInstance.createStringDescription("loginform"));
+    textarea = form.findTextarea(testerInstance.createStringDescription("userID"));
+    textarea.setValue(testerInstance.createStringDescription("admin"));
+    textarea = form.findTextarea(testerInstance.createStringDescription("password"));
+    textarea.setValue(testerInstance.createStringDescription("admin"));
+    button = window.findButton(testerInstance.createStringDescription("Login"));
+    button.click();
+    window = testerInstance.findWindow(null);
+
     // Define an output connection via the UI
     link = window.findLink(testerInstance.createStringDescription("List output connections"));
     link.click();