Docs moved to the main repo
diff --git a/.github/ISSUE_TEMPLATE.md b/.github/ISSUE_TEMPLATE.md
index be4f816..c89c391 100644
--- a/.github/ISSUE_TEMPLATE.md
+++ b/.github/ISSUE_TEMPLATE.md
@@ -1,32 +1,3 @@
-<!--- Provide a general summary of the issue in the Title above -->
+Please file an issue in the main CouchDB repository:
 
-## Expected Behavior
-<!--- If you're describing a bug, tell us what should happen -->
-<!--- If you're suggesting a change/improvement, tell us how it should work -->
-
-## Current Behavior
-<!--- If describing a bug, tell us what happens instead of the expected behavior -->
-<!--- If suggesting a change/improvement, explain the difference from current behavior -->
-
-## Possible Solution
-<!--- Not obligatory, but suggest a fix/reason for the bug, -->
-<!--- or ideas how to implement the addition or change -->
-
-## Steps to Reproduce (for bugs)
-<!--- Provide a link to a live example, or an unambiguous set of steps to -->
-<!--- reproduce this bug. Include code to reproduce, if relevant -->
-1.
-2.
-3.
-4.
-
-## Context
-<!--- How has this issue affected you? What are you trying to accomplish? -->
-<!--- Providing context helps us come up with a solution that is most useful in the real world -->
-
-## Your Environment
-<!--- Include as many relevant details about the environment you experienced the bug in -->
-* Version used:
-* Browser Name and version:
-* Operating System and version (desktop or mobile):
-* Link to your project:
+https://github.com/apache/couchdb
diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md
index c538e44..dc2e864 100644
--- a/.github/PULL_REQUEST_TEMPLATE.md
+++ b/.github/PULL_REQUEST_TEMPLATE.md
@@ -1,40 +1,3 @@
-<!-- Thank you for your contribution!
+Please open a pull request against the main CouchDB repository:
 
-     Please file this form by replacing the Markdown comments
-     with your text. If a section needs no action - remove it.
-
-     Also remember, that CouchDB uses the Review-Then-Commit (RTC) model
-     of code collaboration. Positive feedback is represented +1 from committers
-     and negative is a -1. The -1 also means veto, and needs to be addressed
-     to proceed. Once there are no objections, the PR can be merged by a
-     CouchDB committer.
-
-     See: http://couchdb.apache.org/bylaws.html#decisions for more info. -->
-
-## Overview
-
-<!-- Please give a short brief for the pull request,
-     what problem it solves or how it makes things better. -->
-
-## Testing recommendations
-
-<!-- Describe how we can test your changes.
-     Does it provides any behaviour that the end users
-     could notice? -->
-
-## GitHub issue number
-
-<!-- If this is a significant change, please file a separate issue at:
-     https://github.com/apache/couchdb-documentation/issues
-     and include the number here and in commit message(s) using
-     syntax like "Fixes #472" or "Fixes apache/couchdb#472".  -->
-
-## Related Pull Requests
-
-<!-- If your changes affects multiple components in different
-     repositories please put links to those pull requests here.  -->
-
-## Checklist
-
-- [ ] Update [rebar.config.script](https://github.com/apache/couchdb/blob/main/rebar.config.script) with the commit hash once this PR is rebased and merged
-<!-- Before opening the PR, consider running `make check` locally for a faster turnaround time -->
+https://github.com/apache/couchdb
diff --git a/.travis.yml b/.travis.yml
deleted file mode 100644
index 7bfff38..0000000
--- a/.travis.yml
+++ /dev/null
@@ -1,25 +0,0 @@
-language: python
-python:
-  - 3.6
-
-# start a push build on master and release branches + PRs build on every branch
-# Avoid double build on PRs (See https://github.com/travis-ci/travis-ci/issues/1147)
-branches:
-  only:
-    - master
-    - /^\d+\.x\.x$/
-    - /^\d+\.\d+\.x$/
-
-install:
-  - pip install -r requirements.txt
-
-script:
-  - make ${TARGET}
-
-env:
-  matrix:
-    - TARGET=html
-    - TARGET=man
-    - TARGET=check
-
-cache: apt
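The branch filters in the deleted `.travis.yml` use Travis's slash-delimited regex form. A small Python sketch (the function and constant names are mine, not from any CouchDB tooling) shows which branch names those patterns would have matched:

```python
import re

# Patterns carried over from the deleted .travis.yml "branches: only" list.
RELEASE_BRANCH_PATTERNS = [r"^\d+\.x\.x$", r"^\d+\.\d+\.x$"]


def is_build_branch(name: str) -> bool:
    """Return True if a push to this branch would have triggered a build."""
    if name == "master":
        return True
    return any(re.fullmatch(pattern, name) for pattern in RELEASE_BRANCH_PATTERNS)
```

So `3.x.x` and `3.2.x` built on push, while feature branches built only as pull requests.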
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
deleted file mode 100644
index db5aa08..0000000
--- a/CONTRIBUTING.md
+++ /dev/null
@@ -1,4 +0,0 @@
-This repository follows the same contribution guidelines as the
-main Apache CouchDB contribution guidelines:
-
-https://github.com/apache/couchdb/blob/main/CONTRIBUTING.md
diff --git a/Jenkinsfile b/Jenkinsfile
deleted file mode 100644
index e4c4046..0000000
--- a/Jenkinsfile
+++ /dev/null
@@ -1,58 +0,0 @@
-pipeline {
-
-  agent none
-
-  environment {
-    GIT_COMMITTER_NAME = 'Jenkins User'
-    GIT_COMMITTER_EMAIL = 'couchdb@apache.org'
-    DOCKER_IMAGE = 'couchdbdev/debian-buster-erlang-all:latest'
-    DOCKER_ARGS = '-e npm_config_cache=npm-cache -e HOME=. -v=/etc/passwd:/etc/passwd -v /etc/group:/etc/group'
-  }
-
-  options {
-    buildDiscarder(logRotator(numToKeepStr: '10', artifactNumToKeepStr: '10'))
-    // This fails the build immediately if any parallel step fails
-    parallelsAlwaysFailFast()
-    preserveStashes(buildCount: 10)
-    timeout(time: 30, unit: 'MINUTES')
-    timestamps()
-  }
-
-  stages {
-    stage('Test') {
-      matrix {
-        axes {
-          axis {
-            name 'TARGET'
-            values "html", "man", "check"
-          }
-        }
-        stages {
-          stage('Test') {
-            agent {
-              docker {
-                image "${DOCKER_IMAGE}"
-                label 'docker'
-                args "${DOCKER_ARGS}"
-              }
-            }
-            options {
-              timeout(time: 90, unit: 'MINUTES')
-            }
-            steps {
-              sh '''
-                make ${TARGET}
-              '''
-            }
-            post {
-              cleanup {
-                // UGH see https://issues.jenkins-ci.org/browse/JENKINS-41894
-                sh 'rm -rf ${WORKSPACE}/*'
-              }
-            }
-          } // stage
-        } // stages
-      } // matrix
-    } // stage "Test"
-  } // stages
-} // pipeline
diff --git a/LICENSE b/LICENSE
deleted file mode 100644
index ee1813e..0000000
--- a/LICENSE
+++ /dev/null
@@ -1,345 +0,0 @@
-
-                                Apache License
-                          Version 2.0, January 2004
-                       http://www.apache.org/licenses/
-
-  TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
-
-  1. Definitions.
-
-     "License" shall mean the terms and conditions for use, reproduction,
-     and distribution as defined by Sections 1 through 9 of this document.
-
-     "Licensor" shall mean the copyright owner or entity authorized by
-     the copyright owner that is granting the License.
-
-     "Legal Entity" shall mean the union of the acting entity and all
-     other entities that control, are controlled by, or are under common
-     control with that entity. For the purposes of this definition,
-     "control" means (i) the power, direct or indirect, to cause the
-     direction or management of such entity, whether by contract or
-     otherwise, or (ii) ownership of fifty percent (50%) or more of the
-     outstanding shares, or (iii) beneficial ownership of such entity.
-
-     "You" (or "Your") shall mean an individual or Legal Entity
-     exercising permissions granted by this License.
-
-     "Source" form shall mean the preferred form for making modifications,
-     including but not limited to software source code, documentation
-     source, and configuration files.
-
-     "Object" form shall mean any form resulting from mechanical
-     transformation or translation of a Source form, including but
-     not limited to compiled object code, generated documentation,
-     and conversions to other media types.
-
-     "Work" shall mean the work of authorship, whether in Source or
-     Object form, made available under the License, as indicated by a
-     copyright notice that is included in or attached to the work
-     (an example is provided in the Appendix below).
-
-     "Derivative Works" shall mean any work, whether in Source or Object
-     form, that is based on (or derived from) the Work and for which the
-     editorial revisions, annotations, elaborations, or other modifications
-     represent, as a whole, an original work of authorship. For the purposes
-     of this License, Derivative Works shall not include works that remain
-     separable from, or merely link (or bind by name) to the interfaces of,
-     the Work and Derivative Works thereof.
-
-     "Contribution" shall mean any work of authorship, including
-     the original version of the Work and any modifications or additions
-     to that Work or Derivative Works thereof, that is intentionally
-     submitted to Licensor for inclusion in the Work by the copyright owner
-     or by an individual or Legal Entity authorized to submit on behalf of
-     the copyright owner. For the purposes of this definition, "submitted"
-     means any form of electronic, verbal, or written communication sent
-     to the Licensor or its representatives, including but not limited to
-     communication on electronic mailing lists, source code control systems,
-     and issue tracking systems that are managed by, or on behalf of, the
-     Licensor for the purpose of discussing and improving the Work, but
-     excluding communication that is conspicuously marked or otherwise
-     designated in writing by the copyright owner as "Not a Contribution."
-
-     "Contributor" shall mean Licensor and any individual or Legal Entity
-     on behalf of whom a Contribution has been received by Licensor and
-     subsequently incorporated within the Work.
-
-  2. Grant of Copyright License. Subject to the terms and conditions of
-     this License, each Contributor hereby grants to You a perpetual,
-     worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-     copyright license to reproduce, prepare Derivative Works of,
-     publicly display, publicly perform, sublicense, and distribute the
-     Work and such Derivative Works in Source or Object form.
-
-  3. Grant of Patent License. Subject to the terms and conditions of
-     this License, each Contributor hereby grants to You a perpetual,
-     worldwide, non-exclusive, no-charge, royalty-free, irrevocable
-     (except as stated in this section) patent license to make, have made,
-     use, offer to sell, sell, import, and otherwise transfer the Work,
-     where such license applies only to those patent claims licensable
-     by such Contributor that are necessarily infringed by their
-     Contribution(s) alone or by combination of their Contribution(s)
-     with the Work to which such Contribution(s) was submitted. If You
-     institute patent litigation against any entity (including a
-     cross-claim or counterclaim in a lawsuit) alleging that the Work
-     or a Contribution incorporated within the Work constitutes direct
-     or contributory patent infringement, then any patent licenses
-     granted to You under this License for that Work shall terminate
-     as of the date such litigation is filed.
-
-  4. Redistribution. You may reproduce and distribute copies of the
-     Work or Derivative Works thereof in any medium, with or without
-     modifications, and in Source or Object form, provided that You
-     meet the following conditions:
-
-     (a) You must give any other recipients of the Work or
-         Derivative Works a copy of this License; and
-
-     (b) You must cause any modified files to carry prominent notices
-         stating that You changed the files; and
-
-     (c) You must retain, in the Source form of any Derivative Works
-         that You distribute, all copyright, patent, trademark, and
-         attribution notices from the Source form of the Work,
-         excluding those notices that do not pertain to any part of
-         the Derivative Works; and
-
-     (d) If the Work includes a "NOTICE" text file as part of its
-         distribution, then any Derivative Works that You distribute must
-         include a readable copy of the attribution notices contained
-         within such NOTICE file, excluding those notices that do not
-         pertain to any part of the Derivative Works, in at least one
-         of the following places: within a NOTICE text file distributed
-         as part of the Derivative Works; within the Source form or
-         documentation, if provided along with the Derivative Works; or,
-         within a display generated by the Derivative Works, if and
-         wherever such third-party notices normally appear. The contents
-         of the NOTICE file are for informational purposes only and
-         do not modify the License. You may add Your own attribution
-         notices within Derivative Works that You distribute, alongside
-         or as an addendum to the NOTICE text from the Work, provided
-         that such additional attribution notices cannot be construed
-         as modifying the License.
-
-     You may add Your own copyright statement to Your modifications and
-     may provide additional or different license terms and conditions
-     for use, reproduction, or distribution of Your modifications, or
-     for any such Derivative Works as a whole, provided Your use,
-     reproduction, and distribution of the Work otherwise complies with
-     the conditions stated in this License.
-
-  5. Submission of Contributions. Unless You explicitly state otherwise,
-     any Contribution intentionally submitted for inclusion in the Work
-     by You to the Licensor shall be under the terms and conditions of
-     this License, without any additional terms or conditions.
-     Notwithstanding the above, nothing herein shall supersede or modify
-     the terms of any separate license agreement you may have executed
-     with Licensor regarding such Contributions.
-
-  6. Trademarks. This License does not grant permission to use the trade
-     names, trademarks, service marks, or product names of the Licensor,
-     except as required for reasonable and customary use in describing the
-     origin of the Work and reproducing the content of the NOTICE file.
-
-  7. Disclaimer of Warranty. Unless required by applicable law or
-     agreed to in writing, Licensor provides the Work (and each
-     Contributor provides its Contributions) on an "AS IS" BASIS,
-     WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
-     implied, including, without limitation, any warranties or conditions
-     of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
-     PARTICULAR PURPOSE. You are solely responsible for determining the
-     appropriateness of using or redistributing the Work and assume any
-     risks associated with Your exercise of permissions under this License.
-
-  8. Limitation of Liability. In no event and under no legal theory,
-     whether in tort (including negligence), contract, or otherwise,
-     unless required by applicable law (such as deliberate and grossly
-     negligent acts) or agreed to in writing, shall any Contributor be
-     liable to You for damages, including any direct, indirect, special,
-     incidental, or consequential damages of any character arising as a
-     result of this License or out of the use or inability to use the
-     Work (including but not limited to damages for loss of goodwill,
-     work stoppage, computer failure or malfunction, or any and all
-     other commercial damages or losses), even if such Contributor
-     has been advised of the possibility of such damages.
-
-  9. Accepting Warranty or Additional Liability. While redistributing
-     the Work or Derivative Works thereof, You may choose to offer,
-     and charge a fee for, acceptance of support, warranty, indemnity,
-     or other liability obligations and/or rights consistent with this
-     License. However, in accepting such obligations, You may act only
-     on Your own behalf and on Your sole responsibility, not on behalf
-     of any other Contributor, and only if You agree to indemnify,
-     defend, and hold each Contributor harmless for any liability
-     incurred by, or claims asserted against, such Contributor by reason
-     of your accepting any such warranty or additional liability.
-
-  END OF TERMS AND CONDITIONS
-
-  APPENDIX: How to apply the Apache License to your work.
-
-     To apply the Apache License to your work, attach the following
-     boilerplate notice, with the fields enclosed by brackets "[]"
-     replaced with your own identifying information. (Don't include
-     the brackets!)  The text should be enclosed in the appropriate
-     comment syntax for the file format. We also recommend that a
-     file or class name and description of purpose be included on the
-     same "printed page" as the copyright notice for easier
-     identification within third-party archives.
-
-  Copyright [yyyy] [name of copyright owner]
-
-  Licensed under the Apache License, Version 2.0 (the "License");
-  you may not use this file except in compliance with the License.
-  You may obtain a copy of the License at
-
-      http://www.apache.org/licenses/LICENSE-2.0
-
-  Unless required by applicable law or agreed to in writing, software
-  distributed under the License is distributed on an "AS IS" BASIS,
-  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-  See the License for the specific language governing permissions and
-  limitations under the License.
-
-Apache CouchDB Subcomponents
-
-The Apache CouchDB project includes a number of subcomponents with separate
-copyright notices and license terms. Your use of the code for the these
-subcomponents is subject to the terms and conditions of the following licenses.
-
-For the build/html/_static components:
-
-  Copyright (c) 2007-2011 by the Sphinx team (see AUTHORS file).
-  All rights reserved.
-
-  Redistribution and use in source and binary forms, with or without
-  modification, are permitted provided that the following conditions are
-  met:
-
-  * Redistributions of source code must retain the above copyright
-    notice, this list of conditions and the following disclaimer.
-
-  * Redistributions in binary form must reproduce the above copyright
-    notice, this list of conditions and the following disclaimer in the
-    documentation and/or other materials provided with the distribution.
-
-  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-For the build/html/_static/jquery.js component:
-
-  Copyright 2010, John Resig
-
-  Copyright 2010, The Dojo Foundation
-
-  Copyright 2012 jQuery Foundation and other contributors
-  http://jquery.com/
-
-  Permission is hereby granted, free of charge, to any person obtaining
-  a copy of this software and associated documentation files (the
-  "Software"), to deal in the Software without restriction, including
-  without limitation the rights to use, copy, modify, merge, publish,
-  distribute, sublicense, and/or sell copies of the Software, and to
-  permit persons to whom the Software is furnished to do so, subject to
-  the following conditions:
-
-  The above copyright notice and this permission notice shall be
-  included in all copies or substantial portions of the Software.
-
-  THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
-  EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
-  MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-  NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
-  LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
-  OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
-  WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
-
-For the build/html/_static/underscore.js component:
-
-  Copyright (c) 2009-2012 Jeremy Ashkenas, DocumentCloud
-
-  Permission is hereby granted, free of charge, to any person
-  obtaining a copy of this software and associated documentation
-  files (the "Software"), to deal in the Software without
-  restriction, including without limitation the rights to use,
-  copy, modify, merge, publish, distribute, sublicense, and/or sell
-  copies of the Software, and to permit persons to whom the
-  Software is furnished to do so, subject to the following
-  conditions:
-
-  The above copyright notice and this permission notice shall be
-  included in all copies or substantial portions of the Software.
-
-  THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
-  EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES
-  OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
-  NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
-  HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
-  WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
-  FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
-  OTHER DEALINGS IN THE SOFTWARE.
-
-For the static/rtd.css component:
-
-  Copyright (c) 2007-2011 by the Sphinx team (see AUTHORS file).
-  All rights reserved.
-
-  Redistribution and use in source and binary forms, with or without
-  modification, are permitted provided that the following conditions are
-  met:
-
-  * Redistributions of source code must retain the above copyright
-    notice, this list of conditions and the following disclaimer.
-
-  * Redistributions in binary form must reproduce the above copyright
-    notice, this list of conditions and the following disclaimer in the
-    documentation and/or other materials provided with the distribution.
-
-  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-  "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-  LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-  A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-  OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-  SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-  LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-  DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-  THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-
-For ext/httpdomain.py
-
-Copyright (c) 2010 by the contributors Hong Minhee <minhee@dahlia.kr>.
-All rights reserved.
-
-Redistribution and use in source and binary forms, with or without
-modification, are permitted provided that the following conditions are
-met:
-
-* Redistributions of source code must retain the above copyright
-  notice, this list of conditions and the following disclaimer.
-
-* Redistributions in binary form must reproduce the above copyright
-  notice, this list of conditions and the following disclaimer in the
-  documentation and/or other materials provided with the distribution.
-
-THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
-"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
-LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
-A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
-OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
-SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
-LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
-DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
-THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
-(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
-OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/Makefile b/Makefile
deleted file mode 100644
index d9b157a..0000000
--- a/Makefile
+++ /dev/null
@@ -1,75 +0,0 @@
-## Licensed under the Apache License, Version 2.0 (the "License"); you may not
-## use this file except in compliance with the License. You may obtain a copy of
-## the License at
-##
-##   http://www.apache.org/licenses/LICENSE-2.0
-##
-## Unless required by applicable law or agreed to in writing, software
-## distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-## WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-## License for the specific language governing permissions and limitations under
-## the License.
-
-SPHINXBUILD  := sphinx-build
-TEX          := tex
-PDFLATEX     := pdflatex
-MAKEINFO     := makeinfo
-
-BUILDDIR     := build
-SOURCE       := src/
-PAPERSIZE    := -D latex_elements.papersize=a4
-SPHINXFLAGS  := -a -W -n -A local=1 $(PAPERSIZE) -d $(BUILDDIR)/doctree
-SPHINXOPTS   := $(SPHINXFLAGS) $(SOURCE)
-
-ENSURECMD=which $(1) > /dev/null 2>&1 || (echo "*** Make sure that $(1) is installed and on your path" && exit 1)
-
-all: html man
-
-clean:
-	rm -rf $(BUILDDIR)
-
-html: $(SPHINXBUILD)
-	$(SPHINXBUILD) -b $@ $(SPHINXOPTS) $(BUILDDIR)/$@
-
-latex: $(TEX)
-	$(SPHINXBUILD) -b $@ $(SPHINXOPTS) $(BUILDDIR)/$@
-
-pdf: latex $(PDFLATEX)
-	$(MAKE) LATEXOPTS=' -interaction=batchmode ' -C $(BUILDDIR)/latex all-pdf
-
-info: $(SPHINXBUILD) $(MAKEINFO)
-	$(SPHINXBUILD) -b texinfo $(SPHINXOPTS) $(BUILDDIR)/texinfo
-	make -C $(BUILDDIR)/texinfo info
-
-man: $(SPHINXBUILD)
-	$(SPHINXBUILD) -b $@ $(SPHINXOPTS) $(BUILDDIR)/$@
-
-check:
-	python3 ext/linter.py $(SOURCE)
-
-install-html:
-install-pdf:
-install-info:
-install-man:
-
-install: install-html install-pdf install-info install-man
-	# copy-files
-
-distclean: clean
-	# delete-installed-files
-
-
-$(SPHINXBUILD):
-	@$(call ENSURECMD,$@)
-
-$(TEX):
-	@$(call ENSURECMD,$@)
-
-$(PDFLATEX):
-	@$(call ENSURECMD,$@)
-
-$(MAKEINFO):
-	@$(call ENSURECMD,$@)
-
-$(PYTHON):
-	@$(call ENSURECMD,$@)
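The `ENSURECMD` macro in the deleted Makefile is a useful fail-fast guard for missing build tools. The same idea in standard-library Python (the function name is mine, a sketch rather than anything shipped with the docs build):

```python
import shutil


def ensure_cmd(name: str) -> None:
    """Fail fast when a required build tool is not on PATH.

    Mirrors the Makefile's ENSURECMD:
      which $(1) || (echo "*** Make sure that $(1) is installed ..." && exit 1)
    """
    if shutil.which(name) is None:
        raise SystemExit(f"*** Make sure that {name} is installed and on your path")
```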
diff --git a/NOTICE b/NOTICE
deleted file mode 100644
index f093282..0000000
--- a/NOTICE
+++ /dev/null
@@ -1,31 +0,0 @@
-Apache CouchDB
-Copyright 2009-2014 The Apache Software Foundation
-
-This product includes software developed at
-The Apache Software Foundation (http://www.apache.org/).
-
-This product also includes the following third-party components:
-
- * Sphinx (http://sphinx-doc.org/)
-
-   Copyright 2011, the Sphinx team
-
- * httpdomain.py (https://bitbucket.org/birkenfeld/sphinx-contrib/src/6a3a8ca714cfce957530890d0431d9a7b88c930f/httpdomain/sphinxcontrib/httpdomain.py?at=httpdomain-1.1.9)
-
-   Copyright (c) 2010, Hong Minhee <minhee@dahlia.kr>
-
- * src/externals.rst (http://davispj.com/2010/09/26/new-couchdb-externals-api.html)
-
-   Copyright 2008-2010, Paul Joseph Davis <paul.joseph.davis@gmail.com>
-
- * src/ddocs/views/intro.rst src/ddocs/views/nosql.rst src/ddocs/views/pagination.rst
-
-   Copyright 2013, Creative Commons Attribution license
-
- * src/ddocs/views/joins.rst (Using View Collation)
-
-   Copyright 2007, Christopher Lenz <cmlenz@gmail.com>
-
- * templates/couchdb/domainindex.html
-
-   Copyright 2007-2011 by the Sphinx team
diff --git a/README.md b/README.md
index b67dbaa..1378394 100644
--- a/README.md
+++ b/README.md
@@ -1,31 +1,5 @@
-# CouchDB Documentation
+## NOTE ##
 
-This repository contains the Sphinx source for Apache CouchDB's documentation.
-You can view the latest rendered build of this content at:
+CouchDB documentation has moved to the main repository:
 
-    http://docs.couchdb.org/en/latest
-
-# Building this repo
-
-Install Python3 and pip. Then:
-
-```sh
-$ python3 -m venv .venv
-$ source .venv/bin/activate
-$ pip install -r requirements.txt
-$ make html # builds the docs
-$ make check # syntax checks the docs
-```
-
-# Feedback, Issues, Contributing
-
-General feedback is welcome at our [user][1] or [developer][2] mailing lists.
-
-Apache CouchDB has a [CONTRIBUTING][3] file with details on how to get started
-with issue reporting or contributing to the upkeep of this project.
-
-[1]: http://mail-archives.apache.org/mod_mbox/couchdb-user/
-[2]: http://mail-archives.apache.org/mod_mbox/couchdb-dev/
-[3]: https://github.com/apache/couchdb/blob/main/CONTRIBUTING.md
-
-
+The new home is [src/docs](https://github.com/apache/couchdb/tree/main/src/docs) in https://github.com/apache/couchdb.git
diff --git a/ext/configdomain.py b/ext/configdomain.py
deleted file mode 100644
index 66ed532..0000000
--- a/ext/configdomain.py
+++ /dev/null
@@ -1,113 +0,0 @@
-## Licensed under the Apache License, Version 2.0 (the "License"); you may not
-## use this file except in compliance with the License. You may obtain a copy of
-## the License at
-##
-##   http://www.apache.org/licenses/LICENSE-2.0
-##
-## Unless required by applicable law or agreed to in writing, software
-## distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-## WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-## License for the specific language governing permissions and limitations under
-## the License.
-
-from sphinx import addnodes
-from sphinx.roles import XRefRole
-from sphinx.domains import Domain, ObjType, Index
-from sphinx.directives import ObjectDescription
-from sphinx.util.nodes import make_refnode
-
-
-class ConfigObject(ObjectDescription):
-    def handle_signature(self, sig, signode):
-        if "::" in sig:
-            name, descr = map(lambda i: i.strip(), sig.split("::"))
-        else:
-            name, descr = sig.strip(), ""
-
-        signode["name"] = name
-        signode["descr"] = descr
-
-        domain, objtype = self.name.split(":")
-        if objtype == "section":
-            self.env.temp_data["section"] = signode["name"]
-            name = "[%s]" % signode["name"]
-
-        signode += addnodes.desc_name(name, name)
-
-        return signode["name"]
-
-    def needs_arglist(self):
-        return False
-
-    def add_target_and_index(self, name, sig, signode):
-        section = self.env.temp_data["section"]
-        domain, objtype = self.name.split(":")
-        data = self.env.domaindata[domain][objtype]
-        if objtype == "section":
-            data[name] = (self.env.docname, signode["descr"])
-            signode["ids"].append(signode["name"])
-        elif objtype == "option":
-            idx = "%s/%s" % (section, signode["name"])
-            data[idx] = (self.env.docname, signode["descr"])
-            signode["ids"].append(idx)
-        else:
-            assert "unknown object type %r" % objtype
-
-
-class ConfigIndex(Index):
-
-    name = "ref"
-    localname = "Configuration Quick Reference"
-    shortname = "Config Quick Reference"
-
-    def generate(self, docnames=None):
-        content = dict(
-            (name, [(name, 1, info[0], name, "", "", info[1])])
-            for name, info in self.domain.data["section"].items()
-        )
-
-        options = self.domain.data["option"]
-        for idx, info in sorted(options.items()):
-            path, descr = info
-            section, name = idx.split("/", 1)
-            content[section].append(
-                (name, 2, path, "%s/%s" % (section, name), "", "", descr)
-            )
-
-        return (sorted(content.items()), False)
-
-
-class ConfigDomain(Domain):
-
-    name = "config"
-    label = "CONFIG"
-
-    object_types = {
-        "section": ObjType("section", "section", "obj"),
-        "option": ObjType("option", "option", "obj"),
-    }
-
-    directives = {"section": ConfigObject, "option": ConfigObject}
-
-    roles = {"section": XRefRole(), "option": XRefRole()}
-
-    initial_data = {"section": {}, "option": {}}
-
-    indices = [ConfigIndex]
-
-    def resolve_xref(self, env, fromdocname, builder, typ, target, node, contnode):
-        if typ == "section":
-            info = self.data[typ][target]
-            title = "[%s]" % target
-        elif typ == "option":
-            assert "/" in target, "option without section: %r" % target
-            section, option = target.split("/", 1)
-            info = self.data[typ][target]
-            title = option
-        else:
-            assert False, "unknown role %r for target %r" % (typ, target)
-        return make_refnode(builder, fromdocname, info[0], target, contnode, title)
-
-
-def setup(app):
-    app.add_domain(ConfigDomain)
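For reference, the deleted config domain keyed its data by section name and by a `section/option` composite index. A minimal standalone sketch of that keying scheme (plain dicts standing in for Sphinx's `domaindata`; the section/option names are hypothetical examples):

```python
# Standalone sketch of the deleted ConfigObject bookkeeping: sections are
# stored by name, options under a "section/option" composite key.

def add_section(data, name, docname, descr):
    data["section"][name] = (docname, descr)

def add_option(data, section, name, docname, descr):
    # Options are namespaced by their enclosing [section].
    idx = "%s/%s" % (section, name)
    data["option"][idx] = (docname, descr)

data = {"section": {}, "option": {}}
add_section(data, "couchdb", "config/couchdb", "CouchDB core options")
add_option(data, "couchdb", "max_dbs_open", "config/couchdb", "Max open dbs")

# Resolving an option cross-reference splits the composite key back apart,
# exactly as resolve_xref does with target.split("/", 1).
section, option = "couchdb/max_dbs_open".split("/", 1)
print(section, option)  # couchdb max_dbs_open
```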
diff --git a/ext/github.py b/ext/github.py
deleted file mode 100644
index f812d9e..0000000
--- a/ext/github.py
+++ /dev/null
@@ -1,46 +0,0 @@
-## Licensed under the Apache License, Version 2.0 (the "License"); you may not
-## use this file except in compliance with the License. You may obtain a copy of
-## the License at
-##
-##   http://www.apache.org/licenses/LICENSE-2.0
-##
-## Unless required by applicable law or agreed to in writing, software
-## distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-## WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-## License for the specific language governing permissions and limitations under
-## the License.
-
-import os
-
-
-def get_github_url(app, view, path):
-    return "https://github.com/{project}/{view}/{branch}/{path}".format(
-        project=app.config.github_project,
-        view=view,
-        branch=app.config.github_branch,
-        path=path,
-    )
-
-
-def html_page_context(app, pagename, templatename, context, doctree):
-    # base template for common sphinx pages like search or genindex
-    # there is no need to provide github show/edit links for them
-    if templatename != "page.html":
-        return
-
-    # ok, I'm aware that this is the wrong way to concat url segments,
-    # but this one is the most portable between 2.x and 3.x versions
-    # and it fits our current requirements. Still, patches are welcome (:
-    path = os.path.join(
-        app.config.github_docs_path,
-        os.path.relpath(doctree.get("source"), app.builder.srcdir),
-    )
-    context["github_show_url"] = get_github_url(app, "blob", path)
-    context["github_edit_url"] = get_github_url(app, "edit", path)
-
-
-def setup(app):
-    app.add_config_value("github_project", "", True)
-    app.add_config_value("github_branch", "master", True)
-    app.add_config_value("github_docs_path", "", True)
-    app.connect("html-page-context", html_page_context)
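The deleted `get_github_url` helper can be exercised standalone; the `Config` class below is a stand-in for Sphinx's `app.config` (an assumption for illustration, not the real Sphinx API), and the project/path values are hypothetical:

```python
# Self-contained sketch of the deleted get_github_url helper.

class Config:
    github_project = "apache/couchdb-documentation"  # hypothetical value
    github_branch = "master"

def get_github_url(config, view, path):
    # view is "blob" for show links and "edit" for edit links
    return "https://github.com/{project}/{view}/{branch}/{path}".format(
        project=config.github_project,
        view=view,
        branch=config.github_branch,
        path=path,
    )

url = get_github_url(Config(), "edit", "src/intro/index.rst")
print(url)
# https://github.com/apache/couchdb-documentation/edit/master/src/intro/index.rst
```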
diff --git a/ext/httpdomain.py b/ext/httpdomain.py
deleted file mode 100644
index 5e8803d..0000000
--- a/ext/httpdomain.py
+++ /dev/null
@@ -1,712 +0,0 @@
-"""
-    sphinxcontrib.httpdomain
-    ~~~~~~~~~~~~~~~~~~~~~~~~
-
-    The HTTP domain for documenting RESTful HTTP APIs.
-
-    :copyright: Copyright 2011 by Hong Minhee
-    :license: BSD, see LICENSE for details.
-
-"""
-
-import re
-
-from docutils import nodes
-from docutils.parsers.rst.roles import set_classes
-
-from pygments.lexer import RegexLexer, bygroups
-from pygments.lexers import get_lexer_by_name
-from pygments.token import Literal, Text, Operator, Keyword, Name, Number
-from pygments.util import ClassNotFound
-
-from sphinx import addnodes
-from sphinx.roles import XRefRole
-from sphinx.domains import Domain, ObjType, Index
-from sphinx.directives import ObjectDescription, directives
-from sphinx.util.nodes import make_refnode
-from sphinx.util.docfields import GroupedField, TypedField
-
-
-class DocRef(object):
-    """Represents a link to an RFC which defines an HTTP method."""
-
-    def __init__(self, base_url, anchor, section):
-        """Stores the attributes that make up a URL pointing to the
-        RFC section which defines an HTTP method.
-
-        """
-        self.base_url = base_url
-        self.anchor = anchor
-        self.section = section
-
-    def __repr__(self):
-        """Returns the URL this object represents: the location of the
-        RFC section which defines some HTTP method.
-
-        """
-        return "{0}#{1}{2}".format(self.base_url, self.anchor, self.section)
-
-
-class RFC2616Ref(DocRef):
-    def __init__(self, section):
-        url = "http://www.w3.org/Protocols/rfc2616/rfc2616-sec{0:d}.html"
-        url = url.format(int(section))
-        super(RFC2616Ref, self).__init__(url, "sec", section)
-
-
-class IETFRef(DocRef):
-    def __init__(self, rfc, section):
-        url = "http://tools.ietf.org/html/rfc{0:d}".format(rfc)
-        super(IETFRef, self).__init__(url, "section-", section)
-
-
-class EventSourceRef(DocRef):
-    def __init__(self, section):
-        url = "http://www.w3.org/TR/eventsource/"
-        super(EventSourceRef, self).__init__(url, section, "")
-
-
-#: Mapping from lowercase HTTP method name to :class:`DocRef` object which
-#: maintains the URL which points to the section of the RFC which defines that
-#: HTTP method.
-METHOD_REFS = {
-    "patch": IETFRef(5789, 2),
-    "options": RFC2616Ref(9.2),
-    "get": RFC2616Ref(9.3),
-    "head": RFC2616Ref(9.4),
-    "post": RFC2616Ref(9.5),
-    "put": RFC2616Ref(9.6),
-    "delete": RFC2616Ref(9.7),
-    "trace": RFC2616Ref(9.8),
-    "connect": RFC2616Ref(9.9),
-    "copy": IETFRef(2518, 8.8),
-    "any": "",
-}
-
-#: Mapping from HTTP header name to :class:`DocRef` object which
-#: maintains the URL which points to the related section of the RFC.
-HEADER_REFS = {
-    "Accept": RFC2616Ref(14.1),
-    "Accept-Charset": RFC2616Ref(14.2),
-    "Accept-Encoding": RFC2616Ref(14.3),
-    "Accept-Language": RFC2616Ref(14.4),
-    "Accept-Ranges": RFC2616Ref(14.5),
-    "Age": RFC2616Ref(14.6),
-    "Allow": RFC2616Ref(14.7),
-    "Authorization": RFC2616Ref(14.8),
-    "Cache-Control": RFC2616Ref(14.9),
-    "Cookie": IETFRef(2109, "4.3.4"),
-    "Connection": RFC2616Ref(14.10),
-    "Content-Encoding": RFC2616Ref(14.11),
-    "Content-Language": RFC2616Ref(14.12),
-    "Content-Length": RFC2616Ref(14.13),
-    "Content-Location": RFC2616Ref(14.14),
-    "Content-MD5": RFC2616Ref(14.15),
-    "Content-Range": RFC2616Ref(14.16),
-    "Content-Type": RFC2616Ref(14.17),
-    "Date": RFC2616Ref(14.18),
-    "Destination": IETFRef(2518, 9.3),
-    "ETag": RFC2616Ref(14.19),
-    "Expect": RFC2616Ref(14.20),
-    "Expires": RFC2616Ref(14.21),
-    "From": RFC2616Ref(14.22),
-    "Host": RFC2616Ref(14.23),
-    "If-Match": RFC2616Ref(14.24),
-    "If-Modified-Since": RFC2616Ref(14.25),
-    "If-None-Match": RFC2616Ref(14.26),
-    "If-Range": RFC2616Ref(14.27),
-    "If-Unmodified-Since": RFC2616Ref(14.28),
-    "Last-Event-ID": EventSourceRef("last-event-id"),
-    "Last-Modified": RFC2616Ref(14.29),
-    "Location": RFC2616Ref(14.30),
-    "Max-Forwards": RFC2616Ref(14.31),
-    "Pragma": RFC2616Ref(14.32),
-    "Proxy-Authenticate": RFC2616Ref(14.33),
-    "Proxy-Authorization": RFC2616Ref(14.34),
-    "Range": RFC2616Ref(14.35),
-    "Referer": RFC2616Ref(14.36),
-    "Retry-After": RFC2616Ref(14.37),
-    "Server": RFC2616Ref(14.38),
-    "Set-Cookie": IETFRef(2109, "4.2.2"),
-    "TE": RFC2616Ref(14.39),
-    "Trailer": RFC2616Ref(14.40),
-    "Transfer-Encoding": RFC2616Ref(14.41),
-    "Upgrade": RFC2616Ref(14.42),
-    "User-Agent": RFC2616Ref(14.43),
-    "Vary": RFC2616Ref(14.44),
-    "Via": RFC2616Ref(14.45),
-    "Warning": RFC2616Ref(14.46),
-    "WWW-Authenticate": RFC2616Ref(14.47),
-}
-
-
-HTTP_STATUS_CODES = {
-    100: "Continue",
-    101: "Switching Protocols",
-    102: "Processing",
-    200: "OK",
-    201: "Created",
-    202: "Accepted",
-    203: "Non-Authoritative Information",
-    204: "No Content",
-    205: "Reset Content",
-    206: "Partial Content",
-    207: "Multi-Status",
-    226: "IM Used",  # see RFC 3229
-    300: "Multiple Choices",
-    301: "Moved Permanently",
-    302: "Found",
-    303: "See Other",
-    304: "Not Modified",
-    305: "Use Proxy",
-    307: "Temporary Redirect",
-    400: "Bad Request",
-    401: "Unauthorized",
-    402: "Payment Required",  # unused
-    403: "Forbidden",
-    404: "Not Found",
-    405: "Method Not Allowed",
-    406: "Not Acceptable",
-    407: "Proxy Authentication Required",
-    408: "Request Timeout",
-    409: "Conflict",
-    410: "Gone",
-    411: "Length Required",
-    412: "Precondition Failed",
-    413: "Request Entity Too Large",
-    414: "Request URI Too Long",
-    415: "Unsupported Media Type",
-    416: "Requested Range Not Satisfiable",
-    417: "Expectation Failed",
-    418: "I'm a teapot",  # see RFC 2324
-    422: "Unprocessable Entity",
-    423: "Locked",
-    424: "Failed Dependency",
-    426: "Upgrade Required",
-    449: "Retry With",  # proprietary MS extension
-    500: "Internal Server Error",
-    501: "Not Implemented",
-    502: "Bad Gateway",
-    503: "Service Unavailable",
-    504: "Gateway Timeout",
-    505: "HTTP Version Not Supported",
-    507: "Insufficient Storage",
-    510: "Not Extended",
-}
-
-http_sig_param_re = re.compile(
-    r"\((?:(?P<type>[^:)]+):)?(?P<name>[\w_]+)\)", re.VERBOSE
-)
-
-
-def sort_by_method(entries):
-    def key(item):
-        order = ["HEAD", "GET", "POST", "PUT", "DELETE", "COPY", "OPTIONS"]
-        method = item[0].split(" ", 1)[0]
-        if method in order:
-            return order.index(method)
-        return 100
-
-    return sorted(entries, key=key)
-
-
-def http_resource_anchor(method, path):
-    path = re.sub(r"[{}]", "", re.sub(r"[<>:/]", "-", path))
-    return method.lower() + "-" + path
-
-
-class HTTPResource(ObjectDescription):
-
-    doc_field_types = [
-        TypedField(
-            "parameter",
-            label="Parameters",
-            names=("param", "parameter", "arg", "argument"),
-            typerolename="obj",
-            typenames=("paramtype", "type"),
-        ),
-        TypedField(
-            "jsonobject",
-            label="JSON Object",
-            names=("jsonparameter", "jsonparam", "json"),
-            typerolename="obj",
-            typenames=("jsonparamtype", "jsontype"),
-        ),
-        TypedField(
-            "requestjsonobject",
-            label="Request JSON Object",
-            names=("reqjsonobj", "reqjson", "<jsonobj", "<json"),
-            typerolename="obj",
-            typenames=("reqjsontype", "<jsontype"),
-        ),
-        TypedField(
-            "requestjsonarray",
-            label="Request JSON Array of Objects",
-            names=("reqjsonarr", "<jsonarr"),
-            typerolename="obj",
-            typenames=("reqjsonarrtype", "<jsonarrtype"),
-        ),
-        TypedField(
-            "responsejsonobject",
-            label="Response JSON Object",
-            names=("resjsonobj", "resjson", ">jsonobj", ">json"),
-            typerolename="obj",
-            typenames=("resjsontype", ">jsontype"),
-        ),
-        TypedField(
-            "responsejsonarray",
-            label="Response JSON Array of Objects",
-            names=("resjsonarr", ">jsonarr"),
-            typerolename="obj",
-            typenames=("resjsonarrtype", ">jsonarrtype"),
-        ),
-        TypedField(
-            "queryparameter",
-            label="Query Parameters",
-            names=("queryparameter", "queryparam", "qparam", "query"),
-            typerolename="obj",
-            typenames=("queryparamtype", "querytype", "qtype"),
-        ),
-        GroupedField(
-            "formparameter",
-            label="Form Parameters",
-            names=("formparameter", "formparam", "fparam", "form"),
-        ),
-        GroupedField(
-            "requestheader",
-            label="Request Headers",
-            rolename="mailheader",
-            names=("<header", "reqheader", "requestheader"),
-        ),
-        GroupedField(
-            "responseheader",
-            label="Response Headers",
-            rolename="mailheader",
-            names=(">header", "resheader", "responseheader"),
-        ),
-        GroupedField(
-            "statuscode",
-            label="Status Codes",
-            rolename="statuscode",
-            names=("statuscode", "status", "code"),
-        ),
-    ]
-
-    option_spec = {
-        "deprecated": directives.flag,
-        "noindex": directives.flag,
-        "synopsis": lambda x: x,
-    }
-
-    method = NotImplemented
-
-    def handle_signature(self, sig, signode):
-        method = self.method.upper() + " "
-        signode += addnodes.desc_name(method, method)
-        offset = 0
-        path = None
-        for match in http_sig_param_re.finditer(sig):
-            path = sig[offset : match.start()]
-            signode += addnodes.desc_name(path, path)
-            params = addnodes.desc_parameterlist()
-            typ = match.group("type")
-            if typ:
-                typ += ": "
-                params += addnodes.desc_annotation(typ, typ)
-            name = match.group("name")
-            params += addnodes.desc_parameter(name, name)
-            signode += params
-            offset = match.end()
-        if offset < len(sig):
-            path = sig[offset : len(sig)]
-            signode += addnodes.desc_name(path, path)
-        if path is None:
-            assert False, "no matches for sig: %s" % sig
-        fullname = self.method.upper() + " " + path
-        signode["method"] = self.method
-        signode["path"] = sig
-        signode["fullname"] = fullname
-        return (fullname, self.method, sig)
-
-    def needs_arglist(self):
-        return False
-
-    def add_target_and_index(self, name_cls, sig, signode):
-        signode["ids"].append(http_resource_anchor(*name_cls[1:]))
-        if "noindex" not in self.options:
-            self.env.domaindata["http"][self.method][sig] = (
-                self.env.docname,
-                self.options.get("synopsis", ""),
-                "deprecated" in self.options,
-            )
-
-    def get_index_text(self, modname, name):
-        return ""
-
-
-class HTTPOptions(HTTPResource):
-
-    method = "options"
-
-
-class HTTPHead(HTTPResource):
-
-    method = "head"
-
-
-class HTTPPatch(HTTPResource):
-
-    method = "patch"
-
-
-class HTTPPost(HTTPResource):
-
-    method = "post"
-
-
-class HTTPGet(HTTPResource):
-
-    method = "get"
-
-
-class HTTPPut(HTTPResource):
-
-    method = "put"
-
-
-class HTTPDelete(HTTPResource):
-
-    method = "delete"
-
-
-class HTTPTrace(HTTPResource):
-
-    method = "trace"
-
-
-class HTTPCopy(HTTPResource):
-
-    method = "copy"
-
-
-class HTTPAny(HTTPResource):
-
-    method = "any"
-
-
-def http_statuscode_role(
-    name, rawtext, text, lineno, inliner, options=None, content=None
-):
-    if options is None:
-        options = {}
-    if content is None:
-        content = []
-    if text.isdigit():
-        code = int(text)
-        try:
-            status = HTTP_STATUS_CODES[code]
-        except KeyError:
-            msg = inliner.reporter.error(
-                "%d is an invalid HTTP status code" % code, lineno=lineno
-            )
-            prb = inliner.problematic(rawtext, rawtext, msg)
-            return [prb], [msg]
-    else:
-        try:
-            code, status = re.split(r"\s", text.strip(), 1)
-            code = int(code)
-        except ValueError:
-            msg = inliner.reporter.error(
-                "HTTP status code must be an integer (e.g. `200`) or "
-                "start with an integer (e.g. `200 OK`); %r is invalid" % text,
-                line=lineno,
-            )
-            prb = inliner.problematic(rawtext, rawtext, msg)
-            return [prb], [msg]
-    nodes.reference(rawtext)
-    if code == 226:
-        url = "http://www.ietf.org/rfc/rfc3229.txt"
-    elif code == 418:
-        url = "http://www.ietf.org/rfc/rfc2324.txt"
-    elif code == 449:
-        url = "http://msdn.microsoft.com/en-us/library/dd891478(v=prot.10).aspx"
-    elif code in HTTP_STATUS_CODES:
-        url = "http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html" "#sec10." + (
-            "%d.%d" % (code // 100, 1 + code % 100)
-        )
-    else:
-        url = ""
-    set_classes(options)
-    node = nodes.reference(rawtext, "%d %s" % (code, status), refuri=url, **options)
-    return [node], []
-
-
-def http_method_role(name, rawtext, text, lineno, inliner, options=None, content=None):
-    if options is None:
-        options = {}
-    if content is None:
-        content = []
-    method = str(text).lower()
-    if method not in METHOD_REFS:
-        msg = inliner.reporter.error(
-            "%s is not a valid HTTP method" % method, lineno=lineno
-        )
-        prb = inliner.problematic(rawtext, rawtext, msg)
-        return [prb], [msg]
-    url = str(METHOD_REFS[method])
-    node = nodes.reference(rawtext, method.upper(), refuri=url, **options)
-    return [node], []
-
-
-def http_header_role(name, rawtext, text, lineno, inliner, options=None, content=None):
-    if options is None:
-        options = {}
-    if content is None:
-        content = []
-    header = str(text)
-    if header not in HEADER_REFS:
-        header = header.title()
-    if header not in HEADER_REFS:
-        if header.startswith(("X-Couch-", "Couch-")):
-            return [nodes.strong(header, header)], []
-        msg = inliner.reporter.error(
-            "%s is an unknown HTTP header" % header, lineno=lineno
-        )
-        prb = inliner.problematic(rawtext, rawtext, msg)
-        return [prb], [msg]
-    url = str(HEADER_REFS[header])
-    node = nodes.reference(rawtext, header, refuri=url, **options)
-    return [node], []
-
-
-class HTTPXRefRole(XRefRole):
-    def __init__(self, method, **kwargs):
-        XRefRole.__init__(self, **kwargs)
-        self.method = method
-
-    def process_link(self, env, refnode, has_explicit_title, title, target):
-        if not target.startswith("/"):
-            pass
-        if not has_explicit_title:
-            title = self.method.upper() + " " + title
-        return title, target
-
-
-class HTTPIndex(Index):
-
-    name = "api"
-    localname = "API Quick Reference"
-    shortname = "API Reference"
-
-    def generate(self, docnames=None):
-        content = {}
-        items = (
-            (method, path, info)
-            for method, routes in self.domain.routes.items()
-            for path, info in routes.items()
-        )
-        items = sorted(items, key=lambda item: item[1])
-        for method, path, info in items:
-            entries = content.setdefault(path, [])
-            entry_name = method.upper() + " " + path
-            entries.append(
-                [
-                    entry_name,
-                    0,
-                    info[0],
-                    http_resource_anchor(method, path),
-                    "",
-                    "Deprecated" if info[2] else "",
-                    info[1],
-                ]
-            )
-        items = sorted(
-            (path, sort_by_method(entries)) for path, entries in content.items()
-        )
-        return (items, True)
-
-
-class HTTPDomain(Domain):
-    """HTTP domain."""
-
-    name = "http"
-    label = "HTTP"
-
-    object_types = {
-        "options": ObjType("options", "options", "obj"),
-        "head": ObjType("head", "head", "obj"),
-        "post": ObjType("post", "post", "obj"),
-        "get": ObjType("get", "get", "obj"),
-        "put": ObjType("put", "put", "obj"),
-        "patch": ObjType("patch", "patch", "obj"),
-        "delete": ObjType("delete", "delete", "obj"),
-        "trace": ObjType("trace", "trace", "obj"),
-        "copy": ObjType("copy", "copy", "obj"),
-        "any": ObjType("any", "any", "obj"),
-    }
-
-    directives = {
-        "options": HTTPOptions,
-        "head": HTTPHead,
-        "post": HTTPPost,
-        "get": HTTPGet,
-        "put": HTTPPut,
-        "patch": HTTPPatch,
-        "delete": HTTPDelete,
-        "trace": HTTPTrace,
-        "copy": HTTPCopy,
-        "any": HTTPAny,
-    }
-
-    roles = {
-        "options": HTTPXRefRole("options"),
-        "head": HTTPXRefRole("head"),
-        "post": HTTPXRefRole("post"),
-        "get": HTTPXRefRole("get"),
-        "put": HTTPXRefRole("put"),
-        "patch": HTTPXRefRole("patch"),
-        "delete": HTTPXRefRole("delete"),
-        "trace": HTTPXRefRole("trace"),
-        "copy": HTTPXRefRole("copy"),
-        "all": HTTPXRefRole("all"),
-        "statuscode": http_statuscode_role,
-        "method": http_method_role,
-        "header": http_header_role,
-    }
-
-    initial_data = {
-        "options": {},  # path: (docname, synopsis, deprecated)
-        "head": {},
-        "post": {},
-        "get": {},
-        "put": {},
-        "patch": {},
-        "delete": {},
-        "trace": {},
-        "copy": {},
-        "any": {},
-    }
-
-    indices = [HTTPIndex]
-
-    @property
-    def routes(self):
-        return dict((key, self.data[key]) for key in self.object_types)
-
-    def clear_doc(self, docname):
-        for typ, routes in self.routes.items():
-            for path, info in list(routes.items()):
-                if info[0] == docname:
-                    del routes[path]
-
-    def resolve_xref(self, env, fromdocname, builder, typ, target, node, contnode):
-        try:
-            info = self.data[str(typ)][target]
-        except KeyError:
-            text = contnode.rawsource
-            if typ == "statuscode":
-                return http_statuscode_role(None, text, text, None, None)[0][0]
-            elif typ == "mailheader":
-                return http_header_role(None, text, text, None, None)[0][0]
-            else:
-                return nodes.emphasis(text, text)
-        else:
-            anchor = http_resource_anchor(typ, target)
-            title = typ.upper() + " " + target
-            return make_refnode(builder, fromdocname, info[0], anchor, contnode, title)
-
-    def get_objects(self):
-        for method, routes in self.routes.items():
-            for path, info in routes.items():
-                anchor = http_resource_anchor(method, path)
-                yield (path, path, method, info[0], anchor, 1)
-
-
-class HTTPLexer(RegexLexer):
-    """Lexer for HTTP sessions."""
-
-    name = "HTTP"
-    aliases = ["http"]
-
-    flags = re.DOTALL
-
-    def header_callback(self, match):
-        if match.group(1).lower() == "content-type":
-            content_type = match.group(5).strip()
-            if ";" in content_type:
-                content_type = content_type[: content_type.find(";")].strip()
-            self.content_type = content_type
-        yield match.start(1), Name.Attribute, match.group(1)
-        yield match.start(2), Text, match.group(2)
-        yield match.start(3), Operator, match.group(3)
-        yield match.start(4), Text, match.group(4)
-        yield match.start(5), Literal, match.group(5)
-        yield match.start(6), Text, match.group(6)
-
-    def continuous_header_callback(self, match):
-        yield match.start(1), Text, match.group(1)
-        yield match.start(2), Literal, match.group(2)
-        yield match.start(3), Text, match.group(3)
-
-    def content_callback(self, match):
-        content_type = getattr(self, "content_type", None)
-        content = match.group()
-        offset = match.start()
-        if content_type:
-            from pygments.lexers import get_lexer_for_mimetype
-
-            try:
-                lexer = get_lexer_for_mimetype(content_type)
-            except ClassNotFound:
-                pass
-            else:
-                for idx, token, value in lexer.get_tokens_unprocessed(content):
-                    yield offset + idx, token, value
-                return
-        yield offset, Text, content
-
-    tokens = {
-        "root": [
-            (
-                r"(GET|POST|PUT|PATCH|DELETE|HEAD|OPTIONS|TRACE|COPY)"
-                r"( +)([^ ]+)( +)"
-                r"(HTTPS?)(/)(1\.[01])(\r?\n|$)",
-                bygroups(
-                    Name.Function,
-                    Text,
-                    Name.Namespace,
-                    Text,
-                    Keyword.Reserved,
-                    Operator,
-                    Number,
-                    Text,
-                ),
-                "headers",
-            ),
-            (
-                r"(HTTPS?)(/)(1\.[01])( +)(\d{3})( +)([^\r\n]+)(\r?\n|$)",
-                bygroups(
-                    Keyword.Reserved,
-                    Operator,
-                    Number,
-                    Text,
-                    Number,
-                    Text,
-                    Name.Exception,
-                    Text,
-                ),
-                "headers",
-            ),
-        ],
-        "headers": [
-            (r"([^\s:]+)( *)(:)( *)([^\r\n]+)(\r?\n|$)", header_callback),
-            (r"([\t ]+)([^\r\n]+)(\r?\n|$)", continuous_header_callback),
-            (r"\r?\n", Text, "content"),
-        ],
-        "content": [(r".+", content_callback)],
-    }
-
-
-def setup(app):
-    app.add_domain(HTTPDomain)
-    app.add_lexer("http", HTTPLexer())
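The deleted `http_resource_anchor` helper is small enough to try standalone: it flattens URL punctuation into dashes and strips placeholder braces to build a stable HTML anchor. A copy of that logic, with a CouchDB-flavored example path chosen for illustration:

```python
import re

# Standalone copy of the deleted http_resource_anchor helper: a
# method + path pair becomes an HTML anchor id by turning <>:/ into
# dashes and dropping {} placeholder braces.

def http_resource_anchor(method, path):
    path = re.sub(r"[{}]", "", re.sub(r"[<>:/]", "-", path))
    return method.lower() + "-" + path

print(http_resource_anchor("GET", "/{db}/_all_docs"))  # get--db-_all_docs
```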
diff --git a/ext/linter.py b/ext/linter.py
deleted file mode 100644
index 0efa2b0..0000000
--- a/ext/linter.py
+++ /dev/null
@@ -1,293 +0,0 @@
-# Licensed under the Apache License, Version 2.0 (the "License"); you may not
-# use this file except in compliance with the License. You may obtain a copy of
-# the License at
-#
-#   http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-# License for the specific language governing permissions and limitations under
-# the License.
-
-
-# This is a very, very simple linter, made in one evening with no ambition of
-# building something great, just a thing that works.
-
-import os
-import re
-import sys
-
-
-RULES = []
-HAS_ERRORS = False
-IGNORE_ERROR = False
-
-
-def error_report(file, line, msg, _state=[]):
-    global HAS_ERRORS, IGNORE_ERROR
-    if IGNORE_ERROR:
-        return
-    if _state and _state[0] == file.name:
-        pass
-    else:
-        if _state:
-            _state[0] = file.name
-        else:
-            _state.append(file.name)
-        sys.stderr.write(file.name + "\n")
-    sys.stderr.write(" ".join(["  line", str(line), ":", msg]) + "\n")
-    HAS_ERRORS = True
-
-
-def register_rule(func):
-    RULES.append(func)
-    return func
-
-
-def main(path):
-    for file in iter_rst_files(os.path.abspath(path)):
-        validate(file)
-    sys.exit(HAS_ERRORS)
-
-
-def iter_rst_files(path):
-    if os.path.isfile(path):
-        with open(path) as f:
-            yield f
-        return
-    for root, dirs, files in os.walk(path):
-        for file in files:
-            if file.endswith(".rst"):
-                with open(os.path.join(root, file), "rb") as f:
-                    yield f
-
-
-def validate(file):
-    global IGNORE_ERROR
-    IGNORE_ERROR = False
-    rules = [rule(file) for rule in RULES]
-    for rule in rules:
-        for _ in rule:
-            # initialize coroutine
-            break
-    while True:
-        line = file.readline().decode("utf-8")
-        exhausted = []
-        for idx, rule in enumerate(rules):
-            try:
-                error = rule.send(line)
-            except StopIteration:
-                exhausted.append(rule)
-            else:
-                if error:
-                    error_report(*error)
-
-        # not very optimal, but I'm too lazy to figure out anything better
-        for rule in exhausted:
-            rules.pop(rules.index(rule))
-
-        if not line:
-            break
-
-
-@register_rule
-def silent_scream(file):
-    """Sometimes we must accept the presence of certain errors for good
-    reasons. This rule handles those cases."""
-    global IGNORE_ERROR
-    counter = 0
-    while True:
-        line = yield None
-        if not line:
-            break
-
-        if counter:
-            IGNORE_ERROR = True
-            counter -= 1
-        else:
-            IGNORE_ERROR = False
-
-        match = re.match(
-            r"\s*\.\. lint: ignore errors for the next (\d+) lines?", line
-        )
-        if match:
-            # +1 for empty line right after comment
-            counter = int(match.group(1)) + 1
-
-
-@register_rule
-def license_adviser(file):
-    """Each source file must include ASF license header."""
-    header = iter(
-        """
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-""".lstrip().splitlines(
-            False
-        )
-    )
-    error = None
-    for n, hline in enumerate(header):
-        fline = yield error
-        error = None
-        if hline != fline.strip("\r\n"):
-            error = (
-                file,
-                n + 1,
-                "bad ASF license header\n"
-                "  expected: {0}\n"
-                "  found:    {1}".format(hline, fline.strip()),
-            )
-
-
-@register_rule
-def whitespace_committee(file):
-    """The whitespace committee takes care of whitespace (surprise!) characters
-    in files. The documentation style guide says:
-
-    - There should be no trailing whitespace;
-    - More than one consecutive empty line is not allowed, and there should
-      be no empty lines at the end of a file;
-    - The last line should end with a newline character.
-
-    Additionally it alerts about tabs if they were used instead of spaces.
-
-    TODO: check for indentation
-    """
-    error = prev = None
-    n = 0
-    while True:
-        line = yield error
-        error = None
-        if not line:
-            break
-        n += 1
-
-        # Check for trailing whitespace
-        if line.strip("\r\n").endswith(" "):
-            error = (file, n + 1, "trailing whitespace detected!\n" "{0}".format(line))
-
-        # Check for continuous empty lines
-        if prev is not None:
-            if prev.strip() == line.strip() == "":
-                error = (file, n + 1, "too many empty lines")
-
-        # Nobody loves tabs-spaces cocktail, we prefer spaces
-        if "\t" in line:
-            error = (file, n + 1, "no tabs please")
-
-        prev = line
-
-    # Accidentally empty file committed?
-    if prev is None:
-        error = (file, 0, "oh no! file seems empty!")
-
-    # Empty last lines not welcome
-    elif prev.strip() == "":
-        error = (file, n + 1, "no empty last lines please")
-
-    # Last line should end with a newline character
-    elif not prev.endswith("\n"):
-        error = (file, n + 1, "last line should end with a newline character")
-
-    yield error
-    return
-
-
-@register_rule
-def line_length_checker(file):
-    """Use a modern max line length of 90 chars, as recommended by things like
-    https://github.com/ambv/black and https://youtu.be/wf-BqAjZb8M?t=260 .
-    """
-    in_code_block = False
-    seen_emptyline = False
-    n = 0
-    error = None
-    while True:
-        line = yield error
-        error = None
-        if not line:
-            break
-        n += 1
-        line = line.rstrip()
-
-        # We have to ignore content in code blocks since it's hard to keep
-        # it within a 90-character-wide box.
-        if line.strip().startswith(".. code") or line.endswith("::"):
-            in_code_block = True
-            continue
-
-        # Check line length unless we're in a code block
-        if len(line) > 90 and not in_code_block:
-            if line.startswith(".."):
-                # Ignore long lines with external links
-                continue
-
-            if line.endswith(">`_"):
-                # Ignore long lines caused by URLs
-                # TODO: be smarter here
-                continue
-
-            error = (
-                file,
-                n,
-                "too long line ({0} > 90)\n{1}\n".format(len(line), line),
-            )
-
-        # Empty lines act as separators for code block content
-        elif not line:
-            seen_emptyline = True
-
-        # If we saw an empty line and the next content is not indented,
-        # that most likely signals the end of the code block
-        # (if we were ever in one)
-        elif seen_emptyline and line and not line.startswith(" "):
-            seen_emptyline = False
-            in_code_block = False
-
-        else:
-            seen_emptyline = False
-
-
-@register_rule
-def my_lovely_hat(file):
-    """Everyone loves to wear a nice hat on their head, and so do articles."""
-    error = None
-    n = 0
-    while True:
-        line = yield error
-        error = None
-        if not line:
-            break
-        n += 1
-
-        line = line.strip()
-
-        if not line:
-            continue
-
-        if line.startswith(".."):
-            continue
-
-        if set(line) < set(["#", "-", "=", "*"]):
-            break
-        else:
-            lines = [line, "\n", (yield None), (yield None)]
-            yield (file, n, "bad title header:\n{}".format("".join(lines)))
-            return
-
-
-if __name__ == "__main__":
-    import sys
-
-    if len(sys.argv) == 1:
-        sys.stderr.write("Missing argument: target path\n")
-        sys.exit(2)
-    main(sys.argv[1])
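For context, the deleted rules all share one generator-coroutine protocol: a rule is primed with `next()`, receives lines via `send()`, and yields back either `None` or an error tuple `(file, line_no, message)`. A minimal standalone sketch of that protocol, assuming a driver loop like the removed `main()` provided (`whitespace_rule` and `run_rule` are illustrative names, not part of the deleted script):

```python
def whitespace_rule(file):
    """Toy rule mirroring the deleted checkers' shape: flag trailing whitespace."""
    error = None
    n = 0
    while True:
        line = yield error  # receive next line, hand back any pending error
        error = None
        if not line:
            break
        n += 1
        # Stripping only the line ending leaves any trailing spaces/tabs behind
        if line.rstrip("\r\n") != line.rstrip():
            error = (file, n, "trailing whitespace detected!")
    yield error  # flush the final error (or None) after end-of-input


def run_rule(rule, file, lines):
    """Drive one rule over a sequence of lines and collect its errors."""
    checker = rule(file)
    next(checker)  # prime the generator up to its first yield
    errors = []
    for line in lines:
        err = checker.send(line)
        if err:
            errors.append(err)
    try:
        err = checker.send("")  # falsy value signals end of input
        if err:
            errors.append(err)
    except StopIteration:
        pass
    return errors
```

The advantage of this shape is that several independent rules can be primed once and fed the same line stream in a single pass over the file.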
diff --git a/images/23379351593_0c480537de_q.jpg b/images/23379351593_0c480537de_q.jpg
deleted file mode 100644
index e004b13..0000000
--- a/images/23379351593_0c480537de_q.jpg
+++ /dev/null
Binary files differ
diff --git a/images/TLS-Handshake.png b/images/TLS-Handshake.png
deleted file mode 100644
index 5bec6bd..0000000
--- a/images/TLS-Handshake.png
+++ /dev/null
Binary files differ
diff --git a/images/epub-icon.png b/images/epub-icon.png
deleted file mode 100644
index 3fda935..0000000
--- a/images/epub-icon.png
+++ /dev/null
Binary files differ
diff --git a/images/fair-enough.png b/images/fair-enough.png
deleted file mode 100755
index 0894022..0000000
--- a/images/fair-enough.png
+++ /dev/null
Binary files differ
diff --git a/images/favicon.ico b/images/favicon.ico
deleted file mode 100644
index e538aea..0000000
--- a/images/favicon.ico
+++ /dev/null
Binary files differ
diff --git a/images/futon-createdb.png b/images/futon-createdb.png
deleted file mode 100644
index c8c1b9d..0000000
--- a/images/futon-createdb.png
+++ /dev/null
Binary files differ
diff --git a/images/futon-editdoc.png b/images/futon-editdoc.png
deleted file mode 100644
index f31dbbe..0000000
--- a/images/futon-editdoc.png
+++ /dev/null
Binary files differ
diff --git a/images/futon-editeddoc.png b/images/futon-editeddoc.png
deleted file mode 100644
index a5913bc..0000000
--- a/images/futon-editeddoc.png
+++ /dev/null
Binary files differ
diff --git a/images/futon-overview.png b/images/futon-overview.png
deleted file mode 100644
index e1daf5c..0000000
--- a/images/futon-overview.png
+++ /dev/null
Binary files differ
diff --git a/images/futon-replform.png b/images/futon-replform.png
deleted file mode 100644
index 72b9ff5..0000000
--- a/images/futon-replform.png
+++ /dev/null
Binary files differ
diff --git a/images/gf-gnome-rainbows.png b/images/gf-gnome-rainbows.png
deleted file mode 100644
index 07c7145..0000000
--- a/images/gf-gnome-rainbows.png
+++ /dev/null
Binary files differ
diff --git a/images/intro-consistency-01.png b/images/intro-consistency-01.png
deleted file mode 100644
index a577059..0000000
--- a/images/intro-consistency-01.png
+++ /dev/null
Binary files differ
diff --git a/images/intro-consistency-02.png b/images/intro-consistency-02.png
deleted file mode 100644
index 06c23ea..0000000
--- a/images/intro-consistency-02.png
+++ /dev/null
Binary files differ
diff --git a/images/intro-consistency-03.png b/images/intro-consistency-03.png
deleted file mode 100644
index 2164c6c..0000000
--- a/images/intro-consistency-03.png
+++ /dev/null
Binary files differ
diff --git a/images/intro-consistency-04.png b/images/intro-consistency-04.png
deleted file mode 100644
index 068fa77..0000000
--- a/images/intro-consistency-04.png
+++ /dev/null
Binary files differ
diff --git a/images/intro-consistency-05.png b/images/intro-consistency-05.png
deleted file mode 100644
index a94f9c3..0000000
--- a/images/intro-consistency-05.png
+++ /dev/null
Binary files differ
diff --git a/images/intro-consistency-06.png b/images/intro-consistency-06.png
deleted file mode 100644
index af316d4..0000000
--- a/images/intro-consistency-06.png
+++ /dev/null
Binary files differ
diff --git a/images/intro-consistency-07.png b/images/intro-consistency-07.png
deleted file mode 100644
index 7fb5027..0000000
--- a/images/intro-consistency-07.png
+++ /dev/null
Binary files differ
diff --git a/images/intro-tour-01.png b/images/intro-tour-01.png
deleted file mode 100644
index e6fe9df..0000000
--- a/images/intro-tour-01.png
+++ /dev/null
Binary files differ
diff --git a/images/intro-tour-03.png b/images/intro-tour-03.png
deleted file mode 100644
index 7137583..0000000
--- a/images/intro-tour-03.png
+++ /dev/null
Binary files differ
diff --git a/images/intro-tour-04.png b/images/intro-tour-04.png
deleted file mode 100644
index 7bc5678..0000000
--- a/images/intro-tour-04.png
+++ /dev/null
Binary files differ
diff --git a/images/intro-tour-05.png b/images/intro-tour-05.png
deleted file mode 100644
index 972cb65..0000000
--- a/images/intro-tour-05.png
+++ /dev/null
Binary files differ
diff --git a/images/intro-tour-06.png b/images/intro-tour-06.png
deleted file mode 100644
index 9f27df1..0000000
--- a/images/intro-tour-06.png
+++ /dev/null
Binary files differ
diff --git a/images/intro-tour-07.png b/images/intro-tour-07.png
deleted file mode 100644
index 229ce63..0000000
--- a/images/intro-tour-07.png
+++ /dev/null
Binary files differ
diff --git a/images/intro-tour-08.png b/images/intro-tour-08.png
deleted file mode 100644
index 4aa549b..0000000
--- a/images/intro-tour-08.png
+++ /dev/null
Binary files differ
diff --git a/images/intro-tour-09.png b/images/intro-tour-09.png
deleted file mode 100644
index b850ade..0000000
--- a/images/intro-tour-09.png
+++ /dev/null
Binary files differ
diff --git a/images/intro-tour-10.png b/images/intro-tour-10.png
deleted file mode 100644
index 68038bf..0000000
--- a/images/intro-tour-10.png
+++ /dev/null
Binary files differ
diff --git a/images/intro-why-01.png b/images/intro-why-01.png
deleted file mode 100644
index c927450..0000000
--- a/images/intro-why-01.png
+++ /dev/null
Binary files differ
diff --git a/images/intro-why-02.png b/images/intro-why-02.png
deleted file mode 100644
index a5bb4ce..0000000
--- a/images/intro-why-02.png
+++ /dev/null
Binary files differ
diff --git a/images/intro-why-03.png b/images/intro-why-03.png
deleted file mode 100644
index 1f5e536..0000000
--- a/images/intro-why-03.png
+++ /dev/null
Binary files differ
diff --git a/images/logo.png b/images/logo.png
deleted file mode 100644
index 553f31c..0000000
--- a/images/logo.png
+++ /dev/null
Binary files differ
diff --git a/images/purge-checkpoint-docs.png b/images/purge-checkpoint-docs.png
deleted file mode 100644
index 0480aa3..0000000
--- a/images/purge-checkpoint-docs.png
+++ /dev/null
Binary files differ
diff --git a/images/replication-state-diagram.svg b/images/replication-state-diagram.svg
deleted file mode 100644
index f0d6950..0000000
--- a/images/replication-state-diagram.svg
+++ /dev/null
@@ -1,419 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
-<svg version="1.2" width="215.9mm" height="279.4mm" viewBox="0 0 21590 27940" preserveAspectRatio="xMidYMid" fill-rule="evenodd" stroke-width="28.222" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg" xmlns:ooo="http://xml.openoffice.org/svg/export" xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:presentation="http://sun.com/xmlns/staroffice/presentation" xmlns:smil="http://www.w3.org/2001/SMIL20/" xmlns:anim="urn:oasis:names:tc:opendocument:xmlns:animation:1.0" xml:space="preserve">
- <defs class="ClipPathGroup">
-  <clipPath id="presentation_clip_path" clipPathUnits="userSpaceOnUse">
-   <rect x="0" y="0" width="21590" height="27940"/>
-  </clipPath>
-  <clipPath id="presentation_clip_path_shrink" clipPathUnits="userSpaceOnUse">
-   <rect x="21" y="27" width="21547" height="27885"/>
-  </clipPath>
- </defs>
- <defs>
-  <font id="EmbeddedFont_1" horiz-adv-x="2048">
-   <font-face font-family="Liberation Sans embedded" units-per-em="2048" font-weight="normal" font-style="normal" ascent="1839" descent="421"/>
-   <missing-glyph horiz-adv-x="2048" d="M 0,0 L 2047,0 2047,2047 0,2047 0,0 Z"/>
-   <glyph unicode="z" horiz-adv-x="842" d="M 83,0 L 83,137 688,943 117,943 117,1082 901,1082 901,945 295,139 922,139 922,0 Z"/>
-   <glyph unicode="y" horiz-adv-x="1014" d="M 191,-425 C 142,-425 100,-421 67,-414 L 67,-279 C 92,-283 120,-285 151,-285 263,-285 352,-203 417,-38 L 434,5 5,1082 197,1082 425,484 C 428,475 432,464 437,451 442,438 457,394 482,320 507,246 521,205 523,196 L 593,393 830,1082 1020,1082 604,0 C 559,-115 518,-201 479,-258 440,-314 398,-356 351,-384 304,-411 250,-425 191,-425 Z"/>
-   <glyph unicode="x" horiz-adv-x="982" d="M 801,0 L 510,444 217,0 23,0 408,556 41,1082 240,1082 510,661 778,1082 979,1082 612,558 1002,0 Z"/>
-   <glyph unicode="w" horiz-adv-x="1481" d="M 1174,0 L 965,0 776,765 740,934 C 734,904 725,861 712,805 699,748 631,480 508,0 L 300,0 -3,1082 175,1082 358,347 C 363,331 377,265 401,149 L 418,223 644,1082 837,1082 1026,339 1072,149 1103,288 1308,1082 1484,1082 Z"/>
-   <glyph unicode="u" horiz-adv-x="858" d="M 314,1082 L 314,396 C 314,325 321,269 335,230 349,191 371,162 402,145 433,128 478,119 537,119 624,119 692,149 742,208 792,267 817,350 817,455 L 817,1082 997,1082 997,231 C 997,105 999,28 1003,0 L 833,0 C 832,3 832,12 831,27 830,42 830,59 829,78 828,97 826,132 825,185 L 822,185 C 781,110 733,58 679,27 624,-5 557,-20 476,-20 357,-20 271,10 216,69 161,128 133,225 133,361 L 133,1082 Z"/>
-   <glyph unicode="t" horiz-adv-x="531" d="M 554,8 C 495,-8 434,-16 372,-16 228,-16 156,66 156,229 L 156,951 31,951 31,1082 163,1082 216,1324 336,1324 336,1082 536,1082 536,951 336,951 336,268 C 336,216 345,180 362,159 379,138 408,127 450,127 474,127 509,132 554,141 Z"/>
-   <glyph unicode="s" horiz-adv-x="890" d="M 950,299 C 950,197 912,118 835,63 758,8 650,-20 511,-20 376,-20 273,2 200,47 127,91 79,160 57,254 L 216,285 C 231,227 263,185 311,158 359,131 426,117 511,117 602,117 669,131 712,159 754,187 775,229 775,285 775,328 760,362 731,389 702,416 654,438 589,455 L 460,489 C 357,516 283,542 240,568 196,593 162,624 137,661 112,698 100,743 100,796 100,895 135,970 206,1022 276,1073 378,1099 513,1099 632,1099 727,1078 798,1036 868,994 912,927 931,834 L 769,814 C 759,862 732,899 689,925 645,950 586,963 513,963 432,963 372,951 333,926 294,901 275,864 275,814 275,783 283,758 299,738 315,718 339,701 370,687 401,673 467,654 568,629 663,605 732,583 774,563 816,542 849,520 874,495 898,470 917,442 930,410 943,377 950,340 950,299 Z"/>
-   <glyph unicode="r" horiz-adv-x="515" d="M 142,0 L 142,830 C 142,906 140,990 136,1082 L 306,1082 C 311,959 314,886 314,861 L 318,861 C 347,954 380,1017 417,1051 454,1085 507,1102 575,1102 599,1102 623,1099 648,1092 L 648,927 C 624,934 592,937 552,937 477,937 420,905 381,841 342,776 322,684 322,564 L 322,0 Z"/>
-   <glyph unicode="p" horiz-adv-x="936" d="M 1053,546 C 1053,169 920,-20 655,-20 488,-20 376,43 319,168 L 314,168 C 317,163 318,106 318,-2 L 318,-425 138,-425 138,861 C 138,972 136,1046 132,1082 L 306,1082 C 307,1079 308,1070 309,1054 310,1037 312,1012 314,978 315,944 316,921 316,908 L 320,908 C 352,975 394,1024 447,1055 500,1086 569,1101 655,1101 788,1101 888,1056 954,967 1020,878 1053,737 1053,546 Z M 864,542 C 864,693 844,800 803,865 762,930 698,962 609,962 538,962 482,947 442,917 401,887 371,840 350,777 329,713 318,630 318,528 318,386 341,281 386,214 431,147 505,113 607,113 696,113 762,146 803,212 844,277 864,387 864,542 Z"/>
-   <glyph unicode="o" horiz-adv-x="968" d="M 1053,542 C 1053,353 1011,212 928,119 845,26 724,-20 565,-20 407,-20 288,28 207,125 126,221 86,360 86,542 86,915 248,1102 571,1102 736,1102 858,1057 936,966 1014,875 1053,733 1053,542 Z M 864,542 C 864,691 842,800 798,868 753,935 679,969 574,969 469,969 393,935 346,866 299,797 275,689 275,542 275,399 298,292 345,221 391,149 464,113 563,113 671,113 748,148 795,217 841,286 864,395 864,542 Z"/>
-   <glyph unicode="n" horiz-adv-x="874" d="M 825,0 L 825,686 C 825,757 818,813 804,852 790,891 768,920 737,937 706,954 661,963 602,963 515,963 447,933 397,874 347,815 322,732 322,627 L 322,0 142,0 142,851 C 142,977 140,1054 136,1082 L 306,1082 C 307,1079 307,1070 308,1055 309,1040 310,1024 311,1005 312,986 313,950 314,897 L 317,897 C 358,972 406,1025 461,1056 515,1087 582,1102 663,1102 782,1102 869,1073 924,1014 979,955 1006,857 1006,721 L 1006,0 Z"/>
-   <glyph unicode="m" horiz-adv-x="1435" d="M 768,0 L 768,686 C 768,791 754,863 725,903 696,943 645,963 570,963 493,963 433,934 388,875 343,816 321,734 321,627 L 321,0 142,0 142,851 C 142,977 140,1054 136,1082 L 306,1082 C 307,1079 307,1070 308,1055 309,1040 310,1024 311,1005 312,986 313,950 314,897 L 317,897 C 356,974 400,1027 450,1057 500,1087 561,1102 633,1102 715,1102 780,1086 828,1053 875,1020 908,968 927,897 L 930,897 C 967,970 1013,1022 1066,1054 1119,1086 1183,1102 1258,1102 1367,1102 1447,1072 1497,1013 1546,954 1571,856 1571,721 L 1571,0 1393,0 1393,686 C 1393,791 1379,863 1350,903 1321,943 1270,963 1195,963 1116,963 1055,934 1012,876 968,817 946,734 946,627 L 946,0 Z"/>
-   <glyph unicode="l" horiz-adv-x="173" d="M 138,0 L 138,1484 318,1484 318,0 Z"/>
-   <glyph unicode="j" horiz-adv-x="360" d="M 137,1312 L 137,1484 317,1484 317,1312 Z M 317,-134 C 317,-236 297,-310 257,-356 217,-402 157,-425 77,-425 26,-425 -17,-422 -50,-416 L -50,-277 12,-283 C 58,-283 90,-271 109,-247 128,-223 137,-176 137,-107 L 137,1082 317,1082 Z"/>
-   <glyph unicode="i" horiz-adv-x="173" d="M 137,1312 L 137,1484 317,1484 317,1312 Z M 137,0 L 137,1082 317,1082 317,0 Z"/>
-   <glyph unicode="h" horiz-adv-x="874" d="M 317,897 C 356,968 402,1020 457,1053 511,1086 580,1102 663,1102 780,1102 867,1073 923,1015 978,956 1006,858 1006,721 L 1006,0 825,0 825,686 C 825,762 818,819 804,856 790,893 767,920 735,937 703,954 659,963 602,963 517,963 450,934 399,875 348,816 322,737 322,638 L 322,0 142,0 142,1484 322,1484 322,1098 C 322,1057 321,1015 319,972 316,929 315,904 314,897 Z"/>
-   <glyph unicode="g" horiz-adv-x="921" d="M 548,-425 C 430,-425 336,-402 266,-356 196,-309 151,-243 131,-158 L 312,-132 C 324,-182 351,-221 392,-248 433,-275 486,-288 553,-288 732,-288 822,-183 822,27 L 822,201 820,201 C 786,132 739,80 680,45 621,10 551,-8 472,-8 339,-8 242,36 180,124 117,212 86,350 86,539 86,730 120,872 187,963 254,1054 355,1099 492,1099 569,1099 635,1082 692,1047 748,1012 791,962 822,897 L 824,897 C 824,917 825,952 828,1001 831,1050 833,1077 836,1082 L 1007,1082 C 1003,1046 1001,971 1001,858 L 1001,31 C 1001,-273 850,-425 548,-425 Z M 822,541 C 822,629 810,705 786,769 762,832 728,881 685,915 641,948 591,965 536,965 444,965 377,932 335,865 293,798 272,690 272,541 272,393 292,287 331,222 370,157 438,125 533,125 590,125 640,142 684,175 728,208 762,256 786,319 810,381 822,455 822,541 Z"/>
-   <glyph unicode="f" horiz-adv-x="547" d="M 361,951 L 361,0 181,0 181,951 29,951 29,1082 181,1082 181,1204 C 181,1303 203,1374 246,1417 289,1460 356,1482 445,1482 495,1482 537,1478 572,1470 L 572,1333 C 542,1338 515,1341 492,1341 446,1341 413,1329 392,1306 371,1283 361,1240 361,1179 L 361,1082 572,1082 572,951 Z"/>
-   <glyph unicode="e" horiz-adv-x="952" d="M 276,503 C 276,379 302,283 353,216 404,149 479,115 578,115 656,115 719,131 766,162 813,193 844,233 861,281 L 1019,236 C 954,65 807,-20 578,-20 418,-20 296,28 213,123 129,218 87,360 87,548 87,727 129,864 213,959 296,1054 416,1102 571,1102 889,1102 1048,910 1048,527 L 1048,503 Z M 862,641 C 852,755 823,838 775,891 727,943 658,969 568,969 481,969 412,940 361,882 310,823 282,743 278,641 Z"/>
-   <glyph unicode="d" horiz-adv-x="921" d="M 821,174 C 788,105 744,55 689,25 634,-5 565,-20 484,-20 347,-20 247,26 183,118 118,210 86,349 86,536 86,913 219,1102 484,1102 566,1102 634,1087 689,1057 744,1027 788,979 821,914 L 823,914 821,1035 821,1484 1001,1484 1001,223 C 1001,110 1003,36 1007,0 L 835,0 C 833,11 831,35 829,74 826,113 825,146 825,174 Z M 275,542 C 275,391 295,282 335,217 375,152 440,119 530,119 632,119 706,154 752,225 798,296 821,405 821,554 821,697 798,802 752,869 706,936 633,969 532,969 441,969 376,936 336,869 295,802 275,693 275,542 Z"/>
-   <glyph unicode="c" horiz-adv-x="874" d="M 275,546 C 275,402 298,295 343,226 388,157 457,122 548,122 612,122 666,139 709,174 752,209 778,262 788,334 L 970,322 C 956,218 912,135 837,73 762,11 668,-20 553,-20 402,-20 286,28 207,124 127,219 87,359 87,542 87,724 127,863 207,959 287,1054 402,1102 551,1102 662,1102 754,1073 827,1016 900,959 945,880 964,779 L 779,765 C 770,825 746,873 708,908 670,943 616,961 546,961 451,961 382,929 339,866 296,803 275,696 275,546 Z"/>
-   <glyph unicode="b" horiz-adv-x="936" d="M 1053,546 C 1053,169 920,-20 655,-20 573,-20 505,-5 451,25 396,54 352,102 318,168 L 316,168 C 316,147 315,116 312,74 309,31 307,7 306,0 L 132,0 C 136,36 138,110 138,223 L 138,1484 318,1484 318,1061 C 318,1018 317,967 314,908 L 318,908 C 351,977 396,1027 451,1057 506,1087 574,1102 655,1102 792,1102 892,1056 957,964 1021,872 1053,733 1053,546 Z M 864,540 C 864,691 844,800 804,865 764,930 699,963 609,963 508,963 434,928 388,859 341,790 318,680 318,529 318,387 341,282 386,215 431,147 505,113 607,113 698,113 763,147 804,214 844,281 864,389 864,540 Z"/>
-   <glyph unicode="a" horiz-adv-x="1046" d="M 414,-20 C 305,-20 224,9 169,66 114,123 87,202 87,302 87,414 124,500 198,560 271,620 390,652 554,656 L 797,660 797,719 C 797,807 778,870 741,908 704,946 645,965 565,965 484,965 426,951 389,924 352,897 330,853 323,793 L 135,810 C 166,1005 310,1102 569,1102 705,1102 807,1071 876,1009 945,946 979,856 979,738 L 979,272 C 979,219 986,179 1000,152 1014,125 1041,111 1080,111 1097,111 1117,113 1139,118 L 1139,6 C 1094,-5 1047,-10 1000,-10 933,-10 885,8 855,43 824,78 807,132 803,207 L 797,207 C 751,124 698,66 637,32 576,-3 501,-20 414,-20 Z M 455,115 C 521,115 580,130 631,160 682,190 723,231 753,284 782,336 797,390 797,445 L 797,534 600,530 C 515,529 451,520 408,504 364,488 330,463 307,430 284,397 272,353 272,299 272,240 288,195 320,163 351,131 396,115 455,115 Z"/>
-   <glyph unicode="_" horiz-adv-x="1201" d="M -31,-407 L -31,-277 1162,-277 1162,-407 Z"/>
-   <glyph unicode="U" horiz-adv-x="1170" d="M 731,-20 C 616,-20 515,1 429,43 343,85 276,146 229,226 182,306 158,401 158,512 L 158,1409 349,1409 349,528 C 349,399 382,302 447,235 512,168 607,135 730,135 857,135 955,170 1026,239 1096,308 1131,408 1131,541 L 1131,1409 1321,1409 1321,530 C 1321,416 1297,318 1249,235 1200,152 1132,89 1044,46 955,2 851,-20 731,-20 Z"/>
-   <glyph unicode="T" horiz-adv-x="1154" d="M 720,1253 L 720,0 530,0 530,1253 46,1253 46,1409 1204,1409 1204,1253 Z"/>
-   <glyph unicode="S" horiz-adv-x="1186" d="M 1272,389 C 1272,259 1221,158 1120,87 1018,16 875,-20 690,-20 347,-20 148,99 93,338 L 278,375 C 299,290 345,228 414,189 483,149 578,129 697,129 820,129 916,150 983,193 1050,235 1083,297 1083,379 1083,425 1073,462 1052,491 1031,520 1001,543 963,562 925,581 880,596 827,609 774,622 716,635 652,650 541,675 456,699 399,724 341,749 295,776 262,807 229,837 203,872 186,913 168,954 159,1000 159,1053 159,1174 205,1267 298,1332 390,1397 522,1430 694,1430 854,1430 976,1406 1061,1357 1146,1308 1205,1224 1239,1106 L 1051,1073 C 1030,1148 991,1202 933,1236 875,1269 795,1286 692,1286 579,1286 493,1267 434,1230 375,1193 345,1137 345,1063 345,1020 357,984 380,956 403,927 436,903 479,884 522,864 609,840 738,811 781,801 825,791 868,781 911,770 952,758 991,744 1030,729 1067,712 1102,693 1136,674 1166,650 1191,622 1216,594 1236,561 1251,523 1265,485 1272,440 1272,389 Z"/>
-   <glyph unicode="R" horiz-adv-x="1217" d="M 1164,0 L 798,585 359,585 359,0 168,0 168,1409 831,1409 C 990,1409 1112,1374 1199,1303 1285,1232 1328,1133 1328,1006 1328,901 1298,813 1237,742 1176,671 1091,626 984,607 L 1384,0 Z M 1136,1004 C 1136,1086 1108,1149 1053,1192 997,1235 917,1256 812,1256 L 359,1256 359,736 820,736 C 921,736 999,760 1054,807 1109,854 1136,919 1136,1004 Z"/>
-   <glyph unicode="P" horiz-adv-x="1092" d="M 1258,985 C 1258,852 1215,746 1128,667 1041,588 922,549 773,549 L 359,549 359,0 168,0 168,1409 761,1409 C 919,1409 1041,1372 1128,1298 1215,1224 1258,1120 1258,985 Z M 1066,983 C 1066,1165 957,1256 738,1256 L 359,1256 359,700 746,700 C 959,700 1066,794 1066,983 Z"/>
-   <glyph unicode="O" horiz-adv-x="1404" d="M 1495,711 C 1495,564 1467,435 1411,324 1354,213 1273,128 1168,69 1063,10 938,-20 795,-20 650,-20 526,9 421,68 316,127 235,212 180,323 125,434 97,563 97,711 97,936 159,1113 282,1240 405,1367 577,1430 797,1430 940,1430 1065,1402 1170,1345 1275,1288 1356,1205 1412,1096 1467,987 1495,859 1495,711 Z M 1300,711 C 1300,886 1256,1024 1169,1124 1081,1224 957,1274 797,1274 636,1274 511,1225 423,1126 335,1027 291,889 291,711 291,534 336,394 425,291 514,187 637,135 795,135 958,135 1083,185 1170,286 1257,386 1300,528 1300,711 Z"/>
-   <glyph unicode="N" horiz-adv-x="1139" d="M 1082,0 L 328,1200 333,1103 338,936 338,0 168,0 168,1409 390,1409 1152,201 C 1144,332 1140,426 1140,485 L 1140,1409 1312,1409 1312,0 Z"/>
-   <glyph unicode="I" horiz-adv-x="188" d="M 189,0 L 189,1409 380,1409 380,0 Z"/>
-   <glyph unicode="H" horiz-adv-x="1139" d="M 1121,0 L 1121,653 359,653 359,0 168,0 168,1409 359,1409 359,813 1121,813 1121,1409 1312,1409 1312,0 Z"/>
-   <glyph unicode="F" horiz-adv-x="999" d="M 359,1253 L 359,729 1145,729 1145,571 359,571 359,0 168,0 168,1409 1169,1409 1169,1253 Z"/>
-   <glyph unicode="E" horiz-adv-x="1108" d="M 168,0 L 168,1409 1237,1409 1237,1253 359,1253 359,801 1177,801 1177,647 359,647 359,156 1278,156 1278,0 Z"/>
-   <glyph unicode="C" horiz-adv-x="1294" d="M 792,1274 C 636,1274 515,1224 428,1124 341,1023 298,886 298,711 298,538 343,400 434,295 524,190 646,137 800,137 997,137 1146,235 1245,430 L 1401,352 C 1343,231 1262,138 1157,75 1052,12 930,-20 791,-20 649,-20 526,10 423,69 319,128 240,212 186,322 131,431 104,561 104,711 104,936 165,1112 286,1239 407,1366 575,1430 790,1430 940,1430 1065,1401 1166,1342 1267,1283 1341,1196 1388,1081 L 1207,1021 C 1174,1103 1122,1166 1050,1209 977,1252 891,1274 792,1274 Z"/>
-   <glyph unicode="A" horiz-adv-x="1357" d="M 1167,0 L 1006,412 364,412 202,0 4,0 579,1409 796,1409 1362,0 Z M 685,1265 L 676,1237 C 659,1182 635,1111 602,1024 L 422,561 949,561 768,1026 C 749,1072 731,1124 712,1182 Z"/>
-   <glyph unicode="&gt;" horiz-adv-x="999" d="M 101,154 L 101,307 959,674 101,1040 101,1194 1096,776 1096,571 Z"/>
-   <glyph unicode="&lt;" horiz-adv-x="999" d="M 101,571 L 101,776 1096,1194 1096,1040 238,674 1096,307 1096,154 Z"/>
-   <glyph unicode="/" horiz-adv-x="578" d="M 0,-20 L 411,1484 569,1484 162,-20 Z"/>
-   <glyph unicode="-" horiz-adv-x="500" d="M 91,464 L 91,624 591,624 591,464 Z"/>
-   <glyph unicode=")" horiz-adv-x="546" d="M 555,528 C 555,335 525,162 465,9 404,-144 311,-289 186,-424 L 12,-424 C 137,-284 229,-137 287,19 345,174 374,344 374,530 374,716 345,887 287,1042 228,1197 137,1345 12,1484 L 186,1484 C 312,1348 405,1203 465,1050 525,896 555,723 555,532 Z"/>
-   <glyph unicode="(" horiz-adv-x="546" d="M 127,532 C 127,725 157,898 218,1051 278,1204 371,1349 496,1484 L 670,1484 C 545,1345 454,1198 396,1042 337,886 308,715 308,530 308,345 337,175 395,20 452,-135 544,-283 670,-424 L 496,-424 C 370,-288 277,-143 217,11 157,164 127,337 127,528 Z"/>
-   <glyph unicode=" " horiz-adv-x="561"/>
-  </font>
- </defs>
- <defs class="TextShapeIndex">
-  <g ooo:slide="id1" ooo:id-list="id3 id4 id5 id6 id7 id8 id9 id10 id11 id12 id13 id14 id15 id16 id17 id18 id19 id20 id21 id22 id23 id24 id25 id26 id27 id28 id29 id30 id31 id32 id33 id34 id35 id36 id37 id38 id39 id40"/>
- </defs>
- <defs class="EmbeddedBulletChars">
-  <g id="bullet-char-template(57356)" transform="scale(0.00048828125,-0.00048828125)">
-   <path d="M 580,1141 L 1163,571 580,0 -4,571 580,1141 Z"/>
-  </g>
-  <g id="bullet-char-template(57354)" transform="scale(0.00048828125,-0.00048828125)">
-   <path d="M 8,1128 L 1137,1128 1137,0 8,0 8,1128 Z"/>
-  </g>
-  <g id="bullet-char-template(10146)" transform="scale(0.00048828125,-0.00048828125)">
-   <path d="M 174,0 L 602,739 174,1481 1456,739 174,0 Z M 1358,739 L 309,1346 659,739 1358,739 Z"/>
-  </g>
-  <g id="bullet-char-template(10132)" transform="scale(0.00048828125,-0.00048828125)">
-   <path d="M 2015,739 L 1276,0 717,0 1260,543 174,543 174,936 1260,936 717,1481 1274,1481 2015,739 Z"/>
-  </g>
-  <g id="bullet-char-template(10007)" transform="scale(0.00048828125,-0.00048828125)">
-   <path d="M 0,-2 C -7,14 -16,27 -25,37 L 356,567 C 262,823 215,952 215,954 215,979 228,992 255,992 264,992 276,990 289,987 310,991 331,999 354,1012 L 381,999 492,748 772,1049 836,1024 860,1049 C 881,1039 901,1025 922,1006 886,937 835,863 770,784 769,783 710,716 594,584 L 774,223 C 774,196 753,168 711,139 L 727,119 C 717,90 699,76 672,76 641,76 570,178 457,381 L 164,-76 C 142,-110 111,-127 72,-127 30,-127 9,-110 8,-76 1,-67 -2,-52 -2,-32 -2,-23 -1,-13 0,-2 Z"/>
-  </g>
-  <g id="bullet-char-template(10004)" transform="scale(0.00048828125,-0.00048828125)">
-   <path d="M 285,-33 C 182,-33 111,30 74,156 52,228 41,333 41,471 41,549 55,616 82,672 116,743 169,778 240,778 293,778 328,747 346,684 L 369,508 C 377,444 397,411 428,410 L 1163,1116 C 1174,1127 1196,1133 1229,1133 1271,1133 1292,1118 1292,1087 L 1292,965 C 1292,929 1282,901 1262,881 L 442,47 C 390,-6 338,-33 285,-33 Z"/>
-  </g>
-  <g id="bullet-char-template(9679)" transform="scale(0.00048828125,-0.00048828125)">
-   <path d="M 813,0 C 632,0 489,54 383,161 276,268 223,411 223,592 223,773 276,916 383,1023 489,1130 632,1184 813,1184 992,1184 1136,1130 1245,1023 1353,916 1407,772 1407,592 1407,412 1353,268 1245,161 1136,54 992,0 813,0 Z"/>
-  </g>
-  <g id="bullet-char-template(8226)" transform="scale(0.00048828125,-0.00048828125)">
-   <path d="M 346,457 C 273,457 209,483 155,535 101,586 74,649 74,723 74,796 101,859 155,911 209,963 273,989 346,989 419,989 480,963 531,910 582,859 608,796 608,723 608,648 583,586 532,535 482,483 420,457 346,457 Z"/>
-  </g>
-  <g id="bullet-char-template(8211)" transform="scale(0.00048828125,-0.00048828125)">
-   <path d="M -4,459 L 1135,459 1135,606 -4,606 -4,459 Z"/>
-  </g>
-  <g id="bullet-char-template(61548)" transform="scale(0.00048828125,-0.00048828125)">
-   <path d="M 173,740 C 173,903 231,1043 346,1159 462,1274 601,1332 765,1332 928,1332 1067,1274 1183,1159 1299,1043 1357,903 1357,740 1357,577 1299,437 1183,322 1067,206 928,148 765,148 601,148 462,206 346,322 231,437 173,577 173,740 Z"/>
-  </g>
- </defs>
- <defs class="TextEmbeddedBitmaps"/>
- <g>
-  <g id="id2" class="Master_Slide">
-   <g id="bg-id2" class="Background"/>
-   <g id="bo-id2" class="BackgroundObjects"/>
-  </g>
- </g>
- <g class="SlideGroup">
-  <g>
-   <g id="container-id1">
-    <g id="id1" class="Slide" clip-path="url(#presentation_clip_path)">
-     <g class="Page">
-      <g class="com.sun.star.drawing.CustomShape">
-       <g id="id3">
-        <rect class="BoundingBox" stroke="none" fill="none" x="4554" y="7857" width="2925" height="1274"/>
-        <path fill="rgb(253,233,169)" stroke="none" d="M 6015,7858 C 6843,7858 7476,8133 7476,8493 7476,8853 6843,9128 6015,9128 5187,9128 4555,8853 4555,8493 4555,8133 5187,7858 6015,7858 Z M 4555,7858 L 4555,7858 Z M 7477,9129 L 7477,9129 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 6015,7858 C 6843,7858 7476,8133 7476,8493 7476,8853 6843,9128 6015,9128 5187,9128 4555,8853 4555,8493 4555,8133 5187,7858 6015,7858 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 4555,7858 L 4555,7858 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 7477,9129 L 7477,9129 Z"/>
-        <text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="635px" font-weight="400"><tspan class="TextPosition" x="5311" y="8714"><tspan fill="rgb(0,0,0)" stroke="none">Error</tspan></tspan></tspan></text>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.CustomShape">
-       <g id="id4">
-        <rect class="BoundingBox" stroke="none" fill="none" x="1887" y="15857" width="3433" height="1655"/>
-        <path fill="rgb(170,220,247)" stroke="none" d="M 3602,15858 C 4574,15858 5317,16215 5317,16683 5317,17151 4574,17509 3602,17509 2630,17509 1888,17151 1888,16683 1888,16215 2630,15858 3602,15858 Z M 1888,15858 L 1888,15858 Z M 5318,17510 L 5318,17510 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 3602,15858 C 4574,15858 5317,16215 5317,16683 5317,17151 4574,17509 3602,17509 2630,17509 1888,17151 1888,16683 1888,16215 2630,15858 3602,15858 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 1888,15858 L 1888,15858 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 5318,17510 L 5318,17510 Z"/>
-        <text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="635px" font-weight="400"><tspan class="TextPosition" x="2442" y="16904"><tspan fill="rgb(0,0,0)" stroke="none">Pending</tspan></tspan></tspan></text>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.CustomShape">
-       <g id="id5">
-        <rect class="BoundingBox" stroke="none" fill="none" x="8999" y="15857" width="3433" height="1655"/>
-        <path fill="rgb(170,220,247)" stroke="none" d="M 10714,15858 C 11686,15858 12429,16215 12429,16683 12429,17151 11686,17509 10714,17509 9742,17509 9000,17151 9000,16683 9000,16215 9742,15858 10714,15858 Z M 9000,15858 L 9000,15858 Z M 12430,17510 L 12430,17510 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 10714,15858 C 11686,15858 12429,16215 12429,16683 12429,17151 11686,17509 10714,17509 9742,17509 9000,17151 9000,16683 9000,16215 9742,15858 10714,15858 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 9000,15858 L 9000,15858 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 12430,17510 L 12430,17510 Z"/>
-        <text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="635px" font-weight="400"><tspan class="TextPosition" x="9537" y="16904"><tspan fill="rgb(0,0,0)" stroke="none">Running</tspan></tspan></tspan></text>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.CustomShape">
-       <g id="id6">
-        <rect class="BoundingBox" stroke="none" fill="none" x="15984" y="15985" width="3433" height="1528"/>
-        <path fill="rgb(253,233,169)" stroke="none" d="M 17699,15986 C 18671,15986 19414,16315 19414,16747 19414,17179 18671,17509 17699,17509 16727,17509 15985,17179 15985,16747 15985,16315 16727,15986 17699,15986 Z M 15985,15986 L 15985,15986 Z M 19415,17511 L 19415,17511 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 17699,15986 C 18671,15986 19414,16315 19414,16747 19414,17179 18671,17509 17699,17509 16727,17509 15985,17179 15985,16747 15985,16315 16727,15986 17699,15986 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 15985,15986 L 15985,15986 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 19415,17511 L 19415,17511 Z"/>
-        <text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="635px" font-weight="400"><tspan class="TextPosition" x="16433" y="16969"><tspan fill="rgb(0,0,0)" stroke="none">Crashing</tspan></tspan></tspan></text>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.CustomShape">
-       <g id="id7">
-        <rect class="BoundingBox" stroke="none" fill="none" x="13318" y="7857" width="2797" height="1273"/>
-        <path fill="rgb(253,233,169)" stroke="none" d="M 14716,9128 L 13319,9128 13319,7858 16113,7858 16113,9128 14716,9128 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 14716,9128 L 13319,9128 13319,7858 16113,7858 16113,9128 14716,9128 Z"/>
-        <text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="635px" font-weight="400"><tspan class="TextPosition" x="13855" y="8714"><tspan fill="rgb(0,0,0)" stroke="none">Failed</tspan></tspan></tspan></text>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.CustomShape">
-       <g id="id8">
-        <rect class="BoundingBox" stroke="none" fill="none" x="8872" y="21191" width="3686" height="1654"/>
-        <path fill="rgb(170,220,247)" stroke="none" d="M 10715,22843 L 8873,22843 8873,21192 12556,21192 12556,22843 10715,22843 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 10715,22843 L 8873,22843 8873,21192 12556,21192 12556,22843 10715,22843 Z"/>
-        <text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="635px" font-weight="400"><tspan class="TextPosition" x="9184" y="22238"><tspan fill="rgb(0,0,0)" stroke="none">Completed</tspan></tspan></tspan></text>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.CustomShape">
-       <g id="id9">
-        <rect class="BoundingBox" stroke="none" fill="none" x="7603" y="5189" width="6100" height="639"/>
-        <path fill="rgb(255,255,255)" fill-opacity="0.988" stroke="rgb(255,255,255)" stroke-opacity="0.988" d="M 8831,5190 L 12473,5190 13701,5508 12473,5826 8831,5826 7604,5508 8831,5190 8831,5190 Z M 7604,5190 L 7604,5190 Z M 13701,5826 L 13701,5826 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 8831,5190 L 12473,5190 13701,5508 12473,5826 8831,5826 7604,5508 8831,5190 8831,5190 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 7604,5190 L 7604,5190 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 13701,5826 L 13701,5826 Z"/>
-        <text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="353px" font-weight="400"><tspan class="TextPosition" x="8613" y="5654"><tspan fill="rgb(0,0,0)" stroke="none">Create job from document</tspan></tspan></tspan></text>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.CustomShape">
-       <g id="id10">
-        <rect class="BoundingBox" stroke="none" fill="none" x="8110" y="12556" width="5211" height="512"/>
-        <path fill="rgb(255,255,255)" stroke="none" d="M 9159,12557 L 12270,12557 13319,12811 12270,13066 9159,13066 8111,12811 9159,12557 9159,12557 Z M 8111,12557 L 8111,12557 Z M 13319,13066 L 13319,13066 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 9159,12557 L 12270,12557 13319,12811 12270,13066 9159,13066 8111,12811 9159,12557 9159,12557 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 8111,12557 L 8111,12557 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 13319,13066 L 13319,13066 Z"/>
-        <text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="353px" font-weight="400"><tspan class="TextPosition" x="9332" y="12935"><tspan fill="rgb(0,0,0)" stroke="none">Schedule new job</tspan></tspan></tspan></text>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.ConnectorShape">
-       <g id="id11">
-        <rect class="BoundingBox" stroke="none" fill="none" x="10502" y="2396" width="301" height="764"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 10652,2397 L 10652,2729"/>
-        <path fill="rgb(0,0,0)" stroke="none" d="M 10652,3159 L 10802,2709 10502,2709 10652,3159 Z"/>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.ConnectorShape">
-       <g id="id12">
-        <rect class="BoundingBox" stroke="none" fill="none" x="10651" y="5825" width="4075" height="2034"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 10652,5826 C 10652,7350 13936,6513 14600,7469"/>
-        <path fill="rgb(0,0,0)" stroke="none" d="M 14716,7858 L 14724,7384 14438,7474 14716,7858 Z"/>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.ConnectorShape">
-       <g id="id13">
-        <rect class="BoundingBox" stroke="none" fill="none" x="6017" y="5825" width="4637" height="2034"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 10652,5826 C 10652,7350 6890,6510 6145,7475"/>
-        <path fill="rgb(0,0,0)" stroke="none" d="M 6017,7858 L 6307,7483 6024,7384 6017,7858 Z"/>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.ConnectorShape">
-       <g id="id14">
-        <rect class="BoundingBox" stroke="none" fill="none" x="10655" y="9127" width="8135" height="3431"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 18788,9128 C 18788,11700 11591,10164 10788,12166"/>
-        <path fill="rgb(0,0,0)" stroke="none" d="M 10716,12557 L 10949,12144 10655,12087 10716,12557 Z"/>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.ConnectorShape">
-       <g id="id15">
-        <rect class="BoundingBox" stroke="none" fill="none" x="10565" y="5825" width="302" height="6733"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 10652,5826 C 10652,10875 10712,7727 10716,12076"/>
-        <path fill="rgb(0,0,0)" stroke="none" d="M 10716,12557 L 10866,12107 10566,12107 10716,12557 Z"/>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.ConnectorShape">
-       <g id="id16">
-        <rect class="BoundingBox" stroke="none" fill="none" x="10566" y="13064" width="301" height="2796"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 10716,13065 L 10716,15429"/>
-        <path fill="rgb(0,0,0)" stroke="none" d="M 10716,15859 L 10866,15409 10566,15409 10716,15859 Z"/>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.ConnectorShape">
-       <g id="id17">
-        <rect class="BoundingBox" stroke="none" fill="none" x="10565" y="17509" width="302" height="3685"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 10716,17510 C 10716,20271 10715,18639 10715,20734"/>
-        <path fill="rgb(0,0,0)" stroke="none" d="M 10715,21193 L 10865,20743 10565,20743 10715,21193 Z"/>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.ConnectorShape">
-       <g id="id18">
-        <rect class="BoundingBox" stroke="none" fill="none" x="4816" y="15497" width="4689" height="605"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 9503,16100 C 9503,15343 6051,15358 5072,15824"/>
-        <path fill="rgb(0,0,0)" stroke="none" d="M 4816,16100 L 5236,15880 5020,15672 4816,16100 Z"/>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.ConnectorShape">
-       <g id="id19">
-        <rect class="BoundingBox" stroke="none" fill="none" x="4815" y="17268" width="4689" height="670"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 4816,17269 C 4816,18109 8314,18093 9266,17565"/>
-        <path fill="rgb(0,0,0)" stroke="none" d="M 9503,17269 L 9100,17519 9331,17711 9503,17269 Z"/>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.ConnectorShape">
-       <g id="id20">
-        <rect class="BoundingBox" stroke="none" fill="none" x="11927" y="15486" width="4562" height="724"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 11928,16100 C 11928,15337 15364,15309 16271,15893"/>
-        <path fill="rgb(0,0,0)" stroke="none" d="M 16488,16209 L 16348,15756 16104,15930 16488,16209 Z"/>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.ConnectorShape">
-       <g id="id21">
-        <rect class="BoundingBox" stroke="none" fill="none" x="11928" y="17269" width="4562" height="571"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 16488,17287 C 16488,17978 13165,17970 12192,17538"/>
-        <path fill="rgb(0,0,0)" stroke="none" d="M 11928,17269 L 12145,17691 12355,17477 11928,17269 Z"/>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.CustomShape">
-       <g id="id22">
-        <rect class="BoundingBox" stroke="none" fill="none" x="8492" y="3158" width="4322" height="1274"/>
-        <path fill="rgb(170,220,247)" stroke="none" d="M 10652,3159 C 11876,3159 12811,3434 12811,3794 12811,4154 11876,4429 10652,4429 9428,4429 8493,4154 8493,3794 8493,3434 9428,3159 10652,3159 Z M 8493,3159 L 8493,3159 Z M 12812,4430 L 12812,4430 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 10652,3159 C 11876,3159 12811,3434 12811,3794 12811,4154 11876,4429 10652,4429 9428,4429 8493,4154 8493,3794 8493,3434 9428,3159 10652,3159 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 8493,3159 L 8493,3159 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 12812,4430 L 12812,4430 Z"/>
-        <text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="635px" font-weight="400"><tspan class="TextPosition" x="9263" y="4015"><tspan fill="rgb(0,0,0)" stroke="none">Initializing</tspan></tspan></tspan></text>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.ConnectorShape">
-       <g id="id23">
-        <rect class="BoundingBox" stroke="none" fill="none" x="3573" y="13064" width="7145" height="2796"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 10716,13065 C 10716,15160 4548,13939 3700,15474"/>
-        <path fill="rgb(0,0,0)" stroke="none" d="M 3604,15859 L 3864,15462 3574,15386 3604,15859 Z"/>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.ConnectorShape">
-       <g id="id24">
-        <rect class="BoundingBox" stroke="none" fill="none" x="10502" y="4428" width="301" height="764"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 10652,4429 L 10652,4761"/>
-        <path fill="rgb(0,0,0)" stroke="none" d="M 10652,5191 L 10802,4741 10502,4741 10652,5191 Z"/>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.CustomShape">
-       <g id="id25">
-        <rect class="BoundingBox" stroke="none" fill="none" x="998" y="24871" width="638" height="352"/>
-        <path fill="rgb(255,255,255)" stroke="none" d="M 1316,24872 L 1634,24872 1634,25221 999,25221 999,24872 1316,24872 Z"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 1316,24872 L 1634,24872 1634,25221 999,25221 999,24872 1316,24872 Z"/>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.CustomShape">
-       <g id="id26">
-        <rect class="BoundingBox" stroke="none" fill="none" x="998" y="25382" width="639" height="385"/>
-        <path fill="rgb(255,255,255)" stroke="none" d="M 1316,25383 C 1496,25383 1634,25465 1634,25573 1634,25681 1496,25764 1316,25764 1136,25764 999,25681 999,25573 999,25465 1136,25383 1316,25383 Z M 999,25383 L 999,25383 Z M 1635,25765 L 1635,25765 Z"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 1316,25383 C 1496,25383 1634,25465 1634,25573 1634,25681 1496,25764 1316,25764 1136,25764 999,25681 999,25573 999,25465 1136,25383 1316,25383 Z"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 999,25383 L 999,25383 Z"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 1635,25765 L 1635,25765 Z"/>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.CustomShape">
-       <g id="id27">
-        <rect class="BoundingBox" stroke="none" fill="none" x="998" y="24239" width="638" height="384"/>
-        <path fill="rgb(253,233,169)" stroke="none" d="M 1317,24621 L 999,24621 999,24240 1634,24240 1634,24621 1317,24621 Z"/>
-        <path fill="none" stroke="rgb(255,255,255)" d="M 1317,24621 L 999,24621 999,24240 1634,24240 1634,24621 1317,24621 Z"/>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.CustomShape">
-       <g id="id28">
-        <rect class="BoundingBox" stroke="none" fill="none" x="998" y="23731" width="638" height="384"/>
-        <path fill="rgb(170,220,247)" stroke="none" d="M 1317,24113 L 999,24113 999,23732 1634,23732 1634,24113 1317,24113 Z"/>
-        <path fill="none" stroke="rgb(255,255,255)" d="M 1317,24113 L 999,24113 999,23732 1634,23732 1634,24113 1317,24113 Z"/>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.TableShape">
-       <g>
-        <rect class="BoundingBox" stroke="none" fill="none" x="22055" y="25834" width="4448" height="619"/>
-        <path fill="rgb(255,255,255)" stroke="none" d="M 22056,25835 L 26501,25835 26501,26451 22056,26451 22056,25835 Z"/>
-        <path fill="none" stroke="rgb(255,255,255)" d="M 22056,25835 L 22056,26451"/>
-        <path fill="none" stroke="rgb(255,255,255)" d="M 22056,26451 L 26501,26451"/>
-        <path fill="none" stroke="rgb(255,255,255)" d="M 26501,25835 L 26501,26451"/>
-        <path fill="none" stroke="rgb(255,255,255)" d="M 22056,25835 L 26501,25835"/>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.TextShape">
-       <g id="id29">
-        <rect class="BoundingBox" stroke="none" fill="none" x="1635" y="24749" width="2510" height="751"/>
-        <text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="282px" font-weight="400"><tspan class="TextPosition" x="1885" y="25128"><tspan fill="rgb(0,0,0)" stroke="none">Terminal state</tspan></tspan></tspan></text>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.TextShape">
-       <g id="id30">
-        <rect class="BoundingBox" stroke="none" fill="none" x="1683" y="25257" width="4148" height="608"/>
-        <text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="282px" font-weight="400"><tspan class="TextPosition" x="1933" y="25666"><tspan fill="rgb(0,0,0)" stroke="none">Non-terminal (retryin</tspan><tspan font-size="318px" fill="rgb(0,0,0)" stroke="none">g) state</tspan></tspan></tspan></text>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.TextShape">
-       <g id="id31">
-        <rect class="BoundingBox" stroke="none" fill="none" x="1635" y="23606" width="2541" height="636"/>
-        <text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="282px" font-weight="400"><tspan class="TextPosition" x="1885" y="23985"><tspan fill="rgb(0,0,0)" stroke="none">Healthy state</tspan></tspan></tspan></text>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.TextShape">
-       <g id="id32">
-        <rect class="BoundingBox" stroke="none" fill="none" x="1635" y="24185" width="2476" height="565"/>
-        <text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="282px" font-weight="400"><tspan class="TextPosition" x="1885" y="24564"><tspan fill="rgb(0,0,0)" stroke="none">Unhealthy state</tspan></tspan></tspan></text>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.ConnectorShape">
-       <g id="id33">
-        <rect class="BoundingBox" stroke="none" fill="none" x="6016" y="9127" width="4794" height="3431"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 6017,9128 C 6017,11700 10163,10178 10665,12135"/>
-        <path fill="rgb(0,0,0)" stroke="none" d="M 10715,12557 L 10809,12092 10511,12129 10715,12557 Z"/>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.ConnectorShape">
-       <g id="id34">
-        <rect class="BoundingBox" stroke="none" fill="none" x="4053" y="7390" width="994" height="2170"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 4983,8942 C 4983,9974 4054,9631 4054,8494 4054,7358 4686,7159 4907,7626"/>
-        <path fill="rgb(0,0,0)" stroke="none" d="M 4983,8044 L 5045,7574 4751,7631 4983,8044 Z"/>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.CustomShape">
-       <g id="id35">
-        <rect class="BoundingBox" stroke="none" fill="none" x="998" y="25890" width="639" height="385"/>
-        <path fill="rgb(255,255,255)" stroke="none" d="M 1127,25891 L 1506,25891 1635,26082 1506,26273 1127,26273 999,26082 1127,25891 1127,25891 Z M 999,25891 L 999,25891 Z M 1635,26273 L 1635,26273 Z"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 1127,25891 L 1506,25891 1635,26082 1506,26273 1127,26273 999,26082 1127,25891 1127,25891 Z"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 999,25891 L 999,25891 Z"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 1635,26273 L 1635,26273 Z"/>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.TextShape">
-       <g id="id36">
-        <rect class="BoundingBox" stroke="none" fill="none" x="1649" y="25836" width="4178" height="565"/>
-        <text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="282px" font-weight="400"><tspan class="TextPosition" x="1899" y="26215"><tspan fill="rgb(0,0,0)" stroke="none">Internal API (not a state)</tspan></tspan></tspan></text>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.TextShape">
-       <g id="id37">
-        <rect class="BoundingBox" stroke="none" fill="none" x="1683" y="26273" width="3665" height="565"/>
-        <text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="282px" font-weight="400"><tspan class="TextPosition" x="1933" y="26652"><tspan fill="rgb(0,0,0)" stroke="none">External API (not a state)</tspan></tspan></tspan></text>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.CustomShape">
-       <g id="id38">
-        <rect class="BoundingBox" stroke="none" fill="none" x="17001" y="7857" width="3576" height="1274"/>
-        <path fill="rgb(255,255,255)" stroke="none" d="M 17002,7858 L 20575,7858 19940,9129 17636,9129 17002,7858 Z M 17002,7858 L 17002,7858 Z M 20575,9129 L 20575,9129 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 17002,7858 L 20575,7858 19940,9129 17636,9129 17002,7858 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 17002,7858 L 17002,7858 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 20575,9129 L 20575,9129 Z"/>
-        <text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="353px" font-weight="400"><tspan class="TextPosition" x="18119" y="8421"><tspan fill="rgb(0,0,0)" stroke="none">POST to</tspan></tspan></tspan><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="353px" font-weight="400"><tspan class="TextPosition" x="17980" y="8811"><tspan fill="rgb(0,0,0)" stroke="none">/_replicate</tspan></tspan></tspan></text>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.CustomShape">
-       <g id="id39">
-        <rect class="BoundingBox" stroke="none" fill="none" x="7984" y="1126" width="5338" height="1274"/>
-        <path fill="rgb(255,255,255)" stroke="none" d="M 7985,1127 L 13320,1127 12392,2398 8913,2398 7985,1127 Z M 7985,1127 L 7985,1127 Z M 13320,2398 L 13320,2398 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 7985,1127 L 13320,1127 12392,2398 8913,2398 7985,1127 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 7985,1127 L 7985,1127 Z"/>
-        <path fill="none" stroke="rgb(52,101,164)" d="M 13320,2398 L 13320,2398 Z"/>
-        <text class="TextShape"><tspan class="TextParagraph" font-family="Liberation Sans, sans-serif" font-size="353px" font-weight="400"><tspan class="TextPosition" x="9298" y="1885"><tspan fill="rgb(0,0,0)" stroke="none">_replicator/&lt;doc&gt;</tspan></tspan></tspan></text>
-       </g>
-      </g>
-      <g class="com.sun.star.drawing.CustomShape">
-       <g id="id40">
-        <rect class="BoundingBox" stroke="none" fill="none" x="998" y="26494" width="639" height="290"/>
-        <path fill="rgb(255,255,255)" stroke="none" d="M 999,26495 L 1635,26495 1476,26782 1158,26782 999,26495 Z M 999,26495 L 999,26495 Z M 1635,26782 L 1635,26782 Z"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 999,26495 L 1635,26495 1476,26782 1158,26782 999,26495 Z"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 999,26495 L 999,26495 Z"/>
-        <path fill="none" stroke="rgb(0,0,0)" d="M 1635,26782 L 1635,26782 Z"/>
-       </g>
-      </g>
-     </g>
-    </g>
-   </g>
-  </g>
- </g>
-</svg>
diff --git a/images/rev-tree1.png b/images/rev-tree1.png
deleted file mode 100644
index 467f69e..0000000
--- a/images/rev-tree1.png
+++ /dev/null
Binary files differ
diff --git a/images/rev-tree2.png b/images/rev-tree2.png
deleted file mode 100644
index e77ca3b..0000000
--- a/images/rev-tree2.png
+++ /dev/null
Binary files differ
diff --git a/images/rev-tree3.png b/images/rev-tree3.png
deleted file mode 100644
index fa97c7d..0000000
--- a/images/rev-tree3.png
+++ /dev/null
Binary files differ
diff --git a/images/views-intro-01.png b/images/views-intro-01.png
deleted file mode 100644
index b102d5e..0000000
--- a/images/views-intro-01.png
+++ /dev/null
Binary files differ
diff --git a/images/views-intro-02.png b/images/views-intro-02.png
deleted file mode 100644
index 4e9f3dc..0000000
--- a/images/views-intro-02.png
+++ /dev/null
Binary files differ
diff --git a/images/views-intro-03.png b/images/views-intro-03.png
deleted file mode 100644
index 83929ee..0000000
--- a/images/views-intro-03.png
+++ /dev/null
Binary files differ
diff --git a/images/views-intro-04.png b/images/views-intro-04.png
deleted file mode 100644
index 51e3de8..0000000
--- a/images/views-intro-04.png
+++ /dev/null
Binary files differ
diff --git a/make.bat b/make.bat
deleted file mode 100644
index 77f6d98..0000000
--- a/make.bat
+++ /dev/null
@@ -1,253 +0,0 @@
-@ECHO OFF
-
-REM Command file for Sphinx documentation
-
-if "%SPHINXBUILD%" == "" (
-	set SPHINXBUILD=sphinx-build
-)
-set BUILDDIR=build
-set SOURCE=src/
-set PAPERSIZE=-D latex_elements.papersize=a4
-set SPHINXFLAGS=-a -n -A local=1 %PAPERSIZE%
-set SPHINXOPTS=%SPHINXFLAGS% %SOURCE%
-set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS%
-set I18NSPHINXOPTS=%SPHINXOPTS%
-if NOT "%PAPER%" == "" (
-	set ALLSPHINXOPTS=-D latex_elements.papersize=%PAPER% %ALLSPHINXOPTS%
-	set I18NSPHINXOPTS=-D latex_elements.papersize=%PAPER% %I18NSPHINXOPTS%
-)
-
-if "%1" == "" goto help
-
-if "%1" == "help" (
-	:help
-	echo.Please use `make ^<target^>` where ^<target^> is one of
-	echo.  html       to make standalone HTML files
-	echo.  dirhtml    to make HTML files named index.html in directories
-	echo.  singlehtml to make a single large HTML file
-	echo.  pickle     to make pickle files
-	echo.  json       to make JSON files
-	echo.  htmlhelp   to make HTML files and a HTML help project
-	echo.  qthelp     to make HTML files and a qthelp project
-	echo.  devhelp    to make HTML files and a Devhelp project
-	echo.  epub       to make an epub
-	echo.  latex      to make LaTeX files, you can set PAPER=a4 or PAPER=letter
-	echo.  text       to make text files
-	echo.  man        to make manual pages
-	echo.  texinfo    to make Texinfo files
-	echo.  gettext    to make PO message catalogs
-	echo.  changes    to make an overview over all changed/added/deprecated items
-	echo.  xml        to make Docutils-native XML files
-	echo.  pseudoxml  to make pseudoxml-XML files for display purposes
-	echo.  linkcheck  to check all external links for integrity
-	echo.  doctest    to run all doctests embedded in the documentation if enabled
-	echo.  check      to run the Python based linter
-	goto end
-)
-
-if "%1" == "clean" (
-	for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i
-	del /q /s %BUILDDIR%\*
-	goto end
-)
-
-
-%SPHINXBUILD% 1> nul 2> nul
-if errorlevel 9009 (
-	echo.
-	echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
-	echo.installed, then set the SPHINXBUILD environment variable to point
-	echo.to the full path of the 'sphinx-build' executable. Alternatively you
-	echo.may add the Sphinx directory to PATH.
-	echo.
-	echo.If you don't have Sphinx installed, grab it from
-	echo.http://sphinx-doc.org/
-	exit /b 1
-)
-
-if "%1" == "html" (
-	%SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The HTML pages are in %BUILDDIR%/html.
-	goto end
-)
-
-if "%1" == "dirhtml" (
-	%SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml.
-	goto end
-)
-
-if "%1" == "singlehtml" (
-	%SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml.
-	goto end
-)
-
-if "%1" == "pickle" (
-	%SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished; now you can process the pickle files.
-	goto end
-)
-
-if "%1" == "json" (
-	%SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished; now you can process the JSON files.
-	goto end
-)
-
-if "%1" == "htmlhelp" (
-	%SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished; now you can run HTML Help Workshop with the ^
-.hhp project file in %BUILDDIR%/htmlhelp.
-	goto end
-)
-
-if "%1" == "qthelp" (
-	%SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished; now you can run "qcollectiongenerator" with the ^
-.qhcp project file in %BUILDDIR%/qthelp, like this:
-	echo.^> qcollectiongenerator %BUILDDIR%\qthelp\Couch.qhcp
-	echo.To view the help file:
-	echo.^> assistant -collectionFile %BUILDDIR%\qthelp\Couch.ghc
-	goto end
-)
-
-if "%1" == "devhelp" (
-	%SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished.
-	goto end
-)
-
-if "%1" == "epub" (
-	%SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The epub file is in %BUILDDIR%/epub.
-	goto end
-)
-
-if "%1" == "latex" (
-	%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished; the LaTeX files are in %BUILDDIR%/latex.
-	goto end
-)
-
-if "%1" == "latexpdf" (
-	%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
-	cd %BUILDDIR%/latex
-	make all-pdf
-	cd %BUILDDIR%/..
-	echo.
-	echo.Build finished; the PDF files are in %BUILDDIR%/latex.
-	goto end
-)
-
-if "%1" == "latexpdfja" (
-	%SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex
-	cd %BUILDDIR%/latex
-	make all-pdf-ja
-	cd %BUILDDIR%/..
-	echo.
-	echo.Build finished; the PDF files are in %BUILDDIR%/latex.
-	goto end
-)
-
-if "%1" == "text" (
-	%SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The text files are in %BUILDDIR%/text.
-	goto end
-)
-
-if "%1" == "man" (
-	%SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The manual pages are in %BUILDDIR%/man.
-	goto end
-)
-
-if "%1" == "texinfo" (
-	%SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo.
-	goto end
-)
-
-if "%1" == "gettext" (
-	%SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The message catalogs are in %BUILDDIR%/locale.
-	goto end
-)
-
-if "%1" == "changes" (
-	%SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.The overview file is in %BUILDDIR%/changes.
-	goto end
-)
-
-if "%1" == "linkcheck" (
-	%SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Link check complete; look for any errors in the above output ^
-or in %BUILDDIR%/linkcheck/output.txt.
-	goto end
-)
-
-if "%1" == "doctest" (
-	%SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Testing of doctests in the sources finished, look at the ^
-results in %BUILDDIR%/doctest/output.txt.
-	goto end
-)
-
-if "%1" == "xml" (
-	%SPHINXBUILD% -b xml %ALLSPHINXOPTS% %BUILDDIR%/xml
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The XML files are in %BUILDDIR%/xml.
-	goto end
-)
-
-if "%1" == "pseudoxml" (
-	%SPHINXBUILD% -b pseudoxml %ALLSPHINXOPTS% %BUILDDIR%/pseudoxml
-	if errorlevel 1 exit /b 1
-	echo.
-	echo.Build finished. The pseudo-XML files are in %BUILDDIR%/pseudoxml.
-	goto end
-)
-
-if "%1" == "check" (
-    python ext\linter.py %SOURCE%
-	if errorlevel 1 exit /b 1
-    goto end
-)
-
-:end
diff --git a/rebar.config b/rebar.config
deleted file mode 100644
index d2fd397..0000000
--- a/rebar.config
+++ /dev/null
@@ -1,16 +0,0 @@
-% -*- mode: erlang;erlang-indent-level: 4;indent-tabs-mode: nil -*-
-% ex: ts=4 sw=4 ft=erlang et
-% Licensed under the Apache License, Version 2.0 (the "License"); you may not
-% use this file except in compliance with the License. You may obtain a copy of
-% the License at
-%
-%   http://www.apache.org/licenses/LICENSE-2.0
-%
-% Unless required by applicable law or agreed to in writing, software
-% distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-% WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-% License for the specific language governing permissions and limitations under
-% the License.
-
-{pre_hooks, [{compile, "make"}]}.
-{post_hooks, [{clean, "make clean"}]}.
diff --git a/requirements.txt b/requirements.txt
deleted file mode 100644
index d1ac8ce..0000000
--- a/requirements.txt
+++ /dev/null
@@ -1,3 +0,0 @@
-Sphinx==2.4.4
-sphinx-rtd-theme==1.0.0
-jinja2<3.1
\ No newline at end of file
diff --git a/rfcs/001-fdb-revision-metadata-model.md b/rfcs/001-fdb-revision-metadata-model.md
deleted file mode 100644
index b9e4071..0000000
--- a/rfcs/001-fdb-revision-metadata-model.md
+++ /dev/null
@@ -1,215 +0,0 @@
-[NOTE]: # ( ^^ Provide a general summary of the RFC in the title above. ^^ )
-
-# Introduction
-
-This is a proposal for the storage of document revision history metadata as a
-set of KVs in FoundationDB.
-
-## Abstract
-
-This design stores each edit branch as its own KV, and all of the edit branches
-are stored separately from the actual document data. Document reads can avoid
-retrieving this information, while writes can avoid retrieving the document
-body.
-
-## Requirements Language
-
-[NOTE]: # ( Do not alter the section below. Follow its instructions. )
-
-The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD",
-"SHOULD NOT", "RECOMMENDED",  "MAY", and "OPTIONAL" in this document are to be
-interpreted as described in [RFC
-2119](https://www.rfc-editor.org/rfc/rfc2119.txt).
-
-## Terminology
-
-[TIP]:  # ( Provide a list of any unique terms or acronyms, and their
-definitions here.)
-
-`Versionstamp`: a 12 byte, unique, monotonically (but not sequentially)
-increasing value for each committed transaction. The first 8 bytes are the
-committed version of the database. The next 2 bytes are monotonic in the
-serialization order for transactions. The final 2 bytes are user-defined and can
-be used to create multiple versionstamps in a single transaction.
-
-`Incarnation`: a single byte, monotonically increasing value specified for each
-CouchDB database. The `Incarnation` starts at `\x00` when a database is created
-and is incremented by one whenever a database is relocated to a different
-FoundationDB cluster.
-
-`Sequence`: a 13 byte value formed by combining the current `Incarnation` of
-the database and the `Versionstamp` of the transaction. Sequences are
-monotonically increasing even when a database is relocated across FoundationDB
-clusters.
-
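The `Sequence` composition described above can be sketched in Python; this is an illustrative model of the byte layout from the Terminology section, not actual CouchDB source code:

```python
def make_sequence(incarnation: int, versionstamp: bytes) -> bytes:
    """Compose a 13-byte Sequence: 1-byte Incarnation + 12-byte Versionstamp.

    Sketch of the layout described in this RFC's Terminology section;
    field widths are taken from the definitions above.
    """
    if not 0 <= incarnation <= 0xFF:
        raise ValueError("incarnation must fit in a single byte")
    if len(versionstamp) != 12:
        raise ValueError("versionstamp must be exactly 12 bytes")
    return bytes([incarnation]) + versionstamp

# Because the incarnation is the most significant byte, sequences keep
# sorting correctly after a relocation, even if the new cluster's
# versionstamps restart at lower values.
seq_old = make_sequence(0, b"\xff" * 12)   # last write on the old cluster
seq_new = make_sequence(1, b"\x00" * 12)   # first write after relocation
assert seq_new > seq_old
```

Bytewise comparison of these values is exactly the ordering FoundationDB applies to keys, which is why prefixing the incarnation preserves monotonicity across cluster moves.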
----
-
-# Detailed Description
-
-The size limits in FoundationDB preclude storing the entire revision tree as a
-single value; in pathological situations the tree could exceed 100KB. Rather, we
-propose to store each edit *branch* as a separate KV. We have two different
-value formats, one that is used for the "winning" edit branch and one used for
-any additional edit branches of the document. The winning edit branch includes
-the following information:
-
-`(“revisions”, DocID, NotDeleted, RevPosition, RevHash) = (RevFormat, Sequence,
-BranchCount, [ParentRev, GrandparentRev, …])`
-
-while the other edit branches omit the `Sequence` and `BranchCount`:
-
-`(“revisions”, DocID, NotDeleted, RevPosition, RevHash) = (RevFormat,
-[ParentRev, GrandparentRev, …])`
-
-The individual elements of the key and value are defined as follows:
-- `DocID`: the document ID
-- `NotDeleted`: `\x26` if the leaf of the edit branch is deleted, `\x27`
-  otherwise (following tuple encoding for booleans)
-- `RevPosition`: positive integer encoded using standard tuple layer encoding
-  (signed, variable-length, order-preserving)
-- `RevHash`: 16 bytes uniquely identifying this revision
-- `RevFormat`: enum for the revision encoding being used to enable schema
-  evolution
-- `Sequence`: the sequence of the last transaction that modified the document
-  (NB: not necessarily the last edit to *this* branch).
-- `BranchCount`: the number of edit branches associated with this document.
-- `[ParentRev, GrandparentRev, ...]`: 16 byte identifiers of ancestors, up to
-  1000 by default
-
-## Limits
-
-In order to stay compatible with FoundationDB size limits we need to prevent
-administrators from increasing `_revs_limit` beyond what we can fit into a
-single value. We suggest **4000** as a maximum.
-
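A back-of-the-envelope check (assuming 16-byte ancestor identifiers and a small, assumed per-value overhead for the other fields) shows why 4000 revisions fit comfortably under FoundationDB's 100KB value limit:

```python
REV_HASH_BYTES = 16          # per ancestor identifier, per the key layout above
FDB_VALUE_LIMIT = 100_000    # FoundationDB's hard limit on value size, in bytes

def revision_list_size(revs_limit: int, overhead: int = 64) -> int:
    """Rough upper bound on the encoded size of one edit branch value.

    `overhead` stands in for RevFormat, Sequence, BranchCount and tuple
    encoding framing; the exact figure is an assumption, not measured.
    """
    return revs_limit * REV_HASH_BYTES + overhead

# 4000 ancestors: ~64KB, leaving ample headroom below the 100KB limit.
assert revision_list_size(4000) < FDB_VALUE_LIMIT
```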
-## Update Path
-
-Each edit on a document will read and modify the so-called "winning" edit
-branch, a property that is essential for FoundationDB to correctly identify
-concurrent modifications to a given document as conflicting. We enforce this
-specifically by storing the `Sequence` only on the winning branch. Other
-branches set this to null.
-
-If a writer comes in and tries to extend a losing edit branch, it will find the
-first element of the value to be null and will do an additional edit branch read
-to retrieve the winning branch. It can then compare both branches to see which
-one will be the winner following that edit, and can assign the extra metadata to
-that branch accordingly.
-
-A writer attempting to delete the winning branch (i.e., setting `NotDeleted` to
-`\x26`) will need to read two contiguous KVs, the one for the winner and the one
-right before it. If the branch before it will be the winner following the
-deletion then we move the storage of the extra metadata to it accordingly. If
-the tombstoned branch remains the winner for this document then we only update
-that branch.
-
-A writer extending the winning branch with an updated document (the common case)
-will proceed reading just the one branch.
-
-A writer attempting to insert a new document without any base revision will need
-to execute a `get_range_startswith` operation with `limit=1` and `reverse=true`
-on the key range prefixed by ("revisions", DocID). A null result from that range
-read would be the signal to go ahead with the write. If another transaction
-races our writer and inserts the document first FoundationDB will detect the
-intersection between the write set of that transaction and the read range here
-and correctly cause our writer to fail.
-
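The existence check for a brand-new document can be sketched against an ordered in-memory key list standing in for the "revisions" subspace; the real implementation would issue `get_range_startswith` with `limit=1` and `reverse=True` on a FoundationDB transaction, as described above:

```python
from bisect import bisect_left, bisect_right

def any_edit_branch(sorted_keys, prefix):
    """Simulate a limit-1, reverse range read over a key prefix.

    `sorted_keys` models the ordered "revisions" subspace; a non-empty
    result means some edit branch (deleted or not) already exists, so the
    insert without a base revision must be rejected.
    """
    lo = bisect_left(sorted_keys, prefix)
    hi = bisect_right(sorted_keys, prefix + b"\xff")
    return hi > lo

keys = sorted([b"revisions/doc1/rev-a", b"revisions/doc1/rev-b",
               b"revisions/doc3/rev-a"])
assert any_edit_branch(keys, b"revisions/doc1/")       # doc exists: conflict
assert not any_edit_branch(keys, b"revisions/doc2/")   # clear to insert
```

In the real system the conflict detection comes for free: FoundationDB tracks the read range, so a racing insert of the same document makes this transaction fail.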
-New edit branches can only be created on the first edit to a document or during
-a `new_edits=false` update, so most interactive writers will just carry over the
-`BranchCount` with each edit they make. A writer with `new_edits=false` will
-retrieve the full range of KV pairs and set the `BranchCount` accordingly.
-Tracking the `BranchCount` here enables us to push that information into the
-`_changes` feed index, where it can be used to optimize the popular
-`style=all_docs` queries in the common case of a single edit branch per
-document.
-
-Summarizing the performance profile:
-- Extending a losing branch: 2 KVs, 2 roundtrips
-- Deleting the winning branch: 2 KVs, 1 roundtrip
-- Extending the winning branch: 1 KV, 1 roundtrip
-- `new_edits=false` update: `<N>` KVs, 1 roundtrip
-
-# Advantages
-
-We can read a document revision without retrieving the revision tree, which in
-the case of frequently-edited documents may be larger than the doc itself.
-
-We ensure that an interactive document update against the winning branch only
-needs to read the edit branch KV against which the update is being applied, and
-it can read that branch immediately knowing only the content of the edit that is
-being attempted (i.e., it does not need to read the current version of the
-document itself). The less common scenario of updating a losing branch is only
-slightly less efficient, requiring two roundtrips.
-
-Interactively updating a document with a large number of edit branches is
-therefore dramatically cheaper, as no more than two edit branches are read or
-modified regardless of the number of branches that exist, and no tree merge
-logic is required.
-
-Including `NotDeleted` in the key ensures that we can efficiently handle the
-case where a new document is uploaded with the same ID after all previous edit
-branches have been deleted; i.e. we can construct a key selector which
-automatically tells us there are no `deleted=false` edit branches.
-
-The `RevFormat` enum gives us the ability to evolve revision history storage
-over time, and to support alternative conflict resolution policies like Last
-Writer Wins.
-
-Access to the indexed `Sequence` ensures we can clear the old entry in the
-`changes` subspace during an edit. The `set_versionstamped_value` API is used to
-store this value automatically.
-
-The key structure above naturally sorts so that the "winning" revision is the
-last one in the list, which we leverage when deleting the winning edit branch
-(and thus promoting the one next in line), and extending a conflict branch (to
-coordinate the update to the `Sequence`). This is also a small optimization for
-reads with `?revs=true` or `?revs_info=true`, where we want the details of the
-winning edit branch but don't actually know the `RevPosition` and `RevHash` of
-that branch.
-
-# Disadvantages
-
-Historical revision identifiers shared by multiple edit branches are duplicated.
-
-# Key Changes
-
-Administrators cannot set `_revs_limit` larger than 4,000 (previously
-unlimited?). Default stays the same at 1,000.
-
-The intention with this data model is that an interactive edit that supplies a
-revision identifier of a deleted leaf will always fail with a conflict. This is
-a subtle departure from CouchDB 2.3 behavior, where an attempt to extend a
-deleted edit branch can succeed if some other `deleted=false` edit branch
-exists. This is an undocumented and seemingly unintentional behavior. If we need
-to match that behavior it will require reading 3 KVs in 2 roundtrips for *every*
-edit that we reject with a conflict.
-
-## Modules affected
-
-TBD depending on exact code layout going forward, but the `couch_key_tree`
-module contains the current revision tree implementation.
-
-## HTTP API additions
-
-None.
-
-## HTTP API deprecations
-
-None.
-
-## Security Considerations
-
-None have been identified.
-
-# References
-
-[Original mailing list
-discussion](https://lists.apache.org/thread.html/853b86f3a83108745af510959bb381370a99988af4528617bdbe1be4@%3Cdev.couchdb.apache.org%3E)
-
-[apache/couchdb#1957](https://github.com/apache/couchdb/issues/1957) (originally
-submitted RFC as an issue in the main project repo instead of a PR here).
-
-# Acknowledgements
-
-Thanks to @iilyak, @davisp, @janl, @garrensmith and @rnewson for comments on the
-mailing list discussion.
diff --git a/rfcs/002-shard-splitting.md b/rfcs/002-shard-splitting.md
deleted file mode 100644
index 54b0727..0000000
--- a/rfcs/002-shard-splitting.md
+++ /dev/null
@@ -1,373 +0,0 @@
----
-name: Shard Splitting
-about: Introduce Shard Splitting to CouchDB
-title: 'Shard Splitting'
-labels: rfc, discussion
-assignees: '@nickva'
-
----
-
-# Introduction
-
-This RFC proposes adding the capability to split shards to CouchDB. The API and
-the internals will also allow for other operations on shards in the future such
-as merging or rebalancing.
-
-## Abstract
-
-Since CouchDB 2.0 clustered databases have had a fixed Q value defined at
-creation. This often requires users to predict database usage ahead of time
-which can be hard to do. A too low of a value might result in large shards,
-slower performance, and needing more disk space to do compactions.
-
-It would be nice to start with a low Q initially, for example Q=1, and as
-usage grows to be able to split some shards that grow too big. Especially
-with partitioned queries being available there will be a higher chance
-of having unevenly sized shards, and so it would be beneficial to split the
-larger ones to even out the size distribution across the cluster.
-
-## Requirements Language
-
-[NOTE]: # ( Do not alter the section below. Follow its instructions. )
-
-The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
-"SHOULD", "SHOULD NOT", "RECOMMENDED",  "MAY", and "OPTIONAL" in this
-document are to be interpreted as described in
-[RFC 2119](https://www.rfc-editor.org/rfc/rfc2119.txt).
-
-## Terminology
-
-*resharding*: Manipulating CouchDB shards. Could be splitting, merging,
-rebalancing or other operations. This will be used as the top-level API
-endpoint name with the idea that in the future different types of shard
-manipulation jobs would be added.
-
----
-
-# Detailed Description
-
-From the user's perspective there would be a new HTTP API endpoint -
-`_reshard/*`. A POST request to `_reshard/jobs/` would start resharding jobs.
-Initially these jobs will be of just one type, `"type":"split"`, but in the
-future other types could be added.
-
-Users would then be able to monitor the state of these jobs to inspect their
-progress and see when they complete or fail.
-
-The API should be designed to be consistent with `_scheduler/jobs` as much as
-possible, since that is another recent CouchDB API exposing an internal jobs
-list.
-
-Most of the code implementing this would live in the mem3 application with some
-lower level components in the *couch* application. There will be a new child in
-the *mem3_sup* supervisor responsible for resharding called *mem3_reshard_sup*.
-It will have a *mem3_reshard* manager process which should have an Erlang API
-for starting jobs, stopping jobs, removing them, and inspecting their state.
-Individual jobs would be instances of a gen_server defined in
-*mem3_reshard_job* module. There will be simple-one-for-one supervisor under
-*mem3_reshard_sup* named *mem3_reshard_job_sup* to keep track of
-*mem3_reshard_job* children.
-
-An individual shard splitting job will follow roughly these steps in order:
-
-- **Create targets**. Targets are created. Some target properties should match
-  the source. This means matching the PSE engine if the source uses a custom
-  one. If the source is partitioned, targets should be partitioned as well, etc.
-
-- **Initial bulk copy.** After the targets are created, copy all the documents
-  in the source shard to the targets. This operation should be as optimized as
-  possible as it could potentially copy tens of GBs of data. For this reason
-  this piece of code will be closer to what the compactor does.
-
-- **Build indices**. The source shard might have had up-to-date indices and so
-  it is beneficial for the split version to have them as well. Here we'd
-  inspect all `_design` docs and rebuild all the known indices. After this step
-  there will be a "topoff" step to replicate any changes that might have
-  occurred on the source while the indices were built.
-
-- **Update shard map**. Here the global shard map is updated to remove the old
-  source shard and replace it with the targets. A corresponding entry will be
-  added to the shard document's `changelog` indicating that a split happened.
-  To avoid generating conflicts when multiple copies of a range finish
-  splitting and race to update the shard map, all shard map updates will be
-  routed through one consistently picked node (the lowest in the sorted list
-  of connected nodes). After the shard map is updated, there will be another
-  topoff replication job to bring in changes from the source shard to the
-  targets that might have occurred while the shard map was updating.
-
-- **Delete source shard**
-
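Routing all shard map updates through one consistently picked node, as described in the "Update shard map" step above, can be sketched in a few lines (node names are illustrative):

```python
def pick_coordinator(connected_nodes):
    """Choose the shard-map update coordinator: the lowest node in the
    sorted list of connected nodes. Every node running this against the
    same membership list deterministically picks the same coordinator,
    which avoids conflicting concurrent shard map updates."""
    return min(connected_nodes)

nodes = ["node3@127.0.0.1", "node1@127.0.0.1", "node2@127.0.0.1"]
assert pick_coordinator(nodes) == "node1@127.0.0.1"
```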
-This progression of split states will be visible when inspecting a job's status
-as well as in the history in the `detail` field of each event.
-
-
-# Advantages and Disadvantages
-
-Main advantage is to dynamically change shard size distribution on a cluster in
-response to changing user requirements without having to delete and recreate
-databases.
-
-One disadvantage is that it might break the basic assumption that all copies
-of a shard range are the same size. A user could choose to split, for example,
-a shard copy 00..-ff... on node1 only, so that on node2 and node3 the copy
-remains 00-..ff.. while node1 now has 00-..7f.. and 80-ff... External tooling
-inspecting the $db/_shards endpoint might need to be updated to handle this
-scenario. A mitigating factor is that resharding in the current proposal is
-not automatic; it is an operation triggered manually by the users.
-
-# Key Changes
-
-The main change is the ability to split shards via the `_reshard/*` HTTP API.
-
-## Applications and Modules affected
-
-Most of the changes will be in the *mem3* application with some changes in the *couch* application as well.
-
-## HTTP API additions
-
-* `GET /_reshard`
-
-Top-level summary. In addition to job counts, the response includes the global
-resharding state and its `state_reason`.
-
-Returns
-
-```
-{
-    "completed": 3,
-    "failed": 4,
-    "running": 0,
-    "state": "stopped",
-    "state_reason": "Manual rebalancing",
-    "stopped": 0,
-    "total": 7
-}
-```
-
-* `PUT /_reshard/state`
-
-Start or stop global resharding.
-
-Body
-```
-{
-    "state": "stopped",
-    "reason": "Manual rebalancing"
-}
-```
-
-Returns
-
-```
-{
-    "ok": true
-}
-```
-
-* `GET /_reshard/state`
-
-Return global resharding state and reason.
-
-```
-{
-    "reason": "Manual rebalancing",
-    "state": "stopped"
-}
-```
-
-* `GET /_reshard/jobs`
-
-Get the state of all the resharding jobs on the cluster. Now we have a detailed
-state transition history which looks similar to what `_scheduler/jobs` has.
-
-```
-{
-    "jobs": [
-        {
-            "history": [
-                {
-                    "detail": null,
-                    "timestamp": "2019-02-06T22:28:06Z",
-                    "type": "new"
-                },
-                ...
-                {
-                    "detail": null,
-                    "timestamp": "2019-02-06T22:28:10Z",
-                    "type": "completed"
-                }
-            ],
-            "id": "001-0a308ef9f7bd24bd4887d6e619682a6d3bb3d0fd94625866c5216ec1167b4e23",
-            "job_state": "completed",
-            "node": "node1@127.0.0.1",
-            "source": "shards/00000000-ffffffff/db1.1549492084",
-            "split_state": "completed",
-            "start_time": "2019-02-06T22:28:06Z",
-            "state_info": {},
-            "target": [
-                "shards/00000000-7fffffff/db1.1549492084",
-                "shards/80000000-ffffffff/db1.1549492084"
-            ],
-            "type": "split",
-            "update_time": "2019-02-06T22:28:10Z"
-        },
-        {
-           ....
-        },
-   ],
-   "offset": 0,
-   "total_rows": 7
-}
-```
-
-* `POST /_reshard/jobs`
-
-Create a new resharding job. This can now take other parameters and can split multiple ranges.
-
-To split one shard on a particular node
-
-```
-{
-    "type": "split",
-    "shard": "shards/80000000-bfffffff/db1.1549492084",
-    "node": "node1@127.0.0.1"
-}
-```
-
-To split a particular range on all nodes:
-
-```
-{
-     "type": "split",
-     "db" : "db1",
-     "range" : "80000000-bfffffff"
-}
-```
-
-To split a range on just one node:
-
-```
-{
-     "type": "split",
-     "db" : "db1",
-     "range" : "80000000-bfffffff",
-     "node": "node1@127.0.0.1"
-}
-```
-
-To split all ranges of a db on one node:
-
-```
-{
-     "type": "split",
-     "db" : "db1",
-     "node": "node1@127.0.0.1"
-}
-```
-
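The four request shapes above differ only in which selectors are present. A hypothetical client helper (the helper itself is illustrative, not an official API; the field names follow the examples above) could build them like this:

```python
import json

def split_job(shard=None, db=None, range_=None, node=None):
    """Build a POST /_reshard/jobs request body.

    Selector fields mirror the documented examples: a full shard name, or a
    db (optionally narrowed by range and/or node)."""
    body = {"type": "split"}
    if shard is not None:
        body["shard"] = shard
    if db is not None:
        body["db"] = db
    if range_ is not None:
        body["range"] = range_
    if node is not None:
        body["node"] = node
    return json.dumps(body)

# Split one range of db1 on a single node:
body = split_job(db="db1", range_="80000000-bfffffff", node="node1@127.0.0.1")
assert json.loads(body)["range"] == "80000000-bfffffff"
```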
-Result now may contain multiple job IDs
-
-```
-[
-    {
-        "id": "001-d457a4ea82877a26abbcbcc0e01c4b0070027e72b5bf0c4ff9c89eec2da9e790",
-        "node": "node1@127.0.0.1",
-        "ok": true,
-        "shard": "shards/80000000-bfffffff/db1.1549986514"
-    },
-    {
-        "id": "001-7c1d20d2f7ef89f6416448379696a2cc98420e3e7855fdb21537d394dbc9b35f",
-        "node": "node1@127.0.0.1",
-        "ok": true,
-        "shard": "shards/c0000000-ffffffff/db1.1549986514"
-    }
-]
-```
-
-* `GET /_reshard/jobs/$jobid`
-
-Get just one job by its ID
-
-```
-{
-    "history": [
-        {
-            "detail": null,
-            "timestamp": "2019-02-12T16:55:41Z",
-            "type": "new"
-        },
-        {
-            "detail": "Shard splitting disabled",
-            "timestamp": "2019-02-12T16:55:41Z",
-            "type": "stopped"
-        }
-    ],
-    "id": "001-d457a4ea82877a26abbcbcc0e01c4b0070027e72b5bf0c4ff9c89eec2da9e790",
-    "job_state": "stopped",
-    "node": "node1@127.0.0.1",
-    "source": "shards/80000000-bfffffff/db1.1549986514",
-    "split_state": "new",
-    "start_time": "1970-01-01T00:00:00Z",
-    "state_info": {
-        "reason": "Shard splitting disabled"
-    },
-    "target": [
-        "shards/80000000-9fffffff/db1.1549986514",
-        "shards/a0000000-bfffffff/db1.1549986514"
-    ],
-    "type": "split",
-    "update_time": "2019-02-12T16:55:41Z"
-}
-```
-
-* `GET /_reshard/jobs/$jobid/state`
-
-Get the running state of a particular job only
-
-```
-{
-    "reason": "Shard splitting disabled",
-    "state": "stopped"
-}
-```
-
-* `PUT /_reshard/jobs/$jobid/state`
-
-Stop or resume a particular job
-
-Request body
-
-```
-{
-     "state": "stopped",
-     "reason": "Pause this job for now"
-}
-```
-
-
-## HTTP API deprecations
-
-None
-
-# Security Considerations
-
-None.
-
-# References
-
-Original RFC-as-an-issue:
-
-https://github.com/apache/couchdb/issues/1920
-
-Most of the discussion regarding this has happened on the `@dev` mailing list:
-
-https://mail-archives.apache.org/mod_mbox/couchdb-dev/201901.mbox/%3CCAJd%3D5Hbs%2BNwrt0%3Dz%2BGN68JPU5yHUea0xGRFtyow79TmjGN-_Sg%40mail.gmail.com%3E
-
-https://mail-archives.apache.org/mod_mbox/couchdb-dev/201902.mbox/%3CCAJd%3D5HaX12-fk2Lo8OgddQryZaj5KRa1GLN3P9LdYBQ5MT0Xew%40mail.gmail.com%3E
-
-
-# Acknowledgments
-
-@davisp @kocolosk : Collaborated on the initial idea and design
-
-@mikerhodes @wohali @janl @iilyak : Additionally collaborated on API design
diff --git a/rfcs/003-fdb-seq-index.md b/rfcs/003-fdb-seq-index.md
deleted file mode 100644
index 9061c5a..0000000
--- a/rfcs/003-fdb-seq-index.md
+++ /dev/null
@@ -1,244 +0,0 @@
----
-name: Formal RFC
-about: Submit a formal Request For Comments for consideration by the team.
-title: 'Data Model and Index Management for _changes in FoundationDB'
-labels: rfc, discussion
-assignees: ''
-
----
-
-[NOTE]: # ( ^^ Provide a general summary of the RFC in the title above. ^^ )
-
-# Introduction
-
-Data Model and Index Management for `_changes` in FoundationDB
-
-## Abstract
-
-This document describes how to implement the `by_seq` index that supports the
-`_changes` endpoints in FoundationDB. It covers the data model, index
-maintenance, and access patterns.
-
-The basic data model is one where the key is a `Sequence` (as defined below) and
-the value is a document ID, revision, and branch count.
-
-## Requirements Language
-
-[NOTE]: # ( Do not alter the section below. Follow its instructions. )
-
-The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
-"SHOULD", "SHOULD NOT", "RECOMMENDED",  "MAY", and "OPTIONAL" in this
-document are to be interpreted as described in
-[RFC 2119](https://www.rfc-editor.org/rfc/rfc2119.txt).
-
-## Terminology
-
-`Versionstamp`: a 12 byte, unique, monotonically (but not sequentially)
-increasing value for each committed transaction. The first 8 bytes are the
-committed version of the database. The next 2 bytes are monotonic in the
-serialization order for transactions. The final 2 bytes are user-defined and can
-be used to create multiple versionstamps in a single transaction.
-
-`Incarnation`: a monotonically increasing Integer value specified for each
-CouchDB database. The `Incarnation` starts at zero (i.e. `\x14` in the tuple
-layer encoding) when a database is created and is incremented by one whenever a
-database is relocated to a different FoundationDB cluster. Thus the majority of
-the time an Incarnation fits into a single byte, or two bytes if the database
-has been moved around a small number of times.
-
-`Sequence`: the combination of the current `Incarnation` for the database and
-the `Versionstamp` of the transaction. Sequences are monotonically increasing
-even when a database is relocated across FoundationDB clusters.
-
-`style=all_docs`: An optional query parameter to the `_changes` feed which
-requests that all leaf revision ids are included in the response. The replicator
-(one of the most frequent consumers of `_changes`) supplies this parameter.
-
----
-
-# Detailed Description
-
-The `_changes` feed provides a list of the documents in a given database, in the
-order in which they were most recently updated. Each document shows up exactly
-once in a normal response to the `_changes` feed.
-
-In CouchDB 2.x and 3.x the database sequence is a composition of sequence
-numbers from individual database shards. In the API this sequence is encoded as
-a long Base64 string. The response to the `_changes` feed is not totally
-ordered; the only guarantee is that a client can resume the feed from a given
-sequence and be guaranteed not to miss any updates.
-
-Future releases of CouchDB based on FoundationDB will be able to offer stronger
-guarantees. The `Sequence` defined in the Terminology section above is totally
-ordered across the entire cluster, and repeated calls to `_changes` on a
-quiescent database will retrieve the same results in the same order. The
-`Sequence` will still be encoded as a string, but as it's a more compact value
-we propose to encode it in hexadecimal notation. These strings will sort
-correctly, something that has not always been true in CouchDB 2.x.
-
-## Data Model
-
-Each database will contain a `changes` subspace with keys and values that take
-the form
-
-`("changes", Sequence) = (SeqFormat, DocID, RevPosition, RevHash, BranchCount,
-NotDeleted)`
-
-where the individual elements are defined as follows:
-
-- `SeqFormat`: enum for the value encoding, to enable schema evolution
-- `DocID`: the document ID
-- `RevPosition`: positive integer encoded using standard tuple layer encoding
-  (signed, variable-length, order-preserving)
-- `RevHash`: 16 bytes uniquely identifying the winning revision of this document
-- `Sequence`: the sequence of the last transaction that modified the document
-  (NB: not necessarily the transaction that produced the `RevPosition-RevHash`
-  edit).
-- `BranchCount`: the number of edit branches associated with this document
-- `NotDeleted`: `\x26` if the leaf of the edit branch is deleted, `\x27`
-  otherwise (following tuple encoding for booleans)
-
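A minimal sketch of this KV shape, using plain Python tuples in place of FoundationDB's tuple-layer encoding (the `SEQ_FORMAT` value is an assumption for illustration):

```python
def changes_entry(sequence, doc_id, rev_pos, rev_hash, branch_count, deleted):
    """Model one KV in the "changes" subspace, mirroring the data model
    above. Real code would encode these with the FDB tuple layer, not
    Python tuples."""
    SEQ_FORMAT = 0  # schema-evolution enum; concrete value assumed
    key = ("changes", sequence)
    value = (SEQ_FORMAT, doc_id, rev_pos, rev_hash, branch_count, not deleted)
    return key, value

key, value = changes_entry(b"\x00" * 13, "doc1", 3, b"\xab" * 16, 1, deleted=False)
assert key[0] == "changes"
assert value[4] == 1        # BranchCount of 1 lets style=all_docs skip a lookup
assert value[5] is True     # NotDeleted
```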
-A typical response to `_changes` includes all of this information in each row
-except the internal `SeqFormat` and the `BranchCount`. The latter is used as an
-optimization for the `style=all_docs` request; if this parameter is specified
-and the `BranchCount` is 1 we can avoid making an extra request to the
-"revisions" space to discover that there are no other revisions to include.
-
-## Index Maintenance
-
-As discussed in [RFC 001](001-fdb-revision-metadata-model.md), an update attempt
-always retrieves the metadata KV for the current winning branch from the
-"revisions" subspace. This metadata entry includes the sequence of the last edit
-to the document, which serves as the key into the index in our "changes"
-subspace. The writer will use that information to clear the existing KV from the
-`_changes` subspace as part of the transaction.
-
-The writer also knows in all cases what the `RevPosition`, `RevHash`,
-`BranchCount`, and `NotDeleted` will be following the edit, and can use the
-`set_versionstamped_key` API to write a new KV with the correct new sequence of
-the transaction into the "changes" subspace.
-
-In short, the operations in this subspace are
-- doc insert: 0 read, 0 clear, 1 insert
-- doc update: 0 read, 1 clear, 1 insert
-
-## Handling of Unknown Commit Results
-
-When using versionstamped keys as proposed in this RFC one needs to take
-particular care with the degraded mode in which FoundationDB responds to a transaction
-commit with `commit_unknown_result`. Versionstamped keys are not idempotent, and
-so a naïve retry approach could result in duplicate entries in the "changes"
-subspace. The index maintenance in this subspace is "blind" (i.e. no reads in
-this subspace are performed), so the risk for duplicate entries is indeed a
-valid concern.
-
-We can guard against creating duplicates in the "changes" subspace by having the
-transaction that updates that subspace also insert a KV into a dedicated
-"transaction ID" subspace specifically corresponding to this document update. If
-the CouchDB layer receives a `commit_unknown_result` it can simply check for the
-presence of the transaction ID in FoundationDB to determine whether the previous
-transaction succeeded or failed. If the transaction ID is not present, CouchDB
-can safely retry with the same transaction ID. After a successful transaction
-commit, the CouchDB layer can delete the transaction ID KV asynchronously. For
-example, each process could dump the transaction ID of a successful commit into
-a local ets table (shared by all databases), and a process could scan that table
-once every few seconds and clear the associated entries from FDB in a single
-transaction.
-
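The retry guard can be sketched with an in-memory store standing in for FoundationDB; the essential point, per the paragraph above, is that on `commit_unknown_result` the writer checks for the transaction-ID key before deciding whether to retry:

```python
import uuid

class Store:
    """In-memory stand-in for FDB: a changes log plus a txid subspace."""
    def __init__(self):
        self.changes = []
        self.txids = set()

    def commit(self, entry, txid):
        # One atomic transaction: blind write to "changes" + txid marker.
        self.changes.append(entry)
        self.txids.add(txid)

def write_with_retry(store, entry, commit_unknown=False):
    txid = uuid.uuid4().hex
    store.commit(entry, txid)       # first attempt reaches the server...
    if commit_unknown:              # ...but the client never sees the ack
        if txid not in store.txids:     # txid present => commit landed
            store.commit(entry, txid)   # retry only when it is absent
    return txid

s = Store()
write_with_retry(s, "doc1@2-abc", commit_unknown=True)
assert s.changes.count("doc1@2-abc") == 1  # no duplicate despite the retry path
```

This is only a sketch of the idempotence argument, not the cleanup machinery; the asynchronous deletion of old txid entries described above is omitted.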
-## Access Patterns
-
-Let's consider first the simple case where an entire response to `_changes` fits
-within a single FoundationDB transaction (specifically the 5 second limit). In
-this case a normal request to `_changes` can be satisfied with a single range
-read from the "changes" subspace. A `style=all_docs` request will need to check
-the `BranchCount` for each row; if it's larger than 1, the client will need to
-do a followup range request against the "revisions" subspace to retrieve the
-additional revision identifiers to include in the response. A request with
-`include_docs=true` will need to make a separate range request to the doc
-storage subspace to retrieve the body of each winning document revision.
-
-If a normal response to `_changes` cannot be delivered in a single transaction
-the CouchDB layer should execute multiple transactions in series and stitch the
-responses together as needed. Note that this opens up a subtle behavior change
-from classic CouchDB, where a single database snapshot could be held open
-~indefinitely for each shard, providing a complete snapshot of the database as
-it existed at the *beginning* of the response. While future enhancements in
-FoundationDB may allow us to recover that behavior, in the current version we
-may end up with duplicate entries for individual documents that are updated
-during the course of streaming the `_changes` response. The end result will be
-that each document in the database shows up at least once, and if you take the
-last entry for each document that you observe in the feed, you'll have the state
-of the database as it existed at the *end* of the response.
-
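The "last entry per document wins" reading of a stitched feed can be sketched as:

```python
def collapse_feed(rows):
    """Given (seq, doc_id) rows that may repeat doc_ids across stitched
    transactions, keep only each document's final entry, preserving feed
    order — yielding the database state as of the *end* of the response."""
    last_seq = {doc: seq for seq, doc in rows}          # later rows win
    return [(seq, doc) for seq, doc in rows if last_seq[doc] == seq]

# "a" was updated mid-stream, so it appears twice; only seq 3 survives.
rows = [(1, "a"), (2, "b"), (3, "a"), (4, "c")]
assert collapse_feed(rows) == [(2, "b"), (3, "a"), (4, "c")]
```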
-Finally, when a user requests `_changes` with `feed=continuous` there is no
-expectation of exactly-once semantics, and in fact this is implemented using
-multiple database snapshots for each shard today. The extra bit of work with
-this response type is to efficiently discover when a new read of the "changes"
-subspace for a given database is required in FoundationDB. A few different
-options have been discussed on the mailing list:
-
-1. Writers publish `db_updated` events to `couch_event`, listeners use
-   distributed Erlang to subscribe to all nodes, similar to the classic
-   approach.
-1. Poll the "changes" subspace; scale by nominating a specific process per node
-   to do the polling.
-1. Same as above but using a watch on DB metadata that changes with every update
-   instead of polling.
-
-This RFC proposes to pursue the second approach. It preserves the goal of a
-stateless CouchDB layer with no coordination between instances, and has a
-well-known scalability and performance profile.
-
-# Advantages
-
-This design eliminates "rewinds" of the `_changes` feed due to cluster
-membership changes, and enhances database sequences to enable relocation of
-logical CouchDB databases across FoundationDB clusters without rewinds as well.
-
-We anticipate improved throughput due to the more compact encoding of database
-sequences.
-
-The new sequence format always sorts correctly, which simplifies the job of
-consumers tracking the sequence from which they should resume in parallel
-processing environments.
-
-# Disadvantages
-
-It will not be possible to retrieve a complete point-in-time snapshot of a large
-database in which each document appears exactly once. This may change with a
-future enhancement to the storage engine underpinning FoundationDB.
-
-# Key Changes
-
-Nothing additional to report here.
-
-## Applications and Modules affected
-
-TBD depending on exact code layout going forward, but this functionality cuts
-across several core modules of CouchDB.
-
-## HTTP API additions
-
-None.
-
-## HTTP API deprecations
-
-None.
-
-# Security Considerations
-
-None have been identified.
-
-# References
-
-[Original mailing list
-discussion](https://lists.apache.org/thread.html/29d69efc47cb6328977fc1c66efecaa50c5d93a2f17aa7a3392211af@%3Cdev.couchdb.apache.org%3E)
-
-[Detailed thread on isolation semantics for long
-responses](https://lists.apache.org/thread.html/a4429197919e66ef0193d128872e17b3b62c1f197918df185136b35d@%3Cuser.couchdb.apache.org%3E)
-
-# Acknowledgements
-
-Thanks to @iilyak, @rnewson, @mikerhodes, @garrensmith and @alexmiller-apple for
-comments on the mailing list discussions, and to @wohali for working through the
-implications of the isolation changes on IRC.
diff --git a/rfcs/004-document-storage.md b/rfcs/004-document-storage.md
deleted file mode 100644
index b031775..0000000
--- a/rfcs/004-document-storage.md
+++ /dev/null
@@ -1,251 +0,0 @@
----
-name: Formal RFC
-about: Submit a formal Request For Comments for consideration by the team.
-title: 'JSON document storage in FoundationDB'
-labels: rfc, discussion
-assignees: ''
-
----
-
-[NOTE]: # ( ^^ Provide a general summary of the RFC in the title above. ^^ )
-
-# Introduction
-
-This document describes a data model for storing JSON documents as key-value
-pairs in FoundationDB. It includes a discussion of storing multiple versions of
-the document, each identified by unique revision identifiers, and discusses some
-of the operations needed to query and modify these documents.
-
-## Abstract
-
-The data model maps each "leaf" JSON value (number, string, true, false, and
-null) to a single KV in FoundationDB. Nested relationships are modeled using a
-tuple structure in the keys. Different versions of a document are stored
-completely independently from one another. Values are encoded using
-FoundationDB's tuple encoding.
-
-The use of a single KV pair for each leaf value implies a new 100KB limit on
-those values stored in CouchDB documents. An alternative design could split
-these large (string) values across multiple KV pairs.
-
-Extremely deeply-nested data structures and the use of long names in the nesting
-objects could cause a path to a leaf value to exceed FoundationDB's 10KB limit
-on key sizes. String interning could reduce the likelihood of this occurring but
-not eliminate it entirely. Interning could also provide some significant space
-savings in the current FoundationDB storage engine, although the introduction of
-key prefix elision in the Redwood engine should also help on that front.
-
-FoundationDB imposes a hard 10MB limit on transactions. In order to reserve
-space for additional metadata, user-defined indexes, and generally drive users
-towards best practices in data modeling this RFC proposes a **1MB (1,000,000
-byte)** limit on document sizes going forward.
-
-## Requirements Language
-
-[NOTE]: # ( Do not alter the section below. Follow its instructions. )
-
-The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
-"SHOULD", "SHOULD NOT", "RECOMMENDED",  "MAY", and "OPTIONAL" in this
-document are to be interpreted as described in
-[RFC 2119](https://www.rfc-editor.org/rfc/rfc2119.txt).
-
-## Terminology
-
----
-
-# Detailed Description
-
-## Value Encoding
-
-The `true` (`\x27`), `false` (`\x26`) and `null` (`\x00`) values each have a
-single-byte encoding in FoundationDB's tuple layer. Integers are represented
-with arbitrary precision (technically, up to 255 bytes can be used).
-Floating-point numbers use an IEEE binary representation up to double precision.
-More details on these specific byte codes are available in the [FoundationDB
-documentation](https://github.com/apple/foundationdb/blob/6.0.18/design/tuple.md).
-
-Unicode strings must be encoded into UTF-8. They are prefixed with a `\x02`
-bytecode and are null-terminated. Any nulls within the string must be replaced
-by `\x00\xff`. Raw byte strings have their own `\x01` prefix and must follow the
-same rules regarding null bytes in the string. Both are limited to 100KB.
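
As an illustration, the escaping rule above can be sketched in Python (a simplified rendering of the tuple-layer unicode string encoding, not CouchDB code):

```python
def tuple_encode_string(s: str) -> bytes:
    # \x02 prefix, UTF-8 body with embedded NULs escaped as \x00\xff,
    # and a terminating NUL, per the tuple-layer rules described above.
    body = s.encode("utf-8").replace(b"\x00", b"\x00\xff")
    return b"\x02" + body + b"\x00"

print(tuple_encode_string("a\x00b"))
```

The `\x00\xff` escape preserves lexicographic ordering while still allowing a plain NUL to act as the terminator.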
-
-An object is decomposed into multiple key-value pairs, where each key is a tuple
-identifying the path to a final leaf value. For example, the object
-
-```
-{
-    "foo": {
-        "bar": {
-            "baz": 123
-        }
-    }
-}
-```
-
-would be represented by a key-value pair of
-
-```
-pack({"foo", "bar", "baz"}) = pack({123})
-```
-
-Clients SHOULD NOT submit objects containing duplicate keys, as CouchDB will
-only preserve the last occurrence of the key and will silently drop the other
-occurrences. Similarly, clients MUST NOT rely on the ordering of keys within an
-object, as this ordering will generally not be preserved by the database.
-
-An array of N elements is represented by N distinct key-value pairs, where the
-last element of the tuple key is an integer representing the zero-indexed
-position of the value within the array. As an example:
-
-```
-{
-    "states": ["MA", "OH", "TX", "NM", "PA"]
-}
-```
-
-becomes
-
-```
-pack({"states", 0}) = pack({"MA"})
-pack({"states", 1}) = pack({"OH"})
-pack({"states", 2}) = pack({"TX"})
-pack({"states", 3}) = pack({"NM"})
-pack({"states", 4}) = pack({"PA"})
-```
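
The object and array decompositions above can be sketched together as one small flattening routine (a pure-Python illustration of the data model, not the actual encoder):

```python
def explode(value, path=()):
    """Yield one (path_tuple, leaf) pair per leaf value, mirroring the
    one-KV-per-leaf model described above."""
    if isinstance(value, dict):
        for k, v in value.items():
            yield from explode(v, path + (k,))
    elif isinstance(value, list):
        for i, v in enumerate(value):
            yield from explode(v, path + (i,))
    else:
        yield path, value

doc = {"foo": {"bar": {"baz": 123}}, "states": ["MA", "OH"]}
for key, val in explode(doc):
    print(key, "=", val)
```

Note that empty objects and arrays produce no leaves here; a real encoder would need explicit markers for them.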
-
-More details on the encodings in the FoundationDB Tuple Layer can be found in
-the [design
-documentation](https://github.com/apple/foundationdb/blob/6.0.18/design/tuple.md).
-
-## Document Subspace and Versioning
-
-Document bodies will be stored in their own portion of the keyspace with a fixed
-single-byte prefix identifying the "subspace". Each revision of a document will
-be stored separately without term sharing, and the document ID and revision ID
-are baked into the key. The structure looks like this:
-
-```
-{DbName, ?DOCUMENTS, DocID, NotDeleted, RevPos, RevHash} = RevisionMetadata
-{DbName, ?DOCUMENTS, DocID, NotDeleted, RevPos, RevHash, "foo"} = (value for doc.foo)
-et cetera
-```
-
-where `RevisionMetadata` includes at the minimum an enum to enable schema
-evolution for subsequent changes to the document encoding structure, and
-`NotDeleted` is `true` if this revision is a typical `deleted=false` revision,
-and `false` if the revision is storing user-supplied data associated with the
-tombstone. Regular document deletions without any data in the tombstone do not
-show up in the `?DOCUMENTS` subspace at all. This key structure ensures that in
-the case of multiple edit branches the "winning" revision's data will sort last
-in the key space.
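
To see why the winner sorts last: the tuple layer orders `false` (`\x26`) before `true` (`\x27`), so deleted branches precede live ones, and within live branches a higher `RevPos` and then a higher `RevHash` sort later. Python tuple ordering happens to match, so a quick illustration:

```python
# Hypothetical (NotDeleted, RevPos, RevHash) key suffixes for one DocID.
revs = [
    (False, 3, "aa"),  # deleted edit branch
    (True, 2, "bb"),   # live branch
    (True, 2, "cc"),   # live conflict branch with the higher hash
]
# The last key in sort order is the winning revision's.
print(sorted(revs)[-1])
```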
-
-## CRUD Operations
-
-FoundationDB transactions have a hard limit of 10 MB each. Our document
-operations will need to modify some metadata alongside the user data, and we'd
-also like to reserve space for updating indexes as part of the same transaction.
-This document proposes to limit the maximum document size to **1 MB (1,000,000
-bytes)** going forward (excluding attachments).
-
-A document insert does not need to clear any data in the `?DOCUMENTS` subspace,
-and simply inserts the new document content. The transaction will issue a read
-against the `?REVISIONS` subspace to ensure that no `NotDeleted` revision
-already exists.
-
-A document update targeting a parent revision will clear the entire range of
-keys associated with the parent revision in the `?DOCUMENTS` space as part of
-its transaction. Again, the read in the `?REVISIONS` space ensures that this
-transaction can only succeed if the parent revision is actually a leaf revision.
-
-Document deletions are a special class of update that typically do not insert
-any keys into the `?DOCUMENTS` subspace. However, if a user includes extra
-fields in the deletion they will show up in this subspace.
-
-Document reads where we already know the specific revision of interest can be
-done efficiently using a single `get_range_startswith` operation. In the more
-common case where we do not know the revision identifier, there are two basic
-options:
-
-1. We can retrieve the winning revision ID from the `?REVISIONS` subspace, then
-   execute a `get_range_startswith` operation as above.
-1. We can start streaming the entire key range from the `?DOCUMENTS` space
-   prefixed by `DocID` in reverse, and break if we reach another revision of the
-   document ID besides the winning one.
-
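Option 2 can be sketched as follows (the names and row shapes are illustrative, not the actual implementation):

```python
def read_winner(rows):
    """rows: (rev_key, field_path, value) tuples in ascending key order,
    as stored; we walk them in reverse like a reverse range read."""
    winner, fields = None, []
    for rev_key, path, value in reversed(rows):
        if winner is None:
            winner = rev_key      # first key seen in reverse = winner
        elif rev_key != winner:
            break                 # reached an older revision; stop streaming
        fields.append((path, value))
    return winner, fields

rows = [
    ((True, 1, "old"), ("a",), 1),
    ((True, 2, "win"), ("a",), 2),
    ((True, 2, "win"), ("b",), 3),
]
print(read_winner(rows))
```
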
-Document reads specifying `conflicts`, `deleted_conflicts`, `meta`, or
-`revs_info` will need to retrieve the revision metadata from the `?REVISIONS`
-subspace alongside the document body regardless of which option we pursue above.
-
-If a reader is implementing Option 2 and does not find any keys associated with
-the supplied `DocID` in the `?DOCUMENTS` space, it will need to do a followup
-read on the `?REVISIONS` space in order to determine whether the appropriate
-response is `{"not_found": "missing"}` or `{"not_found": "deleted"}`.
-
-# Advantages and Disadvantages
-
-A leading alternative to this design in the mailing list discussion was to
-simply store each JSON document as a single key-value pair, with documents
-exceeding the 100KB value threshold chunked across contiguous key-value pairs.
-The advantages of the "exploded" approach proposed in this RFC are
-
-- it lends itself nicely to sub-document operations, e.g. apache/couchdb#1559
-- it optimizes the creation of Mango indexes on existing databases since we only
-  need to retrieve the value(s) we want to index
-- it optimizes Mango queries that use field selectors
-
-The disadvantages of this approach are that it uses a larger number of key-value
-pairs and has a higher overall storage overhead from the repeated common key
-prefixes. The new FoundationDB storage engine should eliminate some of the
-storage overhead.
-As noted in the [FoundationDB discussion about co-locating compute operations with data storage servers/nodes](https://forums.foundationdb.org/t/feature-request-predicate-pushdown/954/6), if we were to make use of this hypothetical feature, we would not get a guarantee that entire documents are co-located on one storage node, requiring us to do extra work should we want to, say, assemble a full `doc` to send to a map function. JS views would have a harder time, while Mango indexes with their explicit field declarations might get around this particular complexity more easily. For now, this is recorded here so we don't lose track of it later.
-
-
-# Key Changes
-
-- Individual strings within documents are limited to 100 KB each.
-- The "path" to a leaf value within a document can be no longer than 10 KB.
-- The entire JSON document is limited to 1 MB (1,000,000 bytes).
-
-Size limitations aside, this design preserves all of the existing API options
-for working with CouchDB documents.
-
-## Applications and Modules affected
-
-TBD depending on exact code layout going forward.
-
-## HTTP API additions
-
-None.
-
-## HTTP API deprecations
-
-None, aside from the more restrictive size limitations discussed in the Key
-Changes section above.
-
-# Security Considerations
-
-None have been identified.
-
-# References
-
-[Original mailing list discussion](https://lists.apache.org/thread.html/fb8bdd386b83d60dc50411c51c5dddff7503ece32d35f88612d228cc@%3Cdev.couchdb.apache.org%3E)
-
-[Draft RFC for revision metadata](https://github.com/apache/couchdb-documentation/blob/rfc/001-fdb-revision-model/rfcs/001-fdb-revision-metadata-model.md)
-
-[Current version of Tuple Layer documentation](https://github.com/apple/foundationdb/blob/6.0.18/design/tuple.md)
-
-# Acknowledgements
-
-We had lots of input on the mailing list in this discussion, thanks to
-
-- @banjiewen
-- @davisp
-- @ermouth
-- @iilyak
-- @janl
-- @mikerhodes
-- @rnewson
-- @vatamane
-- @wohali
-- Michael Fair.
-- Reddy B.
diff --git a/rfcs/005-all-docs-index.md b/rfcs/005-all-docs-index.md
deleted file mode 100644
index 9606efd..0000000
--- a/rfcs/005-all-docs-index.md
+++ /dev/null
@@ -1,207 +0,0 @@
----
-name: Formal RFC
-about: Submit a formal Request For Comments for consideration by the team.
-title: Implementation of _all_docs DB info metadata in FoundationDB
-labels: rfc, discussion
-assignees: ''
-
----
-
-# Introduction
-
-## Abstract
-
-This document describes how to maintain an index of all the documents in a
-database backed by FoundationDB, one sufficient to power the `_all_docs`
-endpoint. It also addresses the individual metadata fields included in the
-response to a `GET /dbname` request.
-
-## Requirements Language
-
-[NOTE]: # ( Do not alter the section below. Follow its instructions. )
-
-The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
-"SHOULD", "SHOULD NOT", "RECOMMENDED",  "MAY", and "OPTIONAL" in this
-document are to be interpreted as described in
-[RFC 2119](https://www.rfc-editor.org/rfc/rfc2119.txt).
-
-## Terminology
-
-[TIP]:  # ( Provide a list of any unique terms or acronyms, and their definitions here.)
-
----
-
-# Detailed Description
-
-## _all_docs
-
-Normal requests to the `_all_docs` index will be powered by a dedicated subspace
-containing a single key for each document in the database that has at least one
-deleted=false entry in the revisions subspace. This dedicated subspace can be
-populated by blind writes on each update transaction, as the revisions subspace
-ensures proper coordination of concurrent writers trying to modify the same
-document. The structure of the keys in this space looks like
-
-```
-(?BY_ID, DocID) = (ValueFormat, RevPosition, RevHash)
-```
-
-where the individual elements are defined as follows:
-
-* ValueFormat: enum for the value encoding, to enable schema evolution
-* DocID: the document ID
-* RevPosition: positive integer encoded using standard tuple layer encoding
-* RevHash: 16 bytes uniquely identifying the winning revision of this document
-
-If a transaction deletes the last "live" edit branch of a document, it must also
-clear the corresponding entry for the document from this subspace.
-
-A request that specifies `include_docs=true` can be implemented either by
-performing a range request against this subspace and then N additional range
-requests explicitly specifying the full revision information in the ?DOCS
-subspace, or by doing a full range scan directly against that subspace,
-discarding conflict bodies and any user data associated with deleted revisions.
-As the implementation choice there has no bearing on the actual data model we
-leave it unspecified in this RFC.
-
-## dbinfo
-
-The so-called "dbinfo" JSON object contains various bits of metadata about a
-database. Here's how we'll carry those forward:
-
-`db_name`: should be trivially accessible.
-
-`doc_count`: this will be maintained as a single key mutated using
-FoundationDB's atomic operations. Transactions that create a new document or
-re-create one where all previous edit branches had been deleted should increment
-the counter by 1.
-
-`doc_del_count`: as above, this is a key mutated using atomic operations.
-Transactions that tombstone the last deleted=false edit branch on a document
-should increment it by 1. Transactions that add a new deleted=false edit branch
-to a document where all previous edit branches were deleted must decrement it by
-1.
-
-The revisions model ensures that every transaction has enough information to
-know whether it needs to modify either or both of the above counters.
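
The decision logic can be summarized as a small function (a sketch, assuming the transaction knows whether the document existed and whether it had any live branch before and after the edit):

```python
def counter_deltas(existed_before, live_before, live_after):
    """Deltas to apply with FDB atomic ADD operations in the update
    transaction (argument names are illustrative)."""
    d = {"doc_count": 0, "doc_del_count": 0}
    if not live_before and live_after:
        d["doc_count"] += 1           # create, or re-create a deleted doc
        if existed_before:
            d["doc_del_count"] -= 1   # doc was previously fully deleted
    elif live_before and not live_after:
        d["doc_count"] -= 1           # last live branch tombstoned
        d["doc_del_count"] += 1
    return d

print(counter_deltas(existed_before=True, live_before=False, live_after=True))
```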
-
-`update_seq`: the most efficient way to retrieve this value is to execute a
-`get_key` operation using a `last_less_than` KeySelector on the end of the
-?CHANGES subspace, so no additional writes are required.
-
-`purge_seq`: TBD on a more detailed design for purge. If it ends up being
-entirely transactional then this could be fixed to `update_seq` or dropped
-entirely.
-
-### Data Sizes
-
-There are three distinct sizes that we currently track for every database:
-
-* `sizes.external`: described as the "number of bytes that would be required to
-  represent the contents outside of the database".
-* `sizes.active`: a theoretical minimum number of bytes to store this database
-  on disk.
-* `sizes.file`: the current number of bytes on disk.
-
-The relationship between `sizes.active` and `sizes.file` is used to guide
-decisions on database compaction. FoundationDB doesn't require compaction, and
-any distinction that might exist between these two quantities (e.g. from storage
-engine compression) is not surfaced up to the clients, so it probably doesn't
-make sense to have both.
-
-The current implementation of `sizes.external` does *not* measure the length of
-a JSON representation of the data, but rather the size of an uncompressed Erlang
-term representation of the JSON. This is a somewhat awkward choice as the
-internal Erlang term representation is liable to change over time (e.g. with the
-introduction of Maps in newer Erlang releases, or plausibly even a JSON decoder
-that directly emits the format defined in the document storage RFC).
-
-Assuming we can agree on a set of sizes and how they should be calculated, the
-implementation will require two pieces: a single key for each size, mutated by
-atomic operations, and a record of the size of each revision in the ?REVISIONS
-subspace so that a transaction can compute the delta for each document.
-
-### Clustering
-
-The `r`, `w`, `q`, and `n` values in the `cluster` object were introduced in
-CouchDB 2.x to describe the topology of a database and the default quorum
-settings for operations against it. If we wanted to bring these forward, here's
-how they'd be defined:
-
-* `r`: always fixed at 1
-
-* `w`: interpreted as the number of transaction logs that record a commit, this
-  is dependent on the `redundancy mode` for the underlying FoundationDB database
-
-* `n`: interpreted as number of storage servers that host a key, this is also
-  dependent on the `redundancy mode` for the underlying FoundationDB database
-
-* `q`: the closest analogue here would be to use the `get_boundary_keys` API and
-  report number of distinct ranges implied by the boundary keys
-
-This interpretation could lead to some surprises, though. For example, "r=1,
-w=4, n=3" is a popular configuration, but this is nonsensical for someone
-expecting to see Dynamo-style numbers. Ignoring backwards compatibility, the
-sensible thing is to point users toward the actual FoundationDB configuration
-information, and to deprecate this entire `cluster` object. Open for discussion.
-
-# Advantages and Disadvantages
-
-[NOTE]: # ( Briefly, list the benefits and drawbacks that would be realized should )
-[NOTE]: # ( the proposal be accepted for inclusion into Apache CouchDB. )
-
-# Key Changes
-
-The underlying transaction in FoundationDB must complete within 5 seconds, which
-implicitly limits the number of results that can be returned in a single
-_all_docs invocation.
-
-## Applications and Modules affected
-
-TBD depending on exact code layout going forward.
-
-## HTTP API additions
-
-None.
-
-## HTTP API deprecations
-
-The `total_rows` and `offset` fields are removed from the response to
-`_all_docs`, which now has the simpler form
-
-    {"rows": [
-        {"id":"foo", "key":"foo", "value":{"rev":"1-deadbeef..."}},
-        ...
-    ]}
-
-The following fields are removed in the dbinfo response:
-
-* `compact_running`
-
-* `disk_format_version`: this is a tricky one. We define "format versions" for
-  every single type of key we're storing in FoundationDB, and those versions
-  could vary on a key-by-key basis, so listing a single number for an entire
-  database is sort of ill-posed. 
-
-
-The following fields are already marked as deprecated and can be removed in the
-next major release, independent of the FoundationDB work:
-
-* `instance_start_time`
-* `other`
-* `data_size`
-* `disk_size`
-
-
-# Security Considerations
-
-None have been identified.
-
-# References
-
-[TIP]:  # ( Include any references to CouchDB documentation, mailing list discussion, )
-[TIP]:  # ( external standards or other links here. )
-
-# Acknowledgements
-
-[TIP]:  # ( Who helped you write this RFC? )
\ No newline at end of file
diff --git a/rfcs/006-mango-fdb.md b/rfcs/006-mango-fdb.md
deleted file mode 100644
index 19f5f02..0000000
--- a/rfcs/006-mango-fdb.md
+++ /dev/null
@@ -1,149 +0,0 @@
-# Mango RFC
-
----
-
-name: Formal RFC
-about: Submit a formal Request For Comments for consideration by the team.
-title: 'Mango JSON indexes in FoundationDB'
-labels: rfc, discussion
-assignees: ''
-
----
-
-[note]: # " ^^ Provide a general summary of the RFC in the title above. ^^ "
-
-# Introduction
-
-This document describes the data model, querying and indexing management for Mango JSON indexes with FoundationDB.
-
-## Abstract
-
-This document details the data model for storing Mango indexes. Indexes will be updated in the transaction that a document is written to FoundationDB. When an index is created on an existing database, a background task will build the index up to the Sequence that the index was created at.
-
-## Requirements Language
-
-[note]: # " Do not alter the section below. Follow its instructions. "
-
-The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
-"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
-document are to be interpreted as described in
-[RFC 2119](https://www.rfc-editor.org/rfc/rfc2119.txt).
-
-## Terminology
-
-`Sequence`: a 13-byte value formed by combining the current `Incarnation` of the database and the `Versionstamp` of the transaction. Sequences are monotonically increasing even when a database is relocated across FoundationDB clusters. See [RFC002](LINK TBD) for a full explanation.
-
----
-
-# Detailed Description
-
-Mango is a declarative JSON querying syntax that allows a user to retrieve documents based on a selector. Indexes can be defined to improve query performance. In CouchDB, Mango is a query layer built on top of Map/Reduce indexes. Each Mango query follows a two-step process: first, a subset of the selector is converted into a map query to be used with a predefined index, falling back to `_all_docs` if no indexes are available. Each document retrieved from the index is then matched against the full query selector.
-
-With CouchDB on FoundationDB, all newly created Mango indexes have the `interactive: true` option set, so Mango indexes are updated in the same transaction in which a document is added to or updated in the database.
-
-## Data Model
-
-### Index Definitions
-
-A Mango index is defined as:
-
-```json
-{
-  "name": "view-name",
-  "index": {
-    "fields": ["fieldA", "fieldB"]
-  },
-  "partial_filter_selector": {}
-}
-```
-
-The above index definition would be converted into a map index that looks like this:
-
-```json
-{
-  "_id": "_design/ddoc",
-  "language": "query",
-  "views": {
-    "view-name": {
-      "map": {
-        "fields": [{ "fieldA": "asc" }, { "fieldB": "asc" }],
-        "selector": {}
-      }
-    }
-  },
-  "options": [{ "autoupdate": false }, { "interactive": true }]
-}
-```
-
-- `{"autoupdate": false}` means that the index will not be auto updated in the background
-- `{"interactive": true}` configures the index to be updated in the document update transaction
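
The conversion shown above can be sketched as follows (the ddoc `_id` and the option encoding mirror the example; everything else here is illustrative):

```python
def index_to_ddoc(name, fields, selector=None):
    """Convert a Mango index definition into the equivalent "query"
    design document, per the example above."""
    return {
        "_id": "_design/ddoc",
        "language": "query",
        "views": {
            name: {
                "map": {
                    "fields": [{f: "asc"} for f in fields],
                    "selector": selector or {},
                }
            }
        },
        "options": [{"autoupdate": False}, {"interactive": True}],
    }

ddoc = index_to_ddoc("view-name", ["fieldA", "fieldB"])
print(ddoc["views"]["view-name"]["map"]["fields"])
```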
-
-### Index Definition
-
-Mango indexes are a layer on top of map indexes. So the index definition is the same as the map index definition.
-
-### Index Limits
-
-This design has certain defined limits for it to work correctly:
-
-- The index definition (`name`, `fields` and `partial_filter_selector`) cannot exceed the 64 KB FDB value limit
-- The sorted keys for an index cannot exceed the 8 KB key limit
-- To be able to update the index in the transaction that a document is updated in, there will have to be a limit on the number of Mango indexes for a database so that the transaction stays within the 10MB transaction limit. This limit is still TBD based on testing.
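
As a rough illustration of how the sorted-key limit might be enforced when emitting an index entry (the names, the size stand-in, and the id-suffix convention are all assumptions, not the actual design):

```python
def mango_index_key(doc, fields, max_key_bytes=8_000):
    """Build the sorted-key tuple for one index entry and enforce the
    key-size limit from the list above."""
    key = tuple(doc.get(f) for f in fields)
    # Stand-in for tuple-layer packing; real sizes would be measured on
    # the packed bytes.
    if len(repr(key).encode("utf-8")) > max_key_bytes:
        raise ValueError("index key exceeds the 8 KB limit")
    return key + (doc["_id"],)  # doc id appended to keep entries unique

print(mango_index_key({"_id": "d1", "fieldA": 1, "fieldB": "x"},
                      ["fieldA", "fieldB"]))
```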
-
-## Index building and management
-
-When an index is created on an existing database, the index will be updated in a background job up to the versionstamp that the index was added to the database at. The process for building a new index would be:
-
-1. Save the index to the database, along with a creation versionstamp, and set the index status to `building` so that it is not used to service any queries until it is updated. Add a job to `couch_jobs` to build the index.
-2. Any write requests (document updates) after the index definition is saved will update the index in the document update transaction. Index writers can assume that previous versions of the document have already been indexed.
-3. `couch_jobs` will start reading sections of the changes feed and building the index, this background process will keep processing the changes read until it reaches the creation versionstamp. Once it reaches that point, the index is up to date and `build_status` will be marked as `active` and the index can be used to service queries.
-4. There is some subtle behavior around step 3 that is worth mentioning. The background process is bound by the 5-second transaction limit, so it will process smaller parts of the changes feed, which means it won't have one consistent view of the changes feed throughout the index building process. This can lead to a conflict when the background process is adding a document to the index while, at the same time, a write request's transaction is updating the same document. There are two possible outcomes. If the background process wins, the write request gets a conflict; it will then process the document again, read the old values for that document, remove them from the index and add the new values. If the write request wins and the background process gets a conflict, the background process can simply retry: the document will have moved from its old position in the changes feed to a later one, so the background process won't see it and will move on to the next document.
-
-## Advantages
-
-- Indexes are kept up to date when documents are changed, meaning you can read your own writes
-- Makes Mango indexes first-class citizens and opens up the opportunity to create more Mango specific functionality
-
-## Disadvantages
-
-- FoundationDB currently does not allow CouchDB to do the document selector matching at the shard level. However, there is a discussion for this [Feature Request: Predicate pushdown](https://forums.foundationdb.org/t/feature-request-predicate-pushdown/954)
-
-## Key Changes
-
-- Mango indexes will be stored separately from Map/Reduce indexes.
-- Mango indexes will be updated when a document is updated.
-- A background process will build a new Mango index on an existing database
-- There are specific index limits mentioned in the Index Limits section.
-
-Index limitations aside, this design preserves all of the existing API options
-for working with CouchDB documents.
-
-## Applications and Modules affected
-
-The `mango` application will be modified to work with FoundationDB
-
-## HTTP API additions
-
-When querying any of the `_index` endpoints an extra field, `build_status`, will be added to the index definition.
-The `build_status` will either be `building` or `active`.
-
-## HTTP API deprecations
-
-None.
-
-# Security Considerations
-
-None have been identified.
-
-# References
-
-[Original mailing list discussion](https://lists.apache.org/thread.html/b614d41b72d98c7418aa42e5aa8e3b56f9cf1061761f912cf67b738a@%3Cdev.couchdb.apache.org%3E)
-
-# Acknowledgements
-
-Thanks to the following for participating in the design discussion:
-
-- @kocolosk
-- @willholley
-- @janl
-- @alexmiller-apple
diff --git a/rfcs/007-background-jobs.md b/rfcs/007-background-jobs.md
deleted file mode 100644
index a61420a..0000000
--- a/rfcs/007-background-jobs.md
+++ /dev/null
@@ -1,347 +0,0 @@
----
-name: Formal RFC
-about: Submit a formal Request For Comments for consideration by the team.
-title: 'Background jobs with FoundationDB'
-labels: rfc, discussion
-assignees: ''
-
----
-
-[NOTE]: # ( ^^ Provide a general summary of the RFC in the title above. ^^ )
-
-# Introduction
-
-This document describes a data model, implementation, and an API for running
-CouchDB background jobs with FoundationDB.
-
-## Abstract
-
-CouchDB background jobs are used for things like index building, replication
-and couch-peruser processing. We present a generalized model which allows
-creation, running, and monitoring of these jobs.
-
-## Requirements Language
-
-[NOTE]: # ( Do not alter the section below. Follow its instructions. )
-
-The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
-"SHOULD", "SHOULD NOT", "RECOMMENDED",  "MAY", and "OPTIONAL" in this
-document are to be interpreted as described in
-[RFC 2119](https://www.rfc-editor.org/rfc/rfc2119.txt).
-
-
-## General Concepts
-
-In the discussion below a job is considered to be an abstract unit of work. It
-is identified by a `JobId` and has a `JobType`. Client code creates a job,
-which is then executed by a job processor. A job processor is a
-language-specific execution unit that runs the job. It could be an Erlang
-process, a thread, or just a function.
-
-The API used to create jobs is called the `Job Creation API` and the API used
-by the job processors to run jobs is called the `Job Processing API`.
-
-### Job States
-
-Jobs in the system can be in one of three states. After a job is added and is
-waiting to run, it is considered to be `pending`. A job being executed by a
-job processor is considered to be `running`. When a job is neither `running`
-nor `pending`, it is considered to be `finished`. This is the state transition
-diagram:
-
-```
-         +------------>+
-         |             |
-         |             v
- -->[PENDING]     [RUNNING]--->[FINISHED]
-         ^             |           |
-         |             v           |
-         +-------------+<----------+
-```
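
The diagram above can also be read as a transition table (a sketch only; the framework itself derives a job's state from subspace membership, as described later):

```python
# Allowed state transitions: running jobs may be re-enqueued (on timeout
# or resubmission), and finished jobs may be re-added as pending.
TRANSITIONS = {
    "pending": {"running"},
    "running": {"pending", "finished"},
    "finished": {"pending"},
}

def can_transition(src, dst):
    return dst in TRANSITIONS.get(src, set())

print(can_transition("running", "pending"))
```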
-
-
-
-### Typical API Usage
-
-The general pattern of using this API might look like:
-
-  * Job creators:
-    - Call `add/4,5` to add a job
-    - Call `remove/3` to remove it
-
-  * Job processors:
-    - Call `accept/1,2` and wait until it gets a job to process.
-    - Periodically call `update/2,3` to prevent the job from being re-enqueued
-      due to idleness.
-    - When done running a job, call `finish/2,3`
-
-
-### Job Creation API
-
-```
-add(Tx, Type, JobId, JobData[, ScheduledTime]) -> ok | {error, Error}
-```
- - Add a job to be executed by a job processor
-   - `JobData` is a map with job type-specific data in it. It MAY contain any
-     data as long as it can be properly encoded as JSON.
-   - `ScheduledTime` is an optional parameter to schedule the job to be executed
-     at a later time. The format is an integer number of seconds since the UNIX epoch.
-   - If a job with the same `JobId` already exists:
-      * If it is `pending`, then the `ScheduledTime` is updated.
-      * If it is `running` then the job is flagged to be resubmitted when it finishes running.
-      * If it is `finished` then it will be re-enqueued as `pending`
-
-```
-remove(Tx, Type, JobId) -> ok | {error, Error}
-```
- - Remove a job. If it is running, it will be stopped.
-
-```
-get_job_data(Job) -> {ok, JobData} | {error, Error}
-```
- - Get `JobData` associated with the job.
-
-```
-get_job_state(Job) -> {ok, pending | running | finished} | {error, Error}
-```
- - Get the job's state.
-
-```
-set_type_timeout(Type, TimeoutSec) -> ok
-```
-
- - Set the activity timeout for a job type. This function needs to be called
-   once for each job type before any job of that type is added.
-
-```
-get_type_timeout(Type)  -> {ok, TimeoutSec} | {error, Error}
-```
-
- - Get the type timeout for a job type.
-
-```
-subscribe(Type, JobId) -> {ok, SubscriptionId, JobState}
-```
-
- - Subscribe to receive job state updates. Notifications can be received using
- the `wait/2,3` calls.
-
-```
-unsubscribe(SubscriptionId) -> ok
-```
- - Unsubscribe from receiving job state updates.
-
-```
-wait(SubscriptionId, Timeout) -> {Type, JobId, JobState} | timeout
-wait([SubscriptionId], Timeout) -> {Type, JobId, JobState} | timeout
-
-```
- - Receive subscription notification updates from one or more subscriptions.
-
-```
-wait(SubscriptionId, Type, Timeout) -> {Type, JobId, JobState} | timeout
-wait([SubscriptionId], Type, Timeout) -> {Type, JobId, JobState} | timeout
-
-```
- - Receive subscription notification updates for one particular state only.
-   Updates for any other state will be ignored. This function can be used, for
-   example, to wait until a job has finished running.
-
-
-### Job Processing API
-
-```
-accept(Type[, OptionsMap]) -> {ok, Job} | {error, Error}
-```
-
- - Get a `pending` job and start running it. `OptionsMap` is a map that MAY
-   have these parameters:
-    * `no_schedule` = `true` | `false` Use a more optimized dequeueing strategy
-      if time-based scheduling is not used and job IDs are known to start with
-      a random looking (UUID-like) prefix.
-    * `max_sched_time` = `SecondsSinceEpoch` : Only accept jobs which have been
-      scheduled before or at `SecondsSinceEpoch` UNIX time.
-    * `timeout` = `TimeoutMSec` : Maximum timeout to wait when there are no
-      pending jobs available. `0` means don't wait at all and return `{error,
-      not_found}` immediately, effectively making `accept/1,2` non-blocking.
-
-
-```
-update(Tx, Job[, JobData]) -> {ok, Job} | {error, halt | Error}
-
-```
- - This MAY be called to update a job's `JobData`. It MUST be called at least
-   as often as the configured timeout value for the job's type; not doing so
-   will result in the job being re-enqueued. If `halt` is returned, the job
-   processor MUST stop running the job. Job processors MUST call `update/2,3`
-   in any write transactions they perform in order to guarantee mutual
-   exclusion, i.e. that at most one job processor is executing a particular
-   job at a time.
-
-```
-finish(Tx, Job[, JobData]) -> ok | {error, halt | Error}
-```
- - Called by the job processor when it has finished running the job. The
-   `JobData` parameter MAY contain a final result. If `halt` is returned, it
-   means that the `JobData` value wasn't updated. A job processor MUST call
-   `update/2,3` or `finish/2,3` in any write transaction it performs in order
-   to guarantee mutual exclusion, so that at most one job processor is
-   executing a particular job at a time.
-
-```
-resubmit(Tx, Job[, ScheduledTime]) -> {ok, Job} | {error, Error}
-```
- - Mark the job for resubmission. The job won't be re-enqueued until
-   `finish/2,3` is called.
-
-```
-is_resubmitted(Job) -> true | false
-```
- - Check if the job object was marked for resubmission. The job processor MAY
-   call this function on the `Job` object returned from the `update/2,3`
-   function to determine whether the job creator had requested the job to be
-   resubmitted. The job won't actually be re-enqueued until the `finish/2,3`
-   function is called.
-
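-The processing API above can be sketched as a simple worker loop. This is a
-hedged illustration: the `couch_jobs` module name, the `Tx` transaction handle,
-and the `JobData` map fields are assumptions, not part of this RFC:
-
-```
-worker_loop(Type, Tx) ->
-    case couch_jobs:accept(Type, #{timeout => 5000}) of
-        {ok, Job} ->
-            % update/2,3 MUST be called more often than the configured
-            % timeout for this type; {error, halt} means stop the job.
-            case couch_jobs:update(Tx, Job, #{<<"progress">> => 50}) of
-                {ok, Job1} ->
-                    ok = couch_jobs:finish(Tx, Job1, #{<<"result">> => <<"done">>});
-                {error, halt} ->
-                    halted
-            end,
-            worker_loop(Type, Tx);
-        {error, not_found} ->
-            % No pending jobs appeared before the timeout; try again
-            worker_loop(Type, Tx)
-    end.
-```
-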
-# Framework Implementation Details
-
-This section discusses how some of the framework functionality is implemented.
-
-All the coordination between job creation and job processing is done via
-FoundationDB. There is a top level `"couch_jobs"` subspace. All the subspaces
-mentioned below will be under this subspace.
-
-Each job managed by the framework will have an entry in the main `jobs table`.
-Pending jobs are added to a `pending queue` subspace. When they are accepted
-by a job processor, the jobs are removed from the pending queue and added to
-the `active jobs` subspace.
-
-Job states referenced in the API section are essentially defined based on the
-presence in any of these subspaces:
-
- * If a job is in the `pending queue` it is considered `pending`
- * If a job is in the `active jobs` subspace, then it is `running`
- * If a job is not `pending` or `running` then it is considered `finished`
-
-### Activity Monitor
-
-Job processors may suddenly crash and stop running their jobs. In that case the
-framework will automatically make those jobs `pending` again after a timeout,
-which ensures the jobs continue to make progress. To avoid being re-enqueued as
-`pending` due to the timeout, each job processor must periodically call the
-`update/2,3` function. That functionality is implemented by the `activity
-monitor`. It periodically watches a per-type versionstamped key, then scans the
-`active jobs` subspace for any `running` jobs which haven't updated their
-entries during the timeout period.
-
-### Subscription Notifications
-
-Subscription notifications are managed separately for each job type. They use
-a per-type versionstamped watch to monitor which jobs have been updated since
-the last time notifications were delivered to the subscribers.
-
-### Data Model
-
- * `("couch_jobs", "data", Type, JobId) = (Sequence, JobLock, ScheduledTime, Resubmit, JobData)`
- * `("couch_jobs", "pending", Type, ScheduledTime, JobId) = ""`
- * `("couch_jobs", "watches_pending", Type) = Sequence`
- * `("couch_jobs", "watches_activity", Type) = Sequence`
- * `("couch_jobs", "activity_timeout", Type) = ActivityTimeout`
- * `("couch_jobs", "activity", Type, Sequence) = JobId`
-
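-As an illustration of the data model above, a hypothetical `view_build` job
-type might be laid out as follows (the job IDs, lock value, and `JobData`
-fields are invented for this example):
-
-```
-("couch_jobs", "data", "view_build", "JobId1") =
-    (Sequence1, WorkerLockUUID, 0, false, {"db": "db1", "ddoc": "_design/d1"})
-("couch_jobs", "pending", "view_build", 0, "JobId2") = ""
-("couch_jobs", "activity", "view_build", Sequence1) = "JobId1"
-```
-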
-
-### Job Lifecycle Implementation
-
-This section describes how the framework implements some of the API functions.
-
- - `add/4,5` :
-   * Add the new job to the main jobs table.
-   * If a job with the same `JobId` exists, resubmit the job.
-   * Update `"pending"` watch for the type with a new versionstamp and bump its
-     counter.
-   * `JobLock` is set to `null`.
-
- - `remove/3` :
-   * Job is removed from the main jobs table.
-   * The job processor will get a `halt` error during its next `update/2,3`
-     call and know to stop running the job.
-
- - `accept/1,2` :
-   * Generate a unique `JobLock` UUID.
-   * Attempt to dequeue the item from the pending queue, then assign it the
-     `JobLock` in the jobs table.
-   * Create an entry in the `"activity"` subspace.
-   * If there are no pending jobs, get a watch for the `"pending"` queue and
-     wait until it fires, then try again.
-
- - `update/2,3`:
-   * If the job is missing from the main jobs table, return `halt`.
-   * Check if `JobLock` matches, otherwise return `halt`.
-   * Delete old `"activity"` sequence entry.
-   * Maybe update `JobData`.
-   * Create a new `"activity"` sequence entry and record the new sequence in
-     the main jobs table.
-   * Update `"watches"` sequence for that job type.
-
- - `finish/2,3`:
-   * If the job is missing from the main jobs table, return `halt`.
-   * Check if `JobLock` matches, otherwise return `halt`.
-   * Delete old `"activity"` sequence entry.
-   * If `Resubmit` field is `true`, re-enqueue the job, and set `Resubmit` to `false`.
-   * Set the job table's `JobLock` to `null`.
-
- - `resubmit/2,3`:
-   * Set the `Resubmit` field to `true`.
-   * The job will be re-enqueued when `finish/2,3` is called.
-
-
-# Advantages and Disadvantages
-
-The main advantage is having a central way to coordinate batch processing
-across a cluster, with a single, unified API.
-
-
-## Possible Future Extensions
-
-Since all job keys and values are just FDB tuples and JSON encoded objects, in
-the future it might be possible to accept external jobs, not just jobs defined
-by the CouchDB internals. Also, since workers could be written in any language
-as long as they can talk to the FDB cluster and follow the behavior described
-in the design, this opens the possibility of having custom (user defined)
-workers of different types. But that is out of scope for the current RFC
-discussion.
-
-# Key Changes
-
- - New job execution framework
- - A single global job queue for each job type
- - An activity monitor to ensure jobs continue to make progress
-
-## Applications and Modules Affected
-
-Replication, indexing, couch-peruser
-
-## HTTP API Additions
-
-None. However, in the future, it might be useful to have an API to query and
-monitor the state of all the queues and workers.
-
-## HTTP API Deprecations
-
-None have been identified.
-
-# Security Considerations
-
-None have been identified.
-
-# References
-
-[Original mailing list discussion](https://lists.apache.org/thread.html/9338bd50f39d7fdec68d7ab2441c055c166041bd84b403644f662735@%3Cdev.couchdb.apache.org%3E)
-
-# Co-authors
-  - @davisp
-
-# Acknowledgments
- - @davisp
- - @kocolosk
- - @garrensmith
- - @rnewson
- - @mikerhodes
- - @sansato
diff --git a/rfcs/008-map-indexes.md b/rfcs/008-map-indexes.md
deleted file mode 100644
index b10c4d7..0000000
--- a/rfcs/008-map-indexes.md
+++ /dev/null
@@ -1,243 +0,0 @@
-# Map indexes RFC
-
----
-
-name: Formal RFC
-about: Submit a formal Request For Comments for consideration by the team.
-title: 'Map indexes on FoundationDB'
-labels: rfc, discussion
-assignees: ''
-
----
-
-## Introduction
-
-This document describes the data model and index management for building and querying map indexes.
-
-## Abstract
-
-Map indexes will have their data model stored in FoundationDB. Each index is grouped via its design doc's view signature. An index will store the index's key/values, the size of the index, and the last sequence number from the changes feed used to update the index.
-
-Indexes will be built using the background jobs API, `couch_jobs`, and will use the changes feed. There will be new size limitations on the keys (10KB) and values (100KB) that are emitted from a map function.
-
-## Requirements Language
-
-[note]: # " Do not alter the section below. Follow its instructions. "
-
-The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
-"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
-document are to be interpreted as described in
-[RFC 2119](https://www.rfc-editor.org/rfc/rfc2119.txt).
-
-## Terminology
-
-`Sequence`: a 13-byte value formed by combining the current `Incarnation` of the database and the `Versionstamp` of the transaction. Sequences are monotonically increasing even when a database is relocated across FoundationDB clusters. See [RFC002](LINK TBD) for a full explanation.
-
-`View Signature`: An md5 hash of the views, options, and view language defined in a design document.
-`Interactive view`: A view updated in the same transaction in which the document is added/updated in the database.
-
----
-
-## Detailed Description
-
-CouchDB views are used to create secondary indexes in a database. An index is defined by creating map/reduce functions in a design document. This document describes building the map indexes on top of FoundationDB (FDB).
-There are two ways to build a secondary index: via a background job or in the same transaction that the document is added to the database. Building the index via a background job is the default way that a map index will be built. An example map function to do this is shown below:
-
-```json
-{
-  "_id": "_design/design-doc-id",
-  "_rev": "1-8d361a23b4cb8e213f0868ea3d2742c2",
-  "views": {
-    "map-view": {
-      "map": "function (doc) {\n  emit(doc._id, 1);\n}"
-    }
-  },
-  "language": "javascript"
-}
-```
-
-Adding `interactive: true` to the option field of an index will configure the index to be updated in the same transaction that the document is added to the database. This functionality has primarily been added to support Mango indexes but can work with map indexes. An example of a map index configured is shown below:
-
-```json
-{
-  "_id": "_design/design-doc-id",
-  "_rev": "1-8d361a23b4cb8e213f0868ea3d2742c2",
-  "views": {
-    "map-view": {
-      "map": "function (doc) {\n  emit(doc._id, 1);\n}"
-    }
-  },
-  "language": "javascript",
-  "options": [{ "interactive": true }]
-}
-```
-
-Interactive views are built in a two-step process. When an index is added to the database, a background job is created to build the index up to the change sequence (the creation versionstamp) at which the index was added. Any new documents added after the index was created will be indexed in the transaction in which the document is added to the database. If a query for an interactive view is received before the background job is complete, CouchDB will wait until the background job is complete before serving the request.
-
-### Data model
-
-The data model for a map index is:
-
-```
-% View build sequence - The change sequence that the index has been updated to.
-(<database>, ?DB_VIEWS, ?VIEW_INFO, ?VIEW_UPDATE_SEQ, <view_signature>) = Sequence
-
-% Interactive View Creation Versionstamp
-(<database>, ?DB_VIEWS, ?VIEW_INFO, ?VIEW_CREATION_VS, <signature>) = Versionstamp
-% Interactive View Build Status
-(<database>, ?DB_VIEWS, ?VIEW_INFO, ?VIEW_BUILD_STATUS, <signature>) = INDEX_BUILDING | INDEX_READY
-
-% Number of rows in the index
-(<database>, ?DB_VIEWS, ?VIEW_INFO, ?VIEW_ROW_COUNT, ?VIEW_ID_INFO, <view_id>, <view_signature>) = <row_count>
-% Key/Value size of index
-(<database>, ?DB_VIEWS, ?VIEW_INFO, ?VIEW_KV_SIZE, <view_signature>, <view_id>) = <kv_size>
-
-% Id index, used to track which keys are in the index for each document
-(<database>, ?DB_VIEWS, ?VIEW_DATA, <view_signature>, ?VIEW_ID_RANGE, <_id>, <view_id>) = [total_keys, total_size, unique_keys]
-% The key/values for the index
-(<database>, ?DB_VIEWS, ?VIEW_DATA, <view_signature>, ?VIEW_MAP_RANGE, <view_id>, {<key>, <_id>}, <dupe_id>) = {<emitted_key>, <emitted_value>}
-```
-
-Each field is defined as:
-
-- `database` is the specific database namespace
-- `?DB_VIEWS` is the views namespace.
-- `<view_signature>` is the design document's `View Signature`
-- `?VIEW_INFO` is the view information namespace
-- `?VIEW_UPDATE_SEQ` is the change sequence namespace
-- `?VIEW_ID_RANGE` is the map id index namespace
-- `?VIEW_MAP_RANGE` is the map namespace
-- `_id` is the document id
-- `view_id` id of a view defined in the design document
-- `key` is the encoded emitted row key from a map function
-- `count` is a value that is incremented to allow duplicate keys to be emitted for a document
-- `emitted_key` is the emitted key from the map function
-- `emitted_value` is the emitted value from the map function
-- `row_count` number of rows in the index
-- `kv_size` size of the index
-- `total_keys` is the number of keys emitted by a document
-- `total_size` is the size of the key/values emitted by the document
-- `unique_keys` is the unique keys emitted by the document
-- `dupe_id` the duplication id to allow multiple documents to emit a key/value
-
-The process flow for a document to be indexed in the background is as follows:
-
-1. FDB Transaction is started
-1. Read the documents from the changes feed (the number of documents to read at one time is configurable, the default is 100)
-1. The document is passed to the javascript query server and run through all the map functions defined in the design document
-1. The view's sequence number is updated to the document's sequence in the changes feed.
-1. If the document was deleted and was previously in the view, the previous keys for the document are read from `?VIEW_ID_RANGE` and then cleared from the `?VIEW_MAP_RANGE`. The row count and size are also decreased.
-1. If the document is being updated and was previously added to the index, then the previous keys for the document are read from `?VIEW_ID_RANGE` and cleared from the `?VIEW_MAP_RANGE`, and then the index is updated with the latest emitted keys and values.
-1. The emitted keys are stored in the `?VIEW_ID_RANGE`
-1. The emitted keys are encoded then added to the `?VIEW_MAP_RANGE` with the emitted keys and value stored
-1. The `?VIEW_ROW_COUNT` is incremented
-1. The `?VIEW_KV_SIZE` is increased
-
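-For the example `map-view` shown earlier (which emits `emit(doc._id, 1)`),
-indexing a document with id `"doc1"` would schematically produce rows like the
-following (the encodings and sizes are illustrative only):
-
-```
-(<db>, ?DB_VIEWS, ?VIEW_DATA, <sig>, ?VIEW_ID_RANGE, "doc1", <view_id>) = [1, <size>, ["doc1"]]
-(<db>, ?DB_VIEWS, ?VIEW_DATA, <sig>, ?VIEW_MAP_RANGE, <view_id>, {"doc1", "doc1"}, 0) = {"doc1", 1}
-```
-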
-### Emitted Keys and Values limits
-
-If we have a design document like the following:
-
-```js
-{
-  "_id": "_design/design-doc-id",
-  "_rev": "1-8d361a23b4cb8e213f0868ea3d2742c2",
-  "views": {
-    "map-view": {
-      "map": "function (doc) {
-          emit(doc._id, doc.location);
-          emit([doc._id, doc.value], doc.name);
-        }"
-    }
-  },
-  "language": "javascript",
-  "options": [{"interactive":  true}]
-}
-```
-
-Each emit would be a new key/value row in the map index. Each key row cannot exceed 8 KB and each value row cannot exceed 64 KB.
-If a document is emitted as a value, that document is not allowed to exceed 64 KB.
-
-### Key ordering
-
-FoundationDB orders keys by byte value, which is not how CouchDB orders keys. To maintain CouchDB's view collation, a type value will need to be prepended to each key so that the correct sort order of null < boolean < numbers < strings < arrays < objects is maintained.
-
-In CouchDB 2.x, strings are compared via ICU. To achieve this with FoundationDB, an ICU sort string will be generated upfront for every string and used for index ordering instead of the original string.
-
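-A minimal sketch of the type-tag approach in Erlang. The tag values and the
-`icu_sort_string/1` helper are hypothetical, shown only to illustrate the
-collation idea:
-
-```
-% Prepend a type tag so byte-wise FDB key ordering matches CouchDB's
-% collation: null < booleans < numbers < strings < arrays < objects.
-collation_key(null)                 -> {0, null};
-collation_key(B) when is_boolean(B) -> {1, B};
-collation_key(N) when is_number(N)  -> {2, N};
-collation_key(S) when is_binary(S)  -> {3, icu_sort_string(S)};
-collation_key(L) when is_list(L)    -> {4, [collation_key(E) || E <- L]};
-collation_key({Props})              -> {5, [{K, collation_key(V)} || {K, V} <- Props]}.
-```
-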
-### Index building
-
-An index will be built and updated via a [background job worker](https://github.com/apache/couchdb-documentation/blob/main/rfcs/007-background-jobs.md). When a request for a view is received, the request process will add a job item onto the background queue for the index to be updated. A worker will take the item off the queue and update the index. Once the index has been built, the background job server will notify the request process that the index is up to date. The request process will then read from the index and return the results. This process can be optimised in two ways. Firstly, a new couch_events system could be used to listen for document changes in a database and add indexing jobs to the queue to keep indexes warm. Secondly, if the index only requires a small update, the index could be updated in the HTTP request process rather than via the background queue.
-
-Initially, the building of an index will be a single worker running through the changes feed and creating the index. In the future, we plan to parallelise that work so that multiple workers could build the index at the same time. This will reduce build times.
-
-### View clean up
-
-When a design document is changed, new indexes will be built and grouped under a new `View Signature`. The old map indexes will still be in FDB. Cleaning them up will be supported via the existing [/db/\_view_cleanup](https://docs.couchdb.org/en/latest/api/database/compact.html#db-view-cleanup) endpoint.
-
-A future optimisation would be to automate this and have CouchDB monitor design doc changes and then clean up old view indexes via a background worker.
-
-### stale = "ok" and stable = true
-
-With the consistency guarantees CouchDB will get from FDB, `stable = true` will no longer be an option that CouchDB supports, so the argument would be ignored. Similarly, `stale = "ok"` would now be translated to `update = false`.
-
-### Size limits
-
-- The sum of all keys emitted for a document cannot exceed 64 KB
-- Emitted keys will not be able to exceed 8 KB
-- Values cannot exceed 64 KB
-- There could be rare cases where the number of key-value pairs emitted by a map function leads to a transaction either exceeding 10 MB in size, which isn't allowed, or exceeding 5 MB, which impacts the performance of the cluster. In these situations, CouchDB will send an error.
-
-These limits are the hard limits imposed by FoundationDB. We will have to set the user-imposed limits lower than that, as we store more information than just the user's keys and values.
-
-## Advantages
-
-- Map indexes will work on FoundationDB with the same behaviour as current CouchDB 1.x
-- Options like `stale = "ok"` and `stable = true` will no longer be needed
-
-## Disadvantages
-
-- Size limits on key and values
-
-## Key Changes
-
-- Indexes are stored in FoundationDB
-- Indexes will be built via the background job queue
-- ICU sort strings will be generated ahead of time for each key that is a string
-
-## Applications and Modules affected
-
-- couch_mrview will be removed and replaced with a new couch_views OTP application
-
-## HTTP API additions
-
-The API will remain the same.
-
-## HTTP API deprecations
-
-- `stable = true` is no longer supported
-- `stale = "ok"` is now converted to `update = false`
-- reduce functions are not supported in this RFC
-
-## Security Considerations
-
-None have been identified.
-
-## Future improvements
-
-Two future improvements we could look to do that builds upon this work:
-
-- Better error handling for user functions. Currently, if a document fails when run through the map function, a user has to read the logs to discover that. We could look at adding an error-index and a new API endpoint.
-- Parallel building of the index. In this RFC, the index is only built sequentially by one index worker. In the future, it would be nice to split that work up and parallelize the building of the index.
-
-## References
-
-- TBD link to background tasks RFC
-- [Original mailing list discussion](https://lists.apache.org/thread.html/5cb6e1dbe9d179869576b6b2b67bca8d86b30583bced9924d0bbe122@%3Cdev.couchdb.apache.org%3E)
-
-## Acknowledgements
-
-Thanks to everyone that participated in the mailing list discussion
-
-- @janl
-- @kocolosk
-- @willholley
-- @mikerhodes
diff --git a/rfcs/009-exunit.md b/rfcs/009-exunit.md
deleted file mode 100644
index 6f6bb85..0000000
--- a/rfcs/009-exunit.md
+++ /dev/null
@@ -1,122 +0,0 @@
----
-name: Formal RFC
-about: Submit a formal Request For Comments for consideration by the team.
-title: 'Use ExUnit testing framework for unit testing'
-labels: rfc, discussion
-assignees: ''
-
----
-
-# Introduction
-
-With the upgrade of the supported Erlang version and the introduction of Elixir
-into our integration test suite, we have an opportunity to replace the
-currently used eunit (for new tests only) with the Elixir-based ExUnit.
-
-## Abstract
-
-The eunit testing framework has a number of issues which make it very hard to use.
-We already use an alternative testing framework, ExUnit, for integration tests.
-The proposal is to extend the use of ExUnit to CouchDB unit tests as well.
-
-## Requirements Language
-
-[NOTE]: # ( Do not alter the section below. Follow its instructions. )
-
-The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
-"SHOULD", "SHOULD NOT", "RECOMMENDED",  "MAY", and "OPTIONAL" in this
-document are to be interpreted as described in
-[RFC 2119](https://www.rfc-editor.org/rfc/rfc2119.txt).
-
-## Terminology
-
-[TIP]:  # ( Provide a list of any unique terms or acronyms, and their definitions here.)
-
----
-
-# Detailed Description
-
-The eunit testing framework is very hard to maintain. In particular, it has the
-following problems:
-- the process structure is designed in such a way that a failure in the setup or
-  teardown of one test affects the execution environment of subsequent tests,
-  which makes it really hard to locate where the problem is coming from
-- inline tests in the same module as the functions they test might be skipped
-- incorrect usage of ?assert vs ?_assert is not detectable since it makes tests pass
-- there is a weird (and hard to debug) interaction when eunit is used in combination with meck
-   - https://github.com/eproxus/meck/issues/133#issuecomment-113189678
-   - https://github.com/eproxus/meck/issues/61
-   - meck:unload() must be used instead of meck:unload(Module)
-- teardown is not always run, which affects all subsequent tests
-- grouping of tests is tricky
-- it is hard to group tests so individual tests have meaningful descriptions
-- the eunit implementation of `{with, Tests}` doesn't detect test names correctly
-- it is hard to skip certain tests when needed
-
-ExUnit shouldn't have these problems:
-- the `on_exit` function is reliable in ExUnit
-- it is easy to group tests using the `describe` directive
-- code generation is trivial, which makes it possible to generate tests from a formal spec (if/when we have one)
-
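-A sketch of what a unit test could look like in ExUnit; the module and helper
-function names here are hypothetical:
-
-```
-defmodule CouchViewsTest do
-  use ExUnit.Case
-
-  describe "map indexes" do
-    setup do
-      db = TestUtil.create_db()
-      # on_exit is reliably run even if the test crashes
-      on_exit(fn -> TestUtil.delete_db(db) end)
-      {:ok, db: db}
-    end
-
-    test "indexing an empty database returns no rows", %{db: db} do
-      assert CouchViews.query(db, "map-view") == {:ok, []}
-    end
-  end
-end
-```
-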
-# Advantages and Disadvantages
-
-## Advantages
-
-- Modern testing framework
-- Easy code generation of tests from a formal spec
-- Reliability of teardown functions
-- Increased productivity due to smart test scheduling (run only failing tests)
-- Unified style enforced by code linter
-- Possibly more contributions from Elixir community
-- We already use ExUnit for integration tests
-- Support for test tags which could help us to introduce schedule of tests ([see #1885](https://github.com/apache/couchdb/issues/1885)).
-  We could run tests in the optimal order: 
-    - recently modified
-    - couch_db API based
-    - fabric API based
-    - http API based
-    - performance tests
-    - property based tests
-
-## Disadvantages
-
-- New language & tooling to learn
-- We make Elixir a required dependency (currently it is somewhat optional)
-
-# Key Changes
-
-- move all eunit tests from `<app>/test/*.erl` into `<app>/test/eunit/*.erl`
-- add `make exunit` target to Makefile
-- move `.credo.exs` (linter configuration) into the root of the project
-- create `<app>/test/exunit/` directory to hold new test suites
-- add different test helpers under `test/elixir/lib`
-- add `mix.exs` into the root of the project
-
-## Applications and Modules affected
-
-There is a possibility that we would need to modify the content of `test/elixir/lib`
-to have a similar experience in both the integration and unit test frameworks.
-
-## HTTP API additions
-
-N/A
-
-## HTTP API deprecations
-
-N/A
-
-# Security Considerations
-
-Production code is not updated. Therefore there is no security risk.
-
-# References
-
-- [Discussion on mailing list](https://lists.apache.org/thread.html/f842ca637f7cb06b34af699a793cab0a534e65970172e8117bf0b228@%3Cdev.couchdb.apache.org%3E)
-
-# Acknowledgements
-
-Thanks to everyone who participated on the mailing list discussion
-
-- @davisp
-- @wohali
-- @garrensmith
\ No newline at end of file
diff --git a/rfcs/011-opentracing.md b/rfcs/011-opentracing.md
deleted file mode 100644
index 717511c..0000000
--- a/rfcs/011-opentracing.md
+++ /dev/null
@@ -1,236 +0,0 @@
----
-name: Opentracing support
-about: Adopt industry standard distributed tracing solution
-title: 'Opentracing support'
-labels: rfc, discussion
-assignees: ''
-
----
-
-Adopt an industry-standard, vendor-neutral API and instrumentation for distributed tracing.
-
-# Introduction
-
-Collecting profiling data is very tricky at the moment.
-Developers have to run generic profiling tools which are not aware of CouchDB specifics.
-This makes it hard to do performance optimization work. We need a tool which would
-allow us to get profiling data from specific points in the codebase.
-This means code instrumentation.
-
-## Abstract
-
-There is the https://opentracing.io/ project, which defines a vendor-neutral API and
-instrumentation for distributed tracing. In Erlang it is implemented by one of the
-following libraries:
- - [otters](https://github.com/project-fifo/otters) an extended and more performant version of `otter`
- - [opentracing-erlang](https://github.com/opentracing-contrib/opentracing-erlang) the `otter` version donated to the opentracing project
- - [original otter](https://github.com/Bluehouse-Technology/otter)
- - [passage](https://github.com/sile/jaeger_passage)
- 
-The opentracing philosophy is founded on three pillars:
-- Low overhead: the tracing system should have a negligible performance impact on running services.
-- Application-level transparency: programmers should not need to be aware of the tracing system
-- Scalability
-
-The main addition is to include one of the above-mentioned libraries and add instrumentation points into the codebase.
-In the initial implementation, a new span would be started on every HTTP request.
-The following HTTP headers would be used to link a tracing span with application-specific traces:
-- X-B3-ParentSpanId
-- X-B3-TraceId
-- b3
-
-More information about the use of these headers can be found [here](https://github.com/openzipkin/b3-propagation).
-Open tracing [specification](https://github.com/opentracing/specification/blob/main/specification.md) 
-has a number of [conventions](https://github.com/opentracing/specification/blob/main/semantic_conventions.md) 
-which would be good to follow.
-
-In a nutshell the idea is:
-- Take the reference to the parent span from one of the supported headers and pass it to the `span_start` call.
-- Construct action name to use in `span_start` call.
-- Call `span_start` from `chttpd:handle_request_int/1`.
-- Pass span in `#httpd{}` record
-- Pass `trace_id` and `parent_span_id` through the stack (extend records if needed)
-- Attach span tags to better identify trace events.
-- Attach span logs at important instrumentation points.
-- Forward spans to external service.
-
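-The flow above might look roughly like the following in the proposed
-`couch_trace` interface module. All function names here are assumptions; the
-RFC only proposes that such a module exist:
-
-```
-handle_request_int(HttpReq) ->
-    % Extract the parent span reference from the X-B3-* / b3 headers, if any
-    ParentCtx = couch_trace:from_headers(HttpReq),
-    Span = couch_trace:span_start(<<"chttpd.handle_request">>, ParentCtx),
-    Span1 = couch_trace:tag(Span, #{<<"span.kind">> => <<"server">>}),
-    try
-        process_request(HttpReq#httpd{span = Span1})
-    after
-        couch_trace:span_end(Span1)
-    end.
-```
-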
-## Requirements Language
-
-The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
-"SHOULD", "SHOULD NOT", "RECOMMENDED",  "MAY", and "OPTIONAL" in this
-document are to be interpreted as described in
-[RFC 2119](https://www.rfc-editor.org/rfc/rfc2119.txt).
-
-## Terminology
-
-- [span](https://github.com/opentracing/specification/blob/1.1/specification.md#the-opentracing-data-model): The "span"
-  is the primary building block of a distributed trace, representing an individual unit of work done in a distributed system.
-  Each Span encapsulates the following state:
-   - An operation name
-   - A start timestamp
-   - A finish timestamp
-   - A set of zero or more key:value `Span Tags`. 
-   - A set of zero or more structured logs (key:value `Span Logs`).
-   - A `SpanContext`
-   - `References` to zero or more causally-related `Spans`
-
----
-
-# Detailed Description
-
-## Selection of a library
-
-As mentioned earlier, there are two flavours of libraries. None of them is perfect for all use cases.
-The biggest differences in between `otters` and `passage` are:
-
-|                                | otters      | passage                   |
-| ------------------------------ | ----------- | ------------------------- |
-| reporting protocol             | http        | udp                       |
-| filtering                      | custom DSL  | sampling callback module  |
-| reporter                       | zipkin only | jaeger or plugin          |
-| functional API                 |      +      |             +             |
-| process dictionary             |      +      |             +             |
-| process based span storage     |      +      |             -             |
-| send event in batches          |      +      |             -             |
-| sender overload detection      |      -      |             +             |
-| report batches based on        | timer       | spans of single operation |
-| design for performance         |      +      |             -             |
-| design for robustness at scale |      -      |             +             |
-| counters                       |      +      |             -             |
-| sampling based on duration     |      +      |             -             |
-| number of extra dependencies   |      1      |             3             |
-
-In order to allow future replacement of a tracing library it would be desirable to create an interface module `couch_trace`.
-The `otters` library would be used for the first iteration.
-
-## Configuration
-
-The `otters` library uses the application environment to store its configuration.
-It also has a facility to compile the filtering DSL into a beam module.
-The filtering DSL looks like the following: `<name>([<condition>]) -> <action>.`.
-The safety of the DSL compiler is unknown. Therefore, modification of tracing settings via configuration over HTTP wouldn't be possible.
-The otter-related section of the config, `tracing.filters`, would be protected by BLACKLIST_CONFIG_SECTIONS.
-The configuration of tracing would only be allowed from remsh or by modification of the ini file.
-The configuration for otter filters would be stored in couch_config as follows:
-```
-[tracing.filters]
-
-<name> = ([<condition>]) -> <action>.
-```
-
-## Tracing related HTTP headers
-
-The following headers would be supported on the request:
-- X-B3-ParentSpanId : 16 lower-hex characters
-- X-B3-TraceId      : 32 lower-hex characters
-- X-B3-SpanId       : 16 lower-hex characters
-- b3 : {TraceId}-{SpanId}-{SamplingState}-{ParentSpanId}
-  - the `SamplingState` would be ignored
-
-The following headers would be supported on the response:
-- X-B3-ParentSpanId : 16 lower-hex characters
-- X-B3-TraceId      : 32 lower-hex characters
-- X-B3-SpanId       : 16 lower-hex characters
-
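-Parsing the single `b3` header could look like the following sketch (error
-handling omitted; the function name is illustrative):
-
-```
-parse_b3(Header) ->
-    % b3 : {TraceId}-{SpanId}-{SamplingState}-{ParentSpanId}
-    case binary:split(Header, <<"-">>, [global]) of
-        [TraceId, SpanId, _SamplingState, ParentSpanId] ->
-            {TraceId, SpanId, ParentSpanId};
-        [TraceId, SpanId | _] ->
-            % SamplingState and ParentSpanId are optional
-            {TraceId, SpanId, undefined}
-    end.
-```
-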
-## Conventions
-
-The conventions below are based on [conventions from opentracing](https://github.com/opentracing/specification/blob/main/semantic_conventions.md#standard-span-tags-and-log-fields).
-All tags are optional since they are just a recommendation from opentracing to hint visualization and filtering tools.
-
-### Span tags
-
-| Span tag name    | Type    | Notes and examples                                  |
-| ---------------- | ------- | --------------------------------------------------- |
-| component        | string  | couchdb.<app> (e.g. couchdb.chttpd, couchdb.fabric) |
-| db.instance      | string  | for fdb-layer would be fdb connection string        |
-| db.type          | string  | for fdb-layer would be fdb                          |
-| error            | bool    | `true` if operation failed                          |
-| http.method      | string  | HTTP method of the request for the associated Span  |
-| http.status_code | integer | HTTP response status code for the associated Span   |
-| http.url         | string  | sanitized URL of the request in URI format          |
-| span.kind        | string  | Either `client` or `server` (RPC roles).            |
-| user             | string  | Authenticated user name                             |
-| db.name          | string  | Name of the accessed database                       |
-| db.shard         | string  | Name of the accessed shard                          |
-| nonce            | string  | Nonce used for the request                          |
-
-### Log fields
-
-| Span log field name | Type    | Notes and examples                          |
-| ------------------- | ------- | ------------------------------------------- |
-| error.kind          | string  | The "kind" of an error (error, exit, throw) |
-| message             | string  | human-readable, one-line message            |
-| stack               | string  | A stack trace (\n between lines)            |
-
-## Multicomponent traces
-
-CouchDB has a complex architecture: request handling crosses layer and component boundaries.
-Every component or layer would start a new span, which *MUST* specify its parent span in order
-for visualization tools to work. The value of the TraceId *MUST* be included in every span start.
-The value of TraceId and SpanId *MAY* be passed to FDB when
-[foundationdb#2085](https://github.com/apple/foundationdb/issues/2085) is resolved.
-
-## Roadmap
-
-- initial implementation as described in this document
-- extend rexi to pass traceid and parentspanid
-- redo otter configuration
-- add tracing to server initiated jobs (compaction, replication)
-- rewrite `otters_conn_zipkin:send_buffer/0` to make it more robust
-- switch `otters_conn_zipkin` from `thrift` to `gRPC`
-
-
-# Advantages and Disadvantages
-
-## Drawbacks
-
-Specifically for the `otters` library, there are the following concerns:
-- safety of configuration mechanism
-- the robustness of the zipkin sender
-
-## Advantages
-
-- Ability to forward tracing events to an external system for further analysis
-- Low overhead
-- Structured logging for span logs
-- All events linked to the same parent trace id
-
-# Key Changes
-
-- New configuration section
-- New dependencies
-- Additional HTTP headers
-- Additional fields in some records
-
-## Applications and Modules affected
-
-- chttpd
-- couch_trace (new module)
-
-## HTTP API additions
-
-Support for the following headers would be added:
-- X-B3-ParentSpanId
-- X-B3-TraceId
-- b3
-
-## HTTP API deprecations
-
-N/A
-
-# Security Considerations
-
-The security risk of injecting a malicious payload into the ini config is mitigated by placing the section into BLACKLIST_CONFIG_SECTIONS.
-
-# References
-
-- [opentracing specification](https://github.com/opentracing/specification/blob/main/specification.md)
-- https://opentracing.io/
-- https://www.jaegertracing.io/docs/1.14/
-- https://zipkin.io
-- [opentracing conventions](https://github.com/opentracing/specification/blob/main/semantic_conventions.md) 
-
-
-# Acknowledgements
-
-[TIP]:  # ( Who helped you write this RFC? )
diff --git a/rfcs/012-fdb-reduce.md b/rfcs/012-fdb-reduce.md
deleted file mode 100644
index b8b01e4..0000000
--- a/rfcs/012-fdb-reduce.md
+++ /dev/null
@@ -1,1096 +0,0 @@
----
-name: Formal RFC
-about: Submit a formal Request For Comments for consideration by the team.
-title: 'Reduce indexes on FoundationDB'
-labels: rfc, discussion
-assignees: ''
-
----
-
-## Introduction
-
-This document describes three possible ways to support CouchDB's reduce functionality on top of FoundationDB.
-The main focus will be on a Skip List algorithm as it has the most potential to support all the required functionality.
-
-## Abstract
-
-Reduce indexes allow users of CouchDB to perform aggregations on a map index. These aggregations need to be stored in FoundationDB in a way that is efficient for updating the index on document updates and when retrieving results for different reduce group levels.
-Three options are initially listed, with a skip list approach selected as the most viable option. A process flow for building, retrieving and updating a skip list based reduce index is described. Finally, the data model for using this with FoundationDB is shown.
-
-## Requirements Language
-
-The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
-"SHOULD", "SHOULD NOT", "RECOMMENDED",  "MAY", and "OPTIONAL" in this
-document are to be interpreted as described in
-[RFC 2119](https://www.rfc-editor.org/rfc/rfc2119.txt).
-
-## Terminology
-
-No new terminology at this point.
-
----
-
-## Detailed Description
-
-Reduce indexes allow users of CouchDB to perform aggregations based on the key/values emitted from the map function.
-A lot of the power of the reduce functionality that CouchDB currently supports comes from the fact that internally each map/reduce index is stored in a b+tree whose non-leaf nodes contain aggregations of the results from their children, allowing for efficient retrieval of those values. It is difficult to replicate that behavior exactly using FoundationDB, so three possible implementations of reduce indexes are considered.
-
-### Option 1 - On the fly reduce calculation
-
-The simplest implementation is to perform the reduce function on the fly when a reduce query is requested.
-The map index will be read and the reduce aggregation performed on the keys as they are fetched from FoundationDB.
-This is by far the easiest to implement, but query performance will degrade as the database and index grows and could reach a point where the reduce query just stops working.
-
-### Option 2 - Precompute of all group-levels
-
-Another option is to precompute all group levels for a reduce function and store them as key/values in FoundationDB.
-This makes querying fast as the results are already calculated. The first difficulty comes when updating the index.
-For the built-in `_sum` and `_count` reduce functions, a single reduce calculation can be run and then applied to all group levels.
-For any custom reduce functions, along with `_stats` (`min` and `max` specifically) and `_approx_count_distinct`, updating each group level would be more complex, as it requires reading all keys and running them through the reduce functions for all group levels before storing the results back in FoundationDB.
-
-Another issue is that any query using startkey/endkey can be expensive, as we would have to perform aggregations over the startkey and endkey key ranges. Similarly, `group_level = 0` queries with a startkey or endkey would require a full aggregation of the keys in that range, which again could be an expensive operation.
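To make the Option 2 update path concrete, here is a minimal in-memory sketch for the builtin `_count` reduce, where a single emitted key updates every group level with one increment. A plain object stands in for FoundationDB; the `update` helper and key layout are illustrative assumptions, not part of the proposal.

```javascript
// Sketch of Option 2 for the builtin _count reduce: one emitted key
// updates every precomputed group level with a single increment.
const groupLevels = {}; // maps (level, truncated key) -> aggregated value

const update = (key, value) => {
    // level 0 aggregates everything; level N groups by the first N key elements
    for (let level = 0; level <= key.length; level++) {
        const gKey = JSON.stringify([level, key.slice(0, level)]);
        groupLevels[gKey] = (groupLevels[gKey] || 0) + value;
    }
};

update([2017, 4, 1], 1);
update([2017, 4, 15], 1);
update([2018, 3, 1], 1);
```

For custom reduce functions there is no such single-increment shortcut, which is exactly the difficulty described above.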
-
-### Option 3 - Skip list implementation
-
-The final option is using a skip list. A skip list can be thought of as a layered linked list. The bottom layer contains all the elements in an index. Each layer up from that has a reduced (pun intended) number of elements. See figure 1 for a simple skip list layout.
-
-![Figure 1: Example of a skip list](images/SkExample1.png)
-*Figure 1:* Example of a skip list
-
-A skip list would make it easier to query an index using startkey/endkey and more efficient than option 2 to update an index for all types of reduce functions.
-
-### Skip list implementation
-
-This section does a deep dive into how a skip list can be used to create, query and update a reduce index.
-To explore these situations, we will have the following design document defined.
-
-```js
-    {
-        "_id": "_design/reduce-example",
-        "views": {
-            "example": {
-                "map": function (doc) {
-                    emit([doc.year, doc.month, doc.day], 1);
-                },
-
-                "reduce": "_count"
-            }
-        }
-    }
-
-```
-
-And it emits the following key/value results for the reduce function:
-
-```js
-    [2017, 03, 1] = 1
-    [2017, 04, 1] = 1
-    [2017, 04, 1] = 1
-    [2017, 04, 15] = 1
-    [2017, 05, 1] = 1
-
-    [2018, 03, 1] = 1
-    [2018, 04, 1] = 1
-    [2018, 05, 1] = 1
-
-    [2019, 03, 1] = 1
-    [2019, 04, 1] = 1
-    [2019, 05, 1] = 1
-```
-
-#### Create
-
-To build the skip list, all keys will be added to level 0. When multiples of the same key are emitted, the values are re-reduced before being added to level 0. Then each level up, a reduced number of keys will be added. For each level above 0, if a key/value is not added to that level, that key's value is aggregated into the previous node in that row. Therefore each key/value node in a level is an aggregation of its key/value at level 0 and any key/values from the previous level that are greater than the node and smaller than the next node on that level.
-
-See figure 2 for an example of the listed keys added to a skip list.
-
-![figure 2: The emitted reduce keys added to a reduce skip list](images/SkExample2.png)
-*figure 2:* The emitted reduce keys added to a reduce skip list
-
-##### Skip levels and Level Distribution
-
-The number of skip list levels will be made configurable, with the best value determined via performance testing.
-The algorithm to distribute keys across levels will be:
-
-```js
-const MAX_LEVELS = 6;
-const LEVEL_FAN_POW = 4; // 2^X per level or (1 / 2^X) less than previous level
-
-const hashCalc = (key, level) => {
-    const keyHash = hashCode(JSON.stringify(key));
-    const out = (keyHash & ((1 << (level * LEVEL_FAN_POW)) - 1));
-    if (out !== 0) {
-        return false;
-    }
-
-    return true;
-}
-```
-
-The `hashCode` function hashes the key to an integer. This allows for a consistent and predictable distribution across levels.
-The `LEVEL_FAN_POW` will also be configurable.
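As a rough sanity check of this distribution, the snippet above can be combined with the `hashCode` from Appendix A and run over many generated keys. Because the bit masks nest, any key present on level N is also present on every level below it, so per-level counts can only shrink going up. The key shape and counts here are illustrative assumptions.

```javascript
// Empirical check of the level distribution. hashCode and hashCalc mirror
// the snippets in this RFC; with LEVEL_FAN_POW = 4, each level should hold
// roughly 1/16th of the keys of the level below it.
function hashCode(s) {
    for (var i = 0, h = 0; i < s.length; i++)
        h = Math.imul(31, h) + s.charCodeAt(i) | 0;
    return h;
}

const LEVEL_FAN_POW = 4;
const hashCalc = (key, level) => {
    const keyHash = hashCode(JSON.stringify(key));
    return (keyHash & ((1 << (level * LEVEL_FAN_POW)) - 1)) === 0;
};

const counts = [0, 0, 0];
for (let i = 0; i < 10000; i++) {
    const key = [2015 + (i % 5), 1 + (i % 12), i]; // distinct illustrative keys
    for (let level = 0; level < counts.length; level++) {
        if (hashCalc(key, level)) {
            counts[level] += 1;
        }
    }
}
// counts[0] is always 10000, since the level-0 mask is empty;
// each level above holds a nested subset of the level below
```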
-
-#### Query
-
-From figure 2 above, we can see that a reduce query with `group = true` is served from level 0, which stores the exact keys.
-A query with `group_level = 0` can use the highest level.
-If `group_level > 1` is set for a reduce query, we need to traverse the skip list and aggregate the results before returning them to the user.
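Group levels are computed by truncating the emitted key to its first `group_level` elements; the helper below mirrors `getGroupLevelKey` from Appendix A.

```javascript
// Truncate an emitted key to a group level, e.g. [2019, 2, 5] at
// group_level = 2 becomes [2019, 2]; group_level = 0 collapses all keys
// into a single row (represented here as null).
const getGroupLevelKey = (key, groupLevel) => {
    if (groupLevel === 0) {
        return null;
    }
    if (!Array.isArray(key) || key.length <= groupLevel) {
        return key;
    }
    return key.slice(0, groupLevel);
};
```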
-
-For example, take a reduce query with `group_level = 2` against figure 2. Starting at level 4, we traverse down to level 3 and compare Node 0 with Node [2018, 03, 1]. They are not the same key at `group_level = 2`, so we move to a lower level. We repeat this comparison of the current node with the next node until we either find matching nodes or reach level 0. In this case we reach level 0 and return Node [2017, 03, 1]. At Node [2017, 04, 1] it is possible to go back up to level 2, where we can compare [2017, 04, 1] and [2017, 04, 15]. Given they are equal at this group level, we can continue on level 1 and collect all the [2017, 04, x] keys. Once collected, those keys are run through a re-reduce before being returned. This process continues onwards: each time the current node is compared with the next node, and if they are equal, the nodes at the next level up are compared, until we reach a level where the keys are not equal. At that point we use the nodes at the level just below to move across, collecting all the equal keys to be run through a re-reduce before being returned.
-
-![figure 3: Traversing flow to return results for `group_level = 2`](images/SkExample3.png)
-*Figure 3:* Traversing flow to return results for `group_level = 2`
-
-A query with startkey/endkey follows a similar process. Start at the highest level, traversing across until we exceed the startkey, then move down until we find the startkey or the nearest node after it. Then follow the above process of traversing the skip list, returning results until we reach a node that is greater than or equal to the endkey.
-
-#### Update
-
-To update the reduce index, we will use the same map id index that keeps track of which keys are associated with a document. When a document is deleted, its previous keys will be removed from level 0. If the reduce function is `_sum` or `_count`, an atomic update is then performed on all nodes above level 0 that included the values for the deleted keys.
-
-For reduce functions where we cannot perform an atomic update, the process for each level above level 0 is to fetch all the key/values in the level below that are used to calculate the current node's aggregation value, re-reduce those keys to calculate the node's new value, and store the result back in FoundationDB.
-
-When updating a document, the initial delete process is followed to remove the existing keys that are no longer emitted for this document. The new keys are added at level 0. For levels above 0, the same distribution algorithm will be used to determine if the key/values are added to a level. If they are, then an aggregation of the nodes after this node but at the level below is performed to calculate the aggregation value stored for this node. The previous node's value is also recalculated. This continues up to the max level on the skip list. For `_sum` and `_count` this can be done as an atomic update. For other reduce functions this will be a re-reduce. If the new key/value is not added to the level, the value is aggregated with a node smaller than the key in that level.
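The non-atomic recalculation step can be sketched in memory as follows. Arrays stand in for FoundationDB ranges, and `recomputeCovering` is an illustrative helper, not code from the appendix.

```javascript
// In-memory sketch of the non-atomic update path: after a key's value
// changes at level 0, the covering node one level up is recomputed by
// re-reducing the level-0 range it aggregates.
const rereduce = (values) => values.reduce((acc, v) => acc + v, 0);

// element-wise array key comparison: is a < b?
const keyLt = (a, b) => {
    for (let i = 0; i < Math.min(a.length, b.length); i++) {
        if (a[i] !== b[i]) return a[i] < b[i];
    }
    return a.length < b.length;
};

// level 0 holds every key; each level-1 node aggregates the level-0 keys
// from itself up to (but excluding) the next level-1 node
const level0 = [
    { key: [2017, 3, 1], value: 1 },
    { key: [2017, 4, 1], value: 2 },
    { key: [2017, 4, 15], value: 1 },
    { key: [2017, 5, 1], value: 1 },
];
const level1 = [
    { key: [2017, 3, 1], value: 4 },
    { key: [2017, 5, 1], value: 1 },
];

const recomputeCovering = (changedKey) => {
    // find the level-1 node at or before the changed key
    let idx = 0;
    for (let i = 0; i < level1.length; i++) {
        if (!keyLt(changedKey, level1[i].key)) idx = i;
    }
    const start = level1[idx].key;
    const end = level1[idx + 1] ? level1[idx + 1].key : null;
    const range = level0.filter((kv) =>
        !keyLt(kv.key, start) && (end === null || keyLt(kv.key, end)));
    level1[idx].value = rereduce(range.map((kv) => kv.value));
};

// a document update changes [2017, 4, 1] from 2 to 5 at level 0
level0[1].value = 5;
recomputeCovering([2017, 4, 1]);
```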
-
-In the situation where multiple documents emit the same key, those keys are re-reduced before being added into FoundationDB.
-
-### Data model
-
-The data model for the skip list implementation is below. The value will contain the reduce value, along with the unencoded key that would be returned for a query.
-
-```erlang
-{<database>, ?DB_VIEWS, Sig, ?VIEW_REDUCE_SK_RANGE, ViewId, SkipLevel, ReduceKey} = {UnEncodedKey, Value}
-
-SkipLevel = 0..?MAX_SKIP_LEVEL
-
-```
-
-Each field is defined as:
-
-- `<database>` is the specific database namespace
-- `?DB_VIEWS` is views namespace.
-- `Sig` is the design documents View Signature
-- `?VIEW_REDUCE_SK_RANGE` is the reduce namespace
-- `ViewId` id of a view defined in the design document
-- `SkipLevel` is the skip level the key/value is being stored for
-- `ReduceKey` is the encoded emitted keys
-- `UnEncodedKey` the unencoded emitted keys
-- `Value` the reduce value for the emitted keys
-
-## FoundationDB Skip list implementation
-
-Appendix A contains a full JavaScript implementation of the skip list algorithm on top of FoundationDB.
-
-## Advantages
-
-- Skip lists can be used for builtin reduces and custom reduces.
-
-## Disadvantages
-
-- Because the levels are randomly generated and values aggregated, there will be an increased number of traversals of lower levels compared to using a B+tree.
-
-## Key Changes
-
-Instead of using a B+tree to store reduce aggregations, CouchDB's reduce functionality will be built on top of FoundationDB using a skip list like algorithm.
-
-## Applications and Modules affected
-
-The module `couch_views` will be modified to support building and querying reduce indexes.
-
-## HTTP API additions
-
-There won't be any additions to the HTTP API.
-
-## HTTP API deprecations
-
-There are no HTTP API deprecations
-
-## Security Considerations
-
-None have been identified.
-
-## References
-
-- [Wikipedia Skip List](https://en.wikipedia.org/wiki/Skip_list)
-- [Skip lists done right](http://ticki.github.io/blog/skip-lists-done-right/)
-- [FoundationDB Forum skip list suggestion](https://forums.foundationdb.org/t/couchdb-considering-rearchitecting-as-an-fdb-layer/1088/11)
-- [Initial mailing list discussion](https://lists.apache.org/thread.html/011caa9244b3378e7e137ea7b0f726d8e6a17009df738a81636cb273@%3Cdev.couchdb.apache.org%3E)
-
-## Acknowledgements
-
-Thanks to
-  @rnewson
-  @alexmiller-apple
-  @kocolosk
-
-  for reviewing the RFC and mailing list discussion
-
-
-## Appendix A
-
-Below is a JavaScript implementation of the skip list on top of FoundationDB. It can also be found in a [github repo](https://github.com/garrensmith/fdb-skiplist-reduce) for quicker cloning and testing. This implementation makes some assumptions:
-
-1. All keys are arrays of [Year, Month, Day]
-2. Only implements startkey/endkey
-3. No delete was implemented
-4. This is a basic implementation to make sure we get creating/updating and traversal correct. It does not cover edge cases or much error handling
-
-Some results that I determined while running this:
-
-1. Time to insert a key stayed the same even as the skiplist grew
-1. For smaller reduce indexes (under a million rows), it was better to have a lower `LEVEL_FAN_POW`. Otherwise, the majority of keys remained on level 0 and level 1, so querying could not make much use of the higher levels. However, insertions are then slightly slower.
-
-```js
-/* To run locally
-    npm install foundationdb
-    node skiplist.js
-*/
-
-
-const assert = require('assert');
-const util = require('util');
-const fdb = require('foundationdb');
-const ks = require('foundationdb').keySelector;
-
-// CONSTANTS
-const SHOULD_LOG = false;
-const PREFIX = 'skiplist';
-const MAX_LEVELS = 6;
-const LEVEL_FAN_POW = 1; // 2^X per level or (1 / 2^X) less than previous level
-const END = 0xFF;
-
-fdb.setAPIVersion(600); // Must be called before database is opened
-const db = fdb.openSync()
-  .at(PREFIX) // database prefix for all operations
-  .withKeyEncoding(fdb.encoders.tuple)
-  .withValueEncoding(fdb.encoders.json); // automatically encode & decode values using JSON
-
-// Data model
-// (level, key) = reduce_value
-
-
-const log = (...args) => {
-    if (!SHOULD_LOG) {
-        return;
-    }
-    console.log(...args);
-}
-
-// keep a basic stats of which levels were used for a query
-let stats;
-const resetStats = () => {
-    stats = {
-        "0": [],
-        "1": [],
-        "2": [],
-        "3": [],
-        "4": [],
-        "5": [],
-        "6": [],
-    };
-}
-
-// An initial simple set of kvs to insert and query to verify the algorithm
-const kvs = [
-    [[2017,3,1], 9],
-    [[2017,4,1], 7], 
-    [[2019,3,1], 4], // out of order check
-    [[2017,4,15], 6],
-    [[2018,4,1], 3],  
-    [[2017,5,1], 9],
-    [[2018,3,1], 6],
-    [[2018,4,1], 4], // duplicate check
-    [[2018,5,1], 7],
-    [[2019,4,1], 6],
-    [[2019,5,1], 7]
-  ];
-
-// UTILS
-
-const getRandom = (min, max) => {
-    min = Math.ceil(min);
-    max = Math.floor(max);
-    return Math.floor(Math.random() * (max - min)) + min; //The maximum is exclusive and the minimum is inclusive
-  }
-
-const getRandomKey = (min, max) => {
-    return [getRandom(min, max), getRandom(1, 12), getRandom(1, 30)];
-}
-
-// Very rough hash algorithm to convert any string to an integer
-function hashCode(s) {
-    for(var i = 0, h = 0; i < s.length; i++)
-        h = Math.imul(31, h) + s.charCodeAt(i) | 0;
-    return h;
-}
-
-// calculation to determine if key should be added to a level
-const hashCalc = (key, level, pow) => {
-    const keyHash = hashCode(JSON.stringify(key));
-    const out = (keyHash & ((1 << (level * pow)) - 1));
-    if (out !== 0) {
-        return false;
-    }
-
-    return true;
-}
-
-// Basic rereduce function
-// _sum but pretend its more complex
-const rereduce = (values) => {
-    return values.reduce((acc, val) => {
-        return acc + val;
-    }, 0);
-};
-
-// Takes all key/values and collates to group level and runs rereduce
-const collateRereduce = (acc, groupLevel) => {
-    const acc1 = acc.reduce((acc, kv) => {
-        const key = getGroupLevelKey(kv.key, groupLevel);
-
-        if (!acc[key]) {
-            acc[key] = {
-                key,
-                values: []
-            };
-        }
-
-        acc[key].values.push(kv.value);
-        return acc;
-    }, {});
-
-    return Object.values(acc1).reduce((acc, kv) => {
-        const values = kv.values;
-        const key = kv.key;
-        const result = rereduce(values);
-
-        acc.push({
-            key,
-            value: result
-        });
-
-        return acc;
-    }, []);
-};
-
-// KEY UTIL FUNCTIONS
-
-// convert key to binary
-const keyToBinary = (one) => {
-    let keyOne = one.key ? one.key : one;
-
-    if (!Array.isArray(keyOne)) {
-        keyOne = [keyOne];
-    }
-
-
-    return Buffer.from(keyOne);
-}
-
-// check keys are equal
-const keysEqual = (one, two) => {
-    if (one === null || two === null) {
-        return false;
-    }
-
-    const binOne = keyToBinary(one);
-    const binTwo = keyToBinary(two);
-
-    return binOne.compare(binTwo) === 0;
-}
-
-// Are keys equal at set group level
-const groupLevelEqual = (one, two, groupLevel) => {
-    if (one === null || two === null) {
-        return false
-    }
-    const levelOne = getGroupLevelKey(one.key, groupLevel);
-    const levelTwo = getGroupLevelKey(two.key, groupLevel);
-
-    return keysEqual(levelOne, levelTwo);
-};
-
-// is key two greater than key one?
-const keyGreater = (one, two) => {
-    if (!one || !two) {
-        return false;
-    }
-
-    const binOne = keyToBinary(one);
-    const binTwo = keyToBinary(two);
-
-    // key two comes after
-    return binOne.compare(binTwo) === -1;
-}
-
-// convert key to group level. e.g Key = [2019,2,5] and group_level = 2
-// returns [2019, 2]
-const getGroupLevelKey = (key, groupLevel) => {
-    if (groupLevel === 0) {
-        return null
-    }
-
-    if (!Array.isArray(key)) {
-        return key;
-    }
-
-    if (key.length <= groupLevel) {
-        return key;
-    }
-
-    return key.slice(0, groupLevel);
-};
-
-// FDB OPERATIONS
-
-// clear full range
-const clear = async () => {
-    await db.doTransaction(async tn => {
-        tn.clearRangeStartsWith([]);
-    });
-}
-
-// get value for key at level
-const getVal = async (tn, key, level) => {
-    return  await tn.get([level, key]);
-}
-
-// add kv to level
-const insertAtLevel = async (tn, key, value, level) => {
-    log('inserting', level, key, ':', value);
-    return await tn.set([level, key], value);
-};
-
-// get all kvs within start/end, exclusive of end key
-const getRange = async (tn, start, end, level) => {
-    const kvs = await tn.getRangeAll([level, start], [level, end]);
-
-    return kvs.map(([[_level, key], value]) => {
-        return {
-            key,
-            value
-        };
-    });
-};
-
-// get all kvs within start/end, inclusive of end
-const getRangeInclusive = async (tn, start, end, level) => {
-    const kvs = await tn.getRangeAll(
-        ks.firstGreaterOrEqual([level, start]), 
-        ks.firstGreaterThan([level, end])
-        );
-
-    return kvs.map(([[_level, key], value]) => {
-        return {
-            key,
-            value
-        };
-    });
-}
-
-// return kv in common format
-const getKV = (item) => {
-    const [key, value] = item.value;
-    return {
-        key: key[1],
-        value: value
-    };
-}
-
-// Get key after supplied key
-const getNext = async (tn, key, level) => {
-    const iter = await tn.snapshot().getRange(
-        ks.firstGreaterThan([level, key]),
-        [level, END],
-        {limit: 1}
-    )
-
-    const item = await iter.next();
-    if (item.done) {
-        return {
-            key: END,
-            value: 0
-        };
-    }
-
-    const kv = getKV(item);
-    tn.addReadConflictKey([level, kv.key]);
-    return kv;
-};
-
-// Get key after supplied key but doesn't look further than endkey
-const getKeyAfter = async (tn, key, level, endkey) => {
-    const _endkey = endkey ? endkey : END;
-    const iter = await tn.getRange(
-        ks.firstGreaterThan([level, key]),
-        ks.firstGreaterThan([level, _endkey]),
-        {limit: 1}
-    )
-    
-    const item = await iter.next();
-    if (item.done) {
-        return null;
-    }
-
-    return getKV(item);
-};
-
-// get kv before supplied key
-const getPrevious = async (tn, key, level) => {
-    const iter = await tn.snapshot().getRange(
-        ks.lastLessThan([level, key]),
-        ks.firstGreaterOrEqual([level, key]),
-        {limit: 1}
-    )
-
-    const item = await iter.next();
-    const kv = getKV(item);
-    tn.addReadConflictKey([level, kv.key]);
-    return kv;
-};
-
-// Get key at level or first one after key
-const getKeyOrNearest = async (tn, key, level, endkey) => {
-    const _endkey = endkey ? endkey : END;
-    const iter = await tn.getRange(
-        ks.firstGreaterOrEqual([level, key]),
-        ks.firstGreaterThan([level, _endkey]),
-        {limit: 1}
-    )
-    
-    const item = await iter.next();
-    if (item.done) {
-        return null;
-    }
-
-    return getKV(item);
-};
-
-// Gets the final key in the set group level
-const getGroupLevelEndKey = async (tn, groupLevel, level, startkey) => {
-    const groupLevelKey = getGroupLevelKey(startkey, groupLevel);
-    const end = groupLevelKey === null ? END : [...groupLevelKey, END];
-    const iter = await tn.getRange(
-        ks.firstGreaterThan([level, groupLevelKey]),
-        ks.firstGreaterOrEqual([level, end]),
-        {reverse: true, limit: 1}
-    )
-    
-    //TODO: add a conflict key
-    const item = await iter.next();
-    if (item.done) {
-        return null;
-    }
-
-    return getKV(item);
-};
-
-// Returns key for level or the first one before it
-const getKeyOrFirstBefore = async (tn, key, level) => {
-    const iter = await tn.getRange(
-        ks.lastLessThan([level, key]),
-        ks.firstGreaterThan([level, key]),
-        {limit: 1, reverse: true}
-    )
-    
-    //TODO: add a conflict key
-    const item = await iter.next();
-    if (item.done) {
-        return null;
-    }
-
-    return getKV(item);
-};
-
-// SKIP LIST OPERATIONS
-
-//setup skip list and insert the initial kvs
-const create = async () => {
-    await db.doTransaction(async tn => {
-        for(let level = 0; level <= MAX_LEVELS; level++) {
-            await insertAtLevel(tn, '0', 0, level);
-        }
-    });
-
-    log('setup done');
-    for ([key, val] of kvs) {
-        await db.doTransaction(async tn => {
-            await insert(tn, key, val);
-        });
-    }
-};
-
-// inserts a larger amount of keys, 1000 keys per transaction
-const rawKeys = []
-const createLots = async () => {
-    const docsPerTx = 3000;
-    console.time('total insert');
-    for (let i = 0; i <= 30000; i+= docsPerTx) {
-        const kvs = [];
-        for (let k = 0; k <= docsPerTx; k++) {
-            const key = getRandomKey(2015, 2020);
-            const value = getRandom(1, 20);
-            rawKeys.push({key, value});
-            kvs.push([key, value]);
-        }
-        console.time('tx');
-        await db.doTransaction(async tn => {
-            for ([key, value] of kvs) {
-                await insert(tn, key, value);
-            }
-        });
-        console.timeEnd('tx');
-        log(`inserted ${i} keys`);
-    }
-    console.timeEnd('total insert');
-}
-
-/* The insert algorithm
- Works as follows:
- Level 0:
- * Always insert,
- * if key already exists at level 0, then rereduce two values and insert
- At level's > 0
- * Get previous kv at level
- * If hashCalc is true, key should be inserted at level
- * So need to recalculate previous keys value,
- * Get range from level below from previous key to current key
- * Rereduce those kvs and update previous key's value
- * Then get next key after current key at level
- * Use that to get range from current key to next key at level below
- * Rereduce those values to create value for current key
- 
- * If hashCalc is false, key is not inserted at level
- * So rereduce previous key's value with current key's value and update previous kv
-*/
-const insert = async (tn, key, value) => {
-    let currentVal = value; // if this k/v has been stored before we need to update this value at level 0 to be used through the other levels
-    for(let level = 0; level <= MAX_LEVELS; level++) {
-        if (level === 0) {
-            const existing = await getVal(tn, key, level);
-            if (existing) {
-                currentVal = rereduce([existing, currentVal]);
-            }
-            await insertAtLevel(tn, key, currentVal, 0);
-            continue;
-        }
-        const previous = await getPrevious(tn, key, level);
-        log('Planning to insert at ', level, 'key', key, 'previous is', previous);
-        if (hashCalc(key, level, LEVEL_FAN_POW)) {
-            const lowerLevel = level - 1;
-            // update previous node
-            const newPrevRange = await getRange(tn, previous.key, key, lowerLevel);
-            log('prevRange', newPrevRange, 'prevKey', previous, 'key', key);
-            const prevValues = newPrevRange.map(kv => kv.value);
-            const newPrevValue = rereduce(prevValues)
-            if (newPrevValue !== previous.value) {
-                await insertAtLevel(tn, previous.key, newPrevValue, level);
-            }
-
-            // calculate new nodes values
-            const next = await getNext(tn, key, level);
-            const newRange = await getRange(tn, key, next.key, lowerLevel);
-            const newValues = newRange.map(kv => kv.value);
-            const newValue = rereduce(newValues);
-            log('inserting at level', level, 'key', key, 'after', next, 'range', newRange);
-            await insertAtLevel(tn, key, newValue, level);
-        } else {
-            const newValue = rereduce([previous.value, value]);
-            log('rereduce at', level, 'key', previous.key, 'new value', newValue, 'prev value', previous.value);
-            await insertAtLevel(tn, previous.key, newValue, level);
-        }
-    }
-};
-
-// A simple print that will show all keys at set levels and verify that the values at each level
-// sum up to the values at level = 0
-const print = async () => {
-    let total = 0;
-    await db.doTransaction(async tn => {
-        for(let level = 0; level <= MAX_LEVELS; level++) {
-            let levelTotal = 0;
-            const levelResults = await tn.getRangeAll([level, "0"], [level, END]);
-            const keys = levelResults.map(([[_, key], val]) => {
-                const a = {};
-                a[key] = val;
-                if (level === 0) {
-                    total += val;
-                }
-
-                levelTotal += val;
-                return a;
-            });
-
-            log(`Level ${level}`, keys);
-            assert.equal(levelTotal, total, `Level ${level} - level total ${levelTotal} values not equal to level 0 ${total}`);
-        }
-    });
-
-    return {
-        total
-    };
-};
-
-
-// Determines which level and the range the skiplist traversal can do next
-/* Works as follows:
-    * Get the final key for a group level from level 0 - ideally this algorithm looks to scan at the highest level possible,
-      and we need this group level endkey to know how far we can possibly scan
-    * If the group end key is greater than the endkey, clamp groupEndkey to the endkey
-    * `levelRanges` is used to keep an array of possible ranges we could scan. Level 0 is always added
-    * In the for loop, start at level 0 and look one level above to see if the startkey exists in that level
-    * If it does, also find the group level endkey for that level; if the group level endkey is valid, add it to `levelRanges`
-    * If the startkey is not in the level above, scan at the current level from the startkey to the nearest key in the level above;
-        this way we do a small scan at a lower level and at the next traversal can scan one level up
-*/
-const getNextRangeAndLevel = async (tn, groupLevel, level, startkey, endkey) => {
-    let groupEndkey = await getGroupLevelEndKey(tn, groupLevel, 0, startkey.key);
-    log('groupendkey', groupEndkey, 'start', startkey, 'end', endkey, keyGreater(endkey, groupEndkey));
-    if (keyGreater(endkey, groupEndkey)) {
-        groupEndkey = endkey;
-    }
-
-    // at end of this specific grouplevel, so have to do final scan at level 0
-    if (keysEqual(startkey, groupEndkey)) {
-        return [0, startkey, startkey];
-    }
-
-    const levelRanges = [{
-        level: 0,
-        start: startkey,
-        end: groupEndkey
-    }];
-    for (let i = 0; i < MAX_LEVELS; i++) {
-        log('next start', startkey, 'i', i);
-        // look 1 level above
-        let nearestLevelKey = await getKeyOrNearest(tn, startkey.key, i + 1, endkey.key);
-        log('nearest', nearestLevelKey, "level", i + 1, "start", startkey, "grouplevelequal", groupLevelEqual(startkey, nearestLevelKey, groupLevel));
-
-        if (keysEqual(nearestLevelKey, startkey)) {
-            const groupLevelEndKey = await getGroupLevelEndKey(tn, groupLevel, i + 1, nearestLevelKey.key);
-            log('CALCUP1', 'nearest', nearestLevelKey, 'after', groupLevelEndKey, 'level', i);
-            if (groupLevelEndKey !== null) {
-                if (keyGreater(endkey, groupLevelEndKey)) {
-                    log('grouplevel great than endkey', endkey, groupLevelEndKey);
-                    // exceeded the range at this level we can't go further
-                    break;
-                }
-                // end of grouplevel for set level have to use previous levels for read
-                if (keysEqual(nearestLevelKey, groupLevelEndKey)) {
-                    break;
-                }
-
-                levelRanges.push({
-                    level: i + 1,
-                    start: nearestLevelKey,
-                    end: groupLevelEndKey
-                });
-                continue;
-            }
-        } else if (nearestLevelKey !== null && groupLevelEqual(startkey, nearestLevelKey, groupLevel)) {
-            log('querying to nearest level up', startkey, nearestLevelKey);
-            return [i, startkey, nearestLevelKey];
-        } 
-
-        break;
-    }
-
-    
-    log('gone too far', JSON.stringify(levelRanges, null, ' '));
-    const out = levelRanges.pop();
-    return [out.level, out.start, out.end];
-};
-
-// Main algorithm to traverse the skip list
-/* The algorithm works as follows:
-    * Call getNextRangeAndLevel to determine what to scan
-    * Get all values in that range for the chosen level, including the endkey
-    * The final value in the range is used as the next startkey
-    * Collate and rereduce all values collected
-    * If there is no new startkey, or rangeEnd equals endkey and we scanned at level 0, we are done
-    * Otherwise start again at level 0 and continue the traversal
-*/
-const traverse = async (tn, level, prevLevel, current, endkey, groupLevel, acc) => {
-    if (level < 0) {
-        throw new Error("gone too low");
-    }
-    const [rangeLevel, rangeStart, rangeEnd] = await getNextRangeAndLevel(tn, groupLevel, level, current, endkey);
-    log('traversing, level', rangeLevel, 'start', rangeStart, 'end', rangeEnd);
-
-    // simple stats to keep track of which levels are used the most
-    stats[rangeLevel].push([rangeStart.key, rangeEnd.key]);
-    const results = await getRangeInclusive(tn, rangeStart.key, rangeEnd.key, rangeLevel);
-    log('RESULTS', results, 'start', rangeStart.key, 'end', rangeEnd.key);
-    // test with rangeEnd always next startkey
-    let nextStartKey = results[results.length - 1];
-    let keyAfterStart = await getKeyAfter(tn, nextStartKey.key, rangeLevel, endkey.key);
-    log('checking', nextStartKey, keyAfterStart, groupLevelEqual(nextStartKey, keyAfterStart, groupLevel));
-
-    const useableResults = results.slice(0, results.length - 1);
-    acc = [...acc, ...useableResults];
-    if (rangeLevel === 0 && !groupLevelEqual(nextStartKey, keyAfterStart, groupLevel)) {
-        acc.push(nextStartKey);
-        log('collating and reducing', acc);
-        const reducedResults = collateRereduce(acc, groupLevel);
-        acc = reducedResults;
-        nextStartKey = await getKeyAfter(tn, nextStartKey.key, rangeLevel, endkey.key);
-        //should stream results for a common group at this point
-    }
-
-    // Reached the end of the query, return results
-    if ((keysEqual(rangeEnd, endkey) || nextStartKey === null) && rangeLevel === 0) {
-        return acc;
-    }
-
-    log('moving next traversal', rangeLevel, 'newStart', nextStartKey, acc);
-    return traverse(tn, 0, rangeLevel, nextStartKey, endkey, groupLevel, acc);
-};
-
-// simple formatter to mimic CouchDb response
-const formatResult = (results) => {
-    return {
-        rows: results
-    };
-};
-
-
-// query function to set correct startkey/endkey and call correct query algorithm
-const query = async (opts) => {
-    resetStats();
-    return await db.doTransaction(async tn => {
-        let endkey = {key: END, value: 0};
-        let startkey = {key: '0', value: 0};
-
-        if (opts.startkey) {
-            startkey = await getKeyOrNearest(tn, opts.startkey, 0);
-            if (!startkey) {
-                return false; //startkey out of range;
-            }
-            log('startkey', opts.startkey, startkey);
-        }
-
-        if (opts.endkey) {
-            endkey = await getKeyOrFirstBefore(tn, opts.endkey, 0);
-            log('endkey', opts.endkey, endkey);
-        }
-
-        if (opts.group) {
-            const results = await getRangeInclusive(tn, startkey.key, endkey.key, 0);
-            return formatResult(results);
-        }
-
-        if (opts.group_level === 0 && !opts.startkey && !opts.endkey) {
-                const results = await getRange(tn, '0', END, MAX_LEVELS);
-                if (results.length > 1) {
-                    const vals = results.map(kv => kv.value);
-                    const total = rereduce(vals);
-                    return formatResult([{
-                        key: null,
-                        value: total
-                    }]);
-                }
-
-                return formatResult([{
-                    key: null,
-                    value: results[0].value
-                }]);
-        }
-
-
-        const results = await traverse(tn, 0, 0, startkey, endkey, opts.group_level, []);
-        console.log('query stats', util.inspect(stats, {depth: null}));
-        return formatResult(results);
-    });
-};
-
-
-// smaller queries with the initial kvs added to the skip list
-// this is used to verify the accuracy of the insert and query
-const simpleQueries = async () => {
-    let result = {};
-    result = await query({group_level: 0});
-    assert.deepEqual(result, {
-        rows: [{
-            key: null,
-            value: 68
-        }]
-    });
-
-    result = await query({group_level:0, startkey: [2018, 3, 2]});
-    assert.deepEqual(result, {
-        rows: [{
-            key: null,
-            value: 31
-        }]
-    });
-
-    result = await query({
-        group_level:0,
-        startkey: [2018, 3, 2],
-        endkey: [2019, 5, 1]
-    });
-    assert.deepEqual(result, {
-        rows: [{
-            key: null,
-            value: 31
-        }]
-    });
-
-    result = await query({
-        group_level: 0,
-        startkey: [2018, 3, 2],
-        endkey: [2019, 3, 2]
-    });
-
-    assert.deepEqual(result, {
-        rows: [{
-            key: null,
-            value: 18
-        }]
-    });
-
-    result = await query({
-        group_level: 1,
-        startkey: [2017, 4, 1],
-        endkey: [2018, 3, 1]
-    });
-
-    assert.deepEqual(result, {
-        rows: [
-        {
-            key: [2017],
-            value: 22
-        },
-        {
-            key: [2018],
-            value: 6
-        }
-    ]
-    });
-
-    result = await query({
-        group_level: 1,
-        startkey: [2017, 4, 1],
-        endkey: [2019, 3, 2]
-    });
-
-    assert.deepEqual(result, {
-        rows: [
-        {
-            key: [2017],
-            value: 22
-        },
-        {
-            key: [2018],
-            value: 20
-        },
-        {
-            key: [2019],
-            value: 4
-        }
-    ]
-    });
-
-    result = await query({
-        group_level: 1,
-        startkey: [2017, 4, 1],
-        endkey: [2019, 5, 1]
-    });
-
-    assert.deepEqual(result, {
-        rows: [
-        {
-            key: [2017],
-            value: 22
-        },
-        {
-            key: [2018],
-            value: 20
-        },
-        {
-            key: [2019],
-            value: 17
-        }
-    ]
-    });
-
-    result = await query({
-        group: true,
-        startkey: [2018, 5, 1],
-        endkey: [2019, 4, 1],
-    });
-
-    assert.deepEqual(result, {rows: [
-        {key: [2018,5,1], value: 7},
-        {key: [2019,3,1], value: 4},
-        {key: [2019,4,1], value: 6}
-    ]});
-    log('SIMPLE DONE');
-};
-
-// Fetch all level 0 kvs for a query and produce the correct result
-const queryLevel0 = async (opts) => {
-    return await db.doTransaction(async tn => {
-        let endkey = {key: END, value: 0};
-        let startkey = {key: '0', value: 0};
-
-        if (opts.startkey) {
-            startkey = await getKeyOrNearest(tn, opts.startkey, 0);
-        }
-
-        if (opts.endkey) {
-            endkey = await getKeyOrFirstBefore(tn, opts.endkey, 0);
-        }
-        const results = await getRangeInclusive(tn, startkey.key, endkey.key, 0);
-        const acc1 = collateRereduce(results, opts.group_level); 
-        return formatResult(acc1);
-    });
-};
-
-// Perform a full range scan on the skip list and compare the performance versus 
-// just reading from level 0
-const largeQueries = async () => {
-    let result;
-    const [startkey, endkey] = await db.doTransaction(async tn => {
-        const start = await getKeyAfter(tn, '0', 0);
-        const end = await getPrevious(tn, END, 0);
-
-        return [start.key, end.key];
-    });
-
-    for (let i = 0; i < 10; i++) {
-        const opts = {
-            group_level: 1,
-            startkey,
-            endkey
-        };
-        console.log('range', startkey, endkey);
-        console.time('query');
-        result = await query(opts);
-        console.timeEnd('query');
-
-        console.time('level0');
-        const level0Result = await queryLevel0(opts);
-        console.timeEnd('level0');
-        assert.deepEqual(result, level0Result);
-    }
-};
-
-
-// run function
-const run = async () => {
-    await clear();
-    await create();
-    await print();
-    await simpleQueries();
-    await createLots();
-    await print();
-    await largeQueries();
-};
-
-run();
-
-```
\ No newline at end of file
diff --git a/rfcs/013-node-types.md b/rfcs/013-node-types.md
deleted file mode 100644
index 5cd3adc..0000000
--- a/rfcs/013-node-types.md
+++ /dev/null
@@ -1,143 +0,0 @@
----
-name: Node Types
-about: Introduce heterogeneous node types to CouchDB 4
-title: 'Node Types'
-labels: rfc, discussion
-assignees: ''
-
----
-
-# Introduction
-
-This RFC proposes the ability to have different node types in CouchDB 4+. This
-would improve performance and allow for a more efficient use of resources.
-
-## Abstract
-
-Previously, in CouchDB 2 and 3, cluster functionality was uniformly distributed
-amongst the nodes. Any node could accept HTTP requests, run replication jobs
-and build secondary indices. With the FDB-based topology, CRUD operations have
-lower resource needs and so it could be useful to have a heterogeneous
-topology, where for example, CRUD operations run on lower capacity nodes, and a
-few higher capacity nodes handle replication or indexing jobs.
-
-
-## Requirements Language
-
-The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
-"SHOULD", "SHOULD NOT", "RECOMMENDED",  "MAY", and "OPTIONAL" in this
-document are to be interpreted as described in
-[RFC 2119](https://www.rfc-editor.org/rfc/rfc2119.txt).
-
-## Terminology
-
-*node type* : A label used to designate a subset of CouchDB functionality.
-
----
-
-# Detailed Description
-
-## Node Types
-
-A node type is a description of some internal CouchDB functionality. These
-are the initially defined node types:
-
- * `api_frontend` : Indicates this node can accept HTTP API requests.
- * `view_indexing` : Indicates this node can build map/reduce view indices.
- * `search_indexing` : Indicates this node can build search indices.
- * `replication` : Indicates this node can run replication jobs.
-
-Users can configure CouchDB nodes with any combination of those types.
-
-## Configuration
-
-Configuration MAY be specified in the Erlang application environment or in OS
-environment variables. OS environment variables have a higher precedence. By
-default, if the type is not configured in either of those places, it defaults
-to `true`.
-
-### Erlang Application Environment Configuration
-
-Configuration MUST be specified for the `fabric` application, under the
-`node_types` key. The value MUST be a proplist which looks like `[{$type, true |
-false}, ...]`. For example, the `vm.args` file MAY be used like so:
-
-```
--fabric node_types '[{api_frontend, false}, {replication, true}]'
-
-```
-
-### OS Environment Configuration
-
-Node types MAY be set via environment variables using the `COUCHDB_NODE_TYPE_`
-prefix. The prefix SHOULD be followed by the type label. If the value of the
-variable is `false`, the functionality indicated will be disabled on that
-node. Any other value indicates `true`.
-
-Example:
-
-`COUCHDB_NODE_TYPE_API_FRONTEND=false COUCHDB_NODE_TYPE_VIEW_INDEXING=true ...`
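The precedence rules above (environment variable over application config, defaulting to `true`) can be sketched as follows. This is an illustrative JavaScript transcription, not CouchDB's actual Erlang implementation; the function and parameter names are hypothetical:

```javascript
// Resolve whether a node type is enabled, mirroring the rules above:
// OS environment beats application config, and an unset type defaults to true.
const nodeTypeEnabled = (type, env = process.env, appConfig = {}) => {
  const envVar = env[`COUCHDB_NODE_TYPE_${type.toUpperCase()}`];
  if (envVar !== undefined) {
    return envVar !== 'false'; // any value other than "false" means true
  }
  if (type in appConfig) {
    return appConfig[type]; // Erlang app environment analogue
  }
  return true; // unset in both places defaults to true
};
```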
-
-## Implementation
-
-Implementation should be minimally invasive, at least for the node types listed
-above.
-
- * `api_frontend` would enable the `chttpd` application, or its top level
-   supervisor.
-
- * All background tasks in FDB are executed via the `couch_jobs` framework. The
-top level application supervisors typically have a separate `gen_server` in
-charge of accepting jobs and executing them. The implementation then would be
-as simple as having a `case` statement around the worker's `start_link()`
-function.
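The `case` statement gate described above might look roughly like this, transposed to JavaScript for illustration (in CouchDB itself this lives in the Erlang worker's `start_link()`; the names here are hypothetical):

```javascript
// Only start a worker when the node's type permits it; otherwise decline,
// the way an OTP supervisor child can return `ignore`.
const startWorkerIfEnabled = (typeEnabled, startLink) => {
  if (!typeEnabled) {
    return 'ignore';
  }
  return startLink();
};
```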
-
-# Advantages and Disadvantages
-
-## Disadvantages
-
- - Increased configuration-state complexity
-
-## Advantages
-
- - Ability to utilize hardware resources better
- - Possibly better security by running indexing and replication jobs in an
-   isolated environment inaccessible from the outside
-
-# Key Changes
-
- - Heterogeneous node types
- - New configuration section
- - New configuration environment variables
-
-## Applications and Modules Affected
-
- - chttpd
- - fabric
- - couch_views
- - couch_jobs
- - couch_replicator
- - mango
-
-## HTTP API additions
-
-N/A
-
-## HTTP API deprecations
-
-N/A
-
-# Security Considerations
-
-N/A
-
-# References
-
-[1] https://github.com/apache/couchdb/issues/1338
-
-[2] https://github.com/apache/couchdb-documentation/blob/main/rfcs/007-background-jobs.md
-
-# Acknowledgments
-
-@kocolosk
-@mikerhodes
diff --git a/rfcs/014-pagination.md b/rfcs/014-pagination.md
deleted file mode 100644
index 1d3a48f..0000000
--- a/rfcs/014-pagination.md
+++ /dev/null
@@ -1,215 +0,0 @@
----
-name: Pagination support
-about: Add bookmark based pagination interface
-title: 'Pagination API'
-labels: rfc, discussion
-assignees: ''
-
----
-
-Implement pagination support for query like endpoints
-
-# Introduction
-
-The main driver for this proposal is the introduction of FoundationDB as a
-storage engine. FoundationDB imposes limits on transaction duration and size,
-so we need a way to restrict the amount of data we return to clients. We could
-simply set a maximum `limit` to cap the number of rows a client can request,
-but this "solution" has a big disadvantage: it would require every client to
-implement a pagination recipe in its own code. The current pagination scheme
-requires complex logic on the client side, with quite a few corner cases to
-handle.
-
-## Abstract
-
-The main addition is a new bookmark-based pagination scheme for all query-like
-endpoints. As a first step we would keep `_all_dbs`, `_dbs_info` and
-`_changes` out of scope for the following reasons:
-- _all_dbs - the endpoint returns a list instead of an object, unlike the rest of the endpoints
-- _dbs_info - the endpoint returns a list instead of an object, unlike the rest of the endpoints
-- _changes - the endpoint supports too many different modes and would require
-  more careful consideration
-
-The endpoints in scope are:
-- {db}/_all_docs
-- {db}/_all_docs/queries
-- {db}/_design/{ddoc}/_view/{view}
-- {db}/_design/{ddoc}/_view/{view}/queries
-
-In a nutshell the idea is:
-- add `page_size` query field to control number of rows on each page and to flag
-  that client expects paginated response
-- add `first`, `previous`, `next` fields which contain bookmark part of URI
-- add `bookmark` query field to retrieve bookmarked page
-
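The interaction sketched above could be consumed by a client roughly as follows. This is a hedged illustration of the proposed `page_size`/`next` fields, with `fetchJson` as an assumed helper that GETs a path and parses the JSON body:

```javascript
// Walk a bookmark-paginated endpoint page by page, following the "next"
// path returned in each response until no further bookmark is offered.
const fetchAllRows = async (fetchJson, path, pageSize) => {
  const rows = [];
  let url = `${path}?page_size=${pageSize}`;
  while (url) {
    const body = await fetchJson(url);
    rows.push(...body.rows);
    // "next" is an optional path containing the bookmark for the next page
    url = body.next || null;
  }
  return rows;
};
```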
-
-## Requirements Language
-
-The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
-"SHOULD", "SHOULD NOT", "RECOMMENDED",  "MAY", and "OPTIONAL" in this
-document are to be interpreted as described in
-[RFC 2119](https://www.rfc-editor.org/rfc/rfc2119.txt).
-
-## Terminology
-
-- `bookmark` - an opaque token which contains the information needed to retrieve
-  the bookmarked page. The format of the token value MUST NOT be relied upon in
-  the client.
-
----
-
-# Detailed Description
-
-## Implementation proposal
-
-1) Add a new optional query field called `bookmark` to the following endpoints
-  - {db}/_all_docs
-  - {db}/_all_docs/queries
-  - {db}/_design/{ddoc}/_view/{view}
-  - {db}/_design/{ddoc}/_view/{view}/queries
-
-2) Add a new optional query field called `page_size`; if it is set, use the
-  paginated code path, otherwise use the old one
-3) Implement per-endpoint configurable max limits to restrict the page size of
-  paginated responses
-
- ```
- [request_limits]
- _all_docs = 5000
- _all_docs/queries = 5000
- _all_dbs = 5000
- _dbs_info = 5000
- _view = 2500
- _view/queries = 2500
- _find = 2500
- ```
-4) Add the following additional fields to the response:
-  ```
-    "first": "12345678945621321689",
-    "previous": "983uiwfjkdsdf",
-    "next": "12343tyekf3"
-  ```
-
-## Limitations
-
-- The `first`/`previous`/`next` keys in the response are represented as a path
-  which includes the bookmark query key. This means the bookmark token size
-  contributes to the total URI length and is subject to the maximum URL length
-  (around 2000 characters). Storing `keys` in a bookmark is therefore not an
-  option, and for that reason the `POST` method is not supported when
-  pagination is enabled
-- Ideally we would want to signal an error (return 400) when the number of rows
-  returned from the streaming version of an endpoint exceeds the limit
-  configured in `request_limits`. However, with streaming we've already sent
-  the return code.
-
-## Semantics of the implementation
-
-- Only the GET method has pagination support
-- The bookmark would include information needed to ensure proper pagination
-  without the need to repeat initial parameters of the request.
-- Don't use delayed responses when `bookmark` field is provided
-- Don't use delayed responses when `page_size` query key is specified and when
-  it is below the max limit
-- Return 400 when `bookmark` field is set and other query fields are present
-- Return 400 when `page_size` query key is specified and it is greater than
-  the max limit
-- The `previous`/`next`/`first` keys are optional and we omit them for the cases
-  they don't make sense
-- Set the default value of `page_size` to the value configured in
-  `request_limits` of `default.ini` for the given endpoint, or to `limit` if
-  the provided `limit` is less than that configured value
-- Once the `limit` is reached the final response will not have a "next" bookmark
-- The maximum value of the `skip` query parameter is limited to `page_size` or
-  the value configured in `request_limits`, whichever is less
-- Once the underlying call to FoundationDB returns less than `page_size`
-  the response will not have a "next" bookmark
-- When `page_size` is used with `_all_docs/queries` or `{db}/_design/{ddoc}/_view/{view}/queries`
-  the specified limit applies to number of queries provided in the request.
-- For `_all_docs/queries` and `{db}/_design/{ddoc}/_view/{view}/queries` the total
-  number of rows returned shouldn't exceed the provided `page_size` or the
-  configured max limit (whichever is less)
-- Paginated requests are subject to FDB transaction timeout. This is implemented
-  via lack of `{restart_tx, true}` option for FDB calls.
-- The request to `_all_docs/queries` and `{db}/_design/{ddoc}/_view/{view}/queries`
-  can include bookmarks:
-  ```
-  {"queries": [
-    {"bookmark": "bookmarkForQuery1PageL"},
-    {"bookmark": "bookmarkForQuery2PageM"},
-    {"bookmark": "bookmarkForQuery3PageN"}
-    ]
-  }
-  ```
-- Every bookmark returned by `_all_docs/queries` and `{db}/_design/{ddoc}/_view/{view}/queries`
-  can be submitted via a separate request to `_all_docs` and `{db}/_design/{ddoc}/_view/{view}`
-  respectively.
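Two of the rules above — rejecting invalid parameter combinations with a 400, and deriving the effective page size from `limit` and the configured maximum — can be sketched like this (illustrative names only, not actual CouchDB code):

```javascript
// Return 400 for a bookmark combined with other query fields, or for a
// page_size above the configured maximum; otherwise 200.
const validatePagination = (params, configuredMax) => {
  if ('bookmark' in params && Object.keys(params).length > 1) return 400;
  if ('page_size' in params && params.page_size > configuredMax) return 400;
  return 200;
};

// page_size defaults to limit when limit is below the configured maximum,
// and is capped at the configured maximum in every case.
const effectivePageSize = (params, configuredMax) => {
  const requested = params.page_size ?? params.limit ?? configuredMax;
  return Math.min(requested, configuredMax);
};
```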
-
-
-## Configuration
-
-The page size limits are configured in `default.ini` (or another `ini` file) in
-the `request_limits` section as follows:
-
-```
- [request_limits]
- _all_docs = 5000
- _all_docs/queries = 5000
- _all_dbs = 5000
- _dbs_info = 5000
- _view = 2500
- _view/queries = 2500
- _find = 2500
-```
-
-## Roadmap
-
-- initial implementation as described in this document
-- create API versioning proposal and implement the feature
-- create separate proposal for `_changes` endpoint
-- implement pagination-enabled versions of `_all_dbs` and `_dbs_info`, which
-  would change the response type to an object (using the versioned API feature)
-
-# Key Changes
-
-- New configuration section
-- New query fields
-- New fields in response body
-- Enforcing strict limits on the number of rows requested by a client
-
-## Applications and Modules affected
-
-- chttpd
-
-## HTTP API additions
-
-N/A
-
-## HTTP API deprecations
-
-N/A
-
-# Security Considerations
-
-No changes to the security model
-
-# References
-
-- [Streaming API in CouchDB 4.0 discussion](https://lists.apache.org/thread.html/r02cee7045cac4722e1682bb69ba0ec791f5cce025597d0099fb34033%40%3Cdev.couchdb.apache.org%3E)
-- [Streaming API in CouchDB 4.0 discussion part 2](https://lists.apache.org/thread.html/ra8d16937cca332207d772844d2789f932fbc4572443a354391663b9c%40%3Cdev.couchdb.apache.org%3E)
-- [API versioning discussion](https://lists.apache.org/thread.html/rcc742c0fdca0363bb338b54526045720868597ea35ee6842aef174e0%40%3Cdev.couchdb.apache.org%3E)
-
-# Acknowledgements
-
-[TIP]:  # ( Who helped you write this RFC? )
-
-Thank you to all participants in the discussion threads. You all helped to shape
-and refine this proposal in one form or another.
-
-- Adam Kocoloski
-- Garren Smith
-- Glynn Bird
-- Joan Touzet
-- Mike Rhodes
-- Nick Vatamaniuc
-- Paul Davis
-- Richard Ellis
-- Robert Samuel Newson
diff --git a/rfcs/015-background-index-building.md b/rfcs/015-background-index-building.md
deleted file mode 100644
index f67b0f7..0000000
--- a/rfcs/015-background-index-building.md
+++ /dev/null
@@ -1,131 +0,0 @@
----
-name: Formal RFC
-about: Submit a formal Request For Comments for consideration by the team.
-title: 'Background index building'
-labels: rfc, discussion
-assignees: ''
-
----
-
-# Introduction
-
-This document describes the design for the background index builder in CouchDB 4.
-
-## Abstract
-
-The background index builder monitors databases for changes and then kicks off
-asynchronous index updates. It is also responsible for removing stale indexing
-data.
-
-## Requirements Language
-
-The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD",
-"SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be
-interpreted as described in [RFC
-2119](https://www.rfc-editor.org/rfc/rfc2119.txt).
-
----
-
-# Detailed Description
-
-The two main components of the background index builder are:
- 1) The notification mechanism
- 2) Index building behavior API and registration facility
-
-The notification mechanism monitors databases for updates; the secondary
-index applications register with the background indexer and provide an
-implementation of the index building API.
-
-## Database Updates Notifications
-
-After each document update transaction finishes, the background indexer is
-notified via a callback. The indexer then bumps the timestamp for that database
-in a set of sharded ETS tables. Each sharded ETS table has an associated
-background process which periodically removes entries from there and calls the
-index building API functions for each registered indexing backend.
-
-In addition to building indices, the background index builder also cleans up
-stale index data. This is index data left behind after design documents have
-been updated or deleted and the view signatures have changed.
-
-Background index building and cleaning may be enabled or disabled with
-configuration options. There is also a configurable delay during which db
-updates accumulate for each database. This is used to avoid re-scheduling
-`couch_jobs` jobs too often.
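The timestamp-bump-and-sweep mechanism described above can be sketched as follows. This is a hypothetical JavaScript transcription of the ETS-table idea, using a single table instead of sharded ones and illustrative names:

```javascript
// Each document update bumps the database's timestamp; a periodic sweep
// processes only databases whose last update is older than the configured
// delay, so bursts of updates are coalesced into one indexing pass.
const pending = new Map();

const noteDbUpdated = (dbName, now) => pending.set(dbName, now);

const sweep = (now, delayMs) => {
  const ready = [];
  for (const [db, ts] of pending) {
    if (now - ts >= delayMs) {
      ready.push(db);
      pending.delete(db);
    }
  }
  return ready; // these dbs get their index-building callbacks invoked
};
```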
-
-## Background Index Building Behavior
-
-Unlike CouchDB 3 (`ken`), the background index builder in CouchDB 4 doesn't
-have centralized knowledge of all the possible secondary indices. Instead, each
-secondary indexing application may register with the background index builder
-and provide a set of callbacks implementing background index building for their
-particular index types.
-
-
-Background index building behavior is a standard Erlang/OTP behavior defined
-as:
-
-```
--callback build_indices(Db :: map(), DDocs :: list(#doc{})) ->
-    [{ok, JobId::binary()} | {error, any()}].
-
--callback cleanup_indices(Db :: map(), DDocs :: list(#doc{})) ->
-    [ok | {error, any()}].
-```
-
-Each indexing application may register with the index builder using the
-`fabric2_index:register(Module)` function. When it registers, it must provide
-an implementation of that behavior in that module.
-
- * `build_indices/2`: must inspect all the passed-in design doc bodies and
-trigger asynchronous index updates for all the views that module is responsible
-for.
-
- * `cleanup_indices/2`: must clean up all the stale indexing data associated
-with all the views in the design docs passed in as an argument.
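The registration and dispatch pattern could be pictured like this — a simplified JavaScript analogue of `fabric2_index:register/1` and the two behavior callbacks (the real implementation is Erlang, and the camelCase names are this sketch's, not CouchDB's):

```javascript
// Backends register a module implementing the two callbacks; on a db update,
// each registered backend builds and cleans its own indices.
const registeredIndexers = [];

const register = (mod) => registeredIndexers.push(mod);

const onDbUpdated = async (db, ddocs) => {
  for (const mod of registeredIndexers) {
    await mod.buildIndices(db, ddocs);
    await mod.cleanupIndices(db, ddocs);
  }
};
```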
-
-# Advantages and Disadvantages
-
- * The main advantage is simplicity: it relies on node-local updates and the
-   fact that all indexing is currently backed by `couch_jobs` jobs, which
-   handle global locking and coordination.
-
- * The main disadvantage is also simplicity: there is no concept of priority
-   to allow users to build some indices before others.
-
-# Key Changes
-
-The configuration format has changed. Instead of configuring background index
-building in the `[ken]` section, it is now configured in the `[fabric]` config
-section. Otherwise there are no external API changes.
-
-## Applications and Modules affected
-
- * fabric2_index
- * fabric2_db
- * couch_views
-
-## HTTP API additions
-
-N/A
-
-## HTTP API deprecations
-
-N/A
-
-# Security Considerations
-
-None
-
-# References
-
-[fabric2_index](https://github.com/apache/couchdb/blob/prototype/fdb-layer/src/fabric/src/fabric2_index.erl)
-[ken](https://github.com/apache/couchdb/tree/master/src/ken)
-
-# Co-authors
-
- * @davisp
-
-# Acknowledgements
-
- * @davisp
diff --git a/rfcs/016-fdb-replicator.md b/rfcs/016-fdb-replicator.md
deleted file mode 100644
index af66c20..0000000
--- a/rfcs/016-fdb-replicator.md
+++ /dev/null
@@ -1,384 +0,0 @@
----
-name: Formal RFC
-about: Submit a formal Request For Comments for consideration by the team.
-title: 'Replicator Implementation On FDB'
-labels: rfc, discussion
-assignees: 'vatamane@apache.org'
-
----
-
-# Introduction
-
-This document describes the design of the replicator application for CouchDB
-4.x. The replicator will rely on `couch_jobs` for centralized scheduling and
-monitoring of replication jobs.
-
-## Abstract
-
-Replication jobs can be created from documents in `_replicator` databases, or
-by `POST`-ing requests to the HTTP `/_replicate` endpoint. Previously, in
-CouchDB <= 3.x, replication jobs were mapped to individual cluster nodes and a
-scheduler component would run up to `max_jobs` number of jobs at a time on each
-node. The new design proposes using `couch_jobs`, as described in the
-[Background Jobs
-RFC](https://github.com/apache/couchdb-documentation/blob/main/rfcs/007-background-jobs.md),
-to have a central, FDB-based queue of replication jobs. `couch_jobs`
-application will manage job scheduling and coordination. The new design also
-proposes using heterogeneous node types as defined in the [Node Types
-RFC](https://github.com/apache/couchdb-documentation/blob/main/rfcs/013-node-types.md)
-such that replication jobs will be created only on `api_frontend` nodes and run
-only on `replication` nodes.
-
-## Requirements Language
-
-The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD",
-"SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be
-interpreted as described in [RFC
-2119](https://www.rfc-editor.org/rfc/rfc2119.txt).
-
-## Terminology
-
-`_replicator` databases : A database that is either named `_replicator` or ends
-with the `/_replicator` suffix.
-
-`transient` replications : Replication jobs created by `POST`-ing to the
-`/_replicate` endpoint.
-
-`persistent` replications : Replication jobs defined in a document in a
-`_replicator` database.
-
-`continuous` replications : Replication jobs created with the `"continuous":
-true` parameter. These jobs will try to run continuously until the user removes
-them. They may be temporarily paused to allow other jobs to make progress.
-
-`one-shot` replications : Replication jobs which are not `continuous`. If the
-`"continuous":true` parameter is not specified, by default, replication jobs
-will be `one-shot`. These jobs will try to run until they reach the end of the
-changes feed, then stop.
-
-`api_frontend node` : Database node which has the `api_frontend` type set to
-`true` as described in
-[RFC](https://github.com/apache/couchdb-documentation/blob/main/rfcs/013-node-types.md).
-Replication jobs can only be created on these nodes.
-
-`replication node` : Database node which has the `replication` type set to
-`true` as described in
-[RFC](https://github.com/apache/couchdb-documentation/blob/main/rfcs/013-node-types.md).
-Replication jobs can only be run on these nodes.
-
-`filtered` replications: Replications with a user-defined filter on the source
-endpoint to filter its changes feed.
-
-`replication_id` : An ID defined by replication jobs, which is a hash of
-replication parameters that affect the result of the replication. These may
-include source and target endpoint URLs, as well as a filter function specified
-in a design document on the source endpoint.
-
-`job_id` : A replication job ID derived from the database and document IDs for
-persistent replications, and from source, target endpoint, user name and some
-options for transient replications. Computing a `job_id`, unlike a
-`replication_id`, doesn't require making any network requests. A filtered
-replication with a given `job_id` may, during its lifetime, change its
-`replication_id` multiple times when the filter contents change on the source.
-
-`max_jobs` : Configuration parameter which specifies up to how many replication
-jobs to run on each `replication` node.
-
-`max_churn` : Configuration parameter which specifies a limit of how many new
-jobs to spawn during each rescheduling interval.
-
-`min_backoff_penalty` : Configuration parameter specifying the minimum (the
-base) penalty applied to jobs which crash repeatedly.
-
-`max_backoff_penalty` : Configuration parameter specifying the maximum penalty
-applied to jobs which crash repeatedly.
-
----
-
-# Detailed Description
-
-Replication job creation and scheduling works roughly as follows:
-
- 1) `Persistent` and `transient` jobs both start by creating or updating a
- `couch_jobs` record in a separate replication key-space on `api_frontend`
- nodes. Persistent jobs are driven by the `couch_epi` callback mechanism which
- notifies `couch_replicator` application when documents in `_replicator` DBs
- are updated, or when `_replicator` DBs are created and deleted. Transient jobs
- are created from the `_replicate` HTTP handler directly. Newly created jobs
- are in a `pending` state.
-
- 2) Each `replication` node spawns some acceptor processes which wait for jobs
- in a `couch_jobs:accept/2` call. They will accept only jobs which are
- scheduled to run at a time less than or equal to the current time.
-
- 3) After a job is accepted, its state is updated to `running`, and then, a
- gen_server process monitoring these replication jobs will spawn another
- acceptor. That happens until the `max_jobs` limit is reached.
-
- 4) The same monitoring gen_server will periodically check if there are any
- pending jobs in the queue and, if there are, spawn up to some `max_churn`
- number of new acceptors. These acceptors may start new jobs and, if they do,
- for each one of them, the oldest running job will be stopped and re-enqueued
- as `pending`. This largely follows the logic of the replication scheduler
- in CouchDB <= 3.x, except that it uses `couch_jobs` as the central queuing
- and scheduling mechanism.
-
- 5) After the job is marked as `running`, it computes its `replication_id`,
- initializes an internal replication state record from the job's data object, and
- starts replicating. Underneath this level the logic is identical to what's
- already happening in CouchDB <= 3.x and so it is not described further in this
- document.
-
- 6) As jobs run, they periodically checkpoint, and when they do that, they also
- recompute their `replication_id`. In the case of filtered replications the
- `replication_id` may change, and if so, that job is stopped and re-enqueued as
- `pending`. Also, during checkpointing the job's data value is updated with
- stats such that the job stays active and doesn't get re-enqueued by the
- `couch_jobs` activity monitor.
-
- 7) If the job crashes, it will reschedule itself in `gen_server:terminate/2`
- via `couch_jobs:resubmit/3` call to run again at some future time, defined
- roughly as `now + min(min_backoff_penalty * 2^consecutive_errors,
- max_backoff_penalty)`. If a job starts and successfully runs for some
- predefined period of time without crashing, it is considered to be `"healed"`
- and its `consecutive_errors` count is reset to 0.
-
- 8) If the node where replication job runs crashes, or the job is manually
- killed via `exit(Pid, kill)`, `couch_jobs` activity monitor will automatically
- re-enqueue the job as `pending`.
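
The accept/stop cycle in steps 2-4 can be sketched in Python. This is an illustrative model only, not the `couch_jobs` API: the queues are plain deques, and each rescheduling interval starts up to `max_churn` pending jobs, stopping and re-enqueuing the oldest running job for each one started.

```python
from collections import deque

def reschedule(running, pending, max_jobs, max_churn):
    """One rescheduling cycle: a sketch of steps 2-4, not the real couch_jobs API.

    `running` is a deque ordered oldest-first; `pending` is a FIFO of job ids.
    Returns the updated (running, pending) queues.
    """
    for _ in range(min(max_churn, len(pending))):
        job = pending.popleft()
        running.append(job)             # newly accepted job transitions to `running`
        if len(running) > max_jobs:
            oldest = running.popleft()  # the oldest running job is stopped...
            pending.append(oldest)      # ...and re-enqueued as `pending`
    return running, pending

running, pending = deque(["a", "b"]), deque(["c", "d", "e"])
running, pending = reschedule(running, pending, max_jobs=2, max_churn=2)
```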
-
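The backoff schedule in step 7 is a capped exponential: the penalty starts at `min_backoff_penalty` and doubles with each consecutive error until it reaches `max_backoff_penalty`. A minimal sketch (the parameter values below are made up for illustration, not CouchDB defaults):

```python
def backoff_penalty(consecutive_errors, min_penalty, max_penalty):
    """Seconds to wait before retrying a crashed job: base * 2^errors, capped."""
    return min(min_penalty * 2 ** consecutive_errors, max_penalty)

# Illustrative values only: a 32 second base, capped at one day.
penalties = [backoff_penalty(n, min_penalty=32, max_penalty=86400) for n in range(15)]
```
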
-## Replicator Job States
-
-### Description
-
-The set of replication job states is defined as:
-
- * `pending` : A job is marked as `pending` in these cases:
-    - As soon as a job is created from an `api_frontend` node
-    - When it is stopped to let other replication jobs run
-    - When a filtered replication's `replication_id` changes
-
- * `running` : Set when a job is accepted by the `couch_jobs:accept/2`
-   call. This generally means the job is actually running on a node,
-   however, in cases when a node crashes, the job may show as
-   `running` on that node until `couch_jobs` activity monitor
-   re-enqueues the job, and it starts running on another node.
-
- * `crashing` : The job was running, but then crashed with an intermittent
-   error. The job's data holds an error count which is incremented, and then a
-   backoff penalty is computed and the job is rescheduled to try again at some
-   point in the future.
-
- * `completed` : One-Shot replications which have completed
-
- * `failed` : This can happen when:
-    - A replication job could not be parsed from a replication document. For
-      example, if the user has not specified a `"source"` field.
-    - A transient replication job crashes. Transient jobs don't get rescheduled
-      to run again after they crash.
-    - There already is another persistent replication job running or pending
-      with the same `replication_id`.
-
-### State Differences From CouchDB <= 3.x
-
-The set of states is slightly different from the one used before. There are
-now fewer states, as some of them have been combined:
-
- * `initializing` was combined with `pending`
-
- * `error` was combined with `crashing`
-
-### Mapping Between couch_jobs States and Replication States
-
-The `couch_jobs` application has its own set of state definitions, which map to
-replicator states like so:
-
- | Replicator States | `couch_jobs` States |
- | ----------------- | :------------------ |
- | pending           | pending             |
- | running           | running             |
- | crashing          | pending             |
- | completed         | finished            |
- | failed            | finished            |
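
The mapping above is small enough to capture as a lookup table; a minimal sketch (the names below are illustrative, not part of any CouchDB module):

```python
# Mapping from replicator job states to the underlying couch_jobs states.
REPLICATOR_TO_COUCH_JOBS = {
    "pending": "pending",
    "running": "running",
    "crashing": "pending",
    "completed": "finished",
    "failed": "finished",
}

def couch_jobs_states_for(replicator_states):
    """Return the set of couch_jobs states backing the given replicator states."""
    return {REPLICATOR_TO_COUCH_JOBS[s] for s in replicator_states}
```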
-
-### State Transition Diagram
-
-Jobs start in the `pending` state, after either a `_replicator` db doc
-update, or a POST to the `/_replicate` endpoint. Continuous jobs will
-normally toggle between `pending` and `running` states. One-Shot jobs
-may toggle between `pending` and `running` a few times and then end up
-in `completed`.
-
-```
-_replicator doc       +-------+
-POST /_replicate ---->+pending|
-                      +-------+
-                          ^
-                          |
-                          |
-                          v
-                      +---+---+      +--------+
-            +---------+running+<---->|crashing|
-            |         +---+---+      +--------+
-            |             |
-            |             |
-            v             v
-        +------+     +---------+
-        |failed|     |completed|
-        +------+     +---------+
-```
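
The diagram's legal transitions can be encoded as a relation for quick validation; a minimal sketch (names are illustrative, not CouchDB APIs):

```python
# Edges of the state diagram above, as (from_state, to_state) pairs.
TRANSITIONS = {
    ("pending", "running"), ("running", "pending"),
    ("running", "crashing"), ("crashing", "running"),
    ("running", "failed"), ("running", "completed"),
}

def is_valid_transition(from_state, to_state):
    """True if the diagram permits moving directly between the two states."""
    return (from_state, to_state) in TRANSITIONS
```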
-
-
-## Replication ID Collisions
-
-Multiple replication jobs may specify replications which map to the same
-`replication_id`. To handle these collisions there is an FDB subspace `(...,
-LayerPrefix, ?REPLICATION_IDS, replication_id) -> job_id` to keep track of
-them. After the `replication_id` is computed, each replication job checks if
-there is already another job pending or running with the same `replication_id`.
-If the other job is transient, then the current job will reschedule itself as
-`crashing`. If the other job is persistent, the current job will fail
-permanently as `failed`.
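
The collision-resolution decision described above can be sketched as a small function. This is an illustrative model only: the FDB subspace lookup is abstracted as a plain dict, and the names are hypothetical.

```python
def resolve_collision(replication_ids, replication_id, job):
    """Decide the fate of `job` when its replication_id collides with another job.

    `replication_ids` models the (..., ?REPLICATION_IDS, replication_id) -> job_id
    subspace as a dict of replication_id -> {"job_id": ..., "transient": bool}.
    Returns the state the current job should transition to.
    """
    other = replication_ids.get(replication_id)
    if other is None or other["job_id"] == job["job_id"]:
        replication_ids[replication_id] = job
        return "running"          # no collision: claim the id and keep running
    if other["transient"]:
        return "crashing"         # other job is transient: retry later
    return "failed"               # other job is persistent: fail permanently

ids = {"rep-1": {"job_id": "j1", "transient": True}}
state_a = resolve_collision(ids, "rep-1", {"job_id": "j2", "transient": False})
state_b = resolve_collision(ids, "rep-2", {"job_id": "j3", "transient": False})
```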
-
-## Replication Parameter Validation
-
-`_replicator` documents in CouchDB <= 3.x were parsed and validated in a
-two-step process:
-
-  1) In a validate-doc-update (VDU) JavaScript function from a programmatically
-  inserted `_design` document. This validation happened when the document was
-  updated, and performed some rough checks on field names and value types. If
-  this validation failed, the document update operation was rejected.
-
-  2) Inside the replicator's Erlang code, when it was translated to an internal
-  record used by the replication application. This validation was more thorough
-  but didn't have very friendly error messages. If validation failed here, the
-  job would be marked as `failed`.
-
-For CouchDB 4.x the proposal is to use only the Erlang parser. It would be
-called from the `before_doc_update` callback. This is a callback which runs
-before every document update. If validation fails there, the document update
-operation is rejected. This should reduce code duplication and also provide
-better feedback to the users directly when they update the `_replicator`
-documents.
-
-## Transient Job Behavior
-
-In CouchDB <= 3.x transient replication jobs ran in memory on a particular node
-in the cluster. If the node where a replication job ran crashed, the job
-would simply disappear without a trace. It was up to the user to periodically
-monitor the job status and re-create the job. In the current design,
-`transient` jobs are persisted to FDB as `couch_jobs` records, and so would
-survive node restarts. Also, transient jobs used to disappear immediately
-after they completed or failed. This design proposes keeping them around
-for a configurable amount of time to allow users to retrieve their status via
-the `_scheduler/jobs/$id` API.
-
-## Monitoring Endpoints
-
-The `_active_tasks`, `_scheduler/jobs` and `_scheduler/docs` endpoints are
-handled by traversing the replication jobs' data using a new
-`couch_jobs:fold_jobs/4` API function. The `_active_tasks` implementation
-already works that way and the `_scheduler/*` endpoints will work similarly.
-
-## Replication Documents Not Updated For Transient Errors
-
-Configuration
-[option](https://docs.couchdb.org/en/latest/replication/replicator.html?highlight=update_docs#compatibility-mode)
-`[replicator] update_docs = false` was introduced with the scheduling
-replicator in a 2.x release. It controls whether to update replication
-documents with transient states like `triggered` and `error`. It defaulted to
-`false` and was mainly for compatibility with older monitoring user scripts.
-That behavior now becomes hard-coded such that replication documents are only
-updated with the terminal states `failed` and `completed`. Users should use
-the `_scheduler/docs` API to check for completion status instead.
-
-
-# Advantages and Disadvantages
-
-Advantages:
-
- * Simplicity: re-using `couch_jobs` means having a lot less code to maintain
-   in `couch_replicator`. In the draft implementation there are about 3000
-   lines of code saved compared to the replicator application in CouchDB 3.x
-
- * Simpler endpoint and monitoring implementation
-
- * Fewer replication job states to keep track of
-
- * Transient replications can survive node crashes and restarts
-
- * Simplified and improved validation logic
-
- * Using node types allows tightening firewall rules such that only
-   `replication` nodes are the ones which may make arbitrary requests outside
-   the cluster, and `api_frontend` nodes are the only ones that may accept
-   incoming connections.
-
-Disadvantages:
-
- * Behavior changes for transient jobs
-
- * Centralized job queue might mean handling some number of conflicts generated
-   in the FDB backend when jobs are accepted. These are mitigated using the
-   `startup_jitter` configuration parameter and a configurable number of max
-   acceptors per node.
-
- * In monitoring API responses, `running` job state might not immediately
-   reflect the running process state on the replication node. If the node
-   crashes, it might take up to a minute or two until the job is re-enqueued by
-   the `couch_jobs` activity monitor.
-
-# Key Changes
-
- * Behavior changes for transient jobs
-
- * A delay in `running` state as reflected in monitoring API responses
-
- * `[replicator] update_docs = false` configuration option becomes hard-coded
-
-## Applications and Modules affected
-
- * couch_jobs : New APIs to fold jobs and get pending count job estimate
-
- * fabric2_db : Adding EPI db create/delete callbacks
-
- * couch_replicator :
-    - Remove `couch_replicator_scheduler*` modules
-    - Remove `couch_replicator_doc_processor_*` modules
-    - `couch_replicator` : job creation and a general API entry-point for
-      couch_replicator.
-    - `couch_replicator_job` : runs each replication job
-    - `couch_replicator_job_server` : replication job monitoring gen_server
-    - `couch_replicator_parse` : parses replication document and HTTP
-      `_replicate` POST bodies
-
-## HTTP API additions
-
-N/A
-
-## HTTP API deprecations
-
-N/A
-
-# Security Considerations
-
-The ability to confine replication jobs to `replication` nodes improves the
-security posture. It is possible to set up firewall rules which allow egress
-traffic sent out only from those nodes.
-
-# References
-
-* [Background Jobs RFC](https://github.com/apache/couchdb-documentation/blob/main/rfcs/007-background-jobs.md)
-
-* [Node Types RFC](https://github.com/apache/couchdb-documentation/blob/main/rfcs/013-node-types.md)
-
-* [CouchDB 3.x replicator implementation](https://github.com/apache/couchdb/blob/3.x/src/couch_replicator/README.md)
-
-# Co-authors
-
- * @davisp
-
-# Acknowledgements
-
- * @davisp
diff --git a/rfcs/017-fair-share-scheduling.md b/rfcs/017-fair-share-scheduling.md
deleted file mode 100644
index 64ba05e..0000000
--- a/rfcs/017-fair-share-scheduling.md
+++ /dev/null
@@ -1,210 +0,0 @@
----
-name: Formal RFC
-about: Submit a formal Request For Comments for consideration by the team.
-title: 'Fair Share Job Scheduling for CouchDB 3.x Replicator'
-labels: rfc, discussion
-assignees: 'vatamane@apache.org'
-
----
-
-# Introduction
-
-This document describes an improvement to the CouchDB 3.x replicator to
-introduce fair resource sharing between replication jobs in different
-`_replicator` databases.
-
-## Abstract
-
-Currently the CouchDB 3.x replicator schedules jobs without any regard to which
-database they originated from. If there are multiple `_replicator` dbs, then
-jobs from the dbs with the most jobs will consume most of the scheduler's
-resources. The proposal is to implement a fair sharing scheme as described in
-[A Fair Share Scheduler][2] paper by Judy Kay and Piers Lauder. It would allow
-sharing replication scheduler resources fairly amongst `_replicator` dbs.
-
-The idea was originally discussed on the [couchdb-dev][1] mailing list and the
-use of the Fair Share algorithm was suggested by Joan Touzet.
-
-## Requirements Language
-
-The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD",
-"SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be
-interpreted as described in [RFC
-2119](https://www.rfc-editor.org/rfc/rfc2119.txt).
-
-## Terminology
-
-`_replicator` database : A database that is either named `_replicator` or ends
-with the `/_replicator` suffix.
-
-`shares` : An abstract representation of entitlement to run on the replication
-scheduler.
-
-`usage` : A measure of resource usage by jobs from a particular `_replicator`
-db. For the scheduling replicator this will be the total time spent running.
-
-`continuous` replications : Replication jobs created with the `"continuous":
-true` parameter. These jobs will try to run continuously until the user removes
-them. They may be temporarily paused to allow other jobs to make progress.
-
-`one-shot` replications : Replication jobs which are not `continuous`. If the
-`"continuous":true` parameter is not specified, by default, replication jobs
-will be `one-shot`. These jobs will try to run until they reach the end of the
-changes feed, then stop.
-
-`job priority` : A job attribute which indicates the likelihood of the job
-being executed before other jobs. Following the convention in the "Fair Share"
-paper, jobs with a lower priority value are at the front of the pending queue,
-and get executed first.
-
-`max_jobs` : Configuration parameter which specifies up to how many replication
-jobs to run on each `replication` node.
-
-`max_churn` : Configuration parameter which specifies a limit of how many new
-jobs to spawn during each rescheduling interval.
-
----
-
-# Detailed Description
-
-The general idea behind the algorithm is to continuously monitor
-per-`_replicator`-db job statistics and update each job's priority in
-proportion to the usage from all the jobs in the same `_replicator` db. To make
-sure no job starves, all the priorities are continuously boosted, such that
-jobs which haven't run for a while eventually get a chance to run.
-
-The algorithm has 3 basic components that can run mostly independently from
-each other:
-
-1) Keep track of `usage` for each `_replicator` db. In the paper this part is
-called "user-level scheduling". As jobs run, they send reports to this
-component. Those reports are accumulated for one period, then rolled up when
-the period ends. There is also a decay coefficient applied to account for
-recent historical usage (this is called `K1` in the paper). This ensures that,
-in the absence of jobs running from a particular `_replicator` db, the usage
-drops to 0 and the whole entry is removed from the table altogether.
-
- Every `UsageUpdateInterval` seconds (called `t1` in the paper):
-   For each `Db`:
-     ```
-     DecayCoeff = get_usage_decay_coefficient(0.5)
-     AccumulatedUsage = get_accumulated_usage(Db),
-     update_usage(Db, usage(Db) * DecayCoeff + AccumulatedUsage)
-     reset_accumulated_usage(Db)
-     ```
-
-2) Uniformly decay all process priorities. Periodically lower the priority
-values, and thus boost the priority, of all the pending and running jobs in the
-system. The paper in this step applies a per-process "nice" value, which is
-skipped in the initial proposal. It could be added later if needed.
-
- Every `UniformPriorityBoostInterval` seconds (called `t2` in the paper):
-   For each `Job`:
-     ```
-     DecayCoeff = get_uniform_decay_coefficient(0.75),
-     Job#job.priority = Job#job.priority * DecayCoeff
-     ```
-
-[note]: If jobs were scheduled to run at an absolute future time (a deadline),
-this step could be avoided. The effect of all the jobs periodically moving to
-the front of the queue would instead be accomplished by the current time (i.e.
-`now()`) moving ahead along the timeline.
-
-3) Adjust running process priority in proportion to the shares used by all the
-jobs in the same db:
-
- Every `RunningPriorityReduceInterval` seconds (called `t3` in the paper):
-   For each `Job`:
-     ```
-     Db = Job#job.db,
-     SharesSq = shares(Db) * shares(Db),
-     Job#job.priority = Job#job.priority + (usage(Db) * pending(Db)) / SharesSq
-     ```
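
The three components above can be combined into one runnable sketch that follows the pseudocode directly. This is illustrative only: the usage table, share counts, and pending counts are plain dicts, and the decay coefficients are the example values from the pseudocode (0.5 and 0.75), not configured CouchDB defaults.

```python
def roll_up_usage(usage, accumulated, decay=0.5):
    """Component 1: fold each db's accumulated usage into its decayed total."""
    for db in set(usage) | set(accumulated):
        new = usage.get(db, 0) * decay + accumulated.get(db, 0)
        if new > 0:
            usage[db] = new
        else:
            usage.pop(db, None)  # decayed to 0: drop the entry altogether
    accumulated.clear()

def boost_all(jobs, decay=0.75):
    """Component 2: uniformly lower priority values (lower value = runs sooner)."""
    for job in jobs:
        job["priority"] *= decay

def charge_running(jobs, usage, shares, pending_counts):
    """Component 3: push back jobs from heavily used dbs, scaled by shares^2."""
    for job in jobs:
        db = job["db"]
        shares_sq = shares.get(db, 100) ** 2  # default of 100 shares per db
        job["priority"] += usage.get(db, 0) * pending_counts.get(db, 0) / shares_sq

usage = {"low/_replicator": 8.0}
accumulated = {"low/_replicator": 4.0, "high/_replicator": 2.0}
roll_up_usage(usage, accumulated)     # low: 8*0.5 + 4 = 8.0, high: 2.0

jobs = [{"db": "low/_replicator", "priority": 4.0}]
boost_all(jobs)                       # 4.0 * 0.75 = 3.0
charge_running(jobs, usage, {"low/_replicator": 50}, {"low/_replicator": 5})
```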
-
-### How Jobs Start and Stop
-
-During each rescheduling cycle, `max_churn` running jobs from the back of the
-queue are stopped and `max_churn` jobs from the front of the pending queue are
-started. This part is not modified from the existing scheduling algorithm,
-except that now the jobs are ordered by their `priority` value before being
-ordered by their last start time.
-
-In addition, `one-shot` replication jobs would still be skipped when stopping
-and we'd let them run in order to maintain traditional replication semantics
-just like before.
-
-When picking the jobs to run, jobs which have been exponentially backed
-off due to repeated errors are excluded. This part is also unmodified from the
-original scheduler.
-
-### Configuration
-
-The decay coefficients and interval times for each of the 3 parts of the algorithm would be configurable in the `[replicator]` config section.
-
-Per-`_replicator` db shares would be configurable in the `[replicator.shares]` section as:
-
-```
-[replicator.shares]
-$prefix/_replicator = $numshares
-```
-
-By default each db is assigned 100 shares. A higher number of shares indicates
-a larger proportion of scheduler resources allocated to that db, while a lower
-number gets proportionally fewer.
-
-For example:
-
-```
-[replicator.shares]
-
-; This is the default
-; _replicator = 100
-
-high/_replicator = 200
-low/_replicator = 50
-```
-
-# Advantages and Disadvantages
-
-Advantages:
-
-  * Allow a fair share of resources between multiple `_replicator` db instances
-
-  * Can boost or lower the priority of some replication jobs by adjusting the
-    shares assigned to that database instance.
-
-Disadvantages:
-
-  * Adds more complexity to the scheduler
-
-# Key Changes
-
- * Modifies replication scheduler
-
-## Applications and Modules affected
-
- * `couch_replicator` application
-
-## HTTP API additions
-
-N/A
-
-## HTTP API deprecations
-
-N/A
-
-# Security Considerations
-
-None
-
-# References
-
-* [1]: https://lists.apache.org/thread.html/rebba9a43bfdf9696f2ce974b0fc7550a631c7b835e4c14e51cd27a87%40%3Cdev.couchdb.apache.org%3E "couchdb-dev"
-
-* [2]: https://proteusmaster.urcf.drexel.edu/urcfwiki/images/KayLauderFairShare.pdf "Fair Share Scheduler"
-
-# Co-authors
-
- * Joan Touzet (@wohali)
-
-# Acknowledgments
-
- * Joan Touzet (@wohali)
diff --git a/rfcs/images/SkExample1.png b/rfcs/images/SkExample1.png
deleted file mode 100644
index cb3abb1..0000000
--- a/rfcs/images/SkExample1.png
+++ /dev/null
Binary files differ
diff --git a/rfcs/images/SkExample2.png b/rfcs/images/SkExample2.png
deleted file mode 100644
index 86d4506..0000000
--- a/rfcs/images/SkExample2.png
+++ /dev/null
Binary files differ
diff --git a/rfcs/images/SkExample3.png b/rfcs/images/SkExample3.png
deleted file mode 100644
index e47e216..0000000
--- a/rfcs/images/SkExample3.png
+++ /dev/null
Binary files differ
diff --git a/rfcs/template.md b/rfcs/template.md
deleted file mode 100644
index a966bd9..0000000
--- a/rfcs/template.md
+++ /dev/null
@@ -1,85 +0,0 @@
----
-name: Formal RFC
-about: Submit a formal Request For Comments for consideration by the team.
-title: ''
-labels: rfc, discussion
-assignees: ''
-
----
-
-[NOTE]: # ( ^^ Provide a general summary of the RFC in the title above. ^^ )
-
-# Introduction
-
-## Abstract
-
-[NOTE]: # ( Provide a 1-to-3 paragraph overview of the requested change. )
-[NOTE]: # ( Describe what problem you are solving, and the general approach. )
-
-## Requirements Language
-
-[NOTE]: # ( Do not alter the section below. Follow its instructions. )
-
-The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
-"SHOULD", "SHOULD NOT", "RECOMMENDED",  "MAY", and "OPTIONAL" in this
-document are to be interpreted as described in
-[RFC 2119](https://www.rfc-editor.org/rfc/rfc2119.txt).
-
-## Terminology
-
-[TIP]:  # ( Provide a list of any unique terms or acronyms, and their definitions here.)
-
----
-
-# Detailed Description
-
-[NOTE]: # ( Describe the solution being proposed in greater detail. )
-[NOTE]: # ( Assume your audience has knowledge of, but not necessarily familiarity )
-[NOTE]: # ( with, the CouchDB internals. Provide enough context so that the reader )
-[NOTE]: # ( can make an informed decision about the proposal. )
-
-[TIP]:  # ( Artwork may be attached to the submission and linked as necessary. )
-[TIP]:  # ( ASCII artwork can also be included in code blocks, if desired. )
-
-# Advantages and Disadvantages
-
-[NOTE]: # ( Briefly, list the benefits and drawbacks that would be realized should )
-[NOTE]: # ( the proposal be accepted for inclusion into Apache CouchDB. )
-
-# Key Changes
-
-[TIP]: # ( If the changes will affect how a user interacts with CouchDB, explain. )
-
-## Applications and Modules affected
-
-[NOTE]: # ( List the OTP applications or functional modules in CouchDB affected by the proposal. )
-
-## HTTP API additions
-
-[NOTE]: # ( Provide *exact* detail on each new API endpoint, including: )
-[NOTE]: # (   HTTP methods [HEAD, GET, PUT, POST, DELETE, etc.] )
-[NOTE]: # (   Synopsis of functionality )
-[NOTE]: # (   Headers and parameters accepted )
-[NOTE]: # (   JSON in [if a PUT or POST type] )
-[NOTE]: # (   JSON out )
-[NOTE]: # (   Valid status codes and their definitions )
-[NOTE]: # (   A proposed Request and Response block )
-
-## HTTP API deprecations
-
-[NOTE]: # ( Provide *exact* detail on the API endpoints to be deprecated. )
-[NOTE]: # ( If these endpoints are replaced by new endpoints, list those as well. )
-[NOTE]: # ( State the proposed version in which the deprecation and removal will occur. )
-
-# Security Considerations
-
-[NOTE]: # ( Include any impact to the security of CouchDB here. )
-
-# References
-
-[TIP]:  # ( Include any references to CouchDB documentation, mailing list discussion, )
-[TIP]:  # ( external standards or other links here. )
-
-# Acknowledgements
-
-[TIP]:  # ( Who helped you write this RFC? )
diff --git a/src/about.rst b/src/about.rst
deleted file mode 100644
index a3bfc50..0000000
--- a/src/about.rst
+++ /dev/null
@@ -1,24 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _about:
-
-===========================
-About CouchDB Documentation
-===========================
-
-License
-=======
-
-.. literalinclude:: ../LICENSE
-    :language: none
-    :lines: 1-202
diff --git a/src/api/basics.rst b/src/api/basics.rst
deleted file mode 100644
index bce96cc..0000000
--- a/src/api/basics.rst
+++ /dev/null
@@ -1,605 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _api/basics:
-
-==========
-API Basics
-==========
-
-The CouchDB API is the primary method of interfacing to a CouchDB instance.
-Requests are made using HTTP and are used to retrieve information from the
-database, store new data, and query and format the information stored within
-the documents.
-
-Requests to the API can be categorised by the different areas of the CouchDB
-system that you are accessing, and the HTTP method used to send the request.
-Different methods imply different operations, for example retrieval of
-information from the database is typically handled by the ``GET`` operation,
-while updates are handled by either a ``POST`` or ``PUT`` request. There are
-some differences between the information that must be supplied for the
-different methods. For a guide to the basic HTTP methods and request structure,
-see :ref:`api/format`.
-
-For nearly all operations, the submitted data, and the returned data structure,
-is defined within a JavaScript Object Notation (JSON) object. Basic information
-on the content and data types for JSON are provided in :ref:`json`.
-
-Errors when accessing the CouchDB API are reported using standard HTTP Status
-Codes. A guide to the generic codes returned by CouchDB are provided in
-:ref:`errors`.
-
-When accessing specific areas of the CouchDB API, specific information and
-examples on the HTTP methods and request, JSON structures, and error codes are
-provided.
-
-.. _api/format:
-
-Request Format and Responses
-============================
-
-CouchDB supports the following HTTP request methods:
-
-- ``GET``
-
-  Request the specified item. As with normal HTTP requests, the format of the
-  URL defines what is returned. With CouchDB this can include static items,
-  database documents, and configuration and statistical information. In most
-  cases the information is returned in the form of a JSON document.
-
-- ``HEAD``
-
-  The ``HEAD`` method is used to get the HTTP header of a ``GET`` request
-  without the body of the response.
-
-- ``POST``
-
-  Upload data. Within CouchDB ``POST`` is used to set values, including
-  uploading documents, setting document values, and starting certain
-  administration commands.
-
-- ``PUT``
-
-  Used to put a specified resource. In CouchDB ``PUT`` is used to create new
-  objects, including databases, documents, views and design documents.
-
-- ``DELETE``
-
-  Deletes the specified resource, including documents, views, and design
-  documents.
-
-- ``COPY``
-
-  A special method that can be used to copy documents and objects.
-
-If you use an unsupported HTTP request type with a URL that does not support
-the specified type, then a ``405 - Method Not Allowed`` will be returned,
-listing the supported HTTP methods. For example:
-
-.. code-block:: javascript
-
-    {
-        "error":"method_not_allowed",
-        "reason":"Only GET,HEAD allowed"
-    }
-
-HTTP Headers
-============
-
-Because CouchDB uses HTTP for all communication, you need to ensure that the
-correct HTTP headers are supplied (and processed on retrieval) so that you get
-the right format and encoding. Different environments and clients will be more
-or less strict on the effect of these HTTP headers (especially when not
-present). Where possible you should be as specific as possible.
-
-Request Headers
----------------
-
-- ``Accept``
-
-  Specifies the list of accepted data types to be returned by the server (i.e.
-  that are accepted/understandable by the client). The format should be a list
-  of one or more MIME types, separated by commas.
-
-  For the majority of requests the definition should be for JSON data
-  (``application/json``). For attachments you can either specify the MIME type
-  explicitly, or use ``*/*`` to specify that all file types are supported. If
-  the ``Accept`` header is not supplied, then the ``*/*`` MIME type is assumed
-  (i.e. client accepts all formats).
-
-  The use of ``Accept`` in queries for CouchDB is not required, but is highly
-  recommended as it helps to ensure that the data returned can be processed by
-  the client.
-
-  If you specify a data type using the ``Accept`` header, CouchDB will honor
-  the specified type in the ``Content-type`` header field returned. For
-  example, if you explicitly request ``application/json`` in the ``Accept`` of
-  a request, the returned HTTP headers will use the value in the returned
-  ``Content-type`` field.
-
-  For example, when sending a request without an explicit ``Accept`` header, or
-  when specifying ``*/*``:
-
-  .. code-block:: http
-
-      GET /recipes HTTP/1.1
-      Host: couchdb:5984
-      Accept: */*
-
-  The returned headers are:
-
-  .. code-block:: http
-
-      HTTP/1.1 200 OK
-      Server: CouchDB (Erlang/OTP)
-      Date: Thu, 13 Jan 2011 13:39:34 GMT
-      Content-Type: text/plain;charset=utf-8
-      Content-Length: 227
-      Cache-Control: must-revalidate
-
-  .. Note::
-      The returned content type is ``text/plain`` even though the information
-      returned by the request is in JSON format.
-
-  Explicitly specifying the ``Accept`` header:
-
-  .. code-block:: http
-
-      GET /recipes HTTP/1.1
-      Host: couchdb:5984
-      Accept: application/json
-
-  The headers returned include the ``application/json`` content type:
-
-  .. code-block:: http
-
-      HTTP/1.1 200 OK
-      Server: CouchDB (Erlang/OTP)
-      Date: Thu, 13 Jan 2013 13:40:11 GMT
-      Content-Type: application/json
-      Content-Length: 227
-      Cache-Control: must-revalidate
-
-- ``Content-type``
-
-  Specifies the content type of the information being supplied within the
-  request. The specification uses MIME type specifications. For the majority of
-  requests this will be JSON (``application/json``). For some settings the MIME
-  type will be plain text. When uploading attachments it should be the
-  corresponding MIME type for the attachment or binary
-  (``application/octet-stream``).
-
-  The use of the ``Content-type`` on a request is highly recommended.
-
-Response Headers
-----------------
-
-Response headers are returned by the server when sending back content and
-include a number of different header fields, many of which are standard HTTP
-response headers and have no significance to CouchDB operation. The response
-headers important to CouchDB are listed below.
-
-- ``Cache-control``
-
-  The cache control HTTP response header provides a suggestion for client
-  caching mechanisms on how to treat the returned information. CouchDB
-  typically returns the ``must-revalidate``, which indicates that the
-  information should be revalidated if possible. This is used to ensure that
-  the dynamic nature of the content is correctly updated.
-
-- ``Content-length``
-
-  The length (in bytes) of the returned content.
-
-- ``Content-type``
-
-  Specifies the MIME type of the returned data. For most requests, the returned
-  MIME type is ``text/plain``. All text is encoded in Unicode (UTF-8), and this
-  is explicitly stated in the returned ``Content-type``, as
-  ``text/plain;charset=utf-8``.
-
-- ``Etag``
-
-  The ``Etag`` HTTP header field is used to show the revision for a document,
-  or a view.
-
-  ETags have been assigned to a map/reduce group (the collection of views in a
-  single design document). Any change to any of the indexes for those views
-  would generate a new ETag for all view URLs in a single design doc, even if
-  that specific view's results had not changed.
-
-  Each ``_view`` URL has its own ETag which only gets updated when changes are
-  made to the database that affect that index. If the index for that specific
-  view does not change, that view keeps the original ETag head (therefore
-  sending back ``304 - Not Modified`` more often).
-
-- ``Transfer-Encoding``
-
-  If the response uses an encoding, then it is specified in this header field.
-
-  ``Transfer-Encoding: chunked`` means that the response is sent in parts, a
-  method known as `chunked transfer encoding`_. This is used when CouchDB does
-  not know beforehand the size of the data it will send (for example,
-  the :ref:`changes feed <changes>`).
-
-- ``X-CouchDB-Body-Time``
-
-  Time spent receiving the request body in milliseconds.
-
-  Available when body content is included in the request.
-
-- ``X-Couch-Request-ID``
-
-  Unique identifier for the request.
-
-.. _chunked transfer encoding:
-    https://en.wikipedia.org/wiki/Chunked_transfer_encoding
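-
-As a sketch (the URL, design document and view name below are hypothetical),
-a client can replay a previously returned ``Etag`` value in an
-``If-None-Match`` request header, so the server can answer
-``304 - Not Modified`` instead of resending an unchanged view result:
-
```python
import urllib.request

# Hypothetical ETag remembered from an earlier response to this view URL.
cached_etag = '"1W2DJUZFZSZD9K78UFA3GZWB4"'

# Build a conditional GET; if the view index is unchanged, the server
# may reply with 304 - Not Modified and an empty body.
req = urllib.request.Request(
    "http://localhost:5984/db/_design/ddoc/_view/by_title",
    headers={"If-None-Match": cached_etag},
)
```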
-
-.. _json:
-
-JSON Basics
-===========
-
-The majority of requests and responses to CouchDB use the JavaScript Object
-Notation (JSON) for formatting the content and structure of the data and
-responses.
-
-JSON is used because it is the simplest and easiest solution for working with
-data within a web browser, as JSON structures can be evaluated and used as
-JavaScript objects within the web browser environment. JSON also integrates
-with the server-side JavaScript used within CouchDB.
-
-JSON supports the same basic types as JavaScript:
-
-- Array - a list of values enclosed in square brackets. For example:
-
-  .. code-block:: javascript
-
-      ["one", "two", "three"]
-
-- Boolean - a ``true`` or ``false`` value. You can use these values directly.
-  For example:
-
-  .. code-block:: javascript
-
-      { "value": true }
-
-- Number - an integer or floating-point number.
-
-- Object - a set of key/value pairs (i.e. an associative array, or hash). The
-  key must be a string, but the value can be any of the supported JSON values.
-  For example:
-
-  .. code-block:: javascript
-
-      {
-          "servings" : 4,
-          "subtitle" : "Easy to make in advance, and then cook when ready",
-          "cooktime" : 60,
-          "title" : "Chicken Coriander"
-      }
-
-  In CouchDB, the JSON object is used to represent a variety of structures,
-  including the main CouchDB document.
-
-- String - this should be enclosed by double-quotes and supports Unicode
-  characters and backslash escaping. For example:
-
-  .. code-block:: javascript
-
-      "A String"
-
-Parsing JSON into a JavaScript object is supported through the ``JSON.parse()``
-function in JavaScript, or through various libraries that will perform the
-parsing of the content into a JavaScript object for you. Libraries for parsing
-and generating JSON are available in many languages, including Perl, Python,
-Ruby, Erlang and others.
-
-.. warning::
-    Care should be taken to ensure that your JSON structures are valid,
-    invalid structures will cause CouchDB to return an HTTP status code of 500
-    (server error).
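-
-As a quick illustration (using Python's standard ``json`` module), a payload
-can be validated locally before it is sent, avoiding a round trip that would
-end in a ``500`` response:
-
```python
import json

# A well-formed document and a malformed one (unquoted key).
valid = '{"title": "Chicken Coriander", "servings": 4}'
invalid = '{"title": "Chicken Coriander", servings: 4}'

doc = json.loads(valid)  # parses cleanly into a dict

try:
    json.loads(invalid)
    problem = None
except json.JSONDecodeError as err:
    # CouchDB would reject this body; catching it locally is cheaper.
    problem = err.msg
```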
-
-.. _json/numbers:
-
-Number Handling
----------------
-
-Developers and users new to how computers handle numbers are often surprised
-to find that a number stored in JSON does not necessarily come back with the
-same character-by-character representation.
-
-Any numbers defined in JSON that contain a decimal point or exponent will be
-passed through the Erlang VM's idea of the "double" data type. Any numbers that
-are used in views will pass through the view server's idea of a number (the
-common JavaScript case means even integers pass through a double due to
-JavaScript's definition of a number).
-
-Consider this document that we write to CouchDB:
-
-.. code-block:: javascript
-
-    {
-        "_id":"30b3b38cdbd9e3a587de9b8122000cff",
-        "number": 1.1
-    }
-
-Now let’s read that document back from CouchDB:
-
-.. code-block:: javascript
-
-    {
-        "_id":"30b3b38cdbd9e3a587de9b8122000cff",
-        "_rev":"1-f065cee7c3fd93aa50f6c97acde93030",
-        "number":1.1000000000000000888
-    }
-
-What happens is that CouchDB decodes the number it was given into an internal
-numerical format, and then re-encodes the textual representation of that
-result. In most cases this internal format is an `IEEE 754`_ double precision
-floating point number, which is exactly what almost all other languages use
-as well.
-
-.. _IEEE 754: https://en.wikipedia.org/wiki/IEEE_754-2008
-
-What Erlang does a bit differently than other languages is that it does not
-attempt to pretty print the resulting output to use the shortest number of
-characters. For instance, this is why we have this relationship:
-
-.. code-block:: erlang
-
-    ejson:encode(ejson:decode(<<"1.1">>)).
-    <<"1.1000000000000000888">>
-
-What can be confusing here is that internally those two formats decode into the
-same IEEE-754 representation. And more importantly, it will decode into a
-fairly close representation when passed through all major parsers that we know
-about.
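-
-This can be checked directly; for example, in Python both textual forms
-decode to exactly the same double:
-
```python
import json
import struct

# The short and the long textual forms decode to the same IEEE-754 double.
a = json.loads("1.1")
b = json.loads("1.1000000000000000888")

same_value = (a == b)
# Even the in-memory bit patterns are identical.
same_bits = struct.pack(">d", a) == struct.pack(">d", b)
```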
-
-While we've only been discussing cases where the textual representation
-changes, another important case is when an input value contains more precision
-than can actually be represented in a double. (You could argue that this case
-is actually "losing" data if you don't accept that numbers are stored in
-doubles.)
-
-Here's a log for a couple of the more common JSON libraries that happen to be
-on the author's machine:
-
-Ejson (CouchDB's current parser) at CouchDB sha 168a663b::
-
-    $ ./utils/run -i
-    Erlang R14B04 (erts-5.8.5) [source] [64-bit] [smp:2:2] [rq:2]
-    [async-threads:4] [hipe] [kernel-poll:true]
-
-    Eshell V5.8.5  (abort with ^G)
-    1> ejson:encode(ejson:decode(<<"1.01234567890123456789012345678901234567890">>)).
-    <<"1.0123456789012346135">>
-    2> F = ejson:encode(ejson:decode(<<"1.01234567890123456789012345678901234567890">>)).
-    <<"1.0123456789012346135">>
-    3> ejson:encode(ejson:decode(F)).
-    <<"1.0123456789012346135">>
-
-Node::
-
-    $ node -v
-    v0.6.15
-    $ node
-    JSON.stringify(JSON.parse("1.01234567890123456789012345678901234567890"))
-    '1.0123456789012346'
-    var f = JSON.stringify(JSON.parse("1.01234567890123456789012345678901234567890"))
-    undefined
-    JSON.stringify(JSON.parse(f))
-    '1.0123456789012346'
-
-Python::
-
-    $ python
-    Python 2.7.2 (default, Jun 20 2012, 16:23:33)
-    [GCC 4.2.1 Compatible Apple Clang 4.0 (tags/Apple/clang-418.0.60)] on darwin
-    Type "help", "copyright", "credits" or "license" for more information.
-    import json
-    json.dumps(json.loads("1.01234567890123456789012345678901234567890"))
-    '1.0123456789012346'
-    f = json.dumps(json.loads("1.01234567890123456789012345678901234567890"))
-    json.dumps(json.loads(f))
-    '1.0123456789012346'
-
-Ruby::
-
-    $ irb --version
-    irb 0.9.5(05/04/13)
-    require 'JSON'
-    => true
-    JSON.dump(JSON.load("[1.01234567890123456789012345678901234567890]"))
-    => "[1.01234567890123]"
-    f = JSON.dump(JSON.load("[1.01234567890123456789012345678901234567890]"))
-    => "[1.01234567890123]"
-    JSON.dump(JSON.load(f))
-    => "[1.01234567890123]"
-
-.. note::
-    A small aside on Ruby: its JSON parser requires a top-level object or
-    array, so the value is wrapped in an array here. This does not affect
-    how the number itself is parsed.
-
-Spidermonkey::
-
-    $ js -h 2>&1 | head -n 1
-    JavaScript-C 1.8.5 2011-03-31
-    $ js
-    js> JSON.stringify(JSON.parse("1.01234567890123456789012345678901234567890"))
-    "1.0123456789012346"
-    js> var f = JSON.stringify(JSON.parse("1.01234567890123456789012345678901234567890"))
-    js> JSON.stringify(JSON.parse(f))
-    "1.0123456789012346"
-
-As you can see, they all behave in much the same way, except that Ruby
-appears to lose some precision relative to the other libraries.
-
-The astute observer will notice that ejson (the CouchDB JSON library) reported
-an extra three digits. While it's tempting to think that this is due to some
-internal difference, it's just a more specific case of the 1.1 input as
-described above.
-
-The important point to realize here is that a double can only hold a finite
-number of values. What we're doing here is generating a string that, when
-passed through the "standard" floating point parsing algorithms (i.e.,
-``strtod``), will result in the same bit pattern in memory as we started
-with. Put slightly differently, the bytes in a JSON serialized number are
-chosen such that they refer to a single specific value that a double can
-represent.
-
-Another way to put it: we're mapping from an infinite set onto a finite set.
-An easy way to see this is by reflecting on this::
-
-    1.0 == 1.00 == 1.000 == 1.(infinite zeros)
-
-Obviously a computer can't hold infinitely many bytes, so we have to decimate
-our infinitely sized set down to a finite set that can be represented
-concisely.
-
-The game that other JSON libraries are playing is merely:
-
-"How few characters do I have to use to select this specific value for a
-double?"
-
-And that game has lots and lots of subtle details that are difficult to
-duplicate in C without a significant amount of effort (it took Python over a
-year to get it sorted with their fancy build systems that automatically run on
-a number of different architectures).
-
-Hopefully we've shown that CouchDB is not doing anything "funky" by changing
-input. It's behaving the same as any other common JSON library does; it's
-just not pretty printing its output.
-
-On the other hand, if you are in a position where an IEEE-754 double is not a
-satisfactory data type for your numbers, then the answer, as stated, is to
-not pass your numbers through this representation. In JSON this is
-accomplished by encoding them as a string or by using integer types (although
-integer types can still bite you if you use a platform with a different
-integer representation than normal, i.e., JavaScript).
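-
-A minimal sketch of the string-encoding workaround (the field name here is
-made up): the exact digits survive the round trip because they are never
-converted to a double, and the client can recover full precision with an
-arbitrary-precision type such as Python's ``decimal.Decimal``:
-
```python
import json
from decimal import Decimal

# Store the number as a string so it is never coerced into a double.
doc = {"price": "1.01234567890123456789012345678901234567890"}
stored = json.loads(json.dumps(doc))

# Full precision is recovered client-side from the string.
exact = Decimal(stored["price"])
```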
-
-Further information can be found easily, including the
-`Floating Point Guide`_ and `David Goldberg's Reference`_.
-
-.. _Floating Point Guide: http://floating-point-gui.de/
-.. _David Goldberg's Reference: http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
-
-Also, if anyone is really interested in changing this behavior, we're all ears
-for contributions to `jiffy`_ (which is theoretically going to replace ejson
-when we get around to updating the build system). The places we've looked for
-inspiration are TCL and Python. If you know a decent implementation of this
-float printing algorithm give us a holler.
-
-.. _jiffy: https://github.com/davisp/jiffy
-
-.. _errors:
-
-HTTP Status Codes
-=================
-
-With the interface to CouchDB working through HTTP, error codes and statuses
-are reported using a combination of the HTTP status code number, and
-corresponding data in the body of the response data.
-
-A list of the error codes returned by CouchDB, and generic descriptions of the
-related errors, is provided below. The meaning of different status codes for
-specific request types is provided in the corresponding API call reference.
-
-- ``200 - OK``
-
-  Request completed successfully.
-
-- ``201 - Created``
-
-  Document created successfully.
-
-- ``202 - Accepted``
-
-  Request has been accepted, but the corresponding operation may not have
-  completed. This is used for background operations, such as database
-  compaction.
-
-- ``304 - Not Modified``
-
-  The content requested has not been modified. This is used with the
-  ETag system to identify the version of information returned.
-
-- ``400 - Bad Request``
-
-  Bad request structure. The error can indicate a problem with the request
-  URL, path or headers. A mismatch between the supplied MD5 hash and the
-  content also triggers this error, as it may indicate message corruption.
-
-- ``401 - Unauthorized``
-
-  The item requested was not available using the supplied authorization, or
-  authorization was not supplied.
-
-- ``403 - Forbidden``
-
-  The requested item or operation is forbidden.
-
-- ``404 - Not Found``
-
-  The requested content could not be found. The content will include further
-  information, as a JSON object, if available. The structure will contain two
-  keys, ``error`` and ``reason``. For example:
-
-  .. code-block:: javascript
-
-      {"error":"not_found","reason":"no_db_file"}
-
-- ``405 - Method Not Allowed``
-
-  A request was made using an invalid HTTP request type for the URL requested.
-  For example, you have requested a ``PUT`` when a ``POST`` is required. Errors
-  of this type can also be triggered by invalid URL strings.
-
-- ``406 - Not Acceptable``
-
-  The requested content type is not supported by the server.
-
-- ``409 - Conflict``
-
-  Request resulted in an update conflict.
-
-- ``412 - Precondition Failed``
-
-  The request headers from the client and the capabilities of the server do not
-  match.
-
-- ``413 - Request Entity Too Large``
-
-  A document exceeds the configured :config:option:`couchdb/max_document_size`
-  value or the entire request exceeds the
-  :config:option:`chttpd/max_http_request_size` value.
-
-- ``415 - Unsupported Media Type``
-
-  The content type of the information being requested or submitted is not
-  supported by the server.
-
-- ``416 - Requested Range Not Satisfiable``
-
-  The range specified in the request header cannot be satisfied by the server.
-
-- ``417 - Expectation Failed``
-
-  When sending documents in bulk, the bulk load operation failed.
-
-- ``500 - Internal Server Error``
-
-  The request was invalid, either because the supplied JSON was invalid, or
-  invalid information was supplied as part of the request.
-
-- ``503 - Service Unavailable``
-
-  The request can't be serviced at this time, either because the cluster is overloaded,
-  maintenance is underway, or some other reason.
-  The request may be retried without changes, perhaps in a couple of minutes.
diff --git a/src/api/database/bulk-api.rst b/src/api/database/bulk-api.rst
deleted file mode 100644
index 14e72ba..0000000
--- a/src/api/database/bulk-api.rst
+++ /dev/null
@@ -1,1009 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _api/db/all_docs:
-
-===================
-``/{db}/_all_docs``
-===================
-
-.. http:get:: /{db}/_all_docs
-    :synopsis: Returns a built-in view of all documents in this database
-
-    Executes the built-in `_all_docs` :ref:`view <views>`, returning all of the
-    documents in the database.  With the exception of the URL parameters
-    (described below), this endpoint works identically to any other view. Refer
-    to the :ref:`view endpoint <api/ddoc/view>` documentation for a complete
-    description of the available query parameters and the format of the returned
-    data.
-
-    :param db: Database name
-    :<header Content-Type: :mimetype:`application/json`
-    :>header Content-Type: - :mimetype:`application/json`
-    :code 200: Request completed successfully
-    :code 404: Requested database not found
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /db/_all_docs HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Type: application/json
-        Date: Sat, 10 Aug 2013 16:22:56 GMT
-        ETag: "1W2DJUZFZSZD9K78UFA3GZWB4"
-        Server: CouchDB (Erlang/OTP)
-        Transfer-Encoding: chunked
-
-        {
-            "offset": 0,
-            "rows": [
-                {
-                    "id": "16e458537602f5ef2a710089dffd9453",
-                    "key": "16e458537602f5ef2a710089dffd9453",
-                    "value": {
-                        "rev": "1-967a00dff5e02add41819138abb3284d"
-                    }
-                },
-                {
-                    "id": "a4c51cdfa2069f3e905c431114001aff",
-                    "key": "a4c51cdfa2069f3e905c431114001aff",
-                    "value": {
-                        "rev": "1-967a00dff5e02add41819138abb3284d"
-                    }
-                },
-                {
-                    "id": "a4c51cdfa2069f3e905c4311140034aa",
-                    "key": "a4c51cdfa2069f3e905c4311140034aa",
-                    "value": {
-                        "rev": "5-6182c9c954200ab5e3c6bd5e76a1549f"
-                    }
-                },
-                {
-                    "id": "a4c51cdfa2069f3e905c431114003597",
-                    "key": "a4c51cdfa2069f3e905c431114003597",
-                    "value": {
-                        "rev": "2-7051cbe5c8faecd085a3fa619e6e6337"
-                    }
-                },
-                {
-                    "id": "f4ca7773ddea715afebc4b4b15d4f0b3",
-                    "key": "f4ca7773ddea715afebc4b4b15d4f0b3",
-                    "value": {
-                        "rev": "2-7051cbe5c8faecd085a3fa619e6e6337"
-                    }
-                }
-            ],
-            "total_rows": 5
-        }
-
-.. http:post:: /{db}/_all_docs
-    :synopsis: Returns a built-in view of all documents in this database
-
-    :method:`POST` `_all_docs` functionality supports identical parameters and behavior
-    as specified in the :get:`/{db}/_all_docs` API but allows for the query string
-    parameters to be supplied as keys in a JSON object in the body of the `POST` request.
-
-    **Request**:
-
-    .. code-block:: http
-
-        POST /db/_all_docs HTTP/1.1
-        Accept: application/json
-        Content-Length: 70
-        Content-Type: application/json
-        Host: localhost:5984
-
-        {
-            "keys" : [
-                "Zingylemontart",
-                "Yogurtraita"
-            ]
-        }
-
-    **Response**:
-
-    .. code-block:: javascript
-
-        {
-            "total_rows" : 2666,
-            "rows" : [
-                {
-                    "value" : {
-                        "rev" : "1-a3544d296de19e6f5b932ea77d886942"
-                    },
-                    "id" : "Zingylemontart",
-                    "key" : "Zingylemontart"
-                },
-                {
-                    "value" : {
-                        "rev" : "1-91635098bfe7d40197a1b98d7ee085fc"
-                    },
-                    "id" : "Yogurtraita",
-                    "key" : "Yogurtraita"
-                }
-            ],
-            "offset" : 0
-        }
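-
-As a small usage sketch, the revision for each requested key can be pulled
-out of a response shaped like the one above:
-
```python
import json

# A response body shaped like the example above (rows trimmed to the
# two requested keys).
response_body = """
{
    "total_rows": 2666,
    "offset": 0,
    "rows": [
        {"id": "Zingylemontart", "key": "Zingylemontart",
         "value": {"rev": "1-a3544d296de19e6f5b932ea77d886942"}},
        {"id": "Yogurtraita", "key": "Yogurtraita",
         "value": {"rev": "1-91635098bfe7d40197a1b98d7ee085fc"}}
    ]
}
"""

# Map each key to its current revision.
revs = {row["key"]: row["value"]["rev"]
        for row in json.loads(response_body)["rows"]}
```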
-
-.. _api/db/design_docs:
-
-======================
-``/{db}/_design_docs``
-======================
-
-.. versionadded:: 2.2
-
-.. http:get:: /{db}/_design_docs
-    :synopsis: Returns a built-in view of all design documents in this database
-
-    Returns a JSON structure of all of the design documents in a given
-    database. The information is returned as a JSON structure containing meta
-    information about the return structure, including a list of all design
-    documents and basic contents, consisting of the ID, revision and key. The
-    key is the design document's ``_id``.
-
-    :param db: Database name
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :query boolean conflicts: Includes `conflicts` information in response.
-      Ignored if `include_docs` isn't ``true``. Default is ``false``.
-    :query boolean descending: Return the design documents in descending
-      order by key. Default is ``false``.
-    :query string endkey: Stop returning records when the specified key is
-      reached. *Optional*.
-    :query string end_key: Alias for `endkey` param.
-    :query string endkey_docid: Stop returning records when the specified
-        design document ID is reached. *Optional*.
-    :query string end_key_doc_id: Alias for `endkey_docid` param.
-    :query boolean include_docs: Include the full content of the design
-      documents in the return. Default is ``false``.
-    :query boolean inclusive_end: Specifies whether the specified end key
-      should be included in the result. Default is ``true``.
-    :query string key: Return only design documents that match the specified
-      key. *Optional*.
-    :query string keys: Return only design documents that match the specified
-      keys. *Optional*.
-    :query number limit: Limit the number of the returned design documents to
-      the specified number. *Optional*.
-    :query number skip: Skip this number of records before starting to return
-      the results. Default is ``0``.
-    :query string startkey: Return records starting with the specified key.
-      *Optional*.
-    :query string start_key: Alias for `startkey` param.
-    :query string startkey_docid: Return records starting with the specified
-      design document ID. *Optional*.
-    :query string start_key_doc_id: Alias for `startkey_docid` param.
-    :query boolean update_seq: Response includes an ``update_seq`` value
-      indicating which sequence id of the underlying database the view
-      reflects. Default is ``false``.
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :>header ETag: Response signature
-    :>json number offset: Offset where the design document list started
-    :>json array rows: Array of view row objects. By default the information
-      returned contains only the design document ID and revision.
-    :>json number total_rows: Number of design documents in the database. Note
-      that this is not the number of rows returned in the actual query.
-    :>json number update_seq: Current update sequence for the database
-    :code 200: Request completed successfully
-    :code 404: Requested database not found
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /db/_design_docs HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Type: application/json
-        Date: Sat, 23 Dec 2017 16:22:56 GMT
-        ETag: "1W2DJUZFZSZD9K78UFA3GZWB4"
-        Server: CouchDB (Erlang/OTP)
-        Transfer-Encoding: chunked
-
-        {
-            "offset": 0,
-            "rows": [
-                {
-                    "id": "_design/ddoc01",
-                    "key": "_design/ddoc01",
-                    "value": {
-                        "rev": "1-7407569d54af5bc94c266e70cbf8a180"
-                    }
-                },
-                {
-                    "id": "_design/ddoc02",
-                    "key": "_design/ddoc02",
-                    "value": {
-                        "rev": "1-d942f0ce01647aa0f46518b213b5628e"
-                    }
-                },
-                {
-                    "id": "_design/ddoc03",
-                    "key": "_design/ddoc03",
-                    "value": {
-                        "rev": "1-721fead6e6c8d811a225d5a62d08dfd0"
-                    }
-                },
-                {
-                    "id": "_design/ddoc04",
-                    "key": "_design/ddoc04",
-                    "value": {
-                        "rev": "1-32c76b46ca61351c75a84fbcbceece2f"
-                    }
-                },
-                {
-                    "id": "_design/ddoc05",
-                    "key": "_design/ddoc05",
-                    "value": {
-                        "rev": "1-af856babf9cf746b48ae999645f9541e"
-                    }
-                }
-            ],
-            "total_rows": 5
-        }
-
-.. http:post:: /{db}/_design_docs
-    :synopsis: Returns a built-in view of all design documents in this database
-
-    :method:`POST` `_design_docs` functionality supports identical parameters and behavior
-    as specified in the :get:`/{db}/_design_docs` API but allows for the query string
-    parameters to be supplied as keys in a JSON object in the body of the `POST` request.
-
-    **Request**:
-
-    .. code-block:: http
-
-        POST /db/_design_docs HTTP/1.1
-        Accept: application/json
-        Content-Length: 70
-        Content-Type: application/json
-        Host: localhost:5984
-
-        {
-            "keys" : [
-                "_design/ddoc02",
-                "_design/ddoc05"
-            ]
-        }
-
-    The returned JSON is the standard ``_all_docs`` response structure, but
-    with only the selected keys in the output:
-
-    .. code-block:: javascript
-
-        {
-            "total_rows" : 5,
-            "rows" : [
-                {
-                    "value" : {
-                        "rev" : "1-d942f0ce01647aa0f46518b213b5628e"
-                    },
-                    "id" : "_design/ddoc02",
-                    "key" : "_design/ddoc02"
-                },
-                {
-                    "value" : {
-                        "rev" : "1-af856babf9cf746b48ae999645f9541e"
-                    },
-                    "id" : "_design/ddoc05",
-                    "key" : "_design/ddoc05"
-                }
-            ],
-            "offset" : 0
-        }
-
-Sending multiple queries to a database
-======================================
-
-.. versionadded:: 2.2
-
-.. http:post:: /{db}/_all_docs/queries
-    :synopsis: Returns results for the specified queries
-
-    Executes multiple specified built-in view queries of all documents in this
-    database. This enables you to request multiple queries in a single
-    request, in place of multiple :post:`/{db}/_all_docs` requests.
-
-    :param db: Database name
-
-    :<header Content-Type: - :mimetype:`application/json`
-    :<header Accept: - :mimetype:`application/json`
-
-    :<json queries: An array of query objects with fields for the
-        parameters of each individual view query to be executed. The field names
-        and their meaning are the same as the query parameters of a
-        regular :ref:`_all_docs request <api/db/all_docs>`.
-
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :>header ETag: Response signature
-    :>header Transfer-Encoding: ``chunked``
-
-    :>json array results: An array of result objects - one for each query. Each
-        result object contains the same fields as the response to a regular
-        :ref:`_all_docs request <api/db/all_docs>`.
-
-    :code 200: Request completed successfully
-    :code 400: Invalid request
-    :code 401: Read permission required
-    :code 404: Specified database is missing
-    :code 500: Query execution error
-
-**Request**:
-
-.. code-block:: http
-
-    POST /db/_all_docs/queries HTTP/1.1
-    Content-Type: application/json
-    Accept: application/json
-    Host: localhost:5984
-
-    {
-        "queries": [
-            {
-                "keys": [
-                    "meatballs",
-                    "spaghetti"
-                ]
-            },
-            {
-                "limit": 3,
-                "skip": 2
-            }
-        ]
-    }
-
-**Response**:
-
-.. code-block:: http
-
-    HTTP/1.1 200 OK
-    Cache-Control: must-revalidate
-    Content-Type: application/json
-    Date: Wed, 20 Dec 2017 11:17:07 GMT
-    ETag: "1H8RGBCK3ABY6ACDM7ZSC30QK"
-    Server: CouchDB (Erlang/OTP)
-    Transfer-Encoding: chunked
-
-    {
-        "results" : [
-            {
-                "rows": [
-                    {
-                        "id": "meatballs",
-                        "key": "meatballs",
-                        "value": 1
-                    },
-                    {
-                        "id": "spaghetti",
-                        "key": "spaghetti",
-                        "value": 1
-                    }
-                ],
-                "total_rows": 3
-            },
-            {
-                "offset" : 2,
-                "rows" : [
-                    {
-                        "id" : "Adukiandorangecasserole-microwave",
-                        "key" : "Aduki and orange casserole - microwave",
-                        "value" : [
-                            null,
-                            "Aduki and orange casserole - microwave"
-                        ]
-                    },
-                    {
-                        "id" : "Aioli-garlicmayonnaise",
-                        "key" : "Aioli - garlic mayonnaise",
-                        "value" : [
-                            null,
-                            "Aioli - garlic mayonnaise"
-                        ]
-                    },
-                    {
-                        "id" : "Alabamapeanutchicken",
-                        "key" : "Alabama peanut chicken",
-                        "value" : [
-                            null,
-                            "Alabama peanut chicken"
-                        ]
-                    }
-                ],
-                "total_rows" : 2667
-            }
-        ]
-    }
-
-.. note::
-    Multiple queries are also supported in /db/_local_docs/queries and
-    /db/_design_docs/queries (similar to /db/_all_docs/queries).
-
-.. _api/db/bulk_get:
-
-===================
-``/{db}/_bulk_get``
-===================
-
-.. http:post:: /{db}/_bulk_get
-    :synopsis: Fetches several documents at the given revisions
-
-    This method can be called to query several documents in bulk. It is well
-    suited for fetching specific revisions of documents, as replicators do,
-    or for getting a document's revision history.
-
-    :param db: Database name
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`multipart/related`
-                     - :mimetype:`multipart/mixed`
-    :<header Content-Type: :mimetype:`application/json`
-    :query boolean revs: Give the revisions history
-    :<json array docs: List of document objects, with ``id``, and optionally
-      ``rev`` and ``atts_since``
-    :>header Content-Type: - :mimetype:`application/json`
-    :>json array results: an array of results for each requested document/rev
-      pair. The ``id`` key lists the requested document ID; ``docs`` contains
-      a single-item array of objects, each of which has either an ``error``
-      key and value describing the error, or an ``ok`` key with the requested
-      document as its value, including the additional ``_revisions`` property
-      that lists the parent revisions if ``revs=true``.
-    :code 200: Request completed successfully
-    :code 400: The request provided invalid JSON data or invalid query parameter
-    :code 401: Read permission required
-    :code 404: Invalid database name
-    :code 415: Bad :header:`Content-Type` value
-
-    **Request**:
-
-    .. code-block:: http
-
-        POST /db/_bulk_get HTTP/1.1
-        Accept: application/json
-        Content-Type: application/json
-        Host: localhost:5984
-
-        {
-            "docs": [
-                {
-                    "id": "foo",
-                    "rev": "4-753875d51501a6b1883a9d62b4d33f91"
-                },
-                {
-                    "id": "foo",
-                    "rev": "1-4a7e4ae49c4366eaed8edeaea8f784ad"
-                },
-                {
-                    "id": "bar"
-                },
-                {
-                    "id": "baz"
-                }
-            ]
-        }
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Type: application/json
-        Date: Mon, 19 Mar 2018 15:27:34 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        {
-          "results": [
-            {
-              "id": "foo",
-              "docs": [
-                {
-                  "ok": {
-                    "_id": "foo",
-                    "_rev": "4-753875d51501a6b1883a9d62b4d33f91",
-                    "value": "this is foo",
-                    "_revisions": {
-                      "start": 4,
-                      "ids": [
-                        "753875d51501a6b1883a9d62b4d33f91",
-                        "efc54218773c6acd910e2e97fea2a608",
-                        "2ee767305024673cfb3f5af037cd2729",
-                        "4a7e4ae49c4366eaed8edeaea8f784ad"
-                      ]
-                    }
-                  }
-                }
-              ]
-            },
-            {
-              "id": "foo",
-              "docs": [
-                {
-                  "ok": {
-                    "_id": "foo",
-                    "_rev": "1-4a7e4ae49c4366eaed8edeaea8f784ad",
-                    "value": "this is the first revision of foo",
-                    "_revisions": {
-                      "start": 1,
-                      "ids": [
-                        "4a7e4ae49c4366eaed8edeaea8f784ad"
-                      ]
-                    }
-                  }
-                }
-              ]
-            },
-            {
-              "id": "bar",
-              "docs": [
-                {
-                  "ok": {
-                    "_id": "bar",
-                    "_rev": "2-9b71d36dfdd9b4815388eb91cc8fb61d",
-                    "baz": true,
-                    "_revisions": {
-                      "start": 2,
-                      "ids": [
-                        "9b71d36dfdd9b4815388eb91cc8fb61d",
-                        "309651b95df56d52658650fb64257b97"
-                      ]
-                    }
-                  }
-                }
-              ]
-            },
-            {
-              "id": "baz",
-              "docs": [
-                {
-                  "error": {
-                    "id": "baz",
-                    "rev": "undefined",
-                    "error": "not_found",
-                    "reason": "missing"
-                  }
-                }
-              ]
-            }
-          ]
-        }
-
-    Example response with a conflicted document:
-
-    **Request**:
-
-    .. code-block:: http
-
-        POST /db/_bulk_get HTTP/1.1
-        Accept: application/json
-        Content-Type: application/json
-        Host: localhost:5984
-
-        {
-            "docs": [
-                {
-                    "id": "a"
-                }
-            ]
-        }
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Type: application/json
-        Date: Mon, 19 Mar 2018 15:27:34 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        {
-          "results": [
-            {
-              "id": "a",
-              "docs": [
-                {
-                  "ok": {
-                    "_id": "a",
-                    "_rev": "1-23202479633c2b380f79507a776743d5",
-                    "a": 1
-                  }
-                },
-                {
-                  "ok": {
-                    "_id": "a",
-                    "_rev": "1-967a00dff5e02add41819138abb3284d"
-                  }
-                }
-              ]
-            }
-          ]
-        }
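
The per-document result shape above (each entry in ``docs`` carries either an ``ok`` or an ``error`` member) lends itself to a small client-side helper. A minimal Python sketch, assuming an already-decoded JSON response body; the helper name ``split_bulk_get_results`` is hypothetical, not part of any CouchDB client library:

```python
import json

def split_bulk_get_results(body):
    """Partition a _bulk_get response into found documents and per-doc errors.

    `body` is the decoded JSON response object; each entry in `results`
    holds a `docs` array whose members carry either an "ok" or an "error" key.
    """
    found, errors = [], []
    for result in body["results"]:
        for doc in result["docs"]:
            if "ok" in doc:
                found.append(doc["ok"])
            else:
                errors.append(doc["error"])
    return found, errors

# Sample body with the same shape as the 200 response shown above.
body = json.loads("""
{"results": [
  {"id": "foo", "docs": [{"ok": {"_id": "foo",
      "_rev": "4-753875d51501a6b1883a9d62b4d33f91"}}]},
  {"id": "baz", "docs": [{"error": {"id": "baz", "rev": "undefined",
      "error": "not_found", "reason": "missing"}}]}
]}
""")
found, errors = split_bulk_get_results(body)
```

Missing revisions surface in ``errors`` rather than failing the whole request, mirroring the ``not_found``/``missing`` entry in the example response.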
-
-.. _api/db/bulk_docs:
-
-====================
-``/{db}/_bulk_docs``
-====================
-
-.. http:post:: /{db}/_bulk_docs
-    :synopsis: Inserts or updates multiple documents into the database in
-               a single request
-
-    The bulk document API allows you to create and update multiple documents
-    at the same time within a single request. The basic operation is similar
-    to creating or updating a single document, except that you batch the
-    document structure and information.
-
-    When creating new documents the document ID (``_id``) is optional.
-
-    For updating existing documents, you must provide the document ID, revision
-    information (``_rev``), and new document values.
-
-    To delete documents in a batch, each document must include the document
-    ID, the revision information, and the deletion status field
-    (``_deleted``) set to ``true``.
-
-    :param db: Database name
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :<header Content-Type: :mimetype:`application/json`
-
-    :<json array docs: List of document objects
-    :<json boolean new_edits: If ``false``, prevents the database from
-      assigning new revision IDs to the documents. Default is ``true``.
-      *Optional*
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :>jsonarr string id: Document ID
-    :>jsonarr string rev: New document revision token. Available
-      if the document was saved without errors. *Optional*
-    :>jsonarr string error: Error type. *Optional*
-    :>jsonarr string reason: Error reason. *Optional*
-    :code 201: Document(s) have been created or updated
-    :code 400: The request provided invalid JSON data
-    :code 404: Requested database not found
-
-    **Request**:
-
-    .. code-block:: http
-
-        POST /db/_bulk_docs HTTP/1.1
-        Accept: application/json
-        Content-Length: 109
-        Content-Type: application/json
-        Host: localhost:5984
-
-        {
-            "docs": [
-                {
-                    "_id": "FishStew"
-                },
-                {
-                    "_id": "LambStew",
-                    "_rev": "2-0786321986194c92dd3b57dfbfc741ce",
-                    "_deleted": true
-                }
-            ]
-        }
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 201 Created
-        Cache-Control: must-revalidate
-        Content-Length: 144
-        Content-Type: application/json
-        Date: Mon, 12 Aug 2013 00:15:05 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        [
-            {
-                "ok": true,
-                "id": "FishStew",
-                "rev": "1-967a00dff5e02add41819138abb3284d"
-            },
-            {
-                "ok": true,
-                "id": "LambStew",
-                "rev": "3-f9c62b2169d0999103e9f41949090807"
-            }
-        ]
-
-Inserting Documents in Bulk
-===========================
-
-Each time a document is stored or updated in CouchDB, the internal B-tree
-is updated. Bulk insertion provides efficiency gains in both storage space
-and time, by consolidating many of the updates to intermediate B-tree nodes.
-
-Bulk insertion is not intended as a way to perform ``ACID``-like transactions
-in CouchDB; the only transaction boundary within CouchDB is a single update
-to a single database. The constraints are detailed in
-:ref:`api/db/bulk_docs/semantics`.
-
-To insert documents in bulk into a database you need to supply a JSON
-structure with the array of documents that you want to add to the database.
-You can either include a document ID, or allow the document ID to be
-automatically generated.
-
-For example, the following update inserts three new documents, two with the
-supplied document IDs, and one which will have a document ID generated:
-
-.. code-block:: http
-
-    POST /source/_bulk_docs HTTP/1.1
-    Accept: application/json
-    Content-Length: 323
-    Content-Type: application/json
-    Host: localhost:5984
-
-    {
-        "docs": [
-            {
-                "_id": "FishStew",
-                "servings": 4,
-                "subtitle": "Delicious with freshly baked bread",
-                "title": "FishStew"
-            },
-            {
-                "_id": "LambStew",
-                "servings": 6,
-                "subtitle": "Serve with a whole meal scone topping",
-                "title": "LambStew"
-            },
-            {
-                "servings": 8,
-                "subtitle": "Hand-made dumplings make a great accompaniment",
-                "title": "BeefStew"
-            }
-        ]
-    }
-
-The return type from a bulk insertion will be :statuscode:`201`, with the
-returned structure indicating success or failure on a per-document basis.
-
-The return structure from the example above contains a list of the
-documents created, together with their document IDs and revision IDs:
-
-.. code-block:: http
-
-    HTTP/1.1 201 Created
-    Cache-Control: must-revalidate
-    Content-Length: 215
-    Content-Type: application/json
-    Date: Sat, 26 Oct 2013 00:10:39 GMT
-    Server: CouchDB (Erlang OTP)
-
-    [
-        {
-            "id": "FishStew",
-            "ok": true,
-            "rev": "1-6a466d5dfda05e613ba97bd737829d67"
-        },
-        {
-            "id": "LambStew",
-            "ok": true,
-            "rev": "1-648f1b989d52b8e43f05aa877092cc7c"
-        },
-        {
-            "id": "00a271787f89c0ef2e10e88a0c0003f0",
-            "ok": true,
-            "rev": "1-e4602845fc4c99674f50b1d5a804fdfa"
-        }
-    ]
-
-For details of the semantic content and structure of the returned JSON see
-:ref:`api/db/bulk_docs/semantics`. Conflicts and validation errors when
-updating documents in bulk must be handled separately; see
-:ref:`api/db/bulk_docs/validation`.
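
As a minimal sketch of the request side (Python stdlib only; both helper names are hypothetical, not part of any CouchDB client library), the payload is simply the documents wrapped in a top-level ``docs`` array, and the response array can be keyed back by document ID:

```python
import json

def make_bulk_docs_payload(docs):
    """Wrap documents in the top-level "docs" array that _bulk_docs expects."""
    return json.dumps({"docs": docs})

def index_results_by_id(results):
    """Map each per-document result row back to its document ID."""
    return {row["id"]: row for row in results}

payload = make_bulk_docs_payload([
    {"_id": "FishStew", "servings": 4},    # explicit document ID
    {"servings": 8, "title": "BeefStew"},  # ID left for the server to generate
])

# Shape of the 201 response shown above, as a client would decode it.
by_id = index_results_by_id([
    {"id": "FishStew", "ok": True,
     "rev": "1-6a466d5dfda05e613ba97bd737829d67"},
])
```

Keying the response rows by ID makes it straightforward to match each submitted document with its outcome, which matters because results are reported per document.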
-
-Updating Documents in Bulk
-==========================
-
-The bulk document update procedure is similar to the insertion
-procedure, except that you must specify the document ID and current
-revision for every document in the bulk update JSON string.
-
-For example, you could send the following request:
-
-.. code-block:: http
-
-    POST /recipes/_bulk_docs HTTP/1.1
-    Accept: application/json
-    Content-Length: 464
-    Content-Type: application/json
-    Host: localhost:5984
-
-    {
-        "docs": [
-            {
-                "_id": "FishStew",
-                "_rev": "1-6a466d5dfda05e613ba97bd737829d67",
-                "servings": 4,
-                "subtitle": "Delicious with freshly baked bread",
-                "title": "FishStew"
-            },
-            {
-                "_id": "LambStew",
-                "_rev": "1-648f1b989d52b8e43f05aa877092cc7c",
-                "servings": 6,
-                "subtitle": "Serve with a whole meal scone topping",
-                "title": "LambStew"
-            },
-            {
-                "_id": "BeefStew",
-                "_rev": "1-e4602845fc4c99674f50b1d5a804fdfa",
-                "servings": 8,
-                "subtitle": "Hand-made dumplings make a great accompaniment",
-                "title": "BeefStew"
-            }
-        ]
-    }
-
-The return structure is the JSON of the updated documents, with the new
-revision and ID information:
-
-.. code-block:: http
-
-    HTTP/1.1 201 Created
-    Cache-Control: must-revalidate
-    Content-Length: 215
-    Content-Type: application/json
-    Date: Sat, 26 Oct 2013 00:10:39 GMT
-    Server: CouchDB (Erlang OTP)
-
-    [
-        {
-            "id": "FishStew",
-            "ok": true,
-            "rev": "2-2bff94179917f1dec7cd7f0209066fb8"
-        },
-        {
-            "id": "LambStew",
-            "ok": true,
-            "rev": "2-6a7aae7ac481aa98a2042718d09843c4"
-        },
-        {
-            "id": "BeefStew",
-            "ok": true,
-            "rev": "2-9801936a42f06a16f16c30027980d96f"
-        }
-    ]
-
-You can optionally delete documents during a bulk update by adding the
-``_deleted`` field with a value of ``true`` to each document ID/revision
-combination within the submitted JSON structure.
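
A small Python sketch of preparing such deletions, assuming documents whose current ``_rev`` values are already known (the helper name ``mark_deleted`` is hypothetical):

```python
def mark_deleted(docs):
    """Flag each ID/revision pair for deletion in a bulk update payload."""
    return [{"_id": d["_id"], "_rev": d["_rev"], "_deleted": True}
            for d in docs]

to_remove = mark_deleted([
    {"_id": "LambStew", "_rev": "2-6a7aae7ac481aa98a2042718d09843c4"},
])
```

The resulting list can be mixed freely with creations and updates inside the same ``docs`` array of a single ``_bulk_docs`` request.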
-
-The return type from a bulk update will be :statuscode:`201`, with the
-returned structure indicating success or failure on a per-document basis.
-
-The content and structure of the returned JSON will depend on the transaction
-semantics being used for the bulk update; see :ref:`api/db/bulk_docs/semantics`
-for more information. Conflicts and validation errors when updating documents
-in bulk must be handled separately; see :ref:`api/db/bulk_docs/validation`.
-
-.. _api/db/bulk_docs/semantics:
-
-Bulk Documents Transaction Semantics
-====================================
-
-Bulk document operations are **non-atomic**. This means that CouchDB does not
-guarantee that any individual document included in the bulk update (or insert)
-will be saved when you send the request. The response will contain the list of
-documents successfully inserted or updated during the process. In the event of
-a crash, some of the documents may have been successfully saved, while others
-were lost.
-
-For each document that was updated, the response supplies the new ``_rev``,
-indicating that a new document revision was created. If the update failed,
-you will get an ``error`` of type ``conflict``. For example:
-
-   .. code-block:: javascript
-
-       [
-           {
-               "id" : "FishStew",
-               "error" : "conflict",
-               "reason" : "Document update conflict."
-           },
-           {
-               "id" : "LambStew",
-               "error" : "conflict",
-               "reason" : "Document update conflict."
-           },
-           {
-               "id" : "BeefStew",
-               "error" : "conflict",
-               "reason" : "Document update conflict."
-           }
-       ]
-
-In this case no new revision has been created; you will need to re-submit the
-update, with the correct revision tag, to update the document.
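
A client therefore needs to inspect the response per document. A minimal Python sketch, assuming the decoded response array (the helper name ``conflicted_ids`` is hypothetical), collects the IDs whose current revisions must be fetched before re-submitting:

```python
def conflicted_ids(results):
    """IDs whose update failed with a conflict and need re-submission
    with the current revision."""
    return [row["id"] for row in results if row.get("error") == "conflict"]

results = [
    {"id": "FishStew", "error": "conflict",
     "reason": "Document update conflict."},
    {"id": "LambStew", "ok": True,
     "rev": "2-6a7aae7ac481aa98a2042718d09843c4"},
]
retry = conflicted_ids(results)
```

Successful rows are left alone; only the conflicted IDs go into the next round of fetch-and-resubmit.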
-
-Replication of documents is independent of the type of insert or update.
-The documents and revisions created during a bulk insert or update are
-replicated in the same way as any other document.
-
-.. _api/db/bulk_docs/validation:
-
-Bulk Document Validation and Conflict Errors
-============================================
-
-The JSON returned by the ``_bulk_docs`` operation consists of an array
-of JSON structures, one for each document in the original submission.
-The returned JSON structure should be examined to ensure that all of the
-documents submitted in the original request were successfully added to
-the database.
-
-When a document (or document revision) is not correctly committed to the
-database because of an error, you should check the ``error`` field to
-determine the error type and course of action. Errors will be one of the
-following types:
-
--  **conflict**
-
-   The document as submitted is in conflict. The new revision will not have been
-   created and you will need to re-submit the document to the database.
-
-   Conflict resolution of documents added using the bulk docs interface
-   is identical to the resolution procedures used when resolving
-   conflict errors during replication.
-
--  **forbidden**
-
-   Entries with this error type indicate that the validation routine
-   applied to the document during submission has returned an error.
-
-   For example, if your :ref:`validation routine <vdufun>` includes
-   the following:
-
-   .. code-block:: javascript
-
-       throw({forbidden: 'invalid recipe ingredient'});
-
-   The error response returned will be:
-
-   .. code-block:: http
-
-       HTTP/1.1 201 Created
-       Cache-Control: must-revalidate
-       Content-Length: 80
-       Content-Type: application/json
-       Date: Sat, 26 Oct 2013 00:05:17 GMT
-       Server: CouchDB (Erlang OTP)
-
-       [
-           {
-               "id": "LambStew",
-               "error": "forbidden",
-               "reason": "invalid recipe ingredient"
-           }
-       ]
diff --git a/src/api/database/changes.rst b/src/api/database/changes.rst
deleted file mode 100644
index 6c90bc3..0000000
--- a/src/api/database/changes.rst
+++ /dev/null
@@ -1,760 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _api/db/changes:
-
-================
-``/db/_changes``
-================
-
-.. http:get:: /{db}/_changes
-    :synopsis: Returns changes for the given database
-
-    A sorted list of changes made to documents in the database, in time
-    order of application, can be obtained from the database's ``_changes``
-    resource. Only the most recent change for a given document is guaranteed
-    to be provided; for example, if a document has had fields added and then
-    deleted, an API client checking for changes will not necessarily receive
-    the intermediate state of the document.
-
-    This can be used to listen for updates and modifications to the database
-    for post-processing or synchronization. For practical purposes,
-    a continuously connected ``_changes`` feed is a reasonable approach for
-    generating a real-time log for most applications.
-
-    :param db: Database name
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/event-stream`
-                     - :mimetype:`text/plain`
-    :<header Last-Event-ID: ID of the last events received by the server on a
-        previous connection. Overrides ``since`` query parameter.
-    :query array doc_ids: List of document IDs to filter the changes feed as
-        valid JSON array. Used with :ref:`_doc_ids <changes/filter/doc_ids>`
-        filter. Since `length of URL is limited`_, it is better to use
-        :post:`/{db}/_changes` instead.
-    :query boolean conflicts: Includes `conflicts` information in response.
-        Ignored if `include_docs` isn't ``true``. Default is ``false``.
-    :query boolean descending: Return the change results in descending sequence
-        order (most recent change first). Default is ``false``.
-    :query string feed: - **normal** Specifies :ref:`Normal Polling Mode
-                          <changes/normal>`. All past changes are returned
-                          immediately. *Default.*
-                        - **longpoll** Specifies :ref:`Long Polling Mode
-                          <changes/longpoll>`. Waits until at least one change
-                          has occurred, sends the change, then closes the
-                          connection. Most commonly used in conjunction with
-                          ``since=now``, to wait for the next change.
-                        - **continuous** Sets :ref:`Continuous Mode
-                          <changes/continuous>`. Sends a line of JSON per
-                          event. Keeps the socket open until ``timeout``.
-                        - **eventsource** Sets :ref:`Event Source Mode
-                          <changes/eventsource>`. Works the same as Continuous
-                          Mode, but sends the events in `EventSource
-                          <http://dev.w3.org/html5/eventsource/>`_ format.
-    :query string filter: Reference to a :ref:`filter function <filterfun>`
-        from a design document that will filter whole stream emitting only
-        filtered events. See the section `Change Notifications in the book
-        CouchDB The Definitive Guide`_ for more information.
-    :query number heartbeat: Period in *milliseconds* after which an empty
-        line is sent in the results. Only applicable for :ref:`longpoll
-        <changes/longpoll>`, :ref:`continuous <changes/continuous>`, and
-        :ref:`eventsource <changes/eventsource>` feeds. Overrides any timeout
-        to keep the feed alive indefinitely. Default is ``60000``. May be
-        ``true`` to use default value.
-    :query boolean include_docs: Include the associated document with each
-        result. If there are conflicts, only the winning revision is returned.
-        Default is ``false``.
-    :query boolean attachments: Include the Base64-encoded content of
-        :ref:`attachments <api/doc/attachments>` in the documents that
-        are included if ``include_docs`` is ``true``. Ignored if ``include_docs``
-        isn't ``true``. Default is ``false``.
-    :query boolean att_encoding_info: Include encoding information in attachment
-        stubs if ``include_docs`` is ``true`` and the particular attachment is
-        compressed. Ignored if ``include_docs`` isn't ``true``.
-        Default is ``false``.
-    :query number last-event-id: Alias of `Last-Event-ID` header.
-    :query number limit: Limit number of result rows to the specified value
-        (note that using ``0`` here has the same effect as ``1``).
-    :query since: Start the results from the change immediately after the given
-        update sequence. Can be valid update sequence or ``now`` value.
-        Default is ``0``.
-    :query string style: Specifies how many revisions are returned in
-        the changes array. The default, ``main_only``, will only return
-        the current "winning" revision; ``all_docs`` will return all leaf
-        revisions (including conflicts and deleted former conflicts).
-    :query number timeout: Maximum period in *milliseconds* to wait for a change
-        before the response is sent, even if there are no results.
-        Only applicable for :ref:`longpoll <changes/longpoll>` or
-        :ref:`continuous <changes/continuous>` feeds.
-        Default value is specified by :config:option:`chttpd/changes_timeout`
-        configuration option. Note that ``60000`` value is also the default
-        maximum timeout to prevent undetected dead connections.
-    :query string view: Allows the use of view functions as filters. A
-        document counts as "passed" for the view filter if the map function
-        emits at least one record for it.
-        See :ref:`changes/filter/view` for more info.
-    :query number seq_interval: When fetching changes in a batch, setting the
-        *seq_interval* parameter tells CouchDB to only calculate the update seq
-        with every Nth result returned. By setting **seq_interval=<batch size>**
-        , where ``<batch size>`` is the number of results requested per batch,
-        load can be reduced on the source CouchDB database; computing the seq
-        value across many shards (esp. in highly-sharded databases) is expensive
-        in a heavily loaded CouchDB cluster.
-    :>header Cache-Control: ``no-cache`` if changes feed is
-        :ref:`eventsource <changes/eventsource>`
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/event-stream`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :>header ETag: Response hash if changes feed is `normal`
-    :>header Transfer-Encoding: ``chunked``
-    :>json json last_seq: Last change update sequence
-    :>json number pending: Count of remaining items in the feed
-    :>json array results: Changes made to a database
-    :code 200: Request completed successfully
-    :code 400: Bad request
-
-    The ``results`` field of database changes:
-
-    :json array changes: List of the document's leaf revisions, each with
-        a single field ``rev``.
-    :json string id: Document ID.
-    :json json seq: Update sequence.
-    :json bool deleted: ``true`` if the document is deleted.
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /db/_changes?style=all_docs HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Type: application/json
-        Date: Mon, 12 Aug 2013 00:54:58 GMT
-        ETag: "6ASLEKEMSRABT0O5XY9UPO9Z"
-        Server: CouchDB (Erlang/OTP)
-        Transfer-Encoding: chunked
-
-        {
-            "last_seq": "5-g1AAAAIreJyVkEsKwjAURZ-toI5cgq5A0sQ0OrI70XyppcaRY92J7kR3ojupaSPUUgotgRd4yTlwbw4A0zRUMLdnpaMkwmyF3Ily9xBwEIuiKLI05KOTW0wkV4rruP29UyGWbordzwKVxWBNOGMKZhertDlarbr5pOT3DV4gudUC9-MPJX9tpEAYx4TQASns2E24ucuJ7rXJSL1BbEgf3vTwpmedCZkYa7Pulck7Xt7x_usFU2aIHOD4eEfVTVA5KMGUkqhNZV-8_o5i",
-            "pending": 0,
-            "results": [
-                {
-                    "changes": [
-                        {
-                            "rev": "2-7051cbe5c8faecd085a3fa619e6e6337"
-                        }
-                    ],
-                    "id": "6478c2ae800dfc387396d14e1fc39626",
-                    "seq": "3-g1AAAAG3eJzLYWBg4MhgTmHgz8tPSTV0MDQy1zMAQsMcoARTIkOS_P___7MSGXAqSVIAkkn2IFUZzIkMuUAee5pRqnGiuXkKA2dpXkpqWmZeagpu_Q4g_fGEbEkAqaqH2sIItsXAyMjM2NgUUwdOU_JYgCRDA5ACGjQfn30QlQsgKvcjfGaQZmaUmmZClM8gZhyAmHGfsG0PICrBPmQC22ZqbGRqamyIqSsLAAArcXo"
-                },
-                {
-                    "changes": [
-                        {
-                            "rev": "3-7379b9e515b161226c6559d90c4dc49f"
-                        }
-                    ],
-                    "deleted": true,
-                    "id": "5bbc9ca465f1b0fcd62362168a7c8831",
-                    "seq": "4-g1AAAAHXeJzLYWBg4MhgTmHgz8tPSTV0MDQy1zMAQsMcoARTIkOS_P___7MymBMZc4EC7MmJKSmJqWaYynEakaQAJJPsoaYwgE1JM0o1TjQ3T2HgLM1LSU3LzEtNwa3fAaQ_HqQ_kQG3qgSQqnoUtxoYGZkZG5uS4NY8FiDJ0ACkgAbNx2cfROUCiMr9CJ8ZpJkZpaaZEOUziBkHIGbcJ2zbA4hKsA-ZwLaZGhuZmhobYurKAgCz33kh"
-                },
-                {
-                    "changes": [
-                        {
-                            "rev": "6-460637e73a6288cb24d532bf91f32969"
-                        },
-                        {
-                            "rev": "5-eeaa298781f60b7bcae0c91bdedd1b87"
-                        }
-                    ],
-                    "id": "729eb57437745e506b333068fff665ae",
-                    "seq": "5-g1AAAAIReJyVkE0OgjAQRkcwUVceQU9g-mOpruQm2tI2SLCuXOtN9CZ6E70JFmpCCCFCmkyTdt6bfJMDwDQNFcztWWkcY8JXyB2cu49AgFwURZGloRid3MMkEUoJHbXbOxVy6arc_SxQWQzRVHCuYHaxSpuj1aqbj0t-3-AlSrZakn78oeSvjRSIkIhSNiCFHbsKN3c50b02mURvEB-yD296eNOzzoRMRLRZ98rkHS_veGcC_nR-fGe1gaCaxihhjOI2lX0BhniHaA"
-                }
-            ]
-        }
-
-.. _length of URL is limited: http://stackoverflow.com/a/417184/965635
-
-.. versionchanged:: 0.11.0 added ``include_docs`` parameter
-.. versionchanged:: 1.2.0 added ``view`` parameter and special value `_view`
-   for ``filter`` one
-.. versionchanged:: 1.3.0 ``since`` parameter could take `now` value to start
-   listen changes since current seq number.
-.. versionchanged:: 1.3.0 ``eventsource`` feed type added.
-.. versionchanged:: 1.4.0 Support ``Last-Event-ID`` header.
-.. versionchanged:: 1.6.0 added ``attachments`` and ``att_encoding_info``
-   parameters
-.. versionchanged:: 2.0.0 update sequences can be any valid json object,
-   added ``seq_interval``
-
-.. note::
-    If the specified replicas of the shards in any given since value are
-    unavailable, alternative replicas are selected, and the last known
-    checkpoint between them is used. If this happens, you might see changes
-    again that you have previously seen. Therefore, an application making use
-    of the ``_changes`` feed should be ‘idempotent’, that is, able to receive the
-    same data multiple times, safely.
-
-.. note::
-    Cloudant Sync and PouchDB already optimize the replication process by
-    setting ``seq_interval`` parameter to the number of results expected per
-    batch. This parameter increases throughput by reducing latency between
-    sequential requests in bulk document transfers. This has resulted in up to
-    a 20% replication performance improvement in highly-sharded databases.
-
-.. warning::
-    Using the ``attachments`` parameter to include attachments in the changes
-    feed is not recommended for large attachment sizes. Also note that the
-    Base64-encoding that is used leads to a 33% overhead (i.e. one third) in
-    transfer size for attachments.
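
The one-third figure follows directly from how Base64 works: every 3 input bytes become 4 output characters. A quick stdlib check in Python (the sample data is arbitrary):

```python
import base64

raw = bytes(range(256)) * 12          # 3072 bytes of sample "attachment" data
encoded = base64.b64encode(raw)
overhead = len(encoded) / len(raw)    # 4 output chars per 3 input bytes
```

``overhead`` comes out to 4/3, i.e. the encoded form is one third larger than the raw bytes, before any HTTP framing is counted.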
-
-.. warning::
-    The results returned by `_changes` are partially ordered. In other words,
-    the order is not guaranteed to be preserved for multiple calls.
-
-.. http:post:: /{db}/_changes
-    :synopsis: Returns changes for the given database for certain document IDs
-
-    Requests the database changes feed in the same way as
-    :get:`/{db}/_changes` does, but is widely used with the
-    ``?filter=_doc_ids`` query parameter and allows one to pass a larger
-    list of document IDs to filter on.
-
-    **Request**:
-
-    .. code-block:: http
-
-        POST /recipes/_changes?filter=_doc_ids HTTP/1.1
-        Accept: application/json
-        Content-Length: 40
-        Content-Type: application/json
-        Host: localhost:5984
-
-        {
-            "doc_ids": [
-                "SpaghettiWithMeatballs"
-            ]
-        }
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Type: application/json
-        Date: Sat, 28 Sep 2013 07:23:09 GMT
-        ETag: "ARIHFWL3I7PIS0SPVTFU6TLR2"
-        Server: CouchDB (Erlang OTP)
-        Transfer-Encoding: chunked
-
-        {
-            "last_seq": "5-g1AAAAIreJyVkEsKwjAURZ-toI5cgq5A0sQ0OrI70XyppcaRY92J7kR3ojupaSPUUgotgRd4yTlwbw4A0zRUMLdnpaMkwmyF3Ily9xBwEIuiKLI05KOTW0wkV4rruP29UyGWbordzwKVxWBNOGMKZhertDlarbr5pOT3DV4gudUC9-MPJX9tpEAYx4TQASns2E24ucuJ7rXJSL1BbEgf3vTwpmedCZkYa7Pulck7Xt7x_usFU2aIHOD4eEfVTVA5KMGUkqhNZV8_o5i",
-            "pending": 0,
-            "results": [
-                {
-                    "changes": [
-                        {
-                            "rev": "13-bcb9d6388b60fd1e960d9ec4e8e3f29e"
-                        }
-                    ],
-                    "id": "SpaghettiWithMeatballs",
-                    "seq":  "5-g1AAAAIReJyVkE0OgjAQRkcwUVceQU9g-mOpruQm2tI2SLCuXOtN9CZ6E70JFmpCCCFCmkyTdt6bfJMDwDQNFcztWWkcY8JXyB2cu49AgFwURZGloRid3MMkEUoJHbXbOxVy6arc_SxQWQzRVHCuYHaxSpuj1aqbj0t-3-AlSrZakn78oeSvjRSIkIhSNiCFHbsKN3c50b02mURvEB-yD296eNOzzoRMRLRZ98rkHS_veGcC_nR-fGe1gaCaxihhjOI2lX0BhniHaA"
-                }
-            ]
-        }
-
-.. _changes:
-
-Changes Feeds
-=============
-
-.. _changes/normal:
-
-Polling
--------
-
-By default all changes are immediately returned within the JSON body::
-
-    GET /somedatabase/_changes HTTP/1.1
-
-.. code-block:: javascript
-
-    {"results":[
-    {"seq":"1-g1AAAAF9eJzLYWBg4MhgTmHgz8tPSTV0MDQy1zMAQsMcoARTIkOS_P__7MSGXAqSVIAkkn2IFUZzIkMuUAee5pRqnGiuXkKA2dpXkpqWmZeagpu_Q4g_fGEbEkAqaqH2sIItsXAyMjM2NgUUwdOU_JYgCRDA5ACGjQfn30QlQsgKvcTVnkAovI-YZUPICpBvs0CAN1eY_c","id":"fresh","changes":[{"rev":"1-967a00dff5e02add41819138abb3284d"}]},
-    {"seq":"3-g1AAAAG3eJzLYWBg4MhgTmHgz8tPSTV0MDQy1zMAQsMcoARTIkOS_P___7MSGXAqSVIAkkn2IFUZzIkMuUAee5pRqnGiuXkKA2dpXkpqWmZeagpu_Q4g_fGEbEkAqaqH2sIItsXAyMjM2NgUUwdOU_JYgCRDA5ACGjQfn30QlQsgKvcjfGaQZmaUmmZClM8gZhyAmHGfsG0PICrBPmQC22ZqbGRqamyIqSsLAAArcXo","id":"updated","changes":[{"rev":"2-7051cbe5c8faecd085a3fa619e6e6337"}]},
-    {"seq":"5-g1AAAAIReJyVkE0OgjAQRkcwUVceQU9g-mOpruQm2tI2SLCuXOtN9CZ6E70JFmpCCCFCmkyTdt6bfJMDwDQNFcztWWkcY8JXyB2cu49AgFwURZGloRid3MMkEUoJHbXbOxVy6arc_SxQWQzRVHCuYHaxSpuj1aqbj0t-3-AlSrZakn78oeSvjRSIkIhSNiCFHbsKN3c50b02mURvEB-yD296eNOzzoRMRLRZ98rkHS_veGcC_nR-fGe1gaCaxihhjOI2lX0BhniHaA","id":"deleted","changes":[{"rev":"2-eec205a9d413992850a6e32678485900"}],"deleted":true}
-    ],
-    "last_seq":"5-g1AAAAIreJyVkEsKwjAURZ-toI5cgq5A0sQ0OrI70XyppcaRY92J7kR3ojupaSPUUgotgRd4yTlwbw4A0zRUMLdnpaMkwmyF3Ily9xBwEIuiKLI05KOTW0wkV4rruP29UyGWbordzwKVxWBNOGMKZhertDlarbr5pOT3DV4gudUC9-MPJX9tpEAYx4TQASns2E24ucuJ7rXJSL1BbEgf3vTwpmedCZkYa7Pulck7Xt7x_usFU2aIHOD4eEfVTVA5KMGUkqhNZV-8_o5i",
-    "pending": 0}
-
-``results`` is the list of changes in sequential order. New and changed
-documents only differ in the value of the rev; deleted documents include the
-``"deleted": true`` attribute. (In the ``style=all_docs`` mode, deleted applies
-only to the current/winning revision. The other revisions listed might be
-deleted even if there is no deleted property; you have to ``GET`` them
-individually to make sure.)
-
-``last_seq`` is the update sequence of the last update returned (equivalent
-to the last item in the results).
-
-Sending a ``since`` param in the query string skips all changes up to and
-including the given update sequence:
-
-.. code-block:: http
-
-    GET /somedatabase/_changes?since=4-g1AAAAHXeJzLYWBg4MhgTmHgz8tPSTV0MDQy1zMAQsMcoARTIkOS_P___7MymBMZc4EC7MmJKSmJqWaYynEakaQAJJPsoaYwgE1JM0o1TjQ3T2HgLM1LSU3LzEtNwa3fAaQ_HqQ_kQG3qgSQqnoUtxoYGZkZG5uS4NY8FiDJ0ACkgAbNx2cfROUCiMr9CJ8ZpJkZpaaZEOUziBkHIGbcJ2zbA4hKsA-ZwLaZGhuZmhobYurKAgCz33kh HTTP/1.1
-
-The return structure for ``normal`` and ``longpoll`` modes is a JSON
-array of changes objects, and the last update sequence.
-
-In the return format for ``continuous`` mode, the server sends a ``CRLF``
-(carriage-return, linefeed) delimited line for each change. Each line
-contains the `JSON object` described above.
-
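Each line of the one-object-per-line layouts shown in this section can be handled by a small client-side parser. A minimal sketch; the helper name and the shortened sequence strings are illustrative, not part of CouchDB:

```javascript
// Sketch: split a line-delimited changes feed into change objects.
// CouchDB only defines the wire format: one JSON object per
// CRLF-delimited line, with blank heartbeat lines in between.
function parseChangesLines(buffer) {
  return buffer
    .split(/\r?\n/)
    .filter((line) => line.trim() !== "") // drop heartbeat newlines
    .map((line) => JSON.parse(line));
}

const feed =
  '{"seq":"1-abc","id":"fresh","changes":[{"rev":"1-967a"}]}\r\n' +
  "\r\n" + // heartbeat
  '{"seq":"3-def","id":"deleted","changes":[{"rev":"2-eec2"}],"deleted":true}\r\n';

const changes = parseChangesLines(feed);
console.log(changes.length); // 2
console.log(changes[1].deleted); // true
```

The same helper works for `continuous` mode responses, where each line arrives separately over a long-lived connection.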
-You can also request the full contents of each document change (instead
-of just the change notification) by using the ``include_docs`` parameter.
-
-.. code-block:: javascript
-
-    {
-        "last_seq": "5-g1AAAAIreJyVkEsKwjAURZ-toI5cgq5A0sQ0OrI70XyppcaRY92J7kR3ojupaSPUUgotgRd4yTlwbw4A0zRUMLdnpaMkwmyF3Ily9xBwEIuiKLI05KOTW0wkV4rruP29UyGWbordzwKVxWBNOGMKZhertDlarbr5pOT3DV4gudUC9-MPJX9tpEAYx4TQASns2E24ucuJ7rXJSL1BbEgf3vTwpmedCZkYa7Pulck7Xt7x_usFU2aIHOD4eEfVTVA5KMGUkqhNZV-8_o5i",
-        "pending": 0,
-        "results": [
-            {
-                "changes": [
-                    {
-                        "rev": "2-eec205a9d413992850a6e32678485900"
-                    }
-                ],
-                "deleted": true,
-                "id": "deleted",
-                "seq": "5-g1AAAAIReJyVkE0OgjAQRkcwUVceQU9g-mOpruQm2tI2SLCuXOtN9CZ6E70JFmpCCCFCmkyTdt6bfJMDwDQNFcztWWkcY8JXyB2cu49AgFwURZGloRid3MMkEUoJHbXbOxVy6arc_SxQWQzRVHCuYHaxSpuj1aqbj0t-3-AlSrZakn78oeSvjRSIkIhSNiCFHbsKN3c50b02mURvEByD296eNOzzoRMRLRZ98rkHS_veGcC_nR-fGe1gaCaxihhjOI2lX0BhniHaA"
-            }
-        ]
-    }
-
-.. _changes/longpoll:
-
-Long Polling
-------------
-
-The `longpoll` feed, probably most applicable for a browser, is a more
-efficient form of polling that waits for a change to occur before the response
-is sent. `longpoll` avoids the need to frequently poll CouchDB to discover
-nothing has changed!
-
-The request to the server will remain open until a change is made on the
-database and is subsequently transferred, and then the connection will close.
-This is low load for both server and client.
-
-The response is basically the same JSON as is sent for the `normal` feed.
-
-Because the wait for a change can be significant you can set a
-timeout before the connection is automatically closed (the
-``timeout`` argument). You can also set a heartbeat interval (using
-the ``heartbeat`` query argument), which sends a newline to keep the
-connection active.
-
-Keep in mind that ``heartbeat`` means "Send a linefeed every ``x`` ms
-if no change arrives, and hold the connection indefinitely" while ``timeout``
-means "Hold this connection open for ``x`` ms, and if no change arrives in that
-time, close the socket."  ``heartbeat`` overrides ``timeout``.
-
-.. _changes/continuous:
-
-Continuous
-----------
-
-Continually polling the CouchDB server is not ideal - setting up new HTTP
-connections just to tell the client that nothing happened puts unnecessary
-strain on CouchDB.
-
-A continuous feed stays open and connected to the database until explicitly
-closed and changes are sent to the client as they happen, i.e. in near
-real-time.
-
-As with the `longpoll` feed type you can set both the timeout and heartbeat
-intervals to ensure that the connection is kept open for new changes and
-updates.
-
-Keep in mind that ``heartbeat`` means "Send a linefeed every ``x`` ms
-if no change arrives, and hold the connection indefinitely" while ``timeout``
-means "Hold this connection open for ``x`` ms, and if no change arrives in that
-time, close the socket."  ``heartbeat`` overrides ``timeout``.
-
-The continuous feed's response is a little different than the other feed types
-to simplify the job of the client - each line of the response is either empty
-or a JSON object representing a single change, as found in the normal feed's
-results.
-
-If `limit` has been specified, the feed will end with a `{ last_seq }` object.
-
-.. code-block:: http
-
-    GET /somedatabase/_changes?feed=continuous HTTP/1.1
-
-.. code-block:: javascript
-
-    {"seq":"1-g1AAAAF9eJzLYWBg4MhgTmHgz8tPSTV0MDQy1zMAQsMcoARTIkOS_P___7MSGXAqSVIAkkn2IFUZzIkMuUAee5pRqnGiuXkKA2dpXkpqWmZeagpu_Q4g_fGEbEkAqaqH2sIItsXAyMjM2NgUUwdOU_JYgCRDA5ACGjQfn30QlQsgKvcTVnkAovI-YZUPICpBvs0CAN1eY_c","id":"fresh","changes":[{"rev":"1-967a00dff5e02add41819138abb3284d"}]}
-    {"seq":"5-g1AAAAHxeJzLYWBg4MhgTmHgz8tPSTV0MDQy1zMAQsMcoARTIkOS_P___7MymBOZcoEC7MmJKSmJqWaYynEakaQAJJPsoaYwgE1JM0o1TjQ3T2HgLM1LSU3LzEtNwa3fAaQ_HkV_kkGyZWqSEXH6E0D666H6GcH6DYyMzIyNTUnwRR4LkGRoAFJAg-ZjwiMtOdXCwJyU8ICYtABi0n6EnwzSzIxS00yI8hPEjAMQM-5nJTIQUPkAovI_UGUWAA0SgOI","id":"updated","changes":[{"rev":"2-7051cbe5c8faecd085a3fa619e6e6337"}]}
-    {"seq":"3-g1AAAAHReJzLYWBg4MhgTmHgz8tPSTV0MDQy1zMAQsMcoARTIkOS_P___7MymBOZcoEC7MmJKSmJqWaYynEakaQAJJPsoaYwgE1JM0o1TjQ3T2HgLM1LSU3LzEtNwa3fAaQ_HkV_kkGyZWqSEXH6E0D660H6ExlwqspjAZIMDUAKqHA-yCZGiEuTUy0MzEnxL8SkBRCT9iPcbJBmZpSaZkKUmyFmHICYcZ-wux9AVIJ8mAUABgp6XQ","id":"deleted","changes":[{"rev":"2-eec205a9d413992850a6e32678485900"}],"deleted":true}
-    ... tum tee tum ...
-    {"seq":"6-g1AAAAIreJyVkEsKwjAURWMrqCOXoCuQ9MU0OrI70XyppcaRY92J7kR3ojupaVNopRQsgRd4yTlwb44QmqahQnN7VjpKImAr7E6Uu4eAI7EoiiJLQx6c3GIiuVJcx93vvQqxdFPsaguqLAY04YwpNLtYpc3RatXPJyW__-EFllst4D_-UPLXmh9VPAaICaEDUtixm-jmLie6N30YqTeYDenDmx7e9GwyYRODNuu_MnnHyzverV6AMkPkAMfHO1rdUAKUkqhLZV-_0o5j","id":"updated","changes":[{"rev":"3-825cb35de44c433bfb2df415563a19de"}]}
-
-Obviously, `... tum tee tum ...` does not appear in the actual response, but
-represents a long pause before the change with seq 6 occurred.
-
-.. _Change Notifications in the book CouchDB The Definitive Guide: http://guide.couchdb.org/draft/notifications.html
-
-.. _changes/eventsource:
-
-Event Source
-------------
-
-The `eventsource` feed provides push notifications that can be consumed in
-the form of DOM events in the browser. Refer to the `W3C eventsource
-specification`_ for further details. CouchDB also honours the ``Last-Event-ID``
-parameter.
-
-.. code-block:: http
-
-    GET /somedatabase/_changes?feed=eventsource HTTP/1.1
-
-.. code-block:: javascript
-
-    // define the event handling function
-    if (window.EventSource) {
-
-        var source = new EventSource("/somedatabase/_changes?feed=eventsource");
-        source.onerror = function(e) {
-            alert('EventSource failed.');
-        };
-
-        var results = [];
-        var sourceListener = function(e) {
-            var data = JSON.parse(e.data);
-            results.push(data);
-        };
-
-        // start listening for events
-        source.addEventListener('message', sourceListener, false);
-
-        // stop listening for events
-        source.removeEventListener('message', sourceListener, false);
-
-    }
-
-If you set a heartbeat interval (using the ``heartbeat`` query argument),
-CouchDB will send a ``heartbeat`` event that you can subscribe to with:
-
-.. code-block:: javascript
-
-    source.addEventListener('heartbeat', function () {}, false);
-
-This can be monitored by the client application to restart the EventSource
-connection if needed (i.e. if the TCP connection gets stuck in a half-open
-state).
-
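One way to detect such a stuck connection, assuming heartbeats are enabled: record when the last event or heartbeat arrived, and treat prolonged silence as a dead socket. The helper below is an illustrative sketch, not a CouchDB or EventSource API:

```javascript
// Sketch: decide whether an EventSource connection looks dead.
// Heartbeats should arrive roughly every `heartbeatMs`; after
// `graceFactor` missed intervals we treat the socket as half-open
// and the client should recreate the EventSource.
function isConnectionStale(lastEventMs, nowMs, heartbeatMs, graceFactor = 3) {
  return nowMs - lastEventMs > heartbeatMs * graceFactor;
}

// With a 10s heartbeat, 25s of silence is still within tolerance...
console.log(isConnectionStale(0, 25000, 10000)); // false
// ...but 35s of silence means we should reconnect.
console.log(isConnectionStale(0, 35000, 10000)); // true
```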
-.. note::
-    EventSource connections are subject to cross-origin resource sharing
-    restrictions. You might need to configure :ref:`CORS support
-    <cors>` to get the EventSource to work in your application.
-
-.. _W3C eventsource specification: http://www.w3.org/TR/eventsource/
-
-.. _changes/filter:
-
-Filtering
-=========
-
-You can filter the contents of the changes feed in a number of ways. The
-most basic way is to specify one or more document IDs to the query. This
-causes the returned structure value to only contain changes for the
-specified IDs. Note that the value of this query argument should be a
-JSON formatted array.
-
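For example, to build a ``_changes`` URL restricted to specific IDs with the built-in ``_doc_ids`` filter (described below), the array must be JSON-encoded and URL-escaped. A sketch with an illustrative helper:

```javascript
// Sketch: build a _changes URL filtered to specific document IDs.
// The endpoint and parameters are CouchDB's; the helper name is ours.
function changesByDocIds(db, docIds) {
  const ids = encodeURIComponent(JSON.stringify(docIds));
  return `/${db}/_changes?filter=_doc_ids&doc_ids=${ids}`;
}

console.log(changesByDocIds("somedatabase", ["FishStew"]));
// /somedatabase/_changes?filter=_doc_ids&doc_ids=%5B%22FishStew%22%5D
```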
-You can also filter the ``_changes`` feed by defining a filter function
-within a design document. The specification for the filter is the same
-as for replication filters. You specify the name of the filter function
-to the ``filter`` parameter, specifying the design document name and
-:ref:`filter name <filterfun>`. For example:
-
-.. code-block:: http
-
-    GET /db/_changes?filter=design_doc/filtername HTTP/1.1
-
-Additionally, a couple of built-in filters are available and described
-below.
-
-.. _changes/filter/doc_ids:
-
-_doc_ids
---------
-
-This filter accepts only changes for documents whose ID is specified in the
-``doc_ids`` query parameter or in the payload's ``doc_ids`` array. See
-:post:`/{db}/_changes` for an example.
-
-.. _changes/filter/selector:
-
-_selector
----------
-
-.. versionadded:: 2.0
-
-This filter accepts only changes for documents which match a specified
-selector, defined using the same :ref:`selector
-syntax <find/selectors>` used for :ref:`_find <api/db/_find>`.
-
-This is significantly more efficient than using a JavaScript filter
-function and is the recommended option if filtering on document attributes only.
-
-Note that, unlike JavaScript filters, selectors do not have access to the
-request object.
-
-**Request**:
-
-.. code-block:: http
-
-    POST /recipes/_changes?filter=_selector HTTP/1.1
-    Content-Type: application/json
-    Host: localhost:5984
-
-    {
-        "selector": { "_id": { "$regex": "^_design/" } }
-    }
-
-**Response**:
-
-.. code-block:: http
-
-    HTTP/1.1 200 OK
-    Cache-Control: must-revalidate
-    Content-Type: application/json
-    Date: Tue, 06 Sep 2016 20:03:23 GMT
-    Etag: "1H8RGBCK3ABY6ACDM7ZSC30QK"
-    Server: CouchDB (Erlang OTP/18)
-    Transfer-Encoding: chunked
-
-    {
-        "last_seq": "11-g1AAAAIreJyVkEEKwjAQRUOrqCuPoCeQZGIaXdmbaNIk1FLjyrXeRG-iN9Gb1LQRaimFlsAEJnkP_s8RQtM0VGhuz0qTmABfYXdI7h4CgeSiKIosDUVwcotJIpQSOmp_71TIpZty97OgymJAU8G5QrOLVdocrVbdfFzy-wYvcbLVEvrxh5K_NlJggIhSNiCFHbmJbu5yonttMoneYD6kD296eNOzzoRNBNqse2Xyjpd3vP96AcYNTQY4Pt5RdTOuHIwCY5S0qewLwY6OaA",
-        "pending": 0,
-        "results": [
-            {
-                "changes": [
-                    {
-                        "rev": "10-304cae84fd862832ea9814f02920d4b2"
-                    }
-                ],
-                "id": "_design/ingredients",
-                "seq": "8-g1AAAAHxeJzLYWBg4MhgTmHgz8tPSTV0MDQy1zMAQsMcoARTIkOS_P___7MymBOZcoEC7MmJKSmJqWaYynEakaQAJJPsoaYwgE1JM0o1TjQ3T2HgLM1LSU3LzEtNwa3fAaQ_HkV_kkGyZWqSEXH6E0D666H6GcH6DYyMzIyNTUnwRR4LkGRoAFJAg-ZnJTIQULkAonI_ws0GaWZGqWkmRLkZYsYBiBn3Cdv2AKIS7ENWsG2mxkampsaGmLqyAOYpgEo"
-            },
-            {
-                "changes": [
-                    {
-                        "rev": "123-6f7c1b7c97a9e4f0d22bdf130e8fd817"
-                    }
-                ],
-                "deleted": true,
-                "id": "_design/cookbook",
-                "seq": "9-g1AAAAHxeJzLYWBg4MhgTmHgz8tPSTV0MDQy1zMAQsMcoARTIkOS_P___7MymBOZcoEC7MmJKSmJqWaYynEakaQAJJPsoaYwgE1JM0o1TjQ3T2HgLM1LSU3LzEtNwa3fAaQ_HkV_kkGyZWqSEXH6E0D661F8YWBkZGZsbEqCL_JYgCRDA5ACGjQ_K5GBgMoFEJX7EW42SDMzSk0zIcrNEDMOQMy4T9i2BxCVYB-ygm0zNTYyNTU2xNSVBQDnK4BL"
-            },
-            {
-                "changes": [
-                    {
-                        "rev": "6-5b8a52c22580e922e792047cff3618f3"
-                    }
-                ],
-                "deleted": true,
-                "id": "_design/meta",
-                "seq": "11-g1AAAAIReJyVkE0OgjAQRiegUVceQU9g-mOpruQm2tI2SLCuXOtN9CZ6E70JFmpCCCFCmkyTdt6bfJMDwDQNFcztWWkcY8JXyB2cu49AgFwURZGloQhO7mGSCKWEjtrtnQq5dFXufhaoLIZoKjhXMLtYpc3RatXNxyW_b_ASJVstST_-UPLXRgpESEQpG5DCjlyFm7uc6F6bTKI3iA_Zhzc9vOlZZ0ImItqse2Xyjpd3vDMBfzo_vrPawLiaxihhjOI2lX0BirqHbg"
-            }
-        ]
-    }
-
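For intuition, the ``$regex`` selector used in the example above can be mirrored client-side. CouchDB evaluates selectors on the server, so this mirror is purely illustrative:

```javascript
// Sketch: client-side equivalent of the selector
//   { "_id": { "$regex": "^_design/" } }
// used in the filter=_selector example; illustrative only.
function matchesSelector(doc) {
  return /^_design\//.test(doc._id);
}

console.log(matchesSelector({ _id: "_design/ingredients" })); // true
console.log(matchesSelector({ _id: "SpaghettiWithMeatballs" })); // false
```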
-.. _changes/filter/selector/missing:
-
-Missing selector
-################
-
-If the selector object is missing from the request body,
-the error message is similar to the following example:
-
-.. code-block:: json
-
-   {
-      "error": "bad request",
-      "reason": "Selector must be specified in POST payload"
-   }
-
-.. _changes/filter/selector/invalidjson:
-
-Not a valid JSON object
-#######################
-
-If the selector object is not a well-formed JSON object,
-the error message is similar to the following example:
-
-.. code-block:: json
-
-   {
-      "error": "bad request",
-      "reason": "Selector error: expected a JSON object"
-   }
-
-.. _changes/filter/selector/invalidselector:
-
-Not a valid selector
-####################
-
-If the selector object does not contain a valid selection expression,
-the error message is similar to the following example:
-
-.. code-block:: json
-
-   {
-      "error": "bad request",
-      "reason": "Selector error: expected a JSON object"
-   }
-
-.. _changes/filter/design:
-
-_design
--------
-
-The ``_design`` filter accepts only changes for any design document within the
-requested database.
-
-**Request**:
-
-.. code-block:: http
-
-    GET /recipes/_changes?filter=_design HTTP/1.1
-    Accept: application/json
-    Host: localhost:5984
-
-**Response**:
-
-.. code-block:: http
-
-    HTTP/1.1 200 OK
-    Cache-Control: must-revalidate
-    Content-Type: application/json
-    Date: Tue, 06 Sep 2016 12:55:12 GMT
-    ETag: "ARIHFWL3I7PIS0SPVTFU6TLR2"
-    Server: CouchDB (Erlang OTP)
-    Transfer-Encoding: chunked
-
-    {
-        "last_seq": "11-g1AAAAIreJyVkEEKwjAQRUOrqCuPoCeQZGIaXdmbaNIk1FLjyrXeRG-iN9Gb1LQRaimFlsAEJnkP_s8RQtM0VGhuz0qTmABfYXdI7h4CgeSiKIosDUVwcotJIpQSOmp_71TIpZty97OgymJAU8G5QrOLVdocrVbdfFzy-wYvcbLVEvrxh5K_NlJggIhSNiCFHbmJbu5yonttMoneYD6kD296eNOzzoRNBNqse2Xyjpd3vP96AcYNTQY4Pt5RdTOuHIwCY5S0qewLwY6OaA",
-        "pending": 0,
-        "results": [
-            {
-                "changes": [
-                    {
-                        "rev": "10-304cae84fd862832ea9814f02920d4b2"
-                    }
-                ],
-                "id": "_design/ingredients",
-                "seq": "8-g1AAAAHxeJzLYWBg4MhgTmHgz8tPSTV0MDQy1zMAQsMcoARTIkOS_P___7MymBOZcoEC7MmJKSmJqWaYynEakaQAJJPsoaYwgE1JM0o1TjQ3T2HgLM1LSU3LzEtNwa3fAaQ_HkV_kkGyZWqSEXH6E0D666H6GcH6DYyMzIyNTUnwRR4LkGRoAFJAg-ZnJTIQULkAonI_ws0GaWZGqWkmRLkZYsYBiBn3Cdv2AKIS7ENWsG2mxkampsaGmLqyAOYpgEo"
-            },
-            {
-                "changes": [
-                    {
-                        "rev": "123-6f7c1b7c97a9e4f0d22bdf130e8fd817"
-                    }
-                ],
-                "deleted": true,
-                "id": "_design/cookbook",
-                "seq": "9-g1AAAAHxeJzLYWBg4MhgTmHgz8tPSTV0MDQy1zMAQsMcoARTIkOS_P___7MymBOZcoEC7MmJKSmJqWaYynEakaQAJJPsoaYwgE1JM0o1TjQ3T2HgLM1LSU3LzEtNwa3fAaQ_HkV_kkGyZWqSEXH6E0D661F8YWBkZGZsbEqCL_JYgCRDA5ACGjQ_K5GBgMoFEJX7EW42SDMzSk0zIcrNEDMOQMy4T9i2BxCVYB-ygm0zNTYyNTU2xNSVBQDnK4BL"
-            },
-            {
-                "changes": [
-                    {
-                        "rev": "6-5b8a52c22580e922e792047cff3618f3"
-                    }
-                ],
-                "deleted": true,
-                "id": "_design/meta",
-                "seq": "11-g1AAAAIReJyVkE0OgjAQRiegUVceQU9g-mOpruQm2tI2SLCuXOtN9CZ6E70JFmpCCCFCmkyTdt6bfJMDwDQNFcztWWkcY8JXyB2cu49AgFwURZGloQhO7mGSCKWEjtrtnQq5dFXufhaoLIZoKjhXMLtYpc3RatXNxyW_b_ASJVstST_-UPLXRgpESEQpG5DCjlyFm7uc6F6bTKI3iA_Zhzc9vOlZZ0ImItqse2Xyjpd3vDMBfzo_vrPawLiaxihhjOI2lX0BirqHbg"
-            }
-        ]
-    }
-
-.. _changes/filter/view:
-
-_view
------
-
-.. versionadded:: 1.2
-
-The special filter ``_view`` allows you to use an existing
-:ref:`map function <mapfun>` as the :ref:`filter <filterfun>`. If the map
-function emits anything for the processed document, the document counts as
-accepted and its change is emitted to the feed. Since `filter` functions are
-often very similar to `map` functions, this feature helps to reduce the
-amount of duplicated code.
-
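For example, a map function like the illustrative one below would, when referenced via ``filter=_view&view=...``, pass through only changes for documents it emits for. The document shape and the stubbed ``emit`` are assumptions for demonstration:

```javascript
// Sketch: a map function reused as a _view filter. If it emits for a
// document, that document's change passes the filter. The `ingredients`
// field is an illustrative document shape, not a CouchDB requirement.
function byRecipe(doc) {
  if (doc.ingredients) {
    emit(doc._id, doc.ingredients.length);
  }
}

// Outside CouchDB, the "does it emit?" behaviour can be checked with a stub:
function accepts(mapFun, doc) {
  let emitted = false;
  globalThis.emit = () => { emitted = true; };
  mapFun(doc);
  return emitted;
}

console.log(accepts(byRecipe, { _id: "SpaghettiWithMeatballs", ingredients: ["pasta"] })); // true
console.log(accepts(byRecipe, { _id: "note", text: "no ingredients here" })); // false
```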
-.. warning::
-    While :ref:`map functions <mapfun>` do not normally process design
-    documents, the ``_view`` filter forces them to do so. Make sure they
-    can handle documents with an *alien* structure without failing.
-
-.. note::
-    Using the ``_view`` filter doesn't query the view index files, so you
-    cannot use the common :ref:`view query parameters <api/ddoc/view>` to
-    additionally filter the changes feed by index key. Also, CouchDB doesn't
-    return the result instantly as it does for views - it really runs the
-    specified map function as a filter.
-
-    Moreover, you cannot make such filters dynamic, e.g. process the request
-    query parameters or handle the :ref:`userctx_object` - the map function
-    only operates on the document.
-
-**Request**:
-
-.. code-block:: http
-
-    GET /recipes/_changes?filter=_view&view=ingredients/by_recipe HTTP/1.1
-    Accept: application/json
-    Host: localhost:5984
-
-**Response**:
-
-.. code-block:: http
-
-    HTTP/1.1 200 OK
-    Cache-Control: must-revalidate
-    Content-Type: application/json
-    Date: Tue, 06 Sep 2016 12:57:56 GMT
-    ETag: "ARIHFWL3I7PIS0SPVTFU6TLR2"
-    Server: CouchDB (Erlang OTP)
-    Transfer-Encoding: chunked
-
-    {
-        "last_seq": "11-g1AAAAIreJyVkEEKwjAQRUOrqCuPoCeQZGIaXdmbaNIk1FLjyrXeRG-iN9Gb1LQRaimFlsAEJnkP_s8RQtM0VGhuz0qTmABfYXdI7h4CgeSiKIosDUVwcotJIpQSOmp_71TIpZty97OgymJAU8G5QrOLVdocrVbdfFzy-wYvcbLVEvrxh5K_NlJggIhSNiCFHbmJbu5yonttMoneYD6kD296eNOzzoRNBNqse2Xyjpd3vP96AcYNTQY4Pt5RdTOuHIwCY5S0qewLwY6OaA",
-        "results": [
-            {
-                "changes": [
-                    {
-                        "rev": "13-bcb9d6388b60fd1e960d9ec4e8e3f29e"
-                    }
-                ],
-                "id": "SpaghettiWithMeatballs",
-                "seq": "11-g1AAAAIReJyVkE0OgjAQRiegUVceQU9g-mOpruQm2tI2SLCuXOtN9CZ6E70JFmpCCCFCmkyTdt6bfJMDwDQNFcztWWkcY8JXyB2cu49AgFwURZGloQhO7mGSCKWEjtrtnQq5dFXufhaoLIZoKjhXMLtYpc3RatXNxyW_b_ASJVstST_-UPLXRgpESEQpG5DCjlyFm7uc6F6bTKI3iA_Zhzc9vOlZZ0ImItqse2Xyjpd3vDMBfzo_vrPawLiaxihhjOI2lX0BirqHbg"
-            }
-        ]
-    }
diff --git a/src/api/database/common.rst b/src/api/database/common.rst
deleted file mode 100644
index 4831ab7..0000000
--- a/src/api/database/common.rst
+++ /dev/null
@@ -1,468 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _api/db:
-
-=======
-``/db``
-=======
-
-.. http:head:: /{db}
-    :synopsis: Checks the database existence
-
-    Returns the HTTP Headers containing a minimal amount of information
-    about the specified database. Since the response body is empty, using the
-    HEAD method is a lightweight way to check if the database exists already or
-    not.
-
-    :param db: Database name
-    :code 200: Database exists
-    :code 404: Requested database not found
-
-    **Request**:
-
-    .. code-block:: http
-
-        HEAD /test HTTP/1.1
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Type: application/json
-        Date: Mon, 12 Aug 2013 01:27:41 GMT
-        Server: CouchDB (Erlang/OTP)
-
-.. http:get:: /{db}
-    :synopsis: Returns the database information
-
-    Gets information about the specified database.
-
-    :param db: Database name
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :>json number cluster.n: Replicas. The number of copies of every document.
-    :>json number cluster.q: Shards. The number of range partitions.
-    :>json number cluster.r: Read quorum. The number of consistent copies
-      of a document that need to be read before a successful reply.
-    :>json number cluster.w: Write quorum. The number of copies of a document
-      that need to be written before a successful reply.
-    :>json boolean compact_running: Set to ``true`` if the database compaction
-      routine is operating on this database.
-    :>json string db_name: The name of the database.
-    :>json number disk_format_version: The version of the physical format used
-      for the data when it is stored on disk.
-    :>json number doc_count: A count of the documents in the specified
-      database.
-    :>json number doc_del_count: Number of deleted documents
-    :>json string instance_start_time: Always ``"0"``. (Returned for legacy
-      reasons.)
-    :>json string purge_seq: An opaque string that describes the purge state
-      of the database. Do not rely on this string for counting the number
-      of purge operations.
-    :>json number sizes.active: The size of live data inside the database, in
-      bytes.
-    :>json number sizes.external: The uncompressed size of database contents
-      in bytes.
-    :>json number sizes.file: The size of the database file on disk in bytes.
-      Views indexes are not included in the calculation.
-    :>json string update_seq: An opaque string that describes the state
-      of the database. Do not rely on this string for counting the number
-      of updates.
-    :>json boolean props.partitioned: (optional) If present and true, this
-      indicates that the database is partitioned.
-    :code 200: Request completed successfully
-    :code 404: Requested database not found
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /receipts HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Length: 258
-        Content-Type: application/json
-        Date: Mon, 12 Aug 2013 01:38:57 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        {
-            "cluster": {
-                "n": 3,
-                "q": 8,
-                "r": 2,
-                "w": 2
-            },
-            "compact_running": false,
-            "db_name": "receipts",
-            "disk_format_version": 6,
-            "doc_count": 6146,
-            "doc_del_count": 64637,
-            "instance_start_time": "0",
-            "props": {},
-            "purge_seq": 0,
-            "sizes": {
-                "active": 65031503,
-                "external": 66982448,
-                "file": 137433211
-            },
-            "update_seq": "292786-g1AAAAF..."
-        }
-
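The ``sizes`` fields from the response above can be used to estimate how much space a compaction run might reclaim. A rough client-side sketch; the helper and the heuristic are ours, not a CouchDB API:

```javascript
// Sketch: estimate reclaimable space from a GET /{db} response.
// `sizes.file` is the on-disk size; `sizes.active` is live data,
// so the difference approximates what compaction could free.
function reclaimableBytes(info) {
  return info.sizes.file - info.sizes.active;
}

const info = { sizes: { active: 65031503, external: 66982448, file: 137433211 } };
console.log(reclaimableBytes(info)); // 72401708
```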
-.. http:put:: /{db}
-    :synopsis: Creates a new database
-
-    Creates a new database. The database name ``{db}`` must conform to the
-    following rules:
-
-    -  Name must begin with a lowercase letter (``a-z``)
-
-    -  Lowercase characters (``a-z``)
-
-    -  Digits (``0-9``)
-
-    -  Any of the characters ``_``, ``$``, ``(``, ``)``, ``+``, ``-``, and
-       ``/``.
-
-    If you're familiar with `Regular Expressions`_, the rules above could be
-    written as ``^[a-z][a-z0-9_$()+/-]*$``.
-
-    :param db: Database name
-    :query integer q: Shards, aka the number of range partitions. Default is
-      8, unless overridden in the :config:option:`cluster config <cluster/q>`.
-    :query integer n: Replicas. The number of copies of the database in the
-      cluster. The default is 3, unless overridden in the
-      :config:option:`cluster config <cluster/n>` .
-    :query boolean partitioned: Whether to create a partitioned database.
-      Default is false.
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :>header Location: Database URI location
-    :>json boolean ok: Operation status. Available in case of success
-    :>json string error: Error type. Available if response code is ``4xx``
-    :>json string reason: Error description. Available if response code is
-      ``4xx``
-    :code 201: Database created successfully (quorum is met)
-    :code 202: Accepted (at least by one node)
-    :code 400: Invalid database name
-    :code 401: CouchDB Server Administrator privileges required
-    :code 412: Database already exists
-
-    **Request**:
-
-    .. code-block:: http
-
-        PUT /db HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 201 Created
-        Cache-Control: must-revalidate
-        Content-Length: 12
-        Content-Type: application/json
-        Date: Mon, 12 Aug 2013 08:01:45 GMT
-        Location: http://localhost:5984/db
-        Server: CouchDB (Erlang/OTP)
-
-        {
-            "ok": true
-        }
-
-    If we repeat the same request to CouchDB, it will respond with :code:`412`
-    since the database already exists:
-
-    **Request**:
-
-    .. code-block:: http
-
-        PUT /db HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 412 Precondition Failed
-        Cache-Control: must-revalidate
-        Content-Length: 95
-        Content-Type: application/json
-        Date: Mon, 12 Aug 2013 08:01:16 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        {
-            "error": "file_exists",
-            "reason": "The database could not be created, the file already exists."
-        }
-
-    If an invalid database name is supplied, CouchDB returns a :code:`400`
-    response:
-
-    **Request**:
-
-    .. code-block:: http
-
-        PUT /_db HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 400 Bad Request
-        Cache-Control: must-revalidate
-        Content-Length: 194
-        Content-Type: application/json
-        Date: Mon, 12 Aug 2013 08:02:10 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        {
-            "error": "illegal_database_name",
-            "reason": "Name: '_db'. Only lowercase characters (a-z), digits (0-9), and any of the characters _, $, (, ), +, -, and / are allowed. Must begin with a letter."
-        }
-
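The naming rules above (summarised by the regular expression ``^[a-z][a-z0-9_$()+/-]*$``) can be checked client-side before issuing the ``PUT``. A sketch with an illustrative helper:

```javascript
// Sketch: validate a CouchDB database name against the documented rules.
const DB_NAME = /^[a-z][a-z0-9_$()+\/-]*$/;

function isValidDbName(name) {
  return DB_NAME.test(name);
}

console.log(isValidDbName("receipts")); // true
console.log(isValidDbName("_db"));      // false (must begin with a-z)
console.log(isValidDbName("Receipts")); // false (no uppercase allowed)
```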
-.. http:delete:: /{db}
-    :synopsis: Deletes an existing database
-
-    Deletes the specified database, and all the documents and attachments
-    contained within it.
-
-    .. note::
-        To avoid deleting a database, CouchDB will respond with the HTTP status
-        code 400 when the request URL includes a ?rev= parameter. This suggests
-        that one wants to delete a document but forgot to add the document id
-        to the URL.
-
-    :param db: Database name
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :>json boolean ok: Operation status
-    :code 200: Database removed successfully (quorum is met and database is deleted by at least one node)
-    :code 202: Accepted (deleted by at least one of the nodes, quorum is not met yet)
-    :code 400: Invalid database name or forgotten document id by accident
-    :code 401: CouchDB Server Administrator privileges required
-    :code 404: Database doesn't exist or invalid database name
-
-    **Request**:
-
-    .. code-block:: http
-
-        DELETE /db HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Length: 12
-        Content-Type: application/json
-        Date: Mon, 12 Aug 2013 08:54:00 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        {
-            "ok": true
-        }
-
-.. http:post:: /{db}
-    :synopsis: Creates a new document with generated ID if _id is not specified
-
-    Creates a new document in the specified database, using the supplied JSON
-    document structure.
-
-    If the JSON structure includes the ``_id`` field, then the document will be
-    created with the specified document ID.
-
-    If the ``_id`` field is not specified, a new unique ID will be generated,
-    following whatever UUID algorithm is configured for that server.
-
-    :param db: Database name
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :<header Content-Type: :mimetype:`application/json`
-
-    :query string batch: Stores document in :ref:`batch mode
-      <api/doc/batch-writes>`. Possible values: ``ok``. *Optional*
-
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :>header Location: Document's URI
-
-    :>json string id: Document ID
-    :>json boolean ok: Operation status
-    :>json string rev: Revision info
-
-    :code 201: Document created and stored on disk
-    :code 202: Document data accepted, but not yet stored on disk
-    :code 400: Invalid database name
-    :code 401: Write privileges required
-    :code 404: Database doesn't exist
-    :code 409: A conflicting document with the same ID already exists
-
-    **Request**:
-
-    .. code-block:: http
-
-        POST /db HTTP/1.1
-        Accept: application/json
-        Content-Length: 81
-        Content-Type: application/json
-
-        {
-            "servings": 4,
-            "subtitle": "Delicious with fresh bread",
-            "title": "Fish Stew"
-        }
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 201 Created
-        Cache-Control: must-revalidate
-        Content-Length: 95
-        Content-Type: application/json
-        Date: Tue, 13 Aug 2013 15:19:25 GMT
-        Location: http://localhost:5984/db/ab39fe0993049b84cfa81acd6ebad09d
-        Server: CouchDB (Erlang/OTP)
-
-        {
-            "id": "ab39fe0993049b84cfa81acd6ebad09d",
-            "ok": true,
-            "rev": "1-9c65296036141e575d32ba9c034dd3ee"
-        }
-
-Specifying the Document ID
-==========================
-
-The document ID can be specified by including the ``_id`` field in the
-JSON of the submitted record. The following request will create the same
-document with the ID ``FishStew``.
-
-    **Request**:
-
-    .. code-block:: http
-
-        POST /db HTTP/1.1
-        Accept: application/json
-        Content-Length: 98
-        Content-Type: application/json
-
-        {
-            "_id": "FishStew",
-            "servings": 4,
-            "subtitle": "Delicious with fresh bread",
-            "title": "Fish Stew"
-        }
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 201 Created
-        Cache-Control: must-revalidate
-        Content-Length: 71
-        Content-Type: application/json
-        Date: Tue, 13 Aug 2013 15:19:25 GMT
-        ETag: "1-9c65296036141e575d32ba9c034dd3ee"
-        Location: http://localhost:5984/db/FishStew
-        Server: CouchDB (Erlang/OTP)
-
-        {
-            "id": "FishStew",
-            "ok": true,
-            "rev": "1-9c65296036141e575d32ba9c034dd3ee"
-        }
-
-.. _api/doc/batch-writes:
-
-Batch Mode Writes
-=================
-
-You can write documents to the database at a higher rate by using the batch
-option. This collects document writes together in memory (on a per-user basis)
-before they are committed to disk. This increases the risk of the documents not
-being stored in the event of a failure, since the documents are not written to
-disk immediately.
-
-Batch mode is not suitable for critical data, but may be ideal for applications
-such as log data, when the risk of some data loss due to a crash is acceptable.
-
-To use batch mode, append the ``batch=ok`` query argument to the URL of a
-:post:`/{db}`, :put:`/{db}/{docid}`, or :delete:`/{db}/{docid}` request. The
-CouchDB server will respond with an HTTP :statuscode:`202` response code
-immediately.
-
-.. note::
-    Creating or updating documents with batch mode doesn't guarantee that all
-    documents will be successfully stored on disk. For example, individual
-    documents may not be saved due to conflicts, rejection by
-    :ref:`validation function <vdufun>` or by other reasons, even if overall
-    the batch was successfully submitted.
-
-**Request**:
-
-.. code-block:: http
-
-    POST /db?batch=ok HTTP/1.1
-    Accept: application/json
-    Content-Length: 98
-    Content-Type: application/json
-
-    {
-        "_id": "FishStew",
-        "servings": 4,
-        "subtitle": "Delicious with fresh bread",
-        "title": "Fish Stew"
-    }
-
-**Response**:
-
-.. code-block:: http
-
-    HTTP/1.1 202 Accepted
-    Cache-Control: must-revalidate
-    Content-Length: 28
-    Content-Type: application/json
-    Date: Tue, 13 Aug 2013 15:19:25 GMT
-    Location: http://localhost:5984/db/FishStew
-    Server: CouchDB (Erlang/OTP)
-
-    {
-        "id": "FishStew",
-        "ok": true
-    }
-
-.. _Regular Expressions: http://en.wikipedia.org/wiki/Regular_expression
diff --git a/src/api/database/compact.rst b/src/api/database/compact.rst
deleted file mode 100644
index 5297183..0000000
--- a/src/api/database/compact.rst
+++ /dev/null
@@ -1,246 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _api/db/compact:
-
-================
-``/db/_compact``
-================
-
-.. http:post:: /{db}/_compact
-    :synopsis: Starts a compaction for the database
-
-    Request compaction of the specified database. Compaction compresses the
-    disk database file by performing the following operations:
-
-    - Writes a new, optimised, version of the database file, removing any
-      unused sections from the new version during write. Because a new file is
-      temporarily created for this purpose, you may require up to twice the
-      current storage space of the specified database in order for the
-      compaction routine to complete.
-    - Removes the bodies of any non-leaf revisions of documents from the
-      database.
-    - Removes old revision history beyond the limit specified by the
-      ``_revs_limit`` database parameter.
-
-    Compaction can only be requested on an individual database; you cannot
-    compact all the databases for a CouchDB instance. The compaction process
-    runs as a background process.
-
-    You can determine if the compaction process is operating on a database
-    by obtaining the database meta information, the ``compact_running``
-    value of the returned database structure will be set to true. See
-    :get:`/{db}`.
-
-    You can also obtain a list of running processes to determine whether
-    compaction is currently running. See :ref:`api/server/active_tasks`.
-
-    :param db: Database name
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :<header Content-Type: :mimetype:`application/json`
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :>json boolean ok: Operation status
-    :code 202: Compaction request has been accepted
-    :code 400: Invalid database name
-    :code 401: CouchDB Server Administrator privileges required
-    :code 415: Bad :header:`Content-Type` value
-
-    **Request**:
-
-    .. code-block:: http
-
-        POST /db/_compact HTTP/1.1
-        Accept: application/json
-        Content-Type: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 202 Accepted
-        Cache-Control: must-revalidate
-        Content-Length: 12
-        Content-Type: application/json
-        Date: Mon, 12 Aug 2013 09:27:43 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        {
-            "ok": true
-        }
-
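As noted above, compaction runs in the background and its progress is reported through the ``compact_running`` field of the database info. A minimal sketch of checking that flag, assuming a response body shaped like the output of :get:`/{db}` (the sample body here is hypothetical):

```python
import json

def is_compacting(db_info_body: str) -> bool:
    """Return True if a GET /{db} response body reports a running compaction."""
    info = json.loads(db_info_body)
    # compact_running is absent on very old responses; treat that as "not running".
    return info.get("compact_running", False)

# Hypothetical GET /db response body, trimmed to the relevant fields.
sample = '{"db_name": "db", "compact_running": true, "doc_count": 5}'
print(is_compacting(sample))  # True while compaction is in progress
```

Polling this flag (or :ref:`api/server/active_tasks`) tells you when it is safe to assume the new, smaller database file has replaced the old one.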
-.. _api/db/compact/ddoc:
-
-===========================
-``/db/_compact/design-doc``
-===========================
-
-.. http:post:: /{db}/_compact/{ddoc}
-    :synopsis: Starts a compaction for all the views in the selected
-               design document
-
-    Compacts the view indexes associated with the specified design document.
-    Compacting a large view can free more storage than compacting the
-    database itself. Thus, you can use this call in place of a full
-    database compaction if you know that a specific set of view indexes
-    has been affected by a recent database change.
-
-    :param db: Database name
-    :param ddoc: Design document name
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :<header Content-Type: :mimetype:`application/json`
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :>json boolean ok: Operation status
-    :code 202: Compaction request has been accepted
-    :code 400: Invalid database name
-    :code 401: CouchDB Server Administrator privileges required
-    :code 404: Design document not found
-    :code 415: Bad :header:`Content-Type` value
-
-    **Request**:
-
-    .. code-block:: http
-
-        POST /db/_compact/posts HTTP/1.1
-        Accept: application/json
-        Content-Type: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 202 Accepted
-        Cache-Control: must-revalidate
-        Content-Length: 12
-        Content-Type: application/json
-        Date: Mon, 12 Aug 2013 09:36:44 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        {
-            "ok": true
-        }
-
-    .. note::
-        View indexes are stored in a separate ``.couch`` file based on a hash
-        of the design document's relevant functions, in a subdirectory of the
-        directory where the main ``.couch`` database files are located.
-
-.. _api/db/ensure_full_commit:
-
-===========================
-``/db/_ensure_full_commit``
-===========================
-
-.. http:post:: /{db}/_ensure_full_commit
-    :synopsis: Deprecated endpoint, kept to support replicators from
-               CouchDB versions earlier than 3.0.
-
-    .. versionchanged:: 3.0.0 Deprecated; endpoint is a no-op.
-
-    Before 3.0 this was used to commit recent changes to the database in case
-    the ``delayed_commits=true`` option was set. That option is always
-    ``false`` now, so commits are never delayed. However, this endpoint is kept
-    for compatibility with older replicators.
-
-    :param db: Database name
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :<header Content-Type: :mimetype:`application/json`
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :>json string instance_start_time: Always ``"0"``. (Returned for legacy
-      reasons.)
-    :>json boolean ok: Operation status
-    :code 201: Commit completed successfully
-    :code 400: Invalid database name
-    :code 415: Bad :header:`Content-Type` value
-
-    **Request**:
-
-    .. code-block:: http
-
-        POST /db/_ensure_full_commit HTTP/1.1
-        Accept: application/json
-        Content-Type: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 201 Created
-        Cache-Control: must-revalidate
-        Content-Length: 53
-        Content-Type: application/json
-        Date: Mon, 12 Aug 2013 10:22:19 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        {
-            "instance_start_time": "0",
-            "ok": true
-        }
-
-.. _api/db/view_cleanup:
-
-=====================
-``/db/_view_cleanup``
-=====================
-
-.. http:post:: /{db}/_view_cleanup
-    :synopsis: Removes view files that are not used by any design document
-
-    Removes view index files that are no longer required by CouchDB as a result
-    of changed views within design documents. Because the view filename is
-    based on a hash of the view functions, old index files remain on disk
-    after the functions change, consuming storage. This call cleans up that
-    stale cached view output on disk.
-
-    :param db: Database name
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :<header Content-Type: :mimetype:`application/json`
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :>json boolean ok: Operation status
-    :code 202: View cleanup request has been accepted
-    :code 400: Invalid database name
-    :code 401: CouchDB Server Administrator privileges required
-    :code 415: Bad :header:`Content-Type` value
-
-    **Request**:
-
-    .. code-block:: http
-
-        POST /db/_view_cleanup HTTP/1.1
-        Accept: application/json
-        Content-Type: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 202 Accepted
-        Cache-Control: must-revalidate
-        Content-Length: 12
-        Content-Type: application/json
-        Date: Mon, 12 Aug 2013 09:27:43 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        {
-            "ok": true
-        }
diff --git a/src/api/database/find.rst b/src/api/database/find.rst
deleted file mode 100644
index b5a0bd4..0000000
--- a/src/api/database/find.rst
+++ /dev/null
@@ -1,1425 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _api/db/_find:
-
-================
-``/db/_find``
-================
-
-.. http:post:: /{db}/_find
-    :synopsis: Find documents within a given database
-
-    Find documents using a declarative JSON querying syntax.
-    Queries will use custom indexes, specified using the :ref:`_index <api/db/find/index>`
-    endpoint, if available.
-    Otherwise, they use the built-in :ref:`_all_docs <api/db/all_docs>` index, which
-    can be arbitrarily slow.
-
-    :param db: Database name
-
-    :<header Content-Type: - :mimetype:`application/json`
-
-    :<json json selector: JSON object describing criteria used to select
-        documents. More information provided in the section on :ref:`selector
-        syntax <find/selectors>`. *Required*
-    :<json number limit: Maximum number of results returned. Default is ``25``.
-        *Optional*
-    :<json number skip: Skip the first 'n' results, where 'n' is the value
-        specified. *Optional*
-    :<json json sort: JSON array following :ref:`sort syntax <find/sort>`.
-        *Optional*
-    :<json array fields: JSON array specifying which fields of each object
-        should be returned. If it is omitted, the entire object is returned.
-        More information provided in the section on :ref:`filtering fields
-        <find/filter>`. *Optional*
-    :<json string|array use_index: Instruct a query to use a specific index.
-        Specified either as ``"<design_document>"`` or
-        ``["<design_document>", "<index_name>"]``. *Optional*
-    :<json boolean conflicts: Include conflicted documents if ``true``.
-        Intended use is to easily find conflicted documents, without an
-        index or view. Default is ``false``. *Optional*
-    :<json number r: Read quorum needed for the result. This defaults to 1, in
-        which case the document found in the index is returned. If set to a
-        higher value, each document is read from at least that many replicas
-        before it is returned in the results. This is likely to take more time
-        than using only the document stored locally with the index. *Optional,
-        default: 1*
-    :<json string bookmark: A string that enables you to specify which page of
-        results you require. Used for paging through result sets. Every query
-        returns an opaque string under the ``bookmark`` key that can then be
-        passed back in a query to get the next page of results. If any part of
-        the selector query changes between requests, the results
-        are undefined. *Optional, default: null*
-    :<json boolean update: Whether to update the index prior to returning the
-        result. Default is ``true``. *Optional*
-    :<json boolean stable: Whether or not the view results should be returned
-        from a "stable" set of shards. *Optional*
-    :<json string stale: Combination of ``update=false`` and ``stable=true``
-        options. Possible options: ``"ok"``, ``false`` (default).
-        This parameter is deprecated; use ``stable`` and ``update`` instead.
-        See :ref:`views/generation` for more details. *Optional*
-    :<json boolean execution_stats: Include
-        :ref:`execution statistics <find/statistics>` in the query response.
-        *Optional, default:* ``false``
-
-    :>header Content-Type: :mimetype:`application/json`
-    :>header Transfer-Encoding: ``chunked``
-
-    :>json object docs: Array of documents matching the search. In each matching
-        document, the fields specified in the ``fields`` part of the request
-        body are listed, along with their values.
-    :>json string warning: Execution warnings
-    :>json object execution_stats: Execution statistics
-    :>json string bookmark: An opaque string used for paging. See the
-        ``bookmark`` field in the request (above) for usage details.
-
-    :code 200: Request completed successfully
-    :code 400: Invalid request
-    :code 401: Read permission required
-    :code 404: Requested database not found
-    :code 500: Query execution error
-
-The ``limit`` and ``skip`` values behave exactly as you would expect. Although
-``skip`` exists, it is not intended for paging, because the ``bookmark``
-feature is more efficient.
-
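The bookmark-based paging described above amounts to a loop that feeds each response's opaque ``bookmark`` back into the next request. A sketch of that loop, where ``find`` stands in for whatever function actually issues the :post:`/{db}/_find` request (it is an assumption of this example, not a CouchDB API):

```python
def paginate(find, query):
    """Yield all matching docs by feeding each response's bookmark
    into the next request.

    ``find`` is any callable that takes a Mango query dict and returns
    the parsed JSON response: {"docs": [...], "bookmark": "..."}.
    """
    query = dict(query)  # avoid mutating the caller's query
    while True:
        page = find(query)
        docs = page.get("docs", [])
        if not docs:
            break  # an empty page means the result set is exhausted
        yield from docs
        # Pass the opaque bookmark back unchanged to get the next page.
        query["bookmark"] = page["bookmark"]
```

Note that the selector itself must stay identical between requests; as stated above, changing any part of it between bookmarked requests leaves the results undefined.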
-    **Request**:
-
-Example request body for finding documents using an index:
-
-    .. code-block:: http
-
-        POST /movies/_find HTTP/1.1
-        Accept: application/json
-        Content-Type: application/json
-        Content-Length: 168
-        Host: localhost:5984
-
-        {
-            "selector": {
-                "year": {"$gt": 2010}
-            },
-            "fields": ["_id", "_rev", "year", "title"],
-            "sort": [{"year": "asc"}],
-            "limit": 2,
-            "skip": 0,
-            "execution_stats": true
-        }
-
-    **Response**:
-
-Example response when finding documents using an index:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Type: application/json
-        Date: Thu, 01 Sep 2016 15:41:53 GMT
-        Server: CouchDB (Erlang OTP)
-        Transfer-Encoding: chunked
-
-        {
-            "docs": [
-                {
-                    "_id": "176694",
-                    "_rev": "1-54f8e950cc338d2385d9b0cda2fd918e",
-                    "year": 2011,
-                    "title": "The Tragedy of Man"
-                },
-                {
-                    "_id": "780504",
-                    "_rev": "1-5f14bab1a1e9ac3ebdf85905f47fb084",
-                    "year": 2011,
-                    "title": "Drive"
-                }
-            ],
-            "execution_stats": {
-                "total_keys_examined": 0,
-                "total_docs_examined": 200,
-                "total_quorum_docs_examined": 0,
-                "results_returned": 2,
-                "execution_time_ms": 5.52
-            }
-        }
-
-.. _find/selectors:
-
-Selector Syntax
-===============
-
-Selectors are expressed as a JSON object describing documents of interest.
-Within this structure, you can apply conditional logic using specially named
-fields.
-
-Whilst selectors have some similarities with MongoDB query documents, these
-arise from a similarity of purpose and do not necessarily extend to commonality
-of function or result.
-
-.. _find/selectorbasics:
-
-Selector Basics
----------------
-
-Elementary selector syntax requires you to specify one or more fields, and the
-corresponding values required for those fields. This selector matches all
-documents whose "director" field has the value "Lars von Trier".
-
-.. code-block:: javascript
-
-    {
-        "director": "Lars von Trier"
-    }
-
-A simple selector, inspecting specific fields:
-
-.. code-block:: javascript
-
-    "selector": {
-      "title": "Live And Let Die"
-    },
-    "fields": [
-      "title",
-      "cast"
-    ]
-
-You can create more complex selector expressions by combining operators.
-For best performance, combine 'combination' or
-'array logical' operators, such as ``$regex``, with an equality
-operator such as ``$eq``, ``$gt``, ``$gte``, ``$lt``, or ``$lte``
-(but not ``$ne``). For more information about creating complex
-selector expressions, see :ref:`creating selector expressions
-<find/expressions>`.
-
-.. _find/twofields:
-
-Selector with 2 fields
-----------------------
-
-This selector matches any document with a name field containing ``"Paul"``,
-and that also has a location field with the value ``"Boston"``.
-
-.. code-block:: javascript
-
-    {
-        "name": "Paul",
-        "location": "Boston"
-    }
-
-.. _find/subfields:
-
-Subfields
----------
-
-A more complex selector enables you to specify the values for field of nested
-objects, or subfields. For example, you might use a standard JSON structure for
-specifying a field and subfield.
-
-Example of a field and subfield selector, using a standard JSON structure:
-
-.. code-block:: javascript
-
-    {
-        "imdb": {
-            "rating": 8
-        }
-    }
-
-An abbreviated equivalent uses a dot notation to combine the field and subfield
-names into a single name.
-
-.. code-block:: javascript
-
-    {
-        "imdb.rating": 8
-    }
-
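The dot notation above is simply a path into nested objects. A minimal sketch of how such a dotted path resolves against a document (illustrative only, not CouchDB's implementation):

```python
def resolve(doc, path):
    """Walk a dotted field path such as "imdb.rating" into nested objects."""
    value = doc
    for part in path.split("."):
        if not isinstance(value, dict) or part not in value:
            return None  # field missing: the selector cannot match
        value = value[part]
    return value

doc = {"imdb": {"rating": 8}}
print(resolve(doc, "imdb.rating"))  # 8
```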
-.. _find/operators:
-
-Operators
----------
-
-Operators are identified by the use of a dollar sign ($) prefix in the name
-field.
-
-There are two core types of operators in the selector syntax:
-
--  Combination operators
--  Condition operators
-
-In general, combination operators are applied at the topmost level of selection.
-They are used to combine conditions, or to create combinations of conditions,
-into one selector.
-
-Every explicit operator has the form:
-
-.. code-block:: javascript
-
-    {"$operator": argument}
-
-A selector without an explicit operator is considered to have an implicit
-operator. The exact implicit operator is determined by the structure of the
-selector expression.
-
-.. _find/implicit_operators:
-
-Implicit Operators
-------------------
-
-There are two implicit operators:
-
--  Equality
--  And
-
-In a selector, any field containing a JSON value, but that has no operators in
-it, is considered to be an equality condition. The implicit equality test also
-applies to fields and subfields.
-
-Any JSON object that is not the argument to a condition operator is an implicit
-``$and`` operator on each field.
-
-In the below example, we use an operator to match any document, where the
-``"year"`` field has a value greater than ``2010``:
-
-.. code-block:: javascript
-
-    {
-        "year": {
-            "$gt": 2010
-        }
-    }
-
-In this next example, there must be a field ``"director"`` in a matching
-document, and the field must have a value exactly equal to ``"Lars von Trier"``.
-
-.. code-block:: javascript
-
-    {
-        "director": "Lars von Trier"
-    }
-
-You can also make the equality operator explicit.
-
-.. code-block:: javascript
-
-    {
-        "director": {
-            "$eq": "Lars von Trier"
-        }
-    }
-
-In the next example using subfields, the required field ``"imdb"`` in a matching
-document must also have a subfield ``"rating"`` and the subfield must have a
-value equal to ``8``.
-
-Example of an implicit operator applied to a subfield test:
-
-.. code-block:: javascript
-
-    {
-        "imdb": {
-            "rating": 8
-        }
-    }
-
-Again, you can make the equality operator explicit.
-
-.. code-block:: javascript
-
-    {
-        "imdb": {
-            "rating": { "$eq": 8 }
-        }
-    }
-
-An example of the ``$eq`` operator used with full text indexing
-
-.. code-block:: javascript
-
-    {
-      "selector": {
-        "year": {
-          "$eq": 2001
-        }
-      },
-      "sort": [
-        "title:string"
-      ],
-      "fields": [
-        "title"
-      ]
-    }
-
-An example of the ``$eq`` operator used with a database indexed on the field ``"year"``
-
-.. code-block:: javascript
-
-    {
-      "selector": {
-        "year": {
-          "$eq": 2001
-        }
-      },
-      "sort": [
-        "year"
-      ],
-      "fields": [
-        "year"
-      ]
-    }
-
-In this example, the field ``"director"`` must be present and contain the value
-``"Lars von Trier"`` and the field ``"year"`` must exist and have the value
-``2003``.
-
-.. code-block:: javascript
-
-    {
-        "director": "Lars von Trier",
-        "year": 2003
-    }
-
-You can make both the ``$and`` operator and the equality operator explicit.
-
-Example of using explicit ``$and`` and ``$eq`` operators:
-
-.. code-block:: javascript
-
-    {
-        "$and": [
-            {
-                "director": {
-                    "$eq": "Lars von Trier"
-                }
-            },
-            {
-                "year": {
-                    "$eq": 2003
-                }
-            }
-        ]
-    }
-
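The implicit and explicit forms above select exactly the same documents. As a toy illustration of that equivalence, the following sketch evaluates just the operators used in these examples (``$eq``, ``$gt``, ``$and``, and implicit equality); it is a sketch of the semantics under stated assumptions, not CouchDB's query engine:

```python
def matches(doc, selector):
    """Evaluate a tiny subset of Mango selector semantics against one document."""
    for field, cond in selector.items():
        if field == "$and":
            # $and takes an array of selectors; all of them must match.
            if not all(matches(doc, sub) for sub in cond):
                return False
        elif isinstance(cond, dict) and any(k.startswith("$") for k in cond):
            # Condition operators require the field to exist in the document.
            if field not in doc:
                return False
            for op, arg in cond.items():
                if op == "$eq" and not doc[field] == arg:
                    return False
                if op == "$gt" and not doc[field] > arg:
                    return False
        else:
            # A plain JSON value is an implicit equality condition.
            if doc.get(field) != cond:
                return False
    return True

doc = {"director": "Lars von Trier", "year": 2003}
print(matches(doc, {"director": "Lars von Trier", "year": {"$gt": 2000}}))  # True
```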
-.. _find/explicit_operators:
-
-Explicit Operators
-------------------
-
-All operators, apart from 'Equality' and 'And', must be stated explicitly.
-
-.. _find/combination_operators:
-
-Combination Operators
----------------------
-
-Combination operators are used to combine selectors. In addition to the common
-boolean operators found in most programming languages, there are three
-combination operators (``$all``, ``$elemMatch``, and ``$allMatch``) that help
-you work with JSON arrays and one that works with JSON maps (``$keyMapMatch``).
-
-A combination operator takes a single argument. The argument is either another
-selector, or an array of selectors.
-
-The list of combination operators:
-
-+------------------+----------+--------------------------------------------------+
-| Operator         | Argument | Purpose                                          |
-+==================+==========+==================================================+
-| ``$and``         | Array    | Matches if all the selectors in the array match. |
-+------------------+----------+--------------------------------------------------+
-| ``$or``          | Array    | Matches if any of the selectors in the array     |
-|                  |          | match. All selectors must use the same index.    |
-+------------------+----------+--------------------------------------------------+
-| ``$not``         | Selector | Matches if the given selector does not match.    |
-+------------------+----------+--------------------------------------------------+
-| ``$nor``         | Array    | Matches if none of the selectors in the array    |
-|                  |          | match.                                           |
-+------------------+----------+--------------------------------------------------+
-| ``$all``         | Array    | Matches an array value if it contains all the    |
-|                  |          | elements of the argument array.                  |
-+------------------+----------+--------------------------------------------------+
-| ``$elemMatch``   | Selector | Matches and returns all documents that contain an|
-|                  |          | array field with at least one element that       |
-|                  |          | matches all the specified query criteria.        |
-+------------------+----------+--------------------------------------------------+
-| ``$allMatch``    | Selector | Matches and returns all documents that contain an|
-|                  |          | array field with all its elements matching all   |
-|                  |          | the specified query criteria.                    |
-+------------------+----------+--------------------------------------------------+
-| ``$keyMapMatch`` | Selector | Matches and returns all documents that contain a |
-|                  |          | map that contains at least one key that matches  |
-|                  |          | all the specified query criteria.                |
-+------------------+----------+--------------------------------------------------+
-
-.. _find/and:
-
-**The** ``$and`` **operator**
-
-An example of the ``$and`` operator used with two fields:
-
-.. code-block:: javascript
-
-    {
-      "selector": {
-        "$and": [
-          {
-            "title": "Total Recall"
-          },
-          {
-            "year": {
-              "$in": [1984, 1991]
-            }
-          }
-        ]
-      },
-      "fields": [
-        "year",
-        "title",
-        "cast"
-      ]
-    }
-
-The ``$and`` operator matches if all the selectors in the array match. Below is
-an example using the primary index (``_all_docs``):
-
-.. code-block:: javascript
-
-    {
-        "$and": [
-            {
-                "_id": { "$gt": null }
-            },
-            {
-                "year": {
-                    "$in": [2014, 2015]
-                }
-            }
-        ]
-    }
-
-.. _find/or:
-
-**The** ``$or`` **operator**
-
-The ``$or`` operator matches if any of the selectors in the array match. Below
-is an example used with an index on the field ``"year"``:
-
-.. code-block:: javascript
-
-    {
-        "year": 1977,
-        "$or": [
-            { "director": "George Lucas" },
-            { "director": "Steven Spielberg" }
-        ]
-    }
-
-.. _find/not:
-
-**The** ``$not`` **operator**
-
-The ``$not`` operator matches if the given selector does not match. Below is an
-example used with an index on the field ``"year"``:
-
-.. code-block:: javascript
-
-    {
-        "year": {
-            "$gte": 1900,
-            "$lte": 1903
-        },
-        "$not": {
-            "year": 1901
-        }
-    }
-
-.. _find/nor:
-
-**The** ``$nor`` **operator**
-
-The ``$nor`` operator matches if none of the selectors in the array match.
-Below is an example used with an index on the field ``"year"``:
-
-.. code-block:: javascript
-
-    {
-        "year": {
-            "$gte": 1900,
-            "$lte": 1910
-        },
-        "$nor": [
-            { "year": 1901 },
-            { "year": 1905 },
-            { "year": 1907 }
-        ]
-    }
-
-.. _find/all:
-
-**The** ``$all`` **operator**
-
-The ``$all`` operator matches an array value if it contains all the elements of
-the argument array. Below is an example used with the primary index
-(``_all_docs``):
-
-.. code-block:: javascript
-
-    {
-        "_id": {
-            "$gt": null
-        },
-        "genre": {
-            "$all": ["Comedy","Short"]
-        }
-    }
-
-.. _find/elemmatch:
-
-**The** ``$elemMatch`` **operator**
-
-The ``$elemMatch`` operator matches and returns all documents that contain an
-array field with at least one element matching the supplied query criteria.
-Below is an example used with the primary index (``_all_docs``):
-
-.. code-block:: javascript
-
-    {
-        "_id": { "$gt": null },
-        "genre": {
-            "$elemMatch": {
-                "$eq": "Horror"
-            }
-        }
-    }
-
-.. _find/allmatch:
-
-**The** ``$allMatch`` **operator**
-
-The ``$allMatch`` operator matches and returns all documents that contain an
-array field with all its elements matching the supplied query criteria. Below
-is an example used with the primary index (``_all_docs``):
-
-.. code-block:: javascript
-
-    {
-        "_id": { "$gt": null },
-        "genre": {
-            "$allMatch": {
-                "$eq": "Horror"
-            }
-        }
-    }
-
-.. _find/keymapmatch:
-
-**The** ``$keyMapMatch`` **operator**
-
-The ``$keyMapMatch`` operator matches and returns all documents that contain a
-map that contains at least one key that matches all the specified query criteria.
-Below is an example used with the primary index (``_all_docs``):
-
-.. code-block:: javascript
-
-    {
-        "_id": { "$gt": null },
-        "cameras": {
-            "$keyMapMatch": {
-                "$eq": "secondary"
-            }
-        }
-    }
-
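The difference between these three operators comes down to the quantifier applied and the container it is applied to. A hedged Python sketch of the semantics, where a predicate stands in for the nested selector criteria:

```python
def elem_match(array, pred):
    """$elemMatch: at least one array element satisfies the criteria."""
    return any(pred(e) for e in array)

def all_match(array, pred):
    """$allMatch: every array element satisfies the criteria."""
    return all(pred(e) for e in array)

def key_map_match(mapping, pred):
    """$keyMapMatch: at least one key of the map satisfies the criteria."""
    return any(pred(k) for k in mapping)

genres = ["Comedy", "Horror"]
print(elem_match(genres, lambda g: g == "Horror"))  # True: one element matches
print(all_match(genres, lambda g: g == "Horror"))   # False: "Comedy" does not

cameras = {"primary": {}, "secondary": {}}
print(key_map_match(cameras, lambda k: k == "secondary"))  # True: key matches
```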
-.. _find/condition-operators:
-
-Condition Operators
--------------------
-
-Condition operators are specific to a field, and are used to evaluate the value
-stored in that field. For instance, the basic ``$eq`` operator matches when the
-specified field contains a value that is equal to the supplied argument.
-
-.. note::
-    For a condition operator to function correctly, the field **must exist**
-    in the document for the selector to match. For example, ``$ne`` means
-    the specified field must exist and hold a value that is not equal to
-    the argument.
-
-The basic equality and inequality operators common to most programming
-languages are supported. Strict type matching is used.
-
-In addition, some 'meta' condition operators are available. Some condition
-operators accept any valid JSON content as the argument.  Other condition
-operators require the argument to be in a specific JSON format.
-
-+---------------+-------------+------------+-----------------------------------+
-| Operator type | Operator    | Argument   | Purpose                           |
-+===============+=============+============+===================================+
-| (In)equality  | ``$lt``     | Any JSON   | The field is less than the        |
-|               |             |            | argument.                         |
-+---------------+-------------+------------+-----------------------------------+
-|               | ``$lte``    | Any JSON   | The field is less than or equal to|
-|               |             |            | the argument.                     |
-+---------------+-------------+------------+-----------------------------------+
-|               | ``$eq``     | Any JSON   | The field is equal to the         |
-|               |             |            | argument.                         |
-+---------------+-------------+------------+-----------------------------------+
-|               | ``$ne``     | Any JSON   | The field is not equal to the     |
-|               |             |            | argument.                         |
-+---------------+-------------+------------+-----------------------------------+
-|               | ``$gte``    | Any JSON   | The field is greater than or equal|
-|               |             |            | to the argument.                  |
-+---------------+-------------+------------+-----------------------------------+
-|               | ``$gt``     | Any JSON   | The field is greater than the     |
-|               |             |            | argument.                         |
-+---------------+-------------+------------+-----------------------------------+
-| Object        | ``$exists`` | Boolean    | Check whether the field exists or |
-|               |             |            | not, regardless of its value.     |
-+---------------+-------------+------------+-----------------------------------+
-|               | ``$type``   | String     | Check the document field's type.  |
-|               |             |            | Valid values are ``"null"``,      |
-|               |             |            | ``"boolean"``, ``"number"``,      |
-|               |             |            | ``"string"``, ``"array"``, and    |
-|               |             |            | ``"object"``.                     |
-+---------------+-------------+------------+-----------------------------------+
-| Array         | ``$in``     | Array of   | The document field must exist in  |
-|               |             | JSON values| the list provided.                |
-+---------------+-------------+------------+-----------------------------------+
-|               | ``$nin``    | Array of   | The document field must not exist |
-|               |             | JSON values| in the list provided.             |
-+---------------+-------------+------------+-----------------------------------+
-|               | ``$size``   | Integer    | Special condition to match the    |
-|               |             |            | length of an array field in a     |
-|               |             |            | document. Non-array fields cannot |
-|               |             |            | match this condition.             |
-+---------------+-------------+------------+-----------------------------------+
-| Miscellaneous | ``$mod``    | [Divisor,  | Divisor and Remainder are both    |
-|               |             | Remainder] | positive or negative integers.    |
-|               |             |            | Non-integer values result in a    |
-|               |             |            | 404. Matches documents where      |
-|               |             |            | ``field % Divisor == Remainder``  |
-|               |             |            | is true, and only when the        |
-|               |             |            | document field is an integer.     |
-+---------------+-------------+------------+-----------------------------------+
-|               | ``$regex``  | String     | A regular expression pattern to   |
-|               |             |            | match against the document field. |
-|               |             |            | Only matches when the field is a  |
-|               |             |            | string value and matches the      |
-|               |             |            | supplied regular expression. The  |
-|               |             |            | matching algorithms are based on  |
-|               |             |            | the Perl Compatible Regular       |
-|               |             |            | Expression (PCRE) library. For    |
-|               |             |            | more information about what is    |
-|               |             |            | implemented, see the              |
-|               |             |            | `Erlang Regular Expression        |
-|               |             |            | <http://erlang.org/doc            |
-|               |             |            | /man/re.html>`_ documentation.    |
-+---------------+-------------+------------+-----------------------------------+
-
-.. warning::
-    Regular expressions do not work with indexes, so they should not be used to
-    filter large data sets. They can, however, be used to restrict a
-    :ref:`partial index <find/partial_indexes>`.
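In practice, a ``$regex`` condition is usually paired with an indexable equality condition, so the index narrows the candidate rows and the regular expression is then applied in memory. A sketch (the field and index names here are hypothetical):

```javascript
{
    "selector": {
        "type": "user",
        "name": { "$regex": "^A" }
    },
    "use_index": "type-index"
}
```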
-
-.. _find/expressions:
-
-Creating Selector Expressions
------------------------------
-
-We have seen examples of combining selector expressions, such as :ref:`using
-explicit $and and $eq operators <find/combination_operators>`.
-
-In general, whenever you have an operator that takes an argument, that argument
-can itself be another operator with arguments of its own. This enables us to
-build up more complex selector expressions.
-
-However, only equality operators such as ``$eq``, ``$gt``, ``$gte``, ``$lt``,
-and ``$lte`` (but not ``$ne``) can be used as the basis of a query. You should
-include at least one of these in a selector.
-
-For example, if you try to perform a query that attempts to match all documents
-that have a field called `afieldname` containing a value that begins with the
-letter `A`, this will trigger a warning because no index could be used and
-the database performs a full scan of the primary index:
-
-    **Request**
-
-    .. code-block:: http
-
-        POST /movies/_find HTTP/1.1
-        Accept: application/json
-        Content-Type: application/json
-        Content-Length: 112
-        Host: localhost:5984
-
-        {
-            "selector": {
-                "afieldname": {"$regex": "^A"}
-            }
-        }
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Type: application/json
-        Date: Thu, 01 Sep 2016 17:25:51 GMT
-        Server: CouchDB (Erlang OTP)
-        Transfer-Encoding: chunked
-
-        {
-            "warning":"no matching index found, create an index to optimize
-            query time",
-            "docs":[
-            ]
-        }
-
-.. warning::
-    It's always recommended that you create an appropriate index when deploying
-    in production.
-
-Most selector expressions work exactly as you would expect for the given
-operator. But it is not always the case: for example, comparison of strings is
-done with ICU and can give surprising results if you were expecting ASCII
-ordering. See :ref:`views/collation` for more details.
-
-.. _find/sort:
-
-Sort Syntax
-===========
-
-The ``sort`` field contains a list of field name and direction pairs, expressed
-as a basic array. The first field name and direction pair is the topmost level
-of sort. The second pair, if provided, is the next level of sort.
-
-The field can be any field, using dotted notation if desired for sub-document
-fields.
-
-The direction value is ``"asc"`` for ascending, and ``"desc"`` for descending.
-If you omit the direction value, the default ``"asc"`` is used.
-
-Example, sorting by 2 fields:
-
-    .. code-block:: javascript
-
-        [{"fieldName1": "desc"}, {"fieldName2": "desc" }]
-
-Example, sorting by 2 fields, assuming the default direction for both:
-
-    .. code-block:: javascript
-
-        ["fieldNameA", "fieldNameB"]
-
-A typical requirement is to search for some content using a selector, then to
-sort the results according to the specified field, in the required direction.
-
-To use sorting, ensure that:
-
--  At least one of the sort fields is included in the selector.
--  There is an index already defined, with all the sort fields in the same
-   order.
--  Each object in the sort array has a single key.
-
-If an object in the sort array does not have a single key, the resulting sort
-order is implementation specific and might change.
-
-Find does not support multiple fields with different sort orders, so the
-directions must be either all ascending or all descending.
-
-For field names in text search sorts, it is sometimes necessary for a
-field type to be specified, for example:
-
-``{ "<fieldname>:string": "asc"}``
-
-If possible, an attempt is made to discover the field type based on the
-selector. In ambiguous cases the field type must be provided explicitly.
-
-The sorting order is undefined when fields contain different data types.
-This is an important difference between text and view indexes. Sorting
-behavior for fields with different data types might change in future
-versions.
-
-A simple query, using sorting:
-
-.. code-block:: javascript
-
-    {
-        "selector": {"Actor_name": "Robert De Niro"},
-        "sort": [{"Actor_name": "asc"}, {"Movie_runtime": "asc"}]
-    }
-
-.. _find/filter:
-
-Filtering Fields
-================
-
-It is possible to specify exactly which fields are returned for a document when
-selecting from a database. The two advantages are:
-
--  Your results are limited to only those parts of the document that are
-   required for your application.
--  A reduction in the size of the response.
-
-The fields returned are specified as an array.
-
-Only the specified filter fields are included in the response. There is no
-automatic inclusion of the ``_id`` or other metadata fields when a field list
-is included.
-
-Example of selective retrieval of fields from matching documents:
-
-    .. code-block:: javascript
-
-        {
-            "selector": { "Actor_name": "Robert De Niro" },
-            "fields": ["Actor_name", "Movie_year", "_id", "_rev"]
-        }
-
-Pagination
-==========
-
-Mango queries support pagination via the bookmark field. Every ``_find``
-response contains a bookmark - a token that CouchDB uses to determine
-where to resume from when subsequent queries are made. To get the next
-set of query results, add the bookmark that was received in the previous
-response to your next request. Remember to keep the `selector` the same,
-otherwise you will receive unexpected results. To paginate backwards,
-you can use a previous bookmark to return the previous set of results.
-
-Note that the presence of a bookmark doesn't guarantee that there are
-more results. You can test whether you have reached the end of the
-result set by comparing the number of results returned with the page
-size requested - if fewer than ``limit`` results are returned, there are
-no more.
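The loop described above can be sketched as follows; ``findPage`` is a hypothetical stand-in for a function that POSTs the query to ``/{db}/_find`` and returns the parsed JSON response:

```javascript
// A minimal sketch of bookmark-based pagination. `findPage` is a
// hypothetical stand-in for a function that POSTs a Mango query to
// /{db}/_find and returns the parsed response as an object with
// `docs` and `bookmark` fields.
function paginate(findPage, selector, limit) {
  const docs = [];
  let bookmark;
  for (;;) {
    // Keep the selector identical between requests; only the bookmark moves.
    const page = findPage({ selector: selector, limit: limit, bookmark: bookmark });
    docs.push(...page.docs);
    bookmark = page.bookmark;
    // A page shorter than `limit` means the result set is exhausted.
    if (page.docs.length < limit) {
      break;
    }
  }
  return docs;
}
```

Note that this makes one extra request when the result count is an exact multiple of ``limit``, since only a short page signals the end.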
-
-.. _find/statistics:
-
-Execution Statistics
-====================
-
-Find can return basic execution statistics for a specific request. Combined with
-the :ref:`_explain <api/db/find/explain>` endpoint, this should provide some
-insight as to whether indexes are being used effectively.
-
-The execution statistics currently include:
-
-+--------------------------------+--------------------------------------------+
-| Field                          | Description                                |
-+================================+============================================+
-| ``total_keys_examined``        | Number of index keys examined.             |
-|                                | Currently always 0.                        |
-+--------------------------------+--------------------------------------------+
-| ``total_docs_examined``        | Number of documents fetched from the       |
-|                                | database / index, equivalent to using      |
-|                                | ``include_docs=true`` in a view.           |
-|                                | These may then be filtered in-memory to    |
-|                                | further narrow down the result set based   |
-|                                | on the selector.                           |
-+--------------------------------+--------------------------------------------+
-| ``total_quorum_docs_examined`` | Number of documents fetched from the       |
-|                                | database using an out-of-band document     |
-|                                | fetch. This is only non-zero when read     |
-|                                | quorum > 1 is specified in the query       |
-|                                | parameters.                                |
-+--------------------------------+--------------------------------------------+
-| ``results_returned``           | Number of results returned from the query. |
-|                                | Ideally this should not be significantly   |
-|                                | lower than the total documents / keys      |
-|                                | examined.                                  |
-+--------------------------------+--------------------------------------------+
-| ``execution_time_ms``          | Total execution time in milliseconds as    |
-|                                | measured by the database.                  |
-+--------------------------------+--------------------------------------------+
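These statistics are returned when the query body asks for them via the ``execution_stats`` flag. A minimal sketch (the selector is illustrative):

```javascript
{
    "selector": { "year": { "$gt": 2010 } },
    "execution_stats": true
}
```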
-
-.. _api/db/find/index:
-
-================
-``/db/_index``
-================
-
-.. _api/db/find/index-post:
-
-Mango is a declarative JSON querying language for CouchDB databases.
-Mango wraps several index types, starting with the Primary Index
-out-of-the-box. Mango indexes, with index type `json`, are
-built using MapReduce Views.
-
-.. http:post:: /{db}/_index
-    :synopsis: Create a new index.
-
-    Create a new index on a database
-
-    :param db: Database name
-
-    :<header Content-Type: - :mimetype:`application/json`
-
-    :query json index: JSON object describing the index to create.
-    :query string ddoc: Name of the design document in which the index will be
-        created. By default, each index will be created in its own design
-        document.
-        Indexes can be grouped into design documents for efficiency. However, a
-        change to one index in a design document will invalidate all other
-        indexes in the same document (similar to views). *Optional*
-    :query string name: Name of the index. If no name is provided, a name will
-        be generated automatically. *Optional*
-    :query string type: Can be ``"json"`` or ``"text"``. Defaults to
-        ``"json"``. Text indexes are supported via a third party library.
-        Geospatial indexes will be supported in the future. *Optional*
-    :query boolean partitioned: Determines whether a JSON index is partitioned
-        or global. The default value of ``partitioned`` is the ``partitioned``
-        property of the database. To create a global index on a
-        partitioned database, specify
-        ``false`` for the ``"partitioned"`` field. If you specify ``true``
-        for the  ``"partitioned"`` field on an unpartitioned database, an
-        error occurs.
-
-    :>header Content-Type: :mimetype:`application/json`
-    :>header Transfer-Encoding: ``chunked``
-
-    :>json string result: Flag to show whether the index was created or one
-        already exists. Can be ``"created"`` or ``"exists"``.
-    :>json string id: Id of the design document the index was created in.
-    :>json string name: Name of the index created.
-
-    :code 200: Index created successfully or already exists
-    :code 400: Invalid request
-    :code 401: Admin permission required
-    :code 404: Database not found
-    :code 500: Execution error
-
-    The `Index object` is a JSON object with the following fields:
-
-    :json array fields: array of field names following the :ref:`sort
-       syntax <find/sort>`. Nested fields are also allowed, e.g. `"person.name"`.
-    :json json partial_filter_selector: A :ref:`selector <find/selectors>`
-       to apply to documents at indexing time, creating a
-       :ref:`partial index <find/partial_indexes>`. *Optional*
-
-    Example of creating a new index for a field called ``foo``:
-
-    **Request**:
-
-    .. code-block:: http
-
-        POST /db/_index HTTP/1.1
-        Content-Type: application/json
-        Content-Length: 116
-        Host: localhost:5984
-
-        {
-            "index": {
-                "fields": ["foo"]
-            },
-            "name" : "foo-index",
-            "type" : "json"
-        }
-
-    The returned JSON confirms the index has been created:
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Length: 96
-        Content-Type: application/json
-        Date: Thu, 01 Sep 2016 18:17:48 GMT
-        Server: CouchDB (Erlang OTP/18)
-
-        {
-            "result":"created",
-            "id":"_design/a5f4711fc9448864a13c81dc71e660b524d7410c",
-            "name":"foo-index"
-        }
-
-    Example index creation using all available query parameters:
-
-    **Request**:
-
-    .. code-block:: http
-
-        POST /db/_index HTTP/1.1
-        Content-Type: application/json
-        Content-Length: 396
-        Host: localhost:5984
-
-        {
-            "index": {
-                "partial_filter_selector": {
-                    "year": {
-                        "$gt": 2010
-                    }
-                },
-                "fields": [
-                    "_id",
-                    "_rev",
-                    "year",
-                    "title"
-                ]
-            },
-            "ddoc": "example-ddoc",
-            "name": "example-index",
-            "type": "json",
-            "partitioned": false
-        }
-
-By default, a JSON index will include all documents that have the indexed fields
-present, including those which have null values.
-
-.. _find/partial_indexes:
-
-Partial Indexes
-===============
-
-Partial indexes allow documents to be filtered at indexing time, potentially
-offering significant performance improvements for query selectors that don't
-map cleanly to a range query on an index.
-
-Let's look at an example query:
-
-.. code-block:: javascript
-
-    {
-      "selector": {
-        "status": {
-          "$ne": "archived"
-        },
-        "type": "user"
-      }
-    }
-
-Without a partial index, this requires a full index scan to find all the
-documents of ``"type":"user"`` that do not have a status of ``"archived"``.
-This is because a normal index can only be used to match contiguous rows,
-and the ``"$ne"`` operator cannot guarantee that.
-
-To improve response times, we can create an index which excludes documents
-where  ``"status": { "$ne": "archived" }`` at index time using the
-``"partial_filter_selector"`` field:
-
-.. code-block:: http
-
-        POST /db/_index HTTP/1.1
-        Content-Type: application/json
-        Content-Length: 144
-        Host: localhost:5984
-
-        {
-          "index": {
-            "partial_filter_selector": {
-              "status": {
-                "$ne": "archived"
-              }
-            },
-            "fields": ["type"]
-          },
-          "ddoc" : "type-not-archived",
-          "type" : "json"
-        }
-
-Partial indexes are not currently used by the query planner unless specified
-by a ``"use_index"`` field, so we need to modify the original query:
-
-.. code-block:: javascript
-
-    {
-      "selector": {
-        "status": {
-          "$ne": "archived"
-        },
-        "type": "user"
-      },
-      "use_index": "type-not-archived"
-    }
-
-Technically, we don't need to include the filter on the ``"status"`` field
-in the query selector - the partial index ensures this is always true -
-but including it makes the intent of the selector clearer and will make
-it easier to take advantage of future improvements to query planning
-(e.g. automatic selection of partial indexes).
-
-.. note::
-    An index with fields is only used when the selector includes
-    all of the fields indexed. For instance, if an index contains ``["a", "b"]``
-    but the selector only requires field ``["a"]`` to exist in the matching
-    documents, the index would not be valid for the query. All indexes,
-    however, can be treated as if they include the special fields ``_id`` and
-    ``_rev``. They **never** need to be specified in the query selector.
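For instance, with an index on ``["a", "b"]`` (hypothetical field names), only a selector constraining both fields can use the index:

```javascript
// Eligible for an index on ["a", "b"]: both fields are constrained.
{ "selector": { "a": { "$gt": 0 }, "b": { "$gt": 0 } } }

// Not eligible: "b" is unconstrained, so the ["a", "b"] index is skipped.
// { "selector": { "a": { "$gt": 0 } } }
```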
-
-.. _api/db/find/index-get:
-
-.. http:get:: /{db}/_index
-    :synopsis: List all indexes.
-
-    When you make a ``GET`` request to ``/db/_index``, you get a list of all
-    indexes in the database. In addition to the information available through
-    this API, indexes are also stored in design documents.
-    Design documents are regular documents that have an ID starting with
-    ``_design/``. Design documents can be retrieved and modified in the same
-    way as any other document, although this is not necessary when using Mango.
-
-    :param db: Database name.
-
-    :>header Content-Type: :mimetype:`application/json`
-    :>header Transfer-Encoding: ``chunked``
-
-    :>json number total_rows: Number of indexes.
-    :>json array indexes: Array of index definitions.
-
-    :code 200: Success
-    :code 400: Invalid request
-    :code 401: Read permission required
-    :code 500: Execution error
-
-    Format of index objects:
-        -  **ddoc**: ID of the design document the index belongs to. This ID
-           can be used to retrieve the design document containing the index,
-           by making a ``GET`` request to ``/db/ddoc``, where ``ddoc`` is the
-           value of this field.
-        -  **name**: Name of the index.
-        -  **type**: Type of the index. Currently ``"json"`` and ``"text"``
-           are the supported types; the built-in ``_all_docs`` index is
-           reported as type ``"special"``.
-        -  **def**: Definition of the index, containing the indexed fields
-           and the sort order: ascending or descending.
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /db/_index HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Length: 238
-        Content-Type: application/json
-        Date: Thu, 01 Sep 2016 18:17:48 GMT
-        Server: CouchDB (Erlang OTP/18)
-
-        {
-            "total_rows": 2,
-            "indexes": [
-            {
-                "ddoc": null,
-                "name": "_all_docs",
-                "type": "special",
-                "def": {
-                    "fields": [
-                        {
-                            "_id": "asc"
-                        }
-                    ]
-                }
-            },
-            {
-                "ddoc": "_design/a5f4711fc9448864a13c81dc71e660b524d7410c",
-                "name": "foo-index",
-                "type": "json",
-                "def": {
-                    "fields": [
-                        {
-                            "foo": "asc"
-                        }
-                    ]
-                }
-            }
-          ]
-        }
-
-.. _api/db/find/index-delete:
-
-.. http:delete:: /{db}/_index/{designdoc}/json/{name}
-    :synopsis: Delete an index
-
-    :param db: Database name.
-    :param designdoc: Design document name.
-    :param name: Index name.
-
-    :>header Content-Type: :mimetype:`application/json`
-
-    :>json boolean ok: ``true`` if successful.
-
-    :code 200: Success
-    :code 400: Invalid request
-    :code 401: Writer permission required
-    :code 404: Index not found
-    :code 500: Execution error
-
-    **Request**:
-
-    .. code-block:: http
-
-        DELETE /db/_index/_design/a5f4711fc9448864a13c81dc71e660b524d7410c/json/foo-index HTTP/1.1
-        Accept: */*
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Length: 12
-        Content-Type: application/json
-        Date: Thu, 01 Sep 2016 19:21:40 GMT
-        Server: CouchDB (Erlang OTP/18)
-
-        {
-            "ok": true
-        }
-
-.. _api/db/find/explain:
-
-================
-``/db/_explain``
-================
-
-.. http:post:: /{db}/_explain
-    :synopsis: Identify which index is being used by a particular query.
-
-    Shows which index is being used by the query. Parameters are the same as
-    :ref:`_find <api/db/_find>`.
-
-    :param db: Database name
-
-    :<header Content-Type: :mimetype:`application/json`
-
-    :>header Content-Type: :mimetype:`application/json`
-    :>header Transfer-Encoding: ``chunked``
-
-    :>json string dbname: Name of database
-    :>json object index: Index used to fulfill the query
-    :>json object selector: Query selector used
-    :>json object opts: Query options used
-    :>json number limit: Limit parameter used
-    :>json number skip: Skip parameter used
-    :>json array fields: Fields to be returned by the query
-    :>json object range: Range parameters passed to the underlying view
-
-    :code 200: Request completed successfully
-    :code 400: Invalid request
-    :code 401: Read permission required
-    :code 500: Execution error
-
-    **Request**:
-
-    .. code-block:: http
-
-        POST /movies/_explain HTTP/1.1
-        Accept: application/json
-        Content-Type: application/json
-        Content-Length: 168
-        Host: localhost:5984
-
-        {
-            "selector": {
-                "year": {"$gt": 2010}
-            },
-            "fields": ["_id", "_rev", "year", "title"],
-            "sort": [{"year": "asc"}],
-            "limit": 2,
-            "skip": 0
-        }
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Type: application/json
-        Date: Thu, 01 Sep 2016 15:41:53 GMT
-        Server: CouchDB (Erlang OTP)
-        Transfer-Encoding: chunked
-
-        {
-            "dbname": "movies",
-            "index": {
-                "ddoc": "_design/0d61d9177426b1e2aa8d0fe732ec6e506f5d443c",
-                "name": "0d61d9177426b1e2aa8d0fe732ec6e506f5d443c",
-                "type": "json",
-                "def": {
-                    "fields": [
-                        {
-                            "year": "asc"
-                        }
-                    ]
-                }
-            },
-            "selector": {
-                "year": {
-                    "$gt": 2010
-                }
-            },
-            "opts": {
-                "use_index": [],
-                "bookmark": "nil",
-                "limit": 2,
-                "skip": 0,
-                "sort": {},
-                "fields": [
-                    "_id",
-                    "_rev",
-                    "year",
-                    "title"
-                ],
-                "r": [
-                    49
-                ],
-                "conflicts": false
-            },
-            "limit": 2,
-            "skip": 0,
-            "fields": [
-                "_id",
-                "_rev",
-                "year",
-                "title"
-            ],
-            "range": {
-                "start_key": [
-                    2010
-                ],
-                "end_key": [
-                    {}
-                ]
-            }
-        }
-
-Index selection
-===============
-
-``_find`` chooses which index to use for responding to a query, unless you
-specify an index at query time.
-
-The query planner looks at the selector section and finds the index with the
-closest match to the operators and fields used in the query. If two or more
-``json`` type indexes match, the index with the smallest number of fields is
-preferred. If there are still two or more candidate indexes, the index whose
-name sorts first alphabetically is chosen.
-
-.. note::
-    It's good practice to specify indexes explicitly in your queries. This
-    prevents existing queries being affected by new indexes that might get added
-    in a production environment.
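Per the note above, an index can be pinned with ``use_index``, given either as a design document name or as a ``[ddoc, name]`` pair (names here reuse the earlier index-creation example):

```javascript
{
    "selector": { "year": { "$gt": 2010 } },
    "use_index": ["_design/example-ddoc", "example-index"]
}
```
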
diff --git a/src/api/database/index.rst b/src/api/database/index.rst
deleted file mode 100644
index 1fbce3e..0000000
--- a/src/api/database/index.rst
+++ /dev/null
@@ -1,47 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _api/database:
-
-=========
-Databases
-=========
-
-The Database endpoint provides an interface to an entire database within
-CouchDB. These are database-level, rather than document-level requests.
-
-For all these requests, the database name within the URL path
-should be the database name that you wish to perform the operation on.
-For example, to obtain the meta information for the database
-``recipes``, you would use the HTTP request:
-
-.. code-block:: none
-
-    GET /recipes
-
-For clarity, the form below is used in the URL paths:
-
-.. code-block:: none
-
-    GET /db
-
-Where ``db`` is the name of any database.
-
-.. toctree::
-    common
-    bulk-api
-    find
-    shard
-    changes
-    compact
-    security
-    misc
diff --git a/src/api/database/misc.rst b/src/api/database/misc.rst
deleted file mode 100644
index 598f6e6..0000000
--- a/src/api/database/misc.rst
+++ /dev/null
@@ -1,504 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _api/db/purge:
-
-==============
-``/db/_purge``
-==============
-
-.. http:post:: /{db}/_purge
-    :synopsis: Purges documents entirely from database
-
-    A database purge permanently removes the references to documents
-    in the database. Normal deletion of a document within CouchDB does not
-    remove the document from the database, instead, the document is marked as
-    ``_deleted=true`` (and a new revision is created). This is to ensure that
-    deleted documents can be replicated to other databases as having been
-    deleted. This also means that you can check the status of a document and
-    identify that the document has been deleted by its absence.
-
-    The purge request must include the document IDs, and for each
-    document ID, one or more revisions that must be purged. The revisions
-    must be leaf revisions; the documents may, but need not, have been
-    deleted previously.
-
-    The response will contain a list of the document IDs and revisions
-    successfully purged.
-
-    :param db: Database name
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :<header Content-Type: :mimetype:`application/json`
-    :<json object: Mapping of document ID to list of revisions to purge
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :>json string purge_seq: Purge sequence string
-    :>json object purged: Mapping of document ID to list of purged revisions
-    :code 201: Request completed successfully
-    :code 202: Request was accepted, and was completed successfully on at least
-               one replica, but quorum was not reached.
-    :code 400: Invalid database name or JSON payload
-    :code 415: Bad :header:`Content-Type` value
-    :code 500: Internal server error or timeout
-
-    **Request**:
-
-    .. code-block:: http
-
-        POST /db/_purge HTTP/1.1
-        Accept: application/json
-        Content-Length: 76
-        Content-Type: application/json
-        Host: localhost:5984
-
-        {
-            "c6114c65e295552ab1019e2b046b10e": [
-                "3-b06fcd1c1c9e0ec7c480ee8aa467bf3b",
-                "3-c50a32451890a3f1c3e423334cc92745"
-            ]
-        }
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 201 Created
-        Cache-Control: must-revalidate
-        Content-Length: 107
-        Content-Type: application/json
-        Date: Fri, 02 Jun 2017 18:55:54 GMT
-        Server: CouchDB/2.0.0-2ccd4bf (Erlang OTP/18)
-
-        {
-          "purge_seq": null,
-          "purged": {
-            "c6114c65e295552ab1019e2b046b10e": [
-                "3-c50a32451890a3f1c3e423334cc92745"
-              ]
-          }
-        }
-
-.. figure:: ../../../images/rev-tree1.png
-     :align: center
-     :alt: Document Revision Tree 1
-
-     Document Revision Tree 1
-
-For example, given the above revision tree and issuing the above purge request,
-the whole document will be purged, as it contains only a single branch with a
-leaf revision `3-c50a32451890a3f1c3e423334cc92745` that will be purged.
-As a result of this purge operation, a document with
-`_id:c6114c65e295552ab1019e2b046b10e` will be completely removed from the
-database's document B-tree and sequence B-tree. It will not be available
-through the ``_all_docs`` or ``_changes`` endpoints, as though this document
-never existed. Also, as a result of the purge operation, the database's
-``purge_seq`` and ``update_seq`` will be increased.
-
-Notice how revision `3-b06fcd1c1c9e0ec7c480ee8aa467bf3b` was ignored: revisions
-that have already been purged and non-leaf revisions are ignored in a purge
-request.
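The leaf-only rule above can be modeled with a short sketch. This is purely illustrative (not CouchDB's internal code, and the helper name is made up): given a parent map describing a revision tree, only the requested revisions that are leaves would be acted on by a purge.

```python
# Illustrative model only: which requested revisions are purgeable leaves?

def purgeable_revs(parents, requested):
    """Return the subset of `requested` that are leaf revisions.

    `parents` maps each revision to its parent revision (None for the root).
    A leaf is a revision that no other revision lists as its parent.
    """
    non_leaves = set(p for p in parents.values() if p is not None)
    leaves = set(parents) - non_leaves
    return sorted(r for r in requested if r in leaves)

# A tree rooted at 1-x with two branches; the 3-* revisions are the leaves.
tree = {
    "1-x": None,
    "2-y": "1-x",
    "3-b06fcd1c1c9e0ec7c480ee8aa467bf3b": "2-y",
    "3-c50a32451890a3f1c3e423334cc92745": "2-y",
}
# The non-leaf 2-y is ignored; only the leaf 3-c5... qualifies.
print(purgeable_revs(tree, ["2-y", "3-c50a32451890a3f1c3e423334cc92745"]))
```

A real purge request would then contain only the surviving revision IDs, exactly as in the HTTP example above.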
-
-If a document has two conflicting revisions with the following revision history:
-
-.. figure:: ../../../images/rev-tree2.png
-     :align: center
-     :alt: Document Revision Tree 2
-
-     Document Revision Tree 2
-
-the above purge request will purge only one branch, leaving the document's
-revision tree with only a single branch:
-
-.. figure:: ../../../images/rev-tree3.png
-    :align: center
-    :alt: Document Revision Tree 3
-
-    Document Revision Tree 3
-
-As a result of this purge operation, a new, updated version of the document
-will be available in ``_all_docs``, and a new record will be created in
-``_changes``. The database's ``purge_seq`` and ``update_seq`` will be
-increased.
-
-Internal Replication
-======================
-Purges are automatically replicated between replicas of the same database. Each
-database has an internal purge tree that stores a certain number of the most
-recent purges. This allows internal synchronization between replicas of the same
-database.
-
-External Replication
-======================
-Purge operations are not replicated to other external databases. External
-replication works by identifying a source's document revisions that are missing
-on the target, and copying these revisions from source to target. A purge
-operation completely removes revisions from a document's revision tree, making
-external replication of purges impossible.
-
-.. note::
-    If you need a purge to be effective across multiple databases, you
-    must run the purge separately on each of the databases.
-
-Updating Indexes
-================
-
-The number of purges on a database is tracked using a purge sequence. This is
-used by the view indexer to optimize the updating of views that contain the
-purged documents.
-
-Each internal database indexer, including the view indexer, keeps its own purge
-sequence. The purge sequence stored in an index can lag behind the database's
-purge sequence by up to the number of purge requests allowed to be stored in
-the database's purge trees. Multiple purge requests can be processed by the
-indexer without incurring a rebuild of the index. The index will be updated
-according to these purge requests.
-
-The index of documents is based on the winner of the revision tree. Depending on
-which revision is specified in the purge request, the index update observes the
-following behavior:
-
-- If the winner of the revision tree is not specified in the purge request,
-  there is no change to the index record of this document.
-- If the winner of the revision tree is specified in the purge request, and
-  there is still a revision left after purging, the index record of the document
-  will be built according to the new winner of the revision tree.
-- If all revisions of the document are specified in the purge request, the index
-  record of the document will be deleted. The document will no longer be found
-  in searches.
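The three cases above reduce to a small decision function. This is an assumed model for illustration, not CouchDB source code:

```python
# Sketch of the index-update decision after a purge: the outcome depends on
# whether the revision-tree winner was purged and whether any revisions remain.

def index_action(winner_purged, remaining_revs):
    if not winner_purged:
        return "unchanged"                # winner untouched: record stays
    if remaining_revs:
        return "rebuilt from new winner"  # a surviving branch becomes winner
    return "deleted"                      # nothing left: record removed

print(index_action(False, ["3-abc"]))
print(index_action(True, ["3-abc"]))
print(index_action(True, []))
```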
-
-.. _api/db/_purged_infos_limit:
-
-==============================
-``/db/_purged_infos_limit``
-==============================
-
-.. http:get:: /{db}/_purged_infos_limit
-    :synopsis: Returns the limit of historical purges to store in the database
-
-    Gets the current ``purged_infos_limit`` (purged documents limit) setting,
-    the maximum number of historical purges (purged document IDs with their
-    revisions) that can be stored in the database.
-
-    :param db: Database name
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :code 200: Request completed successfully
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /db/_purged_infos_limit HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Length: 5
-        Content-Type: application/json
-        Date: Wed, 14 Jun 2017 14:43:42 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        1000
-
-.. http:put:: /{db}/_purged_infos_limit
-    :synopsis: Sets the limit of historical purges to store in the database
-
-    Sets the maximum number of purges (requested purged IDs with their
-    revisions) that will be tracked in the database, even after compaction
-    has occurred. Set the limit by sending the desired scalar integer value
-    as the request body.
-
-    The default number of stored historical purges is 1000. This means up to
-    1000 purges can be synchronized between replicas of the same database in
-    case one of the replicas was down when the purges occurred.
-
-    This request sets the soft limit for stored purges. During compaction,
-    CouchDB will try to keep only ``purged_infos_limit`` purges in the
-    database, but occasionally the number of stored purges can exceed this
-    value. If a database has not completed purge synchronization with active
-    indexes or active internal replications, it may temporarily store a higher
-    number of historical purges.
-
-    :param db: Database name
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :<header Content-Type: :mimetype:`application/json`
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :>json boolean ok: Operation status
-    :code 200: Request completed successfully
-    :code 400: Invalid JSON data
-
-    **Request**:
-
-    .. code-block:: http
-
-        PUT /db/_purged_infos_limit HTTP/1.1
-        Accept: application/json
-        Content-Length: 4
-        Content-Type: application/json
-        Host: localhost:5984
-
-        1500
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Length: 12
-        Content-Type: application/json
-        Date: Wed, 14 Jun 2017 14:45:34 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        {
-            "ok": true
-        }
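The soft-limit behaviour can be pictured as trimming the purge history toward the configured limit, oldest entries first. The helper below is a hypothetical illustration of that semantics, not CouchDB's compaction code:

```python
# Hedged sketch of the "soft limit": keep at most purged_infos_limit of the
# most recent purge records; older records are dropped during compaction.

def trim_purge_history(history, purged_infos_limit):
    excess = len(history) - purged_infos_limit
    return history[excess:] if excess > 0 else list(history)

# 1200 historical purges trimmed to the default limit of 1000.
history = [("doc%d" % i, ["1-aaa"]) for i in range(1200)]
print(len(trim_purge_history(history, 1000)))
```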
-
-.. _api/db/missing_revs:
-
-=====================
-``/db/_missing_revs``
-=====================
-
-.. http:post:: /{db}/_missing_revs
-    :synopsis: Given a list of document revisions, returns the revisions
-               that do not exist in the database
-
-    Given a list of document revisions, returns the document revisions
-    that do not exist in the database.
-
-    :param db: Database name
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :<header Content-Type: :mimetype:`application/json`
-    :<json object: Mapping of document ID to list of revisions to lookup
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :>json object missing_revs: Mapping of document ID to list of missing
-      revisions
-    :code 200: Request completed successfully
-    :code 400: Invalid database name or JSON payload
-
-    **Request**:
-
-    .. code-block:: http
-
-        POST /db/_missing_revs HTTP/1.1
-        Accept: application/json
-        Content-Length: 76
-        Content-Type: application/json
-        Host: localhost:5984
-
-        {
-            "c6114c65e295552ab1019e2b046b10e": [
-                "3-b06fcd1c1c9e0ec7c480ee8aa467bf3b",
-                "3-0e871ef78849b0c206091f1a7af6ec41"
-            ]
-        }
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Length: 64
-        Content-Type: application/json
-        Date: Mon, 12 Aug 2013 10:53:24 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        {
-            "missing_revs":{
-                "c6114c65e295552ab1019e2b046b10e": [
-                    "3-b06fcd1c1c9e0ec7c480ee8aa467bf3b"
-                ]
-            }
-        }
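The computation the endpoint performs server-side can be modeled locally. A minimal sketch, assuming a simple mapping of document IDs to stored revision IDs (the function and data names are made up for illustration):

```python
# Local model of _missing_revs: for each document, report the queried
# revisions the database does not have; documents with no missing revisions
# are omitted, matching the response shape above.

def missing_revs(stored, query):
    result = {}
    for doc_id, revs in query.items():
        have = set(stored.get(doc_id, []))
        missing = [r for r in revs if r not in have]
        if missing:
            result[doc_id] = missing
    return {"missing_revs": result}

stored = {"c6114c65e295552ab1019e2b046b10e": ["3-0e871ef78849b0c206091f1a7af6ec41"]}
query = {"c6114c65e295552ab1019e2b046b10e": [
    "3-b06fcd1c1c9e0ec7c480ee8aa467bf3b",
    "3-0e871ef78849b0c206091f1a7af6ec41",
]}
print(missing_revs(stored, query))
```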
-
-.. _api/db/revs_diff:
-
-==================
-``/db/_revs_diff``
-==================
-
-.. http:post:: /{db}/_revs_diff
-    :synopsis: Given a list of document revisions, returns the differences
-               between the given revisions and those in the database
-
-    Given a set of document/revision IDs, returns the subset of those that do
-    not correspond to revisions stored in the database.
-
-    Its primary use is by the replicator, as an important optimization: after
-    receiving a set of new revision IDs from the source database, the
-    replicator sends this set to the destination database's ``_revs_diff`` to
-    find out which of them already exist there. It can then avoid fetching and
-    sending already-known document bodies.
-
-    Both the request and response bodies are JSON objects whose keys are
-    document IDs; but the values are structured differently:
-
-    - In the request, a value is an array of revision IDs for that document.
-
-    - In the response, a value is an object with a ``missing`` key, whose
-      value is a list of revision IDs for that document (the ones that are
-      not stored in the database), and optionally a ``possible_ancestors``
-      key, whose value is an array of known revision IDs that may be
-      ancestors of the missing revisions.
-
-    :param db: Database name
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :<header Content-Type: :mimetype:`application/json`
-    :<json object: Mapping of document ID to list of revisions to lookup
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :>json array missing: List of missing revisions for the specified document
-    :>json array possible_ancestors: List of revisions that *may be* ancestors
-      for specified document and its current revision in requested database
-    :code 200: Request completed successfully
-    :code 400: Invalid database name or JSON payload
-
-    **Request**:
-
-    .. code-block:: http
-
-        POST /db/_revs_diff HTTP/1.1
-        Accept: application/json
-        Content-Length: 113
-        Content-Type: application/json
-        Host: localhost:5984
-
-        {
-            "190f721ca3411be7aa9477db5f948bbb": [
-                "3-bb72a7682290f94a985f7afac8b27137",
-                "4-10265e5a26d807a3cfa459cf1a82ef2e",
-                "5-067a00dff5e02add41819138abb3284d"
-            ]
-        }
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Length: 88
-        Content-Type: application/json
-        Date: Mon, 12 Aug 2013 16:56:02 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        {
-            "190f721ca3411be7aa9477db5f948bbb": {
-                "missing": [
-                    "3-bb72a7682290f94a985f7afac8b27137",
-                    "5-067a00dff5e02add41819138abb3284d"
-                ],
-                "possible_ancestors": [
-                    "4-10265e5a26d807a3cfa459cf1a82ef2e"
-                ]
-            }
-        }
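On the client side, a replicator consumes a ``_revs_diff`` response by pushing only the ``missing`` revisions. A minimal sketch of that step, assuming a response shaped like the example above (the helper name is hypothetical):

```python
# Given a _revs_diff response body, collect the revisions that still need
# to be fetched from the source and sent to the target.

def revs_to_push(revs_diff_response):
    to_push = {}
    for doc_id, diff in revs_diff_response.items():
        if diff.get("missing"):
            to_push[doc_id] = diff["missing"]
    return to_push

response = {
    "190f721ca3411be7aa9477db5f948bbb": {
        "missing": [
            "3-bb72a7682290f94a985f7afac8b27137",
            "5-067a00dff5e02add41819138abb3284d",
        ],
        "possible_ancestors": ["4-10265e5a26d807a3cfa459cf1a82ef2e"],
    }
}
print(revs_to_push(response))
```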
-
-.. _api/db/revs_limit:
-
-===================
-``/db/_revs_limit``
-===================
-
-.. http:get:: /{db}/_revs_limit
-    :synopsis: Returns the limit of historical revisions to store for
-               a single document in the database
-
-    Gets the current ``revs_limit`` (revision limit) setting.
-
-    :param db: Database name
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :code 200: Request completed successfully
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /db/_revs_limit HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Length: 5
-        Content-Type: application/json
-        Date: Mon, 12 Aug 2013 17:27:30 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        1000
-
-.. http:put:: /{db}/_revs_limit
-    :synopsis: Sets the limit of historical revisions to store for
-               a single document in the database
-
-    Sets the maximum number of document revisions that will be tracked by
-    CouchDB, even after compaction has occurred. Set the limit by sending
-    the desired scalar integer value as the request body.
-
-    :param db: Database name
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :<header Content-Type: :mimetype:`application/json`
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :>json boolean ok: Operation status
-    :code 200: Request completed successfully
-    :code 400: Invalid JSON data
-
-    **Request**:
-
-    .. code-block:: http
-
-        PUT /db/_revs_limit HTTP/1.1
-        Accept: application/json
-        Content-Length: 5
-        Content-Type: application/json
-        Host: localhost:5984
-
-        1000
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Length: 12
-        Content-Type: application/json
-        Date: Mon, 12 Aug 2013 17:47:52 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        {
-            "ok": true
-        }
diff --git a/src/api/database/security.rst b/src/api/database/security.rst
deleted file mode 100644
index 52ed7aa..0000000
--- a/src/api/database/security.rst
+++ /dev/null
@@ -1,189 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _api/db/security:
-
-=================
-``/db/_security``
-=================
-
-.. http:get:: /{db}/_security
-    :synopsis: Returns the special security object for the database
-
-    Returns the current security object from the specified database.
-
-    The security object consists of two compulsory elements, ``admins``
-    and ``members``, which are used to specify the list of users and/or roles
-    that have admin and member rights to the database respectively:
-
-    - ``members``: they can read all types of documents from the DB, and they
-      can write (and edit) documents to the DB, except for design documents.
-
-    - ``admins``: they have all the privileges of ``members`` plus the
-      privileges: write (and edit) design documents, add/remove database
-      admins and members, and set the :ref:`database revisions limit
-      <api/db/revs_limit>`. They cannot create or delete a database.
-
-    Both ``members`` and ``admins`` objects contain two array-typed fields:
-
-    - ``names``: List of CouchDB user names
-    - ``roles``: List of user roles
-
-    Any additional fields in the security object are optional.
-    The entire security object is made available to validation and other
-    internal functions so that the database can control and limit
-    functionality.
-
-    If both the names and roles fields of either the admins or members
-    properties are empty arrays, or do not exist, the database has no admins
-    or members.
-
-    With no admins, only server admins (with the reserved ``_admin`` role)
-    are able to update design documents and make other admin-level changes.
-
-    With no members or roles, any user can write regular documents (any
-    non-design document) and read documents from the database.
-
-    Since CouchDB 3.x, newly created databases have the ``_admin`` role by
-    default to prevent unintentional access.
-
-    If there are any member names or roles defined for a database, then only
-    authenticated users having a matching name or role are allowed to read
-    documents from the database (or do a :get:`/{db}` call).
-
-    .. note::
-        If the security object for a database has never been set, then the
-        value returned will be empty.
-
-        Also note that security objects are not regular versioned documents
-        (that is, they are not under MVCC rules). This is a design choice to
-        speed up authorization checks (avoids traversing a database's
-        documents B-tree).
-
-    :param db: Database name
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :>json object admins: Object with two fields, ``names`` and ``roles``.
-      See description above for more info.
-    :>json object members: Object with two fields, ``names`` and ``roles``.
-      See description above for more info.
-    :code 200: Request completed successfully
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /db/_security HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Length: 109
-        Content-Type: application/json
-        Date: Mon, 12 Aug 2013 19:05:29 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        {
-            "admins": {
-                "names": [
-                    "superuser"
-                ],
-                "roles": [
-                    "admins"
-                ]
-            },
-            "members": {
-                "names": [
-                    "user1",
-                    "user2"
-                ],
-                "roles": [
-                    "developers"
-                ]
-            }
-        }
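The membership rules described above amount to a small check. The sketch below is an assumed model for illustration only (server-side behaviour has more nuance, e.g. admins are implicitly members):

```python
# Hedged sketch: may this user read from the database? Open databases (no
# member names or roles) admit anyone; otherwise the user's name or one of
# their roles must match, or they must hold the reserved _admin role.

def is_member(security, user_name, user_roles):
    members = security.get("members", {})
    names = members.get("names", [])
    roles = members.get("roles", [])
    if not names and not roles:
        return True  # open database
    if "_admin" in user_roles:
        return True  # server admins always pass
    return user_name in names or any(r in roles for r in user_roles)

security = {"members": {"names": ["user1", "user2"], "roles": ["developers"]}}
print(is_member(security, "user1", []))
print(is_member(security, "mallory", ["qa"]))
```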
-
-.. http:put:: /{db}/_security
-    :synopsis: Sets the special security object for the database
-
-    Sets the security object for the given database.
-
-    :param db: Database name
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :<header Content-Type: :mimetype:`application/json`
-    :<json object admins: Object with two fields, ``names`` and ``roles``.
-      :ref:`See description above for more info <api/db/security>`.
-    :<json object members: Object with two fields, ``names`` and ``roles``.
-      :ref:`See description above for more info <api/db/security>`.
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :>json boolean ok: Operation status
-    :code 200: Request completed successfully
-    :code 401: CouchDB Server Administrator privileges required
-
-    **Request**:
-
-    .. code-block:: bash
-
-        shell> curl http://localhost:5984/pineapple/_security -X PUT -H 'content-type: application/json' -H 'accept: application/json' -d '{"admins":{"names":["superuser"],"roles":["admins"]},"members":{"names": ["user1","user2"],"roles": ["developers"]}}'
-
-    .. code-block:: http
-
-        PUT /db/_security HTTP/1.1
-        Accept: application/json
-        Content-Length: 121
-        Content-Type: application/json
-        Host: localhost:5984
-
-        {
-            "admins": {
-                "names": [
-                    "superuser"
-                ],
-                "roles": [
-                    "admins"
-                ]
-            },
-            "members": {
-                "names": [
-                    "user1",
-                    "user2"
-                ],
-                "roles": [
-                    "developers"
-                ]
-            }
-        }
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Length: 12
-        Content-Type: application/json
-        Date: Tue, 13 Aug 2013 11:26:28 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        {
-            "ok": true
-        }
diff --git a/src/api/database/shard.rst b/src/api/database/shard.rst
deleted file mode 100644
index 4abc4f0..0000000
--- a/src/api/database/shard.rst
+++ /dev/null
@@ -1,223 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _api/db/shards:
-
-===============
-``/db/_shards``
-===============
-
-.. versionadded:: 2.0
-
-.. http:get:: /{db}/_shards
-    :synopsis: Displays the shard map layout of a database
-
-    The response will contain a list of database shards. Each shard will
-    have its internal database range, and the nodes on which replicas of
-    those shards are stored.
-
-    :param db: Database name
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :>json object shards: Mapping of shard ranges to individual shard replicas
-                          on each node in the cluster
-    :code 200: Request completed successfully
-    :code 400: Invalid database name
-    :code 401: Read privilege required
-    :code 415: Bad :header:`Content-Type` value
-    :code 500: Internal server error or timeout
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /db/_shards HTTP/1.1
-        Accept: */*
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Length: 621
-        Content-Type: application/json
-        Date: Fri, 18 Jan 2019 19:55:14 GMT
-        Server: CouchDB/2.4.0 (Erlang OTP/19)
-
-        {
-          "shards": {
-            "00000000-1fffffff": [
-              "couchdb@node1.example.com",
-              "couchdb@node2.example.com",
-              "couchdb@node3.example.com"
-            ],
-            "20000000-3fffffff": [
-              "couchdb@node1.example.com",
-              "couchdb@node2.example.com",
-              "couchdb@node3.example.com"
-            ],
-            "40000000-5fffffff": [
-              "couchdb@node1.example.com",
-              "couchdb@node2.example.com",
-              "couchdb@node3.example.com"
-            ],
-            "60000000-7fffffff": [
-              "couchdb@node1.example.com",
-              "couchdb@node2.example.com",
-              "couchdb@node3.example.com"
-            ],
-            "80000000-9fffffff": [
-              "couchdb@node1.example.com",
-              "couchdb@node2.example.com",
-              "couchdb@node3.example.com"
-            ],
-            "a0000000-bfffffff": [
-              "couchdb@node1.example.com",
-              "couchdb@node2.example.com",
-              "couchdb@node3.example.com"
-            ],
-            "c0000000-dfffffff": [
-              "couchdb@node1.example.com",
-              "couchdb@node2.example.com",
-              "couchdb@node3.example.com"
-            ],
-            "e0000000-ffffffff": [
-              "couchdb@node1.example.com",
-              "couchdb@node2.example.com",
-              "couchdb@node3.example.com"
-            ]
-          }
-        }
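Each shard range is a contiguous slice of the 32-bit hash space, so locating a document's range is simple interval arithmetic. In the sketch below the hash function (``zlib.crc32``) is a stand-in — CouchDB's actual document-to-shard hashing is internal — so treat this purely as an illustration of how the ranges partition the space:

```python
# Generic range lookup over a shard map like the one above. The hash used
# here is NOT CouchDB's; it only demonstrates that every 32-bit hash value
# falls into exactly one of the hex ranges.
import zlib

def shard_range_for(shards, doc_id):
    h = zlib.crc32(doc_id.encode("utf-8")) & 0xFFFFFFFF
    for rng in shards:
        lo, hi = (int(x, 16) for x in rng.split("-"))
        if lo <= h <= hi:
            return rng
    raise ValueError("hash outside all ranges")

shards = ["00000000-1fffffff", "20000000-3fffffff", "40000000-5fffffff",
          "60000000-7fffffff", "80000000-9fffffff", "a0000000-bfffffff",
          "c0000000-dfffffff", "e0000000-ffffffff"]
print(shard_range_for(shards, "c6114c65e295552ab1019e2b046b10e") in shards)
```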
-
-.. _api/db/shards/doc:
-
-===================
-``/db/_shards/doc``
-===================
-
-.. http:get:: /{db}/_shards/{docid}
-    :synopsis: Returns the specific shard in which a document is stored
-
-    Returns information about the specific shard into which a given document
-    has been stored, along with information about the nodes on which that
-    shard has a replica.
-
-    :param db: Database name
-    :param docid: Document ID
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :>json string range: The shard range in which the document is stored
-    :>json array nodes: List of nodes serving a replica of the shard
-    :code 200: Request completed successfully
-    :code 401: Read privilege required
-    :code 404: Database or document not found
-    :code 500: Internal server error or timeout
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /db/_shards/docid HTTP/1.1
-        Accept: */*
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Length: 94
-        Content-Type: application/json
-        Date: Fri, 18 Jan 2019 20:26:33 GMT
-        Server: CouchDB/2.3.0-9d4cb03c2 (Erlang OTP/19)
-
-        {
-          "range": "e0000000-ffffffff",
-          "nodes": [
-            "node1@127.0.0.1",
-            "node2@127.0.0.1",
-            "node3@127.0.0.1"
-          ]
-        }
-
-.. _api/db/sync_shards:
-
-=====================
-``/db/_sync_shards``
-=====================
-
-.. versionadded:: 2.3.1
-
-.. http:post:: /{db}/_sync_shards
-    :synopsis: Trigger a synchronization of all shard replicas
-               in the database
-
-    For the given database, force-starts internal shard synchronization
-    for all replicas of all database shards.
-
-    This is typically only used when performing cluster maintenance,
-    such as :ref:`moving a shard <cluster/sharding/move>`.
-
-    :param db: Database name
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :>json boolean ok: Operation status. Available in case of success
-    :>json string error: Error type. Available if response code is ``4xx``
-    :>json string reason: Error description. Available if response code is
-      ``4xx``
-    :code 202: Request accepted
-    :code 400: Invalid database name
-    :code 401: CouchDB Server Administrator privileges required
-    :code 404: Database not found
-    :code 500: Internal server error or timeout
-
-    **Request**:
-
-    .. code-block:: http
-
-        POST /db/_sync_shards HTTP/1.1
-        Host: localhost:5984
-        Accept: */*
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 202 Accepted
-        Cache-Control: must-revalidate
-        Content-Length: 12
-        Content-Type: application/json
-        Date: Fri, 18 Jan 2019 20:19:23 GMT
-        Server: CouchDB/2.3.0-9d4cb03c2 (Erlang OTP/19)
-        X-Couch-Request-ID: 14f0b8d252
-        X-CouchDB-Body-Time: 0
-
-        {
-            "ok": true
-        }
-
-.. note::
-
-    Admins may want to bump their ``[mem3] sync_concurrency`` value to a
-    larger figure for the duration of the shards sync.
diff --git a/src/api/ddoc/common.rst b/src/api/ddoc/common.rst
deleted file mode 100644
index 95aef4f..0000000
--- a/src/api/ddoc/common.rst
+++ /dev/null
@@ -1,221 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _api/ddoc:
-
-==========================
-``/db/_design/design-doc``
-==========================
-
-.. http:head:: /{db}/_design/{ddoc}
-    :synopsis: Returns bare information in the HTTP Headers for
-               the design document
-
-    Returns the HTTP Headers containing a minimal amount of information about
-    the specified design document.
-
-    .. seealso::
-        :head:`/{db}/{docid}`
-
-.. http:get:: /{db}/_design/{ddoc}
-    :synopsis: Returns the design document
-
-    Returns the contents of the design document specified by the design
-    document name and the database in the URL. Unless you request a specific
-    revision, the latest revision of the document will always be returned.
-
-    .. seealso::
-        :get:`/{db}/{docid}`
-
-.. http:put:: /{db}/_design/{ddoc}
-    :synopsis: Creates a new design document or new version of an existing one
-
-    The :method:`PUT` method creates a new named design document, or creates
-    a new revision of the existing design document.
-
-    Design documents have an agreed-upon structure for their fields.
-    Currently it is the following:
-
-    * **language** (*string*): Defines :ref:`Query Server <query-server>`
-      to process design document functions
-    * **options** (*object*): View's default options
-    * **filters** (*object*): :ref:`Filter functions <filterfun>` definition
-    * **lists** (*object*): :ref:`List functions <listfun>` definition. *Deprecated.*
-    * **rewrites** (*array* or *string*): Rewrite rules definition. *Deprecated.*
-    * **shows** (*object*): :ref:`Show functions <showfun>` definition. *Deprecated.*
-    * **updates** (*object*): :ref:`Update functions <updatefun>` definition
-    * **validate_doc_update** (*string*): :ref:`Validate document update
-      <vdufun>` function source
-    * **views** (*object*): :ref:`View functions <viewfun>` definition.
-    * **autoupdate** (*boolean*): Indicates whether to automatically build
-      indexes defined in this design document. Default is ``true``.
-
-    Note that for the ``filters``, ``lists``, ``shows`` and ``updates``
-    fields, the objects map a function name to a string of function source
-    code. For ``views``, the mapping is the same except that the values are
-    objects with ``map`` and ``reduce`` (optional) keys, which in turn
-    contain function source code.
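
    To make the field layout concrete, here is a sketch of a design document
    as a JavaScript object. The names ``by_title`` and ``stamp`` and the
    function bodies are hypothetical examples, not part of the API:

```javascript
// Sketch of a design document illustrating the field layout described
// above. The view/update names and function bodies are made-up examples.
const designDoc = {
  _id: "_design/recipe",
  language: "javascript",
  // views map a name to an object holding map (and optional reduce) sources
  views: {
    by_title: {
      map: "function (doc) { if (doc.title) { emit(doc.title, null); } }"
    }
  },
  // filters, lists, shows and updates map a name directly to source strings
  updates: {
    stamp: "function (doc, req) { doc.updated = true; return [doc, 'ok']; }"
  },
  autoupdate: true
};

console.log(typeof designDoc.views.by_title.map);  // prints "string"
```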
-
-    .. seealso::
-        :put:`/{db}/{docid}`
-
-.. http:delete:: /{db}/_design/{ddoc}
-    :synopsis: Deletes the design document
-
-    Deletes the specified design document from the database. You must supply
-    the current (latest) revision, either with the ``rev`` query parameter or
-    with the ``If-Match`` request header.
-
-    .. seealso::
-        :delete:`/{db}/{docid}`
-
-.. http:copy:: /{db}/_design/{ddoc}
-    :synopsis: Copies the design document
-
-    The :method:`COPY` (which is non-standard HTTP) copies an existing design
-    document to a new or existing one.
-
-    Given that view indexes on disk are named after the MD5 hash of the
-    view definition, and that a `COPY` operation won't actually change
-    that definition, the copied views won't have to be reconstructed.
-    Both views will be served from the same index on disk.
-
-    .. seealso::
-        :copy:`/{db}/{docid}`
-
-.. _api/ddoc/attachment:
-
-=====================================
-``/db/_design/design-doc/attachment``
-=====================================
-
-.. http:head:: /{db}/_design/{ddoc}/{attname}
-    :synopsis: Returns bare information in the HTTP Headers for the attachment
-
-    Returns the HTTP headers containing a minimal amount of information about
-    the specified attachment.
-
-    .. seealso::
-        :head:`/{db}/{docid}/{attname}`
-
-.. http:get:: /{db}/_design/{ddoc}/{attname}
-    :synopsis: Gets the attachment of a design document
-
-    Returns the file attachment associated with the design document. The raw
-    data of the associated attachment is returned (just as if you were
-    accessing a static file).
-
-    .. seealso::
-        :get:`/{db}/{docid}/{attname}`
-
-.. http:put:: /{db}/_design/{ddoc}/{attname}
-    :synopsis: Adds an attachment of a design document
-
-    Uploads the supplied content as an attachment to the specified design
-    document. The attachment name provided must be a URL encoded string.
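
    As a small illustration of the encoding requirement, an attachment URL
    might be built like this. The helper, host, database and names are all
    hypothetical examples:

```javascript
// Build a design-document attachment URL, percent-encoding the
// attachment name as the endpoint requires. All values are examples.
function attachmentUrl(base, db, ddoc, attname) {
  return base + "/" + db + "/_design/" + ddoc + "/" +
    encodeURIComponent(attname);
}

console.log(attachmentUrl("http://localhost:5984", "recipes", "recipe",
                          "spaghetti photo.jpg"));
// the space in the name is encoded as %20
```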
-
-    .. seealso::
-        :put:`/{db}/{docid}/{attname}`
-
-.. http:delete:: /{db}/_design/{ddoc}/{attname}
-    :synopsis: Deletes an attachment of a design document
-
-    Deletes the attachment of the specified design document.
-
-    .. seealso::
-        :delete:`/{db}/{docid}/{attname}`
-
-.. _api/ddoc/info:
-
-================================
-``/db/_design/design-doc/_info``
-================================
-
-.. http:get:: /{db}/_design/{ddoc}/_info
-    :synopsis: Returns view index information for the specified design document
-
-    Obtains information about the specified design document, including the
-    index, the index size, and the current status of the design document and
-    its associated index.
-
-    :param db: Database name
-    :param ddoc: Design document name
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :>json string name: Design document name
-    :>json object view_index: :ref:`api/ddoc/view_index_info`
-    :code 200: Request completed successfully
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /recipes/_design/recipe/_info HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Length: 263
-        Content-Type: application/json
-        Date: Sat, 17 Aug 2013 12:54:17 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        {
-            "name": "recipe",
-            "view_index": {
-                "compact_running": false,
-                "language": "python",
-                "purge_seq": 0,
-                "signature": "a59a1bb13fdf8a8a584bc477919c97ac",
-                "sizes": {
-                  "active": 926691,
-                  "disk": 1982704,
-                  "external": 1535701
-                },
-                "update_seq": 12397,
-                "updater_running": false,
-                "waiting_clients": 0,
-                "waiting_commit": false
-            }
-        }
-
-.. _api/ddoc/view_index_info:
-
-View Index Information
-======================
-
-The response from :get:`/{db}/_design/{ddoc}/_info` contains a
-``view_index`` (*object*) field with the following structure:
-
-* **compact_running** (*boolean*):  Indicates whether a compaction routine
-  is currently running on the view
-* **sizes.active** (*number*): The size of live data inside the view, in bytes
-* **sizes.external** (*number*): The uncompressed size of view contents in bytes
-* **sizes.file** (*number*): Size in bytes of the view as stored on disk
-* **language** (*string*): Language for the defined views
-* **purge_seq** (*number*): The purge sequence that has been processed
-* **signature** (*string*): MD5 signature of the views for the design document
-* **update_seq** (*number* / *string*): The update sequence of the corresponding
-  database that has been indexed
-* **updater_running** (*boolean*): Indicates if the view is currently
-  being updated
-* **waiting_clients** (*number*): Number of clients waiting on views from
-  this design document
-* **waiting_commit** (*boolean*): Indicates if there are outstanding commits
-  to the underlying database that need to be processed
diff --git a/src/api/ddoc/index.rst b/src/api/ddoc/index.rst
deleted file mode 100644
index 1eeb647..0000000
--- a/src/api/ddoc/index.rst
+++ /dev/null
@@ -1,35 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _api/design-docs:
-
-================
-Design Documents
-================
-
-In CouchDB, design documents provide the main interface for building a CouchDB
-application. The design document defines one or more views used to extract
-information from CouchDB. Design documents are created within
-your CouchDB instance in the same way as you create database documents, but the
-content and definition of the documents is different. Design Documents are
-named using an ID defined with the design document URL path, and this URL can
-then be used to access the database contents.
-
-Views and lists operate together to provide automated (and formatted) output
-from your database.
-
-.. toctree::
-    common
-    views
-    search
-    render
-    rewrites
diff --git a/src/api/ddoc/render.rst b/src/api/ddoc/render.rst
deleted file mode 100644
index d7c7b0e..0000000
--- a/src/api/ddoc/render.rst
+++ /dev/null
@@ -1,413 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _api/ddoc/show:
-
-==========================================
-``/db/_design/design-doc/_show/show-name``
-==========================================
-
-.. warning::
-
-    Show functions are deprecated in CouchDB 3.0, and will be removed in CouchDB 4.0.
-
-.. http:get:: /{db}/_design/{ddoc}/_show/{func}
-    :synopsis: Executes a show function against null document
-
-.. http:post:: /{db}/_design/{ddoc}/_show/{func}
-    :synopsis: Same as GET method for the related endpoint
-
-    Applies the :ref:`show function <showfun>` to a ``null`` document.
-
-    The request and response parameters depend upon the function
-    implementation.
-
-    :param db: Database name
-    :param ddoc: Design document name
-    :param func: Show function name
-    :>header ETag: Response signature
-    :query string format: Format of the returned response.
-      Used by :js:func:`provides` function
-    :code 200: Request completed successfully
-    :code 500: Query server error
-
-    **Function**:
-
-    .. code-block:: javascript
-
-        function(doc, req) {
-            if (!doc) {
-                return {body: "no doc"}
-            } else {
-                return {body: doc.description}
-            }
-        }
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /recipes/_design/recipe/_show/description HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Content-Length: 6
-        Content-Type: text/html; charset=utf-8
-        Date: Wed, 21 Aug 2013 12:34:07 GMT
-        Etag: "7Z2TO7FPEMZ0F4GH0RJCRIOAU"
-        Server: CouchDB (Erlang/OTP)
-        Vary: Accept
-
-        no doc
-
-.. _api/ddoc/show/id:
-
-=================================================
-``/db/_design/design-doc/_show/show-name/doc-id``
-=================================================
-
-.. warning::
-
-    Show functions are deprecated in CouchDB 3.0, and will be removed in CouchDB 4.0.
-
-.. http:get:: /{db}/_design/{ddoc}/_show/{func}/{docid}
-    :synopsis: Executes a show function against the specified document
-
-.. http:post:: /{db}/_design/{ddoc}/_show/{func}/{docid}
-    :synopsis: Same as GET method for the related endpoint
-
-    Applies the :ref:`show function <showfun>` to the specified document.
-
-    The request and response parameters depend upon the function
-    implementation.
-
-    :param db: Database name
-    :param ddoc: Design document name
-    :param func: Show function name
-    :param docid: Document ID
-    :>header ETag: Response signature
-    :query string format: Format of the returned response.
-      Used by :js:func:`provides` function
-    :code 200: Request completed successfully
-    :code 500: Query server error
-
-    **Function**:
-
-    .. code-block:: javascript
-
-        function(doc, req) {
-            if (!doc) {
-                return {body: "no doc"}
-            } else {
-                return {body: doc.description}
-            }
-        }
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /recipes/_design/recipe/_show/description/SpaghettiWithMeatballs HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Content-Length: 88
-        Content-Type: text/html; charset=utf-8
-        Date: Wed, 21 Aug 2013 12:38:08 GMT
-        Etag: "8IEBO8103EI98HDZL5Z4I1T0C"
-        Server: CouchDB (Erlang/OTP)
-        Vary: Accept
-
-        An Italian-American dish that usually consists of spaghetti, tomato sauce and meatballs.
-
-.. _api/ddoc/list:
-
-====================================================
-``/db/_design/design-doc/_list/list-name/view-name``
-====================================================
-
-.. warning::
-
-    List functions are deprecated in CouchDB 3.0, and will be removed in CouchDB 4.0.
-
-.. http:get:: /{db}/_design/{ddoc}/_list/{func}/{view}
-    :synopsis: Executes a list function against the view from the same
-               design document
-
-.. http:post:: /{db}/_design/{ddoc}/_list/{func}/{view}
-    :synopsis: Same as GET method for the related endpoint
-
-    Applies the :ref:`list function <listfun>` to the :ref:`view function
-    <viewfun>` from the same design document.
-
-    The request and response parameters depend upon the function
-    implementation.
-
-    :param db: Database name
-    :param ddoc: Design document name
-    :param func: List function name
-    :param view: View function name
-    :>header ETag: Response signature
-    :>header Transfer-Encoding: ``chunked``
-    :query string format: Format of the returned response.
-      Used by :js:func:`provides` function
-    :code 200: Request completed successfully
-    :code 500: Query server error
-
-    **Function**:
-
-    .. code-block:: javascript
-
-        function(head, req) {
-            var row = getRow();
-            if (!row){
-                return 'no ingredients'
-            }
-            send(row.key);
-            while(row=getRow()){
-                send(', ' + row.key);
-            }
-        }
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /recipes/_design/recipe/_list/ingredients/by_name HTTP/1.1
-        Accept: text/plain
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Content-Type: text/plain; charset=utf-8
-        Date: Wed, 21 Aug 2013 12:49:15 GMT
-        Etag: "D52L2M1TKQYDD1Y8MEYJR8C84"
-        Server: CouchDB (Erlang/OTP)
-        Transfer-Encoding: chunked
-        Vary: Accept
-
-        meatballs, spaghetti, tomato sauce
-
-.. _api/ddoc/list/ddoc:
-
-===============================================================
-``/db/_design/design-doc/_list/list-name/other-ddoc/view-name``
-===============================================================
-
-.. warning::
-
-    List functions are deprecated in CouchDB 3.0, and will be removed in CouchDB 4.0.
-
-.. http:get:: /{db}/_design/{ddoc}/_list/{func}/{other-ddoc}/{view}
-    :synopsis: Executes a list function against the view from other
-               design document
-
-.. http:post:: /{db}/_design/{ddoc}/_list/{func}/{other-ddoc}/{view}
-    :synopsis: Same as GET method for the related endpoint
-
-    Applies the :ref:`list function <listfun>` to the :ref:`view function
-    <viewfun>` from the other design document.
-
-    The request and response parameters depend upon the function
-    implementation.
-
-    :param db: Database name
-    :param ddoc: Design document name
-    :param func: List function name
-    :param other-ddoc: Other design document name that holds view function
-    :param view: View function name
-    :>header ETag: Response signature
-    :>header Transfer-Encoding: ``chunked``
-    :query string format: Format of the returned response.
-      Used by :js:func:`provides` function
-    :code 200: Request completed successfully
-    :code 500: Query server error
-
-    **Function**:
-
-    .. code-block:: javascript
-
-        function(head, req) {
-            var row = getRow();
-            if (!row){
-                return 'no ingredients'
-            }
-            send(row.key);
-            while(row=getRow()){
-                send(', ' + row.key);
-            }
-        }
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /recipes/_design/ingredient/_list/ingredients/recipe/by_ingredient?key="spaghetti" HTTP/1.1
-        Accept: text/plain
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Content-Type: text/plain; charset=utf-8
-        Date: Wed, 21 Aug 2013 12:49:15 GMT
-        Etag: "5L0975X493R0FB5Z3043POZHD"
-        Server: CouchDB (Erlang/OTP)
-        Transfer-Encoding: chunked
-        Vary: Accept
-
-        spaghetti
-
-.. _api/ddoc/update:
-
-==============================================
-``/db/_design/design-doc/_update/update-name``
-==============================================
-
-.. http:post:: /{db}/_design/{ddoc}/_update/{func}
-    :synopsis: Executes an update function against the null document
-
-    Executes the :ref:`update function <updatefun>` on the server side for a
-    ``null`` document.
-
-    :param db: Database name
-    :param ddoc: Design document name
-    :param func: Update function name
-    :>header X-Couch-Id: Created/updated document's ID
-    :>header X-Couch-Update-NewRev: Created/updated document's revision
-    :code 200: No document was created or updated
-    :code 201: Document was created or updated
-    :code 500: Query server error
-
-    **Function**:
-
-    .. code-block:: javascript
-
-        function(doc, req) {
-            if (!doc){
-              return [null, {'code': 400,
-                             'json': {'error': 'missed',
-                                      'reason': 'no document to update'}}]
-            } else {
-                doc.ingredients.push(req.body);
-                return [doc, {'json': {'status': 'ok'}}];
-            }
-        }
-
-    **Request**:
-
-    .. code-block:: http
-
-        POST /recipes/_design/recipe/_update/ingredients HTTP/1.1
-        Accept: application/json
-        Content-Length: 11
-        Content-Type: application/json
-        Host: localhost:5984
-
-        "something"
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 400 Bad Request
-        Cache-Control: must-revalidate
-        Content-Length: 52
-        Content-Type: application/json
-        Date: Wed, 21 Aug 2013 14:00:58 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        {
-            "error": "missed",
-            "reason": "no document to update"
-        }
-
-.. _api/ddoc/update/id:
-
-=====================================================
-``/db/_design/design-doc/_update/update-name/doc-id``
-=====================================================
-
-.. http:put:: /{db}/_design/{ddoc}/_update/{func}/{docid}
-    :synopsis: Executes an update function against the specified document
-
-    Executes the :ref:`update function <updatefun>` on the server side for
-    the specified document.
-
-    :param db: Database name
-    :param ddoc: Design document name
-    :param func: Update function name
-    :param docid: Document ID
-    :>header X-Couch-Id: Created/updated document's ID
-    :>header X-Couch-Update-NewRev: Created/updated document's revision
-    :code 200: No document was created or updated
-    :code 201: Document was created or updated
-    :code 500: Query server error
-
-    **Function**:
-
-    .. code-block:: javascript
-
-        function(doc, req) {
-            if (!doc){
-                return [null, {'code': 400,
-                               'json': {'error': 'missed',
-                                        'reason': 'no document to update'}}]
-            } else {
-                doc.ingredients.push(req.body);
-                return [doc, {'json': {'status': 'ok'}}];
-            }
-        }
-
-    **Request**:
-
-    .. code-block:: http
-
-        PUT /recipes/_design/recipe/_update/ingredients/SpaghettiWithMeatballs HTTP/1.1
-        Accept: application/json
-        Content-Length: 6
-        Content-Type: application/json
-        Host: localhost:5984
-
-        "love"
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 201 Created
-        Cache-Control: must-revalidate
-        Content-Length: 16
-        Content-Type: application/json
-        Date: Wed, 21 Aug 2013 14:11:34 GMT
-        Server: CouchDB (Erlang/OTP)
-        X-Couch-Id: SpaghettiWithMeatballs
-        X-Couch-Update-NewRev: 12-a5e099df5720988dae90c8b664496baf
-
-        {
-            "status": "ok"
-        }
diff --git a/src/api/ddoc/rewrites.rst b/src/api/ddoc/rewrites.rst
deleted file mode 100644
index 5eb0f49..0000000
--- a/src/api/ddoc/rewrites.rst
+++ /dev/null
@@ -1,192 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _api/ddoc/rewrite:
-
-========================================
-``/db/_design/design-doc/_rewrite/path``
-========================================
-
-.. warning::
-
-    Rewrites are deprecated in CouchDB 3.0, and will be removed in CouchDB 4.0.
-
-.. http:any:: /{db}/_design/{ddoc}/_rewrite/{path}
-    :synopsis: Rewrites the HTTP request for the specified path using a
-               stored array of routing rules or a JavaScript function
-
-    Rewrites the specified path according to rules defined in the specified
-    document. The rewrite rules are defined by the ``rewrites`` field of the
-    design document. The ``rewrites`` field can either be a *string* containing
-    a rewrite function or an *array* of rule definitions.
-
-Using a stringified function for ``rewrites``
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-.. versionadded:: 2.0
-
-    When the ``rewrites`` field is a stringified function, the query server is used
-    to pre-process and route requests.
-
-    The function takes a :ref:`request2_object`.
-
-    The return value of the function will cause the server to rewrite the
-    request to a new location or immediately return a response.
-
-    To rewrite the request, return an object containing the following
-    properties:
-
-    - **path** (*string*): Rewritten path.
-    - **query** (*array*): Rewritten query. If omitted, the original
-      query keys are used.
-    - **headers** (*object*): Rewritten headers. If omitted, the original
-      request headers are used.
-    - **method** (*string*): HTTP method of rewritten request (``"GET"``,
-      ``"POST"``, etc). If omitted, the original request method is used.
-    - **body** (*string*): Body for ``"POST"``/``"PUT"`` requests. If omitted,
-      the original request body is used.
-
-    To immediately respond to the request, return an object containing the
-    following properties:
-
-    - **code** (*number*): Returned HTTP status code (``200``, ``404``, etc).
-    - **body** (*string*): Body of the response to user.
-
-    **Example A**. Restricting access.
-
-    .. code-block:: javascript
-
-        function(req2) {
-          var path = req2.path.slice(4),
-            isWrite = /^(put|post|delete)$/i.test(req2.method),
-            isFinance = req2.userCtx.roles.indexOf("finance") > -1;
-          if (path[0] == "finance" && isWrite && !isFinance) {
-            // Deny writes to DB "finance" for users
-            // having no "finance" role
-            return {
-              code: 403,
-              body: JSON.stringify({
-                error: "forbidden",
-                reason: "You are not allowed to modify docs in this DB"
-              })
-            };
-          }
-          // Pass through all other requests
-          return { path: "../../../" + path.join("/") };
-        }
-
-    **Example B**. Different replies for JSON and HTML requests.
-
-    .. code-block:: javascript
-
-        function(req2) {
-          var path = req2.path.slice(4),
-            h = req2.headers,
-            wantsJson = (h.Accept || "").indexOf("application/json") > -1,
-            reply = {};
-          if (!wantsJson) {
-            // Here we should prepare reply object
-            // for plain HTML pages
-          } else {
-            // Pass through JSON requests
-            reply.path = "../../../"+path.join("/");
-          }
-          return reply;
-        }
-
-Using an array of rules for ``rewrites``
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-    When the ``rewrites`` field is an array of rule objects, the server will
-    rewrite the request based on the first matching rule in the array.
-
-    Each rule in the array is an *object* with the following fields:
-
-    - **method** (*string*): HTTP request method the rule applies to. If
-      omitted, defaults to ``"*"``, which matches all methods.
-    - **from** (*string*): The pattern used to compare against the URL and
-      define dynamic variables.
-    - **to** (*string*): The path to rewrite the URL to. It can contain
-      variables depending on binding variables discovered during pattern
-      matching and query args (URL args and from the query member).
-    - **query** (*object*): Query args passed to the rewritten URL. They may
-      contain dynamic variables.
-
-    The ``to`` and ``from`` paths may contain string patterns with leading
-    ``:`` or ``*`` characters to define dynamic variables in the match.
-
-    The first rule in the ``rewrites`` array that matches the incoming request
-    is used to define the rewrite. To match the incoming request, the
-    rule's ``method`` must match the request's HTTP method and the rule's
-    ``from`` must match the request's path using the following pattern matching
-    logic.
-
-    - The *from* pattern and URL are first split on ``/`` to get a list of
-      tokens. For example, if *from* field is ``/somepath/:var/*`` and the URL
-      is ``/somepath/a/b/c``, the tokens are ``somepath``, ``:var``, and
-      ``*`` for the *from* pattern and ``somepath``, ``a``, ``b``, and
-      ``c`` for the URL.
-    - Each token starting with ``:`` in the pattern will match the
-      corresponding token in the URL and define a new dynamic variable whose
-      name is the remaining string after the ``:`` and whose value is the
-      token from the URL. In this example, the ``:var`` token will match
-      ``a`` and set ``var`` = ``a``.
-    - The star token ``*`` in the pattern will match any number of tokens in
-      the URL and must be the last token in the pattern. It will define a
-      dynamic variable with the remaining tokens. In this example, the ``*``
-      token will match the ``b`` and ``c`` tokens and set ``*`` =
-      ``b/c``.
-    - The remaining tokens must match exactly for the pattern to be considered
-      a match. In this example, ``somepath`` in the pattern matches
-      ``somepath`` in the URL and all tokens in the URL have matched, causing
-      this rule to be a match.
-
-    Once a rule is found, the request URL is rewritten using the ``to`` and
-    ``query`` fields. Dynamic variables are substituted into the ``:`` and
-    ``*`` variables in these fields to produce the final URL.
-
-    If no rule matches, a :statuscode:`404` response is returned.
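
    The token-matching steps above can be sketched in JavaScript. This is a
    simplified illustration of the described behaviour, not CouchDB's actual
    implementation:

```javascript
// Simplified sketch of the from-pattern matching described above.
// Returns an object of bound variables, or null when there is no match.
function matchFrom(pattern, url) {
  const pat = pattern.split("/").filter(Boolean);
  const toks = url.split("/").filter(Boolean);
  const bindings = {};
  for (let i = 0; i < pat.length; i++) {
    const p = pat[i];
    if (p === "*") {                  // star swallows the rest of the URL
      bindings["*"] = toks.slice(i).join("/");
      return bindings;
    }
    if (i >= toks.length) return null;
    if (p.startsWith(":")) {          // dynamic variable, e.g. :var
      bindings[p.slice(1)] = toks[i];
    } else if (p !== toks[i]) {       // literal tokens must match exactly
      return null;
    }
  }
  return pat.length === toks.length ? bindings : null;
}

// binds var = "a" and * = "b/c", as in the walkthrough above
console.log(matchFrom("/somepath/:var/*", "/somepath/a/b/c"));
```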
-
-    Examples:
-
-    +-----------------------------------+----------+------------------+-------+
-    |               Rule                |    URL   |  Rewrite to      | Tokens|
-    +===================================+==========+==================+=======+
-    | {"from": "/a",                    | /a       | /some            |       |
-    |  "to": "/some"}                   |          |                  |       |
-    +-----------------------------------+----------+------------------+-------+
-    | {"from": "/a/\*",                 | /a/b/c   | /some/b/c        |       |
-    |  "to": "/some/\*"}                |          |                  |       |
-    +-----------------------------------+----------+------------------+-------+
-    | {"from": "/a/b",                  | /a/b?k=v | /some?k=v        | k=v   |
-    |  "to": "/some"}                   |          |                  |       |
-    +-----------------------------------+----------+------------------+-------+
-    | {"from": "/a/b",                  | /a/b     | /some/b?var=b    | var=b |
-    |  "to": "/some/:var"}              |          |                  |       |
-    +-----------------------------------+----------+------------------+-------+
-    | {"from": "/a/:foo/",              | /a/b/c   | /some/b/c?foo=b  | foo=b |
-    |  "to": "/some/:foo/"}             |          |                  |       |
-    +-----------------------------------+----------+------------------+-------+
-    | {"from": "/a/:foo",               | /a/b     | /some/?k=b&foo=b | foo=b |
-    |  "to": "/some",                   |          |                  |       |
-    |  "query": { "k": ":foo" }}        |          |                  |       |
-    +-----------------------------------+----------+------------------+-------+
-    | {"from": "/a",                    | /a?foo=b | /some/b?foo=b    | foo=b |
-    |  "to": "/some/:foo"}              |          |                  |       |
-    +-----------------------------------+----------+------------------+-------+
-
-    Request method, headers, query parameters, request payload and response body
-    are dependent on the endpoint to which the URL will be rewritten.
-
-    :param db: Database name
-    :param ddoc: Design document name
-    :param path: URL path to rewrite
diff --git a/src/api/ddoc/search.rst b/src/api/ddoc/search.rst
deleted file mode 100644
index 3a48021..0000000
--- a/src/api/ddoc/search.rst
+++ /dev/null
@@ -1,168 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _api/ddoc/search:
-
-=============================================
-``/db/_design/design-doc/_search/index-name``
-=============================================
-
-.. warning::
-    Search endpoints require a running search plugin connected to each cluster
-    node. See :ref:`Search Plugin Installation <install/search>` for details.
-
-.. versionadded:: 3.0
-
-.. http:get:: /{db}/_design/{ddoc}/_search/{index}
-    :synopsis: Returns results for the specified search index
-
-    Executes a search request against the named index in the specified design document.
-
-    :param db: Database name
-    :param ddoc: Design document name
-    :param index: Search index name
-
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-
-    :query string bookmark: A bookmark received from a previous search. This parameter
-        enables paging through the results. If there are no more results after the
-        bookmark, you get a response with an empty rows array and the same bookmark,
-        confirming the end of the result list.
-    :query json counts: An array of names of string fields for which counts
-        are requested. The response contains counts for each unique value of this field
-        name among the documents that match the search query. :ref:`Faceting
-        <ddoc/search/faceting>` must be enabled for this parameter to function.
-    :query json drilldown: This field can be used several times. Each use defines a pair
-        with a field name and a value. The search matches only documents containing the
-        value that was provided in the named field. It differs from using
-        ``"fieldname:value"`` in the ``q`` parameter only in that the values are not
-        analyzed. :ref:`Faceting <ddoc/search/faceting>` must be enabled for this
-        parameter to function.
-    :query string group_field: Field by which to group search matches.
-    :query number group_limit: Maximum group count. This field can be used
-        only if ``group_field`` is specified.
-    :query json group_sort: This field defines the order of the groups in a search that
-        uses ``group_field``. The default sort order is relevance.
-    :query json highlight_fields: Specifies which fields to highlight. If specified, the
-        result object contains a ``highlights`` field with an entry for each specified
-        field.
-    :query string highlight_pre_tag: A string that is inserted before the highlighted
-        word in the highlights output.
-    :query string highlight_post_tag: A string that is inserted after the highlighted
-        word in the highlights output.
-    :query number highlight_number: Number of fragments that are returned in highlights.
-        If the search term occurs less often than the number of fragments that are
-        specified, longer fragments are returned.
-    :query number highlight_size: Number of characters in each fragment for highlights.
-    :query boolean include_docs: Include the full content of the documents in the
-        response.
-    :query json include_fields: A JSON array of field names to include in search
-        results. Any fields that are included must be indexed with the store:true option.
-    :query number limit: Limit the number of the returned documents to the specified
-        number. For a grouped search, this parameter limits the number of documents per
-        group.
-    :query string q: Alias for ``query``.
-    :query string query: Required. The Lucene query string.
-    :query json ranges: This field defines ranges for faceted, numeric search
-        fields. The value is a JSON object where the field names are faceted
-        numeric search fields, and the values of the fields are JSON objects.
-        The field names of the JSON objects are names for ranges. The values
-        are strings that describe the range, for example ``"[0 TO 10]"``.
-    :query json sort: Specifies the sort order of the results. In a grouped
-        search (when ``group_field`` is used), this parameter specifies the sort
-        order within a group. The default sort order is relevance. The value is
-        a JSON string of the form ``"fieldname<type>"``, or
-        ``"-fieldname<type>"`` for descending order, where ``fieldname`` is the
-        name of a string or number field and ``type`` is either ``number`` or
-        ``string``; the value may also be a JSON array of such strings. The
-        ``type`` part is optional and defaults to ``number``. Some examples are
-        ``"foo"``, ``"-foo"``, ``"bar<string>"``, ``"-foo<number>"`` and
-        ``["-foo<number>", "bar<string>"]``. String fields that are used for
-        sorting must not be analyzed fields. Fields that are used for sorting
-        must be indexed by the same indexer that is used for the search query.
-    :query string stale: Set to ``ok`` to allow the use of an out-of-date index.
-
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :>header ETag: Response signature
-    :>header Transfer-Encoding: ``chunked``
-
-    :>json array rows: Array of row objects. By default the information
-      returned contains only the document ID and revision.
-    :>json number total_rows: Number of documents matching the search query.
-    :>json string bookmark: Opaque identifier to enable pagination.
-
-    :code 200: Request completed successfully
-    :code 400: Invalid request
-    :code 401: Read permission required
-    :code 404: Specified database, design document or view is missing
-
-.. note::
-    You must enable :ref:`faceting <ddoc/search/faceting>` before you can use the
-    ``counts``, ``drilldown``, and ``ranges`` parameters.
-
-.. note::
-    Faceting and grouping are not supported on partitioned searches, so the
-    following query parameters should not be used on those requests: ``counts``,
-    ``drilldown``, ``ranges``, ``group_field``, ``group_limit``, and
-    ``group_sort``.
-
-.. note::
-    Do not combine the ``bookmark`` and ``stale`` options. These options constrain the
-    choice of shard replicas to use for the response. When used together, the options
-    might cause problems when contact is attempted with replicas that are slow or not
-    available.
-
-.. seealso::
-    For more information about how search works, see the
-    :ref:`Search User Guide<ddoc/search>`.
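
As a practical illustration of the parameters above, here is a small Python
sketch that builds a search request URL. The helper name and the set of
JSON-typed parameters are my own for illustration; the key point is that
JSON-typed parameters must be serialized as JSON before URL-encoding:

```python
import json
from urllib.parse import urlencode

# Hypothetical helper: the name and parameter handling are illustrative.
# JSON-typed parameters (counts, drilldown, ranges, sort, ...) are
# serialized as JSON before being URL-encoded.
JSON_PARAMS = {"counts", "drilldown", "ranges", "sort",
               "group_sort", "highlight_fields", "include_fields"}

def search_url(base, db, ddoc, index, query, **params):
    qs = {"q": query}
    for name, value in params.items():
        qs[name] = json.dumps(value) if name in JSON_PARAMS else value
    return "{}/{}/_design/{}/_search/{}?{}".format(
        base, db, ddoc, index, urlencode(qs))

url = search_url("http://localhost:5984", "recipes", "cookbook",
                 "ingredients", "name:carrots",
                 limit=5, counts=["category"])
```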
-
-==================================================
-``/db/_design/design-doc/_search_info/index-name``
-==================================================
-
-.. warning::
-    Search endpoints require a running search plugin connected to each cluster
-    node. See :ref:`Search Plugin Installation <install/search>` for details.
-
-.. versionadded:: 3.0
-
-.. http:get:: /{db}/_design/{ddoc}/_search_info/{index}
-    :synopsis: Returns metadata for the specified search index
-
-    :param db: Database name
-    :param ddoc: Design document name
-    :param index: Search index name
-    :code 200: Request completed successfully
-    :code 400: Request body is wrong (malformed or missing one of the mandatory fields)
-    :code 500: A server error (or other kind of error) occurred
-
-**Request**:
-
-.. code-block:: http
-
-    GET /recipes/_design/cookbook/_search_info/ingredients HTTP/1.1
-    Accept: application/json
-    Host: localhost:5984
-
-**Response**:
-
-.. code-block:: javascript
-
-    {
-        "name": "_design/cookbook/ingredients",
-        "search_index": {
-            "pending_seq": 7125496,
-            "doc_del_count": 129180,
-            "doc_count": 1066173,
-            "disk_size": 728305827,
-            "committed_seq": 7125496
-        }
-    }
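
The metadata above can be used to check whether the index has caught up with
the database. A minimal sketch, assuming (for illustration) that an index is
caught up exactly when ``pending_seq`` equals ``committed_seq``:

```python
import json

# Example response body from the _search_info request above.
body = """
{
    "name": "_design/cookbook/ingredients",
    "search_index": {
        "pending_seq": 7125496,
        "doc_del_count": 129180,
        "doc_count": 1066173,
        "disk_size": 728305827,
        "committed_seq": 7125496
    }
}
"""

info = json.loads(body)["search_index"]
# Treat the index as caught up when nothing is pending beyond what has
# been committed (an assumption made for this sketch).
up_to_date = info["pending_seq"] == info["committed_seq"]
```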
diff --git a/src/api/ddoc/views.rst b/src/api/ddoc/views.rst
deleted file mode 100644
index 7cb6af8..0000000
--- a/src/api/ddoc/views.rst
+++ /dev/null
@@ -1,918 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _api/ddoc/view:
-
-==========================================
-``/db/_design/design-doc/_view/view-name``
-==========================================
-
-.. http:get:: /{db}/_design/{ddoc}/_view/{view}
-    :synopsis: Returns results for the specified stored view
-
-    Executes the specified view function from the specified design document.
-
-    :param db: Database name
-    :param ddoc: Design document name
-    :param view: View function name
-
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-
-    :query boolean conflicts: Include `conflicts` information in response.
-      Ignored if ``include_docs`` isn't ``true``. Default is ``false``.
-    :query boolean descending: Return the documents in descending order by key.
-      Default is ``false``.
-    :query json endkey: Stop returning records when the specified key is
-      reached.
-    :query json end_key: Alias for ``endkey`` param
-    :query string endkey_docid: Stop returning records when the specified
-      document ID is reached. Ignored if ``endkey`` is not set.
-    :query string end_key_doc_id: Alias for ``endkey_docid``.
-    :query boolean group: Group the results using the reduce function to a
-      group or single row. Implies ``reduce`` is ``true`` and the maximum
-      ``group_level``. Default is ``false``.
-    :query number group_level: Specify the group level to be used. Implies
-      ``group`` is ``true``.
-    :query boolean include_docs: Include the associated document with each row.
-      Default is ``false``.
-    :query boolean attachments: Include the Base64-encoded content of
-      :ref:`attachments <api/doc/attachments>` in the documents that are
-      included if ``include_docs`` is ``true``. Ignored if ``include_docs`` isn't
-      ``true``. Default is ``false``.
-    :query boolean att_encoding_info: Include encoding information in
-      attachment stubs if ``include_docs`` is ``true`` and the particular
-      attachment is compressed. Ignored if ``include_docs`` isn't ``true``.
-      Default is ``false``.
-    :query boolean inclusive_end: Specifies whether the specified end key
-      should be included in the result. Default is ``true``.
-    :query json key: Return only documents that match the specified key.
-    :query json-array keys: Return only documents where the key matches one of
-      the keys specified in the array.
-    :query number limit: Limit the number of the returned documents to the
-      specified number.
-    :query boolean reduce: Use the reduction function. Default is ``true`` when
-      a reduce function is defined.
-    :query number skip: Skip this number of records before starting to return
-      the results. Default is ``0``.
-    :query boolean sorted: Sort returned rows (see :ref:`Sorting Returned Rows
-     <api/ddoc/view/sorting>`). Setting this to ``false`` offers a performance
-     boost. The ``total_rows`` and ``offset`` fields are not available when this
-     is set to ``false``. Default is ``true``.
-    :query boolean stable: Whether or not the view results should be returned
-     from a stable set of shards. Default is ``false``.
-    :query string stale: Allow the results from a stale view to be used.
-      Supported values: ``ok`` and ``update_after``.
-      ``ok`` is equivalent to ``stable=true&update=false``.
-      ``update_after`` is equivalent to ``stable=true&update=lazy``.
-      The default behavior is equivalent to ``stable=false&update=true``.
-      Note that this parameter is deprecated. Use ``stable`` and ``update`` instead.
-      See :ref:`views/generation` for more details.
-    :query json startkey: Return records starting with the specified key.
-    :query json start_key: Alias for ``startkey``.
-    :query string startkey_docid: Return records starting with the specified
-      document ID. Ignored if ``startkey`` is not set.
-    :query string start_key_doc_id: Alias for ``startkey_docid`` param
-    :query string update: Whether or not the view in question should be updated
-     prior to responding to the user. Supported values: ``true``, ``false``,
-     ``lazy``. Default is ``true``.
-    :query boolean update_seq: Whether to include in the response an
-      ``update_seq`` value indicating the sequence id of the database the view
-      reflects. Default is ``false``.
-
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :>header ETag: Response signature
-    :>header Transfer-Encoding: ``chunked``
-
-    :>json number offset: Offset where the document list started.
-    :>json array rows: Array of view row objects. By default the information
-      returned contains only the document ID and revision.
-    :>json number total_rows: Number of documents in the database/view.
-    :>json object update_seq: Current update sequence for the database.
-
-    :code 200: Request completed successfully
-    :code 400: Invalid request
-    :code 401: Read permission required
-    :code 404: Specified database, design document or view is missing
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /recipes/_design/ingredients/_view/by_name HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Type: application/json
-        Date: Wed, 21 Aug 2013 09:12:06 GMT
-        ETag: "2FOLSBSW4O6WB798XU4AQYA9B"
-        Server: CouchDB (Erlang/OTP)
-        Transfer-Encoding: chunked
-
-        {
-            "offset": 0,
-            "rows": [
-                {
-                    "id": "SpaghettiWithMeatballs",
-                    "key": "meatballs",
-                    "value": 1
-                },
-                {
-                    "id": "SpaghettiWithMeatballs",
-                    "key": "spaghetti",
-                    "value": 1
-                },
-                {
-                    "id": "SpaghettiWithMeatballs",
-                    "key": "tomato sauce",
-                    "value": 1
-                }
-            ],
-            "total_rows": 3
-        }
-
-.. versionchanged:: 1.6.0 added ``attachments`` and ``att_encoding_info``
-    parameters
-.. versionchanged:: 2.0.0 added ``sorted`` parameter
-.. versionchanged:: 2.1.0 added ``stable`` and ``update`` parameters
-
-.. warning::
-    Using the ``attachments`` parameter to include attachments in view results
-    is not recommended for large attachment sizes. Also note that the
-    Base64-encoding that is used leads to a 33% overhead (i.e. one third) in
-    transfer size for attachments.
-
-.. http:post:: /{db}/_design/{ddoc}/_view/{view}
-    :synopsis: Returns results for the specified view
-
-    Executes the specified view function from the specified design document.
-    :method:`POST` view functionality supports parameters and behavior identical
-    to those specified in the :get:`/{db}/_design/{ddoc}/_view/{view}` API, but
-    allows the query string parameters to be supplied as keys in a JSON object
-    in the body of the :method:`POST` request.
-
-    **Request**:
-
-    .. code-block:: http
-
-        POST /recipes/_design/ingredients/_view/by_name HTTP/1.1
-        Accept: application/json
-        Content-Length: 37
-        Host: localhost:5984
-
-        {
-            "keys": [
-                "meatballs",
-                "spaghetti"
-            ]
-        }
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Type: application/json
-        Date: Wed, 21 Aug 2013 09:14:13 GMT
-        ETag: "6R5NM8E872JIJF796VF7WI3FZ"
-        Server: CouchDB (Erlang/OTP)
-        Transfer-Encoding: chunked
-
-        {
-            "offset": 0,
-            "rows": [
-                {
-                    "id": "SpaghettiWithMeatballs",
-                    "key": "meatballs",
-                    "value": 1
-                },
-                {
-                    "id": "SpaghettiWithMeatballs",
-                    "key": "spaghetti",
-                    "value": 1
-                }
-            ],
-            "total_rows": 3
-        }
-
-.. _api/ddoc/view/options:
-
-View Options
-============
-
-There are two view indexing options that can be defined in a design document
-as boolean properties of an ``options`` object. Unlike the other querying
-options, these aren't URL parameters because they take effect when the view
-index is generated, not when it's accessed:
-
-- **local_seq** (*boolean*): Makes documents' local sequence numbers available
-  to map functions (as a ``_local_seq`` document property)
-- **include_design** (*boolean*): Allows map functions to be called on design
-  documents as well as regular documents
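
For example, a design document that enables both options might be assembled
like this; the document ID and view code below are placeholders, only the
``options`` object is the point of the sketch:

```python
import json

# Illustrative design document; the _id and view code are placeholders.
design_doc = {
    "_id": "_design/example",
    "options": {
        "local_seq": True,       # expose _local_seq to map functions
        "include_design": True   # run map functions on design docs too
    },
    "views": {
        "by_seq": {
            "map": "function (doc) { emit(doc._local_seq, null); }"
        }
    }
}

payload = json.dumps(design_doc)
```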
-
-.. _api/ddoc/view/indexing:
-
-Querying Views and Indexes
-==========================
-
-The definition of a view within a design document also creates an index based
-on the key information defined within each view. The production and use of the
-index significantly increases the speed of access and searching or selecting
-documents from the view.
-
-However, the index is not updated when new documents are added or modified in
-the database. Instead, the index is generated or updated, either when the view
-is first accessed, or when the view is accessed after a document has been
-updated. In each case, the index is updated before the view query is executed
-against the database.
-
-View indexes are updated incrementally in the following situations:
-
-- A new document has been added to the database.
-- A document has been deleted from the database.
-- A document in the database has been updated.
-
-View indexes are rebuilt entirely when the view definition changes. To achieve
-this, a 'fingerprint' of the view definition is created when the design
-document is updated. If the fingerprint changes, then the view indexes are
-entirely rebuilt. This ensures that changes to the view definitions are
-reflected in the view indexes.
-
-.. note::
-    View index rebuilds occur when any view from the view group (i.e. all the
-    views defined within a single design document) has been determined as
-    needing a rebuild. For example, if you have a design document with three
-    different views and you update the database, all three view indexes within
-    the design document will be updated.
-
-Because the view is updated when it is queried, there can be a delay in the
-information returned when the view is accessed, especially if there are a
-large number of documents in the database and the view index does not exist.
-There are a number of ways to mitigate, but not completely eliminate, these
-issues. These include:
-
-- Create the view definition (and associated design documents) on your database
-  before allowing insertion or updates to the documents. If this is allowed
-  while the view is being accessed, the index can be updated incrementally.
-- Manually force a view request from the database. You can do this either
-  before users are allowed to use the view, or you can access the view manually
-  after documents are added or updated.
-- Use the :ref:`changes feed <api/db/changes>` to monitor for changes to the
-  database and then access the view to force the corresponding view index to be
-  updated.
-
-None of these can completely eliminate the need for the indexes to be rebuilt
-or updated when the view is accessed, but they may lessen the impact of index
-updates on the end-user experience.
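
The "manually force a view request" strategy above can be automated with a
minimal query. A sketch, assuming the default ``update=true`` behavior is in
effect and using ``limit=0`` so no rows are transferred (the helper name is
mine, not part of the API):

```python
from urllib.parse import quote, urlencode

def warm_view_url(base, db, ddoc, view):
    # A limit=0 query returns no rows, but with the default update=true
    # behavior it should still bring the view index up to date
    # (an assumption made for this sketch).
    return "{}/{}/_design/{}/_view/{}?{}".format(
        base, quote(db, safe=""), quote(ddoc, safe=""),
        quote(view, safe=""), urlencode({"limit": 0}))

url = warm_view_url("http://localhost:5984", "recipes", "recipes", "by_recipe")
```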
-
-Another alternative is to allow users to access a 'stale' version of the view
-index, rather than forcing the index to be updated and displaying the updated
-results. Using a stale view may not return the latest information, but will
-return the results of the view query using an existing version of the index.
-
-For example, to access the existing stale view ``by_recipe`` in the
-``recipes`` design document:
-
-.. code-block:: text
-
-    http://localhost:5984/recipes/_design/recipes/_view/by_recipe?stale=ok
-
-Accessing a stale view:
-
-- Does not trigger a rebuild of the view indexes, even if there have been
-  changes since the last access.
-
-- Returns the current version of the view index, if a current version exists.
-
-- Returns an empty result set if the given view index does not exist.
-
-As an alternative, you can use the ``update_after`` value for the ``stale``
-parameter. This causes the view to be returned as a stale view, but triggers
-the update process after the view information has been returned to the
-client.
-
-In addition to using stale views, you can also make use of the ``update_seq``
-query argument. Using this query argument generates the view information
-including the update sequence of the database from which the view was
-generated. The returned value can be compared to the current update
-sequence exposed in the database information (returned by :get:`/{db}`).
-
-.. _api/ddoc/view/sorting:
-
-Sorting Returned Rows
-=====================
-
-Each element within the returned array is sorted using native UTF-8 sorting
-according to the contents of the key portion of the emitted content. The basic
-order of output is as follows:
-
-- ``null``
-
-- ``false``
-
-- ``true``
-
-- Numbers
-
-- Text (case sensitive, lowercase first)
-
-- Arrays (according to the values of each element, in order)
-
-- Objects (according to the values of keys, in key order)
-
-**Request**:
-
-.. code-block:: http
-
-    GET /db/_design/test/_view/sorting HTTP/1.1
-    Accept: application/json
-    Host: localhost:5984
-
-**Response**:
-
-.. code-block:: http
-
-    HTTP/1.1 200 OK
-    Cache-Control: must-revalidate
-    Content-Type: application/json
-    Date: Wed, 21 Aug 2013 10:09:25 GMT
-    ETag: "8LA1LZPQ37B6R9U8BK9BGQH27"
-    Server: CouchDB (Erlang/OTP)
-    Transfer-Encoding: chunked
-
-    {
-        "offset": 0,
-        "rows": [
-            {
-                "id": "dummy-doc",
-                "key": null,
-                "value": null
-            },
-            {
-                "id": "dummy-doc",
-                "key": false,
-                "value": null
-            },
-            {
-                "id": "dummy-doc",
-                "key": true,
-                "value": null
-            },
-            {
-                "id": "dummy-doc",
-                "key": 0,
-                "value": null
-            },
-            {
-                "id": "dummy-doc",
-                "key": 1,
-                "value": null
-            },
-            {
-                "id": "dummy-doc",
-                "key": 10,
-                "value": null
-            },
-            {
-                "id": "dummy-doc",
-                "key": 42,
-                "value": null
-            },
-            {
-                "id": "dummy-doc",
-                "key": "10",
-                "value": null
-            },
-            {
-                "id": "dummy-doc",
-                "key": "hello",
-                "value": null
-            },
-            {
-                "id": "dummy-doc",
-                "key": "Hello",
-                "value": null
-            },
-            {
-                "id": "dummy-doc",
-                "key": "\u043f\u0440\u0438\u0432\u0435\u0442",
-                "value": null
-            },
-            {
-                "id": "dummy-doc",
-                "key": [],
-                "value": null
-            },
-            {
-                "id": "dummy-doc",
-                "key": [
-                    1,
-                    2,
-                    3
-                ],
-                "value": null
-            },
-            {
-                "id": "dummy-doc",
-                "key": [
-                    2,
-                    3
-                ],
-                "value": null
-            },
-            {
-                "id": "dummy-doc",
-                "key": [
-                    3
-                ],
-                "value": null
-            },
-            {
-                "id": "dummy-doc",
-                "key": {},
-                "value": null
-            },
-            {
-                "id": "dummy-doc",
-                "key": {
-                    "foo": "bar"
-                },
-                "value": null
-            }
-        ],
-        "total_rows": 17
-    }
-
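The ordering above can be approximated in Python with a comparator. This is a
rough sketch: true ICU string collation is only approximated here by a
case-insensitive comparison with lowercase winning ties, but it is enough to
predict the order of mixed-type keys like those in the example:

```python
from functools import cmp_to_key

def _rank(v):
    # null < false < true < numbers < strings < arrays < objects
    if v is None: return 0
    if v is False: return 1
    if v is True: return 2
    if isinstance(v, (int, float)): return 3
    if isinstance(v, str): return 4
    if isinstance(v, list): return 5
    return 6

def collate(a, b):
    ra, rb = _rank(a), _rank(b)
    if ra != rb:
        return ra - rb
    if ra == 3:                        # numbers
        return (a > b) - (a < b)
    if ra == 4:                        # strings: rough ICU approximation,
        la, lb = a.lower(), b.lower()  # case-insensitive, lowercase first
        if la != lb:
            return (la > lb) - (la < lb)
        return (a < b) - (a > b)
    if ra == 5:                        # arrays: elementwise, then length
        for x, y in zip(a, b):
            c = collate(x, y)
            if c:
                return c
        return len(a) - len(b)
    if ra == 6:                        # objects: key/value pairs in order
        return collate([list(kv) for kv in a.items()],
                       [list(kv) for kv in b.items()])
    return 0

keys = [{"foo": "bar"}, "Hello", [1, 2, 3], True, 10, None,
        "hello", {}, False, [], 42, 1, "10", 0]
ordered = sorted(keys, key=cmp_to_key(collate))
```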
-You can reverse the order of the returned view information
-by setting the ``descending`` query parameter to ``true``:
-
-**Request**:
-
-.. code-block:: http
-
-    GET /db/_design/test/_view/sorting?descending=true HTTP/1.1
-    Accept: application/json
-    Host: localhost:5984
-
-**Response**:
-
-.. code-block:: http
-
-    HTTP/1.1 200 OK
-    Cache-Control: must-revalidate
-    Content-Type: application/json
-    Date: Wed, 21 Aug 2013 10:09:25 GMT
-    ETag: "Z4N468R15JBT98OM0AMNSR8U"
-    Server: CouchDB (Erlang/OTP)
-    Transfer-Encoding: chunked
-
-    {
-        "offset": 0,
-        "rows": [
-            {
-                "id": "dummy-doc",
-                "key": {
-                    "foo": "bar"
-                },
-                "value": null
-            },
-            {
-                "id": "dummy-doc",
-                "key": {},
-                "value": null
-            },
-            {
-                "id": "dummy-doc",
-                "key": [
-                    3
-                ],
-                "value": null
-            },
-            {
-                "id": "dummy-doc",
-                "key": [
-                    2,
-                    3
-                ],
-                "value": null
-            },
-            {
-                "id": "dummy-doc",
-                "key": [
-                    1,
-                    2,
-                    3
-                ],
-                "value": null
-            },
-            {
-                "id": "dummy-doc",
-                "key": [],
-                "value": null
-            },
-            {
-                "id": "dummy-doc",
-                "key": "\u043f\u0440\u0438\u0432\u0435\u0442",
-                "value": null
-            },
-            {
-                "id": "dummy-doc",
-                "key": "Hello",
-                "value": null
-            },
-            {
-                "id": "dummy-doc",
-                "key": "hello",
-                "value": null
-            },
-            {
-                "id": "dummy-doc",
-                "key": "10",
-                "value": null
-            },
-            {
-                "id": "dummy-doc",
-                "key": 42,
-                "value": null
-            },
-            {
-                "id": "dummy-doc",
-                "key": 10,
-                "value": null
-            },
-            {
-                "id": "dummy-doc",
-                "key": 1,
-                "value": null
-            },
-            {
-                "id": "dummy-doc",
-                "key": 0,
-                "value": null
-            },
-            {
-                "id": "dummy-doc",
-                "key": true,
-                "value": null
-            },
-            {
-                "id": "dummy-doc",
-                "key": false,
-                "value": null
-            },
-            {
-                "id": "dummy-doc",
-                "key": null,
-                "value": null
-            }
-        ],
-        "total_rows": 17
-    }
-
-Sorting order and startkey/endkey
----------------------------------
-
-The sorting direction is applied before the filtering is applied using the
-``startkey`` and ``endkey`` query arguments. For example, the following query:
-
-.. code-block:: http
-
-    GET http://couchdb:5984/recipes/_design/recipes/_view/by_ingredient?startkey=%22carrots%22&endkey=%22egg%22 HTTP/1.1
-    Accept: application/json
-
-will operate correctly when listing all the matching entries between
-``carrots`` and ``egg``. If the order of output is reversed with the
-``descending`` query argument, the view request will return no entries:
-
-.. code-block:: http
-
-    GET /recipes/_design/recipes/_view/by_ingredient?descending=true&startkey=%22carrots%22&endkey=%22egg%22 HTTP/1.1
-    Accept: application/json
-    Host: localhost:5984
-
-    {
-        "total_rows" : 26453,
-        "rows" : [],
-        "offset" : 21882
-    }
-
-The results will be empty because the entries in the view are reversed before
-the key filter is applied, and therefore the ``endkey`` of “egg” will be seen
-before the ``startkey`` of “carrots”, resulting in an empty list.
-
-Instead, you should reverse the values supplied to the ``startkey`` and
-``endkey`` parameters to match the descending sorting applied to the keys.
-Changing the previous example to:
-
-.. code-block:: http
-
-    GET /recipes/_design/recipes/_view/by_ingredient?descending=true&startkey=%22egg%22&endkey=%22carrots%22 HTTP/1.1
-    Accept: application/json
-    Host: localhost:5984
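
A helper along these lines can take care of the swap automatically. A minimal
sketch (the function name is illustrative): the caller always supplies the
range in ascending order, and the bounds are exchanged for descending queries:

```python
import json
from urllib.parse import urlencode

def range_params(startkey, endkey, descending=False):
    # Callers supply the range in ascending order; for a descending
    # query the two bounds must be exchanged, as described above.
    if descending:
        startkey, endkey = endkey, startkey
    params = {"startkey": json.dumps(startkey),
              "endkey": json.dumps(endkey)}
    if descending:
        params["descending"] = "true"
    return urlencode(params)

qs = range_params("carrots", "egg", descending=True)
```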
-
-.. _api/ddoc/view/sorting/raw:
-
-Raw collation
--------------
-
-By default CouchDB uses an `ICU`_ driver for sorting view results. It's
-possible to use binary collation instead for faster view builds where Unicode
-collation is not important.
-
-To use raw collation, add ``"options": {"collation": "raw"}`` within the view
-object of the design document. After that, the views will be regenerated and
-the new order will be applied to the appropriate view.
-
-.. seealso::
-    :ref:`views/collation`
-
-.. _ICU: http://site.icu-project.org/
-
-.. _api/ddoc/view/limiting:
-
-Using Limits and Skipping Rows
-==============================
-
-By default, views return all results. That's fine when the number of results
-is small, but it may lead to problems when there are billions of results,
-since the client may have to read them all and consume all available memory.
-
-It's possible to reduce the number of returned rows by specifying the
-``limit`` query parameter. For example, retrieving the list of recipes using
-the ``by_title`` view limited to 5 returns only 5 records, while there are
-2667 records in the view in total:
-
-**Request**:
-
-.. code-block:: http
-
-    GET /recipes/_design/recipes/_view/by_title?limit=5 HTTP/1.1
-    Accept: application/json
-    Host: localhost:5984
-
-**Response**:
-
-.. code-block:: http
-
-    HTTP/1.1 200 OK
-    Cache-Control: must-revalidate
-    Content-Type: application/json
-    Date: Wed, 21 Aug 2013 09:14:13 GMT
-    ETag: "9Q6Q2GZKPH8D5F8L7PB6DBSS9"
-    Server: CouchDB (Erlang/OTP)
-    Transfer-Encoding: chunked
-
-    {
-        "offset" : 0,
-        "rows" : [
-            {
-                "id" : "3-tiersalmonspinachandavocadoterrine",
-                "key" : "3-tier salmon, spinach and avocado terrine",
-                "value" : [
-                    null,
-                    "3-tier salmon, spinach and avocado terrine"
-                ]
-            },
-            {
-                "id" : "Aberffrawcake",
-                "key" : "Aberffraw cake",
-                "value" : [
-                    null,
-                    "Aberffraw cake"
-                ]
-            },
-            {
-                "id" : "Adukiandorangecasserole-microwave",
-                "key" : "Aduki and orange casserole - microwave",
-                "value" : [
-                    null,
-                    "Aduki and orange casserole - microwave"
-                ]
-            },
-            {
-                "id" : "Aioli-garlicmayonnaise",
-                "key" : "Aioli - garlic mayonnaise",
-                "value" : [
-                    null,
-                    "Aioli - garlic mayonnaise"
-                ]
-            },
-            {
-                "id" : "Alabamapeanutchicken",
-                "key" : "Alabama peanut chicken",
-                "value" : [
-                    null,
-                    "Alabama peanut chicken"
-                ]
-            }
-        ],
-        "total_rows" : 2667
-    }
-
-To omit some records you may use the ``skip`` query parameter:
-
-**Request**:
-
-.. code-block:: http
-
-    GET /recipes/_design/recipes/_view/by_title?limit=3&skip=2 HTTP/1.1
-    Accept: application/json
-    Host: localhost:5984
-
-**Response**:
-
-.. code-block:: http
-
-    HTTP/1.1 200 OK
-    Cache-Control: must-revalidate
-    Content-Type: application/json
-    Date: Wed, 21 Aug 2013 09:14:13 GMT
-    ETag: "H3G7YZSNIVRRHO5FXPE16NJHN"
-    Server: CouchDB (Erlang/OTP)
-    Transfer-Encoding: chunked
-
-    {
-        "offset" : 2,
-        "rows" : [
-            {
-                "id" : "Adukiandorangecasserole-microwave",
-                "key" : "Aduki and orange casserole - microwave",
-                "value" : [
-                    null,
-                    "Aduki and orange casserole - microwave"
-                ]
-            },
-            {
-                "id" : "Aioli-garlicmayonnaise",
-                "key" : "Aioli - garlic mayonnaise",
-                "value" : [
-                    null,
-                    "Aioli - garlic mayonnaise"
-                ]
-            },
-            {
-                "id" : "Alabamapeanutchicken",
-                "key" : "Alabama peanut chicken",
-                "value" : [
-                    null,
-                    "Alabama peanut chicken"
-                ]
-            }
-        ],
-        "total_rows" : 2667
-    }
-
-.. warning::
-    Using the ``limit`` and ``skip`` parameters for results pagination is not
-    recommended. Read the :ref:`pagination recipe <views/pagination>` to learn
-    why, and how to paginate efficiently.
-
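-The cost of ``skip`` grows with its value: the server still walks past every
-skipped row in the index before producing output, while key-based pagination
-lets it seek straight to the first row of the page. A rough, plain-Python
-illustration of the difference (the row data and page size here are made up,
-and this is a sketch of the idea, not CouchDB code):
-
-.. code-block:: python
-
-    rows = [("key-%04d" % i, i) for i in range(1000)]  # stand-in for view rows
-
-    def page_with_skip(rows, skip, limit):
-        # Walks past ``skip`` rows before collecting the page, so the
-        # work grows with the value of ``skip``.
-        it = iter(rows)
-        for _ in range(skip):
-            next(it)
-        return [row for _, row in zip(range(limit), it)]
-
-    def page_with_startkey(rows, startkey, limit):
-        # Models a B-tree seek to ``startkey`` (linear here only for brevity).
-        start = next(i for i, (key, _) in enumerate(rows) if key >= startkey)
-        return rows[start:start + limit]
-
-    # Both return the same page; only the amount of work differs.
-    assert page_with_skip(rows, 500, 3) == page_with_startkey(rows, "key-0500", 3)
-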
-.. _api/ddoc/view/multiple_queries:
-
-Sending multiple queries to a view
-==================================
-
-.. versionadded:: 2.2
-
-.. http:post:: /{db}/_design/{ddoc}/_view/{view}/queries
-    :synopsis: Returns results for the specified queries
-
-    Executes multiple specified view queries against the view function
-    from the specified design document.
-
-    :param db: Database name
-    :param ddoc: Design document name
-    :param view: View function name
-
-    :<header Content-Type: - :mimetype:`application/json`
-    :<header Accept: - :mimetype:`application/json`
-
-    :<json queries:  An array of query objects with fields for the
-        parameters of each individual view query to be executed. The field names
-        and their meaning are the same as the query parameters of a
-        regular :ref:`view request <api/ddoc/view>`.
-
-    :>header Content-Type: - :mimetype:`application/json`
-    :>header ETag: Response signature
-    :>header Transfer-Encoding: ``chunked``
-
-    :>json array results: An array of result objects - one for each query. Each
-        result object contains the same fields as the response to a regular
-        :ref:`view request <api/ddoc/view>`.
-
-    :code 200: Request completed successfully
-    :code 400: Invalid request
-    :code 401: Read permission required
-    :code 404: Specified database, design document or view is missing
-    :code 500: View function execution error
-
-**Request**:
-
-.. code-block:: http
-
-    POST /recipes/_design/recipes/_view/by_title/queries HTTP/1.1
-    Content-Type: application/json
-    Accept: application/json
-    Host: localhost:5984
-
-    {
-        "queries": [
-            {
-                "keys": [
-                    "meatballs",
-                    "spaghetti"
-                ]
-            },
-            {
-                "limit": 3,
-                "skip": 2
-            }
-        ]
-    }
-
-**Response**:
-
-.. code-block:: http
-
-    HTTP/1.1 200 OK
-    Cache-Control: must-revalidate
-    Content-Type: application/json
-    Date: Wed, 20 Dec 2016 11:17:07 GMT
-    ETag: "1H8RGBCK3ABY6ACDM7ZSC30QK"
-    Server: CouchDB (Erlang/OTP)
-    Transfer-Encoding: chunked
-
-    {
-        "results" : [
-            {
-                "offset": 0,
-                "rows": [
-                    {
-                        "id": "SpaghettiWithMeatballs",
-                        "key": "meatballs",
-                        "value": 1
-                    },
-                    {
-                        "id": "SpaghettiWithMeatballs",
-                        "key": "spaghetti",
-                        "value": 1
-                    },
-                    {
-                        "id": "SpaghettiWithMeatballs",
-                        "key": "tomato sauce",
-                        "value": 1
-                    }
-                ],
-                "total_rows": 3
-            },
-            {
-                "offset" : 2,
-                "rows" : [
-                    {
-                        "id" : "Adukiandorangecasserole-microwave",
-                        "key" : "Aduki and orange casserole - microwave",
-                        "value" : [
-                            null,
-                            "Aduki and orange casserole - microwave"
-                        ]
-                    },
-                    {
-                        "id" : "Aioli-garlicmayonnaise",
-                        "key" : "Aioli - garlic mayonnaise",
-                        "value" : [
-                            null,
-                            "Aioli - garlic mayonnaise"
-                        ]
-                    },
-                    {
-                        "id" : "Alabamapeanutchicken",
-                        "key" : "Alabama peanut chicken",
-                        "value" : [
-                            null,
-                            "Alabama peanut chicken"
-                        ]
-                    }
-                ],
-                "total_rows" : 2667
-            }
-        ]
-    }
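-
-A client can build the ``queries`` payload as ordinary JSON. A minimal sketch
-(plain Python; the helper name is made up, and the parameter values mirror the
-request above):
-
-.. code-block:: python
-
-    import json
-
-    def build_queries_body(*queries):
-        # Each positional argument is one query object, using the same
-        # field names as regular view query parameters.
-        return json.dumps({"queries": list(queries)})
-
-    body = build_queries_body(
-        {"keys": ["meatballs", "spaghetti"]},
-        {"limit": 3, "skip": 2},
-    )
-    assert json.loads(body)["queries"][1]["limit"] == 3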
diff --git a/src/api/document/attachments.rst b/src/api/document/attachments.rst
deleted file mode 100644
index 8baf07e..0000000
--- a/src/api/document/attachments.rst
+++ /dev/null
@@ -1,315 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _api/doc/attachment:
-
-======================
-``/db/doc/attachment``
-======================
-
-.. http:head:: /{db}/{docid}/{attname}
-    :synopsis: Returns bare information in the HTTP Headers for the attachment
-
-    Returns the HTTP headers containing a minimal amount of information about
-    the specified attachment. The method supports the same query arguments as
-    the :get:`/{db}/{docid}/{attname}` method, but only the header information
-    (including attachment size, encoding and the MD5 hash as an
-    :header:`ETag`) is returned.
-
-    :param db: Database name
-    :param docid: Document ID
-    :param attname: Attachment name
-
-    :<header If-Match: Document's revision. Alternative to ``rev`` query
-      parameter
-    :<header If-None-Match: Attachment's base64 encoded MD5 binary digest.
-      *Optional*
-
-    :query string rev: Document's revision. *Optional*
-
-    :>header Accept-Ranges: :ref:`Range request aware
-      <api/doc/attachment/range>`. Used for attachments with
-      :mimetype:`application/octet-stream` content type
-    :>header Content-Encoding: Used compression codec. Available if
-      attachment's ``content_type`` is in :config:option:`list of compressible
-      types <attachments/compressible_types>`
-    :>header Content-Length: Attachment size. If a compression codec was used,
-      this value reflects the compressed size, not the actual one
-    :>header ETag: Double quoted base64 encoded MD5 binary digest
-
-    :code 200: Attachment exists
-    :code 401: Read privilege required
-    :code 404: Specified database, document or attachment was not found
-
-    **Request**:
-
-    .. code-block:: http
-
-        HEAD /recipes/SpaghettiWithMeatballs/recipe.txt HTTP/1.1
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Accept-Ranges: none
-        Cache-Control: must-revalidate
-        Content-Encoding: gzip
-        Content-Length: 100
-        Content-Type: text/plain
-        Date: Thu, 15 Aug 2013 12:42:42 GMT
-        ETag: "vVa/YgiE1+Gh0WfoFJAcSg=="
-        Server: CouchDB (Erlang/OTP)
-
-.. http:get:: /{db}/{docid}/{attname}
-    :synopsis: Gets the attachment of a document
-
-    Returns the file attachment associated with the document. The raw data of
-    the associated attachment is returned (just as if you were accessing a
-    static file). The returned :header:`Content-Type` will be the same as the
-    content type set when the document attachment was submitted into the
-    database.
-
-    :param db: Database name
-    :param docid: Document ID
-    :param attname: Attachment name
-
-    :<header If-Match: Document's revision. Alternative to ``rev`` query
-      parameter
-    :<header If-None-Match: Attachment's base64 encoded MD5 binary digest.
-      *Optional*
-
-    :query string rev: Document's revision. *Optional*
-
-    :>header Accept-Ranges: :ref:`Range request aware
-      <api/doc/attachment/range>`. Used for attachments with
-      :mimetype:`application/octet-stream`
-    :>header Content-Encoding: Used compression codec. Available if
-      attachment's ``content_type`` is in :config:option:`list of compressible
-      types <attachments/compressible_types>`
-    :>header Content-Length: Attachment size. If a compression codec is used,
-      this value reflects the compressed size, not the actual one
-    :>header ETag: Double quoted base64 encoded MD5 binary digest
-
-    :response: Stored content
-
-    :code 200: Attachment exists
-    :code 401: Read privilege required
-    :code 404: Specified database, document or attachment was not found
-
-.. http:put:: /{db}/{docid}/{attname}
-    :synopsis: Adds an attachment of a document
-
-    Uploads the supplied content as an attachment to the specified document.
-    The attachment name provided must be a URL encoded string. You must supply
-    the Content-Type header, and for an existing document you must also supply
-    either the ``rev`` query argument or the :header:`If-Match` HTTP header. If
-    the revision is omitted, a new, otherwise empty document will be created
-    with the provided attachment, or a conflict will occur.
-
-    When uploading an attachment using an existing attachment name, CouchDB
-    will update the corresponding stored content of the database. Since you
-    must supply the revision information to add an attachment to the document,
-    this serves as validation to update the existing attachment.
-
-    .. note::
-        Uploading an attachment updates the corresponding document revision.
-        Revisions are tracked for the parent document, not individual
-        attachments.
-
-    :param db: Database name
-    :param docid: Document ID
-    :param attname: Attachment name
-
-    :<header Content-Type: Attachment MIME type. Default: :mimetype:`application/octet-stream` *Optional*
-    :<header If-Match: Document revision. Alternative to ``rev`` query parameter
-
-    :query string rev: Document revision. *Optional*
-
-    :>json string id: Document ID
-    :>json boolean ok: Operation status
-    :>json string rev: Revision MVCC token
-
-    :code 201: Attachment created and stored on disk
-    :code 202: Request was accepted, but changes are not yet stored on disk
-    :code 400: Invalid request body or parameters
-    :code 401: Write privileges required
-    :code 404: Specified database, document or attachment was not found
-    :code 409: Document's revision wasn't specified or it's not the latest
-
-    **Request**:
-
-    .. code-block:: http
-
-        PUT /recipes/SpaghettiWithMeatballs/recipe.txt HTTP/1.1
-        Accept: application/json
-        Content-Length: 86
-        Content-Type: text/plain
-        Host: localhost:5984
-        If-Match: 1-917fa2381192822767f010b95b45325b
-
-        1. Cook spaghetti
-        2. Cook meatballs
-        3. Mix them
-        4. Add tomato sauce
-        5. ...
-        6. PROFIT!
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 201 Created
-        Cache-Control: must-revalidate
-        Content-Length: 85
-        Content-Type: application/json
-        Date: Thu, 15 Aug 2013 12:38:04 GMT
-        ETag: "2-ce91aed0129be8f9b0f650a2edcfd0a4"
-        Location: http://localhost:5984/recipes/SpaghettiWithMeatballs/recipe.txt
-        Server: CouchDB (Erlang/OTP)
-
-        {
-            "id": "SpaghettiWithMeatballs",
-            "ok": true,
-            "rev": "2-ce91aed0129be8f9b0f650a2edcfd0a4"
-        }
-
-.. http:delete:: /{db}/{docid}/{attname}
-    :synopsis: Deletes an attachment of a document
-
-    Deletes the attachment with filename ``{attname}`` of the specified ``doc``.
-    You must supply the ``rev`` query parameter or :header:`If-Match` with the
-    current revision to delete the attachment.
-
-    .. note::
-        Deleting an attachment updates the corresponding document revision.
-        Revisions are tracked for the parent document, not individual attachments.
-
-    :param db: Database name
-    :param docid: Document ID
-    :param attname: Attachment name
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :<header If-Match: Document revision. Alternative to ``rev`` query parameter
-
-    :query string rev: Document revision. *Required*
-    :query string batch: Store changes in :ref:`batch mode
-      <api/doc/batch-writes>` Possible values: ``ok``. *Optional*
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :>header ETag: Double quoted document's new revision
-    :>json string id: Document ID
-    :>json boolean ok: Operation status
-    :>json string rev: Revision MVCC token
-    :code 200: Attachment successfully removed
-    :code 202: Request was accepted, but changes are not yet stored on disk
-    :code 400: Invalid request body or parameters
-    :code 401: Write privileges required
-    :code 404: Specified database, document or attachment was not found
-    :code 409: Document's revision wasn't specified or it's not the latest
-
-    **Request**:
-
-    .. code-block:: http
-
-        DELETE /recipes/SpaghettiWithMeatballs/recipe.txt?rev=6-440b2dd39c20413045748b42c6aba6e2 HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    Alternatively, instead of the ``rev`` query parameter you may use the
-    :header:`If-Match` header:
-
-    .. code-block:: http
-
-        DELETE /recipes/SpaghettiWithMeatballs/recipe.txt HTTP/1.1
-        Accept: application/json
-        If-Match: 6-440b2dd39c20413045748b42c6aba6e2
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Length: 85
-        Content-Type: application/json
-        Date: Wed, 14 Aug 2013 12:23:13 GMT
-        ETag: "7-05185cf5fcdf4b6da360af939431d466"
-        Server: CouchDB (Erlang/OTP)
-
-        {
-            "id": "SpaghettiWithMeatballs",
-            "ok": true,
-            "rev": "7-05185cf5fcdf4b6da360af939431d466"
-        }
-
-.. _api/doc/attachment/range:
-
-HTTP Range Requests
-===================
-
-HTTP allows you to specify byte ranges for requests. This allows the
-implementation of resumable downloads and skippable audio and video streams
-alike. This is available for all attachments inside CouchDB.
-
-This is just a quick run-through of how this looks under the hood. Usually,
-you will have larger binary files to serve from CouchDB, like MP3s and videos,
-but to make things a little more obvious, we use a text file here (note that
-we use the :mimetype:`application/octet-stream` :header:`Content-Type` instead
-of :mimetype:`text/plain`).
-
-.. code-block:: bash
-
-    shell> cat file.txt
-    My hovercraft is full of eels!
-
-Now let's store this text file as an attachment in CouchDB. First, we create a
-database:
-
-.. code-block:: bash
-
-    shell> curl -X PUT http://127.0.0.1:5984/test
-    {"ok":true}
-
-Then we create a new document and the file attachment in one go:
-
-.. code-block:: bash
-
-    shell> curl -X PUT http://127.0.0.1:5984/test/doc/file.txt \
-                -H "Content-Type: application/octet-stream" -d@file.txt
-    {"ok":true,"id":"doc","rev":"1-287a28fa680ae0c7fb4729bf0c6e0cf2"}
-
-Now we can request the whole file easily:
-
-.. code-block:: bash
-
-    shell> curl -X GET http://127.0.0.1:5984/test/doc/file.txt
-    My hovercraft is full of eels!
-
-But say we only want the first 13 bytes:
-
-.. code-block:: bash
-
-    shell> curl -X GET http://127.0.0.1:5984/test/doc/file.txt \
-                -H "Range: bytes=0-12"
-    My hovercraft
-
-HTTP supports many ways to specify single and even multiple byte
-ranges. Read all about it in :rfc:`2616#section-14.27`.
-
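-A ``Range`` header of ``bytes=0-12`` selects an inclusive byte range, so it
-returns 13 bytes. A quick sketch of that mapping (plain Python; the content
-mirrors the example above, and the helper name is made up):
-
-.. code-block:: python
-
-    content = b"My hovercraft is full of eels!"
-
-    def apply_range(data, first, last):
-        # HTTP byte ranges are inclusive on both ends.
-        return data[first:last + 1]
-
-    assert apply_range(content, 0, 12) == b"My hovercraft"
-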
-.. note::
-    Databases that have been created with CouchDB 1.0.2 or earlier will support
-    range requests in |version|, but they are using a less-optimal algorithm.
-    If you plan to make heavy use of this feature, make sure to compact your
-    database with CouchDB |version| to take advantage of a better algorithm to
-    find byte ranges.
diff --git a/src/api/document/common.rst b/src/api/document/common.rst
deleted file mode 100644
index b8e5cd1..0000000
--- a/src/api/document/common.rst
+++ /dev/null
@@ -1,1215 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _api/doc:
-
-===========
-``/db/doc``
-===========
-
-.. http:head:: /{db}/{docid}
-    :synopsis: Returns bare information in the HTTP Headers for the document
-
-    Returns the HTTP Headers containing a minimal amount of information about
-    the specified document. The method supports the same query arguments as the
-    :get:`/{db}/{docid}` method, but only the header information (including
-    document size and the revision as an ETag) is returned.
-
-    The :header:`ETag` header shows the current revision for the requested
-    document, and the :header:`Content-Length` specifies the length of the
-    data, if the document were requested in full.
-
-    If you add any of the query arguments (see :get:`/{db}/{docid}`), the
-    resulting HTTP headers will correspond to what would be returned.
-
-    :param db: Database name
-    :param docid: Document ID
-    :<header If-None-Match: Double quoted document's revision token
-    :>header Content-Length: Document size
-    :>header ETag: Double quoted document's revision token
-    :code 200: Document exists
-    :code 304: Document wasn't modified since specified revision
-    :code 401: Read privilege required
-    :code 404: Document not found
-
-    **Request**:
-
-    .. code-block:: http
-
-        HEAD /db/SpaghettiWithMeatballs HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Length: 660
-        Content-Type: application/json
-        Date: Tue, 13 Aug 2013 21:35:37 GMT
-        ETag: "12-151bb8678d45aaa949ec3698ef1c7e78"
-        Server: CouchDB (Erlang/OTP)
-
-.. http:get:: /{db}/{docid}
-    :synopsis: Returns the document
-
-    Returns document by the specified ``docid`` from the specified ``db``.
-    Unless you request a specific revision, the latest revision of the document
-    will always be returned.
-
-    :param db: Database name
-    :param docid: Document ID
-
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`multipart/related`
-                     - :mimetype:`multipart/mixed`
-                     - :mimetype:`text/plain`
-    :<header If-None-Match: Double quoted document's revision token
-
-    :query boolean attachments: Includes attachments bodies in response.
-      Default is ``false``
-    :query boolean att_encoding_info: Includes encoding information in
-      attachment stubs if the particular attachment is compressed. Default is
-      ``false``.
-    :query array atts_since: Includes attachments only since specified
-      revisions. Doesn't include attachments for specified revisions.
-      *Optional*
-    :query boolean conflicts: Includes information about conflicts in document.
-      Default is ``false``
-    :query boolean deleted_conflicts: Includes information about deleted
-      conflicted revisions. Default is ``false``
-    :query boolean latest: Forces retrieving latest "leaf" revision, no matter
-      what ``rev`` was requested. Default is ``false``
-    :query boolean local_seq: Includes last update sequence for the
-      document. Default is ``false``
-    :query boolean meta: Acts same as specifying all ``conflicts``,
-      ``deleted_conflicts`` and ``revs_info`` query parameters. Default is
-      ``false``
-    :query array open_revs: Retrieves documents of specified leaf revisions.
-      Additionally, it accepts the value ``all`` to return all leaf revisions.
-      *Optional*
-    :query string rev: Retrieves document of specified revision. *Optional*
-    :query boolean revs: Includes list of all known document revisions.
-      Default is ``false``
-    :query boolean revs_info: Includes detailed information for all known
-      document revisions. Default is ``false``
-
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`multipart/related`
-                           - :mimetype:`multipart/mixed`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :>header ETag: Double quoted document's revision token. Not available when
-      retrieving conflicts-related information
-    :>header Transfer-Encoding: ``chunked``. Available if requested with query
-      parameter ``open_revs``
-
-    :>json string _id: Document ID
-    :>json string _rev: Revision MVCC token
-    :>json boolean _deleted: Deletion flag. Available if document was removed
-    :>json object _attachments: Attachment's stubs. Available if document has
-      any attachments
-    :>json array _conflicts: List of conflicted revisions. Available if
-      requested with ``conflicts=true`` query parameter
-    :>json array _deleted_conflicts: List of deleted conflicted revisions.
-      Available if requested with ``deleted_conflicts=true`` query parameter
-    :>json string _local_seq: Document's update sequence in current database.
-      Available if requested with ``local_seq=true`` query parameter
-    :>json array _revs_info: List of objects with information about local
-      revisions and their status. Available if requested with the
-      ``revs_info=true`` query parameter
-    :>json object _revisions: List of local revision tokens. Available if
-      requested with the ``revs=true`` query parameter
-
-    :code 200: Request completed successfully
-    :code 304: Document wasn't modified since specified revision
-    :code 400: The format of the request or revision was invalid
-    :code 401: Read privilege required
-    :code 404: Document not found
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /recipes/SpaghettiWithMeatballs HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Length: 660
-        Content-Type: application/json
-        Date: Tue, 13 Aug 2013 21:35:37 GMT
-        ETag: "1-917fa2381192822767f010b95b45325b"
-        Server: CouchDB (Erlang/OTP)
-
-        {
-            "_id": "SpaghettiWithMeatballs",
-            "_rev": "1-917fa2381192822767f010b95b45325b",
-            "description": "An Italian-American dish that usually consists of spaghetti, tomato sauce and meatballs.",
-            "ingredients": [
-                "spaghetti",
-                "tomato sauce",
-                "meatballs"
-            ],
-            "name": "Spaghetti with meatballs"
-        }
-
-.. http:put:: /{db}/{docid}
-    :synopsis: Creates a new document or new version of an existing document
-
-    The :method:`PUT` method creates a new named document, or creates a new
-    revision of the existing document. Unlike :post:`/{db}`, you must specify
-    the document ID in the request URL.
-
-    When updating an existing document, the current document revision must be
-    included in the document (i.e. the request body), as the ``rev`` query
-    parameter, or in the ``If-Match`` request header.
-
-    :param db: Database name
-    :param docid: Document ID
-
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :<header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`multipart/related`
-    :<header If-Match: Document's revision. Alternative to ``rev`` query
-      parameter or document key. *Optional*
-
-    :query string rev: Document's revision if updating an existing document.
-      Alternative to ``If-Match`` header or document key. *Optional*
-    :query string batch: Stores document in :ref:`batch mode
-      <api/doc/batch-writes>`. Possible values: ``ok``. *Optional*
-    :query boolean new_edits: Prevents insertion of a :ref:`conflicting
-      document <replication/conflicts>`. Possible values: ``true`` (default)
-      and ``false``. If ``false``, a well-formed ``_rev`` must be included in
-      the document. ``new_edits=false`` is used by the replicator to insert
-      documents into the target database even if that leads to the creation
-      of conflicts. *Optional*, **The ``false`` value is intended for use
-      only by the replicator.**
-
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-                           - :mimetype:`multipart/related`
-    :>header ETag: Quoted document's new revision
-    :>header Location: Document URI
-    :>json string id: Document ID
-    :>json boolean ok: Operation status
-    :>json string rev: Revision MVCC token
-
-    :code 201: Document created and stored on disk
-    :code 202: Document data accepted, but not yet stored on disk
-    :code 400: Invalid request body or parameters
-    :code 401: Write privileges required
-    :code 404: Specified database or document ID doesn't exist
-    :code 409: Document with the specified ID already exists or specified
-      revision is not the latest for the target document
-
-    **Request**:
-
-    .. code-block:: http
-
-        PUT /recipes/SpaghettiWithMeatballs HTTP/1.1
-        Accept: application/json
-        Content-Length: 196
-        Content-Type: application/json
-        Host: localhost:5984
-
-        {
-            "description": "An Italian-American dish that usually consists of spaghetti, tomato sauce and meatballs.",
-            "ingredients": [
-                "spaghetti",
-                "tomato sauce",
-                "meatballs"
-            ],
-            "name": "Spaghetti with meatballs"
-        }
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 201 Created
-        Cache-Control: must-revalidate
-        Content-Length: 85
-        Content-Type: application/json
-        Date: Wed, 14 Aug 2013 20:31:39 GMT
-        ETag: "1-917fa2381192822767f010b95b45325b"
-        Location: http://localhost:5984/recipes/SpaghettiWithMeatballs
-        Server: CouchDB (Erlang/OTP)
-
-        {
-            "id": "SpaghettiWithMeatballs",
-            "ok": true,
-            "rev": "1-917fa2381192822767f010b95b45325b"
-        }
-
-.. http:delete:: /{db}/{docid}
-    :synopsis: Deletes the document
-
-    Marks the specified document as deleted by adding a field ``_deleted`` with
-    the value ``true``. Documents with this field will not be returned within
-    requests anymore, but stay in the database. You must supply the current
-    (latest) revision, either by using the ``rev`` parameter or by using the
-    :header:`If-Match` header to specify the revision.
-
-    .. note::
-        CouchDB doesn't completely delete the specified document. Instead, it
-        leaves a tombstone with very basic information about the document. The
-        tombstone is required so that the delete action can be replicated
-        across databases.
-
-    .. seealso::
-        :ref:`Retrieving Deleted Documents <api/doc/retrieving-deleted-documents>`
-
-    :param db: Database name
-    :param docid: Document ID
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :<header If-Match: Document's revision. Alternative to ``rev`` query
-      parameter
-
-    :query string rev: Actual document's revision
-    :query string batch: Stores document in :ref:`batch mode
-      <api/doc/batch-writes>` Possible values: ``ok``. *Optional*
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :>header ETag: Double quoted document's new revision
-    :>json string id: Document ID
-    :>json boolean ok: Operation status
-    :>json string rev: Revision MVCC token
-    :code 200: Document successfully removed
-    :code 202: Request was accepted, but changes are not yet stored on disk
-    :code 400: Invalid request body or parameters
-    :code 401: Write privileges required
-    :code 404: Specified database or document ID doesn't exist
-    :code 409: Specified revision is not the latest for the target document
-
-    **Request**:
-
-    .. code-block:: http
-
-        DELETE /recipes/FishStew?rev=1-9c65296036141e575d32ba9c034dd3ee HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    Alternatively, instead of the ``rev`` query parameter you may use the
-    :header:`If-Match` header:
-
-    .. code-block:: http
-
-        DELETE /recipes/FishStew HTTP/1.1
-        Accept: application/json
-        If-Match: 1-9c65296036141e575d32ba9c034dd3ee
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Length: 71
-        Content-Type: application/json
-        Date: Wed, 14 Aug 2013 12:23:13 GMT
-        ETag: "2-056f5f44046ecafc08a2bc2b9c229e20"
-        Server: CouchDB (Erlang/OTP)
-
-        {
-            "id": "FishStew",
-            "ok": true,
-            "rev": "2-056f5f44046ecafc08a2bc2b9c229e20"
-        }
-
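-Deletion therefore behaves like a document update that sets ``_deleted``. A
-minimal sketch of what the resulting tombstone looks like (plain Python; the
-helper name is made up and the field set is illustrative):
-
-.. code-block:: python
-
-    doc = {"_id": "FishStew", "_rev": "1-9c65296036141e575d32ba9c034dd3ee"}
-
-    def tombstone(doc, new_rev):
-        # Keeps only the identity fields plus the deletion flag.
-        return {"_id": doc["_id"], "_rev": new_rev, "_deleted": True}
-
-    stone = tombstone(doc, "2-056f5f44046ecafc08a2bc2b9c229e20")
-    assert stone["_deleted"] is True
-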
-.. _api/doc/copy:
-
-.. http:copy:: /{db}/{docid}
-    :synopsis: Copies the document within the same database
-
-    The :method:`COPY` (which is non-standard HTTP) copies an existing
-    document to a new or existing document. Copying a document is only possible
-    within the same database.
-
-    The source document is specified on the request line, with the
-    :header:`Destination` header of the request specifying the target
-    document.
-
-    :param db: Database name
-    :param docid: Document ID
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :<header Destination: Destination document. Must contain the target
-      document ID, and optionally the target document revision, if copying to
-      an existing document.  See :ref:`Copying to an Existing Document
-      <copy_to_existing_document>`.
-    :<header If-Match: Source document's revision. Alternative to ``rev`` query
-      parameter
-    :query string rev: Revision to copy from. *Optional*
-    :query string batch: Stores document in :ref:`batch mode
-      <api/doc/batch-writes>` Possible values: ``ok``. *Optional*
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :>header ETag: Double quoted document's new revision
-    :>header Location: Document URI
-    :>json string id: Document ID
-    :>json boolean ok: Operation status
-    :>json string rev: Revision MVCC token
-    :code 201: Document successfully created
-    :code 202: Request was accepted, but changes are not yet stored on disk
-    :code 400: Invalid request body or parameters
-    :code 401: Read or write privileges required
-    :code 404: Specified database, document ID or revision doesn't exist
-    :code 409: Document with the specified ID already exists or the specified
-      revision is not the latest for the target document
-
-    **Request**:
-
-    .. code-block:: http
-
-        COPY /recipes/SpaghettiWithMeatballs HTTP/1.1
-        Accept: application/json
-        Destination: SpaghettiWithMeatballs_Italian
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 201 Created
-        Cache-Control: must-revalidate
-        Content-Length: 93
-        Content-Type: application/json
-        Date: Wed, 14 Aug 2013 14:21:00 GMT
-        ETag: "1-e86fdf912560c2321a5fcefc6264e6d9"
-        Location: http://localhost:5984/recipes/SpaghettiWithMeatballs_Italian
-        Server: CouchDB (Erlang/OTP)
-
-        {
-            "id": "SpaghettiWithMeatballs_Italian",
-            "ok": true,
-            "rev": "1-e86fdf912560c2321a5fcefc6264e6d9"
-        }
-
-.. _api/doc/attachments:
-
-Attachments
-===========
-
-If the document includes attachments, then the returned structure will contain
-a summary of the attachments associated with the document, but not the
-attachment data itself.
-
-The JSON for the returned document will include the ``_attachments`` field,
-with one or more attachment definitions.
-
-The ``_attachments`` object keys are attachment names, while the values are
-information objects with the following structure:
-
-- **content_type** (*string*): Attachment MIME type
-- **data** (*string*): Base64-encoded content. Available if attachment content
-  is requested by using the following query parameters:
-
-    - ``attachments=true`` when querying a document
-    - ``attachments=true&include_docs=true`` when querying a
-      :ref:`changes feed <api/db/changes>` or a :ref:`view <api/ddoc/view>`
-    - ``atts_since``.
-
-- **digest** (*string*): Content hash digest.
-  It starts with a prefix that announces the hash type (``md5-``), followed by
-  the Base64-encoded hash digest
-- **encoded_length** (*number*): Compressed attachment size in bytes.
-  Available if ``content_type`` is in :config:option:`list of compressible
-  types <attachments/compressible_types>` when the attachment was added and the
-  following query parameters are specified:
-
-      - ``att_encoding_info=true`` when querying a document
-      - ``att_encoding_info=true&include_docs=true`` when querying a
-        :ref:`changes feed <api/db/changes>` or a :ref:`view <api/ddoc/view>`
-
-- **encoding** (*string*): Compression codec. Available if ``content_type`` is
-  in :config:option:`list of compressible types
-  <attachments/compressible_types>` when the attachment was added and the
-  following query parameters are specified:
-
-      - ``att_encoding_info=true`` when querying a document
-      - ``att_encoding_info=true&include_docs=true`` when querying a
-        :ref:`changes feed <api/db/changes>` or a :ref:`view <api/ddoc/view>`
-
-- **length** (*number*): Real attachment size in bytes. Not available if the
-  attachment content is requested
-- **revpos** (*number*): Revision *number* when the attachment was added
-- **stub** (*boolean*): ``true`` if the object contains stub info and no
-  content; otherwise this field is omitted from the response
-
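As an illustrative sketch (the helper name here is hypothetical, not part of any CouchDB client library), the stub metadata described above can be inspected client-side, for example to total attachment sizes before deciding whether to fetch their content:

```python
import json

# Hypothetical helper: collect the sizes of attachment stubs from a
# document fetched without attachment content (each entry has "stub": true
# and a "length", but no "data").
def stub_lengths(doc):
    atts = doc.get("_attachments", {})
    return {name: info["length"]
            for name, info in atts.items() if info.get("stub")}

doc = json.loads("""
{
    "_id": "SpaghettiWithMeatballs",
    "_attachments": {
        "my_recipe.txt": {"content_type": "text/plain", "length": 85,
                          "revpos": 5, "stub": true},
        "photo.jpg": {"content_type": "image/jpeg", "length": 165504,
                      "revpos": 2, "stub": true}
    }
}
""")

print(stub_lengths(doc))  # {'my_recipe.txt': 85, 'photo.jpg': 165504}
```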
-Basic Attachments Info
-----------------------
-
-**Request**:
-
-.. code-block:: http
-
-    GET /recipes/SpaghettiWithMeatballs HTTP/1.1
-    Accept: application/json
-    Host: localhost:5984
-
-**Response**:
-
-.. code-block:: http
-
-    HTTP/1.1 200 OK
-    Cache-Control: must-revalidate
-    Content-Length: 660
-    Content-Type: application/json
-    Date: Tue, 13 Aug 2013 21:35:37 GMT
-    ETag: "5-fd96acb3256302bf0dd2f32713161f2a"
-    Server: CouchDB (Erlang/OTP)
-
-    {
-        "_attachments": {
-            "grandma_recipe.txt": {
-                "content_type": "text/plain",
-                "digest": "md5-Ids41vtv725jyrN7iUvMcQ==",
-                "length": 1872,
-                "revpos": 4,
-                "stub": true
-            },
-            "my_recipe.txt": {
-                "content_type": "text/plain",
-                "digest": "md5-198BPPNiT5fqlLxoYYbjBA==",
-                "length": 85,
-                "revpos": 5,
-                "stub": true
-            },
-            "photo.jpg": {
-                "content_type": "image/jpeg",
-                "digest": "md5-7Pv4HW2822WY1r/3WDbPug==",
-                "length": 165504,
-                "revpos": 2,
-                "stub": true
-            }
-        },
-        "_id": "SpaghettiWithMeatballs",
-        "_rev": "5-fd96acb3256302bf0dd2f32713161f2a",
-        "description": "An Italian-American dish that usually consists of spaghetti, tomato sauce and meatballs.",
-        "ingredients": [
-            "spaghetti",
-            "tomato sauce",
-            "meatballs"
-        ],
-        "name": "Spaghetti with meatballs"
-    }
-
-Retrieving Attachments Content
-------------------------------
-
-It's possible to retrieve a document together with the content of all attached
-files by using the ``attachments=true`` query parameter:
-
-**Request**:
-
-.. code-block:: http
-
-    GET /db/pixel?attachments=true HTTP/1.1
-    Accept: application/json
-    Host: localhost:5984
-
-**Response**:
-
-.. code-block:: http
-
-    HTTP/1.1 200 OK
-    Cache-Control: must-revalidate
-    Content-Length: 553
-    Content-Type: application/json
-    Date: Wed, 14 Aug 2013 11:32:40 GMT
-    ETag: "4-f1bcae4bf7bbb92310079e632abfe3f4"
-    Server: CouchDB (Erlang/OTP)
-
-    {
-        "_attachments": {
-            "pixel.gif": {
-                "content_type": "image/gif",
-                "data": "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
-                "digest": "md5-2JdGiI2i2VELZKnwMers1Q==",
-                "revpos": 2
-            },
-            "pixel.png": {
-                "content_type": "image/png",
-                "data": "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABAQMAAAAl21bKAAAAAXNSR0IArs4c6QAAAANQTFRFAAAAp3o92gAAAAF0Uk5TAEDm2GYAAAABYktHRACIBR1IAAAACXBIWXMAAAsTAAALEwEAmpwYAAAAB3RJTUUH3QgOCx8VHgmcNwAAAApJREFUCNdjYAAAAAIAAeIhvDMAAAAASUVORK5CYII=",
-                "digest": "md5-Dgf5zxgGuchWrve73evvGQ==",
-                "revpos": 3
-            }
-        },
-        "_id": "pixel",
-        "_rev": "4-f1bcae4bf7bbb92310079e632abfe3f4"
-    }
-
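The Base64 ``data`` field can be decoded with any standard Base64 decoder. As a quick sketch, decoding the ``pixel.gif`` payload from the example above recovers the raw attachment bytes:

```python
import base64

# The "data" field of the pixel.gif attachment above, as returned by
# CouchDB with attachments=true.
data = "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7"

# Decoding yields the raw file content; a GIF begins with its magic bytes.
raw = base64.b64decode(data)
print(raw[:6])  # b'GIF89a'
```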
-Or retrieve the content of files attached since a specific revision using the
-``atts_since`` query parameter:
-
-**Request**:
-
-.. code-block:: http
-
-    GET /recipes/SpaghettiWithMeatballs?atts_since=[%224-874985bc28906155ba0e2e0538f67b05%22]  HTTP/1.1
-    Accept: application/json
-    Host: localhost:5984
-
-**Response**:
-
-.. code-block:: http
-
-    HTTP/1.1 200 OK
-    Cache-Control: must-revalidate
-    Content-Length: 760
-    Content-Type: application/json
-    Date: Tue, 13 Aug 2013 21:35:37 GMT
-    ETag: "5-fd96acb3256302bf0dd2f32713161f2a"
-    Server: CouchDB (Erlang/OTP)
-
-    {
-        "_attachments": {
-            "grandma_recipe.txt": {
-                "content_type": "text/plain",
-                "digest": "md5-Ids41vtv725jyrN7iUvMcQ==",
-                "length": 1872,
-                "revpos": 4,
-                "stub": true
-            },
-            "my_recipe.txt": {
-                "content_type": "text/plain",
-                "data": "MS4gQ29vayBzcGFnaGV0dGkKMi4gQ29vayBtZWV0YmFsbHMKMy4gTWl4IHRoZW0KNC4gQWRkIHRvbWF0byBzYXVjZQo1LiAuLi4KNi4gUFJPRklUIQ==",
-                "digest": "md5-198BPPNiT5fqlLxoYYbjBA==",
-                "revpos": 5
-            },
-            "photo.jpg": {
-                "content_type": "image/jpeg",
-                "digest": "md5-7Pv4HW2822WY1r/3WDbPug==",
-                "length": 165504,
-                "revpos": 2,
-                "stub": true
-            }
-        },
-        "_id": "SpaghettiWithMeatballs",
-        "_rev": "5-fd96acb3256302bf0dd2f32713161f2a",
-        "description": "An Italian-American dish that usually consists of spaghetti, tomato sauce and meatballs.",
-        "ingredients": [
-            "spaghetti",
-            "tomato sauce",
-            "meatballs"
-        ],
-        "name": "Spaghetti with meatballs"
-    }
-
-Retrieving Multiple Attachments Efficiently
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-As noted above, retrieving a document with ``attachments=true`` returns a
-large JSON object with all attachments included. When your document and files
-are small this is fine, but if you have attached something bigger, like media
-files (audio/video), parsing such a response may be very expensive.
-
-To solve this problem, CouchDB allows you to retrieve documents in
-:mimetype:`multipart/related` format:
-
-**Request**:
-
-.. code-block:: http
-
-    GET /recipes/secret?attachments=true HTTP/1.1
-    Accept: multipart/related
-    Host: localhost:5984
-
-**Response**:
-
-.. code-block:: http
-
-    HTTP/1.1 200 OK
-    Content-Length: 538
-    Content-Type: multipart/related; boundary="e89b3e29388aef23453450d10e5aaed0"
-    Date: Sat, 28 Sep 2013 08:08:22 GMT
-    ETag: "2-c1c6c44c4bc3c9344b037c8690468605"
-    Server: CouchDB (Erlang OTP)
-
-    --e89b3e29388aef23453450d10e5aaed0
-    Content-Type: application/json
-
-    {"_id":"secret","_rev":"2-c1c6c44c4bc3c9344b037c8690468605","_attachments":{"recipe.txt":{"content_type":"text/plain","revpos":2,"digest":"md5-HV9aXJdEnu0xnMQYTKgOFA==","length":86,"follows":true}}}
-    --e89b3e29388aef23453450d10e5aaed0
-    Content-Disposition: attachment; filename="recipe.txt"
-    Content-Type: text/plain
-    Content-Length: 86
-
-    1. Take R
-    2. Take E
-    3. Mix with L
-    4. Add some A
-    5. Serve with X
-
-    --e89b3e29388aef23453450d10e5aaed0--
-
-In this response the document body contains only attachment stub information
-and stays quite short, while each attachment is delivered as a separate MIME
-entity. This reduces memory footprint and processing overhead. Note also that
-the attachment content is sent as raw data, not Base64-encoded.
-
-Retrieving Attachments Encoding Info
-------------------------------------
-
-By using the ``att_encoding_info=true`` query parameter you may retrieve
-information about the compressed attachment size and the codec used.
-
-**Request**:
-
-.. code-block:: http
-
-    GET /recipes/SpaghettiWithMeatballs?att_encoding_info=true HTTP/1.1
-    Accept: application/json
-    Host: localhost:5984
-
-**Response**:
-
-.. code-block:: http
-
-    HTTP/1.1 200 OK
-    Cache-Control: must-revalidate
-    Content-Length: 736
-    Content-Type: application/json
-    Date: Tue, 13 Aug 2013 21:35:37 GMT
-    ETag: "5-fd96acb3256302bf0dd2f32713161f2a"
-    Server: CouchDB (Erlang/OTP)
-
-    {
-        "_attachments": {
-            "grandma_recipe.txt": {
-                "content_type": "text/plain",
-                "digest": "md5-Ids41vtv725jyrN7iUvMcQ==",
-                "encoded_length": 693,
-                "encoding": "gzip",
-                "length": 1872,
-                "revpos": 4,
-                "stub": true
-            },
-            "my_recipe.txt": {
-                "content_type": "text/plain",
-                "digest": "md5-198BPPNiT5fqlLxoYYbjBA==",
-                "encoded_length": 100,
-                "encoding": "gzip",
-                "length": 85,
-                "revpos": 5,
-                "stub": true
-            },
-            "photo.jpg": {
-                "content_type": "image/jpeg",
-                "digest": "md5-7Pv4HW2822WY1r/3WDbPug==",
-                "length": 165504,
-                "revpos": 2,
-                "stub": true
-            }
-        },
-        "_id": "SpaghettiWithMeatballs",
-        "_rev": "5-fd96acb3256302bf0dd2f32713161f2a",
-        "description": "An Italian-American dish that usually consists of spaghetti, tomato sauce and meatballs.",
-        "ingredients": [
-            "spaghetti",
-            "tomato sauce",
-            "meatballs"
-        ],
-        "name": "Spaghetti with meatballs"
-    }
-
-Creating Multiple Attachments
------------------------------
-
-To create a document with multiple attachments in a single request, simply
-inline the Base64-encoded attachment data into the document body:
-
-.. code-block:: javascript
-
-    {
-      "_id":"multiple_attachments",
-      "_attachments":
-      {
-        "foo.txt":
-        {
-          "content_type":"text\/plain",
-          "data": "VGhpcyBpcyBhIGJhc2U2NCBlbmNvZGVkIHRleHQ="
-        },
-
-       "bar.txt":
-        {
-          "content_type":"text\/plain",
-          "data": "VGhpcyBpcyBhIGJhc2U2NCBlbmNvZGVkIHRleHQ="
-        }
-      }
-    }
-
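As a sketch, such a body can be assembled programmatically. The helper below is hypothetical (not part of any CouchDB client); it Base64-encodes each file's bytes into the ``data`` fields:

```python
import base64

# Hypothetical helper: build a document body with inline Base64-encoded
# attachments, mirroring the JSON structure shown above.
def doc_with_inline_attachments(doc_id, files, content_type="text/plain"):
    return {
        "_id": doc_id,
        "_attachments": {
            name: {
                "content_type": content_type,
                "data": base64.b64encode(content).decode("ascii"),
            }
            for name, content in files.items()
        },
    }

doc = doc_with_inline_attachments(
    "multiple_attachments",
    {"foo.txt": b"This is a base64 encoded text",
     "bar.txt": b"This is a base64 encoded text"})

print(doc["_attachments"]["foo.txt"]["data"])
# VGhpcyBpcyBhIGJhc2U2NCBlbmNvZGVkIHRleHQ=
```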
-Alternatively, you can upload a document with attachments more efficiently in
-:mimetype:`multipart/related` format. This avoids having to Base64-encode
-the attachments, saving CPU and bandwidth. To do this, set the
-:header:`Content-Type` header of the :put:`/{db}/{docid}` request to
-:mimetype:`multipart/related`.
-
-The first MIME body is the document itself, which should have its own
-:header:`Content-Type` of :mimetype:`application/json`. It should also
-include an ``_attachments`` metadata object in which each attachment object
-has a key ``follows`` with value ``true``.
-
-The subsequent MIME bodies are the attachments.
-
-**Request**:
-
-.. code-block:: http
-
-    PUT /temp/somedoc HTTP/1.1
-    Accept: application/json
-    Content-Length: 372
-    Content-Type: multipart/related;boundary="abc123"
-    Host: localhost:5984
-    User-Agent: HTTPie/0.6.0
-
-    --abc123
-    Content-Type: application/json
-
-    {
-        "body": "This is a body.",
-        "_attachments": {
-            "foo.txt": {
-                "follows": true,
-                "content_type": "text/plain",
-                "length": 21
-            },
-            "bar.txt": {
-                "follows": true,
-                "content_type": "text/plain",
-                "length": 20
-            }
-        }
-    }
-
-    --abc123
-
-    this is 21 chars long
-    --abc123
-
-    this is 20 chars lon
-    --abc123--
-
-**Response**:
-
-.. code-block:: http
-
-    HTTP/1.1 201 Created
-    Cache-Control: must-revalidate
-    Content-Length: 72
-    Content-Type: application/json
-    Date: Sat, 28 Sep 2013 09:13:24 GMT
-    ETag: "1-5575e26acdeb1df561bb5b70b26ba151"
-    Location: http://localhost:5984/temp/somedoc
-    Server: CouchDB (Erlang OTP)
-
-    {
-        "id": "somedoc",
-        "ok": true,
-        "rev": "1-5575e26acdeb1df561bb5b70b26ba151"
-    }
-
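A ``multipart/related`` body like the one above can be assembled by hand. This is an illustrative sketch only (the function name and boundary value are arbitrary), not a production client:

```python
import json

# Sketch: assemble a multipart/related request body for a document with
# trailing attachment parts, following the wire format shown above.
def multipart_related_body(doc, attachments, boundary="abc123"):
    doc = dict(doc)
    # Each attachment is announced in the JSON part with "follows": true.
    doc["_attachments"] = {
        name: {"follows": True, "content_type": "text/plain",
               "length": len(body)}
        for name, body in attachments.items()
    }
    lines = ["--" + boundary,
             "Content-Type: application/json",
             "",
             json.dumps(doc),
             ""]
    # Subsequent MIME bodies carry the raw attachment content.
    for body in attachments.values():
        lines += ["--" + boundary, "", body.decode("ascii")]
    lines.append("--" + boundary + "--")
    return "\r\n".join(lines)

body = multipart_related_body(
    {"body": "This is a body."},
    {"foo.txt": b"this is 21 chars long",
     "bar.txt": b"this is 20 chars lon"})
print(body)
```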
-Getting a List of Revisions
-===========================
-
-You can obtain a list of the revisions for a given document by adding
-the ``revs=true`` parameter to the request URL:
-
-**Request**:
-
-.. code-block:: http
-
-    GET /recipes/SpaghettiWithMeatballs?revs=true  HTTP/1.1
-    Accept: application/json
-    Host: localhost:5984
-
-**Response**:
-
-.. code-block:: http
-
-    HTTP/1.1 200 OK
-    Cache-Control: must-revalidate
-    Content-Length: 584
-    Content-Type: application/json
-    Date: Wed, 14 Aug 2013 11:38:26 GMT
-    ETag: "5-fd96acb3256302bf0dd2f32713161f2a"
-    Server: CouchDB (Erlang/OTP)
-
-    {
-        "_id": "SpaghettiWithMeatballs",
-        "_rev": "8-6f5ad8db0f34af24a6e0984cd1a6cfb9",
-        "_revisions": {
-            "ids": [
-                "6f5ad8db0f34af24a6e0984cd1a6cfb9",
-                "77fba3a059497f51ec99b9b478b569d2",
-                "136813b440a00a24834f5cb1ddf5b1f1",
-                "fd96acb3256302bf0dd2f32713161f2a",
-                "874985bc28906155ba0e2e0538f67b05",
-                "0de77a37463bf391d14283e626831f2e",
-                "d795d1b924777732fdea76538c558b62",
-                "917fa2381192822767f010b95b45325b"
-            ],
-            "start": 8
-        },
-        "description": "An Italian-American dish that usually consists of spaghetti, tomato sauce and meatballs.",
-        "ingredients": [
-            "spaghetti",
-            "tomato sauce",
-            "meatballs"
-        ],
-        "name": "Spaghetti with meatballs"
-    }
-
-The returned JSON structure includes the original document, along with a
-``_revisions`` structure that holds the revision information in the following
-form:
-
-- **ids** (*array*): Array of valid revision IDs, in reverse order
-  (latest first)
-- **start** (*number*): Prefix number for the latest revision
-
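The compact ``_revisions`` form can be expanded client-side into full revision strings. A minimal sketch, using the first two entries from the response above:

```python
# Sketch: pair the descending prefix numbers with the listed hashes to
# rebuild full revision strings from a _revisions structure.
def full_revisions(revisions):
    start = revisions["start"]
    return ["%d-%s" % (start - i, rev_id)
            for i, rev_id in enumerate(revisions["ids"])]

revs = full_revisions({
    "start": 8,
    "ids": ["6f5ad8db0f34af24a6e0984cd1a6cfb9",
            "77fba3a059497f51ec99b9b478b569d2"],
})
print(revs)
# ['8-6f5ad8db0f34af24a6e0984cd1a6cfb9', '7-77fba3a059497f51ec99b9b478b569d2']
```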
-Obtaining an Extended Revision History
-======================================
-
-You can get additional information about the revisions for a given document by
-supplying the ``revs_info`` argument to the query:
-
-**Request**:
-
-.. code-block:: http
-
-    GET /recipes/SpaghettiWithMeatballs?revs_info=true  HTTP/1.1
-    Accept: application/json
-    Host: localhost:5984
-
-**Response**:
-
-.. code-block:: http
-
-    HTTP/1.1 200 OK
-    Cache-Control: must-revalidate
-    Content-Length: 802
-    Content-Type: application/json
-    Date: Wed, 14 Aug 2013 11:40:55 GMT
-    Server: CouchDB (Erlang/OTP)
-
-    {
-        "_id": "SpaghettiWithMeatballs",
-        "_rev": "8-6f5ad8db0f34af24a6e0984cd1a6cfb9",
-        "_revs_info": [
-            {
-                "rev": "8-6f5ad8db0f34af24a6e0984cd1a6cfb9",
-                "status": "available"
-            },
-            {
-                "rev": "7-77fba3a059497f51ec99b9b478b569d2",
-                "status": "deleted"
-            },
-            {
-                "rev": "6-136813b440a00a24834f5cb1ddf5b1f1",
-                "status": "available"
-            },
-            {
-                "rev": "5-fd96acb3256302bf0dd2f32713161f2a",
-                "status": "missing"
-            },
-            {
-                "rev": "4-874985bc28906155ba0e2e0538f67b05",
-                "status": "missing"
-            },
-            {
-                "rev": "3-0de77a37463bf391d14283e626831f2e",
-                "status": "missing"
-            },
-            {
-                "rev": "2-d795d1b924777732fdea76538c558b62",
-                "status": "missing"
-            },
-            {
-                "rev": "1-917fa2381192822767f010b95b45325b",
-                "status": "missing"
-            }
-        ],
-        "description": "An Italian-American dish that usually consists of spaghetti, tomato sauce and meatballs.",
-        "ingredients": [
-            "spaghetti",
-            "tomato sauce",
-            "meatballs"
-        ],
-        "name": "Spaghetti with meatballs"
-    }
-
-The returned document contains a ``_revs_info`` field with extended revision
-information, including the availability and status of each revision. This array
-field contains objects with the following structure:
-
-- **rev** (*string*): Full revision string
-- **status** (*string*): Status of the revision.
-  May be one of:
-
-  - ``available``: Revision is available for retrieval with the ``rev`` query
-    parameter
-  - ``missing``: Revision is not available
-  - ``deleted``: Revision belongs to a deleted document
-
-Obtaining a Specific Revision
-=============================
-
-To get a specific revision, use the ``rev`` argument to the request, and
-specify the full revision number. The specified revision of the document will
-be returned, including a ``_rev`` field specifying the revision that was
-requested.
-
-**Request**:
-
-.. code-block:: http
-
-    GET /recipes/SpaghettiWithMeatballs?rev=6-136813b440a00a24834f5cb1ddf5b1f1  HTTP/1.1
-    Accept: application/json
-    Host: localhost:5984
-
-**Response**:
-
-.. code-block:: http
-
-    HTTP/1.1 200 OK
-    Cache-Control: must-revalidate
-    Content-Length: 271
-    Content-Type: application/json
-    Date: Wed, 14 Aug 2013 11:40:55 GMT
-    Server: CouchDB (Erlang/OTP)
-
-    {
-        "_id": "SpaghettiWithMeatballs",
-        "_rev": "6-136813b440a00a24834f5cb1ddf5b1f1",
-        "description": "An Italian-American dish that usually consists of spaghetti, tomato sauce and meatballs.",
-        "ingredients": [
-            "spaghetti",
-            "tomato sauce",
-            "meatballs"
-        ],
-        "name": "Spaghetti with meatballs"
-    }
-
-.. _api/doc/retrieving-deleted-documents:
-
-Retrieving Deleted Documents
-----------------------------
-
-CouchDB doesn't actually delete documents via :delete:`/{db}/{docid}`.
-Instead, it leaves a tombstone with very basic information about the
-document. If you just :get:`/{db}/{docid}`, CouchDB returns a
-:statuscode:`404` response:
-
-**Request**:
-
-.. code-block:: http
-
-    GET /recipes/FishStew  HTTP/1.1
-    Accept: application/json
-    Host: localhost:5984
-
-**Response**:
-
-.. code-block:: http
-
-    HTTP/1.1 404 Object Not Found
-    Cache-Control: must-revalidate
-    Content-Length: 41
-    Content-Type: application/json
-    Date: Wed, 14 Aug 2013 12:23:27 GMT
-    Server: CouchDB (Erlang/OTP)
-
-    {
-        "error": "not_found",
-        "reason": "deleted"
-    }
-
-However, you may retrieve the document's tombstone by using the ``rev`` query
-parameter with a :get:`/{db}/{docid}` request:
-
-**Request**:
-
-.. code-block:: http
-
-    GET /recipes/FishStew?rev=2-056f5f44046ecafc08a2bc2b9c229e20  HTTP/1.1
-    Accept: application/json
-    Host: localhost:5984
-
-**Response**:
-
-.. code-block:: http
-
-    HTTP/1.1 200 OK
-    Cache-Control: must-revalidate
-    Content-Length: 79
-    Content-Type: application/json
-    Date: Wed, 14 Aug 2013 12:30:22 GMT
-    ETag: "2-056f5f44046ecafc08a2bc2b9c229e20"
-    Server: CouchDB (Erlang/OTP)
-
-    {
-        "_deleted": true,
-        "_id": "FishStew",
-        "_rev": "2-056f5f44046ecafc08a2bc2b9c229e20"
-    }
-
-Updating an Existing Document
-=============================
-
-To update an existing document you must specify the current revision
-number within the ``_rev`` parameter.
-
-**Request**:
-
-.. code-block:: http
-
-    PUT /recipes/SpaghettiWithMeatballs HTTP/1.1
-    Accept: application/json
-    Content-Length: 258
-    Content-Type: application/json
-    Host: localhost:5984
-
-    {
-        "_rev": "1-917fa2381192822767f010b95b45325b",
-        "description": "An Italian-American dish that usually consists of spaghetti, tomato sauce and meatballs.",
-        "ingredients": [
-            "spaghetti",
-            "tomato sauce",
-            "meatballs"
-        ],
-        "name": "Spaghetti with meatballs",
-        "serving": "hot"
-    }
-
-Alternatively, you can supply the current revision number in the ``If-Match``
-HTTP header of the request:
-
-.. code-block:: http
-
-    PUT /recipes/SpaghettiWithMeatballs HTTP/1.1
-    Accept: application/json
-    Content-Length: 258
-    Content-Type: application/json
-    If-Match: 1-917fa2381192822767f010b95b45325b
-    Host: localhost:5984
-
-    {
-        "description": "An Italian-American dish that usually consists of spaghetti, tomato sauce and meatballs.",
-        "ingredients": [
-            "spaghetti",
-            "tomato sauce",
-            "meatballs"
-        ],
-        "name": "Spaghetti with meatballs",
-        "serving": "hot"
-    }
-
-**Response**:
-
-.. code-block:: http
-
-    HTTP/1.1 201 Created
-    Cache-Control: must-revalidate
-    Content-Length: 85
-    Content-Type: application/json
-    Date: Wed, 14 Aug 2013 20:33:56 GMT
-    ETag: "2-790895a73b63fb91dd863388398483dd"
-    Location: http://localhost:5984/recipes/SpaghettiWithMeatballs
-    Server: CouchDB (Erlang/OTP)
-
-    {
-        "id": "SpaghettiWithMeatballs",
-        "ok": true,
-        "rev": "2-790895a73b63fb91dd863388398483dd"
-    }
-
-Copying from a Specific Revision
-================================
-
-To copy *from* a specific version, use the ``rev`` argument to the query string
-or :header:`If-Match`:
-
-**Request**:
-
-.. code-block:: http
-
-    COPY /recipes/SpaghettiWithMeatballs HTTP/1.1
-    Accept: application/json
-    Destination: SpaghettiWithMeatballs_Original
-    If-Match: 1-917fa2381192822767f010b95b45325b
-    Host: localhost:5984
-
-**Response**:
-
-.. code-block:: http
-
-    HTTP/1.1 201 Created
-    Cache-Control: must-revalidate
-    Content-Length: 93
-    Content-Type: application/json
-    Date: Wed, 14 Aug 2013 14:21:00 GMT
-    ETag: "1-917fa2381192822767f010b95b45325b"
-    Location: http://localhost:5984/recipes/SpaghettiWithMeatballs_Original
-    Server: CouchDB (Erlang/OTP)
-
-    {
-        "id": "SpaghettiWithMeatballs_Original",
-        "ok": true,
-        "rev": "1-917fa2381192822767f010b95b45325b"
-    }
-
-.. _copy_to_existing_document:
-
-Copying to an Existing Document
-===============================
-
-To copy to an existing document, you must specify the current revision string
-for the target document by appending the ``rev`` parameter to the
-:header:`Destination` header string.
-
-**Request**:
-
-.. code-block:: http
-
-    COPY /recipes/SpaghettiWithMeatballs?rev=8-6f5ad8db0f34af24a6e0984cd1a6cfb9 HTTP/1.1
-    Accept: application/json
-    Destination: SpaghettiWithMeatballs_Original?rev=1-917fa2381192822767f010b95b45325b
-    Host: localhost:5984
-
-**Response**:
-
-.. code-block:: http
-
-    HTTP/1.1 201 Created
-    Cache-Control: must-revalidate
-    Content-Length: 93
-    Content-Type: application/json
-    Date: Wed, 14 Aug 2013 14:21:00 GMT
-        ETag: "2-62e778c9ec09214dd685a981dcc24074"
-    Location: http://localhost:5984/recipes/SpaghettiWithMeatballs_Original
-    Server: CouchDB (Erlang/OTP)
-
-    {
-        "id": "SpaghettiWithMeatballs_Original",
-        "ok": true,
-        "rev": "2-62e778c9ec09214dd685a981dcc24074"
-    }
diff --git a/src/api/document/index.rst b/src/api/document/index.rst
deleted file mode 100644
index 6d5160f..0000000
--- a/src/api/document/index.rst
+++ /dev/null
@@ -1,23 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _api/document:
-
-=========
-Documents
-=========
-
-Details on how to create, read, update and delete documents within a database.
-
-.. toctree::
-    common
-    attachments
diff --git a/src/api/index.rst b/src/api/index.rst
deleted file mode 100644
index 494ccef..0000000
--- a/src/api/index.rst
+++ /dev/null
@@ -1,42 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _api:
-
-=============
-API Reference
-=============
-
-The components of the API URL path help determine the part of the CouchDB
-server that is being accessed. As a result, the structure of the URL request
-both identifies and effectively describes the area of the database you are
-accessing.
-
-As with all URLs, the individual components are separated by a forward slash.
-
-As a general rule, URL components and JSON fields starting with the ``_``
-(underscore) character represent a special component or entity within the
-server or returned object. For example, the URL fragment ``/_all_dbs`` gets a
-list of all of the databases in a CouchDB instance.
-
-This reference is structured according to the URL structure, as below.
-
-.. toctree::
-    :maxdepth: 2
-
-    basics
-    server/index
-    database/index
-    document/index
-    ddoc/index
-    partitioned-dbs
-    local
diff --git a/src/api/local.rst b/src/api/local.rst
deleted file mode 100644
index 0fa78a4..0000000
--- a/src/api/local.rst
+++ /dev/null
@@ -1,255 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _api/local:
-
-=================================
-Local (non-replicating) Documents
-=================================
-
-The Local (non-replicating) document interface allows you to create local
-documents that are not replicated to other databases. These documents can be
-used to hold configuration or other information that is required specifically
-on the local CouchDB instance.
-
-Local documents have the following limitations:
-
-- Local documents are not replicated to other databases.
-
-- Local documents are not output by views, or the :ref:`api/db/all_docs` view.
-
-From CouchDB 2.0, local documents can be listed by using the
-``/db/_local_docs`` endpoint.
-
-Local documents can be used when you want to store configuration or
-other information for the current (local) instance of a given database.
-
-A list of the available methods and URL paths are provided below:
-
-+--------+------------------------+--------------------------------------------+
-| Method | Path                   | Description                                |
-+========+========================+============================================+
-| GET,   | /db/_local_docs        | Returns a list of all the                  |
-| POST   |                        | non-replicated documents in the database   |
-+--------+------------------------+--------------------------------------------+
-| GET    | /db/_local/id          | Returns the latest revision of the         |
-|        |                        | non-replicated document                    |
-+--------+------------------------+--------------------------------------------+
-| PUT    | /db/_local/id          | Inserts a new version of the               |
-|        |                        | non-replicated document                    |
-+--------+------------------------+--------------------------------------------+
-| DELETE | /db/_local/id          | Deletes the non-replicated document        |
-+--------+------------------------+--------------------------------------------+
-| COPY   | /db/_local/id          | Copies the non-replicated document         |
-+--------+------------------------+--------------------------------------------+
-
-.. _api/local/doc:
-
-``/db/_local_docs``
-===================
-
-.. http:get:: /{db}/_local_docs
-    :synopsis: Returns a built-in view of all local (non-replicating) documents
-      in this database
-
-    Returns a JSON structure of all of the local documents in a given
-    database. The information is returned as a JSON structure containing meta
-    information about the return structure, including a list of all local
-    documents and basic contents, consisting of the ID, revision and key. The
-    key is derived from the local document's ``_id``.
-
-    :param db: Database name
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :query boolean conflicts: Includes `conflicts` information in response.
-      Ignored if ``include_docs`` isn't ``true``. Default is ``false``.
-    :query boolean descending: Return the local documents in descending order
-      by key. Default is ``false``.
-    :query string endkey: Stop returning records when the specified key is
-      reached. *Optional*.
-    :query string end_key: Alias for ``endkey`` param.
-    :query string endkey_docid: Stop returning records when the specified
-        local document ID is reached. *Optional*.
-    :query string end_key_doc_id: Alias for ``endkey_docid`` param.
-    :query boolean include_docs: Include the full content of the local
-      documents in the return. Default is ``false``.
-    :query boolean inclusive_end: Specifies whether the specified end key
-      should be included in the result. Default is ``true``.
-    :query string key: Return only local documents that match the specified
-      key. *Optional*.
-    :query string keys: Return only local documents that match the specified
-      keys. *Optional*.
-    :query number limit: Limit the number of the returned local documents to
-      the specified number. *Optional*.
-    :query number skip: Skip this number of records before starting to return
-      the results. Default is ``0``.
-    :query string startkey: Return records starting with the specified key.
-      *Optional*.
-    :query string start_key: Alias for ``startkey`` param.
-    :query string startkey_docid: Return records starting with the specified
-      local document ID. *Optional*.
-    :query string start_key_doc_id: Alias for ``startkey_docid`` param.
-    :query boolean update_seq: Response includes an ``update_seq`` value
-      indicating which sequence id of the underlying database the view
-      reflects. Default is ``false``.
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :>json number offset: Offset where the local document list started
-    :>json array rows: Array of view row objects. By default the information
-      returned contains only the local document ID and revision.
-    :>json number total_rows: Number of local documents in the database. Note
-      that this is not the number of rows returned in the actual query.
-    :>json number update_seq: Current update sequence for the database
-    :code 200: Request completed successfully
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /db/_local_docs HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Type: application/json
-        Date: Sat, 23 Dec 2017 16:22:56 GMT
-        Server: CouchDB (Erlang/OTP)
-        Transfer-Encoding: chunked
-
-        {
-            "offset": null,
-            "rows": [
-                {
-                    "id": "_local/localdoc01",
-                    "key": "_local/localdoc01",
-                    "value": {
-                        "rev": "0-1"
-                    }
-                },
-                {
-                    "id": "_local/localdoc02",
-                    "key": "_local/localdoc02",
-                    "value": {
-                        "rev": "0-1"
-                    }
-                },
-                {
-                    "id": "_local/localdoc03",
-                    "key": "_local/localdoc03",
-                    "value": {
-                        "rev": "0-1"
-                    }
-                },
-                {
-                    "id": "_local/localdoc04",
-                    "key": "_local/localdoc04",
-                    "value": {
-                        "rev": "0-1"
-                    }
-                },
-                {
-                    "id": "_local/localdoc05",
-                    "key": "_local/localdoc05",
-                    "value": {
-                        "rev": "0-1"
-                    }
-                }
-            ],
-            "total_rows": null
-        }
-
-.. http:post:: /{db}/_local_docs
-    :synopsis: Returns a built-in view of all local (non-replicating) documents
-      in this database
-
-    :method:`POST` `_local_docs` functionality supports identical parameters and behavior
-    as specified in the :get:`/{db}/_local_docs` API but allows for the query string
-    parameters to be supplied as keys in a JSON object in the body of the `POST` request.
-
-    **Request**:
-
-    .. code-block:: http
-
-        POST /db/_local_docs HTTP/1.1
-        Accept: application/json
-        Content-Length: 70
-        Content-Type: application/json
-        Host: localhost:5984
-
-        {
-            "keys" : [
-                "_local/localdoc02",
-                "_local/localdoc05"
-            ]
-        }
-
-    The returned JSON is the all documents structure, but with only the
-    selected keys in the output:
-
-    .. code-block:: javascript
-
-        {
-            "total_rows" : null,
-            "rows" : [
-                {
-                    "value" : {
-                        "rev" : "0-1"
-                    },
-                    "id" : "_local/localdoc02",
-                    "key" : "_local/localdoc02"
-                },
-                {
-                    "value" : {
-                        "rev" : "0-1"
-                    },
-                    "id" : "_local/localdoc05",
-                    "key" : "_local/localdoc05"
-                }
-            ],
-            "offset" : null
-        }
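The effect of the ``keys`` body can be sketched client-side: starting from the full ``GET /db/_local_docs`` result above, the ``POST`` with a ``keys`` array keeps only the matching rows. A minimal Python illustration (row data taken from the examples above; the real filtering happens on the server):

```python
# Sketch: how a "keys" filter narrows an _local_docs result.
# Rows mirror the GET example above; client-side illustration only.
all_rows = [
    {"id": f"_local/localdoc0{n}", "key": f"_local/localdoc0{n}",
     "value": {"rev": "0-1"}}
    for n in range(1, 6)
]

requested_keys = ["_local/localdoc02", "_local/localdoc05"]

rows = [row for row in all_rows if row["key"] in requested_keys]
print([row["id"] for row in rows])
# → ['_local/localdoc02', '_local/localdoc05']
```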
-
-``/db/_local/id``
-=================
-
-.. http:get:: /{db}/_local/{docid}
-    :synopsis: Returns the latest revision of the local document
-
-    Gets the specified local document. The semantics are identical to accessing
-    a standard document in the specified database, except that the document is
-    not replicated. See :get:`/{db}/{docid}`.
-
-.. http:put:: /{db}/_local/{docid}
-    :synopsis: Inserts a new version of the local document
-
-    Stores the specified local document. The semantics are identical to storing
-    a standard document in the specified database, except that the document is
-    not replicated. See :put:`/{db}/{docid}`.
-
-.. http:delete:: /{db}/_local/{docid}
-    :synopsis: Deletes the local document
-
-    Deletes the specified local document. The semantics are identical to
-    deleting a standard document in the specified database, except that the
-    document is not replicated. See :delete:`/{db}/{docid}`.
-
-.. http:copy:: /{db}/_local/{docid}
-    :synopsis: Copies the local document within the same database
-
-    Copies the specified local document. The semantics are identical to copying
-    a standard document in the specified database, except that the document is
-    not replicated. See :copy:`/{db}/{docid}`.
diff --git a/src/api/partitioned-dbs.rst b/src/api/partitioned-dbs.rst
deleted file mode 100644
index e5ec55e..0000000
--- a/src/api/partitioned-dbs.rst
+++ /dev/null
@@ -1,231 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _api/partioned-dbs:
-
-=====================
-Partitioned Databases
-=====================
-
-Partitioned databases allow for data colocation in a cluster, which provides
-significant performance improvements for queries constrained to a single
-partition.
-
-See the guide for
-:ref:`getting started with partitioned databases <partitioned-dbs>`.
-
-``/db/_partition/partition``
-============================
-
-.. http:get:: /{db}/_partition/{partition}
-    :synopsis: Returns document and size info for the given partition
-
-    This endpoint returns information describing the provided partition.
-    It includes document and deleted document counts along with external
-    and active data sizes.
-
-    :code 200: Request completed successfully
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /db/_partition/sensor-260 HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Length: 119
-        Content-Type: application/json
-        Date: Thu, 24 Jan 2019 17:19:59 GMT
-        Server: CouchDB/2.3.0-a1e11cea9 (Erlang OTP/21)
-
-        {
-          "db_name": "my_new_db",
-          "doc_count": 1,
-          "doc_del_count": 0,
-          "partition": "sensor-260",
-          "sizes": {
-            "active": 244,
-            "external": 347
-          }
-        }
-
-``/db/_partition/partition/_all_docs``
-======================================
-
-.. http:get:: /{db}/_partition/{partition}/_all_docs
-    :synopsis: Return all docs in the specified partition
-
-    :param db: Database name
-    :param partition: Partition name
-
-    This endpoint is a convenience endpoint for automatically setting
-    bounds on the provided partition range. Similar results can be obtained
-    by using the global ``/db/_all_docs`` endpoint with appropriately
-    configured values for ``start_key`` and ``end_key``.
-
-    Refer to the :ref:`view endpoint <api/ddoc/view>` documentation for
-    a complete description of the available query parameters and the format
-    of the returned data.
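The note above can be made concrete. Document IDs in a partitioned database have the form ``<partition>:<doc id>``, so equivalent global ``_all_docs`` bounds can be computed as in this sketch (the ``\ufff0`` "high key" sentinel is an assumption based on the usual CouchDB key-range convention, not something this endpoint requires):

```python
def partition_bounds(partition: str) -> tuple[str, str]:
    # Keys in a partitioned db look like "<partition>:<doc id>", so the
    # partition's rows sit between "<partition>:" and "<partition>:\ufff0".
    start = f"{partition}:"
    end = f"{partition}:\ufff0"  # \ufff0 sorts after ordinary characters
    return start, end

start, end = partition_bounds("sensor-260")
doc_id = "sensor-260:sensor-reading-ca33c748-2d2c-4ed1-8abf-1bca4d9d03cf"
print(start <= doc_id < end)
# → True
```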
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /db/_partition/sensor-260/_all_docs HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Type: application/json
-        Date: Sat, 10 Aug 2013 16:22:56 GMT
-        ETag: "1W2DJUZFZSZD9K78UFA3GZWB4"
-        Server: CouchDB (Erlang/OTP)
-        Transfer-Encoding: chunked
-
-        {
-          "offset": 0,
-          "rows": [
-            {
-              "id": "sensor-260:sensor-reading-ca33c748-2d2c-4ed1-8abf-1bca4d9d03cf",
-              "key": "sensor-260:sensor-reading-ca33c748-2d2c-4ed1-8abf-1bca4d9d03cf",
-              "value": {
-                "rev": "1-05ed6f7abf84250e213fcb847387f6f5"
-              }
-            }
-          ],
-          "total_rows": 1
-        }
-
-.. _api/partitioned/views:
-
-``/db/_partition/partition/_design/design-doc/_view/view-name``
-===============================================================
-
-.. http:get:: /{db}/_partition/{partition}/_design/{ddoc}/_view/{view}
-    :synopsis: Execute a partitioned query
-
-    :param db: Database name
-    :param partition: Partition name
-    :param ddoc: Design document id
-    :param view: View name
-
-    This endpoint is responsible for executing a partitioned query. The
-    returned view result will only contain rows with the specified
-    partition name.
-
-    Refer to the :ref:`view endpoint <api/ddoc/view>` documentation for
-    a complete description of the available query parameters and the format
-    of the returned data.
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /db/_partition/sensor-260/_design/sensor-readings/_view/by_sensor HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Type: application/json
-        Date: Wed, 21 Aug 2013 09:12:06 GMT
-        ETag: "2FOLSBSW4O6WB798XU4AQYA9B"
-        Server: CouchDB (Erlang/OTP)
-        Transfer-Encoding: chunked
-
-        {
-          "offset": 0,
-          "rows": [
-            {
-              "id": "sensor-260:sensor-reading-ca33c748-2d2c-4ed1-8abf-1bca4d9d03cf",
-              "key": [
-                "sensor-260",
-                "0"
-              ],
-              "value": null
-            },
-            {
-              "id": "sensor-260:sensor-reading-ca33c748-2d2c-4ed1-8abf-1bca4d9d03cf",
-              "key": [
-                "sensor-260",
-                "1"
-              ],
-              "value": null
-            },
-            {
-              "id": "sensor-260:sensor-reading-ca33c748-2d2c-4ed1-8abf-1bca4d9d03cf",
-              "key": [
-                "sensor-260",
-                "2"
-              ],
-              "value": null
-            },
-            {
-              "id": "sensor-260:sensor-reading-ca33c748-2d2c-4ed1-8abf-1bca4d9d03cf",
-              "key": [
-                "sensor-260",
-                "3"
-              ],
-              "value": null
-            }
-          ],
-          "total_rows": 4
-        }
-
-.. _api/partitioned/find:
-
-``/db/_partition/partition_id/_find``
-=====================================
-
-.. http:post:: /{db}/_partition/{partition_id}/_find
-    :synopsis: Query the partition specified by ``partition_id``
-
-    :param db: Database name
-    :param partition_id: Name of the partition to query
-
-    This endpoint executes a Mango query scoped to the specified partition.
-    The returned result will only contain documents belonging to the
-    specified partition.
-
-    Refer to the :ref:`find endpoint <api/db/_find>`
-    documentation for a complete description of the
-    available parameters and the format
-    of the returned data.
-
-.. _api/partitioned/explain:
-
-``/db/_partition/partition_id/_explain``
-========================================
-
-.. http:post:: /{db}/_partition/{partition_id}/_explain
-    :synopsis: Find index that is used with a query
-
-    :param db: Database name
-    :param partition_id: Name of the partition to query
-
-    This endpoint shows which index is being used by the query.
-
-    Refer to the :ref:`explain endpoint <api/db/find/explain>`
-    documentation for a complete description of the available
-    parameters and the format of the returned data.
diff --git a/src/api/server/authn.rst b/src/api/server/authn.rst
deleted file mode 100644
index c5e2b7b..0000000
--- a/src/api/server/authn.rst
+++ /dev/null
@@ -1,471 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _api/auth:
-
-==============
-Authentication
-==============
-
-Interfaces for obtaining session and authorization data.
-
-.. note::
-    We also strongly recommend you :ref:`set up SSL <config/ssl>` to
-    improve all authentication methods' security.
-
-.. _api/auth/basic:
-
-Basic Authentication
-====================
-
-`Basic authentication`_ (:rfc:`2617`) is a quick and simple way to authenticate
-with CouchDB. The main drawback is the need to send user credentials with each
-request, which may be insecure and hurts performance, since CouchDB must
-compute the password hash on every request:
-
-**Request**:
-
-.. code-block:: http
-
-    GET / HTTP/1.1
-    Accept: application/json
-    Authorization: Basic cm9vdDpyZWxheA==
-    Host: localhost:5984
-
-**Response**:
-
-.. code-block:: http
-
-    HTTP/1.1 200 OK
-    Cache-Control: must-revalidate
-    Content-Length: 177
-    Content-Type: application/json
-    Date: Mon, 03 Dec 2012 00:44:47 GMT
-    Server: CouchDB (Erlang/OTP)
-
-    {
-        "couchdb":"Welcome",
-        "uuid":"0a959b9b8227188afc2ac26ccdf345a6",
-        "version":"1.3.0",
-        "vendor": {
-            "version":"1.3.0",
-            "name":"The Apache Software Foundation"
-        }
-    }
-
-.. _Basic authentication: http://en.wikipedia.org/wiki/Basic_access_authentication
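The ``Authorization`` header in the request above is simply ``Basic`` followed by the Base64 encoding of ``username:password``. A minimal Python sketch:

```python
import base64

# Basic auth header value is "Basic " + base64("<username>:<password>")
credentials = base64.b64encode(b"root:relax").decode("ascii")
header = f"Authorization: Basic {credentials}"
print(header)
# → Authorization: Basic cm9vdDpyZWxheA==
```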
-
-.. _api/auth/cookie:
-
-Cookie Authentication
-=====================
-
-For cookie authentication (:rfc:`2109`) CouchDB generates a token that the
-client can use for the next few requests to CouchDB. Tokens are valid until
-a timeout. When CouchDB sees a valid token in a subsequent request, it will
-authenticate the user by this token without requesting the password again. By
-default, cookies are valid for 10 minutes, but it's adjustable via :config:option:`timeout
-<chttpd_auth/timeout>`. Also it's possible to make cookies
-:config:option:`persistent <chttpd_auth/allow_persistent_cookies>`.
-
-To obtain the first token and thus authenticate a user for the first time, the
-``username`` and ``password`` must be sent to the :ref:`_session API
-<api/auth/session>`.
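On a successful login the token arrives in the ``Set-Cookie`` response header and must be sent back in a ``Cookie`` header on subsequent requests. A sketch of extracting it with the Python standard library (the header value is taken from the example response below):

```python
from http.cookies import SimpleCookie

# Parse the Set-Cookie header returned by POST /_session.
raw = ("AuthSession=cm9vdDo1MEJCRkYwMjq0LO0ylOIwShrgt8y-UkhI-c6BGw; "
       "Version=1; Path=/; HttpOnly")
cookie = SimpleCookie()
cookie.load(raw)

# Send this back on later requests as: Cookie: AuthSession=<token>
token = cookie["AuthSession"].value
print(token)
# → cm9vdDo1MEJCRkYwMjq0LO0ylOIwShrgt8y-UkhI-c6BGw
```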
-
-.. _api/auth/session:
-
-``/_session``
--------------
-
-.. http:post:: /_session
-    :synopsis: Authenticates user by Cookie-based user login
-
-    Initiates a new session for the specified user credentials, providing a
-    `Cookie` value in the response.
-
-    :<header Content-Type: - :mimetype:`application/x-www-form-urlencoded`
-                           - :mimetype:`application/json`
-    :query string next: Enforces redirect after successful login to the
-      specified location. This location is relative to the server root.
-      *Optional*.
-    :form name: User name
-    :form password: Password
-    :>header Set-Cookie: Authorization token
-    :>json boolean ok: Operation status
-    :>json string name: Username
-    :>json array roles: List of user roles
-    :code 200: Successfully authenticated
-    :code 302: Redirect after successful authentication
-    :code 401: Username or password wasn't recognized
-
-    **Request**:
-
-    .. code-block:: http
-
-        POST /_session HTTP/1.1
-        Accept: application/json
-        Content-Length: 24
-        Content-Type: application/x-www-form-urlencoded
-        Host: localhost:5984
-
-        name=root&password=relax
-
-    It's also possible to send data as JSON:
-
-    .. code-block:: http
-
-        POST /_session HTTP/1.1
-        Accept: application/json
-        Content-Length: 37
-        Content-Type: application/json
-        Host: localhost:5984
-
-        {
-            "name": "root",
-            "password": "relax"
-        }
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Length: 43
-        Content-Type: application/json
-        Date: Mon, 03 Dec 2012 01:23:14 GMT
-        Server: CouchDB (Erlang/OTP)
-        Set-Cookie: AuthSession=cm9vdDo1MEJCRkYwMjq0LO0ylOIwShrgt8y-UkhI-c6BGw; Version=1; Path=/; HttpOnly
-
-        {"ok":true,"name":"root","roles":["_admin"]}
-
-    If the ``next`` query parameter was provided, a successful authentication
-    triggers redirection to the specified location:
-
-    **Request**:
-
-    .. code-block:: http
-
-        POST /_session?next=/blog/_design/sofa/_rewrite/recent-posts HTTP/1.1
-        Accept: application/json
-        Content-Type: application/x-www-form-urlencoded
-        Host: localhost:5984
-
-        name=root&password=relax
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 302 Moved Temporarily
-        Cache-Control: must-revalidate
-        Content-Length: 43
-        Content-Type: application/json
-        Date: Mon, 03 Dec 2012 01:32:46 GMT
-        Location: http://localhost:5984/blog/_design/sofa/_rewrite/recent-posts
-        Server: CouchDB (Erlang/OTP)
-        Set-Cookie: AuthSession=cm9vdDo1MEJDMDEzRTp7Vu5GKCkTxTVxwXbpXsBARQWnhQ; Version=1; Path=/; HttpOnly
-
-        {"ok":true,"name":null,"roles":["_admin"]}
-
-.. http:get:: /_session
-    :synopsis: Returns Cookie-based login user information
-
-    Returns information about the authenticated user, including a
-    :ref:`userctx_object`, the authentication method and database that were
-    used, and a list of configured authentication handlers on the server.
-
-    :query boolean basic: Accept `Basic Auth` by requesting this resource.
-      *Optional*.
-    :>json boolean ok: Operation status
-    :>json object userCtx: User context for the current user
-    :>json object info: Server authentication configuration
-    :code 200: Successfully authenticated.
-    :code 401: Username or password wasn't recognized.
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /_session HTTP/1.1
-        Host: localhost:5984
-        Accept: application/json
-        Cookie: AuthSession=cm9vdDo1MEJDMDQxRDpqb-Ta9QfP9hpdPjHLxNTKg_Hf9w
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Length: 175
-        Content-Type: application/json
-        Date: Fri, 09 Aug 2013 20:27:45 GMT
-        Server: CouchDB (Erlang/OTP)
-        Set-Cookie: AuthSession=cm9vdDo1MjA1NTBDMTqmX2qKt1KDR--GUC80DQ6-Ew_XIw; Version=1; Path=/; HttpOnly
-
-        {
-            "info": {
-                "authenticated": "cookie",
-                "authentication_db": "_users",
-                "authentication_handlers": [
-                    "cookie",
-                    "default"
-                ]
-            },
-            "ok": true,
-            "userCtx": {
-                "name": "root",
-                "roles": [
-                    "_admin"
-                ]
-            }
-        }
-
-.. http:delete:: /_session
-    :synopsis: Logout Cookie-based user
-
-    Closes the user's session by instructing the browser to clear the cookie.
-    This does not invalidate the session from the server's perspective, as
-    CouchDB cookies are stateless and there is no way to do so. Calling this
-    endpoint is therefore purely optional from a client perspective, and it
-    does not protect against theft of a session cookie.
-
-    :code 200: Successfully close session.
-
-    **Request**:
-
-    .. code-block:: http
-
-        DELETE /_session HTTP/1.1
-        Accept: application/json
-        Cookie: AuthSession=cm9vdDo1MjA1NEVGMDo1QXNQkqC_0Qmgrk8Fw61_AzDeXw
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Length: 12
-        Content-Type: application/json
-        Date: Fri, 09 Aug 2013 20:30:12 GMT
-        Server: CouchDB (Erlang/OTP)
-        Set-Cookie: AuthSession=; Version=1; Path=/; HttpOnly
-
-        {
-            "ok": true
-        }
-
-.. _api/auth/proxy:
-
-Proxy Authentication
-====================
-
-.. note::
-    To use this authentication method make sure that the
-    ``{chttpd_auth, proxy_authentication_handler}`` value is added to the
-    list of the active :config:option:`chttpd/authentication_handlers`:
-
-    .. code-block:: ini
-
-        [chttpd]
-        authentication_handlers = {chttpd_auth, cookie_authentication_handler}, {chttpd_auth, proxy_authentication_handler}, {chttpd_auth, default_authentication_handler}
-
-`Proxy authentication` is useful when your application already uses an
-external authentication service and you don't want to duplicate users and
-their roles in CouchDB.
-
-This authentication method allows creation of a :ref:`userctx_object` for a
-remotely authenticated user. By default, the client just needs to pass specific
-headers to CouchDB with related requests:
-
-- :config:option:`X-Auth-CouchDB-UserName <chttpd_auth/x_auth_username>`:
-  username;
-- :config:option:`X-Auth-CouchDB-Roles <chttpd_auth/x_auth_roles>`:
-  comma-separated (``,``) list of user roles;
-- :config:option:`X-Auth-CouchDB-Token <chttpd_auth/x_auth_token>`:
-  authentication token. When
-  :config:option:`proxy_use_secret <chttpd_auth/proxy_use_secret>`
-  is set (which is strongly recommended!), this header must contain an HMAC of
-  the username, keyed with the secret token, to prevent requests from
-  untrusted sources (i.e. compute the HMAC-SHA1 of the username, using the
-  secret as the key)
-
-**Creating the token (example with openssl)**:
-
-.. code-block:: sh
-
-    echo -n "foo" | openssl dgst -sha1 -hmac "the_secret"
-    # (stdin)= 22047ebd7c4ec67dfbcbad7213a693249dbfbf86
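The same token can be produced with the Python standard library, equivalent to the ``openssl`` command above:

```python
import hashlib
import hmac

# HMAC-SHA1 of the username, keyed with the shared secret -- the value
# expected in the X-Auth-CouchDB-Token header.
secret = b"the_secret"
username = b"foo"

token = hmac.new(secret, username, hashlib.sha1).hexdigest()
print(token)
# → 22047ebd7c4ec67dfbcbad7213a693249dbfbf86
```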
-
-**Request**:
-
-.. code-block:: http
-
-    GET /_session HTTP/1.1
-    Host: localhost:5984
-    Accept: application/json
-    Content-Type: application/json; charset=utf-8
-    X-Auth-CouchDB-Roles: users,blogger
-    X-Auth-CouchDB-UserName: foo
-    X-Auth-CouchDB-Token: 22047ebd7c4ec67dfbcbad7213a693249dbfbf86
-
-**Response**:
-
-.. code-block:: http
-
-    HTTP/1.1 200 OK
-    Cache-Control: must-revalidate
-    Content-Length: 190
-    Content-Type: application/json
-    Date: Fri, 14 Jun 2013 10:16:03 GMT
-    Server: CouchDB (Erlang/OTP)
-
-    {
-        "info": {
-            "authenticated": "proxy",
-            "authentication_db": "_users",
-            "authentication_handlers": [
-                "cookie",
-                "proxy",
-                "default"
-            ]
-        },
-        "ok": true,
-        "userCtx": {
-            "name": "foo",
-            "roles": [
-                "users",
-                "blogger"
-            ]
-        }
-    }
-
-Note that you don't need to request :ref:`session <api/auth/session>`
-to be authenticated by this method if all required HTTP headers are provided.
-
-.. _api/auth/jwt:
-
-JWT Authentication
-==================
-
-.. note::
-    To use this authentication method, make sure that the
-    ``{chttpd_auth, jwt_authentication_handler}`` value is added to the
-    list of the active :config:option:`chttpd/authentication_handlers`:
-
-    .. code-block:: ini
-
-        [chttpd]
-        authentication_handlers = {chttpd_auth, cookie_authentication_handler}, {chttpd_auth, jwt_authentication_handler}, {chttpd_auth, default_authentication_handler}
-
-``JWT authentication`` enables CouchDB to use externally-generated JWT tokens
-instead of defining users or roles in the ``_users`` database.
-
-The JWT authentication handler requires that all JWT tokens are signed by a key that
-CouchDB has been configured to trust (there is no support for JWT's "NONE" algorithm).
-
-Additionally, CouchDB can be configured to reject JWT tokens that are
-missing a configurable set of claims (e.g, a CouchDB administrator
-could insist on the ``exp`` claim).
-
-Only claims listed in required checks are validated. Additional claims will be ignored.
-
-Two config sections exist to configure JWT authentication:
-
-The :config:option:`required_claims <jwt_auth/required_claims>` config
-setting is a comma-separated list of additional mandatory JWT claims
-that must be present in any presented JWT token. A :statuscode:`400`
-is sent if any are missing.
-
-The ``alg`` claim is mandatory, as it is used to look up the correct key for
-verifying the signature.
-
-The ``sub`` claim is mandatory and is used as the CouchDB user's name if the JWT token
-is valid.
-
-A private claim called ``_couchdb.roles`` is optional. If present, it must be
-a JSON array of strings, and it is used as the CouchDB user's roles list as
-long as the JWT token is valid.
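For illustration, the claims discussed above live in the base64url-encoded payload segment of the token. A hedged Python sketch that decodes a payload without verifying the signature (verification is CouchDB's job; never trust unverified claims; the payload values here are hypothetical):

```python
import base64
import json

def decode_payload(segment: str) -> dict:
    # JWT segments are base64url without padding; restore padding first.
    padded = segment + "=" * (-len(segment) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

# Hypothetical payload: "sub" names the CouchDB user, the private
# "_couchdb.roles" claim carries the roles list.
payload = {"sub": "foo", "_couchdb.roles": ["users", "blogger"],
           "exp": 1893456000}
segment = (base64.urlsafe_b64encode(json.dumps(payload).encode())
           .decode().rstrip("="))

claims = decode_payload(segment)
print(claims["sub"], claims["_couchdb.roles"])
# → foo ['users', 'blogger']
```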
-
-.. code-block:: ini
-
-    ; [jwt_keys]
-    ; Configure at least one key here if using the JWT auth handler.
-    ; If your JWT tokens do not include a "kid" attribute, use "_default"
-    ; as the config key, otherwise use the kid as the config key.
-    ; Examples
-    ; hmac:_default = aGVsbG8=
-    ; hmac:foo = aGVsbG8=
-    ; The config values can represent symmetric and asymmetric keys.
-    ; For symmetric keys, the value is base64 encoded;
-    ; hmac:_default = aGVsbG8= # base64-encoded form of "hello"
-    ; For asymmetric keys, the value is the PEM encoding of the public
-    ; key with newlines replaced with the escape sequence \n.
-    ; rsa:foo = -----BEGIN PUBLIC KEY-----\nMIIBIjAN...IDAQAB\n-----END PUBLIC KEY-----\n
-    ; ec:bar = -----BEGIN PUBLIC KEY-----\nMHYwEAYHK...AzztRs\n-----END PUBLIC KEY-----\n
-
-The ``jwt_keys`` section lists all the keys that this CouchDB server trusts.
-You should ensure that all nodes of your cluster have the same list.
-
-JWT tokens that do not include a ``kid`` claim will be validated against the
-``$alg:_default`` key.
-
-It is mandatory to specify the algorithm associated with every key for security
-reasons (notably to prevent verifying an HMAC-signed token against an RSA or
-EC public key that the server trusts; see
-https://auth0.com/blog/critical-vulnerabilities-in-json-web-token-libraries/).
-
-**Request**:
-
-.. code-block:: http
-
-    GET /_session HTTP/1.1
-    Host: localhost:5984
-    Accept: application/json
-    Content-Type: application/json; charset=utf-8
-    Authorization: Bearer <JWT token>
-
-**Response**:
-
-.. code-block:: http
-
-    HTTP/1.1 200 OK
-    Cache-Control: must-revalidate
-    Content-Length: 188
-    Content-Type: application/json
-    Date: Sun, 19 Apr 2020 08:29:15 GMT
-    Server: CouchDB (Erlang/OTP)
-
-    {
-        "info": {
-            "authenticated": "jwt",
-            "authentication_db": "_users",
-            "authentication_handlers": [
-                "cookie",
-                "proxy",
-                "default"
-            ]
-        },
-        "ok": true,
-        "userCtx": {
-            "name": "foo",
-            "roles": [
-                "users",
-                "blogger"
-            ]
-        }
-    }
-
-Note that you don't need to request :ref:`session <api/auth/session>`
-to be authenticated by this method if the required HTTP header is provided.
diff --git a/src/api/server/common.rst b/src/api/server/common.rst
deleted file mode 100644
index fa1c1be..0000000
--- a/src/api/server/common.rst
+++ /dev/null
@@ -1,2648 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _api/server/root:
-
-=====
-``/``
-=====
-
-.. http:get:: /
-    :synopsis: Returns the welcome message and version information
-
-    Accessing the root of a CouchDB instance returns meta information about the
-    instance. The response is a JSON structure containing information about the
-    server, including a welcome message and the version of the server.
-
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :code 200: Request completed successfully
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET / HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Length: 179
-        Content-Type: application/json
-        Date: Sat, 10 Aug 2013 06:33:33 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        {
-            "couchdb": "Welcome",
-            "uuid": "85fb71bf700c17267fef77535820e371",
-            "vendor": {
-                "name": "The Apache Software Foundation",
-                "version": "1.3.1"
-            },
-            "version": "1.3.1"
-        }
-
-.. _api/server/active_tasks:
-
-==================
-``/_active_tasks``
-==================
-
-.. versionchanged:: 2.1.0 Because of how the scheduling replicator works, continuous replication jobs could be periodically stopped and then started later. When they are not running they will not appear in the ``_active_tasks`` endpoint
-
-.. versionchanged:: 3.3 Added `"bulk_get_attempts"` and `"bulk_get_docs"` fields for replication jobs.
-
-.. http:get:: /_active_tasks
-    :synopsis: Obtains a list of the tasks running in the server
-
-    Lists the running tasks, including the task type, name, status,
-    and process ID. The result is a JSON array of the currently running tasks,
-    with each task described by a single object. The set of response object
-    fields differs depending on the operation type.
-
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :>json number changes_done: Processed changes
-    :>json string database: Source database
-    :>json string pid: Process ID
-    :>json number progress: Current percentage progress
-    :>json number started_on: Task start time as unix timestamp
-    :>json string status: Task status message
-    :>json string task: Task name
-    :>json number total_changes: Total changes to process
-    :>json string type: Operation Type
-    :>json number updated_on: Unix timestamp of last operation update
-    :code 200: Request completed successfully
-    :code 401: CouchDB Server Administrator privileges required
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /_active_tasks HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Length: 1690
-        Content-Type: application/json
-        Date: Sat, 10 Aug 2013 06:37:31 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        [
-            {
-                "changes_done": 64438,
-                "database": "mailbox",
-                "pid": "<0.12986.1>",
-                "progress": 84,
-                "started_on": 1376116576,
-                "total_changes": 76215,
-                "type": "database_compaction",
-                "updated_on": 1376116619
-            },
-            {
-                "changes_done": 14443,
-                "database": "mailbox",
-                "design_document": "c9753817b3ba7c674d92361f24f59b9f",
-                "pid": "<0.10461.3>",
-                "progress": 18,
-                "started_on": 1376116621,
-                "total_changes": 76215,
-                "type": "indexer",
-                "updated_on": 1376116650
-            },
-            {
-                "changes_done": 5454,
-                "database": "mailbox",
-                "design_document": "_design/meta",
-                "pid": "<0.6838.4>",
-                "progress": 7,
-                "started_on": 1376116632,
-                "total_changes": 76215,
-                "type": "indexer",
-                "updated_on": 1376116651
-            },
-            {
-                "checkpointed_source_seq": 68585,
-                "continuous": false,
-                "doc_id": null,
-                "doc_write_failures": 0,
-                "bulk_get_attempts": 4524,
-                "bulk_get_docs": 4524,
-                "docs_read": 4524,
-                "docs_written": 4524,
-                "missing_revisions_found": 4524,
-                "pid": "<0.1538.5>",
-                "progress": 44,
-                "replication_id": "9bc1727d74d49d9e157e260bb8bbd1d5",
-                "revisions_checked": 4524,
-                "source": "mailbox",
-                "source_seq": 154419,
-                "started_on": 1376116644,
-                "target": "http://mailsrv:5984/mailbox",
-                "type": "replication",
-                "updated_on": 1376116651
-            }
-        ]
-
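Since the fields of each task object vary with the operation type, a client typically dispatches on the ``type`` field. A minimal sketch, grouping tasks by type (the task list is abbreviated from the example response above):

```python
import json

# Abbreviated from the example response above; only a few fields kept.
tasks = json.loads('''
[
    {"type": "database_compaction", "database": "mailbox", "progress": 84},
    {"type": "indexer", "database": "mailbox", "progress": 18},
    {"type": "indexer", "database": "mailbox", "progress": 7},
    {"type": "replication", "progress": 44}
]
''')

# Group running tasks by their "type" field.
by_type = {}
for task in tasks:
    by_type.setdefault(task["type"], []).append(task)

for kind, group in sorted(by_type.items()):
    print(kind, len(group))
```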
-.. _api/server/all_dbs:
-
-=============
-``/_all_dbs``
-=============
-
-.. http:get:: /_all_dbs
-    :synopsis: Returns a list of all the databases
-
-    Returns a list of all the databases in the CouchDB instance.
-
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :query boolean descending: Return the databases in descending order by key.
-      Default is ``false``.
-    :query json endkey: Stop returning databases when the specified key is
-      reached.
-    :query json end_key: Alias for ``endkey`` param
-    :query number limit: Limit the number of the returned databases to the
-      specified number.
-    :query number skip: Skip this number of databases before starting to return
-      the results. Default is ``0``.
-    :query json startkey: Return databases starting with the specified key.
-    :query json start_key: Alias for ``startkey``.
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :code 200: Request completed successfully
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /_all_dbs HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Length: 52
-        Content-Type: application/json
-        Date: Sat, 10 Aug 2013 06:57:48 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        [
-           "_users",
-           "contacts",
-           "docs",
-           "invoices",
-           "locations"
-        ]
-
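System databases are prefixed with an underscore, so a client that only wants user databases can filter the returned list. A small sketch over the example response above:

```python
# Example response body from GET /_all_dbs, as above.
dbs = ["_users", "contacts", "docs", "invoices", "locations"]

# System databases start with "_"; user databases do not.
user_dbs = [name for name in dbs if not name.startswith("_")]
print(user_dbs)
```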
-.. _api/server/dbs_info:
-
-==============
-``/_dbs_info``
-==============
-
-.. versionadded:: 3.2
-
-.. http:get:: /_dbs_info
-    :synopsis: Returns information about all databases
-
-    Returns information about all the databases in the CouchDB instance.
-
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :query boolean descending: Return databases information in descending order
-      by key. Default is ``false``.
-    :query json endkey: Stop returning databases information when the specified
-      key is reached.
-    :query json end_key: Alias for ``endkey`` param
-    :query number limit: Limit the number of the returned databases information
-      to the specified number.
-    :query number skip: Skip this number of databases before starting to return
-      the results. Default is ``0``.
-    :query json startkey: Return databases information starting with the
-      specified key.
-    :query json start_key: Alias for ``startkey``.
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :code 200: Request completed successfully
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /_dbs_info HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Type: application/json
-        Date: Thu, 18 Nov 2021 14:37:35 GMT
-        Server: CouchDB (Erlang OTP/23)
-
-        [
-          {
-            "key": "animals",
-            "info": {
-              "db_name": "animals",
-              "update_seq": "52232",
-              "sizes": {
-                "file": 1178613587,
-                "external": 1713103872,
-                "active": 1162451555
-              },
-              "purge_seq": 0,
-              "doc_del_count": 0,
-              "doc_count": 52224,
-              "disk_format_version": 6,
-              "compact_running": false,
-              "cluster": {
-                "q": 8,
-                "n": 3,
-                "w": 2,
-                "r": 2
-              },
-              "instance_start_time": "0"
-            }
-          }
-        ]
-
-.. versionadded:: 2.2
-
-.. http:post:: /_dbs_info
-    :synopsis: Returns information for the specified databases
-
-    Returns information for the specified databases in the CouchDB
-    instance. This enables you to request information about multiple databases
-    in a single request, instead of multiple :get:`/{db}` requests.
-
-    :<header Accept: - :mimetype:`application/json`
-    :>header Content-Type: - :mimetype:`application/json`
-    :<json array keys: Array of database names to be requested
-    :code 200: Request completed successfully
-    :code 400: Missing ``keys`` field, or too many keys requested
-
-    **Request**:
-
-    .. code-block:: http
-
-        POST /_dbs_info HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-        Content-Type: application/json
-
-        {
-            "keys": [
-                "animals",
-                "plants"
-            ]
-        }
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Type: application/json
-        Date: Sat, 20 Dec 2017 06:57:48 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        [
-          {
-            "key": "animals",
-            "info": {
-              "db_name": "animals",
-              "update_seq": "52232",
-              "sizes": {
-                "file": 1178613587,
-                "external": 1713103872,
-                "active": 1162451555
-              },
-              "purge_seq": 0,
-              "doc_del_count": 0,
-              "doc_count": 52224,
-              "disk_format_version": 6,
-              "compact_running": false,
-              "cluster": {
-                "q": 8,
-                "n": 3,
-                "w": 2,
-                "r": 2
-              },
-              "instance_start_time": "0"
-            }
-          },
-          {
-            "key": "plants",
-            "info": {
-              "db_name": "plants",
-              "update_seq": "303",
-              "sizes": {
-                "file": 3872387,
-                "external": 2339,
-                "active": 67475
-              },
-              "purge_seq": 0,
-              "doc_del_count": 0,
-              "doc_count": 11,
-              "disk_format_version": 6,
-              "compact_running": false,
-              "cluster": {
-                "q": 8,
-                "n": 3,
-                "w": 2,
-                "r": 2
-              },
-              "instance_start_time": "0"
-            }
-          }
-        ]
-
-.. note::
-    The maximum number of databases that may be requested in a single
-    request is controlled by the `max_db_number_for_dbs_info_req`
-    configuration entry. The default limit is 100. Increasing the limit is
-    possible but adds load on the server, so it is advisable to issue more
-    requests of 100 databases each rather than a few requests with thousands
-    of databases at a time.
-
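Given the default limit of 100 databases per request, a client holding a larger list of names can split it into batches and issue one :post:`/_dbs_info` per batch. A minimal sketch of the batching logic only (the `db_*` names are made up; no HTTP is performed):

```python
def chunk(names, size=100):
    """Split a list of database names into batches of at most `size`."""
    for i in range(0, len(names), size):
        yield names[i:i + size]

# Each batch would become the "keys" array of one POST /_dbs_info request.
names = [f"db_{n}" for n in range(250)]
batches = list(chunk(names))
print(len(batches), [len(b) for b in batches])
```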
-.. _api/server/cluster_setup:
-
-===================
-``/_cluster_setup``
-===================
-
-.. versionadded:: 2.0
-.. http:get:: /_cluster_setup
-    :synopsis: Return the status of the cluster setup wizard
-
-    Returns the status of the node or cluster, per the cluster setup wizard.
-
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :query array ensure_dbs_exist: List of system databases to ensure exist
-        on the node/cluster. Defaults to
-        ``["_users","_replicator"]``.
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :>json string state: Current ``state`` of the node and/or cluster (see
-        below)
-    :code 200: Request completed successfully
-
-    The ``state`` returned indicates the current node or cluster state, and
-    is one of the following:
-
-    - ``cluster_disabled``: The current node is completely unconfigured.
-    - ``single_node_disabled``: The current node is configured as a single
-      (standalone) node (``[cluster] n=1``), but either does not have a
-      server-level admin user defined, or does not have the standard system
-      databases created. If the ``ensure_dbs_exist`` query parameter is
-      specified, the list of databases provided overrides the default list
-      of standard system databases.
-    - ``single_node_enabled``: The current node is configured as a single
-      (standalone) node, has a server-level admin user defined, and has
-      the ``ensure_dbs_exist`` list (explicit or default) of databases
-      created.
-    - ``cluster_enabled``: The current node has ``[cluster] n`` > 1, is not
-      bound to ``127.0.0.1`` and has a server-level admin user defined.
-      However, the full set of standard system databases have not been
-      created yet. If the ``ensure_dbs_exist`` query parameter is
-      specified, the list of databases provided overrides the default list
-      of standard system databases.
-    - ``cluster_finished``: The current node has ``[cluster] n`` > 1, is not
-      bound to ``127.0.0.1``, has a server-level admin user defined *and*
-      has the ``ensure_dbs_exist`` list (explicit or default) of databases
-      created.
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /_cluster_setup HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        X-CouchDB-Body-Time: 0
-        X-Couch-Request-ID: 5c058bdd37
-        Server: CouchDB/2.1.0-7f17678 (Erlang OTP/17)
-        Date: Sun, 30 Jul 2017 06:33:18 GMT
-        Content-Type: application/json
-        Content-Length: 29
-        Cache-Control: must-revalidate
-
-        {"state":"cluster_enabled"}
-
-.. http:post:: /_cluster_setup
-    :synopsis: Sets up a node as a single node or as part of a cluster.
-
-    Configure a node as a single (standalone) node, as part of a cluster,
-    or finalise a cluster.
-
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :<header Content-Type: :mimetype:`application/json`
-    :<json string action: - **enable_single_node**: Configure the current node
-                            as a single, standalone CouchDB server.
-                          - **enable_cluster**: Configure the local or remote
-                            node as one node, preparing it to be joined to a
-                            new CouchDB cluster.
-                          - **add_node**: Add the specified remote node to
-                            this cluster's list of nodes, joining it to the
-                            cluster.
-                          - **finish_cluster**: Finalise the cluster by
-                            creating the standard system databases.
-    :<json string bind_address: The IP address to which to bind the current
-        node. The special value ``0.0.0.0`` may be specified to bind to all
-        interfaces on the host. (enable_cluster and enable_single_node only)
-    :<json string username: The username of the server-level administrator to
-        create (enable_cluster and enable_single_node only), or the remote
-        server's administrator username (add_node).
-    :<json string password: The password for the server-level administrator to
-        create (enable_cluster and enable_single_node only), or the remote
-        server's administrator password (add_node).
-    :<json number port: The TCP port to which to bind this node
-        (enable_cluster and enable_single_node only) or the TCP port to which
-        to bind a remote node (add_node only).
-    :<json number node_count: The total number of nodes to be joined into
-        the cluster, including this one. Used to determine the value of the
-        cluster's ``n``, up to a maximum of 3. (enable_cluster only)
-    :<json string remote_node: The IP address of the remote node to setup as
-        part of this cluster's list of nodes. (enable_cluster only)
-    :<json string remote_current_user: The username of the server-level
-        administrator authorized on the remote node. (enable_cluster only)
-    :<json string remote_current_password: The password of the server-level
-        administrator authorized on the remote node. (enable_cluster only)
-    :<json string host: The remote node IP of the node to add to the cluster.
-        (add_node only)
-    :<json array ensure_dbs_exist: List of system databases to ensure exist
-        on the node/cluster. Defaults to
-        ``["_users","_replicator"]``.
-
-    *No example request/response included here. For a worked example, please
-    see* :ref:`cluster/setup/api`.
-
-.. _api/server/db_updates:
-
-================
-``/_db_updates``
-================
-
-.. versionadded:: 1.4
-
-.. http:get:: /_db_updates
-    :synopsis: Return the server changes of databases
-
-    Returns a list of all database events in the CouchDB instance. The
-    existence of the ``_global_changes`` database is required to use this
-    endpoint.
-
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :query string feed: - **normal**: Returns all historical DB changes, then
-                          closes the connection. *Default.*
-                        - **longpoll**: Closes the connection after the first
-                          event.
-                        - **continuous**: Send a line of JSON per event.
-                          Keeps the socket open until ``timeout``.
-                        - **eventsource**: Like ``continuous``, but sends
-                          the events in `EventSource
-                          <http://dev.w3.org/html5/eventsource/>`_ format.
-    :query number timeout: Number of *milliseconds* until CouchDB closes the
-      connection. Default is ``60000``.
-    :query number heartbeat: Period in *milliseconds* after which an empty
-        line is sent in the results. Only applicable for ``longpoll``,
-        ``continuous``, and ``eventsource`` feeds. Overrides any timeout to
-        keep the feed alive indefinitely. Default is ``60000``. May be ``true``
-        to use default value.
-    :query string since: Return only updates since the specified sequence ID.
-        If the sequence ID is specified but does not exist, all changes are returned.
-        May be the string ``now`` to begin showing only new updates.
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :>header Transfer-Encoding: ``chunked``
-    :>json array results: An array of database events. For ``longpoll`` and
-        ``continuous`` modes, the entire response is the contents of the
-        ``results`` array.
-    :>json string last_seq: The last sequence ID reported.
-    :code 200: Request completed successfully
-    :code 401: CouchDB Server Administrator privileges required
-
-    The ``results`` field of database updates:
-
-    :json string db_name: Database name.
-    :json string type: A database event is one of ``created``, ``updated``,
-      ``deleted``.
-    :json json seq: Update sequence of the event.
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /_db_updates HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Type: application/json
-        Date: Sat, 18 Mar 2017 19:01:35 GMT
-        Etag: "C1KU98Y6H0LGM7EQQYL6VSL07"
-        Server: CouchDB/2.0.0 (Erlang OTP/17)
-        Transfer-Encoding: chunked
-        X-Couch-Request-ID: ad87efc7ff
-        X-CouchDB-Body-Time: 0
-
-        {
-            "results":[
-                {"db_name":"mailbox","type":"created","seq":"1-g1AAAAFReJzLYWBg4MhgTmHgzcvPy09JdcjLz8gvLskBCjMlMiTJ____PyuDOZExFyjAnmJhkWaeaIquGIf2JAUgmWQPMiGRAZcaB5CaePxqEkBq6vGqyWMBkgwNQAqobD4h"},
-                {"db_name":"mailbox","type":"deleted","seq":"2-g1AAAAFReJzLYWBg4MhgTmHgzcvPy09JdcjLz8gvLskBCjMlMiTJ____PyuDOZEpFyjAnmJhkWaeaIquGIf2JAUgmWQPMiGRAZcaB5CaePxqEkBq6vGqyWMBkgwNQAqobD4hdQsg6vYTUncAou4-IXUPIOpA7ssCAIFHa60"}
-            ],
-            "last_seq": "2-g1AAAAFReJzLYWBg4MhgTmHgzcvPy09JdcjLz8gvLskBCjMlMiTJ____PyuDOZEpFyjAnmJhkWaeaIquGIf2JAUgmWQPMiGRAZcaB5CaePxqEkBq6vGqyWMBkgwNQAqobD4hdQsg6vYTUncAou4-IXUPIOpA7ssCAIFHa60"
-        }
-
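With ``feed=continuous``, each event arrives as one JSON object per line, and heartbeats arrive as empty lines. A sketch of parsing such a stream (the two sample lines below are shaped like the ``results`` entries above, with the sequence strings shortened for readability):

```python
import json

# Sample feed lines: one JSON event per line, with an empty
# heartbeat line in between (sequence strings shortened here).
stream = [
    '{"db_name": "mailbox", "type": "created", "seq": "1-g1AAAA"}',
    '',
    '{"db_name": "mailbox", "type": "deleted", "seq": "2-g1AAAA"}',
]

events = []
for line in stream:
    line = line.strip()
    if not line:        # heartbeat: an empty line keeps the feed alive
        continue
    events.append(json.loads(line))

print([e["type"] for e in events])
```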
-.. _api/server/membership:
-
-================
-``/_membership``
-================
-
-.. versionadded:: 2.0
-
-.. http:get:: /_membership
-    :synopsis: Returns a list of nodes
-
-    Displays the nodes that are part of the cluster as ``cluster_nodes``. The
-    field ``all_nodes`` displays all nodes this node knows about, including the
-    ones that are part of the cluster. This endpoint is useful when setting up
-    a cluster; see :ref:`cluster/nodes`.
-
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :code 200: Request completed successfully
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /_membership HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Type: application/json
-        Date: Sat, 11 Jul 2015 07:02:41 GMT
-        Server: CouchDB (Erlang/OTP)
-        Content-Length: 142
-
-        {
-            "all_nodes": [
-                "node1@127.0.0.1",
-                "node2@127.0.0.1",
-                "node3@127.0.0.1"
-            ],
-            "cluster_nodes": [
-                "node1@127.0.0.1",
-                "node2@127.0.0.1",
-                "node3@127.0.0.1"
-            ]
-        }
-
-.. _api/server/replicate:
-
-===============
-``/_replicate``
-===============
-
-.. versionchanged:: 3.3 Added `"bulk_get_attempts"` and `"bulk_get_docs"` fields to the replication history response object.
-
-.. http:post:: /_replicate
-    :synopsis: Starts or cancels the replication
-
-    Request, configure, or stop a replication operation.
-
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :<header Content-Type: :mimetype:`application/json`
-    :<json boolean cancel: Cancels the replication
-    :<json boolean continuous: Configure the replication to be continuous
-    :<json boolean create_target: Creates the target database.
-      Requires administrator privileges on the target server.
-    :<json object create_target_params: An object that contains parameters
-      to be used when creating the target database. Can include the
-      standard ``q`` and ``n`` parameters.
-    :<json boolean winning_revs_only: Replicate winning revisions only.
-    :<json array doc_ids: Array of document IDs to be synchronized.
-      ``doc_ids``, ``filter``, and ``selector`` are mutually exclusive.
-    :<json string filter: The name of a :ref:`filter function <filterfun>`.
-      ``doc_ids``, ``filter``, and ``selector`` are mutually exclusive.
-    :<json json selector: A :ref:`selector <find/selectors>` to filter
-      documents for synchronization. Has the same behavior as the
-      :ref:`selector objects <selectorobj>` in replication documents.
-      ``doc_ids``, ``filter``, and ``selector`` are mutually exclusive.
-    :<json string source_proxy: Address of a proxy server through which
-      replication from the source should occur (protocol can be "http" or
-      "socks5")
-    :<json string target_proxy: Address of a proxy server through which
-      replication to the target should occur (protocol can be "http" or
-      "socks5")
-    :<json string/object source: Fully qualified source database URL or an
-      object which contains the full URL of the source database with additional
-      parameters such as headers, e.g. 'http://example.com/source_db_name' or
-      {"url":"url in here", "headers": {"header1":"value1", ...}}. For
-      backwards compatibility, CouchDB 3.x will auto-convert bare database
-      names by prepending the address and port CouchDB is listening on, to
-      form a complete URL. This behaviour is deprecated in 3.x and will be
-      removed in CouchDB 4.0.
-    :<json string/object target: Fully qualified target database URL or an
-      object which contains the full URL of the target database with additional
-      parameters such as headers, e.g. 'http://example.com/target_db_name' or
-      {"url":"url in here", "headers": {"header1":"value1", ...}}. For
-      backwards compatibility, CouchDB 3.x will auto-convert bare database
-      names by prepending the address and port CouchDB is listening on, to
-      form a complete URL. This behaviour is deprecated in 3.x and will be
-      removed in CouchDB 4.0.
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :>json array history: Replication history (see below)
-    :>json boolean ok: Replication status
-    :>json number replication_id_version: Replication protocol version
-    :>json string session_id: Unique session ID
-    :>json number source_last_seq: Last sequence number read from source
-      database
-    :code 200: Replication request successfully completed
-    :code 202: Continuous replication request has been accepted
-    :code 400: Invalid JSON data
-    :code 401: CouchDB Server Administrator privileges required
-    :code 404: Either the source or target DB is not found or attempt to
-      cancel unknown replication task
-    :code 500: JSON specification was invalid
-
-    The specification of the replication request is controlled through the
-    JSON content of the request. The JSON should be an object with fields
-    defining the source, target, and other options.
-
-    The `Replication history` is an array of objects with the following
-    structure:
-
-    :json number doc_write_failures: Number of document write failures
-    :json number docs_read:  Number of documents read
-    :json number docs_written:  Number of documents written to target
-    :json number bulk_get_attempts: The total count of attempted doc revisions
-        fetched with ``_bulk_get``.
-    :json number bulk_get_docs: The total count of successful docs fetched with
-        ``_bulk_get``.
-    :json number end_last_seq:  Last sequence number in changes stream
-    :json string end_time:  Date/Time replication operation completed in
-      :rfc:`2822` format
-    :json number missing_checked:  Number of missing documents checked
-    :json number missing_found:  Number of missing documents found
-    :json number recorded_seq:  Last recorded sequence number
-    :json string session_id:  Session ID for this replication operation
-    :json number start_last_seq:  First sequence number in changes stream
-    :json string start_time:  Date/Time replication operation started in
-      :rfc:`2822` format
-
-.. note::
-    As of CouchDB 2.0.0, fully qualified URLs are required for both the
-    replication ``source`` and ``target`` parameters.
-
-    **Request**
-
-    .. code-block:: http
-
-        POST /_replicate HTTP/1.1
-        Accept: application/json
-        Content-Length: 80
-        Content-Type: application/json
-        Host: localhost:5984
-
-        {
-            "source": "http://127.0.0.1:5984/db_a",
-            "target": "http://127.0.0.1:5984/db_b"
-        }
-
-    **Response**
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Length: 692
-        Content-Type: application/json
-        Date: Sun, 11 Aug 2013 20:38:50 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        {
-            "history": [
-                {
-                    "doc_write_failures": 0,
-                    "docs_read": 10,
-                    "bulk_get_attempts": 10,
-                    "bulk_get_docs": 10,
-                    "docs_written": 10,
-                    "end_last_seq": 28,
-                    "end_time": "Sun, 11 Aug 2013 20:38:50 GMT",
-                    "missing_checked": 10,
-                    "missing_found": 10,
-                    "recorded_seq": 28,
-                    "session_id": "142a35854a08e205c47174d91b1f9628",
-                    "start_last_seq": 1,
-                    "start_time": "Sun, 11 Aug 2013 20:38:50 GMT"
-                },
-                {
-                    "doc_write_failures": 0,
-                    "docs_read": 1,
-                    "bulk_get_attempts": 1,
-                    "bulk_get_docs": 1,
-                    "docs_written": 1,
-                    "end_last_seq": 1,
-                    "end_time": "Sat, 10 Aug 2013 15:41:54 GMT",
-                    "missing_checked": 1,
-                    "missing_found": 1,
-                    "recorded_seq": 1,
-                    "session_id": "6314f35c51de3ac408af79d6ee0c1a09",
-                    "start_last_seq": 0,
-                    "start_time": "Sat, 10 Aug 2013 15:41:54 GMT"
-                }
-            ],
-            "ok": true,
-            "replication_id_version": 3,
-            "session_id": "142a35854a08e205c47174d91b1f9628",
-            "source_last_seq": 28
-        }
-
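A client can total the ``history`` entries to sanity-check a completed replication, for example confirming that every document read was written and that no write failures occurred. A sketch over the response above (history abbreviated to the counting fields):

```python
import json

# Abbreviated history from the example response above.
history = json.loads('''
[
    {"docs_read": 10, "docs_written": 10, "doc_write_failures": 0},
    {"docs_read": 1, "docs_written": 1, "doc_write_failures": 0}
]
''')

total_read = sum(h["docs_read"] for h in history)
total_written = sum(h["docs_written"] for h in history)
failures = sum(h["doc_write_failures"] for h in history)
print(total_read, total_written, failures)
```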
-Replication Operation
-=====================
-
-The aim of replication is that, at the end of the process, all active
-documents in the source database are also in the destination database, and all
-documents that were deleted in the source database are also deleted (if they
-exist) from the destination database.
-
-Replication can be described as either push or pull replication:
-
-- *Pull replication* is where the ``source`` is the remote CouchDB instance,
-  and the ``target`` is the local database.
-
-  Pull replication is the better choice if your source database has a
-  permanent IP address and your destination (local) database may have a
-  dynamically assigned IP address (for example, through DHCP). This is
-  particularly important if you are replicating to a mobile or other device
-  from a central server.
-
-- *Push replication* is where the ``source`` is a local database, and
-  ``target`` is a remote database.
-
-Specifying the Source and Target Database
-=========================================
-
-You must use the URL specification of the CouchDB database if you want to
-perform replication in either of the following two situations:
-
-- Replication with a remote database (i.e. another instance of CouchDB on the
-  same host, or a different host)
-
-- Replication with a database that requires authentication
-
-For example, to request replication between a database local to the CouchDB
-instance to which you send the request and a remote database, you might use
-the following request:
-
-.. code-block:: http
-
-    POST http://couchdb:5984/_replicate HTTP/1.1
-    Content-Type: application/json
-    Accept: application/json
-
-    {
-        "source" : "recipes",
-        "target" : "http://couchdb-remote:5984/recipes"
-    }
-
-In all cases, the requested databases in the ``source`` and ``target``
-specification must exist. If they do not, an error will be returned within the
-JSON object:
-
-.. code-block:: javascript
-
-    {
-        "error" : "db_not_found",
-        "reason" : "could not open http://couchdb-remote:5984/ol1ka/"
-    }
-
-You can create the target database (providing your user credentials allow it)
-by adding the ``create_target`` field to the request object:
-
-.. code-block:: http
-
-    POST http://couchdb:5984/_replicate HTTP/1.1
-    Content-Type: application/json
-    Accept: application/json
-
-    {
-        "create_target" : true,
-        "source" : "recipes",
-        "target" : "http://couchdb-remote:5984/recipes"
-    }
-
-The ``create_target`` field is not destructive. If the database already
-exists, the replication proceeds as normal.
-
-Single Replication
-==================
-
-You can request replication of a database so that the two databases can be
-synchronized. By default, the replication process occurs only once and
-synchronizes the two databases. For example, you can request a single
-synchronization between two databases by supplying the ``source`` and
-``target`` fields within the request JSON content.
-
-.. code-block:: http
-
-    POST http://couchdb:5984/_replicate HTTP/1.1
-    Accept: application/json
-    Content-Type: application/json
-
-    {
-        "source" : "recipes",
-        "target" : "recipes-snapshot",
-    }
-
-In the above example, the databases ``recipes`` and ``recipes-snapshot`` will
-be synchronized. These databases are local to the CouchDB instance where the
-request was made. The response will be a JSON structure containing the success
-(or failure) of the synchronization process, and statistics about the process:
-
-.. code-block:: javascript
-
-    {
-        "ok" : true,
-        "history" : [
-            {
-                "docs_read" : 1000,
-                "bulk_get_attempts": 1000,
-                "bulk_get_docs": 1000,
-                "session_id" : "52c2370f5027043d286daca4de247db0",
-                "recorded_seq" : 1000,
-                "end_last_seq" : 1000,
-                "doc_write_failures" : 0,
-                "start_time" : "Thu, 28 Oct 2010 10:24:13 GMT",
-                "start_last_seq" : 0,
-                "end_time" : "Thu, 28 Oct 2010 10:24:14 GMT",
-                "missing_checked" : 0,
-                "docs_written" : 1000,
-                "missing_found" : 1000
-            }
-        ],
-        "session_id" : "52c2370f5027043d286daca4de247db0",
-        "source_last_seq" : 1000
-    }
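
The statistics in a response like this can be read out programmatically. A minimal sketch in Python, using only the field names shown in the example above (the helper is hypothetical):

```python
def summarize_replication(result):
    # Summarize a /_replicate response: overall success plus
    # read/write counts from the most recent history entry.
    latest = result["history"][0]  # most recent session first
    return {
        "ok": result["ok"],
        "docs_read": latest["docs_read"],
        "docs_written": latest["docs_written"],
        "write_failures": latest["doc_write_failures"],
    }

# A trimmed-down version of the response shown above:
response = {
    "ok": True,
    "history": [{
        "docs_read": 1000,
        "docs_written": 1000,
        "doc_write_failures": 0,
    }],
}
summary = summarize_replication(response)
```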
-
-Continuous Replication
-======================
-
-Synchronization of a database with the previously noted methods happens only
-once, at the time the replicate request is made. To have the target database
-permanently replicated from the source, you must set the ``continuous`` field
-of the JSON object within the request to true.
-
-With continuous replication changes in the source database are replicated to
-the target database in perpetuity until you specifically request that
-replication ceases.
-
-.. code-block:: http
-
-    POST http://couchdb:5984/_replicate HTTP/1.1
-    Accept: application/json
-    Content-Type: application/json
-
-    {
-        "continuous" : true
-        "source" : "recipes",
-        "target" : "http://couchdb-remote:5984/recipes",
-    }
-
-Changes will be replicated between the two databases as long as a network
-connection is available between the two instances.
-
-.. note::
-    To keep two databases synchronized with each other, you need to set up
-    replication in both directions; that is, you must replicate from ``source``
-    to ``target``, and separately from ``target`` to ``source``.
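
Since two-way synchronization requires one request per direction, the two request bodies can be derived from one another. A sketch (Python; the helper name is our own):

```python
def bidirectional_bodies(db_a, db_b, continuous=True):
    # The two /_replicate request bodies needed to keep db_a and db_b
    # synchronized: replication must be set up in both directions.
    base = {"continuous": True} if continuous else {}
    return (
        dict(base, source=db_a, target=db_b),
        dict(base, source=db_b, target=db_a),
    )

# One local database, one (placeholder) remote database:
a_to_b, b_to_a = bidirectional_bodies(
    "recipes", "http://couchdb-remote:5984/recipes")
```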
-
-Canceling Continuous Replication
-================================
-
-You can cancel continuous replication by adding the ``cancel`` field to the
-JSON request object and setting the value to true. Note that the structure of
-the request must be identical to the original for the cancellation request to
-be honoured. For example, if you requested continuous replication, the
-cancellation request must also contain the ``continuous`` field.
-
-For example, the replication request:
-
-.. code-block:: http
-
-    POST http://couchdb:5984/_replicate HTTP/1.1
-    Content-Type: application/json
-    Accept: application/json
-
-    {
-        "source" : "recipes",
-        "target" : "http://couchdb-remote:5984/recipes",
-        "create_target" : true,
-        "continuous" : true
-    }
-
-Must be canceled using the request:
-
-.. code-block:: http
-
-    POST http://couchdb:5984/_replicate HTTP/1.1
-    Accept: application/json
-    Content-Type: application/json
-
-    {
-        "cancel" : true,
-        "continuous" : true
-        "create_target" : true,
-        "source" : "recipes",
-        "target" : "http://couchdb-remote:5984/recipes",
-    }
-
-Requesting cancellation of a replication that does not exist results in a 404
-error.
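
Because the cancellation body must mirror the original request, it is safer to derive it than to retype it. A minimal Python sketch (the helper is hypothetical; only the documented fields are used):

```python
def cancel_body(original_body):
    # Cancellation must mirror the original request exactly, so derive
    # the cancel body from the original rather than retyping it.
    body = dict(original_body)  # shallow copy; leave the original intact
    body["cancel"] = True
    return body

start = {
    "source": "recipes",
    "target": "http://couchdb-remote:5984/recipes",
    "create_target": True,
    "continuous": True,
}
stop = cancel_body(start)
# stop carries the same source/target/continuous/create_target fields,
# so CouchDB can match it to the running replication.
```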
-
-.. _api/server/_scheduler/jobs:
-
-====================
-``/_scheduler/jobs``
-====================
-
-.. http:get:: /_scheduler/jobs
-    :synopsis: Retrieve information about replication jobs
-
-    List of replication jobs. Includes replications created via the
-    :ref:`api/server/replicate` endpoint as well as those created from
-    replication documents. Does not include replications which have completed
-    or have failed to start because replication documents were malformed. Each
-    job description will include source and target information, the
-    replication ID, a history of recent events, and a few other things.
-
-    :<header Accept: - :mimetype:`application/json`
-    :>header Content-Type: - :mimetype:`application/json`
-    :query number limit: How many results to return
-    :query number skip: How many results to skip starting at the beginning,
-                        ordered by replication ID
-    :>json number offset: How many results were skipped
-    :>json number total_rows: Total number of replication jobs
-    :>json string id: Replication ID.
-    :>json string database: Replication document database
-    :>json string doc_id: Replication document ID
-    :>json list history: Timestamped history of events as a list of objects
-    :>json string pid: Replication process ID
-    :>json string node: Cluster node where the job is running
-    :>json string source: Replication source
-    :>json string target: Replication target
-    :>json string start_time: Timestamp of when the replication was started
-    :code 200: Request completed successfully
-    :code 401: CouchDB Server Administrator privileges required
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /_scheduler/jobs HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Length: 1690
-        Content-Type: application/json
-        Date: Sat, 29 Apr 2017 05:05:16 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        {
-            "jobs": [
-                {
-                    "database": "_replicator",
-                    "doc_id": "cdyno-0000001-0000003",
-                    "history": [
-                        {
-                            "timestamp": "2017-04-29T05:01:37Z",
-                            "type": "started"
-                        },
-                        {
-                            "timestamp": "2017-04-29T05:01:37Z",
-                            "type": "added"
-                        }
-                    ],
-                    "id": "8f5b1bd0be6f9166ccfd36fc8be8fc22+continuous",
-                    "info": {
-                        "changes_pending": 0,
-                        "checkpointed_source_seq": "113-g1AAAACTeJzLYWBgYMpgTmHgz8tPSTV0MDQy1zMAQsMckEQiQ1L9____szKYE01ygQLsZsYGqcamiZjKcRqRxwIkGRqA1H-oSbZgk1KMLCzTDE0wdWUBAF6HJIQ",
-                        "doc_write_failures": 0,
-                        "docs_read": 113,
-                        "docs_written": 113,
-                        "bulk_get_attempts": 113,
-                        "bulk_get_docs": 113,
-                        "missing_revisions_found": 113,
-                        "revisions_checked": 113,
-                        "source_seq": "113-g1AAAACTeJzLYWBgYMpgTmHgz8tPSTV0MDQy1zMAQsMckEQiQ1L9____szKYE01ygQLsZsYGqcamiZjKcRqRxwIkGRqA1H-oSbZgk1KMLCzTDE0wdWUBAF6HJIQ",
-                        "through_seq": "113-g1AAAACTeJzLYWBgYMpgTmHgz8tPSTV0MDQy1zMAQsMckEQiQ1L9____szKYE01ygQLsZsYGqcamiZjKcRqRxwIkGRqA1H-oSbZgk1KMLCzTDE0wdWUBAF6HJIQ"
-                    },
-                    "node": "node1@127.0.0.1",
-                    "pid": "<0.1850.0>",
-                    "source": "http://myserver.com/foo",
-                    "start_time": "2017-04-29T05:01:37Z",
-                    "target": "http://adm:*****@localhost:15984/cdyno-0000003/",
-                    "user": null
-                },
-                {
-                    "database": "_replicator",
-                    "doc_id": "cdyno-0000001-0000002",
-                    "history": [
-                        {
-                            "timestamp": "2017-04-29T05:01:37Z",
-                            "type": "started"
-                        },
-                        {
-                            "timestamp": "2017-04-29T05:01:37Z",
-                            "type": "added"
-                        }
-                    ],
-                    "id": "e327d79214831ca4c11550b4a453c9ba+continuous",
-                    "info": {
-                        "changes_pending": null,
-                        "checkpointed_source_seq": 0,
-                        "doc_write_failures": 0,
-                        "docs_read": 12,
-                        "docs_written": 12,
-                        "bulk_get_attempts": 12,
-                        "bulk_get_docs": 12,
-                        "missing_revisions_found": 12,
-                        "revisions_checked": 12,
-                        "source_seq": "12-g1AAAACTeJzLYWBgYMpgTmHgz8tPSTV0MDQy1zMAQsMckEQiQ1L9____szKYE1lzgQLsBsZm5pZJJpjKcRqRxwIkGRqA1H-oSexgk4yMkhITjS0wdWUBADfEJBg",
-                        "through_seq": "12-g1AAAACTeJzLYWBgYMpgTmHgz8tPSTV0MDQy1zMAQsMckEQiQ1L9____szKYE1lzgQLsBsZm5pZJJpjKcRqRxwIkGRqA1H-oSexgk4yMkhITjS0wdWUBADfEJBg"
-                    },
-                    "node": "node2@127.0.0.1",
-                    "pid": "<0.1757.0>",
-                    "source": "http://myserver.com/foo",
-                    "start_time": "2017-04-29T05:01:37Z",
-                    "target": "http://adm:*****@localhost:15984/cdyno-0000002/",
-                    "user": null
-                }
-            ],
-            "offset": 0,
-            "total_rows": 2
-        }
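
The ``limit`` and ``skip`` parameters documented above page through the job list in replication-ID order. A sketch of building page URLs (Python; the host and page size are placeholders):

```python
from urllib.parse import urlencode

def jobs_page_url(base, limit, page):
    # One page of /_scheduler/jobs results; pages counted from 0,
    # results ordered by replication ID.
    query = urlencode({"limit": limit, "skip": page * limit})
    return f"{base}/_scheduler/jobs?{query}"

jobs_page_url("http://localhost:5984", 25, 2)
# → http://localhost:5984/_scheduler/jobs?limit=25&skip=50
```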
-
-.. _api/server/_scheduler/docs:
-
-====================
-``/_scheduler/docs``
-====================
-
-.. versionchanged:: 2.1.0 Use this endpoint to monitor the state of
-                    document-based replications. Previously it was necessary
-                    to poll both documents and ``_active_tasks`` to get a
-                    complete state summary
-
-.. versionchanged:: 3.0.0 In error states the `"info"` field switched
-                    from being a string to being an object
-
-.. versionchanged:: 3.3 Added `"bulk_get_attempts"` and `"bulk_get_docs"` to
-                    the `"info"` object.
-
-.. http:get:: /_scheduler/docs
-    :synopsis: Retrieve information about replication documents from the
-               ``_replicator`` database.
-
-    List of replication document states. Includes information about all the
-    documents, even in ``completed`` and ``failed`` states. For each document
-    it returns the document ID, the database, the replication ID, source and
-    target, and other information.
-
-    :<header Accept: - :mimetype:`application/json`
-    :>header Content-Type: - :mimetype:`application/json`
-    :query number limit: How many results to return
-    :query number skip: How many results to skip starting at the beginning, if
-                        ordered by document ID
-    :>json number offset: How many results were skipped
-    :>json number total_rows: Total number of replication documents.
-    :>json string id: Replication ID, or ``null`` if state is ``completed`` or
-                      ``failed``
-    :>json string state: One of the following states (see :ref:`replicator/states`
-                         for descriptions): ``initializing``, ``running``,
-                         ``completed``, ``pending``, ``crashing``, ``error``,
-                         ``failed``
-    :>json string database: Database where replication document came from
-    :>json string doc_id: Replication document ID
-    :>json string node: Cluster node where the job is running
-    :>json string source: Replication source
-    :>json string target: Replication target
-    :>json string start_time: Timestamp of when the replication was started
-    :>json string last_updated: Timestamp of last state update
-    :>json object info: Will contain additional information about the
-                        state. For errors, this will be an object with
-                        an ``"error"`` field and string value. For
-                        success states, see below.
-    :>json number error_count: Consecutive errors count. Indicates how many
-                               times in a row this replication has crashed.
-                               Replication will be retried with an exponential
-                               backoff based on this number. As soon as the
-                               replication succeeds this count is reset to 0.
-                               It can be used to get an idea of why a
-                               particular replication is not making progress.
-    :code 200: Request completed successfully
-    :code 401: CouchDB Server Administrator privileges required
-
-    The ``info`` field of a scheduler doc:
-
-    :json number revisions_checked: The count of revisions which have been
-        checked since this replication began.
-    :json number missing_revisions_found: The count of revisions which were
-        found on the source, but missing from the target.
-    :json number docs_read: The count of docs which have been read from the
-        source.
-    :json number docs_written: The count of docs which have been written to the
-        target.
-    :json number bulk_get_attempts: The total count of attempted doc revisions
-        fetched with ``_bulk_get``.
-    :json number bulk_get_docs: The total count of successful docs fetched with
-        ``_bulk_get``.
-    :json number changes_pending: The count of changes not yet replicated.
-    :json number doc_write_failures: The count of docs which failed to be
-        written to the target.
-    :json object checkpointed_source_seq: The source sequence id which was last
-        successfully replicated.
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /_scheduler/docs HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Content-Type: application/json
-        Date: Sat, 29 Apr 2017 05:10:08 GMT
-        Server: CouchDB (Erlang/OTP)
-        Transfer-Encoding: chunked
-
-        {
-            "docs": [
-                {
-                    "database": "_replicator",
-                    "doc_id": "cdyno-0000001-0000002",
-                    "error_count": 0,
-                    "id": "e327d79214831ca4c11550b4a453c9ba+continuous",
-                    "info": {
-                        "changes_pending": 15,
-                        "checkpointed_source_seq": "60-g1AAAACTeJzLYWBgYMpgTmHgz8tPSTV0MDQy1zMAQsMckEQiQ1L9____szKYEyVygQLsBsZm5pZJJpjKcRqRxwIkGRqA1H-oSSpgk4yMkhITjS0wdWUBAENCJEg",
-                        "doc_write_failures": 0,
-                        "docs_read": 67,
-                        "bulk_get_attempts": 67,
-                        "bulk_get_docs": 67,
-                        "docs_written": 67,
-                        "missing_revisions_found": 67,
-                        "revisions_checked": 67,
-                        "source_seq": "67-g1AAAACTeJzLYWBgYMpgTmHgz8tPSTV0MDQy1zMAQsMckEQiQ1L9____szKYE2VygQLsBsZm5pZJJpjKcRqRxwIkGRqA1H-oSepgk4yMkhITjS0wdWUBAEVKJE8",
-                        "through_seq": "67-g1AAAACTeJzLYWBgYMpgTmHgz8tPSTV0MDQy1zMAQsMckEQiQ1L9____szKYE2VygQLsBsZm5pZJJpjKcRqRxwIkGRqA1H-oSepgk4yMkhITjS0wdWUBAEVKJE8"
-                    },
-                    "last_updated": "2017-04-29T05:01:37Z",
-                    "node": "node2@127.0.0.1",
-                    "source_proxy": null,
-                    "target_proxy": null,
-                    "source": "http://myserver.com/foo",
-                    "start_time": "2017-04-29T05:01:37Z",
-                    "state": "running",
-                    "target": "http://adm:*****@localhost:15984/cdyno-0000002/"
-                },
-                {
-                    "database": "_replicator",
-                    "doc_id": "cdyno-0000001-0000003",
-                    "error_count": 0,
-                    "id": "8f5b1bd0be6f9166ccfd36fc8be8fc22+continuous",
-                    "info": {
-                        "changes_pending": null,
-                        "checkpointed_source_seq": 0,
-                        "doc_write_failures": 0,
-                        "bulk_get_attempts": 12,
-                        "bulk_get_docs": 12,
-                        "docs_read": 12,
-                        "docs_written": 12,
-                        "missing_revisions_found": 12,
-                        "revisions_checked": 12,
-                        "source_seq": "12-g1AAAACTeJzLYWBgYMpgTmHgz8tPSTV0MDQy1zMAQsMckEQiQ1L9____szKYE1lzgQLsBsZm5pZJJpjKcRqRxwIkGRqA1H-oSexgk4yMkhITjS0wdWUBADfEJBg",
-                        "through_seq": "12-g1AAAACTeJzLYWBgYMpgTmHgz8tPSTV0MDQy1zMAQsMckEQiQ1L9____szKYE1lzgQLsBsZm5pZJJpjKcRqRxwIkGRqA1H-oSexgk4yMkhITjS0wdWUBADfEJBg"
-                    },
-                    "last_updated": "2017-04-29T05:01:37Z",
-                    "node": "node1@127.0.0.1",
-                    "source_proxy": null,
-                    "target_proxy": null,
-                    "source": "http://myserver.com/foo",
-                    "start_time": "2017-04-29T05:01:37Z",
-                    "state": "running",
-                    "target": "http://adm:*****@localhost:15984/cdyno-0000003/"
-                }
-            ],
-            "offset": 0,
-            "total_rows": 2
-        }
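
The ``info`` object gives enough to estimate how far along a running replication is. A hedged sketch combining ``docs_written`` with ``changes_pending`` (Python; note ``changes_pending`` may be ``null``, as in the second document above):

```python
def replication_progress(info):
    # Rough fraction complete from a scheduler doc's "info" object.
    # changes_pending may be null (None) before it is first computed.
    pending = info.get("changes_pending")
    if pending is None:
        return None  # not enough information yet
    written = info.get("docs_written", 0)
    total = written + pending
    return written / total if total else 1.0

replication_progress({"docs_written": 67, "changes_pending": 15})  # ≈ 0.82
```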
-
-.. http:get:: /_scheduler/docs/{replicator_db}
-    :synopsis: Retrieve information about replication documents from a specific
-               replicator database.
-
-    Get information about replication documents from a replicator database.
-    The default replicator database is ``_replicator`` but other replicator
-    databases can exist if their name ends with the suffix ``/_replicator``.
-
-    .. note:: As a convenience, slashes (``/``) in replicator db names do not
-       have to be escaped. So ``/_scheduler/docs/other/_replicator`` is valid
-       and equivalent to ``/_scheduler/docs/other%2f_replicator``
-
-    :<header Accept: - :mimetype:`application/json`
-    :>header Content-Type: - :mimetype:`application/json`
-    :query number limit: How many results to return
-    :query number skip: How many results to skip starting at the beginning, if
-                        ordered by document ID
-    :>json number offset: How many results were skipped
-    :>json number total_rows: Total number of replication documents.
-    :>json string id: Replication ID, or ``null`` if state is ``completed`` or
-                      ``failed``
-    :>json string state: One of the following states (see :ref:`replicator/states`
-                         for descriptions): ``initializing``, ``running``,
-                         ``completed``, ``pending``, ``crashing``, ``error``,
-                         ``failed``
-    :>json string database: Database where replication document came from
-    :>json string doc_id: Replication document ID
-    :>json string node: Cluster node where the job is running
-    :>json string source: Replication source
-    :>json string target: Replication target
-    :>json string start_time: Timestamp of when the replication was started
-    :>json string last_updated: Timestamp of last state update
-    :>json object info: Will contain additional information about the
-                        state. For errors, this will be an object with
-                        an ``"error"`` field and string value. For
-                        success states, see below.
-    :>json number error_count: Consecutive errors count. Indicates how many
-                               times in a row this replication has crashed.
-                               Replication will be retried with an exponential
-                               backoff based on this number. As soon as the
-                               replication succeeds this count is reset to 0.
-                               It can be used to get an idea of why a
-                               particular replication is not making progress.
-    :code 200: Request completed successfully
-    :code 401: CouchDB Server Administrator privileges required
-
-    The ``info`` field of a scheduler doc:
-
-    :json number revisions_checked: The count of revisions which have been
-        checked since this replication began.
-    :json number missing_revisions_found: The count of revisions which were
-        found on the source, but missing from the target.
-    :json number docs_read: The count of docs which have been read from the
-        source.
-    :json number docs_written: The count of docs which have been written to the
-        target.
-    :json number bulk_get_attempts: The total count of attempted doc revisions
-        fetched with ``_bulk_get``.
-    :json number bulk_get_docs: The total count of successful docs fetched with
-        ``_bulk_get``.
-    :json number changes_pending: The count of changes not yet replicated.
-    :json number doc_write_failures: The count of docs which failed to be
-        written to the target.
-    :json object checkpointed_source_seq: The source sequence id which was last
-        successfully replicated.
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /_scheduler/docs/other/_replicator HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Content-Type: application/json
-        Date: Sat, 29 Apr 2017 05:10:08 GMT
-        Server: CouchDB (Erlang/OTP)
-        Transfer-Encoding: chunked
-
-        {
-            "docs": [
-                {
-                    "database": "other/_replicator",
-                    "doc_id": "cdyno-0000001-0000002",
-                    "error_count": 0,
-                    "id": "e327d79214831ca4c11550b4a453c9ba+continuous",
-                    "info": {
-                        "changes_pending": 0,
-                        "checkpointed_source_seq": "60-g1AAAACTeJzLYWBgYMpgTmHgz8tPSTV0MDQy1zMAQsMckEQiQ1L9____szKYEyVygQLsBsZm5pZJJpjKcRqRxwIkGRqA1H-oSSpgk4yMkhITjS0wdWUBAENCJEg",
-                        "doc_write_failures": 0,
-                        "docs_read": 67,
-                        "bulk_get_attempts": 67,
-                        "bulk_get_docs": 67,
-                        "docs_written": 67,
-                        "missing_revisions_found": 67,
-                        "revisions_checked": 67,
-                        "source_seq": "67-g1AAAACTeJzLYWBgYMpgTmHgz8tPSTV0MDQy1zMAQsMckEQiQ1L9____szKYE2VygQLsBsZm5pZJJpjKcRqRxwIkGRqA1H-oSepgk4yMkhITjS0wdWUBAEVKJE8",
-                        "through_seq": "67-g1AAAACTeJzLYWBgYMpgTmHgz8tPSTV0MDQy1zMAQsMckEQiQ1L9____szKYE2VygQLsBsZm5pZJJpjKcRqRxwIkGRqA1H-oSepgk4yMkhITjS0wdWUBAEVKJE8"
-                    },
-                    "last_updated": "2017-04-29T05:01:37Z",
-                    "node": "node2@127.0.0.1",
-                    "source_proxy": null,
-                    "target_proxy": null,
-                    "source": "http://myserver.com/foo",
-                    "start_time": "2017-04-29T05:01:37Z",
-                    "state": "running",
-                    "target": "http://adm:*****@localhost:15984/cdyno-0000002/"
-                }
-            ],
-            "offset": 0,
-            "total_rows": 1
-        }
-
-.. http:get:: /_scheduler/docs/{replicator_db}/{docid}
-    :synopsis: Retrieve information about a particular replication document
-
-    .. note:: As a convenience, slashes (``/``) in replicator db names do not
-       have to be escaped. So ``/_scheduler/docs/other/_replicator`` is valid
-       and equivalent to ``/_scheduler/docs/other%2f_replicator``
-
-    :<header Accept: - :mimetype:`application/json`
-    :>header Content-Type: - :mimetype:`application/json`
-    :>json string id: Replication ID, or ``null`` if state is ``completed`` or
-                      ``failed``
-    :>json string state: One of the following states (see :ref:`replicator/states`
-                         for descriptions): ``initializing``, ``running``,
-                         ``completed``, ``pending``, ``crashing``, ``error``,
-                         ``failed``
-    :>json string database: Database where replication document came from
-    :>json string doc_id: Replication document ID
-    :>json string node: Cluster node where the job is running
-    :>json string source: Replication source
-    :>json string target: Replication target
-    :>json string start_time: Timestamp of when the replication was started
-    :>json string last_updated: Timestamp of last state update
-    :>json object info: Will contain additional information about the
-                        state. For errors, this will be an object with
-                        an ``"error"`` field and string value. For
-                        success states, see below.
-    :>json number error_count: Consecutive errors count. Indicates how many
-                               times in a row this replication has crashed.
-                               Replication will be retried with an exponential
-                               backoff based on this number. As soon as the
-                               replication succeeds this count is reset to 0.
-                               It can be used to get an idea of why a
-                               particular replication is not making progress.
-    :code 200: Request completed successfully
-    :code 401: CouchDB Server Administrator privileges required
-
-    The ``info`` field of a scheduler doc:
-
-    :json number revisions_checked: The count of revisions which have been
-        checked since this replication began.
-    :json number missing_revisions_found: The count of revisions which were
-        found on the source, but missing from the target.
-    :json number docs_read: The count of docs which have been read from the
-        source.
-    :json number docs_written: The count of docs which have been written to the
-        target.
-    :json number bulk_get_attempts: The total count of attempted doc revisions
-        fetched with ``_bulk_get``.
-    :json number bulk_get_docs: The total count of successful docs fetched with
-        ``_bulk_get``.
-    :json number changes_pending: The count of changes not yet replicated.
-    :json number doc_write_failures: The count of docs which failed to be
-        written to the target.
-    :json object checkpointed_source_seq: The source sequence id which was last
-        successfully replicated.
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /_scheduler/docs/other/_replicator/cdyno-0000001-0000002 HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Content-Type: application/json
-        Date: Sat, 29 Apr 2017 05:10:08 GMT
-        Server: CouchDB (Erlang/OTP)
-        Transfer-Encoding: chunked
-
-        {
-            "database": "other/_replicator",
-            "doc_id": "cdyno-0000001-0000002",
-            "error_count": 0,
-            "id": "e327d79214831ca4c11550b4a453c9ba+continuous",
-            "info": {
-                "changes_pending": 0,
-                "checkpointed_source_seq": "60-g1AAAACTeJzLYWBgYMpgTmHgz8tPSTV0MDQy1zMAQsMckEQiQ1L9____szKYEyVygQLsBsZm5pZJJpjKcRqRxwIkGRqA1H-oSSpgk4yMkhITjS0wdWUBAENCJEg",
-                "doc_write_failures": 0,
-                "docs_read": 67,
-                "bulk_get_attempts": 67,
-                "bulk_get_docs": 67,
-                "docs_written": 67,
-                "missing_revisions_found": 67,
-                "revisions_checked": 67,
-                "source_seq": "67-g1AAAACTeJzLYWBgYMpgTmHgz8tPSTV0MDQy1zMAQsMckEQiQ1L9____szKYE2VygQLsBsZm5pZJJpjKcRqRxwIkGRqA1H-oSepgk4yMkhITjS0wdWUBAEVKJE8",
-                "through_seq": "67-g1AAAACTeJzLYWBgYMpgTmHgz8tPSTV0MDQy1zMAQsMckEQiQ1L9____szKYE2VygQLsBsZm5pZJJpjKcRqRxwIkGRqA1H-oSepgk4yMkhITjS0wdWUBAEVKJE8"
-            },
-            "last_updated": "2017-04-29T05:01:37Z",
-            "node": "node2@127.0.0.1",
-            "source_proxy": null,
-            "target_proxy": null,
-            "source": "http://myserver.com/foo",
-            "start_time": "2017-04-29T05:01:37Z",
-            "state": "running",
-            "target": "http://adm:*****@localhost:15984/cdyno-0000002/"
-        }
-
-.. _api/server/name:
-
-======================
-``/_node/{node-name}``
-======================
-
-.. http:get:: /_node/{node-name}
-    :synopsis: Returns node name
-
-    The ``/_node/{node-name}`` endpoint can be used to confirm the Erlang
-    node name of the server that processes the request. This is most useful
-    when accessing ``/_node/_local`` to retrieve this information. Repeatedly
-    retrieving this information for a CouchDB endpoint can be useful to determine
-    if a CouchDB cluster is correctly proxied through a reverse load balancer.
-
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :code 200: Request completed successfully
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /_node/_local HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-      HTTP/1.1 200 OK
-      Cache-Control: must-revalidate
-      Content-Length: 27
-      Content-Type: application/json
-      Date: Tue, 28 Jan 2020 19:25:51 GMT
-      Server: CouchDB (Erlang OTP)
-      X-Couch-Request-ID: 5b8db6c677
-      X-CouchDB-Body-Time: 0
-
-      {"name":"node1@127.0.0.1"}
-
-.. _api/server/stats:
-
-=============================
-``/_node/{node-name}/_stats``
-=============================
-
-.. http:get:: /_node/{node-name}/_stats
-    :synopsis: Returns server statistics
-
-    The ``_stats`` resource returns a JSON object containing the statistics
-    for the running server. The object is structured with top-level sections
-    collating the statistics for a range of entries, with each individual
-    statistic being easily identified, and the content of each statistic is
-    self-describing.
-
-    Statistics are sampled internally on a :ref:`configurable interval
-    <config/stats>`. When monitoring the ``_stats`` endpoint, poll at least
-    twice per sampling interval to observe accurate results. For example, if
-    the :ref:`interval <config/stats>` is 10 seconds, poll ``_stats`` at
-    least every 5 seconds.
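
The polling rule above amounts to halving the configured interval; as a trivial worked example (Python):

```python
def max_poll_period(stats_interval_seconds):
    # Longest safe polling period for _stats: half the configured
    # sampling interval, so no sampling window is missed.
    return stats_interval_seconds / 2

max_poll_period(10)  # → 5.0, matching the example in the text
```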
-
-    The literal string ``_local`` serves as an alias for the local node name, so
-    for all stats URLs, ``{node-name}`` may be replaced with ``_local``, to
-    interact with the local node's statistics.
-
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :code 200: Request completed successfully
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /_node/_local/_stats/couchdb/request_time HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Length: 187
-        Content-Type: application/json
-        Date: Sat, 10 Aug 2013 11:41:11 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        {
-          "value": {
-            "min": 0,
-            "max": 0,
-            "arithmetic_mean": 0,
-            "geometric_mean": 0,
-            "harmonic_mean": 0,
-            "median": 0,
-            "variance": 0,
-            "standard_deviation": 0,
-            "skewness": 0,
-            "kurtosis": 0,
-            "percentile": [
-              [
-                50,
-                0
-              ],
-              [
-                75,
-                0
-              ],
-              [
-                90,
-                0
-              ],
-              [
-                95,
-                0
-              ],
-              [
-                99,
-                0
-              ],
-              [
-                999,
-                0
-              ]
-            ],
-            "histogram": [
-              [
-                0,
-                0
-              ]
-            ],
-            "n": 0
-          },
-          "type": "histogram",
-          "desc": "length of a request inside CouchDB without MochiWeb"
-        }
-
-The fields provide the current value, the minimum and maximum, and a
-collection of statistical means and quantiles. The unit of each quantity is
-not stated explicitly, but the descriptions below provide sufficient detail
-to determine it.
-
-Statistics are reported by group and are divided into the following top-level
-sections:
-
-- ``couch_log``: Logging subsystem
-- ``couch_replicator``: Replication scheduler and subsystem
-- ``couchdb``: Primary CouchDB database operations
-- ``fabric``: Cluster-related operations
-- ``global_changes``: Global changes feed
-- ``mem3``: Node membership-related statistics
-- ``pread``: CouchDB file-related exceptions
-- ``rexi``: Cluster internal RPC-related statistics
-
-The type of the statistic is included in the ``type`` field, and is one of
-the following:
-
-- ``counter``: Monotonically increasing counter, resets on restart
-- ``histogram``: Binned set of values with meaningful subdivisions.
-  Scoped to the current :ref:`collection interval <config/stats>`.
-- ``gauge``: Single numerical value that can go up and down
-
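As an illustrative (unofficial) sketch of how a monitoring script might consume this structure, the following walks a stats object and reports each statistic by its ``type`` field. The ``walk_stats`` helper and the ``sample`` data are hypothetical, trimmed to mirror the shape shown above:

```python
def walk_stats(obj, path=()):
    """Yield (dotted_path, entry) for every leaf statistic object."""
    if isinstance(obj, dict):
        if "type" in obj and "value" in obj:
            # A leaf statistic: has "value", "type" (and usually "desc").
            yield ".".join(path), obj
        else:
            # A section: recurse into nested groups.
            for key, child in obj.items():
                yield from walk_stats(child, path + (key,))

# Hypothetical sample mirroring the response shape documented above.
sample = {
    "couchdb": {
        "request_time": {"value": {"n": 0}, "type": "histogram",
                         "desc": "length of a request inside CouchDB"},
        "open_databases": {"value": 12, "type": "gauge",
                           "desc": "number of open databases"},
    }
}

for name, stat in walk_stats(sample):
    print(f"{name}: type={stat['type']}")
```

Such a walker lets a collector treat counters, gauges, and histograms differently without hard-coding every statistic name.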
-You can also access individual statistics by quoting the statistics sections
-and statistic ID as part of the URL path. For example, to get the
-``request_time`` statistics within the ``couchdb`` section for the target
-node, you can use:
-
-.. code-block:: http
-
-    GET /_node/_local/_stats/couchdb/request_time HTTP/1.1
-
-This returns an entire statistics object, as with the full request, but
-containing only the requested individual statistic.
-
-.. _api/server/prometheus:
-
-==================================
-``/_node/{node-name}/_prometheus``
-==================================
-
-.. http:get:: /_node/{node-name}/_prometheus
-    :synopsis: Returns server statistics in prometheus format
-
-    The ``_prometheus`` resource returns a :mimetype:`text/plain` response
-    that consolidates the :ref:`api/server/stats` and :ref:`api/server/system`
-    endpoints. The output follows the `Prometheus
-    <https://prometheus.io/docs/introduction/overview/>`_ text exposition
-    format, version 2.0.
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /_node/_local/_prometheus HTTP/1.1
-        Accept: text/plain
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Length: 187
-        Content-Type: text/plain; version=2.0
-        Date: Sat, 10 May 2020 11:41:11 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        # TYPE couchdb_couch_log_requests_total counter
-        couchdb_couch_log_requests_total{level="alert"} 0
-        couchdb_couch_log_requests_total{level="critical"} 0
-        couchdb_couch_log_requests_total{level="debug"} 0
-        couchdb_couch_log_requests_total{level="emergency"} 0
-        couchdb_couch_log_requests_total{level="error"} 0
-        couchdb_couch_log_requests_total{level="info"} 8
-        couchdb_couch_log_requests_total{level="notice"} 51
-        couchdb_couch_log_requests_total{level="warning"} 0
-        # TYPE couchdb_couch_replicator_changes_manager_deaths_total counter
-        couchdb_couch_replicator_changes_manager_deaths_total 0
-        # TYPE couchdb_couch_replicator_changes_queue_deaths_total counter
-        couchdb_couch_replicator_changes_queue_deaths_total 0
-        # TYPE couchdb_couch_replicator_changes_read_failures_total counter
-        couchdb_couch_replicator_changes_read_failures_total 0
-        # TYPE couchdb_couch_replicator_changes_reader_deaths_total counter
-        couchdb_couch_replicator_changes_reader_deaths_total 0
-        # TYPE couchdb_couch_replicator_checkpoints_failure_total counter
-        couchdb_couch_replicator_checkpoints_failure_total 0
-        # TYPE couchdb_couch_replicator_checkpoints_total counter
-        couchdb_couch_replicator_checkpoints_total 0
-        # TYPE couchdb_couch_replicator_cluster_is_stable gauge
-        couchdb_couch_replicator_cluster_is_stable 1
-        # TYPE couchdb_couch_replicator_connection_acquires_total counter
-        couchdb_couch_replicator_connection_acquires_total 0
-        # TYPE couchdb_couch_replicator_connection_closes_total counter
-        couchdb_couch_replicator_connection_closes_total 0
-        # TYPE couchdb_couch_replicator_connection_creates_total counter
-        couchdb_couch_replicator_connection_creates_total 0
-        # TYPE couchdb_couch_replicator_connection_owner_crashes_total counter
-        couchdb_couch_replicator_connection_owner_crashes_total 0
-        # TYPE couchdb_couch_replicator_connection_releases_total counter
-        couchdb_couch_replicator_connection_releases_total 0
-        # TYPE couchdb_couch_replicator_connection_worker_crashes_total counter
-        couchdb_couch_replicator_connection_worker_crashes_total 0
-        # TYPE couchdb_couch_replicator_db_scans_total counter
-        couchdb_couch_replicator_db_scans_total 1
-        # TYPE couchdb_couch_replicator_docs_completed_state_updates_total counter
-        couchdb_couch_replicator_docs_completed_state_updates_total 0
-        # TYPE couchdb_couch_replicator_docs_db_changes_total counter
-        couchdb_couch_replicator_docs_db_changes_total 0
-        # TYPE couchdb_couch_replicator_docs_dbs_created_total counter
-        couchdb_couch_replicator_docs_dbs_created_total 0
-        # TYPE couchdb_couch_replicator_docs_dbs_deleted_total counter
-        couchdb_couch_replicator_docs_dbs_deleted_total 0
-        # TYPE couchdb_couch_replicator_docs_dbs_found_total counter
-        couchdb_couch_replicator_docs_dbs_found_total 2
-        # TYPE couchdb_couch_replicator_docs_failed_state_updates_total counter
-        couchdb_couch_replicator_docs_failed_state_updates_total 0
-        # TYPE couchdb_couch_replicator_failed_starts_total counter
-        couchdb_couch_replicator_failed_starts_total 0
-        # TYPE couchdb_couch_replicator_jobs_adds_total counter
-        couchdb_couch_replicator_jobs_adds_total 0
-        # TYPE couchdb_couch_replicator_jobs_crashed gauge
-        couchdb_couch_replicator_jobs_crashed 0
-        # TYPE couchdb_couch_replicator_jobs_crashes_total counter
-        couchdb_couch_replicator_jobs_crashes_total 0
-        # TYPE couchdb_couch_replicator_jobs_duplicate_adds_total counter
-        couchdb_couch_replicator_jobs_duplicate_adds_total 0
-        # TYPE couchdb_couch_replicator_jobs_pending gauge
-        couchdb_couch_replicator_jobs_pending 0
-        # TYPE couchdb_couch_replicator_jobs_removes_total counter
-        couchdb_couch_replicator_jobs_removes_total 0
-        # TYPE couchdb_couch_replicator_jobs_running gauge
-        couchdb_couch_replicator_jobs_running 0
-        # TYPE couchdb_couch_replicator_jobs_starts_total counter
-        couchdb_couch_replicator_jobs_starts_total 0
-        # TYPE couchdb_couch_replicator_jobs_stops_total counter
-        couchdb_couch_replicator_jobs_stops_total 0
-        # TYPE couchdb_couch_replicator_jobs_total gauge
-        couchdb_couch_replicator_jobs_total 0
-        # TYPE couchdb_couch_replicator_requests_total counter
-        couchdb_couch_replicator_requests_total 0
-        # TYPE couchdb_couch_replicator_responses_failure_total counter
-        couchdb_couch_replicator_responses_failure_total 0
-        # TYPE couchdb_couch_replicator_responses_total counter
-        couchdb_couch_replicator_responses_total 0
-        # TYPE couchdb_couch_replicator_stream_responses_failure_total counter
-        couchdb_couch_replicator_stream_responses_failure_total 0
-        # TYPE couchdb_couch_replicator_stream_responses_total counter
-        couchdb_couch_replicator_stream_responses_total 0
-        # TYPE couchdb_couch_replicator_worker_deaths_total counter
-        couchdb_couch_replicator_worker_deaths_total 0
-        # TYPE couchdb_couch_replicator_workers_started_total counter
-        couchdb_couch_replicator_workers_started_total 0
-        # TYPE couchdb_auth_cache_requests_total counter
-        couchdb_auth_cache_requests_total 0
-        # TYPE couchdb_auth_cache_misses_total counter
-        couchdb_auth_cache_misses_total 0
-        # TYPE couchdb_collect_results_time_seconds summary
-        couchdb_collect_results_time_seconds{quantile="0.5"} 0.0
-        couchdb_collect_results_time_seconds{quantile="0.75"} 0.0
-        couchdb_collect_results_time_seconds{quantile="0.9"} 0.0
-        couchdb_collect_results_time_seconds{quantile="0.95"} 0.0
-        couchdb_collect_results_time_seconds{quantile="0.99"} 0.0
-        couchdb_collect_results_time_seconds{quantile="0.999"} 0.0
-        couchdb_collect_results_time_seconds_sum 0.0
-        couchdb_collect_results_time_seconds_count 0
-        # TYPE couchdb_couch_server_lru_skip_total counter
-        couchdb_couch_server_lru_skip_total 0
-        # TYPE couchdb_database_purges_total counter
-        couchdb_database_purges_total 0
-        # TYPE couchdb_database_reads_total counter
-        couchdb_database_reads_total 0
-        # TYPE couchdb_database_writes_total counter
-        couchdb_database_writes_total 0
-        # TYPE couchdb_db_open_time_seconds summary
-        couchdb_db_open_time_seconds{quantile="0.5"} 0.0
-        couchdb_db_open_time_seconds{quantile="0.75"} 0.0
-        couchdb_db_open_time_seconds{quantile="0.9"} 0.0
-        couchdb_db_open_time_seconds{quantile="0.95"} 0.0
-        couchdb_db_open_time_seconds{quantile="0.99"} 0.0
-        couchdb_db_open_time_seconds{quantile="0.999"} 0.0
-        couchdb_db_open_time_seconds_sum 0.0
-        couchdb_db_open_time_seconds_count 0
-        # TYPE couchdb_dbinfo_seconds summary
-        couchdb_dbinfo_seconds{quantile="0.5"} 0.0
-        couchdb_dbinfo_seconds{quantile="0.75"} 0.0
-        couchdb_dbinfo_seconds{quantile="0.9"} 0.0
-        couchdb_dbinfo_seconds{quantile="0.95"} 0.0
-        couchdb_dbinfo_seconds{quantile="0.99"} 0.0
-        couchdb_dbinfo_seconds{quantile="0.999"} 0.0
-        couchdb_dbinfo_seconds_sum 0.0
-        couchdb_dbinfo_seconds_count 0
-        # TYPE couchdb_document_inserts_total counter
-        couchdb_document_inserts_total 0
-        # TYPE couchdb_document_purges_failure_total counter
-        couchdb_document_purges_failure_total 0
-        # TYPE couchdb_document_purges_success_total counter
-        couchdb_document_purges_success_total 0
-        # TYPE couchdb_document_purges_total_total counter
-        couchdb_document_purges_total_total 0
-        # TYPE couchdb_document_writes_total counter
-        couchdb_document_writes_total 0
-        # TYPE couchdb_httpd_aborted_requests_total counter
-        couchdb_httpd_aborted_requests_total 0
-        # TYPE couchdb_httpd_all_docs_timeouts_total counter
-        couchdb_httpd_all_docs_timeouts_total 0
-        # TYPE couchdb_httpd_bulk_docs_seconds summary
-        couchdb_httpd_bulk_docs_seconds{quantile="0.5"} 0.0
-        couchdb_httpd_bulk_docs_seconds{quantile="0.75"} 0.0
-        couchdb_httpd_bulk_docs_seconds{quantile="0.9"} 0.0
-        couchdb_httpd_bulk_docs_seconds{quantile="0.95"} 0.0
-        couchdb_httpd_bulk_docs_seconds{quantile="0.99"} 0.0
-        couchdb_httpd_bulk_docs_seconds{quantile="0.999"} 0.0
-        couchdb_httpd_bulk_docs_seconds_sum 0.0
-        couchdb_httpd_bulk_docs_seconds_count 0
-        ...remaining couchdb metrics from _stats and _system
-
-If the additional port config option is enabled, a client can call this API on
-that port without authentication. This option is ``false`` (off) by default.
-When the option is ``true`` (on), the default ports for a 3-node cluster are
-``17986``, ``27986``, and ``37986``.
-See :ref:`Configuration of Prometheus Endpoint <config/prometheus>` for details.
-
-.. code-block:: http
-
-    GET /_node/_local/_prometheus HTTP/1.1
-    Accept: text/plain
-    Host: localhost:17986
-
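For illustration only, a minimal parser for this text exposition format might look like the following. It is a deliberate simplification (label values are assumed to contain no commas, spaces, or escaped quotes); in practice you would point a Prometheus scraper at the endpoint instead:

```python
def parse_prometheus(text):
    """Parse Prometheus text-format lines into (name, labels, value) tuples.

    Simplified sketch: assumes label values contain no commas or spaces.
    """
    metrics = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and "# HELP" / "# TYPE" comments
        name_part, _, value = line.rpartition(" ")
        labels = {}
        if "{" in name_part:
            name, _, label_str = name_part.partition("{")
            for pair in label_str.rstrip("}").split(","):
                k, _, v = pair.partition("=")
                labels[k] = v.strip('"')
        else:
            name = name_part
        metrics.append((name, labels, float(value)))
    return metrics

# Sample lines taken from the response shown above.
sample = """\
# TYPE couchdb_couch_log_requests_total counter
couchdb_couch_log_requests_total{level="info"} 8
couchdb_couch_log_requests_total{level="notice"} 51
# TYPE couchdb_couch_replicator_jobs_running gauge
couchdb_couch_replicator_jobs_running 0
"""
print(parse_prometheus(sample))
```
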
-.. _api/server/system:
-
-==============================
-``/_node/{node-name}/_system``
-==============================
-
-.. http:get:: /_node/{node-name}/_system
-    :synopsis: Returns system-level server statistics
-
-    The ``_system`` resource returns a JSON object containing various
-    system-level statistics for the running server. The object is organized
-    into top-level sections, each collating a group of related statistics.
-    Every individual statistic is easily identified, and its content is
-    self-describing.
-
-    The literal string ``_local`` serves as an alias for the local node name;
-    in all stats URLs, ``{node-name}`` may be replaced with ``_local`` to
-    interact with the local node's statistics.
-
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :code 200: Request completed successfully
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /_node/_local/_system HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Length: 187
-        Content-Type: application/json
-        Date: Sat, 10 Aug 2013 11:41:11 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        {
-          "uptime": 259,
-          "memory": {
-          ...
-        }
-
-    These statistics are generally intended for CouchDB developers only.
-
-.. _api/server/restart:
-
-===============================
-``/_node/{node-name}/_restart``
-===============================
-
-.. http:post:: /_node/{node-name}/_restart
-    :synopsis: Restarts CouchDB application on a given node
-
-    This API is to facilitate integration testing only;
-    it is not meant to be used in production.
-
-    :code 200: Request completed successfully
-
-.. _api/server/search_analyze:
-
-====================
-``/_search_analyze``
-====================
-
-.. warning::
-    Search endpoints require a running search plugin connected to each cluster
-    node. See :ref:`Search Plugin Installation <install/search>` for details.
-
-.. versionadded:: 3.0
-
-.. http:post:: /_search_analyze
-    :synopsis: Tests the results of analyzer tokenization
-
-    Tests the results of Lucene analyzer tokenization on sample text.
-
-    :param analyzer: Name of the analyzer to test
-    :param text: Sample text to analyze
-    :code 200: Request completed successfully
-    :code 400: Request body is wrong (malformed or missing one of the mandatory fields)
-    :code 500: A server error (or other kind of error) occurred
-
-**Request**:
-
-.. code-block:: http
-
-    POST /_search_analyze HTTP/1.1
-    Host: localhost:5984
-    Content-Type: application/json
-
-    {"analyzer":"english", "text":"running"}
-
-**Response**:
-
-.. code-block:: javascript
-
-    {
-        "tokens": [
-            "run"
-        ]
-    }
-
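A hedged sketch of driving this endpoint from Python's standard library. The helper names (``analyze_payload``, ``parse_tokens``) are illustrative, and the request assumes a server at ``localhost:5984`` with the search plugin installed; the request object is constructed here but not sent:

```python
import json
import urllib.request

def analyze_payload(analyzer, text):
    """Build the JSON body expected by POST /_search_analyze."""
    return json.dumps({"analyzer": analyzer, "text": text}).encode()

def parse_tokens(body):
    """Extract the token list from a _search_analyze response body."""
    return json.loads(body)["tokens"]

# Construct (but do not send) the request; sending it against a live
# server with the search plugin would return a body like {"tokens": ["run"]}.
req = urllib.request.Request(
    "http://localhost:5984/_search_analyze",
    data=analyze_payload("english", "running"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
print(req.get_full_url(), req.get_method())
```
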
-.. _api/server/utils:
-
-===========
-``/_utils``
-===========
-
-.. http:get:: /_utils
-    :synopsis: Redirects to /_utils/
-
-    Accesses the built-in Fauxton administration interface for CouchDB.
-
-    :>header Location: New URI location
-    :code 301: Redirects to :get:`/_utils/`
-
-.. http:get:: /_utils/
-    :synopsis: CouchDB administration interface (Fauxton)
-
-    :>header Content-Type: :mimetype:`text/html`
-    :>header Last-Modified: Static files modification timestamp
-    :code 200: Request completed successfully
-
-.. _api/server/up:
-
-========
-``/_up``
-========
-
-.. versionadded:: 2.0
-
-.. http:get:: /_up
-    :synopsis: Health check endpoint
-
-    Confirms that the server is up, running, and ready to respond to requests.
-    If :config:option:`maintenance_mode <couchdb/maintenance_mode>` is
-    ``true`` or ``nolb``, the endpoint will return a 404 response.
-
-    :>header Content-Type: :mimetype:`application/json`
-    :code 200: Request completed successfully
-    :code 404: The server is unavailable for requests at this time.
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Length: 16
-        Content-Type: application/json
-        Date: Sat, 17 Mar 2018 04:46:26 GMT
-        Server: CouchDB/2.2.0-f999071ec (Erlang OTP/19)
-        X-Couch-Request-ID: c57a3b2787
-        X-CouchDB-Body-Time: 0
-
-        {"status":"ok"}
-
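As a sketch of how a load balancer's health check might interpret this endpoint, the hypothetical helper below treats a node as routable only on a ``200`` response with ``{"status": "ok"}``; a ``404`` (maintenance mode ``true`` or ``nolb``) takes the node out of rotation:

```python
import json

def node_available(status_code, body=b""):
    """Return True only for a 200 response carrying {"status": "ok"}."""
    if status_code != 200:
        return False  # 404 signals maintenance_mode "true" or "nolb"
    try:
        return json.loads(body).get("status") == "ok"
    except (ValueError, AttributeError):
        return False  # malformed or non-object body

print(node_available(200, b'{"status":"ok"}'), node_available(404))
```
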
-.. _api/server/uuids:
-
-===========
-``/_uuids``
-===========
-
-.. versionchanged:: 2.0.0
-
-.. http:get:: /_uuids
-    :synopsis: Generates a list of UUIDs from the server
-
-    Requests one or more Universally Unique Identifiers (UUIDs) from the
-    CouchDB instance. The response is a JSON object providing a list of UUIDs.
-
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :query number count: Number of UUIDs to return. Default is ``1``.
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :>header ETag: Response hash
-    :code 200: Request completed successfully
-    :code 400: Requested more UUIDs than the server is configured to
-               :config:option:`allow <uuids/max_count>`
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /_uuids?count=10 HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Content-Length: 362
-        Content-Type: application/json
-        Date: Sat, 10 Aug 2013 11:46:25 GMT
-        ETag: "DGRWWQFLUDWN5MRKSLKQ425XV"
-        Expires: Fri, 01 Jan 1990 00:00:00 GMT
-        Pragma: no-cache
-        Server: CouchDB (Erlang/OTP)
-
-        {
-            "uuids": [
-                "75480ca477454894678e22eec6002413",
-                "75480ca477454894678e22eec600250b",
-                "75480ca477454894678e22eec6002c41",
-                "75480ca477454894678e22eec6003b90",
-                "75480ca477454894678e22eec6003fca",
-                "75480ca477454894678e22eec6004bef",
-                "75480ca477454894678e22eec600528f",
-                "75480ca477454894678e22eec6005e0b",
-                "75480ca477454894678e22eec6006158",
-                "75480ca477454894678e22eec6006161"
-            ]
-        }
-
-The UUID type is determined by the :config:option:`UUID algorithm
-<uuids/algorithm>` setting in the CouchDB configuration.
-
-The UUID type may be changed at any time through the
-:ref:`Configuration API <api/config/section/key>`. For example, the UUID type
-could be changed to ``random`` by sending this HTTP request:
-
-.. code-block:: http
-
-    PUT http://couchdb:5984/_node/nonode@nohost/_config/uuids/algorithm HTTP/1.1
-    Content-Type: application/json
-    Accept: */*
-
-    "random"
-
-You can verify the change by obtaining a list of UUIDs:
-
-.. code-block:: javascript
-
-    {
-        "uuids" : [
-            "031aad7b469956cf2826fcb2a9260492",
-            "6ec875e15e6b385120938df18ee8e496",
-            "cff9e881516483911aa2f0e98949092d",
-            "b89d37509d39dd712546f9510d4a9271",
-            "2e0dbf7f6c4ad716f21938a016e4e59f"
-        ]
-    }
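As an offline illustration, the hypothetical helper below checks two properties of a UUID batch: that all values are unique, and the length of the shared prefix that the default ``sequential`` algorithm produces (a ``random`` batch would share little or no prefix). The sample values are taken from the ``sequential`` response shown earlier:

```python
def check_uuid_batch(uuids):
    """Return (all_unique, common_prefix_length) for a batch of UUIDs."""
    unique = len(set(uuids)) == len(uuids)
    prefix = 0
    for chars in zip(*uuids):
        if len(set(chars)) > 1:
            break  # first position where the UUIDs diverge
        prefix += 1
    return unique, prefix

# Sequential UUIDs from the earlier /_uuids response.
sequential = [
    "75480ca477454894678e22eec6002413",
    "75480ca477454894678e22eec600250b",
    "75480ca477454894678e22eec6002c41",
]
unique, prefix = check_uuid_batch(sequential)
print(unique, prefix)
```
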
-
-.. _api/server/favicon:
-
-================
-``/favicon.ico``
-================
-
-.. http:get:: /favicon.ico
-    :synopsis: Returns the site icon
-
-    Binary content for the `favicon.ico` site icon.
-
-    :>header Content-Type: :mimetype:`image/x-icon`
-    :code 200: Request completed successfully
-    :code 404: The requested content could not be found
-
-.. _api/server/reshard:
-
-=============
-``/_reshard``
-=============
-
-.. versionadded:: 2.4
-
-.. http:get:: /_reshard
-    :synopsis: Retrieve summary information about resharding on the cluster
-
-    Returns a count of completed, failed, running, stopped, and total jobs
-    along with the state of resharding on the cluster.
-
-    :<header Accept: - :mimetype:`application/json`
-    :>header Content-Type: - :mimetype:`application/json`
-
-    :>json string state: ``stopped`` or ``running``
-    :>json string state_reason: ``null`` or string describing additional
-                                information or reason associated with the state
-    :>json number completed: Count of completed resharding jobs
-    :>json number failed: Count of failed resharding jobs
-    :>json number running: Count of running resharding jobs
-    :>json number stopped: Count of stopped resharding jobs
-    :>json number total: Total count of resharding jobs
-
-    :code 200: Request completed successfully
-    :code 401: CouchDB Server Administrator privileges required
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /_reshard HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Content-Type: application/json
-
-        {
-            "completed": 21,
-            "failed": 0,
-            "running": 3,
-            "state": "running",
-            "state_reason": null,
-            "stopped": 0,
-            "total": 24
-        }
-
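As a sketch, assuming the response shape above, the per-state job counts should sum to ``total``; a hypothetical offline consistency check:

```python
def reshard_counts_consistent(summary):
    """Check that the per-state job counts add up to the reported total."""
    states = ("completed", "failed", "running", "stopped")
    return sum(summary[s] for s in states) == summary["total"]

# The summary from the response above: 21 + 0 + 3 + 0 == 24.
summary = {
    "completed": 21,
    "failed": 0,
    "running": 3,
    "state": "running",
    "state_reason": None,
    "stopped": 0,
    "total": 24,
}
print(reshard_counts_consistent(summary))
```
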
-.. http:get:: /_reshard/state
-    :synopsis: Retrieve the state of resharding on the cluster
-
-    Returns the resharding state and optional information about the state.
-
-    :<header Accept: - :mimetype:`application/json`
-    :>header Content-Type: - :mimetype:`application/json`
-
-    :>json string state: ``stopped`` or ``running``
-    :>json string state_reason: Additional information or reason associated
-                                with the state
-
-    :code 200: Request completed successfully
-    :code 401: CouchDB Server Administrator privileges required
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /_reshard/state HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Content-Type: application/json
-
-        {
-            "reason": null,
-            "state": "running"
-        }
-
-.. http:put:: /_reshard/state
-    :synopsis: Change resharding state on the cluster
-
-    Change the resharding state on the cluster. The states are
-    ``stopped`` or ``running``. This starts and stops global resharding on all
-    the nodes of the cluster. If there are any running jobs, they
-    will be stopped when the state changes to ``stopped``. When the state
-    changes back to ``running``, those jobs will continue running.
-
-    :<header Accept: - :mimetype:`application/json`
-    :>header Content-Type: - :mimetype:`application/json`
-
-    :<json string state: ``stopped`` or ``running``
-    :<json string state_reason: Optional string describing additional
-                                information or reason associated with the state
-
-    :>json boolean ok: ``true``
-
-    :code 200: Request completed successfully
-    :code 400: Invalid request. Could be a bad or missing state name.
-    :code 401: CouchDB Server Administrator privileges required
-
-    **Request**:
-
-    .. code-block:: http
-
-        PUT /_reshard/state HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-        {
-            "state": "stopped",
-            "reason": "Rebalancing in progress"
-        }
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Content-Type: application/json
-
-        {
-            "ok": true
-        }
-
-.. http:get:: /_reshard/jobs
-    :synopsis: Retrieve information about all the resharding jobs on the cluster
-
-    .. note:: The shape of the response and the ``total_rows`` and ``offset``
-              fields in particular are meant to be consistent with the
-              ``_scheduler/jobs`` endpoint.
-
-    :<header Accept: - :mimetype:`application/json`
-    :>header Content-Type: - :mimetype:`application/json`
-
-    :>json list jobs: Array of JSON objects, one for each resharding job. For
-                      the fields of each job see the ``/_reshard/jobs/{jobid}``
-                      endpoint.
-    :>json number offset: Offset in the list of jobs object. Currently
-                          hard-coded at ``0``.
-    :>json number total_rows: Total number of resharding jobs on the cluster.
-
-    :code 200: Request completed successfully
-    :code 401: CouchDB Server Administrator privileges required
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /_reshard/jobs HTTP/1.1
-        Accept: application/json
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Content-Type: application/json
-
-        {
-            "jobs": [
-                {
-                    "history": [
-                        {
-                            "detail": null,
-                            "timestamp": "2019-03-28T15:28:02Z",
-                            "type": "new"
-                        },
-                        {
-                            "detail": "initial_copy",
-                            "timestamp": "2019-03-28T15:28:02Z",
-                            "type": "running"
-                        },
-                        ...
-                    ],
-                    "id": "001-171d1211418996ff47bd610b1d1257fc4ca2628868def4a05e63e8f8fe50694a",
-                    "job_state": "completed",
-                    "node": "node1@127.0.0.1",
-                    "source": "shards/00000000-1fffffff/d1.1553786862",
-                    "split_state": "completed",
-                    "start_time": "2019-03-28T15:28:02Z",
-                    "state_info": {},
-                    "target": [
-                        "shards/00000000-0fffffff/d1.1553786862",
-                        "shards/10000000-1fffffff/d1.1553786862"
-                    ],
-                    "type": "split",
-                    "update_time": "2019-03-28T15:28:08Z"
-                },
-                ...
-            ],
-            "offset": 0,
-            "total_rows": 24
-        }
-
-.. http:get:: /_reshard/jobs/{jobid}
-    :synopsis: Retrieve information about a particular resharding job
-
-    Get information about the resharding job identified by ``jobid``.
-
-    :<header Accept: - :mimetype:`application/json`
-    :>header Content-Type: - :mimetype:`application/json`
-
-    :>json string id: Job ID.
-    :>json string type: Currently only ``split`` is implemented.
-    :>json string job_state: The running state of the job. Could be one of
-                             ``new``, ``running``, ``stopped``, ``completed``
-                             or ``failed``.
-    :>json string split_state: State detail specific to shard splitting. It
-                               indicates how far shard splitting has
-                               progressed, and can be one of ``new``,
-                               ``initial_copy``, ``topoff1``,
-                               ``build_indices``, ``topoff2``,
-                               ``copy_local_docs``, ``update_shardmap``,
-                               ``wait_source_close``, ``topoff3``,
-                               ``source_delete`` or ``completed``.
-    :>json object state_info: Optional additional info associated with the
-                              current state.
-    :>json string source: For ``split`` jobs this will be the source shard.
-    :>json list target: For ``split`` jobs this will be a list of two or more
-                        target shards.
-    :>json list history: List of json objects recording a job's state
-                         transition history.
-
-    :code 200: Request completed successfully
-    :code 401: CouchDB Server Administrator privileges required
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /_reshard/jobs/001-171d1211418996ff47bd610b1d1257fc4ca2628868def4a05e63e8f8fe50694a HTTP/1.1
-        Accept: application/json
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Content-Type: application/json
-
-        {
-            "id": "001-171d1211418996ff47bd610b1d1257fc4ca2628868def4a05e63e8f8fe50694a",
-            "job_state": "completed",
-            "node": "node1@127.0.0.1",
-            "source": "shards/00000000-1fffffff/d1.1553786862",
-            "split_state": "completed",
-            "start_time": "2019-03-28T15:28:02Z",
-            "state_info": {},
-            "target": [
-                "shards/00000000-0fffffff/d1.1553786862",
-                "shards/10000000-1fffffff/d1.1553786862"
-            ],
-            "type": "split",
-            "update_time": "2019-03-28T15:28:08Z",
-            "history": [
-                {
-                    "detail": null,
-                    "timestamp": "2019-03-28T15:28:02Z",
-                    "type": "new"
-                },
-                {
-                    "detail": "initial_copy",
-                    "timestamp": "2019-03-28T15:28:02Z",
-                    "type": "running"
-                },
-                ...
-            ]
-        }
-
-.. http:post:: /_reshard/jobs
-    :synopsis: Create one or more resharding jobs
-
-    Depending on what fields are specified in the request, one or more
-    resharding jobs will be created. The response is a JSON array of results.
-    Each result object represents a single resharding job for a particular
-    node and range. Some requests may succeed while others fail. Successful
-    results will have the ``"ok": true`` key and value, and failed jobs will
-    have the ``"error": "{error_message}"`` key and value.
-
-    :<header Accept: - :mimetype:`application/json`
-    :>header Content-Type: - :mimetype:`application/json`
-
-    :<json string type: Type of job. Currently only ``"split"`` is accepted.
-
-    :<json string db: Database to split. This is mutually exclusive with the
-                      ``"shard"`` field.
-
-    :<json string node: Split shards on a particular node. This is an optional
-                        parameter. The value should be one of the nodes
-                        returned from the ``_membership`` endpoint.
-
-    :<json string range: Split copies of shards in the given range. The range
-                         format is ``hhhhhhhh-hhhhhhhh``, where ``h`` is a
-                         hexadecimal digit. This format is used since this is
-                         how the ranges are represented in the file system.
-                         This parameter is optional and is mutually
-                         exclusive with the ``"shard"`` field.
-
-    :<json string shard: Split a particular shard. The shard should be
-                         specified as ``"shards/{range}/{db}.{suffix}"``,
-                         where ``range`` has the ``hhhhhhhh-hhhhhhhh`` format,
-                         ``db`` is the database name, and ``suffix`` is the
-                         shard (timestamp) creation suffix.
-
-    :>json boolean ok: ``true`` if job created successfully.
-
-    :>json string error: Error message if a job could not be created.
-
-    :>json string node: Cluster node where the job was created and is running.
-
-    :code 201: One or more jobs were successfully created
-    :code 400: Invalid request. Parameter validation might have failed.
-    :code 401: CouchDB Server Administrator privileges required
-    :code 404: Db, node, range or shard was not found
-
-    **Request**:
-
-    .. code-block:: http
-
-        POST /_reshard/jobs HTTP/1.1
-        Accept: application/json
-        Content-Type: application/json
-
-        {
-            "db": "db3",
-            "range": "80000000-ffffffff",
-            "type": "split"
-        }
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 201 Created
-        Content-Type: application/json
-
-        [
-            {
-                "id": "001-30d7848a6feeb826d5e3ea5bb7773d672af226fd34fd84a8fb1ca736285df557",
-                "node": "node1@127.0.0.1",
-                "ok": true,
-                "shard": "shards/80000000-ffffffff/db3.1554148353"
-            },
-            {
-                "id": "001-c2d734360b4cb3ff8b3feaccb2d787bf81ce2e773489eddd985ddd01d9de8e01",
-                "node": "node2@127.0.0.1",
-                "ok": true,
-                "shard": "shards/80000000-ffffffff/db3.1554148353"
-            }
-        ]
-
-.. http:delete:: /_reshard/jobs/{jobid}
-    :synopsis: Remove a resharding job
-
-    If the job is running, stop the job and then remove it.
-
-    :>json boolean ok: ``true`` if the job was removed successfully.
-
-    :code 200: The job was removed successfully
-    :code 401: CouchDB Server Administrator privileges required
-    :code 404: The job was not found
-
-    **Request**:
-
-    .. code-block:: http
-
-        DELETE /_reshard/jobs/001-171d1211418996ff47bd610b1d1257fc4ca2628868def4a05e63e8f8fe50694a HTTP/1.1
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Content-Type: application/json
-
-        {
-            "ok": true
-        }
-
-.. http:get:: /_reshard/jobs/{jobid}/state
-    :synopsis: Retrieve the state of a single resharding job
-
-    Returns the running state of a resharding job identified by ``jobid``.
-
-    :<header Accept: - :mimetype:`application/json`
-    :>header Content-Type: - :mimetype:`application/json`
-
-    :>json string state: One of ``new``, ``running``, ``stopped``,
-                         ``completed`` or ``failed``.
-
-    :>json string state_reason: Additional information associated with the
-                                state.
-
-    :code 200: Request completed successfully
-    :code 401: CouchDB Server Administrator privileges required
-    :code 404: The job was not found
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /_reshard/jobs/001-b3da04f969bbd682faaab5a6c373705cbcca23f732c386bb1a608cfbcfe9faff/state HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Content-Type: application/json
-
-        {
-            "reason": null,
-            "state": "running"
-        }
-
-.. http:put:: /_reshard/jobs/{jobid}/state
-    :synopsis: Change the state of a resharding job
-
-    Change the state of a particular resharding job identified by ``jobid``.
-    The state can be changed from ``stopped`` to ``running`` or from
-    ``running`` to ``stopped``. If an individual job is ``stopped`` via this
-    API it will stay ``stopped`` even after the global resharding state is
-    toggled from ``stopped`` to ``running``. If the job is already
-    ``completed`` its state will stay ``completed``.
-
-    :<header Accept: - :mimetype:`application/json`
-    :>header Content-Type: - :mimetype:`application/json`
-
-    :<json string state: ``stopped`` or ``running``
-    :<json string state_reason: Optional string describing additional
-                                information or reason associated with the state
-
-    :>json boolean ok: ``true``
-
-    :code 200: Request completed successfully
-    :code 400: Invalid request. Could be a bad state name, for example.
-    :code 401: CouchDB Server Administrator privileges required
-    :code 404: The job was not found
-
-    **Request**:
-
-    .. code-block:: http
-
-        PUT /_reshard/jobs/001-b3da04f969bbd682faaab5a6c373705cbcca23f732c386bb1a608cfbcfe9faff/state HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-        {
-            "state": "stopped",
-            "reason": "Rebalancing in progress"
-        }
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Content-Type: application/json
-
-        {
-            "ok": true
-        }
diff --git a/src/api/server/configuration.rst b/src/api/server/configuration.rst
deleted file mode 100644
index 9208a75..0000000
--- a/src/api/server/configuration.rst
+++ /dev/null
@@ -1,324 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _api/config:
-
-=============
-Configuration
-=============
-
-The CouchDB Server Configuration API provides an interface to query and update
-the various configuration values within a running CouchDB instance.
-
-Accessing the local node's configuration
-========================================
-
-The literal string ``_local`` serves as an alias for the local node name, so
-for all configuration URLs, ``{node-name}`` may be replaced with ``_local`` to
-interact with the local node's configuration.
-
-``/_node/{node-name}/_config``
-==============================
-
-.. http:get:: /_node/{node-name}/_config
-    :synopsis: Obtains a list of the entire server configuration
-
-    Returns the entire CouchDB server configuration as a JSON structure. The
-    structure is organized by different configuration sections, with
-    individual values.
-
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :code 200: Request completed successfully
-    :code 401: CouchDB Server Administrator privileges required
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /_node/nonode@nohost/_config HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Length: 4148
-        Content-Type: application/json
-        Date: Sat, 10 Aug 2013 12:01:42 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        {
-            "attachments": {
-                "compressible_types": "text/*, application/javascript, application/json,  application/xml",
-                "compression_level": "8"
-            },
-            "couchdb": {
-                "users_db_suffix": "_users",
-                "database_dir": "/var/lib/couchdb",
-                "max_attachment_chunk_size": "4294967296",
-                "max_dbs_open": "100",
-                "os_process_timeout": "5000",
-                "uri_file": "/var/lib/couchdb/couch.uri",
-                "util_driver_dir": "/usr/lib64/couchdb/erlang/lib/couch-1.5.0/priv/lib",
-                "view_index_dir": "/var/lib/couchdb"
-            },
-            "chttpd": {
-                "allow_jsonp": "false",
-                "backlog": "512",
-                "bind_address": "0.0.0.0",
-                "port": "5984",
-                "require_valid_user": "false",
-                "socket_options": "[{sndbuf, 262144}, {nodelay, true}]",
-                "server_options": "[{recbuf, undefined}]",
-                "secure_rewrites": "true"
-            },
-            "httpd": {
-                "authentication_handlers": "{couch_httpd_auth, cookie_authentication_handler}, {couch_httpd_auth, default_authentication_handler}",
-                "bind_address": "192.168.0.2",
-                "max_connections": "2048",
-                "port": "5984"
-            },
-            "log": {
-                "writer": "file",
-                "file": "/var/log/couchdb/couch.log",
-                "include_sasl": "true",
-                "level": "info"
-            },
-            "query_server_config": {
-                "reduce_limit": "true"
-            },
-            "replicator": {
-                "max_http_pipeline_size": "10",
-                "max_http_sessions": "10"
-            },
-            "stats": {
-                "interval": "10"
-            },
-            "uuids": {
-                "algorithm": "utc_random"
-            }
-        }
-
-.. versionchanged:: 2.0.0 The config endpoint moved from ``/_config`` to
-   ``/_node/{node-name}/_config``.
-
-.. _api/config/section:
-
-``/_node/{node-name}/_config/{section}``
-========================================
-
-.. http:get:: /_node/{node-name}/_config/{section}
-    :synopsis: Returns all the configuration values for the specified section
-
-    Gets the configuration structure for a single section.
-
-    :param section: Configuration section name
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :code 200: Request completed successfully
-    :code 401: CouchDB Server Administrator privileges required
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /_node/nonode@nohost/_config/httpd HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Length: 444
-        Content-Type: application/json
-        Date: Sat, 10 Aug 2013 12:10:40 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        {
-            "authentication_handlers": "{couch_httpd_auth, cookie_authentication_handler}, {couch_httpd_auth, default_authentication_handler}",
-            "bind_address": "127.0.0.1",
-            "default_handler": "{couch_httpd_db, handle_request}",
-            "port": "5984"
-        }
-
-.. _api/config/section/key:
-
-``/_node/{node-name}/_config/{section}/{key}``
-==============================================
-
-.. http:get:: /_node/{node-name}/_config/{section}/{key}
-    :synopsis: Returns a specific section/configuration value
-
-    Gets a single configuration value from within a specific configuration
-    section.
-
-    :param section: Configuration section name
-    :param key: Configuration option name
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :code 200: Request completed successfully
-    :code 401: CouchDB Server Administrator privileges required
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /_node/nonode@nohost/_config/log/level HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Length: 8
-        Content-Type: application/json
-        Date: Sat, 10 Aug 2013 12:12:59 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        "debug"
-
-    .. note::
-        The returned value will be the JSON of the value, which may be a string
-        or numeric value, or an array or object. Some client environments may
-        not parse simple strings or numeric values as valid JSON.
-
-.. http:put:: /_node/{node-name}/_config/{section}/{key}
-    :synopsis: Sets the specified configuration value
-
-    Updates a configuration value. The new value should be supplied in the
-    request body in the corresponding JSON format. If you are setting a string
-    value, you must supply a valid JSON string. In response, CouchDB sends the
-    old value for the target section/key combination.
-
-    :param section: Configuration section name
-    :param key: Configuration option name
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :<header Content-Type: :mimetype:`application/json`
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :code 200: Request completed successfully
-    :code 400: Invalid JSON request body
-    :code 401: CouchDB Server Administrator privileges required
-    :code 500: Error setting configuration
-
-    **Request**:
-
-    .. code-block:: http
-
-        PUT /_node/nonode@nohost/_config/log/level HTTP/1.1
-        Accept: application/json
-        Content-Length: 7
-        Content-Type: application/json
-        Host: localhost:5984
-
-        "info"
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Length: 8
-        Content-Type: application/json
-        Date: Sat, 10 Aug 2013 12:12:59 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        "debug"
-
-.. http:delete:: /_node/{node-name}/_config/{section}/{key}
-    :synopsis: Removes the current setting
-
-    Deletes a configuration value. The returned JSON will be the value of the
-    configuration parameter before it was deleted.
-
-    :param section: Configuration section name
-    :param key: Configuration option name
-    :<header Accept: - :mimetype:`application/json`
-                     - :mimetype:`text/plain`
-    :>header Content-Type: - :mimetype:`application/json`
-                           - :mimetype:`text/plain; charset=utf-8`
-    :code 200: Request completed successfully
-    :code 401: CouchDB Server Administrator privileges required
-    :code 404: Specified configuration option not found
-
-    **Request**:
-
-    .. code-block:: http
-
-        DELETE /_node/nonode@nohost/_config/log/level HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Length: 7
-        Content-Type: application/json
-        Date: Sat, 10 Aug 2013 12:29:03 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        "info"
-
-.. _api/config/reload:
-
-``/_node/{node-name}/_config/_reload``
-======================================
-
-.. versionadded:: 3.0
-
-.. http:post:: /_node/{node-name}/_config/_reload
-    :synopsis: Reload the configuration from disk
-
-    Reloads the configuration from disk. This has a side effect of
-    flushing any in-memory configuration changes that have not been
-    committed to disk.
-
-    **Request**:
-
-    .. code-block:: http
-
-        POST /_node/nonode@nohost/_config/_reload HTTP/1.1
-        Host: localhost:5984
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Length: 12
-        Content-Type: application/json
-        Date: Tue, 21 Jan 2020 11:09:35 GMT
-        Server: CouchDB/3.0.0 (Erlang OTP)
-
-        {"ok":true}
diff --git a/src/api/server/index.rst b/src/api/server/index.rst
deleted file mode 100644
index f3098e8..0000000
--- a/src/api/server/index.rst
+++ /dev/null
@@ -1,26 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _api/server:
-
-======
-Server
-======
-
-The CouchDB server API provides the basic interface to a
-CouchDB server for obtaining server information and getting and setting
-configuration values.
-
-.. toctree::
-    common
-    authn
-    configuration
diff --git a/src/best-practices/documents.rst b/src/best-practices/documents.rst
deleted file mode 100644
index 0af2913..0000000
--- a/src/best-practices/documents.rst
+++ /dev/null
@@ -1,349 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _best-practices/documents:
-
-==============================
-Document Design Considerations
-==============================
-
-When designing your database, and your document structure, there are a number of
-best practices to take into consideration. Especially for people accustomed to
-relational databases, some of these techniques may be non-obvious.
-
-Don't rely on CouchDB's auto-UUID generation
---------------------------------------------
-
-While CouchDB will generate a unique identifier for the ``_id`` field of any doc
-that you create, in most cases you are better off generating them yourself for
-a few reasons:
-
-- If for any reason you miss the ``200 OK`` reply from CouchDB, and storing the
-  document is attempted again, you would end up with the same document content
-  stored under multiple ``_id``\ s. This could easily happen with intermediary
-  proxies and cache systems that may not inform developers that the failed
-  transaction is being retried.
-- ``_id``\ s are the only unique enforced value within CouchDB so you might
-  as well make use of this. CouchDB stores its documents in a B+ tree. Each
-  additional or updated document is stored as a leaf node, and may require
-  re-writing intermediary and parent nodes. You may be able to take advantage of
-  sequencing your own ids more effectively than the automatically generated ids
-  if you can arrange them to be sequential yourself.
-
-Alternatives to auto-incrementing sequences
--------------------------------------------
-
-Because of replication, as well as the distributed nature of CouchDB, it is not
-practical to use auto-incrementing sequences with CouchDB. These are often used
-to ensure unique identifiers for each row in a database table. CouchDB generates
-unique ids on its own and you can specify your own as well, so you don't really
-need a sequence here. If you use a sequence for something else, you will be
-better off finding another way to express it in CouchDB.
-
-Pre-aggregating your data
--------------------------
-
-If your intent for CouchDB is as a collect-and-report model, not a real-time view,
-you may not need to store a single document for every event you're recording.
-In this case, pre-aggregating your data may be a good idea. You probably don't
-need 1000 documents per second if all you are trying to do is to track
-summary statistics about those documents. This reduces the computational
-pressure on CouchDB's MapReduce engine(s), as well as its storage requirements.
-
-In this case, using an in-memory store to summarize your statistical information,
-then writing out to CouchDB every 10 seconds / 1 minute / whatever level of
-granularity you need would greatly reduce the number of documents you'll put in
-your database.
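
The flush-periodically idea above can be sketched as follows. This is an
illustrative sketch only, not CouchDB API code; the ``makeAggregator`` and
``flush`` names are invented for the example, and the object returned by
``flush()`` stands in for the single summary document you would save:

```javascript
// Accumulate event counts in memory; flush() is called on a timer
// (every 10 seconds, 1 minute, ...) and yields one summary document
// in place of one document per recorded event.
function makeAggregator() {
  let counts = {};
  return {
    // Count one occurrence of an event instead of storing a document for it.
    record(eventName) {
      counts[eventName] = (counts[eventName] || 0) + 1;
    },
    // Return the accumulated summary and reset the in-memory state.
    flush() {
      const summary = { type: "summary", counts: counts };
      counts = {};
      return summary;
    }
  };
}
```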
-
-Later, you can then further `decimate
-<https://en.wikipedia.org/wiki/Downsampling_(signal_processing)>`_ your data by
-walking the entire database and generating documents to be stored in a new
-database with a lower level of granularity (say, 1 document a day). You can then
-delete the older, more fine-grained database when you're done with it.
-
-Designing an application to work with replication
--------------------------------------------------
-
-Whilst CouchDB includes replication and a conflict-flagging mechanism, this is
-not the whole story for building an application which replicates in a way which
-users expect.
-
-Here we consider a simple example of a bookmarks application. The idea is that
-a user can replicate their own bookmarks, work with them on another machine,
-and then synchronise their changes later.
-
-Let's start with a very simple definition of bookmarks: an ordered, nestable
-mapping of name to URL. Internally the application might represent it like
-this:
-
-.. code-block:: javascript
-
-    [
-      {"name":"Weather", "url":"http://www.bbc.co.uk/weather"},
-      {"name":"News", "url":"http://news.bbc.co.uk/"},
-      {"name":"Tech", "bookmarks": [
-        {"name":"Register", "url":"http://www.theregister.co.uk/"},
-        {"name":"CouchDB", "url":"http://couchdb.apache.org/"}
-      ]}
-    ]
-
-It can then present the bookmarks menu and sub-menus by traversing this structure.
-
-Now consider this scenario: the user has a set of bookmarks on her PC, and then
-replicates it to her laptop. On the laptop, she changes the News link to point
-to CNN, renames "Register" to "The Register", and adds a new link to slashdot
-just after it. On the desktop, her husband deletes the Weather link, and adds a
-new link to CNET in the Tech folder.
-
-So after these changes, the laptop has:
-
-.. code-block:: javascript
-
-    [
-      {"name":"Weather", "url":"http://www.bbc.co.uk/weather"},
-      {"name":"News", "url":"http://www.cnn.com/"},
-      {"name":"Tech", "bookmarks": [
-        {"name":"The Register", "url":"http://www.theregister.co.uk/"},
-        {"name":"Slashdot", "url":"http://www.slashdot.org/"},
-        {"name":"CouchDB", "url":"http://couchdb.apache.org/"}
-      ]}
-    ]
-
-and the PC has:
-
-.. code-block:: javascript
-
-    [
-      {"name":"News", "url":"http://news.bbc.co.uk/"},
-      {"name":"Tech", "bookmarks": [
-        {"name":"Register", "url":"http://www.theregister.co.uk/"},
-        {"name":"CouchDB", "url":"http://couchdb.apache.org/"},
-        {"name":"CNET", "url":"http://news.cnet.com/"}
-      ]}
-    ]
-
-Upon the next synchronisation, we want the expected merge to take place. That
-is: links which were changed, added or deleted on one side are also changed,
-added or deleted on the other side - with no human intervention required unless
-absolutely necessary.
-
-We will also assume that both sides are doing a CouchDB "compact" operation
-periodically, and are disconnected for more than this time before they
-resynchronise.
-
-All of the approaches below which allow automated merging of changes rely on
-having some sort of history, back to the point where the replicas diverged.
-
-CouchDB does not provide a mechanism for this itself. It stores arbitrary
-numbers of old ``_rev`` values for one document (trunk now has a mechanism for
-pruning the ``_rev`` history), for the purposes of replication. However, it
-will not keep the document bodies themselves through a compaction cycle,
-except where there are conflicting versions of a document.
-
-*Do not rely on the CouchDB revision history mechanism to help you build an
-application-level version history.* Its sole purpose is to ensure eventually
-consistent replication between databases. It is up to you to maintain history
-explicitly in whatever form makes sense for your application, and to prune it
-to avoid excessive storage utilisation, whilst not pruning past the point where
-live replicas last diverged.
-
-Approach 1: Single JSON doc
-^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-The above structure is already valid JSON, and so could be represented in
-CouchDB just by wrapping it in an object and storing as a single document:
-
-.. code-block:: javascript
-
-    {
-      "bookmarks":
-      // ... same as above
-    }
-
-This makes life very easy for the application, as the ordering and nesting are
-all taken care of. The trouble here is that on replication, only two sets of
-bookmarks will be visible: the laptop's set (B) and the PC's set (C). One will
-be chosen as the main revision, and the other will be stored as a conflicting
-revision.
-
-At this point, the semantics are very unsatisfactory from the user's point of
-view. The best that can be offered is a choice saying "Which of these two sets
-of bookmarks do you wish to keep: B or C?" However neither represents the
-desired outcome. There is also insufficient data to be able to correctly merge
-them, since the base revision A is lost.
-
-This is going to be highly unsatisfactory for the user, who will have to apply
-one set of changes again manually.
-
-Approach 2: Separate document per bookmark
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-An alternative solution is to make each field (bookmark) a separate document in
-its own right. Adding or deleting a bookmark is then just a case of adding or
-deleting a document, which will never conflict (although if the same bookmark
-is added on both sides, then you will end up with two copies of it). Changing a
-bookmark will only conflict if both sides made changes to the same one, and
-then it is reasonable to ask the user to choose between them.
-
-Since there will now be lots of small documents, you may either wish to keep a
-completely separate database for bookmarks, or else add an attribute to
-distinguish bookmarks from other kinds of document in the database. In the
-latter case, a view can be made to return only bookmark documents.
-
-Whilst replication is now fixed, care is needed with the "ordered" and
-"nestable" properties of bookmarks.
-
-For ordering, one suggestion is to give each item a floating-point index, and
-then when inserting an object between A and B, give it an index which is the
-average of A and B's indices. Unfortunately, this will fail after a while when
-you run out of precision, and the user will be bemused to find that their most
-recent bookmarks no longer remember the exact position they were put in.
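
A quick way to see this precision limit is to simulate always inserting a
new bookmark just after the most recent one: the available gap halves on
every insert until the midpoint collapses onto one of its neighbours. The
``midpoint`` helper is invented for this illustration, not part of CouchDB:

```javascript
// Average of two floating-point indices, as used to insert between them.
function midpoint(a, b) {
  return (a + b) / 2;
}

// Repeatedly insert just after `lo`; stop when double precision is
// exhausted and the midpoint is indistinguishable from a neighbour.
let lo = 0.0;
let hi = 1.0;
let inserts = 0;
for (;;) {
  const mid = midpoint(lo, hi);
  if (mid === lo || mid === hi) {
    break; // no representable index between lo and hi remains
  }
  lo = mid;
  inserts++;
}
```

After only a few dozen inserts in the same spot, no new index fits between the neighbours, which is exactly the failure the text describes.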
-
-A better way is to keep a string representation of the index, which can grow
-as the tree is subdivided. This will not suffer the above problem, but the
-string may become arbitrarily long over time. The items could be renumbered,
-but the renumbering operation could introduce a lot of conflicts, especially if
-attempted by both sides independently.
-
-For "nestable", you can have a separate doc which represents a list of
-bookmarks, and each bookmark can have a "belongs to" field which identifies the
-list. It may be useful anyway to be able to have multiple top-level bookmark
-sets (Bob's bookmarks, Jill's bookmarks etc). Some care is needed when deleting
-a list or sub-list, to ensure that all associated bookmarks are also deleted,
-otherwise they will become orphaned.
-
-Building the entire bookmark set can be performed through the use of emitting
-a compound key that describes the path to the document, then using group levels
-to retrieve the position of the tree in the document. The following code
-excerpt describes a tree of files, where the path to the file is stored in
-the document under the ``"path"`` key:
-
-.. code-block:: javascript
-
-    // map function
-    function(doc) {
-      if (doc.type === "file") {
-        // Strip any trailing slash so the path components split cleanly
-        var raw_path = doc.path.substr(-1) === "/" ? doc.path.slice(0, -1)
-                                                   : doc.path;
-        emit(raw_path.split('/'), 1);
-      }
-    }
-
-    // reduce
-    _sum
-
-This will emit rows into the view of the form ``["opt", "couchdb", "etc",
-"local.ini"]`` for a ``doc.path`` of ``/opt/couchdb/etc/local.ini``. You can
-then query a list of files in the ``/opt/couchdb/etc`` directory by specifying
-a ``startkey`` of ``["opt", "couchdb", "etc"]`` and an ``endkey`` of ``["opt",
-"couchdb", "etc", {}]``.
-
-Approach 3: Immutable history / event sourcing
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Another approach to consider is `Event Sourcing
-<https://martinfowler.com/eaaDev/EventSourcing.html>`_ or Command Logging, as
-implemented in many NoSQL databases and as used in many `operational
-transformation <https://en.wikipedia.org/wiki/Operational_transformation>`_
-systems.
-
-In this model, instead of storing individual bookmarks, you store records of
-changes made - "Bookmark added", "Bookmark changed", "Bookmark moved",
-"Bookmark deleted". These are stored in an append-only fashion. Since records
-are never modified or deleted, only added to, there are never any replication
-conflicts.
-
-These records can also be stored as an array in a single CouchDB document.
-Replication can cause a conflict, but in this case it is easy to resolve by
-simply combining elements from the two arrays.
-
-In order to see the full set of bookmarks, you need to start with a baseline
-set (initially empty) and run all the change records since the baseline was
-created; and/or you need to maintain a most-recent version and update it with
-changes not yet seen.
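
A minimal sketch of such a replay follows. The ``op``, ``name`` and ``url``
record fields are invented for this example, not a CouchDB-defined schema;
the point is only that applying the append-only records in order over a
baseline reproduces the current bookmark set:

```javascript
// Rebuild the current bookmark set (name -> url) by replaying
// append-only change records over a baseline snapshot.
function replay(baseline, changes) {
  const bookmarks = Object.assign({}, baseline);
  for (const change of changes) {
    if (change.op === "add" || change.op === "change") {
      bookmarks[change.name] = change.url;
    } else if (change.op === "delete") {
      delete bookmarks[change.name];
    }
  }
  return bookmarks;
}
```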
-
-Care is needed after replication when merging together history from multiple
-sources. You may get different results depending on how you order them -
-consider taking all A's changes before B's, taking all B's before A's, or
-interleaving them (e.g. if each change has a timestamp).
-
-Also, over time the amount of storage used can grow arbitrarily large, even if
-the set of bookmarks itself is small. This can be controlled by moving the
-baseline version forwards and then keeping only the changes after that point.
-However, care is needed not to move the baseline version forward so far that
-there are active replicas out there which last synchronised before that time,
-as this may result in conflicts which cannot be resolved automatically.
-
-If there is any uncertainty, it is best to present the user with a prompt to
-assist with merging the content in the application itself.
-
-Approach 4: Keep historic versions explicitly
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-If you are going to keep a command log history, then it may be simpler just to
-keep old revisions of the bookmarks list itself around. The intention is to
-subvert CouchDB's automatic behaviour of purging old revisions, by keeping
-these revisions as separate documents.
-
-You can keep a pointer to the 'most current' revision, and each revision can
-point to its predecessor. On replication, merging can take place by diffing
-each of the previous versions (in effect synthesising the command logs) back to
-a common ancestor.
-
-This is the sort of behaviour which revision control systems such as `Git
-<http://git-scm.com/>`_ implement as a matter of routine, although generally
-comparing text files line-by-line rather than comparing JSON objects
-field-by-field.
-
-Systems like Git will accumulate arbitrarily large amounts of history (although
-they will attempt to compress it by packing multiple revisions so that only
-their diffs are stored). With Git you can use "history rewriting" to remove old
-history, but this may prohibit merging if history doesn't go back far enough in
-time.
-
-Adding client-side security with a translucent database
--------------------------------------------------------
-
-Many applications do not require a thick layer of security at the server. It is
-possible to use a modest amount of encryption and one-way functions to obscure
-the sensitive columns or key-value pairs, a technique often called a
-translucent database. (See `a description <http://www.wayner.org/node/52>`_.)
-
-The simplest solutions use a one-way function like SHA-256 at the client to
-scramble the name and password before storing the information.  This solution
-gives the client control of the data in the database without requiring a thick
-layer on the database to test each transaction. Some advantages are:
-
-* Only the client, or someone with knowledge of the name and password, can
-  compute the SHA-256 value and recover the data.
-* Some columns are still left in the clear, an advantage for computing
-  aggregated statistics.
-* Computation of SHA-256 is left to the client-side computer, which usually
-  has cycles to spare.
-* The system prevents server-side snooping by insiders and any attacker who might
-  penetrate the OS or any of the tools running upon it.
-
-There are limitations:
-
-* There is no root password. If the person forgets their name and password, their
-  access is gone forever. This limits its use to databases that can continue by
-  issuing a new user name and password.
-
-There are many variations on this theme detailed in the book `Translucent Databases
-<http://www.wayner.org/node/46>`_, including:
-
-* Adding a backdoor with public-key cryptography.
-* Adding a second layer with steganography.
-* Dealing with typographical errors.
-* Mixing encryption with one-way functions.
diff --git a/src/best-practices/forms.rst b/src/best-practices/forms.rst
deleted file mode 100644
index e97c986..0000000
--- a/src/best-practices/forms.rst
+++ /dev/null
@@ -1,143 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _best-practices/forms:
-
-====================================
-Document submission using HTML Forms
-====================================
-
-It is possible to write to a CouchDB document directly from an HTML form by
-using a document :ref:`update function <updatefun>`. Here's how:
-
-The HTML form
-=============
-
-First, write an HTML form. Here's a simple "Contact Us" form excerpt:
-
-.. code-block:: html
-
-    <form action="/dbname/_design/ddocname/_update/contactform" method="post">
-        <div>
-            <label for="name">Name:</label>
-            <input type="text" id="name" name="name" />
-        </div>
-        <div>
-            <label for="mail">Email:</label>
-            <input type="text" id="mail" name="email" />
-        </div>
-        <div>
-            <label for="msg">Message:</label>
-            <textarea id="msg" name="message"></textarea>
-        </div>
-    </form>
-
-Customize the ``/dbname/_design/ddocname/_update/contactform`` portion of the
-form action URL to reflect the exact path to your database, design document
-and update function (see below).
-
-As CouchDB
-:ref:`no longer recommends the use of CouchDB-hosted web applications <indexes>`
-, you may want to use a reverse proxy to expose CouchDB as a subdirectory of
-your web application.  If so, add that prefix to the ``action`` destination in
-the form.
-
-Another option is to alter CouchDB's :ref:`CORS <cors>` settings and use a
-cross-domain POST. *Be sure you understand all security implications before
-doing this!*
-
-The update function
-===================
-
-Then, write an update function. This is the server-side JavaScript function
-that will receive the ``POST``-ed data.
-
-The first argument to the function will be the document that is being processed
-(if it exists). Because we are using ``POST`` and not ``PUT``, this should be
-empty in our scenario - but we should check to be sure. The ``POST``-ed data
-will be passed as the second parameter to the function, along with any query
-parameters and the full request headers.
-
-Here's a sample handler that extracts the form data, generates a document _id
-based on the email address and timestamp, and saves the document. It then
-returns a JSON success response back to the browser.
-
-.. code-block:: javascript
-
-    function(doc, req) {
-
-        if (doc) {
-            return [doc, toJSON({"error": "request already filed"})]
-        }
-
-        if (!(req.form && req.form.email)) {
-            return [null, toJSON({"error": "incomplete form"})]
-        }
-
-        var date = new Date()
-        var newdoc = req.form
-        newdoc._id = req.form.email + "_" + date.toISOString()
-
-        return [newdoc, toJSON({"success":"ok"})]
-    }
-
-Place the above function in your design document under the ``updates`` key.
-
-Note that this function does not attempt any sort of input validation or
-sanitization. That is best handled by a
-:ref:`validate document update function <vdufun>` instead.  (A "VDU" will
-validate any document written to the database, not just those that use your
-update function.)
-
-If the first element passed to ``return`` is a document, the HTTP response
-headers will include ``X-Couch-Id``, the ``_id`` value for the newly created
-document, and ``X-Couch-Update-NewRev``, the ``_rev`` value for the newly
-created document. This is handy if your client-side code wants to access or
-update the document in a future call.
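A small client-side sketch of reading those headers back after submitting the form; the ``Map`` below simulates a response's headers object (e.g. from ``fetch()``), and the helper name is made up for the example:

```javascript
// Sketch: pull the new document's _id and _rev out of the update
// handler's response headers. The Map simulates a Headers object.
function docInfoFromHeaders(headers) {
  return {
    // X-Couch-Id is URL-encoded, so decode it before reuse.
    id: decodeURIComponent(headers.get("X-Couch-Id")),
    rev: headers.get("X-Couch-Update-NewRev")
  };
}

const simulated = new Map([
  ["X-Couch-Id", "lin%40example.com_2018-04-05T19:51:22.278Z"],
  ["X-Couch-Update-NewRev", "1-34483732407fcc6cfc5b60ace48b9da9"]
]);
const info = docInfoFromHeaders(simulated);
```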
-
-Example output
-==============
-
-Here's the worked sample above, using ``curl`` to simulate the form POST.
-
-.. code-block:: bash
-
-    $ curl -X PUT localhost:5984/testdb/_design/myddoc -d '{ "updates": { "contactform": "function(doc, req) { ... }" } }'
-    {"ok":true,"id":"_design/myddoc","rev":"1-2a2b0951fcaf7287817573b03bba02ed"}
-
-    $ curl -v --data "name=Lin&email=lin@example.com&message=I Love CouchDB" http://localhost:5984/testdb/_design/myddoc/_update/contactform
-    *   Trying 127.0.0.1...
-    * TCP_NODELAY set
-    * Connected to localhost (127.0.0.1) port 5984 (#1)
-    > POST /testdb/_design/myddoc/_update/contactform HTTP/1.1
-    > Host: localhost:5984
-    > User-Agent: curl/7.59.0
-    > Accept: */*
-    > Content-Length: 53
-    > Content-Type: application/x-www-form-urlencoded
-    >
-    * upload completely sent off: 53 out of 53 bytes
-    < HTTP/1.1 201 Created
-    < Content-Length: 16
-    < Content-Type: text/html; charset=utf-8
-    < Date: Thu, 05 Apr 2018 19:56:42 GMT
-    < Server: CouchDB/2.2.0-948a1311c (Erlang OTP/19)
-    < X-Couch-Id: lin%40example.com_2018-04-05T19:51:22.278Z
-    < X-Couch-Request-ID: 03a5f4fbe0
-    < X-Couch-Update-NewRev: 1-34483732407fcc6cfc5b60ace48b9da9
-    < X-CouchDB-Body-Time: 0
-    <
-    * Connection #1 to host localhost left intact
-    {"success":"ok"}
-
-    $ curl http://localhost:5984/testdb/lin\@example.com_2018-04-05T19:51:22.278Z
-    {"_id":"lin@example.com_2018-04-05T19:51:22.278Z","_rev":"1-34483732407fcc6cfc5b60ace48b9da9","name":"Lin","email":"lin@example.com","message":"I Love CouchDB"}
diff --git a/src/best-practices/index.rst b/src/best-practices/index.rst
deleted file mode 100644
index f7e3ebd..0000000
--- a/src/best-practices/index.rst
+++ /dev/null
@@ -1,32 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _best-practices:
-
-==============
-Best Practices
-==============
-
-In this chapter, we present some of the best ways to use Apache CouchDB. These
-usage patterns reflect many years of real-world use. We hope that these will
-jump-start your next project, or improve the performance of your current
-system.
-
-.. toctree::
-    :maxdepth: 2
-
-    documents
-    forms
-    iso-date
-    jsdevel
-    views
-    reverse-proxies
diff --git a/src/best-practices/iso-date.rst b/src/best-practices/iso-date.rst
deleted file mode 100644
index c91c962..0000000
--- a/src/best-practices/iso-date.rst
+++ /dev/null
@@ -1,64 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _best-practices/iso-date:
-
-============================================
-Using an ISO Formatted Date for Document IDs
-============================================
-
-The `ISO 8601 date standard <http://en.wikipedia.org/wiki/ISO_8601>`_ describes a useful
-scheme for representing a date string in a Year-Month-DayTHour:Minute:Second.microsecond
-format. For time-bound documents in a CouchDB database this can be a very handy way to
-create a unique identifier, since JavaScript can directly use it to create a Date object.
-Using this sample ``map`` function:
-
-.. code-block:: javascript
-
-    function(doc) {
-      var dt = new Date(doc._id);
-      emit([dt.getDate(), doc.widget], 1);
-    }
-
-simply use ``group_level`` to zoom in on whatever time you wish to use.
-
-.. code-block:: bash
-
-    curl -X GET "http://localhost:5984/transactions/_design/widget_count/_view/toss?group_level=1"
-
-    {"rows":[
-    {"key":[20],"value":10},
-    {"key":[21],"value":20}
-    ]}
-
-    curl -X GET "http://localhost:5984/transactions/_design/widget_count/_view/toss?group_level=2"
-
-    {"rows":[
-    {"key":[20,"widget"],"value":10},
-    {"key":[21,"widget"],"value":10},
-    {"key":[21,"thing"],"value":10}
-    ]}
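A minimal sketch of creating such a time-bound document id client-side; the ``widget`` field is just the example from the view above:

```javascript
// Sketch: use an ISO 8601 timestamp as the document _id at write time.
// The document body here is illustrative; any time-bound record works.
function timestampedDoc(fields) {
  return Object.assign({ _id: new Date().toISOString() }, fields);
}

const doc = timestampedDoc({ widget: "gizmo" });
// doc._id looks like "2018-04-05T19:51:22.278Z", and because
// new Date(doc._id) recovers the same instant, the map function
// above can build its keys directly from the _id.
```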
-
-Another method is to use ``parseInt()`` and ``substr()`` to cut out useful
-values for a return key:
-
-.. code-block:: javascript
-
-    function (doc) {
-      var datetime = doc._id;
-      var year = parseInt(datetime.substr(0, 4), 10);
-      var month = parseInt(datetime.substr(5, 2), 10);
-      var day = parseInt(datetime.substr(8, 2), 10);
-      var hour = parseInt(datetime.substr(11, 2), 10);
-      var minute = parseInt(datetime.substr(14, 2), 10);
-      emit([doc.widget, year, month, day, hour, minute], 1);
-    }
diff --git a/src/best-practices/jsdevel.rst b/src/best-practices/jsdevel.rst
deleted file mode 100644
index d214870..0000000
--- a/src/best-practices/jsdevel.rst
+++ /dev/null
@@ -1,48 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _best-practices/jsdevel:
-
-===========================
-JavaScript development tips
-===========================
-
-Working with Apache CouchDB's JavaScript environment is a lot different than
-working with traditional JavaScript development environments. Here are some
-tips and tricks that will ease the difficulty.
-
-.. rst-class:: open
-
-- Check the JavaScript version being used by your CouchDB. As of version 3.2.0,
-  this is reported in the output of ``GET /_node/_local/_versions``. Prior to
-  version 3.2.0, you will need to check which JavaScript library is installed
-  by your CouchDB binary distribution, provided by your operating system, or
-  linked by your compilation process.
-
-  If the version is 1.8.5, this is an **old** version of JavaScript, only
-  supporting the ECMA-262 5th edition ("ES5") of the language. ES6/2015 and
-  newer constructs **cannot** be used.
-
-  Fortunately, there are many tools available for transpiling modern JavaScript
-  into code compatible with older JS engines. The `Babel Project website
-  <http://babeljs.io/repl>`_, for example, offers an in-browser text editor
-  which transpiles JavaScript in real-time. Configuring CouchDB-compatibility
-  is as easy as enabling the ``ENV PRESET`` option, and typing "firefox 4.0"
-  into the *TARGETS* field.
-
-- The ``log()`` function will log output to the CouchDB log file or stream.
-  You can log strings, objects, and arrays directly, without first converting
-  to JSON.  Use this in conjunction with a local CouchDB instance for best
-  results.
-
-- Be sure to guard all document accesses to avoid exceptions when fields
-  or subfields are missing: ``if (doc && doc.myarray && doc.myarray.length)...``
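That guard pattern can be sketched as a small helper; ``myarray`` is just the example field from the tip above, and the function name is made up for illustration:

```javascript
// Sketch: guard each level of a document before touching subfields,
// so a missing field yields a harmless default instead of a thrown
// exception inside the view server.
function safeCount(doc) {
  if (doc && doc.myarray && doc.myarray.length) {
    return doc.myarray.length;
  }
  return 0;
}
```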
diff --git a/src/best-practices/reverse-proxies.rst b/src/best-practices/reverse-proxies.rst
deleted file mode 100644
index a04fc76..0000000
--- a/src/best-practices/reverse-proxies.rst
+++ /dev/null
@@ -1,315 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _best-practices/reverse-proxies:
-
-========================
-Reverse Proxies
-========================
-
-Reverse proxying with HAProxy
-=============================
-
-CouchDB recommends the use of `HAProxy`_ as a load balancer and reverse proxy.
-The team's experience with using it in production has shown it to be superior
-for configuration and monitoring capabilities, as well as overall performance.
-
-CouchDB's sample haproxy configuration is present in the `code repository`_ and
-release tarball as ``rel/haproxy.cfg``. It is included below. This example
-is for a 3 node CouchDB cluster:
-
-.. code-block:: text
-
-    global
-        maxconn 512
-        spread-checks 5
-
-    defaults
-        mode http
-        log global
-        monitor-uri /_haproxy_health_check
-        option log-health-checks
-        option httplog
-        balance roundrobin
-        option forwardfor
-        option redispatch
-        retries 4
-        option http-server-close
-        timeout client 150000
-        timeout server 3600000
-        timeout connect 500
-
-        stats enable
-        stats uri /_haproxy_stats
-        # stats auth admin:admin # Uncomment for basic auth
-
-    frontend http-in
-         # This requires HAProxy 1.5.x
-         # bind *:$HAPROXY_PORT
-         bind *:5984
-         default_backend couchdbs
-
-    backend couchdbs
-        option httpchk GET /_up
-        http-check disable-on-404
-        server couchdb1 x.x.x.x:5984 check inter 5s
-        server couchdb2 x.x.x.x:5984 check inter 5s
-        server couchdb3 x.x.x.x:5984 check inter 5s
-
-.. _HAProxy: http://haproxy.org/
-.. _code repository: https://github.com/apache/couchdb/blob/main/rel/haproxy.cfg
-
-Reverse proxying with nginx
-===========================
-
-Basic Configuration
--------------------
-
-Here's a basic excerpt from an nginx config file in
-``<nginx config directory>/sites-available/default``. This will proxy all
-requests from ``http://domain.com/...`` to ``http://localhost:5984/...``
-
-.. code-block:: text
-
-    location / {
-        proxy_pass http://localhost:5984;
-        proxy_redirect off;
-        proxy_buffering off;
-        proxy_set_header Host $host;
-        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
-    }
-
-Proxy buffering **must** be disabled, or continuous replication will not
-function correctly behind nginx.
-
-Reverse proxying CouchDB in a subdirectory with nginx
------------------------------------------------------
-
-It can be useful to provide CouchDB as a subdirectory of your overall domain,
-especially to avoid CORS concerns. Here's an excerpt of a basic nginx
-configuration that proxies the URL ``http://domain.com/couchdb`` to
-``http://localhost:5984`` so that requests appended to the subdirectory, such
-as ``http://domain.com/couchdb/db1/doc1`` are proxied to
-``http://localhost:5984/db1/doc1``.
-
-.. code-block:: text
-
-    location /couchdb {
-        rewrite ^ $request_uri;
-        rewrite ^/couchdb/(.*) /$1 break;
-        proxy_pass http://localhost:5984$uri;
-        proxy_redirect off;
-        proxy_buffering off;
-        proxy_set_header Host $host;
-        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
-    }
-
-Session based replication has been enabled by default since CouchDB 2.3.0. To
-enable session based replication with reverse proxied CouchDB in a
-subdirectory, add the following ``location`` block:
-
-.. code-block:: text
-
-    location /_session {
-        proxy_pass http://localhost:5984/_session;
-        proxy_redirect off;
-        proxy_buffering off;
-        proxy_set_header Host $host;
-        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
-    }
-
-Authentication with nginx as a reverse proxy
---------------------------------------------
-
-Here's a sample config setting with basic authentication enabled, placing
-CouchDB in the ``/couchdb`` subdirectory:
-
-.. code-block:: text
-
-    location /couchdb {
-        auth_basic "Restricted";
-        auth_basic_user_file htpasswd;
-        rewrite /couchdb/(.*) /$1 break;
-        proxy_pass http://localhost:5984;
-        proxy_redirect off;
-        proxy_buffering off;
-        proxy_set_header Host $host;
-        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
-        proxy_set_header Authorization "";
-    }
-
-This setup leans entirely on nginx performing authorization, and forwarding
-requests to CouchDB with no authentication (with CouchDB in Admin Party mode),
-which isn't sufficient in CouchDB 3.0 anymore as Admin Party has been removed.
-You'd need to at the very least hard-code user credentials into this version
-with headers.
-
-For a better solution, see :ref:`api/auth/proxy`.
-
-SSL with nginx
---------------------------------------------
-
-In order to enable SSL, just enable the nginx SSL module, and add another
-proxy header:
-
-.. code-block:: text
-
-    ssl on;
-    ssl_certificate PATH_TO_YOUR_PUBLIC_KEY.pem;
-    ssl_certificate_key PATH_TO_YOUR_PRIVATE_KEY.key;
-    ssl_protocols TLSv1.2 TLSv1.3;
-    ssl_session_cache shared:SSL:1m;
-
-    location / {
-        proxy_pass http://localhost:5984;
-        proxy_redirect off;
-        proxy_set_header Host $host;
-        proxy_buffering off;
-        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
-        proxy_set_header X-Forwarded-Ssl on;
-    }
-
-The ``X-Forwarded-Ssl`` header tells CouchDB that it should use the ``https``
-scheme instead of the ``http`` scheme. Otherwise, all CouchDB-generated
-redirects will fail.
-
-Reverse Proxying with Caddy 2
-=============================
-
-Caddy is ``https-by-default``, and will automatically acquire, install,
-activate and, when necessary, renew a trusted SSL certificate for you,
-all in the background.
-Certificates are issued by the **Let's Encrypt** certificate authority.
-
-Basic configuration
--------------------
-
-Here's a basic excerpt from a Caddyfile in
-``/etc/caddy/Caddyfile``. This will proxy all
-requests from ``http(s)://domain.com/...`` to ``http://localhost:5984/...``
-
-.. code-block:: text
-
-    domain.com {
-
-       reverse_proxy localhost:5984
-
-    }
-
-Reverse proxying CouchDB in a subdirectory with Caddy 2
--------------------------------------------------------
-
-It can be useful to provide CouchDB as a subdirectory of your overall domain,
-especially to avoid CORS concerns. Here's an excerpt of a basic Caddy
-configuration that proxies the URL ``http(s)://domain.com/couchdb`` to
-``http://localhost:5984`` so that requests appended to the subdirectory, such
-as ``http(s)://domain.com/couchdb/db1/doc1`` are proxied to
-``http://localhost:5984/db1/doc1``.
-
-.. code-block:: text
-
-    domain.com {
-
-        reverse_proxy /couchdb/* localhost:5984
-
-    }
-
-Reverse proxying + load balancing for CouchDB clusters
-------------------------------------------------------
-
-Here's a basic excerpt from a Caddyfile in
-``/<path>/<to>/<site>/Caddyfile``. This will proxy and evenly distribute all
-requests from ``http(s)://domain.com/...`` among 3 CouchDB cluster nodes
-at ``localhost:15984``, ``localhost:25984`` and ``localhost:35984``.
-
-Caddy will check the status, i.e. health, of each node every 5 seconds;
-if a node goes down, Caddy will avoid proxying requests to that node until it
-comes back online.
-
-.. code-block:: text
-
-    domain.com {
-
-        reverse_proxy http://localhost:15984 http://localhost:25984 http://localhost:35984 {
-            lb_policy round_robin
-            lb_try_interval 500ms
-            health_interval 5s
-        }
-
-    }
-
-Authentication with Caddy 2 as a reverse proxy
-----------------------------------------------
-
-Here's a sample config setting with basic authentication enabled, placing
-CouchDB in the ``/couchdb`` subdirectory:
-
-.. code-block:: text
-
-    domain.com {
-
-        basicauth /couchdb/* {
-            couch_username couchdb_hashed_password_base64
-        }
-
-        reverse_proxy /couchdb/* localhost:5984
-
-    }
-
-This setup leans entirely on Caddy performing authorization, and forwarding
-requests to CouchDB with no authentication (with CouchDB in Admin Party mode),
-which isn't sufficient in CouchDB 3.0 anymore as Admin Party has been removed.
-You'd need to at the very least hard-code user credentials into this version
-with headers.
-
-For a better solution, see :ref:`api/auth/proxy`.
-
-Reverse Proxying with Apache HTTP Server
-========================================
-
-.. warning::
-    As of this writing, there is no way to fully disable the buffering between
-    Apache HTTPD Server and CouchDB. This may present problems with continuous
-    replication. The Apache CouchDB team strongly recommend the use of an
-    alternative reverse proxy such as ``haproxy`` or ``nginx``, as described
-    earlier in this section.
-
-Basic Configuration
--------------------
-
-Here's a basic excerpt for using a ``VirtualHost`` block config to use Apache
-as a reverse proxy for CouchDB. You need at least to configure Apache with the
-``--enable-proxy --enable-proxy-http`` options and use a version equal to or
-higher than Apache 2.2.7 in order to use the ``nocanon`` option in the
-``ProxyPass`` directive. The ``ProxyPass`` directive adds the ``X-Forwarded-For``
-header needed by CouchDB, and the ``ProxyPreserveHost`` directive ensures the
-original client ``Host`` header is preserved.
-
-.. code-block:: apacheconf
-
-    <VirtualHost *:80>
-       ServerAdmin webmaster@dummy-host.example.com
-       DocumentRoot "/opt/websites/web/www/dummy"
-       ServerName couchdb.localhost
-       AllowEncodedSlashes On
-       ProxyRequests Off
-       KeepAlive Off
-       <Proxy *>
-          Order deny,allow
-          Deny from all
-          Allow from 127.0.0.1
-       </Proxy>
-       ProxyPass / http://localhost:5984 nocanon
-       ProxyPassReverse / http://localhost:5984
-       ProxyPreserveHost On
-       ErrorLog "logs/couchdb.localhost-error_log"
-       CustomLog "logs/couchdb.localhost-access_log" common
-    </VirtualHost>
diff --git a/src/best-practices/views.rst b/src/best-practices/views.rst
deleted file mode 100644
index ffc8399..0000000
--- a/src/best-practices/views.rst
+++ /dev/null
@@ -1,57 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _best-practices/views:
-
-====================
-View recommendations
-====================
-
-Here are some tips and tricks for working with CouchDB's (JavaScript-based)
-views.
-
-Deploying a view change in a live environment
----------------------------------------------
-
-It is possible to change the definition of a view, build the index, then make
-those changes go live without causing downtime for your application. The trick
-to making this work is that CouchDB's JavaScript view index files are based on
-the contents of the design document - not its name, ``_id`` or revision. This
-means that two design documents with identical view code will share the same
-on-disk view index files.
-
-Here is a worked example, assuming your ``/db/_design/ddoc`` needs to be updated.
-
-1. Upload the old design doc to ``/db/_design/ddoc-old`` (or copy the document)
-   if you want an easy way to rollback in case of problems. The ``ddoc-old``
-   document will reference the same view indexes already built for ``_design/ddoc``.
-
-2. Upload the updated design doc to ``/db/_design/ddoc-new``.
-
-3. Query a view in the new design document to trigger secondary index generation.
-   You can track the indexing progress via the ``/_active_tasks`` endpoint, or
-   through the :ref:`fauxton` web interface.
-
-4. When the index is done being built, re-upload the updated design document to
-   ``/db/_design/ddoc`` (or copy the document). The ``ddoc`` document will now
-   reference the same view indexes already built for ``_design/ddoc-new``.
-
-5. Delete ``/db/_design/ddoc-new`` and/or ``/db/_design/ddoc-old`` at your
-   discretion. Don't forget to trigger :ref:`compact/views/cleanup` to reclaim
-   disk space after deleting ``ddoc-old``.
-
-The :ref:`COPY <api/doc/copy>` HTTP verb can be used to copy the design document
-with a single command:
-
-.. code-block:: bash
-
-    curl -X COPY <URL of source design document> -H "Destination: <ID of destination design document>"
diff --git a/src/cluster/databases.rst b/src/cluster/databases.rst
deleted file mode 100644
index 1dcbc2b..0000000
--- a/src/cluster/databases.rst
+++ /dev/null
@@ -1,85 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _cluster/databases:
-
-===================
-Database Management
-===================
-
-.. _cluster/databases/create:
-
-Creating a database
-===================
-
-This will create a database with ``3`` replicas and ``8`` shards.
-
-.. code-block:: bash
-
-    curl -X PUT "http://xxx.xxx.xxx.xxx:5984/database-name?n=3&q=8" --user admin-user
-
-The database is in ``data/shards``. Look around on all the nodes and you will
-find all the parts.
-
-If you do not specify ``n`` and ``q`` the default will be used. The default is
-``3`` replicas and ``8`` shards.
-
-.. _cluster/databases/delete:
-
-Deleting a database
-===================
-
-.. code-block:: bash
-
-    curl -X DELETE "http://xxx.xxx.xxx.xxx:5984/database-name" --user admin-user
-
-.. _cluster/databases/placement:
-
-Placing a database on specific nodes
-====================================
-
-In BigCouch, the predecessor to CouchDB 2.0's clustering functionality, there
-was the concept of zones. CouchDB 2.0 carries this forward with cluster
-placement rules.
-
-.. warning::
-
-    Use of the ``placement`` argument will **override** the standard
-    logic for shard replica cardinality (specified by ``[cluster] n``.)
-
-First, each node must be labeled with a zone attribute. This defines which zone each node
-is in. You do this by editing the node's document in the system ``_nodes`` database, which
-is accessed node-local via the ``GET /_node/_local/_nodes/{node-name}`` endpoint.
-
-Add a key value pair of the form:
-
-.. code-block:: text
-
-    "zone": "metro-dc-a"
-
-Do this for all of the nodes in your cluster.
-
-In your config file (``local.ini`` or ``default.ini``) on each node, define a
-consistent cluster-wide setting like:
-
-.. code-block:: text
-
-    [cluster]
-    placement = metro-dc-a:2,metro-dc-b:1
-
-In this example, it will ensure that two replicas for a shard will be hosted
-on nodes with the zone attribute set to ``metro-dc-a`` and one replica will
-be hosted on a node with the zone attribute set to ``metro-dc-b``.
-
-Note that you can also use this system to ensure certain nodes in the cluster
-do not host *any* replicas for newly created databases, by giving them a zone
-attribute that does not appear in the ``[cluster]`` placement string.
diff --git a/src/cluster/index.rst b/src/cluster/index.rst
deleted file mode 100644
index 9d2f57a..0000000
--- a/src/cluster/index.rst
+++ /dev/null
@@ -1,39 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _cluster:
-
-==================
-Cluster Management
-==================
-
-As of CouchDB 2.0.0, CouchDB can be run in two different modes of operation:
-    * Standalone: In this mode, CouchDB's clustering is unavailable.
-      CouchDB's HTTP-based replication with other CouchDB installations remains available.
-    * Cluster: A cluster of CouchDB installations internally replicate
-      with each other via optimized network connections.
-      This is intended to be used with servers that are in the same data center.
-      This allows for database sharding to improve performance.
-
-This section details the theory behind CouchDB clusters, and provides specific
-operational instructions on node, database and shard management.
-
-.. toctree::
-    :maxdepth: 2
-
-    theory
-    nodes
-    databases
-    sharding
-    purging
-    tls_erlang_distribution
-    troubleshooting
diff --git a/src/cluster/nodes.rst b/src/cluster/nodes.rst
deleted file mode 100644
index 31cc477..0000000
--- a/src/cluster/nodes.rst
+++ /dev/null
@@ -1,89 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _cluster/nodes:
-
-===============
-Node Management
-===============
-
-.. _cluster/nodes/add:
-
-Adding a node
-=============
-
-Go to ``http://server1:5984/_membership`` to see the name of the node and all
-the nodes it is connected to and knows about.
-
-.. code-block:: text
-
-    curl -X GET "http://xxx.xxx.xxx.xxx:5984/_membership" --user admin-user
-
-.. code-block:: javascript
-
-    {
-        "all_nodes":[
-            "node1@xxx.xxx.xxx.xxx"],
-        "cluster_nodes":[
-            "node1@xxx.xxx.xxx.xxx"]
-    }
-
-* ``all_nodes`` are all the nodes that this node knows about.
-* ``cluster_nodes`` are the nodes that are part of the cluster.
-
-To add a node simply do:
-
-.. code-block:: text
-
-    curl -X PUT "http://xxx.xxx.xxx.xxx:5984/_node/_local/_nodes/node2@yyy.yyy.yyy.yyy" -d '{}'
-
-Now look at ``http://server1:5984/_membership`` again.
-
-.. code-block:: javascript
-
-    {
-        "all_nodes":[
-            "node1@xxx.xxx.xxx.xxx",
-            "node2@yyy.yyy.yyy.yyy"
-        ],
-        "cluster_nodes":[
-            "node1@xxx.xxx.xxx.xxx",
-            "node2@yyy.yyy.yyy.yyy"
-        ]
-    }
-
-And you have a 2-node cluster :)
-
-``http://yyy.yyy.yyy.yyy:5984/_membership`` will show the same thing, so you
-only have to add a node once.
-
-.. _cluster/nodes/remove:
-
-Removing a node
-===============
-
-Before you remove a node, make sure that you have moved all
-:ref:`shards <cluster/sharding/move>` away from that node.
-
-To remove ``node2`` from server ``yyy.yyy.yyy.yyy``, you need to first know the
-revision of the document that signifies that node’s existence:
-
-.. code-block:: text
-
-    curl "http://xxx.xxx.xxx.xxx:5984/_node/_local/_nodes/node2@yyy.yyy.yyy.yyy"
-    {"_id":"node2@yyy.yyy.yyy.yyy","_rev":"1-967a00dff5e02add41820138abb3284d"}
-
-With that ``_rev``, you can now proceed to delete the node document:
-
-.. code-block:: text
-
-    curl -X DELETE "http://xxx.xxx.xxx.xxx:5984/_node/_local/_nodes/node2@yyy.yyy.yyy.yyy?rev=1-967a00dff5e02add41820138abb3284d"
diff --git a/src/cluster/purging.rst b/src/cluster/purging.rst
deleted file mode 100644
index d0704ed..0000000
--- a/src/cluster/purging.rst
+++ /dev/null
@@ -1,185 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _cluster/purging:
-
-===============
-Clustered Purge
-===============
-The primary purpose of clustered purge is to clean databases that have multiple
-deleted tombstones or single documents that contain large numbers of conflicts.
-But it can also be used to purge any document (deleted or non-deleted) with any
-number of revisions.
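For reference, purge requests are issued with a ``POST`` to the database's
``_purge`` endpoint, whose body maps each document ID to the list of revisions
to purge (the database name, document ID and revisions below are placeholders):

```text
$ curl -X POST "$COUCH_URL:5984/{db}/_purge" \
    -H "Content-Type: application/json" \
    -d '{"{docid}": ["{rev1}", "{rev2}"]}'
```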
-
-Clustered purge is designed to maintain eventual consistency and prevent
-unnecessary invalidation of secondary indexes. For this, every database keeps
-track of a certain number of historical purges requested in the database, as
-well as its current ``purge_seq``. Internal replications and secondary indexes
-process database's purges and periodically update their corresponding purge
-checkpoint documents to report ``purge_seq`` processed by them. To ensure
-eventual consistency, the database will remove stored historical purge requests
-only after they have been processed by internal replication jobs and secondary
-indexes.
-
-Internal Structures
-====================================
-To enable internal replication of purge information between nodes and secondary
-indexes, two internal purge trees were added to a database file to track
-historical purges.
-
-.. code-block:: text
-
-    purge_tree: UUID -> {PurgeSeq, DocId, Revs}
-    purge_seq_tree: PurgeSeq -> {UUID, DocId, Revs}
-
-Each interactive request to the ``_purge`` API creates an ordered set of pairs
-of increasing ``purge_seq`` and purge request, where a purge request is a tuple
-containing a docid and a list of revisions, and a UUID is generated for each
-purge request. Each purge request is then added to both internal purge trees:
-a tuple ``{UUID -> {PurgeSeq, DocId, Revs}}`` is added to ``purge_tree``, and
-a tuple ``{PurgeSeq -> {UUID, DocId, Revs}}`` is added to ``purge_seq_tree``.
-
-Compaction of Purges
-====================================
-
-During the compaction of the database, the oldest purge requests are removed
-so that only ``purged_infos_limit`` purges are stored in the database.
-But in order to keep the database consistent with indexes and other replicas,
-we can only remove purge requests that have already been processed by indexes
-and internal replication jobs. Thus, occasionally purge trees may store
-more than ``purged_infos_limit`` purges. If the number of stored purges in the
-database exceeds ``purged_infos_limit`` by a certain threshold, a warning is
-produced in the logs, signaling a problem with synchronization of the
-database's purges with indexes and other replicas.
-
-Local Purge Checkpoint Documents
-====================================
-Indexes and internal replication jobs of a database with purges create and
-periodically update local purge checkpoint documents:
-``_local/purge-$type-$hash``. These documents report the last ``purge_seq``
-processed by them and the timestamp of the last processing. An example of a
-local purge checkpoint document:
-
-.. code-block:: json
-
-    {
-      "_id": "_local/purge-mrview-86cacdfbaf6968d4ebbc324dd3723fe7",
-      "type": "mrview",
-      "purge_seq": 10,
-      "updated_on": 1540541874,
-      "ddoc_id": "_design/foo",
-      "signature": "5d10247925f826ae3e00966ec24b7bf6"
-    }
-
-The image below shows possible local checkpoint documents that a database may
-have.
-
-.. figure:: ../../images/purge-checkpoint-docs.png
-    :align: center
-    :alt: Local Purge Checkpoint Documents
-
-    Local Purge Checkpoint Documents
-
-Internal Replication
-====================================
-
-.. rst-class:: open
-
-Purge requests are replayed across all nodes in an eventually consistent manner.
-Internal replication of purges consists of two steps:
-
-1. Pull replication. Internal replication first starts by pulling purges from
-the target and applying them on the source, to make sure we don't reintroduce
-to the target docs/revs from the source that have already been purged on the
-target. In this step, we use purge checkpoint documents stored on the target
-to keep track of the last target ``purge_seq`` processed by the source. We
-find purge requests that occurred after this ``purge_seq`` and replay them on
-the source. This step finishes by updating the target's purge checkpoint
-documents with the latest processed ``purge_seq`` and timestamp.
-
-2. Push replication. Internal replication then proceeds as usual, with an
-extra step inserted to push the source's purge requests to the target. In this
-step, we use local internal replication checkpoint documents, which are
-updated on both target and source.
-
-Under normal conditions, an interactive purge request is already sent to every
-node containing a database shard's replica, and applied on every replica.
-Internal replication of purges between nodes is just an extra step to ensure
-consistency between replicas, where all purge requests on one node are replayed
-on another node. In order not to replay the same purge request on a replica,
-each interactive purge request is tagged with a unique ``uuid``. Internal
-replication filters out purge requests with UUIDs that already exist in the
-replica's ``purge_tree``, and applies only purge requests with UUIDs that don't
-exist in the ``purge_tree``. This is why two internal purge trees are needed:
-1) the ``purge_tree``, ``{UUID -> {PurgeSeq, DocId, Revs}}``, allows quick
-lookup of purge requests with UUIDs that already exist in the replica;
-2) the ``purge_seq_tree``, ``{PurgeSeq -> {UUID, DocId, Revs}}``, allows
-iterating from a given ``purge_seq`` to collect all purge requests that
-happened after it.
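A Python sketch of this UUID-based de-duplication, with plain dicts standing in
for the two trees (a simplification of the actual implementation):

```python
def apply_purges(purge_tree, purge_seq_tree, requests):
    """Replay purge requests (uuid, doc_id, revs), skipping UUIDs already applied."""
    applied = []
    for uuid, doc_id, revs in requests:
        if uuid in purge_tree:          # purge_tree: fast lookup by UUID
            continue
        seq = max(purge_seq_tree, default=0) + 1
        purge_tree[uuid] = (seq, doc_id, revs)
        purge_seq_tree[seq] = (uuid, doc_id, revs)
        applied.append(uuid)
    return applied

def requests_since(purge_seq_tree, checkpoint_seq):
    # purge_seq_tree: iterate from a checkpoint to collect later purge requests.
    return [purge_seq_tree[s] for s in sorted(purge_seq_tree) if s > checkpoint_seq]
```

Replaying the same request twice is a no-op, which is exactly what makes the
replication step idempotent.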
-
-Indexes
-====================================
-Each purge request bumps the ``update_seq`` of the database, so that each
-secondary index is also updated in order to apply the purge requests and
-maintain consistency with the main database.
-
-Config Settings
-====================================
-These settings can be updated in the default.ini or local.ini:
-
-+-----------------------+--------------------------------------------+----------+
-| Field                 | Description                                | Default  |
-+=======================+============================================+==========+
-| max_document_id_number| Allowed maximum number of documents in one | 100      |
-|                       | purge request                              |          |
-+-----------------------+--------------------------------------------+----------+
-| max_revisions_number  | Allowed maximum number of accumulated      | 1000     |
-|                       | revisions in one purge request             |          |
-+-----------------------+--------------------------------------------+----------+
-| allowed_purge_seq_lag | Besides purged_infos_limit, an additional  | 100      |
-|                       | buffer allowed to store purge requests     |          |
-+-----------------------+--------------------------------------------+----------+
-| index_lag_warn_seconds| Allowed duration for which an index's local| 86400    |
-|                       | purge checkpoint doc may go without update |          |
-+-----------------------+--------------------------------------------+----------+
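These limits live in the configuration file; a sketch with the defaults shown
above (the ``[purge]`` section name is assumed from current CouchDB releases):

```ini
; etc/local.ini
[purge]
max_document_id_number = 100
max_revisions_number = 1000
allowed_purge_seq_lag = 100
index_lag_warn_seconds = 86400
```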
-
-During a database compaction, we check all purge checkpoint docs. A client (an
-index or internal replication job) is allowed to have its last reported
-``purge_seq`` be smaller than the current database shard's ``purge_seq`` by up
-to ``(purged_infos_limit + allowed_purge_seq_lag)``. If the client's
-``purge_seq`` is even smaller, and the client has not checkpointed within
-``index_lag_warn_seconds``, it prevents compaction of purge trees, and the
-following log warning is issued for this client:
-
-.. code-block:: text
-
-    Purge checkpoint '_local/purge-mrview-9152d15c12011288629bcffba7693fd4'
-    not updated in 86400 seconds in
-    <<"shards/00000000-1fffffff/testdb12.1491979089">>
-
-If this type of log warning occurs, check the client to see why the processing
-of purge requests is stalled in it.
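The compaction-time check can be sketched as follows (function and parameter
names are illustrative, not CouchDB internals):

```python
def purge_client_status(db_purge_seq, client_purge_seq, last_checkpoint, now,
                        purged_infos_limit=1000, allowed_purge_seq_lag=100,
                        index_lag_warn_seconds=86400):
    """Return 'ok', or 'warn' when a stale client blocks purge-tree compaction."""
    allowed_lag = purged_infos_limit + allowed_purge_seq_lag
    if db_purge_seq - client_purge_seq <= allowed_lag:
        return "ok"                 # client is within the allowed purge_seq lag
    if now - last_checkpoint > index_lag_warn_seconds:
        return "warn"               # lagging *and* not checkpointing: log a warning
    return "ok"
```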
-
-There is a mapping relationship between a design document of indexes and its
-local checkpoint docs. If a design document is updated or deleted, the
-corresponding local checkpoint document should also be automatically deleted.
-But in the unexpected case where a design doc was updated or deleted while its
-checkpoint document still exists in the database, the following warning will be
-issued:
-
-.. code-block:: text
-
-    "Invalid purge doc '<<"_design/bar">>' on database
-    <<"shards/00000000-1fffffff/testdb12.1491979089">>
-    with purge_seq '50'"
-
-If this type of log warning occurs, remove the local purge doc from a database.
diff --git a/src/cluster/sharding.rst b/src/cluster/sharding.rst
deleted file mode 100644
index d7bc3e2..0000000
--- a/src/cluster/sharding.rst
+++ /dev/null
@@ -1,888 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _cluster/sharding:
-
-================
-Shard Management
-================
-
-.. _cluster/sharding/intro:
-
-Introduction
-------------
-
-This document discusses how sharding works in CouchDB along with how to
-safely add, move, remove, and create placement rules for shards and
-shard replicas.
-
-A `shard
-<https://en.wikipedia.org/wiki/Shard_(database_architecture)>`__ is a
-horizontal partition of data in a database. Partitioning data into
-shards and distributing copies of each shard (called "shard replicas" or
-just "replicas") to different nodes in a cluster gives the data greater
-durability against node loss. CouchDB clusters automatically shard
-databases and distribute the subsets of documents that compose each
-shard among nodes. Modifying cluster membership and sharding behavior
-must be done manually.
-
-Shards and Replicas
-~~~~~~~~~~~~~~~~~~~
-
-How many shards and replicas each database has can be set at the global
-level, or on a per-database basis. The relevant parameters are ``q`` and
-``n``.
-
-*q* is the number of database shards to maintain. *n* is the number of
-copies of each document to distribute. The default value for ``n`` is ``3``,
-and for ``q`` is ``2``. With ``q=2``, the database is split into 2 shards. With
-``n=3``, the cluster distributes three replicas of each shard. Altogether,
-that's 6 shard replicas for a single database.
-
-In a 3-node cluster with ``q=8``, each node would receive 8 shards. In a 4-node
-cluster, each node would receive 6 shards. We recommend in the general case
-that the number of nodes in your cluster should be a multiple of ``n``, so that
-shards are distributed evenly.
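The arithmetic behind those shard counts can be sketched as:

```python
def shard_replicas_per_node(q, n, num_nodes):
    # A database has q shards with n replicas each; when num_nodes divides
    # q * n evenly, every node hosts the same number of shard replicas.
    return q * n / num_nodes

print(shard_replicas_per_node(8, 3, 3))  # 8.0 shard replicas per node
print(shard_replicas_per_node(8, 3, 4))  # 6.0 shard replicas per node
```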
-
-CouchDB nodes have an ``etc/default.ini`` file with a section named
-`cluster <../config/cluster.html>`__ which looks like this:
-
-::
-
-    [cluster]
-    q=2
-    n=3
-
-These settings specify the default sharding parameters for newly created
-databases. These can be overridden in the ``etc/local.ini`` file by copying the
-text above, and replacing the values with your new defaults.
-If ``[couch_peruser]`` ``q`` is set, that value is used for per-user databases.
-(By default, it is set to 1, on the assumption that per-user dbs will be quite
-small and there will be many of them.)  The values can also be set on a
-per-database basis by specifying the ``q`` and ``n`` query parameters when the
-database is created. For example:
-
-.. code-block:: bash
-
-    $ curl -X PUT "$COUCH_URL:5984/database-name?q=4&n=2"
-
-This creates a database that is split into 4 shards and 2 replicas,
-yielding 8 shard replicas distributed throughout the cluster.
-
-Quorum
-~~~~~~
-
-Depending on the size of the cluster, the number of shards per database,
-and the number of shard replicas, not every node may have access to
-every shard, but every node knows where all the replicas of each shard
-can be found through CouchDB's internal shard map.
-
-Each request that comes in to a CouchDB cluster is handled by any one
-random coordinating node. This coordinating node proxies the request to
-the other nodes that have the relevant data, which may or may not
-include itself. The coordinating node sends a response to the client
-once a `quorum
-<https://en.wikipedia.org/wiki/Quorum_(distributed_computing)>`__ of
-database nodes have responded; 2, by default. The default required size
-of a quorum is equal to ``r=w=((n+1)/2)`` where ``r`` refers to the size
-of a read quorum, ``w`` refers to the size of a write quorum, and ``n``
-refers to the number of replicas of each shard. In a default cluster where
-``n`` is 3, ``((n+1)/2)`` would be 2.
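The default quorum size can be sketched with a line of Python (illustrative,
not CouchDB's implementation):

```python
def default_quorum(n: int) -> int:
    # r = w = (n + 1) / 2, using integer division
    return (n + 1) // 2

print(default_quorum(3))  # 2: the default read and write quorum when n=3
```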
-
-.. note::
-    Each node in a cluster can be a coordinating node for any one
-    request. There are no special roles for nodes inside the cluster.
-
-The size of the required quorum can be configured at request time by
-setting the ``r`` parameter for document reads, and the ``w``
-parameter for document writes. The ``_view``, ``_find``, and
-``_search`` endpoints read only one copy no matter what quorum is
-configured, effectively making a quorum of 1 for these requests.
-
-For example, here is a request that directs the coordinating node to
-send a response once at least two nodes have responded:
-
-.. code-block:: bash
-
-    $ curl "$COUCH_URL:5984/{db}/{doc}?r=2"
-
-Here is a similar example for writing a document:
-
-.. code-block:: bash
-
-    $ curl -X PUT "$COUCH_URL:5984/{db}/{doc}?w=2" -d '{...}'
-
-Setting ``r`` or ``w`` to be equal to ``n`` (the number of replicas)
-means you will only receive a response once all nodes with relevant
-shards have responded or timed out, and as such this approach does not
-guarantee `ACIDic consistency
-<https://en.wikipedia.org/wiki/ACID#Consistency>`__. Setting ``r`` or
-``w`` to 1 means you will receive a response after only one relevant
-node has responded.
-
-.. _cluster/sharding/examine:
-
-Examining database shards
--------------------------
-
-There are a few API endpoints that help you understand how a database
-is sharded. Let's start by making a new database on a cluster, and putting
-a couple of documents into it:
-
-.. code-block:: bash
-
-    $ curl -X PUT $COUCH_URL:5984/mydb
-    {"ok":true}
-    $ curl -X PUT $COUCH_URL:5984/mydb/joan -d '{"loves":"cats"}'
-    {"ok":true,"id":"joan","rev":"1-cc240d66a894a7ee7ad3160e69f9051f"}
-    $ curl -X PUT $COUCH_URL:5984/mydb/robert -d '{"loves":"dogs"}'
-    {"ok":true,"id":"robert","rev":"1-4032b428c7574a85bc04f1f271be446e"}
-
-First, the top level :ref:`api/db` endpoint will tell you what the sharding parameters
-are for your database:
-
-.. code-block:: bash
-
-    $ curl -s $COUCH_URL:5984/mydb | jq .
-    {
-      "db_name": "mydb",
-    ...
-      "cluster": {
-        "q": 8,
-        "n": 3,
-        "w": 2,
-        "r": 2
-      },
-    ...
-    }
-
-So we know this database was created with 8 shards (``q=8``), and each
-shard has 3 replicas (``n=3``) for a total of 24 shard replicas across
-the nodes in the cluster.
-
-Now, let's see how those shard replicas are placed on the cluster with
-the :ref:`api/db/shards` endpoint:
-
-.. code-block:: bash
-
-    $ curl -s $COUCH_URL:5984/mydb/_shards | jq .
-    {
-      "shards": {
-        "00000000-1fffffff": [
-          "node1@127.0.0.1",
-          "node2@127.0.0.1",
-          "node4@127.0.0.1"
-        ],
-        "20000000-3fffffff": [
-          "node1@127.0.0.1",
-          "node2@127.0.0.1",
-          "node3@127.0.0.1"
-        ],
-        "40000000-5fffffff": [
-          "node2@127.0.0.1",
-          "node3@127.0.0.1",
-          "node4@127.0.0.1"
-        ],
-        "60000000-7fffffff": [
-          "node1@127.0.0.1",
-          "node3@127.0.0.1",
-          "node4@127.0.0.1"
-        ],
-        "80000000-9fffffff": [
-          "node1@127.0.0.1",
-          "node2@127.0.0.1",
-          "node4@127.0.0.1"
-        ],
-        "a0000000-bfffffff": [
-          "node1@127.0.0.1",
-          "node2@127.0.0.1",
-          "node3@127.0.0.1"
-        ],
-        "c0000000-dfffffff": [
-          "node2@127.0.0.1",
-          "node3@127.0.0.1",
-          "node4@127.0.0.1"
-        ],
-        "e0000000-ffffffff": [
-          "node1@127.0.0.1",
-          "node3@127.0.0.1",
-          "node4@127.0.0.1"
-        ]
-      }
-    }
-
-Now we see that there are actually 4 nodes in this cluster, and CouchDB
-has spread those 24 shard replicas evenly across all 4 nodes.
-
-We can also see exactly which shard contains a given document with
-the :ref:`api/db/shards/doc` endpoint:
-
-.. code-block:: bash
-
-    $ curl -s $COUCH_URL:5984/mydb/_shards/joan | jq .
-    {
-      "range": "e0000000-ffffffff",
-      "nodes": [
-        "node1@127.0.0.1",
-        "node3@127.0.0.1",
-        "node4@127.0.0.1"
-      ]
-    }
-    $ curl -s $COUCH_URL:5984/mydb/_shards/robert | jq .
-    {
-      "range": "60000000-7fffffff",
-      "nodes": [
-        "node1@127.0.0.1",
-        "node3@127.0.0.1",
-        "node4@127.0.0.1"
-      ]
-    }
-
-CouchDB shows us the specific shard into which each of the two sample
-documents is mapped.
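Under the hood, the document ID is hashed into the 32-bit shard key space. Here
is a Python sketch of the idea, assuming the default CRC32-based hashing (the
helper names are illustrative, not CouchDB internals):

```python
import zlib

def shard_ranges(q):
    # Split the 32-bit key space [00000000, ffffffff] into q equal ranges.
    step = 2 ** 32 // q
    return [(i * step, (i + 1) * step - 1) for i in range(q)]

def doc_to_range(doc_id, q=8):
    # Hash the document ID (CRC32 by default) and find the containing range.
    h = zlib.crc32(doc_id.encode("utf-8"))
    for lo, hi in shard_ranges(q):
        if lo <= h <= hi:
            return f"{lo:08x}-{hi:08x}"

print(doc_to_range("joan"))  # one of the eight q=8 ranges
```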
-
-.. _cluster/sharding/move:
-
-Moving a shard
---------------
-
-When moving shards or performing other shard manipulations on the cluster, it
-is advisable to stop all resharding jobs on the cluster. See
-:ref:`cluster/sharding/stop_resharding` for more details.
-
-This section describes how to manually place and replace shards. These
-activities are critical steps when you determine your cluster is too big
-or too small, and want to resize it successfully, or you have noticed
-from server metrics that database/shard layout is non-optimal and you
-have some "hot spots" that need resolving.
-
-Consider a three-node cluster with q=8 and n=3. Each database has 24
-shards, distributed across the three nodes. If you :ref:`add a fourth
-node <cluster/nodes/add>` to the cluster, CouchDB will not redistribute
-existing database shards to it. This leads to unbalanced load, as the
-new node will only host shards for databases created after it joined the
-cluster. To balance the distribution of shards from existing databases,
-they must be moved manually.
-
-Moving shards between nodes in a cluster involves the following steps:
-
-0. :ref:`Ensure the target node has joined the cluster <cluster/nodes/add>`.
-1. Copy the shard(s) and any secondary
-   :ref:`index shard(s) onto the target node <cluster/sharding/copying>`.
-2. :ref:`Set the target node to maintenance mode <cluster/sharding/mm>`.
-3. Update cluster metadata
-   :ref:`to reflect the new target shard(s) <cluster/sharding/add-shard>`.
-4. Monitor internal replication
-   :ref:`to ensure up-to-date shard(s) <cluster/sharding/verify>`.
-5. :ref:`Clear the target node's maintenance mode <cluster/sharding/mm-2>`.
-6. Update cluster metadata again
-   :ref:`to remove the source shard(s)<cluster/sharding/remove-shard>`
-7. Remove the shard file(s) and secondary index file(s)
-   :ref:`from the source node <cluster/sharding/remove-shard-files>`.
-
-.. _cluster/sharding/copying:
-
-Copying shard files
-~~~~~~~~~~~~~~~~~~~
-
-.. note::
-    Technically, copying database and secondary index
-    shards is optional. If you proceed to the next step without
-    performing this data copy, CouchDB will use internal replication
-    to populate the newly added shard replicas. However, copying files
-    is faster than internal replication, especially on a busy cluster,
-    which is why we recommend performing this manual data copy first.
-
-Shard files live in the ``data/shards`` directory of your CouchDB
-install. Within those subdirectories are the shard files themselves. For
-instance, for a ``q=8`` database called ``abc``, here are its database shard
-files:
-
-::
-
-  data/shards/00000000-1fffffff/abc.1529362187.couch
-  data/shards/20000000-3fffffff/abc.1529362187.couch
-  data/shards/40000000-5fffffff/abc.1529362187.couch
-  data/shards/60000000-7fffffff/abc.1529362187.couch
-  data/shards/80000000-9fffffff/abc.1529362187.couch
-  data/shards/a0000000-bfffffff/abc.1529362187.couch
-  data/shards/c0000000-dfffffff/abc.1529362187.couch
-  data/shards/e0000000-ffffffff/abc.1529362187.couch
-
-Secondary indexes (including JavaScript views, Erlang views and Mango
-indexes) are also sharded, and their shards should be moved to save the
-new node the effort of rebuilding the view. View shards live in
-``data/.shards``. For example:
-
-::
-
-  data/.shards
-  data/.shards/e0000000-ffffffff/_replicator.1518451591_design
-  data/.shards/e0000000-ffffffff/_replicator.1518451591_design/mrview
-  data/.shards/e0000000-ffffffff/_replicator.1518451591_design/mrview/3e823c2a4383ac0c18d4e574135a5b08.view
-  data/.shards/c0000000-dfffffff
-  data/.shards/c0000000-dfffffff/_replicator.1518451591_design
-  data/.shards/c0000000-dfffffff/_replicator.1518451591_design/mrview
-  data/.shards/c0000000-dfffffff/_replicator.1518451591_design/mrview/3e823c2a4383ac0c18d4e574135a5b08.view
-  ...
-
-Since they are just files, you can use ``cp``, ``rsync``, ``scp``, or any
-other file-copying command to copy them from one node to another.
-For example:
-
-.. code-block:: bash
-
-    # on one machine
-    $ mkdir -p data/.shards/{range}
-    $ mkdir -p data/shards/{range}
-    # on the other
-    $ scp {couch-dir}/data/.shards/{range}/{database}.{datecode}* \
-      {node}:{couch-dir}/data/.shards/{range}/
-    $ scp {couch-dir}/data/shards/{range}/{database}.{datecode}.couch \
-      {node}:{couch-dir}/data/shards/{range}/
-
-.. note::
-    Remember to move view files before database files! If a view index
-    is ahead of its database, the database will rebuild it from
-    scratch.
-
-.. _cluster/sharding/mm:
-
-Set the target node to ``true`` maintenance mode
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Before telling CouchDB about these new shards on the node, the node
-must be put into maintenance mode. Maintenance mode instructs CouchDB to
-return a ``404 Not Found`` response on the ``/_up`` endpoint, and
-ensures it does not participate in normal interactive clustered requests
-for its shards. A properly configured load balancer that uses ``GET
-/_up`` to check the health of nodes will detect this 404 and remove the
-node from circulation, preventing requests from being sent to that node.
-For example, to configure HAProxy to use the ``/_up`` endpoint, use:
-
-::
-
-  http-check disable-on-404
-  option httpchk GET /_up
-
-If you do not set maintenance mode, or the load balancer ignores this
-maintenance mode status, after the next step is performed the cluster
-may return incorrect responses when consulting the node in question. You
-don't want this! In the next steps, we will ensure that this shard is
-up-to-date before allowing it to participate in end-user requests.
-
-To enable maintenance mode:
-
-.. code-block:: bash
-
-    $ curl -X PUT -H "Content-type: application/json" \
-        $COUCH_URL:5984/_node/{node-name}/_config/couchdb/maintenance_mode \
-        -d "\"true\""
-
-Then, verify that the node is in maintenance mode by performing a ``GET
-/_up`` on that node's individual endpoint:
-
-.. code-block:: bash
-
-    $ curl -v $COUCH_URL:5984/_up
-    …
-    < HTTP/1.1 404 Object Not Found
-    …
-    {"status":"maintenance_mode"}
-
-Finally, check that your load balancer has removed the node from the
-pool of available backend nodes.
-
-.. _cluster/sharding/add-shard:
-
-Updating cluster metadata to reflect the new target shard(s)
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Now we need to tell CouchDB that the target node (which must already be
-:ref:`joined to the cluster <cluster/nodes/add>`) should be hosting
-shard replicas for a given database.
-
-To update the cluster metadata, use the special ``/_dbs`` database,
-which is an internal CouchDB database that maps databases to shards and
-nodes. This database is automatically replicated between nodes. It is accessible
-only through the special ``/_node/_local/_dbs`` endpoint.
-
-First, retrieve the database's current metadata:
-
-.. code-block:: bash
-
-    $ curl http://localhost:5984/_node/_local/_dbs/{name}
-    {
-      "_id": "{name}",
-      "_rev": "1-e13fb7e79af3b3107ed62925058bfa3a",
-      "shard_suffix": [46, 49, 53, 51, 48, 50, 51, 50, 53, 50, 54],
-      "changelog": [
-        ["add", "00000000-1fffffff", "node1@xxx.xxx.xxx.xxx"],
-        ["add", "00000000-1fffffff", "node2@xxx.xxx.xxx.xxx"],
-        ["add", "00000000-1fffffff", "node3@xxx.xxx.xxx.xxx"],
-        …
-      ],
-      "by_node": {
-        "node1@xxx.xxx.xxx.xxx": [
-          "00000000-1fffffff",
-          …
-        ],
-        …
-      },
-      "by_range": {
-        "00000000-1fffffff": [
-          "node1@xxx.xxx.xxx.xxx",
-          "node2@xxx.xxx.xxx.xxx",
-          "node3@xxx.xxx.xxx.xxx"
-        ],
-        …
-      }
-    }
-
-Here is a brief anatomy of that document:
-
--  ``_id``: The name of the database.
--  ``_rev``: The current revision of the metadata.
--  ``shard_suffix``: A timestamp of the database's creation, marked as
-   seconds after the Unix epoch mapped to the codepoints for ASCII
-   numerals.
--  ``changelog``: History of the database's shards.
--  ``by_node``: List of shards on each node.
--  ``by_range``: List of nodes hosting each shard.
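For example, the ``shard_suffix`` shown above decodes back into the suffix used
in shard file names:

```python
# Decode the shard_suffix codepoints into the ".{timestamp}" file suffix.
suffix = [46, 49, 53, 51, 48, 50, 51, 50, 53, 50, 54]
decoded = "".join(chr(c) for c in suffix)
print(decoded)  # ".1530232526", i.e. created at Unix epoch second 1530232526
```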
-
-To reflect the shard move in the metadata, there are three steps:
-
-1. Add appropriate changelog entries.
-2. Update the ``by_node`` entries.
-3. Update the ``by_range`` entries.
-
-.. warning::
-    Be very careful! Mistakes during this process can
-    irreparably corrupt the cluster!
-
-As of this writing, this process must be done manually.
-
-To add a shard to a node, add entries like this to the database
-metadata's ``changelog`` attribute:
-
-.. code-block:: javascript
-
-    ["add", "{range}", "{node-name}"]
-
-The ``{range}`` is the specific shard range for the shard. The ``{node-name}``
-should match the name and address of the node as displayed in ``GET
-/_membership`` on the cluster.
-
-.. note::
-    When removing a shard from a node, specify ``remove`` instead of ``add``.
-
-Once you have figured out the new changelog entries, you will need to
-update the ``by_node`` and ``by_range`` to reflect who is storing what
-shards. The data in the changelog entries and these attributes must
-match. If they do not, the database may become corrupted.
-
-Continuing our example, here is an updated version of the metadata above
-that adds shards to an additional node called ``node4``:
-
-.. code-block:: javascript
-
-    {
-      "_id": "{name}",
-      "_rev": "1-e13fb7e79af3b3107ed62925058bfa3a",
-      "shard_suffix": [46, 49, 53, 51, 48, 50, 51, 50, 53, 50, 54],
-      "changelog": [
-        ["add", "00000000-1fffffff", "node1@xxx.xxx.xxx.xxx"],
-        ["add", "00000000-1fffffff", "node2@xxx.xxx.xxx.xxx"],
-        ["add", "00000000-1fffffff", "node3@xxx.xxx.xxx.xxx"],
-        ...
-        ["add", "00000000-1fffffff", "node4@xxx.xxx.xxx.xxx"]
-      ],
-      "by_node": {
-        "node1@xxx.xxx.xxx.xxx": [
-          "00000000-1fffffff",
-          ...
-        ],
-        ...
-        "node4@xxx.xxx.xxx.xxx": [
-          "00000000-1fffffff"
-        ]
-      },
-      "by_range": {
-        "00000000-1fffffff": [
-          "node1@xxx.xxx.xxx.xxx",
-          "node2@xxx.xxx.xxx.xxx",
-          "node3@xxx.xxx.xxx.xxx",
-          "node4@xxx.xxx.xxx.xxx"
-        ],
-        ...
-      }
-    }
-
-Now you can ``PUT`` this new metadata:
-
-.. code-block:: bash
-
-    $ curl -X PUT http://localhost:5984/_node/_local/_dbs/{name} -d '{...}'
-
-.. _cluster/sharding/sync:
-
-Forcing synchronization of the shard(s)
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-.. versionadded:: 2.4.0
-
-Whether you pre-copied shards to your new node or not, you can force
-CouchDB to synchronize all replicas of all shards in a database with the
-:ref:`api/db/sync_shards` endpoint:
-
-.. code-block:: bash
-
-    $ curl -X POST $COUCH_URL:5984/{db}/_sync_shards
-    {"ok":true}
-
-This starts the synchronization process. Note that this will put
-additional load onto your cluster, which may affect performance.
-
-It is also possible to force synchronization on a per-shard basis by
-writing to a document that is stored within that shard.
-
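To pick a document that lands in a particular shard, you need to know which
range the document id hashes into. The sketch below assumes a CRC32-based hash
of the id, which is how mem3 places documents; the exact hashing is internal to
CouchDB, so treat this as an illustration rather than a guarantee:

```python
import zlib

def shard_range_for(doc_id, ranges):
    """Return the shard range (e.g. "00000000-7fffffff") whose interval
    contains the CRC32 hash of doc_id. Ranges are hex "lo-hi" strings."""
    h = zlib.crc32(doc_id.encode("utf-8")) & 0xFFFFFFFF
    for r in ranges:
        lo, hi = (int(x, 16) for x in r.split("-"))
        if lo <= h <= hi:
            return r
    raise ValueError("no range covers hash %08x" % h)

ranges = ["00000000-7fffffff", "80000000-ffffffff"]
# Any id maps to exactly one of the ranges when they cover the full space.
assert shard_range_for("my-doc-id", ranges) in ranges
```
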
-.. note::
-
-    Admins may want to bump their ``[mem3] sync_concurrency`` value to a
-    larger figure for the duration of the shards sync.
-
-.. _cluster/sharding/verify:
-
-Monitor internal replication to ensure up-to-date shard(s)
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-After you complete the previous step, CouchDB will have started
-synchronizing the shards. You can observe this happening by monitoring
-the ``/_node/{node-name}/_system`` endpoint, which includes the
-``internal_replication_jobs`` metric.
-
-Once this metric has returned to the baseline from before you started
-the shard sync, or is ``0``, the shard replica is ready to serve data
-and we can bring the node out of maintenance mode.
-
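Waiting for the metric to settle is easy to script. A small sketch, generic over
how the metric is read (in practice the callable would ``GET``
``/_node/{node-name}/_system`` and extract ``internal_replication_jobs``; the
stubbed sample values below are illustrative only):

```python
import time

def wait_for_baseline(read_metric, baseline, interval=1.0, max_polls=60):
    """Poll read_metric() until it falls back to (or below) baseline,
    returning the final value, or raise after max_polls attempts."""
    for _ in range(max_polls):
        value = read_metric()
        if value <= baseline:
            return value
        time.sleep(interval)
    raise TimeoutError("metric did not return to baseline")

# Simulated metric source standing in for the _system endpoint:
samples = iter([12, 7, 3, 0])
assert wait_for_baseline(lambda: next(samples), baseline=0, interval=0) == 0
```
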
-.. _cluster/sharding/mm-2:
-
-Clear the target node's maintenance mode
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-You can now let the node start servicing data requests by sending a
-``PUT`` of ``"false"`` to the maintenance mode configuration endpoint, just
-as in step 2.
-
-Verify that the node is not in maintenance mode by performing a ``GET
-/_up`` on that node's individual endpoint.
-
-Finally, check that your load balancer has returned the node to the pool
-of available backend nodes.
-
-.. _cluster/sharding/remove-shard:
-
-Update cluster metadata again to remove the source shard
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Now, remove the source shard from the shard map the same way that you
-added the new target shard to the shard map in step 2. Be sure to add
-the ``["remove", {range}, {source-shard}]`` entry to the end of the
-changelog as well as modifying both the ``by_node`` and ``by_range`` sections of
-the database metadata document.
-
-.. _cluster/sharding/remove-shard-files:
-
-Remove the shard and secondary index files from the source node
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Finally, you can remove the source shard replica by deleting its file from the
-command line on the source host, along with any view shard replicas:
-
-.. code-block:: bash
-
-    $ rm {couch-dir}/data/shards/{range}/{db}.{datecode}.couch
-    $ rm -r {couch-dir}/data/.shards/{range}/{db}.{datecode}*
-
-Congratulations! You have moved a database shard replica. By adding and removing
-database shard replicas in this way, you can change the cluster's shard layout,
-also known as a shard map.
-
-Specifying database placement
------------------------------
-
-You can configure CouchDB to put shard replicas on certain nodes at
-database creation time using placement rules.
-
-.. warning::
-
-    Use of the ``placement`` option will **override** the ``n`` option,
-    both in the ``.ini`` file and when specified in a URL.
-
-First, each node must be labeled with a zone attribute. This defines which zone
-each node is in. You do this by editing the node’s document in the special
-``/_nodes`` database, which is accessed through the special node-local API
-endpoint at ``/_node/_local/_nodes/{node-name}``. Add a key-value pair of the
-form:
-
-::
-
-    "zone": "{zone-name}"
-
-Do this for all of the nodes in your cluster. For example:
-
-.. code-block:: bash
-
-    $ curl -X PUT http://localhost/_node/_local/_nodes/{node-name} \
-        -d '{
-            "_id": "{node-name}",
-            "_rev": "{rev}",
-            "zone": "{zone-name}"
-            }'
-
-In the local config file (``local.ini``) of each node, define a
-consistent cluster-wide setting like:
-
-::
-
-    [cluster]
-    placement = {zone-name-1}:2,{zone-name-2}:1
-
-In this example, CouchDB will ensure that two replicas for a shard will
-be hosted on nodes with the zone attribute set to ``{zone-name-1}`` and
-one replica will be hosted on a node with the zone attribute set to
-``{zone-name-2}``.
-
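The placement string has a simple shape: comma-separated ``zone:count`` pairs,
where the counts sum to the effective ``n``. A sketch of parsing it, using
hypothetical zone names:

```python
def parse_placement(spec):
    """Parse a [cluster] placement string such as "metro-a:2,metro-b:1"
    into a {zone: replica_count} mapping."""
    result = {}
    for part in spec.split(","):
        zone, _, count = part.strip().rpartition(":")
        result[zone] = int(count)
    return result

rules = parse_placement("metro-a:2,metro-b:1")
assert rules == {"metro-a": 2, "metro-b": 1}
assert sum(rules.values()) == 3  # effective n for databases using this rule
```
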
-This approach is flexible, since you can also specify zones on a
-per-database basis by specifying the placement setting as a query parameter
-when the database is created, using the same syntax as the ini file:
-
-.. code-block:: bash
-
-    curl -X PUT $COUCH_URL:5984/{db}?zone={zone}
-
-The ``placement`` argument may also be specified. Note that this *will*
-override the logic that determines the number of created replicas!
-
-Note that you can also use this system to ensure certain nodes in the
-cluster do not host any replicas for newly created databases, by giving
-them a zone attribute that does not appear in the ``[cluster]``
-placement string.
-
-.. _cluster/sharding/splitting_shards:
-
-Splitting Shards
-----------------
-
-The :ref:`api/server/reshard` is an HTTP API for shard manipulation. Currently
-it only supports shard splitting. To perform shard merging, refer to the manual
-process outlined in the :ref:`cluster/sharding/merging_shards` section.
-
-The main way to interact with :ref:`api/server/reshard` is to create resharding
-jobs, monitor those jobs, wait until they complete, remove them, post new jobs,
-and so on. What follows are a few steps one might take to use this API to split
-shards.
-
-To start, it's a good idea to call ``GET /_reshard`` to see a summary of
-resharding on the cluster.
-
-.. code-block:: bash
-
-   $ curl -s $COUCH_URL:5984/_reshard | jq .
-   {
-     "state": "running",
-     "state_reason": null,
-     "completed": 3,
-     "failed": 0,
-     "running": 0,
-     "stopped": 0,
-     "total": 3
-   }
-
-Two important things to pay attention to are the total number of jobs and the state.
-
-The ``state`` field indicates the state of resharding on the cluster. Normally
-it would be ``running``, however, another user could have disabled resharding
-temporarily. Then, the state would be ``stopped`` and hopefully, there would be
-a reason or a comment in the value of the ``state_reason`` field. See
-:ref:`cluster/sharding/stop_resharding` for more details.
-
-The ``total`` number of jobs is important to keep an eye on because there is a
-maximum number of resharding jobs per node, and creating new jobs after the
-limit has been reached will result in an error. Before starting new jobs it's a
-good idea to remove already completed jobs. See the :ref:`reshard configuration
-section <config/reshard>` for the default value of the ``max_jobs`` parameter
-and how to adjust it if needed.
-
-For example, to remove all the completed jobs run:
-
-.. code-block:: bash
-
-    $ for jobid in $(curl -s $COUCH_URL:5984/_reshard/jobs | jq -r '.jobs[] | select(.job_state=="completed") | .id'); do \
-          curl -s -XDELETE $COUCH_URL:5984/_reshard/jobs/$jobid; \
-      done
-
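If you prefer scripting the cleanup outside the shell, the same selection the
``jq`` filter performs can be expressed directly over the ``GET /_reshard/jobs``
response (the job ids below are shortened examples):

```python
def completed_job_ids(jobs_response):
    """Select the ids of completed jobs from a GET /_reshard/jobs
    response body, mirroring: .jobs[] | select(.job_state=="completed") | .id"""
    return [j["id"] for j in jobs_response["jobs"]
            if j["job_state"] == "completed"]

response = {
    "jobs": [
        {"id": "001-aaa", "job_state": "completed"},
        {"id": "001-bbb", "job_state": "running"},
    ]
}
assert completed_job_ids(response) == ["001-aaa"]
```

Each returned id would then get a ``DELETE /_reshard/jobs/{jobid}`` request.
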
-Then it's a good idea to see what the db shard map looks like.
-
-.. code-block:: bash
-
-    $ curl -s $COUCH_URL:5984/db1/_shards | jq '.'
-    {
-      "shards": {
-        "00000000-7fffffff": [
-          "node1@127.0.0.1",
-          "node2@127.0.0.1",
-          "node3@127.0.0.1"
-        ],
-        "80000000-ffffffff": [
-          "node1@127.0.0.1",
-          "node2@127.0.0.1",
-          "node3@127.0.0.1"
-        ]
-      }
-    }
-
-In this example we'll split all the copies of the ``00000000-7fffffff`` range.
-The API allows a combination of parameters such as: splitting all
-the ranges on all the nodes, all the ranges on just one node, or one particular
-range on one particular node. These are specified via the ``db``,
-``node`` and ``range`` job parameters.
-
-To split all the copies of ``00000000-7fffffff`` we issue a request like this:
-
-.. code-block:: bash
-
-    $ curl -s -H "Content-type: application/json" -XPOST $COUCH_URL:5984/_reshard/jobs \
-      -d '{"type": "split", "db":"db1", "range":"00000000-7fffffff"}' | jq '.'
-    [
-      {
-        "ok": true,
-        "id": "001-ef512cfb502a1c6079fe17e9dfd5d6a2befcc694a146de468b1ba5339ba1d134",
-        "node": "node1@127.0.0.1",
-        "shard": "shards/00000000-7fffffff/db1.1554242778"
-      },
-      {
-        "ok": true,
-        "id": "001-cec63704a7b33c6da8263211db9a5c74a1cb585d1b1a24eb946483e2075739ca",
-        "node": "node2@127.0.0.1",
-        "shard": "shards/00000000-7fffffff/db1.1554242778"
-      },
-      {
-        "ok": true,
-        "id": "001-fc72090c006d9b059d4acd99e3be9bb73e986d60ca3edede3cb74cc01ccd1456",
-        "node": "node3@127.0.0.1",
-        "shard": "shards/00000000-7fffffff/db1.1554242778"
-      }
-    ]
-
-The request returned three jobs, one job for each of the three copies.
-
-To check progress of these jobs use ``GET /_reshard/jobs`` or ``GET
-/_reshard/jobs/{jobid}``.
-
-Eventually, these jobs should complete and the shard map should look like this:
-
-.. code-block:: bash
-
-    $ curl -s $COUCH_URL:5984/db1/_shards | jq '.'
-    {
-      "shards": {
-        "00000000-3fffffff": [
-          "node1@127.0.0.1",
-          "node2@127.0.0.1",
-          "node3@127.0.0.1"
-        ],
-        "40000000-7fffffff": [
-          "node1@127.0.0.1",
-          "node2@127.0.0.1",
-          "node3@127.0.0.1"
-        ],
-        "80000000-ffffffff": [
-          "node1@127.0.0.1",
-          "node2@127.0.0.1",
-          "node3@127.0.0.1"
-        ]
-      }
-    }
-
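The resulting map shows each split halving the range at its midpoint. A sketch
of the arithmetic (the actual splitting is done server-side by the
``/_reshard`` API; this only reproduces the range bookkeeping):

```python
def split_range(shard_range, parts=2):
    """Split a hex shard range such as "00000000-7fffffff" into
    equal contiguous sub-ranges."""
    lo, hi = (int(x, 16) for x in shard_range.split("-"))
    size = (hi - lo + 1) // parts
    return [
        "%08x-%08x" % (lo + i * size, lo + (i + 1) * size - 1)
        for i in range(parts)
    ]

assert split_range("00000000-7fffffff") == [
    "00000000-3fffffff",
    "40000000-7fffffff",
]
```
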
-.. _cluster/sharding/stop_resharding:
-
-Stopping Resharding Jobs
-------------------------
-
-Resharding at the cluster level can be stopped and later restarted. This can
-be helpful to allow external tools which manipulate the shard map to avoid
-interfering with resharding jobs. To stop all resharding jobs on a cluster,
-issue a ``PUT`` to the ``/_reshard/state`` endpoint with the ``"state": "stopped"``
-key and value. You can also specify an optional note or reason for stopping.
-
-For example:
-
-.. code-block:: bash
-
-    $ curl -s -H "Content-type: application/json" \
-      -XPUT $COUCH_URL:5984/_reshard/state \
-      -d '{"state": "stopped", "reason":"Moving some shards"}'
-    {"ok": true}
-
-This state will then be reflected in the global summary:
-
-.. code-block:: bash
-
-   $ curl -s $COUCH_URL:5984/_reshard | jq .
-   {
-     "state": "stopped",
-     "state_reason": "Moving some shards",
-     "completed": 74,
-     "failed": 0,
-     "running": 0,
-     "stopped": 0,
-     "total": 74
-   }
-
-To restart, issue a ``PUT`` request like above with ``running`` as the state.
-That should resume all the shard splitting jobs since their last checkpoint.
-
-See the API reference for more details: :ref:`api/server/reshard`.
-
-.. _cluster/sharding/merging_shards:
-
-Merging Shards
---------------
-
-The ``q`` value for a database can be set when the database is created, or it
-can be increased later by splitting some of the shards (see
-:ref:`cluster/sharding/splitting_shards`). In order to decrease ``q`` and merge
-some shards together, the database must be regenerated. Here are the steps:
-
-1. If there are running shard splitting jobs on the cluster, stop them via the
-   HTTP API (see :ref:`cluster/sharding/stop_resharding`).
-2. Create a temporary database with the desired shard settings, by
-   specifying the q value as a query parameter during the PUT
-   operation.
-3. Stop clients accessing the database.
-4. Replicate the primary database to the temporary one. Multiple
-   replications may be required if the primary database is under
-   active use.
-5. Delete the primary database. **Make sure nobody is using it!**
-6. Recreate the primary database with the desired shard settings.
-7. Clients can now access the database again.
-8. Replicate the temporary database back to the primary.
-9. Delete the temporary database.
-
-Once all steps have completed, the database can be used again. The
-cluster will create and distribute its shards according to placement
-rules automatically.
-
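The HTTP side of the steps above can be laid out as an ordered plan. This is
only an outline under assumptions: ``db1-tmp`` is a hypothetical name for the
temporary database, the ``/_replicate`` request bodies and the
stop-resharding/stop-clients steps are omitted:

```python
def merge_plan(db, new_q):
    """Return the merge procedure as (method, path) pairs, in order."""
    tmp = "%s-tmp" % db
    return [
        ("PUT", "/%s?q=%d" % (tmp, new_q)),   # temporary db with the target q
        ("POST", "/_replicate"),              # primary -> temporary (repeat if busy)
        ("DELETE", "/%s" % db),               # drop the primary (nobody using it!)
        ("PUT", "/%s?q=%d" % (db, new_q)),    # recreate primary with the target q
        ("POST", "/_replicate"),              # temporary -> primary
        ("DELETE", "/%s" % tmp),              # drop the temporary
    ]

plan = merge_plan("db1", 1)
assert plan[0] == ("PUT", "/db1-tmp?q=1")
assert plan[-1] == ("DELETE", "/db1-tmp")
```
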
-Downtime can be avoided in production if the client application(s) can
-be instructed to use the new database instead of the old one, and a
-cut-over is performed during a very brief outage window.
diff --git a/src/cluster/theory.rst b/src/cluster/theory.rst
deleted file mode 100644
index fd0ab36..0000000
--- a/src/cluster/theory.rst
+++ /dev/null
@@ -1,75 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _cluster/theory:
-
-======
-Theory
-======
-
-Before we move on, we need some theory.
-
-As you can see in ``etc/default.ini``, there is a section called ``[cluster]``:
-
-.. code-block:: text
-
-    [cluster]
-    q=2
-    n=3
-
-* ``q`` - The number of shards.
-* ``n`` - The number of copies of every document (replicas).
-
-When creating a database you can send your own values with the request and
-thereby override the defaults in ``default.ini``.
-
-The number of copies of a document with the same revision that have to be read
-before CouchDB returns with a ``200`` is equal to half of the total copies of
-the document plus one. The same applies to the number of nodes that need to
-save a document before a write is returned with ``201``. If there are fewer
-nodes than that number, then ``202`` is returned. Both read and write numbers
-can be specified with a request as the ``r`` and ``w`` parameters respectively.
-
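The default quorum arithmetic from the paragraph above can be written out
directly:

```python
def quorum(n):
    """Default r and w: half the total number of copies, plus one."""
    return n // 2 + 1

assert quorum(3) == 2  # with n=3, two copies must respond for 200/201
assert quorum(2) == 2
assert quorum(1) == 1
```
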
-We will focus on the shards and replicas for now.
-
-A shard is a part of a database. It can be replicated multiple times. The more
-copies of a shard, the more you can scale out. If you have 4 replicas, those
-4 copies of the shard will live on 4 different nodes, since no node can hold
-more than one replica of any given shard. The default for
-CouchDB since 3.0.0 is ``q=2`` and ``n=3``, meaning each database (and secondary
-index) is split into 2 shards, with 3 replicas per shard, for a total of 6
-shard replica files. For a CouchDB cluster only hosting a single database with
-these default values, a maximum of 6 nodes can be used to scale horizontally.
-
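The shard-file and node arithmetic from the paragraph above:

```python
q, n = 2, 3  # CouchDB defaults since 3.0.0

shard_files = q * n  # shard replica files per database (and per secondary index)
max_nodes = q * n    # most nodes a single such database can be spread across

assert shard_files == 6
assert max_nodes == 6
```
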
-Replicas add failure resistance, as some nodes can be offline without everything
-crashing down.
-
-* ``n=1`` - All nodes must be up.
-* ``n=2`` - Any 1 node can be down.
-* ``n=3`` - Any 2 nodes can be down.
-* etc
-
-Computers go down and sysadmins pull out network cables in a furious rage from
-time to time, so using ``n<2`` is asking for downtime. Having too high a value
-of ``n`` adds servers and complexity without any real benefit. The sweet spot is
-at ``n=3``.
-
-Say that we have a database with 3 replicas and 4 shards. That would give us a
-maximum of 12 nodes: 4*3=12.
-
-We can lose any 2 nodes and still read and write all documents.
-
-What happens if we lose more nodes? It depends on how lucky we are. As long as
-there is at least one copy of every shard online, we can read and write all
-documents.
-
-So, if we are very lucky then we can lose 8 nodes at maximum.
diff --git a/src/cluster/tls_erlang_distribution.rst b/src/cluster/tls_erlang_distribution.rst
deleted file mode 100644
index 07e48ea..0000000
--- a/src/cluster/tls_erlang_distribution.rst
+++ /dev/null
@@ -1,122 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _cluster/tls_erlang_distribution:
-
-=======================
-TLS Erlang Distribution
-=======================
-The main purpose of this feature is to allow Erlang distribution between
-nodes to run over TLS, with the ability to connect to some nodes over plain
-TCP as well. TLS distribution enhances data security during data migration
-between nodes.
-
-This section describes how to enable TLS distribution for additional
-verification and security.
-
-Reference: `Using TLS for Erlang Distribution`_
-
-.. _Using TLS for Erlang Distribution: https://erlang.org/doc/apps/ssl/ssl_distribution.html
-
-Generate Certificate
-====================
-For TLS to work properly, at least one public key and one certificate must be
-specified. In the following example (``couch_ssl_dist.conf``), the PEM file
-contains the ``certificate`` and its ``private key``.
-
-    .. code-block:: text
-
-        [{server,
-          [{certfile, "</path/to/erlserver.pem>"},
-           {secure_renegotiate, true}]},
-         {client,
-          [{secure_renegotiate, true}]}].
-
-The following command is an example of generating a certificate (PEM) file.
-
-    .. code-block:: bash
-
-        $ openssl req -newkey rsa:2048 -new -nodes -x509 -days 3650 -keyout key.pem -out cert.pem
-        $ cat key.pem cert.pem > erlserver.pem && rm key.pem cert.pem
-
-    .. note::
-       This is **not** an endorsement of a specific expiration limit,
-       key size or algorithm.
-
-Config Settings
-===============
-To enable TLS distribution, make sure to set custom parameters in ``vm.args``.
-
-    .. code-block:: text
-
-        # Don't forget to override the paths to point to your cert and conf file!
-
-        -proto_dist couch
-        -couch_dist no_tls \"clouseau@127.0.0.1\"
-        -ssl_dist_optfile <path/to/couch_ssl_dist.conf>
-
-    .. note::
-       * The default value of ``no_tls`` is ``false``. If the user does not
-         set any ``no_tls`` flag, all nodes will use ``TCP``.
-       * To ensure "search" works, make sure to set the ``no_tls`` option for
-         the ``clouseau`` node. By default, this will be ``"clouseau@127.0.0.1"``.
-
-The ``no_tls`` flag can have these values:
-
-#. Use ``TLS`` only, set to ``false`` (default value), such as:
-
-    .. code-block:: text
-
-        -couch_dist no_tls false
-
-#. Use ``TCP`` only, set to ``true``, such as:
-
-    .. code-block:: text
-
-        -couch_dist no_tls true
-
-#. Specify some nodes to use ``TCP``, others to use ``TLS``, such as:
-
-    .. code-block:: text
-
-        # Specify node1 and node2 to use TCP, others use TLS
-
-        -couch_dist no_tls \"node1@127.0.0.1\"
-        -couch_dist no_tls \"node2@127.0.0.1\"
-
-    .. code-block:: text
-
-        # Any node ending with "@127.0.0.1" will use TCP, others use TLS
-
-        -couch_dist no_tls \"*@127.0.0.1\"
-
-    .. note::
-       **Asterisk(*)**: matches a sequence of zero or more occurrences of the regular
-       expression.
-
-       **Question mark(?)**: matches zero or one occurrences of the regular expression.
-
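Python's ``fnmatch`` implements a comparable wildcard scheme (``*`` matches any
sequence of characters, ``?`` a single character) and can be used to reason
about which node names a ``no_tls`` pattern would cover. This only illustrates
the pattern shape, not CouchDB's actual matcher:

```python
from fnmatch import fnmatch

pattern = "*@127.0.0.1"

# Nodes whose names end with "@127.0.0.1" match and would use TCP:
assert fnmatch("node1@127.0.0.1", pattern)
assert fnmatch("node2@127.0.0.1", pattern)

# Other nodes do not match and would use TLS:
assert not fnmatch("node3@192.168.0.1", pattern)
```
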
-Connect to Remsh
-================
-Start Erlang using a remote shell connected to the node.
-
-* If the node uses ``TCP``:
-
-    .. code-block:: bash
-
-        $ ./remsh
-
-* If the node uses ``TLS``:
-
-    .. code-block:: bash
-
-        $ ./remsh -t <path/to/couch_ssl_dist.conf>
diff --git a/src/cluster/troubleshooting.rst b/src/cluster/troubleshooting.rst
deleted file mode 100644
index 7989022..0000000
--- a/src/cluster/troubleshooting.rst
+++ /dev/null
@@ -1,115 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _cluster/troubleshooting:
-
-============================================
-Troubleshooting CouchDB 3 with WeatherReport
-============================================
-
-.. _cluster/troubleshooting/overview:
-
-Overview
-========
-
-WeatherReport is an OTP application and set of tools that diagnoses
-common problems which could affect a CouchDB version 3 node or cluster
-(version 4 or later is not supported). It is accessed via the
-``weatherreport`` command line escript.
-
-Here is a basic example of using ``weatherreport`` followed immediately
-by the command's output:
-
-.. code-block:: bash
-
-    $ weatherreport --etc /path/to/etc
-    [warning] Cluster member node3@127.0.0.1 is not connected to this node. Please check whether it is down.
-
-.. _cluster/troubleshooting/usage:
-
-Usage
-=====
-
-For most cases, you can just run the ``weatherreport`` command as
-shown above.  However, sometimes you might want to know some extra
-detail, or run only specific checks. For that, there are command-line
-options. Execute ``weatherreport --help`` to learn more about these
-options:
-
-.. code-block:: bash
-
-    $ weatherreport --help
-    Usage: weatherreport [-c <path>] [-d <level>] [-e] [-h] [-l] [check_name ...]
-
-      -c, --etc                 Path to the CouchDB configuration directory
-      -d, --level               Minimum message severity level (default: notice)
-      -l, --list                Describe available diagnostic tasks
-      -e, --expert              Perform more detailed diagnostics
-      -h, --help                Display help/usage
-      check_name                A specific check to run
-
-To get an idea of what checks will be run, use the ``--list`` option:
-
-.. code-block:: bash
-
-    $ weatherreport --list
-    Available diagnostic checks:
-
-      custodian            Shard safety/liveness checks
-      disk                 Data directory permissions and atime
-      internal_replication Check the number of pending internal replication jobs
-      ioq                  Check the total number of active IOQ requests
-      mem3_sync            Check there is a registered mem3_sync process
-      membership           Cluster membership validity
-      memory_use           Measure memory usage
-      message_queues       Check for processes with large mailboxes
-      node_stats           Check useful erlang statistics for diagnostics
-      nodes_connected      Cluster node liveness
-      process_calls        Check for large numbers of processes with the same current/initial call
-      process_memory       Check for processes with high memory usage
-      safe_to_rebuild      Check whether the node can safely be taken out of service
-      search               Check the local search node is responsive
-      tcp_queues           Measure the length of tcp queues in the kernel
-
-If you want all the gory details about what WeatherReport is doing,
-you can run the checks at a more verbose logging level with
-the ``--level`` option:
-
-.. code-block:: bash
-
-    $ weatherreport --etc /path/to/etc --level debug
-    [debug] Not connected to the local cluster node, trying to connect. alive:false connect_failed:undefined
-    [debug] Starting distributed Erlang.
-    [debug] Connected to local cluster node 'node1@127.0.0.1'.
-    [debug] Local RPC: mem3:nodes([]) [5000]
-    [debug] Local RPC: os:getpid([]) [5000]
-    [debug] Running shell command: ps -o pmem,rss -p 73905
-    [debug] Shell command output:
-    %MEM    RSS
-    0.3  25116
-
-    [debug] Local RPC: erlang:nodes([]) [5000]
-    [debug] Local RPC: mem3:nodes([]) [5000]
-    [warning] Cluster member node3@127.0.0.1 is not connected to this node. Please check whether it is down.
-    [info] Process is using 0.3% of available RAM, totalling 25116 KB of real memory.
-
-Most of the time you'll want to use the defaults, but any syslog severity
-name will do (from most to least verbose): ``debug, info, notice,
-warning, error, critical, alert, emergency``.
-
-Finally, if you want to run just a single diagnostic or a list of
-specific ones, you can pass their name(s):
-
-.. code-block:: bash
-
-    $ weatherreport --etc /path/to/etc nodes_connected
-    [warning] Cluster member node3@127.0.0.1 is not connected to this node. Please check whether it is down.
diff --git a/src/conf.py b/src/conf.py
deleted file mode 100644
index 7eecf81..0000000
--- a/src/conf.py
+++ /dev/null
@@ -1,121 +0,0 @@
-## Licensed under the Apache License, Version 2.0 (the "License"); you may not
-## use this file except in compliance with the License. You may obtain a copy of
-## the License at
-##
-##   http://www.apache.org/licenses/LICENSE-2.0
-##
-## Unless required by applicable law or agreed to in writing, software
-## distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-## WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-## License for the specific language governing permissions and limitations under
-## the License.
-
-import datetime
-import os
-import sys
-
-import sphinx_rtd_theme
-
-sys.path.insert(0, os.path.abspath("../ext"))
-
-needs_sphinx = "1.5"
-
-extensions = [
-    "sphinx.ext.todo",
-    "sphinx.ext.extlinks",
-    "github",
-    "httpdomain",
-    "configdomain",
-]
-
-source_suffix = ".rst"
-
-nitpicky = True
-
-# should be over-written using rebar-inherited settings
-version = "3.2"
-release = "3.2.0"
-
-project = "Apache CouchDB\u00ae"
-
-copyright = "%d, %s" % (
-    datetime.datetime.now().year,
-    "Apache Software Foundation. CouchDB\u00ae is a registered trademark of the "
-    + "Apache Software Foundation",
-)
-
-primary_domain = "http"
-
-pygments_style = "sphinx"
-
-html_theme = "sphinx_rtd_theme"
-html_theme_path = [sphinx_rtd_theme.get_html_theme_path()]
-
-html_theme_options = {"canonical_url": "http://docs.couchdb.org/en/stable/"}
-
-templates_path = ["../templates"]
-
-html_static_path = ["../static"]
-
-html_title = " ".join([project, version, "Documentation"])
-
-# html_style = "css/rtd_theme.css"
-
-html_logo = "../images/logo.png"
-
-html_favicon = "../images/favicon.ico"
-
-html_use_index = False
-
-html_additional_pages = {"download": "pages/download.html", "index": "pages/index.html"}
-
-html_context = {
-    "ga_code": "UA-658988-6",
-    # Enable the "Edit in GitHub" link within the header of each page.
-    "display_github": False,
-    # Set the following variables to generate the resulting github URL for each page.
-    # Format Template: https://{{ github_host|default("github.com") }}/{{ github_user }}/{{ github_repo }}/blob/{{ github_version }}{{ conf_py_path }}{{ pagename }}{{ suffix }}
-    "github_user": "apache",
-    "github_repo": "couchdb-documentation",
-    "github_version": "master/src/",
-}
-
-master_doc = "index"
-
-text_newlines = "native"
-
-latex_documents = [("index", "CouchDB.tex", project, "", "manual", True)]
-
-latex_elements = {"papersize": "a4paper"}
-
-texinfo_documents = [
-    (
-        "contents",
-        "CouchDB",
-        project,
-        "",
-        "CouchDB",
-        "The Apache CouchDB database",
-        "Databases",
-        True,
-    )
-]
-
-extlinks = {
-    "issue": ("%s-%%s" % "https://issues.apache.org/jira/browse/COUCHDB", "COUCHDB-"),
-    "ghissue": ("https://github.com/apache/couchdb/issues/%s", "#"),
-    "commit": (
-        "https://git-wip-us.apache.org/repos/asf?p=couchdb.git;a=commit;h=%s",
-        "#",
-    ),
-}
-
-github_project = "apache/couchdb-documentation"
-
-html_context["git_branch"] = github_branch = "master"
-
-github_docs_path = "src"
-
-
-def setup(app):
-    app.add_css_file("css/rtd_theme.css")
diff --git a/src/config/auth.rst b/src/config/auth.rst
deleted file mode 100644
index cb92f31..0000000
--- a/src/config/auth.rst
+++ /dev/null
@@ -1,389 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. default-domain:: config
-
-.. highlight:: ini
-
-================================
-Authentication and Authorization
-================================
-
-.. _config/admins:
-
-Server Administrators
-=====================
-
-.. config:section:: admins :: Server Administrators
-
-.. versionchanged:: 3.0.0
-
-    CouchDB requires an admin account to start. If an admin account has not
-    been created, CouchDB will print an error message and terminate.
-
-    CouchDB server administrators and passwords are not stored in the
-    ``_users`` database, but in the last ``[admins]`` section that CouchDB
-    finds when loading its ini files. See :ref:`config/intro` for details on config
-    file order and behaviour. This file (which could be something like
-    ``/opt/couchdb/etc/local.ini`` or
-    ``/opt/couchdb/etc/local.d/10-admins.ini`` when CouchDB is installed from
-    packages) should be appropriately secured and readable only by system
-    administrators::
-
-        [admins]
-        ;admin = mysecretpassword
-        admin = -hashed-6d3c30241ba0aaa4e16c6ea99224f915687ed8cd,7f4a3e05e0cbc6f48a0035e3508eef90
-        architect = -pbkdf2-43ecbd256a70a3a2f7de40d2374b6c3002918834,921a12f74df0c1052b3e562a23cd227f,10000
-
-    Administrators can be added directly to the ``[admins]`` section, and when
-    CouchDB is restarted, the passwords will be salted and hashed. You may
-    also use the HTTP interface to create administrator accounts; this way,
-    you don't need to restart CouchDB, and there's no need to temporarily store
-    or transmit passwords in plaintext. The HTTP
-    ``/_node/{node-name}/_config/admins`` endpoint supports querying, deleting
-    or creating new admin accounts:
-
-    .. code-block:: http
-
-        GET /_node/nonode@nohost/_config/admins HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Length: 196
-        Content-Type: application/json
-        Date: Fri, 30 Nov 2012 11:37:18 GMT
-        Server: CouchDB (Erlang/OTP)
-
-    .. code-block:: json
-
-        {
-            "admin": "-hashed-6d3c30241ba0aaa4e16c6ea99224f915687ed8cd,7f4a3e05e0cbc6f48a0035e3508eef90",
-            "architect": "-pbkdf2-43ecbd256a70a3a2f7de40d2374b6c3002918834,921a12f74df0c1052b3e562a23cd227f,10000"
-        }
-
-    If you already have a salted, hashed password string (for example, from
-    an old ini file, or from a different CouchDB server), then you can store
-    the "raw" hashed string as-is, without having CouchDB hash it again.
-
-    .. code-block:: http
-
-        PUT /_node/nonode@nohost/_config/admins/architect?raw=true HTTP/1.1
-        Accept: application/json
-        Content-Type: application/json
-        Content-Length: 89
-        Host: localhost:5984
-
-        "-pbkdf2-43ecbd256a70a3a2f7de40d2374b6c3002918834,921a12f74df0c1052b3e562a23cd227f,10000"
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Length: 89
-        Content-Type: application/json
-        Date: Fri, 30 Nov 2012 11:39:18 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        "-pbkdf2-43ecbd256a70a3a2f7de40d2374b6c3002918834,921a12f74df0c1052b3e562a23cd227f,10000"
-
-    Further details are available in `security`, including configuring the work
-    factor for ``PBKDF2``, and the algorithm itself at
-    `PBKDF2 (RFC-2898) <http://tools.ietf.org/html/rfc2898>`_.
-
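The shape of the stored ``-pbkdf2-`` records above (derived key, salt,
iteration count, comma-separated) can be reproduced with the standard library.
Treat the exact parameters here (SHA-1 digest, 20-byte key, salt encoding) as
assumptions for illustration, not a specification of CouchDB's scheme:

```python
import hashlib
import os

def couch_style_pbkdf2(password, salt=None, iterations=10000):
    """Build a "-pbkdf2-<derived>,<salt>,<iterations>" record like the
    admin password strings shown above (parameters are assumptions)."""
    salt = salt if salt is not None else os.urandom(16).hex()
    dk = hashlib.pbkdf2_hmac("sha1", password.encode("utf-8"),
                             salt.encode("utf-8"), iterations, dklen=20)
    return "-pbkdf2-%s,%s,%d" % (dk.hex(), salt, iterations)

record = couch_style_pbkdf2("mysecretpassword")
assert record.startswith("-pbkdf2-")
assert record.count(",") == 2
```
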
-    .. versionchanged::
-        1.4 `PBKDF2` server-side hashed salted password support added, now as a
-        synchronous call for the ``_config/admins`` API.
-
-.. _config/chttpd_auth:
-
-Authentication Configuration
-============================
-
-.. config:section:: chttpd :: Clustered Authentication Configuration
-
-    .. config:option:: require_valid_user :: Force user authentication
-
-        .. versionchanged:: 3.2 moved from [couch_httpd_auth] to [chttpd] section
-
-        When this option is set to ``true``, no requests are allowed from
-        anonymous users. Everyone must be authenticated. ::
-
-            [chttpd]
-            require_valid_user = false
-
-    .. config:option:: require_valid_user_except_for_up :: Force user auth (mostly)
-
-        When this option is set to ``true``, no requests are allowed from
-        anonymous users, *except* for the ``/_up`` endpoint. Everyone else must
-        be authenticated. ::
-
-            [chttpd]
-            require_valid_user_except_for_up = false
-
-.. config:section:: chttpd_auth :: Authentication Configuration
-
-    .. versionchanged:: 3.2 These options were moved to [chttpd_auth] section:
-                        `authentication_redirect`, `timeout`,
-                        `auth_cache_size`, `allow_persistent_cookies`, `iterations`,
-                        `min_iterations`, `max_iterations`, `secret`, `users_db_public`,
-                        `x_auth_roles`, `x_auth_token`, `x_auth_username`,
-                        `cookie_domain`, `same_site`.
-
-    .. config:option:: allow_persistent_cookies :: Persistent cookies
-
-        .. versionchanged:: 3.2 moved from [couch_httpd_auth] to [chttpd_auth] section
-
-        When set to ``true``, CouchDB will set the Max-Age and Expires attributes
-        on the cookie, which causes user agents (like browsers) to preserve the cookie
-        over restarts. ::
-
-            [chttpd_auth]
-            allow_persistent_cookies = true
-
-    .. config:option:: cookie_domain :: Cookie Domain
-
-        .. versionadded:: 2.1.1
-        .. versionchanged:: 3.2 moved from [couch_httpd_auth] to [chttpd_auth] section
-
-        Configures the ``domain`` attribute of the ``AuthSession`` cookie. By default the
-        ``domain`` attribute is empty, resulting in the cookie being set on CouchDB's domain. ::
-
-            [chttpd_auth]
-            cookie_domain = example.com
-
-    .. config:option:: same_site :: SameSite
-
-        .. versionadded:: 3.0.0
-        .. versionchanged:: 3.2 moved from [couch_httpd_auth] to [chttpd_auth] section
-
-        When this option is set to a non-empty value, a ``SameSite`` attribute is added to
-        the ``AuthSession`` cookie. Valid values are ``none``, ``lax`` or ``strict``. ::
-
-            [chttpd_auth]
-            same_site = strict
-
-    .. config:option:: auth_cache_size :: Authentication cache
-
-        .. versionchanged:: 3.2 moved from [couch_httpd_auth] to [chttpd_auth] section
-
-        Number of :ref:`userctx_object` to cache in memory, to reduce disk
-        lookups. ::
-
-            [chttpd_auth]
-            auth_cache_size = 50
-
-    .. config:option:: authentication_redirect :: Default redirect for authentication requests
-
-        .. versionchanged:: 3.2 moved from [couch_httpd_auth] to [chttpd_auth] section
-
-        Specifies the location for redirection on successful authentication if
-        a ``text/html`` response is accepted by the client (via an ``Accept``
-        header). ::
-
-            [chttpd_auth]
-            authentication_redirect = /_utils/session.html
-
-    .. config:option:: hash_algorithms :: Supported hash algorithms for cookie auth
-
-        .. versionadded:: 3.3
-
-        Sets the HMAC hash algorithm used for cookie authentication. You can provide a
-        comma-separated list of hash algorithms. New cookie sessions or
-        session updates are calculated with the first hash algorithm. All values in the
-        list can be used to decode the cookie session. ::
-
-            [chttpd_auth]
-            hash_algorithms = sha256, sha
-
-        .. note::
-            You can select any hash algorithm supported by the version of
-            Erlang used in your CouchDB install. The common list of available
-            hashes might be: ::
-
-                sha, sha224, sha256, sha384, sha512
-
-            To retrieve a complete list of supported hash algorithms, use our
-            ``bin/remsh`` script and run ``crypto:supports(hashs).``.
-
-        .. warning::
-            We do not recommend using the following hash algorithms: ::
-
-                md4, md5
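The sign-with-first, verify-with-any behavior described above can be sketched in Python. This is a hedged illustration, not CouchDB's code: the mapping of Erlang's ``sha`` to Python's ``sha1`` and the hex-encoded HMAC are assumptions for the sake of the example.

```python
import hmac
import hashlib

def _digest(name):
    # Assumption: Erlang's "sha" corresponds to SHA-1 in hashlib terms.
    return getattr(hashlib, "sha1" if name == "sha" else name)

def sign_session(data, secret, algorithms=("sha256", "sha")):
    # New cookie sessions are signed with the first configured algorithm.
    return hmac.new(secret, data, _digest(algorithms[0])).hexdigest()

def verify_session(data, secret, signature, algorithms=("sha256", "sha")):
    # Existing cookies may verify against any algorithm in the list, which
    # is what makes gradual migration between hash algorithms possible.
    for name in algorithms:
        expected = hmac.new(secret, data, _digest(name)).hexdigest()
        if hmac.compare_digest(expected, signature):
            return True
    return False
```

Listing an old algorithm after a new one keeps previously issued cookies valid while all new cookies use the stronger hash.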
-
-    .. config:option:: iterations :: PBKDF2 iterations count
-
-        .. versionadded:: 1.3
-        .. versionchanged:: 3.2 moved from [couch_httpd_auth] to [chttpd_auth] section
-
-        The number of iterations for password hashing by the PBKDF2 algorithm.
-        A higher number provides better hash durability, but comes at a cost
-        in performance for each request that requires authentication. When
-        using hundreds of thousands of iterations, use session cookies, or
-        the performance hit will be huge. (The internal hashing algorithm is
-        SHA1, which affects the recommended number of iterations.) ::
-
-            [chttpd_auth]
-            iterations = 10000
-
-    .. config:option:: min_iterations :: Minimum PBKDF2 iterations count
-
-        .. versionadded:: 1.6
-        .. versionchanged:: 3.2 moved from [couch_httpd_auth] to [chttpd_auth] section
-
-        The minimum number of iterations allowed for passwords hashed by the
-        PBKDF2 algorithm. Any password hashed with fewer iterations is rejected. ::
-
-            [chttpd_auth]
-            min_iterations = 100
-
-    .. config:option:: max_iterations :: Maximum PBKDF2 iterations count
-
-        .. versionadded:: 1.6
-        .. versionchanged:: 3.2 moved from [couch_httpd_auth] to [chttpd_auth] section
-
-        The maximum number of iterations allowed for passwords hashed by the
-        PBKDF2 algorithm. Any password hashed with more iterations is rejected. ::
-
-            [chttpd_auth]
-            max_iterations = 100000
-
-    .. config:option:: password_regexp :: Password regular expressions
-
-        .. versionadded:: 3.2
-
-        A list of
-        `Regular Expressions <https://erlang.org/doc/man/re.html#regexp_syntax>`_
-        to check new/changed passwords.
-        When set, new user passwords must **match** all RegExp in this list.
-
-        A RegExp can be paired with a *reason text*:
-        ``[{"RegExp", "reason text"}, ...]``.
-        If a RegExp doesn't match, its *reason text* will be appended to the
-        default reason of ``Password does not conform to requirements.`` ::
-
-            [couch_httpd_auth]
-            ; Password must be 10 chars long and have one or more uppercase and
-            ; lowercase char and one or more numbers.
-            password_regexp = [{".{10,}", "Min length is 10 chars."}, "[A-Z]+", "[a-z]+", "\\d+"]
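A hedged Python sketch of how such a list of patterns and reason texts could be applied (the ``validate_password`` helper and its return shape are illustrative, not CouchDB's internal API):

```python
import re

DEFAULT_REASON = "Password does not conform to requirements."

def validate_password(password, rules):
    # rules mirrors the ini value above: each entry is either "RegExp" or a
    # ("RegExp", "reason text") pair; every pattern must match the password.
    failures = []
    for rule in rules:
        pattern, reason = rule if isinstance(rule, tuple) else (rule, None)
        if re.search(pattern, password) is None:
            failures.append(reason)
    if failures:
        extra = " ".join(r for r in failures if r)
        return False, (DEFAULT_REASON + " " + extra).strip()
    return True, None
```

Each failing pattern's reason text is appended to the default reason, matching the behavior described above.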
-
-    .. config:option:: proxy_use_secret :: Force proxy auth to use secret token
-
-        .. versionchanged:: 3.2 moved from [couch_httpd_auth] to [chttpd_auth] section
-
-        When this option is set to ``true``, the
-        :option:`chttpd_auth/secret` option is required for
-        :ref:`api/auth/proxy`. ::
-
-            [chttpd_auth]
-            proxy_use_secret = false
-
-    .. config:option:: public_fields :: User documents public fields
-
-        .. versionadded:: 1.4
-        .. versionchanged:: 3.2 moved from [couch_httpd_auth] to [chttpd_auth] section
-
-        A comma-separated list of field names in user documents (in
-        :option:`couchdb/users_db_suffix`) that can be read by any
-        user. If unset or not specified, authenticated users can only retrieve
-        their own document. ::
-
-            [chttpd_auth]
-            public_fields = first_name, last_name, contacts, url
-
-        .. note::
-            Using the ``public_fields`` allowlist for user document properties
-            requires setting the :option:`chttpd_auth/users_db_public`
-            option to ``true`` (the latter option has no other purpose)::
-
-                [chttpd_auth]
-                users_db_public = true
-
-    .. config:option:: secret :: Authentication secret token
-
-        .. versionchanged:: 3.2 moved from [couch_httpd_auth] to [chttpd_auth] section
-
-        The secret token is used for :ref:`api/auth/proxy` and for :ref:`api/auth/cookie`. ::
-
-            [chttpd_auth]
-            secret = 92de07df7e7a3fe14808cef90a7cc0d91
-
-    .. config:option:: timeout :: Session timeout
-
-        .. versionchanged:: 3.2 moved from [couch_httpd_auth] to [chttpd_auth] section
-
-        Number of seconds since the last request before sessions will be
-        expired. ::
-
-            [chttpd_auth]
-            timeout = 600
-
-    .. config:option:: users_db_public :: Publish user documents
-
-        .. versionadded:: 1.4
-        .. versionchanged:: 3.2 moved from [couch_httpd_auth] to [chttpd_auth] section
-
-        Allow all users to view user documents. By default, only admins may
-        browse all user documents, while users may browse only their own
-        document. ::
-
-            [chttpd_auth]
-            users_db_public = false
-
-    .. config:option:: x_auth_roles :: Proxy Auth roles header
-
-        .. versionchanged:: 3.2 moved from [couch_httpd_auth] to [chttpd_auth] section
-
-        The HTTP header name (``X-Auth-CouchDB-Roles`` by default) that
-        contains the list of a user's roles, separated by a comma. Used for
-        :ref:`api/auth/proxy`. ::
-
-            [chttpd_auth]
-            x_auth_roles = X-Auth-CouchDB-Roles
-
-    .. config:option:: x_auth_token :: Proxy Auth token header
-
-        .. versionchanged:: 3.2 moved from [couch_httpd_auth] to [chttpd_auth] section
-
-        The HTTP header name (``X-Auth-CouchDB-Token`` by default) containing
-        the token used to authenticate the request. This token is an
-        `HMAC-SHA1` created from the :option:`chttpd_auth/secret` and
-        :option:`chttpd_auth/x_auth_username`. The secret key should be
-        the same on the client and the CouchDB node. This token is optional if
-        the value of the :option:`chttpd_auth/proxy_use_secret` option is
-        not ``true``. Used for :ref:`api/auth/proxy`. ::
-
-            [chttpd_auth]
-            x_auth_token = X-Auth-CouchDB-Token
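A proxy in front of CouchDB could compute this token as sketched below. The hex encoding and UTF-8 handling are assumptions for illustration; verify the exact encoding against your CouchDB version before relying on it.

```python
import hmac
import hashlib

def proxy_auth_token(secret, username):
    # X-Auth-CouchDB-Token: an HMAC-SHA1 over the username, keyed with the
    # shared [chttpd_auth] secret, hex-encoded (assumed encoding).
    return hmac.new(secret.encode("utf-8"), username.encode("utf-8"),
                    hashlib.sha1).hexdigest()
```

The proxy would send this value in the ``X-Auth-CouchDB-Token`` header alongside ``X-Auth-CouchDB-UserName``, so CouchDB can recompute and compare it using the same shared secret.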
-
-    .. config:option:: x_auth_username :: Proxy Auth username header
-
-        .. versionchanged:: 3.2 moved from [couch_httpd_auth] to [chttpd_auth] section
-
-        The HTTP header name (``X-Auth-CouchDB-UserName`` by default)
-        containing the username. Used for :ref:`api/auth/proxy`. ::
-
-            [chttpd_auth]
-            x_auth_username = X-Auth-CouchDB-UserName
-
-.. config:section:: jwt_auth :: JWT Authentication
-
-    .. config:option:: required_claims :: Mandatory claims in JWT tokens
-
-        This parameter is a comma-separated list of additional mandatory JWT claims
-        that must be present in any presented JWT token. A :http:statuscode:`404`
-        is sent if any are missing. ::
-
-            [jwt_auth]
-            required_claims = exp,iat
diff --git a/src/config/cluster.rst b/src/config/cluster.rst
deleted file mode 100644
index 8801b9c..0000000
--- a/src/config/cluster.rst
+++ /dev/null
@@ -1,126 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. default-domain:: config
-.. highlight:: ini
-
-======================
-Configuring Clustering
-======================
-
-.. _config/cluster:
-
-Cluster Options
-===============
-
-.. config:section:: cluster :: Cluster Options
-
-    .. config:option:: q :: Default number of shards for newly created database
-
-    Sets the default number of shards for newly created databases. The
-    default value, ``2``, splits a database into 2 separate partitions. ::
-
-        [cluster]
-        q = 2
-
-    For systems with only a few, heavily accessed, large databases, or
-    for servers with many CPU cores, consider increasing this value to
-    ``4`` or ``8``.
-
-    The value of ``q`` can also be overridden on a per-DB basis, at DB
-    creation time.
-
-    .. seealso::
-        :http:put:`PUT /{db} </{db}>`
-
-    .. config:option:: n :: Number of replicas of each document
-
-    Sets the number of replicas of each document in a cluster. CouchDB will
-    only place one replica per node in a cluster. When set up through the
-    :ref:`Cluster Setup Wizard <cluster/setup/wizard>`, a standalone single
-    node will have ``n = 1``, a two node cluster will have ``n = 2``, and
-    any larger cluster will have ``n = 3``. It is recommended not to set
-    ``n`` greater than ``3``. ::
-
-        [cluster]
-        n = 3
-
-    .. config:option:: placement :: Sets the cluster-wide replica placement policy
-
-    .. warning::
-
-        Use of this option will **override** the ``n`` option for replica
-        cardinality. Use with care.
-
-    Sets the cluster-wide replica placement policy when creating new
-    databases. The value must be a comma-delimited list of strings of the
-    format ``zone_name:#``, where ``zone_name`` is a zone as specified in
-    the ``nodes`` database and ``#`` is an integer indicating the number of
-    replicas to place on nodes with a matching ``zone_name``.
-
-    This parameter is not specified by default. ::
-
-        [cluster]
-        placement = metro-dc-a:2,metro-dc-b:1
-
-    .. seealso::
-        :ref:`cluster/databases/placement`
-
-    .. config:option:: seedlist :: Optional, comma-delimited list of nodes that should \
-        be contacted to join cluster
-
-    An optional, comma-delimited list of node names that this node should
-    contact in order to join a cluster. If a seedlist is configured the ``_up``
-    endpoint will return a 404 until the node has successfully contacted at
-    least one of the members of the seedlist and replicated an up-to-date copy
-    of the ``_nodes``, ``_dbs``, and ``_users`` system databases. ::
-
-        [cluster]
-        seedlist = couchdb@node1.example.com,couchdb@node2.example.com
-
-RPC Performance Tuning
-======================
-
-.. config:section:: rexi :: Internal RPC Tuning
-
-    CouchDB uses distributed Erlang to communicate between nodes in a cluster.
-    The ``rexi`` library provides an optimized RPC mechanism over this
-    communication channel. There are a few configuration knobs for this system,
-    although in general the defaults work well.
-
-    .. config:option:: buffer_count :: Number of buffered messages before dropping
-
-    The local RPC server will buffer messages if a remote node goes unavailable.
-    This flag determines how many messages will be buffered before the local
-    server starts dropping messages. Default value is ``2000``.
-
-    .. config:option:: server_per_node :: Enable or disable one local `gen_server` \
-        process per node
-
-    By default, rexi will spawn one local gen_server process for each node in
-    the cluster. Disabling this flag will cause CouchDB to use a single process
-    for all RPC communication, which is not recommended in high throughput
-    deployments.
-
-    .. config:option:: stream_limit :: Number of send messages without waiting \
-        for acknowledgement from the coordinator
-
-        .. versionadded:: 3.0
-
-    This flag comes into play during streaming operations like views and change
-    feeds. It controls how many messages a remote worker process can send to a
-    coordinator without waiting for an acknowledgement from the coordinator
-    process. If this value is too large the coordinator can become overwhelmed
-    by messages from the worker processes and actually deliver lower overall
-    throughput to the client. In CouchDB 2.x this value was hard-coded to
-    ``10``. In the 3.x series it is configurable and defaults to ``5``.
-    Databases with a high ``q`` value are especially sensitive to this setting.
diff --git a/src/config/compaction.rst b/src/config/compaction.rst
deleted file mode 100644
index 620c2b9..0000000
--- a/src/config/compaction.rst
+++ /dev/null
@@ -1,167 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. default-domain:: config
-.. highlight:: ini
-
-==========
-Compaction
-==========
-
-.. _config/database_compaction:
-
-Database Compaction Options
-===========================
-
-.. config:section:: database_compaction :: Database Compaction Options
-
-    .. config:option:: doc_buffer_size :: Documents buffer size
-
-        Specifies the copy buffer's maximum size in bytes::
-
-            [database_compaction]
-            doc_buffer_size = 524288
-
-    .. config:option:: checkpoint_after :: Checkpoint trigger
-
-        Triggers a checkpoint after the specified amount of bytes were
-        successfully copied to the compacted database::
-
-            [database_compaction]
-            checkpoint_after = 5242880
-
-.. _config/view_compaction:
-
-View Compaction Options
-=======================
-
-.. config:section:: view_compaction :: View Compaction Options
-
-    .. config:option:: keyvalue_buffer_size :: Key-Values buffer size
-
-        Specifies maximum copy buffer size in bytes used during compaction::
-
-            [view_compaction]
-            keyvalue_buffer_size = 2097152
-
-.. _config/compactions:
-
-Compaction Daemon
-=================
-
-CouchDB ships with an automated, event-driven daemon internally known as "smoosh" that
-continuously re-prioritizes the database and secondary index files on each node and
-automatically compacts the files that will recover the most free space according to the
-following parameters.
-
-.. config:section:: smoosh :: Compaction Daemon Rules
-
-    .. config:option:: db_channels :: Active database channels
-
-        A comma-delimited list of channels that are sent the names of database
-        files when those files are updated. Each channel can choose whether to
-        enqueue the database for compaction; once a channel has enqueued the
-        database, no additional channel in the list will be given the
-        opportunity to do so.
-
-    .. config:option:: view_channels :: Active secondary index channels
-
-        A comma-delimited list of channels that are sent the names of secondary
-        index files when those files are updated. Each channel can choose
-        whether to enqueue the index for compaction; once a channel has enqueued
-        the index, no additional channel in the list will be given the
-        opportunity to do so.
-
-    .. config:option:: staleness :: Minimum time between priority calculations
-
-        The number of minutes that the (expensive) priority calculation for an
-        individual file can be stale before it is recalculated. Defaults to 5.
-
-    .. config:option:: cleanup_index_files :: Automatically delete orphaned index files
-
-        If set to true, the compaction daemon will delete the files for indexes
-        that are no longer associated with any design document. Defaults to
-        `false` and probably shouldn't be changed unless the node is running low
-        on disk space, and only after considering the ramifications.
-
-    .. config:option:: wait_secs :: Warmup period before triggering first compaction
-
-        The time a channel waits before starting compactions to allow time to
-        observe the system and make a smarter decision about what to compact
-        first. Hardly ever changed from the default of 30 (seconds).
-
-.. config:section:: smoosh.<channel> :: Per-channel configuration
-
-The following settings control the resource allocation for a given compaction
-channel.
-
-    .. config:option:: capacity :: Maximum number of items
-
-        The maximum number of items the channel can hold (lowest priority item
-        is removed to make room for new items). Defaults to 9999.
-
-    .. config:option:: concurrency :: Maximum number of concurrent jobs
-
-        The maximum number of jobs that can run concurrently in this channel.
-        Defaults to 1.
-
-    .. config:option:: from :: Time window start
-
-    .. config:option:: to :: Time window end
-
-        The time period during which this channel is allowed to execute
-        compactions. The value for each of these parameters must obey the format
-        `HH:MM` with HH in [0..23] and MM in [0..59]. Each channel listed in the
-        top-level daemon configuration continuously builds its priority queue
-        regardless of the period defined here. The default is to allow the
-        channel to execute compactions all the time.
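The ``HH:MM`` window check, including a window that wraps past midnight, can be sketched as follows (a minimal illustration, not smoosh's implementation):

```python
from datetime import time

def in_window(now, start, end):
    # start/end are "HH:MM" strings as described above; a window whose start
    # is later than its end is assumed to wrap past midnight (e.g. 22:00-04:00).
    s = time(*map(int, start.split(":")))
    e = time(*map(int, end.split(":")))
    if s <= e:
        return s <= now < e
    return now >= s or now < e
```

For example, a ``from = 22:00`` / ``to = 04:00`` channel would run compactions at 23:30 but not at noon.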
-
-    .. config:option:: strict_window :: Run compaction only within the time window
-
-        If set to ``true``, any compaction that is still running after the end of
-        the allowed period will be suspended, and then resumed during the next
-        window. It defaults to ``false``, in which case any running compactions
-        will be allowed to finish, but no new ones will be started.
-
-There are also several settings that collectively control whether a channel will
-enqueue a file for compaction and how it prioritizes files within its queue:
-
-    .. config:option:: max_priority :: Maximum priority of item to be enqueued
-
-        Each item must have a priority lower than this to be enqueued. Defaults
-        to infinity.
-
-    .. config:option:: max_size :: Maximum size of item to be enqueued
-
-        The item must be no larger than this many bytes in length to be
-        enqueued. Defaults to infinity.
-
-    .. config:option:: min_priority :: Minimum priority of item to be enqueued
-
-        The item must have a priority at least this high to be enqueued.
-        Defaults to 5.0 for ratio and 16 MB for slack.
-
-    .. config:option:: min_changes :: Minimum number of changes of item to be enqueued
-
-        The minimum number of changes since last compaction before the item will
-        be enqueued. Defaults to 0. Currently only works for databases.
-
-    .. config:option:: min_size :: Minimum size of item to be enqueued
-
-        The item must be at least this many bytes in length to be enqueued.
-        Defaults to 1 MB (1048576 bytes).
-
-    .. config:option:: priority :: Method for priority calculation
-
-        The method used to calculate priority. Can be ratio (calculated as
-        ``sizes.file/sizes.active``) or slack (calculated as ``sizes.file -
-        sizes.active``). Defaults to ratio.
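The two priority formulas quoted above amount to the following (the helper name and dict shape are illustrative; the ``file``/``active`` keys follow the ``sizes`` object in the formulas):

```python
def compaction_priority(sizes, method="ratio"):
    # sizes: "file" is bytes on disk, "active" is live data bytes.
    if method == "ratio":
        # How many times larger the file is than its live data.
        return sizes["file"] / sizes["active"]
    if method == "slack":
        # Absolute number of reclaimable bytes.
        return sizes["file"] - sizes["active"]
    raise ValueError("unknown priority method: %r" % method)
```

Ratio favors proportionally bloated files regardless of size, while slack favors the largest absolute space recovery; that difference is why the ``min_priority`` defaults above differ between the two methods.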
diff --git a/src/config/couch-peruser.rst b/src/config/couch-peruser.rst
deleted file mode 100644
index 1517c04..0000000
--- a/src/config/couch-peruser.rst
+++ /dev/null
@@ -1,71 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. default-domain:: config
-.. highlight:: ini
-
-=================
-Database Per User
-=================
-
-.. _config/couch_peruser:
-
-Database Per User Options
-=========================
-
-.. config:section:: couch_peruser :: Database Per User Options
-
-    .. config:option:: enable :: Enable private per-user database pattern
-
-    If set to ``true``, couch_peruser ensures that a private per-user
-    database exists for each document in ``_users``. These databases are
-    writable only by the corresponding user. Database names are in the following
-    form: ``userdb-{UTF-8 hex encoded username}``. ::
-
-        [couch_peruser]
-        enable = false
-
-    .. note::
-        The ``_users`` database must exist before couch_peruser can be enabled.
-
-    .. tip::
-        Under Node.js, user names can be converted to and from database names as follows:
-
-    .. code:: javascript
-
-        function dbNameToUsername(prefixedHexName) {
-          return Buffer.from(prefixedHexName.replace('userdb-', ''), 'hex').toString('utf8');
-        }
-
-        function usernameToDbName(name) {
-          return 'userdb-' + Buffer.from(name).toString('hex');
-        }
-
-    .. config:option:: delete_dbs :: Enable deleting user-db after user delete
-
-    If set to ``true`` and a user is deleted, the respective database gets
-    deleted as well. ::
-
-        [couch_peruser]
-        delete_dbs = false
-
-    .. note::
-        When using JWT authorization, the provided token must include a custom
-        ``_couchdb.roles=['_admin']`` claim for the per-user database to be
-        properly created and accessible for the user provided in the ``sub=``
-        claim.
-
-    .. config:option:: q :: Sharding value for per-user databases
-
-    If set, specifies the sharding value for per-user databases. If unset, the
-    cluster default value will be used. ::
-
-        [couch_peruser]
-        q = 1
diff --git a/src/config/couchdb.rst b/src/config/couchdb.rst
deleted file mode 100644
index b481c10..0000000
--- a/src/config/couchdb.rst
+++ /dev/null
@@ -1,232 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. default-domain:: config
-.. highlight:: ini
-
-==================
-Base Configuration
-==================
-
-.. _config/couchdb:
-
-Base CouchDB Options
-====================
-
-.. config:section:: couchdb :: Base CouchDB Options
-
-    .. config:option:: attachment_stream_buffer_size :: Attachment streaming buffer
-
-        Higher values may result in better read performance due to fewer read
-        operations and/or more OS page cache hits. However, they can also
-        increase overall response time for writes when there are many
-        attachment write requests in parallel. ::
-
-            [couchdb]
-            attachment_stream_buffer_size = 4096
-
-    .. config:option:: database_dir :: Databases location directory
-
-        Specifies the location of CouchDB database files (named ``*.couch``). This
-        location should be writable and readable for the user the CouchDB
-        service runs as (``couchdb`` by default). ::
-
-            [couchdb]
-            database_dir = /var/lib/couchdb
-
-    .. config:option:: default_security :: Default security
-
-        .. versionchanged:: 3.0 ``admin_only`` is now the default.
-
-        Default security object for databases if not explicitly set. When set
-        to ``everyone``, anyone can perform reads and writes. When set to
-        ``admin_only``, only admins can read and write. When set to
-        ``admin_local``, sharded databases can be read and written by anyone
-        but the shards can only be read and written by admins. ::
-
-            [couchdb]
-            default_security = admin_only
-
-    .. config:option:: enable_database_recovery :: Enable database recovery
-
-        Enable this to only "soft-delete" databases when
-        :ref:`DELETE /{db} <api/db>` requests are made. This will
-        rename all shards of the database with a suffix of the form
-        ``<dbname>.YMD.HMS.deleted.couchdb``. You can then manually delete these
-        files later, as desired.
-
-        Default is ``false``. ::
-
-            [couchdb]
-            enable_database_recovery = false
-
-    .. config:option:: file_compression :: Compression method for documents
-
-        .. versionchanged:: 1.2 Added `Google Snappy`_ compression algorithm.
-
-        Method used to compress everything that is appended to database and
-        view index files, except for attachments (see the
-        :section:`attachments` section). Available methods are:
-
-        * ``none``: no compression
-        * ``snappy``: use Google Snappy, a very fast compressor/decompressor
-        * ``deflate_N``: use zlib's deflate; ``N`` is the compression level
-          which ranges from ``1`` (fastest, lowest compression ratio) to ``9``
-          (slowest, highest compression ratio)
-
-        ::
-
-            [couchdb]
-            file_compression = snappy
-
-        .. _Google Snappy: http://code.google.com/p/snappy/
-
-    .. config:option:: maintenance_mode :: Maintenance mode
-
-        A CouchDB node may be put into two distinct maintenance modes by setting
-        this configuration parameter.
-
-        * ``true``: The node will not respond to clustered requests from other
-          nodes and the /_up endpoint will return a 404 response.
-        * ``nolb``: The /_up endpoint will return a 404 response.
-        * ``false``: The node responds normally, /_up returns a 200 response.
-
-        It is expected that the administrator has configured a load balancer
-        in front of the CouchDB nodes in the cluster. This load balancer should
-        use the /_up endpoint to determine whether or not to send HTTP requests
-        to any particular node. For HAProxy, the following config is
-        appropriate:
-
-        .. code-block:: none
-
-          http-check disable-on-404
-          option httpchk GET /_up
-
-    .. config:option:: max_dbs_open :: Limit of simultaneously opened databases
-
-        This option places an upper bound on the number of databases that can
-        be open at once. CouchDB reference counts database accesses internally
-        and will close idle databases as needed. Sometimes it is necessary to
-        keep more than the default open at once, such as in deployments where
-        many databases will be replicating continuously. ::
-
-            [couchdb]
-            max_dbs_open = 100
-
-    .. config:option:: max_document_size :: Limit maximum document body size
-
-        .. versionchanged:: 3.0.0
-
-        Limit maximum document body size. Size is calculated based on the
-        serialized Erlang representation of the JSON document body, because
-        that reflects more accurately the amount of storage consumed on disk.
-        In particular, this limit does not include attachments.
-
-        HTTP requests which create or update documents will fail with error
-        code 413 if one or more documents are larger than this configuration
-        value.
-
-        In case of ``_update`` handlers, document size is checked after the
-        transformation and right before being inserted into the database. ::
-
-            [couchdb]
-            max_document_size = 8000000 ; bytes
-
-        .. warning::
-           Before version 2.1.0 this setting was implemented by simply checking
-           http request body sizes. For individual document updates via `PUT`
-           that approximation was close enough, however that is not the case
-           for ``_bulk_docs`` endpoint. After 2.1.0 a separate configuration
-           parameter was defined: :config:option:`chttpd/max_http_request_size`,
-           which can be used to limit maximum http request sizes. After upgrade,
-           it is advisable to review those settings and adjust them accordingly.
-
-    .. config:option:: os_process_timeout :: External processes time limit
-
-        If an external process, such as a query server, runs for this number
-        of milliseconds without returning any results, it will be terminated.
-        Keeping this value small ensures you get prompt errors, but you may
-        want to tweak it for your specific needs. ::
-
-            [couchdb]
-            os_process_timeout = 5000 ; 5 sec
-
-    .. config:option:: single_node :: Start in single node mode.
-
-        .. versionadded:: 3.0.0
-
-        When this configuration setting is set to ``true``, automatically
-        create the system databases on startup. Must be set ``false`` for a
-        clustered CouchDB installation.
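-
-        For example, a standalone (non-clustered) instance would use::
-
-            [couchdb]
-            single_node = true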
-
-    .. config:option:: uri_file :: Discovery CouchDB help file
-
-        This file contains the full `URI`_ that can be used to access this
-        instance of CouchDB. It is used to help discover the port CouchDB is
-        running on (if it was set to ``0``, i.e. automatically assigned to any
-        free port). This file should be writable and readable for the user
-        that runs the CouchDB service (``couchdb`` by default). ::
-
-            [couchdb]
-            uri_file = /var/run/couchdb/couchdb.uri
-
-        .. _URI: http://en.wikipedia.org/wiki/URI
-
-    .. config:option:: users_db_security_editable :: Protect ``_users`` DB security obj
-
-        .. versionadded:: 3.0.0
-
-        When this configuration setting is set to ``false``, reject any attempts
-        to modify the ``_users`` database security object. Modification of this
-        object is deprecated in 3.x and will be completely disallowed in CouchDB
-        4.x.
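-
-        For example, to keep the default protection in place::
-
-            [couchdb]
-            users_db_security_editable = false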
-
-    .. config:option:: users_db_suffix :: Users database suffix
-
-        Specifies the suffix (last component of a name) of the system database
-        for storing CouchDB users. ::
-
-            [couchdb]
-            users_db_suffix = _users
-
-        .. warning::
-            If you change the database name, do not forget to remove or clean
-            up the old database, since it will no longer be protected by
-            CouchDB.
-
-    .. config:option:: util_driver_dir :: CouchDB binary utility drivers
-
-        Specifies location of binary drivers (`icu`, `ejson`, etc.). This
-        location and its contents should be readable for the user that runs the
-        CouchDB service. ::
-
-            [couchdb]
-            util_driver_dir = /usr/lib/couchdb/erlang/lib/couch-1.5.0/priv/lib
-
-    .. config:option:: uuid :: CouchDB server UUID
-
-        .. versionadded:: 1.3
-
-        Unique identifier for this CouchDB server instance. ::
-
-            [couchdb]
-            uuid = 0a959b9b8227188afc2ac26ccdf345a6
-
-    .. config:option:: view_index_dir :: View indexes location directory
-
-        Specifies location of CouchDB view index files. This location should be
-        writable and readable for the user that runs the CouchDB service
-        (``couchdb`` by default). ::
-
-            [couchdb]
-            view_index_dir = /var/lib/couchdb
diff --git a/src/config/http.rst b/src/config/http.rst
deleted file mode 100644
index 19f4341..0000000
--- a/src/config/http.rst
+++ /dev/null
@@ -1,684 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. default-domain:: config
-.. highlight:: ini
-
-===================
-CouchDB HTTP Server
-===================
-
-.. _config/httpd:
-
-HTTP Server Options
-===================
-
-.. config:section:: chttpd :: Clustered HTTP Server Options
-
-    .. note::
-        In CouchDB 2.x and 3.x, the `chttpd` section refers to the standard, clustered
-        port. All use of CouchDB, aside from a few specific maintenance tasks as
-        described in this documentation, should be performed over this port.
-
-    .. config:option:: bind_address :: HTTP port IP address binding
-
-        Defines the IP address by which the clustered port is available::
-
-            [chttpd]
-            bind_address = 127.0.0.1
-
-        To let CouchDB listen on any available IP address, use ``0.0.0.0``::
-
-            [chttpd]
-            bind_address = 0.0.0.0
-
-        For IPv6 support, set ``::1`` to let CouchDB listen on the IPv6
-        loopback address::
-
-            [chttpd]
-            bind_address = ::1
-
-        or ``::`` to listen on any available IPv6 address::
-
-            [chttpd]
-            bind_address = ::
-
-    .. config:option:: port :: Listen port
-
-        Defines the port number to listen on::
-
-            [chttpd]
-            port = 5984
-
-        To let CouchDB use any free port, set this option to ``0``::
-
-            [chttpd]
-            port = 0
-
-    .. config:option:: prefer_minimal :: Sends minimal set of headers
-
-        If a request has the header ``"Prefer": "return=minimal"``, CouchDB
-        will only send the headers that are listed for the ``prefer_minimal``
-        configuration::
-
-            [chttpd]
-            prefer_minimal = Cache-Control, Content-Length, Content-Range, Content-Type, ETag, Server, Transfer-Encoding, Vary
-
-        .. warning::
-            Removing the Server header from the settings will mean that
-            the CouchDB server header is replaced with the
-            MochiWeb server header.
-
-    .. config:option:: authentication_handlers :: Authentication handlers
-
-        List of authentication handlers used by CouchDB. You may extend them
-        via third-party plugins, or remove some of them if you do not want to
-        let users use one of the provided methods::
-
-            [chttpd]
-            authentication_handlers = {chttpd_auth, cookie_authentication_handler}, {chttpd_auth, default_authentication_handler}
-
-        - ``{chttpd_auth, cookie_authentication_handler}``: used for Cookie auth;
-        - ``{chttpd_auth, proxy_authentication_handler}``: used for Proxy auth;
-        - ``{chttpd_auth, jwt_authentication_handler}``: used for JWT auth;
-        - ``{chttpd_auth, default_authentication_handler}``: used for Basic auth;
-        - ``{couch_httpd_auth, null_authentication_handler}``: disables auth, breaks CouchDB.
-
-    .. config:option:: buffer_response :: Buffered response
-
-        .. versionchanged:: 3.1.1
-
-        Set this to ``true`` to delay the start of a response until the end has
-        been calculated. This increases memory usage, but simplifies client error
-        handling as it eliminates the possibility that a response may be deliberately
-        terminated midway through, due to a timeout. This config value may be changed
-        at runtime, without impacting any in-flight responses.
-
-        Even if this is set to ``false`` (the default), buffered responses can be
-        enabled on a per-request basis for any delayed JSON response call by adding
-        ``?buffer_response=true`` to the request's parameters.
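-
-        For example, to buffer all responses by default::
-
-            [chttpd]
-            buffer_response = true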
-
-    .. config:option:: allow_jsonp :: Enables JSONP support
-
-        .. versionchanged:: 3.2 moved from [httpd] to [chttpd] section
-
-        Set this option to ``true`` to enable `JSONP`_ support (it is
-        ``false`` by default)::
-
-            [chttpd]
-            allow_jsonp = false
-
-        .. _JSONP: https://en.wikipedia.org/wiki/JSONP
-
-    .. config:option:: changes_timeout :: Changes feed timeout
-
-        .. versionchanged:: 3.2 moved from [httpd] to [chttpd] section
-
-        Specifies default `timeout` value for :ref:`Changes Feed <changes>` in
-        milliseconds (60000 by default)::
-
-            [chttpd]
-            changes_timeout = 60000 ; 60 seconds
-
-    .. config:option:: config_whitelist :: Config options whitelist
-
-        .. versionchanged:: 3.2 moved from [httpd] to [chttpd] section
-
-        Sets the configuration modification whitelist. Only whitelisted values
-        may be changed via the :ref:`config API <api/config>`. To allow the
-        admin to change this value over HTTP, remember to include
-        ``{chttpd,config_whitelist}`` itself. Excluding it from the list would
-        require editing this file to update the whitelist::
-
-            [chttpd]
-            config_whitelist = [{chttpd,config_whitelist}, {log,level}, {etc,etc}]
-
-    .. config:option:: enable_cors :: Activates CORS
-
-        .. versionadded:: 1.3
-        .. versionchanged:: 3.2 moved from [httpd] to [chttpd] section
-
-        Controls :ref:`CORS <config/cors>` feature::
-
-            [chttpd]
-            enable_cors = false
-
-    .. config:option:: secure_rewrites :: Default request handler
-
-        .. versionchanged:: 3.2 moved from [httpd] to [chttpd] section
-
-        This option allows isolating databases via subdomains::
-
-            [chttpd]
-            secure_rewrites = true
-
-    .. config:option:: x_forwarded_host :: X-Forwarded-Host
-
-        .. versionchanged:: 3.2 moved from [httpd] to [chttpd] section
-
-        The `x_forwarded_host` header (``X-Forwarded-Host`` by default) is used
-        to forward the original value of the ``Host`` header field in case, for
-        example, a reverse proxy rewrites the ``Host`` header field to some
-        internal host name before forwarding the request to CouchDB::
-
-            [chttpd]
-            x_forwarded_host = X-Forwarded-Host
-
-        This header takes precedence over the ``Host`` header if it is present
-        in the request.
-
-    .. config:option:: x_forwarded_proto :: X-Forwarded-Proto
-
-        .. versionchanged:: 3.2 moved from [httpd] to [chttpd] section
-
-        The `x_forwarded_proto` header (``X-Forwarded-Proto`` by default) is
-        used to identify the originating protocol of an HTTP request, since a
-        reverse proxy may communicate with the CouchDB instance using HTTP even
-        if the request to the reverse proxy is HTTPS::
-
-            [chttpd]
-            x_forwarded_proto = X-Forwarded-Proto
-
-    .. config:option:: x_forwarded_ssl :: X-Forwarded-Ssl
-
-        .. versionchanged:: 3.2 moved from [httpd] to [chttpd] section
-
-        The `x_forwarded_ssl` header (``X-Forwarded-Ssl`` by default) tells
-        CouchDB that it should use the `https` scheme instead of `http`. It is
-        a synonym for the ``X-Forwarded-Proto: https`` header, but is used by
-        some reverse proxies::
-
-            [chttpd]
-            x_forwarded_ssl = X-Forwarded-Ssl
-
-    .. config:option:: enable_xframe_options :: Controls X-Frame-Options header
-
-        .. versionchanged:: 3.2 moved from [httpd] to [chttpd] section
-
-        Enables or disables the :ref:`X-Frame-Options <config/xframe_options>`
-        feature::
-
-            [chttpd]
-            enable_xframe_options = false
-
-    .. config:option:: max_http_request_size :: Maximum HTTP request body size
-
-        .. versionchanged:: 3.2 moved from [httpd] to [chttpd] section
-
-        Limit the maximum size of the HTTP request body. This setting applies
-        to all requests and it doesn't discriminate between single vs.
-        multi-document operations. So setting it to 1MB would block a
-        `PUT` of a document larger than 1MB, but it might also block a
-        ``_bulk_docs`` update of 1000 1KB documents, or a multipart/related
-        update of a small document followed by two 512KB attachments. This
-        setting is intended to be used as a protection against maliciously
-        large HTTP requests rather than for limiting maximum document sizes. ::
-
-            [chttpd]
-            max_http_request_size = 4294967296 ; 4 GB
-
-        .. warning::
-           Before version 2.1.0 :config:option:`couchdb/max_document_size` was
-           implemented effectively as ``max_http_request_size``. That is, it
-           checked HTTP request bodies instead of document sizes. After the
-           upgrade, it is advisable to review the usage of these configuration
-           settings.
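-
-        When reviewing these settings together after an upgrade, the two
-        limits might be set, for example, as (illustrative values, taken from
-        the examples above)::
-
-            [couchdb]
-            max_document_size = 8000000 ; bytes
-
-            [chttpd]
-            max_http_request_size = 4294967296 ; 4 GB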
-
-.. config:section:: httpd :: HTTP Server Options
-
-    .. versionchanged:: 3.2 These options were moved to [chttpd] section:
-                        `allow_jsonp`, `changes_timeout`, `config_whitelist`,
-                        `enable_cors`, `secure_rewrites`, `x_forwarded_host`,
-                        `x_forwarded_proto`, `x_forwarded_ssl`,
-                        `enable_xframe_options`, `max_http_request_size`.
-
-    .. config:option:: server_options :: MochiWeb Server Options
-
-        Server options for the MochiWeb component of CouchDB can be added to
-        the configuration files::
-
-            [httpd]
-            server_options = [{backlog, 128}, {acceptor_pool_size, 16}]
-
-        The options supported are a subset of the full options supported by
-        the TCP/IP stack. A list of the supported options is provided in the
-        `Erlang inet`_ documentation.
-
-        .. _Erlang inet: http://www.erlang.org/doc/man/inet.html#setopts-2
-
-    .. config:option:: socket_options :: Socket Options
-
-        The socket options for the listening socket in CouchDB, as set at the
-        beginning of every request, can be specified as a list of tuples. For example::
-
-            [httpd]
-            socket_options = [{sndbuf, 262144}]
-
-        The options supported are a subset of the full options supported by
-        the TCP/IP stack. A list of the supported options is provided in the
-        `Erlang inet`_ documentation.
-
-        .. _Erlang inet: http://www.erlang.org/doc/man/inet.html#setopts-2
-
-.. _config/ssl:
-
-HTTPS (SSL/TLS) Options
-=======================
-
-.. config:section:: ssl :: HTTPS (SSL/TLS) Options
-
-    CouchDB supports TLS/SSL natively, without the use of a proxy server.
-
-    HTTPS setup can be tricky, but the configuration in CouchDB was designed to
-    be as easy as possible. All you need is two files: a certificate and a
-    private key. If you have an official certificate from a certificate
-    authority, both should be in your possession already.
-
-    If you just want to try this out and don't want to go through the hassle of
-    obtaining an official certificate, you can create a self-signed certificate.
-    Everything will work the same, but clients will get a warning about an insecure
-    certificate.
-
-    You will need the `OpenSSL`_ command line tool installed. It probably
-    already is.
-
-    .. code-block:: bash
-
-        shell> mkdir /etc/couchdb/cert
-        shell> cd /etc/couchdb/cert
-        shell> openssl genrsa > privkey.pem
-        shell> openssl req -new -x509 -key privkey.pem -out couchdb.pem -days 1095
-        shell> chmod 600 privkey.pem couchdb.pem
-        shell> chown couchdb privkey.pem couchdb.pem
-
-    Now, you need to edit CouchDB's configuration, by editing your
-    ``local.ini`` file. Here is what you need to do.
-
-    Under the ``[ssl]`` section, enable HTTPS and set up the newly generated
-    certificates::
-
-        [ssl]
-        enable = true
-        cert_file = /etc/couchdb/cert/couchdb.pem
-        key_file = /etc/couchdb/cert/privkey.pem
-
-    For more information please read `certificates HOWTO`_.
-
-    Now start (or restart) CouchDB. You should be able to connect to it
-    using HTTPS on port 6984:
-
-    .. code-block:: console
-
-        shell> curl https://127.0.0.1:6984/
-        curl: (60) SSL certificate problem, verify that the CA cert is OK. Details:
-        error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
-        More details here: http://curl.haxx.se/docs/sslcerts.html
-
-        curl performs SSL certificate verification by default, using a "bundle"
-        of Certificate Authority (CA) public keys (CA certs). If the default
-        bundle file isn't adequate, you can specify an alternate file
-        using the --cacert option.
-        If this HTTPS server uses a certificate signed by a CA represented in
-        the bundle, the certificate verification probably failed due to a
-        problem with the certificate (it might be expired, or the name might
-        not match the domain name in the URL).
-        If you'd like to turn off curl's verification of the certificate, use
-        the -k (or --insecure) option.
-
-    Oh no! What happened?! Remember, clients will notify their users that your
-    certificate is self signed. ``curl`` is the client in this case and it
-    notifies you. Luckily you trust yourself (don't you?) and you can specify
-    the ``-k`` option as the message reads:
-
-    .. code-block:: console
-
-        shell> curl -k https://127.0.0.1:6984/
-        {"couchdb":"Welcome","version":"1.5.0"}
-
-    All done.
-
-    For performance reasons, and for ease of setup, you may still wish to
-    terminate HTTPS connections at your load balancer / reverse proxy, then use
-    unencrypted HTTP between it and your CouchDB cluster. This is a recommended
-    approach.
-
-    Additional detail may be available in the `CouchDB wiki`_.
-
-    .. _`certificates HOWTO`: http://www.openssl.org/docs/HOWTO/certificates.txt
-    .. _OpenSSL: http://www.openssl.org/
-    .. _`CouchDB wiki`: https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=48203146
-
-    .. config:option:: cacert_file :: CA Certificate file
-
-        The path to a file containing PEM encoded CA certificates. The CA
-        certificates are used to build the server certificate chain, and for
-        client authentication. Also the CAs are used in the list of acceptable
-        client CAs passed to the client when a certificate is requested. May be
-        omitted if there is no need to verify the client and if there are not
-        any intermediate CAs for the server certificate::
-
-            [ssl]
-            cacert_file = /etc/ssl/certs/ca-certificates.crt
-
-    .. config:option:: cert_file :: Certificate file
-
-        Path to a file containing the user's certificate::
-
-            [ssl]
-            cert_file = /etc/couchdb/cert/couchdb.pem
-
-    .. config:option:: key_file :: Certificate key file
-
-        Path to file containing user's private PEM encoded key::
-
-            [ssl]
-            key_file = /etc/couchdb/cert/privkey.pem
-
-    .. config:option:: password :: Certificate key password
-
-        String containing the user's password. Only used if the private key file
-        is password protected::
-
-            [ssl]
-            password = somepassword
-
-    .. config:option:: ssl_certificate_max_depth :: Maximum peer certificate depth
-
-        Maximum peer certificate depth (must be set even if certificate
-        validation is off)::
-
-            [ssl]
-            ssl_certificate_max_depth = 1
-
-    .. config:option:: verify_fun :: SSL verification function
-
-        The verification fun (optional). If not specified, the default
-        verification fun will be used::
-
-            [ssl]
-            verify_fun = {Module, VerifyFun}
-
-    .. config:option:: verify_ssl_certificates :: Enable certificate verification
-
-        Set to ``true`` to validate peer certificates::
-
-            [ssl]
-            verify_ssl_certificates = false
-
-    .. config:option:: fail_if_no_peer_cert :: Require presence of client certificate if certificate verification is enabled
-
-        Set to ``true`` to terminate the TLS/SSL handshake with a
-        ``handshake_failure`` alert message if the client does not send a
-        certificate. Only used if ``verify_ssl_certificates`` is ``true``. If set
-        to ``false`` it will only fail if the client sends an invalid certificate
-        (an empty certificate is considered valid)::
-
-            [ssl]
-            fail_if_no_peer_cert = false
-
-    .. config:option:: secure_renegotiate :: Enable secure renegotiation
-
-        Set to ``true`` to reject renegotiation attempts that do not live up
-        to RFC 5746::
-
-            [ssl]
-            secure_renegotiate = true
-
-    .. config:option:: ciphers :: Specify permitted server cipher list
-
-        Set to the cipher suites that should be supported, which can be
-        specified in Erlang format ``{ecdhe_ecdsa,aes_128_cbc,sha256}`` or
-        in OpenSSL format ``ECDHE-ECDSA-AES128-SHA256``. ::
-
-            [ssl]
-            ciphers = ["ECDHE-ECDSA-AES128-SHA256", "ECDHE-ECDSA-AES128-SHA"]
-
-    .. config:option:: tls_versions :: Specify permitted server SSL/TLS protocol versions
-
-        Set to a list of permitted SSL/TLS protocol versions::
-
-            [ssl]
-            tls_versions = [tlsv1 | 'tlsv1.1' | 'tlsv1.2']
-
-.. _cors:
-.. _config/cors:
-
-Cross-Origin Resource Sharing
-=============================
-
-.. config:section:: cors :: Cross-Origin Resource Sharing
-
-    .. versionadded:: 1.3 added CORS support, see JIRA :issue:`431`
-    .. versionchanged:: 3.2 moved from [httpd] to [chttpd] section
-
-    `CORS`, or "Cross-Origin Resource Sharing", allows a resource such as a web
-    page running JavaScript inside a browser, to make AJAX requests
-    (XMLHttpRequests) to a different domain, without compromising the security
-    of either party.
-
-    A typical use case is to have a static website hosted on a CDN make
-    requests to another resource, such as a hosted CouchDB instance. This
-    avoids needing an intermediary proxy, using `JSONP` or similar workarounds
-    to retrieve and host content.
-
-    While the document attachment support in CouchDB's integrated HTTP server
-    makes this less of a constraint for pure CouchDB projects, there are many
-    cases where separating the static content from the database access is
-    desirable, and CORS makes this very straightforward.
-
-    By supporting CORS functionality, a CouchDB instance can accept direct
-    connections to protected databases and instances, without the browser
-    functionality being blocked due to same-origin constraints. CORS is
-    supported today on over 90% of recent browsers.
-
-    CORS support is provided as experimental functionality in 1.3, and as such
-    will need to be enabled specifically in CouchDB's configuration. While all
-    origins are forbidden from making requests by default, support is available
-    for simple requests, preflight requests and per-vhost configuration.
-
-    This section requires the :option:`chttpd/enable_cors` option to be set
-    to ``true``::
-
-        [chttpd]
-        enable_cors = true
-
-    .. config:option:: credentials :: Enable credentials support in CouchDB
-
-        By default, neither authentication headers nor cookies are included in
-        requests and responses. To do so requires both setting
-        ``XmlHttpRequest.withCredentials = true`` on the request object in the
-        browser and enabling credentials support in CouchDB. ::
-
-            [cors]
-            credentials = true
-
-        CouchDB will respond to a credentials-enabled CORS request with an
-        additional header, ``Access-Control-Allow-Credentials=true``.
-
-    .. config:option:: origins :: List of allowed origins
-
-        List of origins separated by a comma; ``*`` means accept all. You
-        cannot set the ``origins = *`` and ``credentials = true`` options at
-        the same time::
-
-            [cors]
-            origins = *
-
-        Access can be restricted by protocol, host, and optionally by port.
-        Origins must follow the format ``scheme://host[:port]``, for example
-        ``http://example.com:80``. ::
-
-            [cors]
-            origins = http://localhost, https://localhost, http://couch.mydev.name:8080
-
-        Note that by default, no origins are accepted. You must define them
-        explicitly.
-
-    .. config:option:: headers :: List of accepted headers
-
-        List of accepted headers separated by a comma::
-
-            [cors]
-            headers = X-Couch-Id, X-Couch-Rev
-
-    .. config:option:: methods :: List of accepted methods
-
-        List of accepted methods::
-
-            [cors]
-            methods = GET,POST
-
-    .. config:option:: max_age :: Set Access-Control-Max-Age header
-
-        Sets the ``Access-Control-Max-Age`` header in seconds. Use it to
-        avoid repeated ``OPTIONS`` requests. ::
-
-            [cors]
-            max_age = 3600
-
-    .. seealso::
-        Original JIRA `implementation ticket <https://issues.apache.org/jira/browse/COUCHDB-431>`_
-
-        Standards and References:
-
-        - IETF RFCs relating to methods: :rfc:`2618`, :rfc:`2817`, :rfc:`5789`
-        - IETF RFC for Web Origins: :rfc:`6454`
-        - W3C `CORS standard <http://www.w3.org/TR/cors>`_
-
-        Mozilla Developer Network Resources:
-
-        - `Same origin policy for URIs <https://developer.mozilla.org/en-US/docs/Same-origin_policy_for_file:_URIs>`_
-        - `HTTP Access Control <https://developer.mozilla.org/En/HTTP_access_control>`_
-        - `Server-side Access Control <https://developer.mozilla.org/En/Server-Side_Access_Control>`_
-        - `JavaScript same origin policy <https://developer.mozilla.org/en-US/docs/Same_origin_policy_for_JavaScript>`_
-
-        Client-side CORS support and usage:
-
-        - `CORS browser support matrix <http://caniuse.com/cors>`_
-        - `CORS tutorial <http://www.html5rocks.com/en/tutorials/cors/>`_
-        - `XHR with CORS <http://hacks.mozilla.org/2009/07/cross-site-xmlhttprequest-with-cors/>`_
-
-Per Virtual Host Configuration
-------------------------------
-
-.. warning::
-
-    Virtual Hosts are deprecated in CouchDB 3.0, and will be removed in CouchDB 4.0.
-
-To set the options for a :section:`vhosts`, you will need to create a section
-with the vhost name prefixed by ``cors:``. An example for the vhost
-`example.com`::
-
-    [cors:example.com]
-    credentials = false
-    ; List of origins separated by a comma
-    origins = *
-    ; List of accepted headers separated by a comma
-    headers = X-CouchDB-Header
-    ; List of accepted methods
-    methods = HEAD, GET
-
-A video from 2010 on vhost and rewrite configuration `is available
-<https://vimeo.com/20773112>`_, but is not guaranteed to match current syntax
-or behaviour.
-
-.. _config/vhosts:
-
-Virtual Hosts
-=============
-
-.. warning::
-
-    Virtual Hosts are deprecated in CouchDB 3.0, and will be removed in CouchDB 4.0.
-
-.. config:section:: vhosts :: Virtual Hosts
-
-    CouchDB can map requests to different locations based on the ``Host``
-    header, even if they arrive on the same inbound IP address.
-
-    This allows different virtual hosts on the same machine to map to different
-    databases or design documents, etc. The most common use case is to map a
-    virtual host to a :ref:`Rewrite Handler <api/ddoc/rewrite>`, to provide
-    full control over the application's URIs.
-
-    To add a virtual host, add a `CNAME` pointer to the DNS for your domain
-    name. For development and testing, it is sufficient to add an entry in the
-    hosts file, typically ``/etc/hosts`` on Unix-like operating systems:
-
-    .. code-block:: text
-
-        # CouchDB vhost definitions, refer to local.ini for further details
-        127.0.0.1       couchdb.local
-
-    Test that this is working:
-
-    .. code-block:: bash
-
-        $ ping -n 2 couchdb.local
-        PING couchdb.local (127.0.0.1) 56(84) bytes of data.
-        64 bytes from localhost (127.0.0.1): icmp_req=1 ttl=64 time=0.025 ms
-        64 bytes from localhost (127.0.0.1): icmp_req=2 ttl=64 time=0.051 ms
-
-    Finally, add an entry to your :ref:`configuration file <config>` in the
-    ``[vhosts]`` section::
-
-        [vhosts]
-        couchdb.local:5984 = /example
-        *.couchdb.local:5984 = /example
-
-    If your CouchDB is listening on the default HTTP port (80), or is
-    sitting behind a proxy, then you don't need to specify a port number in the
-    ``vhost`` key.
-
-    The first line will rewrite the request to display the content of the
-    `example` database. This rule works only if the ``Host`` header is
-    ``couchdb.local`` and won't work for `CNAMEs`. The second rule, on the
-    other hand, matches all `CNAMEs` to `example` db, so that both
-    `www.couchdb.local` and `db.couchdb.local` will work.
-
-Rewriting Hosts to a Path
--------------------------
-
-Like in the :ref:`_rewrite <api/ddoc/rewrite>` handler, you can match some
-variables and use them to create the target path. Some examples::
-
-    [vhosts]
-    *.couchdb.local = /*
-    :dbname. = /:dbname
-    :ddocname.:dbname.example.com = /:dbname/_design/:ddocname/_rewrite
-
-The first rule passes the wildcard as ``dbname``. The second one does the same,
-but uses a variable name. And the third one allows you to use any URL with
-``ddocname`` in any database with ``dbname``.
-
-.. _xframe_options:
-.. _config/xframe_options:
-
-X-Frame-Options
-===============
-
-X-Frame-Options is a response header that controls whether an HTTP response
-can be embedded in a ``<frame>``, ``<iframe>`` or ``<object>``. This is a
-security feature that helps protect against clickjacking. ::
-
-    [x_frame_options]
-    ; Setting same_origin will return X-Frame-Options: SAMEORIGIN.
-    ; If same_origin is set, it will ignore the hosts setting
-    ; same_origin = true
-    ; Setting hosts will
-    ; return X-Frame-Options: ALLOW-FROM https://example.com/
-    ; List of hosts separated by a comma. * means accept all
-    ; hosts =
-
-If xframe_options is enabled it will return ``X-Frame-Options: DENY`` by default.
-If ``same_origin`` is enabled it will return ``X-Frame-Options: SAMEORIGIN``.
-An ``X-Frame-Options: ALLOW-FROM url`` header will be returned when
-``same_origin`` is false and the ``Host`` header matches one of the URLs in
-the ``hosts`` config. Otherwise, ``X-Frame-Options: DENY`` will be returned.
diff --git a/src/config/index.rst b/src/config/index.rst
deleted file mode 100644
index 7b5f1ba..0000000
--- a/src/config/index.rst
+++ /dev/null
@@ -1,35 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _config:
-
-=============
-Configuration
-=============
-
-.. toctree::
-    :maxdepth: 2
-
-    intro
-    couchdb
-    cluster
-    couch-peruser
-    http
-    auth
-    compaction
-    indexbuilds
-    ioq
-    logging
-    replicator
-    query-servers
-    misc
-    resharding
diff --git a/src/config/indexbuilds.rst b/src/config/indexbuilds.rst
deleted file mode 100644
index 103633c..0000000
--- a/src/config/indexbuilds.rst
+++ /dev/null
@@ -1,67 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. default-domain:: config
-.. highlight:: ini
-
-.. _config/index_builds:
-
-===================
-Background Indexing
-===================
-
-Secondary indexes in CouchDB are not updated during document write operations. In order to
-avoid high latencies when reading indexes following a large block of writes, CouchDB
-automatically kicks off background jobs to keep secondary indexes "warm". The daemon
-responsible for this process is internally known as "ken" and can be configured using the
-following settings.
-
-.. config:section:: ken :: Background Index Builds
-
-    .. config:option:: batch_channels :: Steady-state build concurrency
-
-        This setting controls the number of background view builds that can be running in
-        parallel at any given time. The default is 20.
-
-    .. config:option:: incremental_channels :: Additional slots for short jobs
-
-        It is possible for all the slots in the normal build system to be occupied by
-        long-running index rebuilds (e.g. if new design documents are posted to several
-        databases simultaneously). To prevent already-built indexes from falling
-        behind when this occurs, CouchDB allows a number of short background
-        indexing jobs to run even when all slots are full. This setting controls how many
-        additional short jobs are allowed to run concurrently with the main jobs. The
-        default is 80.
-
-    .. config:option:: max_incremental_updates :: Threshold defining a job as short
-
-        CouchDB estimates whether an indexing job is "incremental" or not by looking at
-        the difference in sequence numbers between the current index and the main
-        database. If the difference is larger than the threshold defined here the
-        background job will only be allowed to run in the main queue. Defaults to 1000.
-
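The threshold check can be sketched as follows, assuming the sequence-number difference is the measure of lag (the function name and signature are illustrative, not ken's actual API):

```python
def classify_index_job(db_update_seq, index_update_seq,
                       max_incremental_updates=1000):
    """Return which channel set a background index build may use."""
    lag = db_update_seq - index_update_seq
    # Small lag: the job also qualifies for the extra incremental slots.
    return "incremental" if lag <= max_incremental_updates else "batch_only"
```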
-.. config:section:: ken.ignore :: Auto-Indexing Blocklist
-
-Entries in this configuration section can be used to tell the background indexer to skip
-over specific database shard files. The key must be the exact name of the shard with the
-``.couch`` suffix omitted, for example:
-
-    .. code-block:: ini
-
-        [ken.ignore]
-        shards/00000000-1fffffff/mydb.1567719095 = true
-
-    .. note::
-        If you would like to skip all views in a design document, you may add
-        ``autoupdate: false`` to it. All views of that design document will then be skipped.
-
-        More at :http:put:`/{db}/_design/{ddoc}`.
diff --git a/src/config/intro.rst b/src/config/intro.rst
deleted file mode 100644
index 483b1a5..0000000
--- a/src/config/intro.rst
+++ /dev/null
@@ -1,177 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. default-domain:: config
-.. _config/intro:
-
-===========================
-Introduction To Configuring
-===========================
-
-Configuration files
-===================
-
-By default, CouchDB reads configuration files from the following locations,
-in the following order:
-
-#. ``etc/default.ini``
-#. ``etc/default.d/*.ini``
-#. ``etc/local.ini``
-#. ``etc/local.d/*.ini``
-
-Configuration files in the ``*.d/`` directories are loaded in name order. For
-example, a file named ``etc/local.d/00-shared.ini`` is loaded before
-``etc/local.d/10-server-specific.ini``.
-
-All paths are specified relative to the CouchDB installation directory:
-``/opt/couchdb`` recommended on UNIX-like systems, ``C:\CouchDB`` recommended
-on Windows systems, and a combination of two directories on macOS:
-``Applications/Apache CouchDB.app/Contents/Resources/couchdbx-core/etc`` for
-the ``default.ini`` and ``default.d`` directories, and one of
-``/Users/<your-user>/Library/Application Support/CouchDB2/etc/couchdb`` or
-``/Users/<your-user>/Library/Preferences/couchdb2-local.ini`` for
-the ``local.ini`` and ``local.d`` directories.
-
-Settings in later files override settings in earlier ones.
-For example, setting the :option:`chttpd/bind_address` parameter in
-``local.ini`` would override any setting in ``default.ini``.
-
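The last-one-wins merge behaviour can be illustrated with Python's ``configparser``, which combines successive reads the same way (a sketch; the section and values here are examples only):

```python
from configparser import ConfigParser

parser = ConfigParser()
# Simulate the chain: default.ini is read first, local.ini after it.
parser.read_string("[chttpd]\nbind_address = 127.0.0.1\n")  # default.ini
parser.read_string("[chttpd]\nbind_address = 0.0.0.0\n")    # local.ini
# The later value wins, just like CouchDB's configuration chain.
print(parser["chttpd"]["bind_address"])
```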
-.. warning::
-    The ``default.ini`` file may be overwritten during an upgrade or
-    re-installation, so localised changes should be made to the ``local.ini``
-    file or files within the ``local.d`` directory.
-
-.. highlight:: sh
-
-The configuration file chain may be changed by setting the ERL_FLAGS
-environment variable::
-
-    export ERL_FLAGS="-couch_ini /path/to/my/default.ini /path/to/my/local.ini"
-
-or by placing the ``-couch_ini ..`` flag directly in the ``etc/vm.args`` file.
-Passing ``-couch_ini ..`` as a command-line argument when launching ``couchdb``
-is the same as setting the ``ERL_FLAGS`` environment variable.
-
-.. warning::
-    The environment variable/command-line flag overrides any ``-couch_ini``
-    option specified in the ``etc/vm.args`` file. And, **BOTH** of these
-    options **completely** override CouchDB from searching in the default
-    locations. Use these options only when necessary, and be sure to track
-    the contents of ``etc/default.ini``, which may change in future releases.
-
-If you need to use different ``vm.args`` or ``sys.config`` files, for
-example in locations other than the ones provided by CouchDB, or you don't
-want to edit the original files, the default locations may be changed by
-setting the ``COUCHDB_ARGS_FILE`` or ``COUCHDB_SYSCONFIG_FILE`` environment
-variables::
-
-    export COUCHDB_ARGS_FILE="/path/to/my/vm.args"
-    export COUCHDB_SYSCONFIG_FILE="/path/to/my/sys.config"
-
-Parameter names and values
-==========================
-
-All parameter names are *case-sensitive*. Every parameter takes a value of one
-of five types: `boolean`, `integer`, `string`, `tuple`_ and `proplist`_.
-Boolean values can be written as ``true`` or ``false``.
-
-Parameters with a value type of `tuple` or `proplist` follow the Erlang
-conventions for style and naming.
-
-.. _proplist: http://www.erlang.org/doc/man/proplists.html
-.. _tuple: http://www.erlang.org/doc/reference_manual/data_types.html#id66049
-
-Setting parameters via the configuration file
-=============================================
-
-The most common way to set parameters is to edit the ``local.ini`` file
-(location explained above).
-
-.. highlight:: ini
-
-For example::
-
-    ; This is a comment
-    [section]
-    param = value ; inline comments are allowed
-
-Each line in a configuration file may be a `section` definition, a `parameter`
-specification, an empty line (space and newline characters only), or a
-`commented` line. `Inline` comments may follow `sections` and `parameters`.
-
-A `section` defines a group of parameters that belong to a specific CouchDB
-subsystem. For instance, the :section:`httpd` section holds not only HTTP
-server parameters, but also others that directly interact with it.
-
-A `parameter` specification consists of two parts divided by the equal sign
-(``=``): the parameter name on the left side and the parameter value on the
-right. Whitespace around ``=`` is optional and improves configuration
-readability.
-
-.. note::
-    In case when you'd like to remove some parameter from the `default.ini`
-    without modifying that file, you may override in `local.ini`, but without
-    any value::
-
-        [compactions]
-        _default =
-
-    This could be read as: "remove the `_default` parameter from the
-    `compactions` section if it was ever set before".
-
-The semicolon (``;``) signals the start of a comment. Everything after this
-character is ignored by CouchDB.
-
-After editing the configuration file, CouchDB should be restarted to apply
-any changes.
-
-Setting parameters via the HTTP API
-===================================
-
-.. highlight:: sh
-
-Alternatively, configuration parameters can be set via the
-:ref:`HTTP API <api/config>`. This API allows changing CouchDB configuration
-on-the-fly without requiring a server restart::
-
-    curl -X PUT http://localhost:5984/_node/<name@host>/_config/uuids/algorithm -d '"random"'
-
-The parameter's old value is returned in the response::
-
-    "sequential"
-
-You should be careful changing configuration via the HTTP API since it is
-possible to make CouchDB unreachable, for example, by changing the
-:option:`chttpd/bind_address`::
-
-    curl -X PUT http://localhost:5984/_node/<name@host>/_config/chttpd/bind_address -d '"10.10.0.128"'
-
-If you make a typo or the specified IP address is not available from your
-network, CouchDB will be unreachable. The only way to resolve this will be
-to remote into the server, correct the config file, and restart CouchDB. To
-protect yourself against such accidents you may set the
-:option:`chttpd/config_whitelist` of permitted configuration parameters for
-updates via the HTTP API. Once this option is set, further changes to
-non-whitelisted parameters must take place via the configuration file, and in
-most cases, will also require a server restart before taking effect.
-
-Configuring the local node
-==========================
-
-.. highlight:: sh
-
-While the :ref:`HTTP API <api/config>` allows configuring all nodes in the
-cluster, as a convenience, you can use the literal string ``_local`` in place
-of the node name, to interact with the local node's configuration.  For
-example::
-
-    curl -X PUT http://localhost:5984/_node/_local/_config/uuids/algorithm -d '"random"'
diff --git a/src/config/ioq.rst b/src/config/ioq.rst
deleted file mode 100644
index ef931b3..0000000
--- a/src/config/ioq.rst
+++ /dev/null
@@ -1,109 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. default-domain:: config
-.. highlight:: ini
-
-.. _config/ioq:
-
-========
-IO Queue
-========
-
-CouchDB has an internal subsystem that can prioritize IO associated with certain
-classes of operations. This subsystem can be configured to limit the resources
-devoted to background operations like internal replication and compaction
-according to the settings described below.
-
-.. config:section:: ioq :: IO Queue Configuration
-
-    .. config:option:: concurrency :: Number of in-flight IO requests
-
-        Specifies the maximum number of concurrent in-flight IO requests that
-        the queueing system will submit::
-
-            [ioq]
-            concurrency = 10
-
-    .. config:option:: ratio :: Preference for selecting background over interactive IO
-
-        The fraction of the time that a background IO request will be selected
-        over an interactive IO request when both queues are non-empty::
-
-            [ioq]
-            ratio = 0.01
-
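The ``ratio`` parameter can be read as a selection probability. A sketch of the scheduler's choice when both queues are non-empty (illustrative only, not CouchDB's actual implementation):

```python
import random

def select_queue(ratio=0.01, rng=random.random):
    """Pick which non-empty queue to service next."""
    # With probability `ratio`, a background request jumps ahead of
    # interactive work; otherwise interactive IO is served first.
    return "background" if rng() < ratio else "interactive"

random.seed(42)
picks = [select_queue(0.01) for _ in range(10_000)]
# With ratio = 0.01, roughly 1% of selections favour background IO.
background_share = picks.count("background") / len(picks)
```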
-.. config:section:: ioq.bypass :: Bypass Selected IO Classes
-
-    System administrators can choose to submit specific classes of IO directly
-    to the underlying file descriptor or OS process, bypassing the queues
-    altogether. Installing a bypass can yield higher throughput and lower
-    latency, but relinquishes some control over prioritization. The following
-    classes are recognized:
-
-    .. config:option:: os_process :: Bypass IO messages to external processes
-
-        Messages on their way to an external process (e.g., ``couchjs``).
-
-    .. config:option:: read :: Bypass Disk IO read requests
-
-        Disk IO fulfilling interactive read requests.
-
-    .. config:option:: write :: Bypass Disk IO write requests
-
-        Disk IO required to update a database.
-
-    .. config:option:: view_update :: Bypass Disk IO view update requests
-
-        Disk IO required to update views and other secondary indexes.
-
-    .. config:option:: shard_sync :: Bypass Disk IO for shard syncing requests
-
-        Disk IO issued by the background replication processes that fix any
-        inconsistencies between shard copies.
-
-    .. config:option:: compaction :: Bypass Disk IO for compaction job requests
-
-        Disk IO issued by compaction jobs.
-
-    Without any configuration CouchDB will enqueue all classes of IO. The
-    default.ini configuration file that ships with CouchDB activates a bypass
-    for each of the interactive IO classes, so that only background IO goes into the
-    queueing system::
-
-        [ioq.bypass]
-        os_process = true
-        read = true
-        write = true
-        view_update = true
-        shard_sync = false
-        compaction = false
-
-Recommendations
-===============
-
-The default configuration protects against excessive IO from background
-operations like compaction disrupting the latency of interactive operations,
-while maximizing the overall IO throughput devoted to those interactive
-requests. There are certain situations where this configuration could be
-sub-optimal:
-
-* An administrator may want to devote a larger portion of the overall IO
-  bandwidth to compaction in order to stay ahead of the incoming write load. In
-  this case it may be necessary to disable the bypass for ``write`` (to help with
-  database compaction) and/or ``view_update`` (to help with view index compaction)
-  and then increase the ``ratio`` to give compaction a higher priority.
-
-* A server with a large number of views that do not need to be completely
-  up-to-date may benefit from removing the bypass on ``view_update`` in order to
-  optimize the latency for regular document read and write operations, and build
-  the views during quieter periods.
diff --git a/src/config/logging.rst b/src/config/logging.rst
deleted file mode 100644
index 26422f6..0000000
--- a/src/config/logging.rst
+++ /dev/null
@@ -1,153 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. default-domain:: config
-.. highlight:: ini
-
-=======
-Logging
-=======
-
-.. _config/log:
-
-Logging options
-================
-
-.. config:section:: log :: Logging Options
-
-    CouchDB logging configuration.
-
-    .. config:option:: writer :: Set the log writer to use.
-
-        Current writers include:
-
-        - ``stderr``: Logs are sent to stderr.
-        - ``file``: Logs are sent to the file set in
-          :option:`log file <log/file>`.
-        - ``syslog``: Logs are sent to the syslog daemon.
-        - ``journald``: Logs are sent to stderr without timestamps, using log
-          levels compatible with sd-daemon.
-
-        You can also specify a full module name here if you implement your own
-        writer::
-
-            [log]
-            writer = stderr
-
-    .. config:option:: file :: Logging file path
-
-        Specifies the location of the file for logging output. Only used by the
-        ``file`` :option:`writer <log/writer>`::
-
-            [log]
-            file = /var/log/couchdb/couch.log
-
-        This path should be readable and writable for the user that runs the
-        CouchDB service (`couchdb` by default).
-
-    .. config:option:: write_buffer :: File log write buffer size
-
-       Specifies the size of the file log write buffer in bytes, to enable
-       delayed log writes. Only used by the ``file``
-       :option:`writer <log/writer>`::
-
-            [log]
-            write_buffer = 0
-
-    .. config:option:: write_delay :: Wait delay before committing logs to disk
-
-        Specifies the wait in milliseconds before committing logs to disk, to
-        enable delayed log writes. Only used by the ``file``
-        :option:`writer <log/writer>`::
-
-            [log]
-            write_delay = 0
-
-    .. config:option:: level :: Logging verbose level
-
-        .. versionchanged:: 1.3 Added ``warning`` level.
-
-        Logging level defines how verbose and detailed logging will be::
-
-            [log]
-            level = info
-
-        Available levels:
-
-        - ``debug``: Detailed debug logging.
-        - ``info``: Informative logging. Includes HTTP request headlines,
-          startup of external processes, etc.
-        - ``notice``
-        - ``warning`` or ``warn``: Warning messages are alerts about edge situations that
-          may lead to errors. For instance, the compaction daemon alerts about
-          low or insufficient disk space at this level.
-        - ``error`` or ``err``: Error level includes only things that go wrong, like crash
-          reports and HTTP error responses (5xx codes).
-        - ``critical`` or ``crit``
-        - ``alert``
-        - ``emergency`` or ``emerg``
-        - ``none``: Disables logging any messages.
-
-    .. config:option:: include_sasl :: Include SASL information in logs
-
-        Includes `SASL`_ information in logs::
-
-            [log]
-            include_sasl = true
-
-        .. _SASL: http://www.erlang.org/doc/apps/sasl/
-
-    .. config:option:: syslog_host :: Syslog host
-
-        .. note::
-
-            Setting `syslog_host` is mandatory for ``syslog`` to work!
-
-        Specifies the syslog host to send logs to. Only used by the
-        ``syslog`` :option:`writer <log/writer>`::
-
-            [log]
-            syslog_host = localhost
-
-    .. config:option:: syslog_port :: Syslog port
-
-        Specifies the syslog port to connect to when sending logs. Only used by
-        the ``syslog`` :option:`writer <log/writer>`::
-
-            [log]
-            syslog_port = 514
-
-    .. config:option:: syslog_appid :: Application name for syslog
-
-        Specifies the application name used by the ``syslog``
-        :option:`writer <log/writer>`::
-
-            [log]
-            syslog_appid = couchdb
-
-    .. config:option:: syslog_facility :: Syslog designations for message sources
-
-        Specifies the syslog facility to use with the ``syslog``
-        :option:`writer <log/writer>`::
-
-            [log]
-            syslog_facility = local2
-
-    .. note::
-        CouchDB's ``syslog`` only knows how to use UDP logging. Please ensure that your
-        ``syslog`` server has UDP logging enabled.
-
-        For ``rsyslog`` you can enable the UDP module `imudp` in ``/etc/rsyslog.conf``::
-
-            # provides UDP syslog reception
-            module(load="imudp")
-            input(type="imudp" port="514")
diff --git a/src/config/misc.rst b/src/config/misc.rst
deleted file mode 100644
index 5c76d2a..0000000
--- a/src/config/misc.rst
+++ /dev/null
@@ -1,359 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. default-domain:: config
-.. highlight:: ini
-
-========================
-Miscellaneous Parameters
-========================
-
-.. _config/attachments:
-
-Configuration of Attachment Storage
-===================================
-
-.. config:section:: attachments :: Configuration of Attachment Storage
-
-    .. config:option:: compression_level :: Set zlib compression level
-
-        Defines zlib compression level for the attachments from ``1`` (lowest,
-        fastest) to ``9`` (highest, slowest). A value of ``0`` disables
-        compression::
-
-            [attachments]
-            compression_level = 8
-
-    .. config:option:: compressible_types :: List of attachment types for compression
-
-        Since compression is ineffective for some types of files, it is
-        possible to let CouchDB compress only some types of attachments,
-        specified by their MIME type::
-
-            [attachments]
-            compressible_types = text/*, application/javascript, application/json, application/xml
-
-.. _config/stats:
-
-Statistic Calculation
-=====================
-
-.. config:section:: stats :: Statistic Calculation
-
-    .. config:option:: interval :: Statistics gathering interval
-
-        Interval between gathering statistics in seconds::
-
-            [stats]
-            interval = 10
-
-.. _config/uuids:
-
-UUIDs Configuration
-===================
-
-.. config:section:: uuids :: UUIDs Configuration
-
-    .. config:option:: algorithm :: Generation Algorithm
-
-        .. versionchanged:: 1.3 Added ``utc_id`` algorithm.
-
-        CouchDB provides various algorithms to generate the UUID values that
-        are used for document `_id`'s by default::
-
-            [uuids]
-            algorithm = sequential
-
-        Available algorithms:
-
-        - ``random``: 128 bits of random awesome. All awesome, all the time:
-
-          .. code-block:: javascript
-
-              {
-                  "uuids": [
-                      "5fcbbf2cb171b1d5c3bc6df3d4affb32",
-                      "9115e0942372a87a977f1caf30b2ac29",
-                      "3840b51b0b81b46cab99384d5cd106e3",
-                      "b848dbdeb422164babf2705ac18173e1",
-                      "b7a8566af7e0fc02404bb676b47c3bf7",
-                      "a006879afdcae324d70e925c420c860d",
-                      "5f7716ee487cc4083545d4ca02cd45d4",
-                      "35fdd1c8346c22ccc43cc45cd632e6d6",
-                      "97bbdb4a1c7166682dc026e1ac97a64c",
-                      "eb242b506a6ae330bda6969bb2677079"
-                  ]
-              }
-
-        - ``sequential``: Monotonically increasing ids with random increments.
-          The first 26 hex characters are random, the last 6 increment in
-          random amounts until an overflow occurs. On overflow, the random
-          prefix is regenerated and the process starts over.
-
-          .. code-block:: javascript
-
-              {
-                  "uuids": [
-                      "4e17c12963f4bee0e6ec90da54804894",
-                      "4e17c12963f4bee0e6ec90da5480512f",
-                      "4e17c12963f4bee0e6ec90da54805c25",
-                      "4e17c12963f4bee0e6ec90da54806ba1",
-                      "4e17c12963f4bee0e6ec90da548072b3",
-                      "4e17c12963f4bee0e6ec90da54807609",
-                      "4e17c12963f4bee0e6ec90da54807718",
-                      "4e17c12963f4bee0e6ec90da54807754",
-                      "4e17c12963f4bee0e6ec90da54807e5d",
-                      "4e17c12963f4bee0e6ec90da54808d28"
-                  ]
-              }
-
-        - ``utc_random``: The time since Jan 1, 1970 UTC, in microseconds. The
-          first 14 characters are the time in hex. The last 18 are random.
-
-          .. code-block:: javascript
-
-              {
-                  "uuids": [
-                      "04dd32b3af699659b6db9486a9c58c62",
-                      "04dd32b3af69bb1c2ac7ebfee0a50d88",
-                      "04dd32b3af69d8591b99a8e86a76e0fb",
-                      "04dd32b3af69f4a18a76efd89867f4f4",
-                      "04dd32b3af6a1f7925001274bbfde952",
-                      "04dd32b3af6a3fe8ea9b120ed906a57f",
-                      "04dd32b3af6a5b5c518809d3d4b76654",
-                      "04dd32b3af6a78f6ab32f1e928593c73",
-                      "04dd32b3af6a99916c665d6bbf857475",
-                      "04dd32b3af6ab558dd3f2c0afacb7d66"
-                  ]
-              }
-
-        - ``utc_id``: The time since Jan 1, 1970 UTC, in microseconds, plus the
-          ``utc_id_suffix`` string. The first 14 characters are the time in
-          hex. The :option:`uuids/utc_id_suffix` string value is appended to
-          these.
-
-          .. code-block:: javascript
-
-              {
-                  "uuids": [
-                      "04dd32bd5eabcc@mycouch",
-                      "04dd32bd5eabee@mycouch",
-                      "04dd32bd5eac05@mycouch",
-                      "04dd32bd5eac28@mycouch",
-                      "04dd32bd5eac43@mycouch",
-                      "04dd32bd5eac58@mycouch",
-                      "04dd32bd5eac6e@mycouch",
-                      "04dd32bd5eac84@mycouch",
-                      "04dd32bd5eac98@mycouch",
-                      "04dd32bd5eacad@mycouch"
-                  ]
-              }
-
-        .. note::
-            **Impact of UUID choices:** the choice of UUID has a significant
-            impact on the layout of the B-tree, prior to compaction.
-
-            For example, using a sequential UUID algorithm while uploading a
-            large batch of documents will avoid the need to rewrite many
-            intermediate B-tree nodes. A random UUID algorithm may require
-            rewriting intermediate nodes on a regular basis, resulting in
-            significantly decreased throughput and wasted disk space due to
-            the append-only B-tree design.
-
-            It is generally recommended to set your own UUIDs, or use the
-            sequential algorithm unless you have a specific need and take into
-            account the likely need for compaction to re-balance the B-tree and
-            reclaim wasted space.
-
-    .. config:option:: utc_id_suffix :: UTC ID Suffix
-
-        .. versionadded:: 1.3
-
-        The ``utc_id_suffix`` value will be appended to UUIDs generated by the
-        ``utc_id`` algorithm. Replicating instances should have unique
-        ``utc_id_suffix`` values to ensure uniqueness of ``utc_id`` ids. ::
-
-            [uuids]
-            utc_id_suffix = my-awesome-suffix
-
-    .. config:option:: max_count :: Per-Request UUID Limit
-
-        .. versionadded:: 1.5.1
-
-        No more than this number of UUIDs will be sent in a single request. If
-        more UUIDs are requested, an HTTP error response will be returned. ::
-
-            [uuids]
-            max_count = 1000
-
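Because the first 14 hex characters of ``utc_random`` and ``utc_id`` values encode microseconds since the Unix epoch, the creation time can be recovered from the id itself. A small illustrative sketch (the function names are our own, not part of CouchDB):

```python
import os
import time

def utc_random_uuid():
    """Build a utc_random-style id: 14 hex time chars + 18 random hex chars."""
    micros = int(time.time() * 1_000_000)
    return f"{micros:014x}" + os.urandom(9).hex()

def uuid_timestamp(uuid):
    """Recover seconds since the epoch from the 14-char hex prefix."""
    return int(uuid[:14], 16) / 1_000_000
```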
-.. _config/vendor:
-
-Vendor information
-==================
-
-.. config:section:: vendor :: Vendor information
-
-    .. versionadded:: 1.3
-
-    CouchDB distributors have the option of customizing CouchDB's welcome
-    message. This is returned when requesting ``GET /``. ::
-
-        [vendor]
-        name = The Apache Software Foundation
-        version = 1.5.0
-
-.. _config/csp:
-
-Content-Security-Policy
-=======================
-
-.. config:section:: csp :: Content-Security-Policy
-
-    You can configure the ``Content-Security-Policy`` header for Fauxton, attachments and
-    show/list functions separately. See `MDN Content-Security-Policy <https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy>`_
-    for more details on CSP.
-
-    .. config:option:: utils_enable :: Enable Content-Security-Policy header (Fauxton)
-
-        Enable the sending of the header ``Content-Security-Policy`` for ``/_utils``.
-        Defaults to ``true``::
-
-            [csp]
-            utils_enable = true
-
-    .. config:option:: utils_header_value :: Set CSP-Header value (Fauxton)
-
-        Specifies the exact header value to send. Defaults to::
-
-            [csp]
-            utils_header_value = default-src 'self'; img-src 'self'; font-src *; script-src 'self' 'unsafe-eval'; style-src 'self' 'unsafe-inline';
-
-    .. config:option:: attachments_enable :: Enable CSP-Header (attachments)
-
-        Enable sending the ``Content-Security-Policy`` header for attachments::
-
-            [csp]
-            attachments_enable = true
-
-    .. config:option:: attachments_header_value :: Set CSP-Header value (attachments)
-
-        Specifies the exact header value to send. Defaults to::
-
-            [csp]
-            attachments_header_value = sandbox
-
-    .. config:option:: showlist_enable :: Enable CSP-Header (Show/List-Functions)
-
-        Enable sending the ``Content-Security-Policy`` header for show and list functions::
-
-            [csp]
-            showlist_enable = true
-
-    .. config:option:: showlist_header_value :: Set CSP-Header value (Show/List-Functions)
-
-        Specifies the exact header value to send. Defaults to::
-
-            [csp]
-            showlist_header_value = sandbox
-
-    The pre 3.2.0 behaviour is still honoured, but we recommend updating
-    to the new format.
-
-    The deprecated options below provided experimental support of CSP headers
-    for ``/_utils`` (Fauxton).
-
-    .. config:option:: enable :: Enable CSP-Header (deprecated)
-
-        Enable the sending of the Header ``Content-Security-Policy``::
-
-            [csp]
-            enable = true
-
-    .. config:option:: header_value :: Set CSP-Header value (deprecated)
-
-        You can change the default value for the Header which is sent::
-
-            [csp]
-            header_value = default-src 'self'; img-src *; font-src *;
-
-.. _config/purge:
-
-Configuration of Database Purge
-===============================
-
-.. config:section:: purge :: Configuration of Database Purge
-
-    .. config:option:: max_document_id_number :: Allowed number of documents \
-        per Delete-Request
-
-        .. versionadded:: 3.0
-
-        Sets the maximum number of documents allowed in a single purge request::
-
-            [purge]
-            max_document_id_number = 100
-
-    .. config:option:: max_revisions_number :: Allowed number of accumulated \
-        revisions per Purge-Request
-
-        .. versionadded:: 3.0
-
-        Sets the maximum number of accumulated revisions allowed in a single purge
-        request::
-
-            [purge]
-            max_revisions_number = 1000
-
-    .. config:option:: index_lag_warn_seconds :: Allowed duration for purge \
-        checkpoint document
-
-        .. versionadded:: 3.0
-
-        Sets the allowed duration for which an index may remain un-updated before a
-        warning is issued for its local purge checkpoint document. Default is 24 hours::
-
-            [purge]
-            index_lag_warn_seconds = 86400
-
-.. _config/prometheus:
-
-Configuration of Prometheus Endpoint
-====================================
-
-.. config:section:: prometheus :: Configuration of Prometheus Options
-
-    .. config:option:: additional_port :: Enable a separate, non-authenticated port \
-        for prometheus data
-
-        .. versionadded:: 3.2
-
-        Sets whether or not to create a separate, non-authenticated port (default is ``false``)::
-
-            [prometheus]
-            additional_port = true
-
-    .. config:option:: bind_address :: IP address binding
-
-        .. versionadded:: 3.2
-
-        The IP address to bind::
-
-            [prometheus]
-            bind_address = 127.0.0.1
-
-    .. config:option:: port :: Port for querying Prometheus data without authentication
-
-        .. versionadded:: 3.2
-
-        The port on which clients can query prometheus endpoint data without authentication::
-
-            [prometheus]
-            port = 17986
diff --git a/src/config/query-servers.rst b/src/config/query-servers.rst
deleted file mode 100644
index 1d81665..0000000
--- a/src/config/query-servers.rst
+++ /dev/null
@@ -1,270 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. highlight:: ini
-
-=============
-Query Servers
-=============
-
-.. _config/query_servers:
-
-Query Servers Definition
-========================
-
-.. versionchanged:: 2.3 Changed configuration method for Query Servers
-  and Native Query Servers.
-
-CouchDB delegates computation of :ref:`design documents <ddocs>` functions
-to external query servers. The external query server is a special OS
-process which communicates with CouchDB over standard input/output using a
-very simple line-based protocol with JSON messages.
-
-An external query server may be defined with environment variables following
-this pattern::
-
-    COUCHDB_QUERY_SERVER_LANGUAGE="PATH ARGS"
-
-Where:
-
-- ``LANGUAGE``: the programming language whose code this query server can
-  execute. For instance, there are `PYTHON`, `RUBY`, `CLOJURE` and other
-  query servers in the wild. This value, in *lowercase*, is also used for the
-  ``ddoc`` field ``language`` to determine which query server processes the
-  functions.
-
-  Note that you may set up multiple query servers for the same programming
-  language, but you have to name them differently (for example, `PYTHONDEV`).
-
-- ``PATH``: the file system path to the executable binary program that runs
-  the query server.
-
-- ``ARGS``: optionally, you may specify additional command-line arguments
-  for the executable ``PATH``.
-
-The default query server is written in :ref:`JavaScript <query-server/js>`,
-running via `Mozilla SpiderMonkey`_. It requires no special environment
-settings to enable, but is the equivalent of these two variables::
-
-    COUCHDB_QUERY_SERVER_JAVASCRIPT="/opt/couchdb/bin/couchjs /opt/couchdb/share/server/main.js"
-    COUCHDB_QUERY_SERVER_COFFEESCRIPT="/opt/couchdb/bin/couchjs /opt/couchdb/share/server/main-coffee.js"
-
-By default, ``couchjs`` limits the maximum runtime allocation to 64 MiB.
-If you run into out-of-memory issues in your ddoc functions,
-you can adjust the memory limit (here, increasing it to 512 MiB)::
-
-    COUCHDB_QUERY_SERVER_JAVASCRIPT="/usr/bin/couchjs -S 536870912 /usr/share/server/main.js"
-
-For more info about the available options, please consult ``couchjs -h``.
-
-.. _Mozilla SpiderMonkey: https://spidermonkey.dev/
-
-.. seealso::
-    The :ref:`Mango Query Server <api/db/_find>` is a declarative language
-    that requires *no programming*, allowing for easier indexing and finding
-    of data in documents.
-
-    The :ref:`Native Erlang Query Server <config/native_query_servers>`
-    allows running `ddocs` written in Erlang natively, bypassing
-    stdio communication and JSON serialization/deserialization round trip
-    overhead.
-
-.. _config/query_server_config:
-
-Query Servers Configuration
-===========================
-
-.. config:section:: query_server_config :: Query Servers Configuration
-
-    .. config:option:: commit_freq :: View index commit delay
-
-        Specifies the delay in seconds before view index changes are committed
-        to disk. The default value is ``5``::
-
-            [query_server_config]
-            commit_freq = 5
-
-    .. config:option:: os_process_limit :: Query Server process hard limit
-
-        Hard limit on the number of OS processes usable by Query
-        Servers. The default value is ``100``::
-
-            [query_server_config]
-            os_process_limit = 100
-
-        Setting ``os_process_limit`` too low can result in starvation of
-        Query Servers, and manifest in ``os_process_timeout`` errors,
-        while setting it too high can potentially use too many system
-        resources. Production settings are typically 10-20 times the
-        default value.
-
-    .. config:option:: os_process_soft_limit :: Query Server process soft limit
-
-        Soft limit on the number of OS processes usable by Query
-        Servers. The default value is ``100``::
-
-            [query_server_config]
-            os_process_soft_limit = 100
-
-        Idle OS processes are closed until the total reaches the soft
-        limit.
-
-        For example, if the hard limit is 200 and the soft limit is
-        100, the total number of OS processes will never exceed 200,
-        and CouchDB will close all idle OS processes until it reaches
-        100, at which point it will leave the rest intact, even if
-        some are idle.
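-
-        As a sketch, that scenario corresponds to settings like these (the
-        values ``200`` and ``100`` are illustrative, not the defaults)::
-
-            [query_server_config]
-            os_process_limit = 200
-            os_process_soft_limit = 100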
-
-    .. config:option:: reduce_limit :: Reduce limit control
-
-        Controls the `Reduce overflow` error that is raised when the output
-        of a :ref:`reduce function <reducefun>` is too large::
-
-            [query_server_config]
-            reduce_limit = true
-
-        Normally you shouldn't need to disable this option (by setting it to
-        ``false``), since the main purpose of a `reduce` function is to
-        *reduce* the input.
-
-.. _config/native_query_servers:
-
-Native Erlang Query Server
-==========================
-
-.. config:section:: native_query_servers :: Native Erlang Query Server
-
-    .. warning::
-        Due to security restrictions, the Erlang query server is disabled by
-        default.
-
-        Unlike the JavaScript query server, the Erlang one does not run in a
-        sandbox mode. This means that Erlang code has full access to your OS,
-        file system and network, which may lead to security issues. While Erlang
-        functions are faster than JavaScript ones, you need to be careful
-        about running them, especially if they were written by someone else.
-
-    CouchDB has a native Erlang query server, allowing you to write your
-    map/reduce functions in Erlang.
-
-    First, you'll need to edit your `local.ini` to include a
-    ``[native_query_servers]`` section::
-
-        [native_query_servers]
-        enable_erlang_query_server = true
-
-    To see these changes you will also need to restart the server.
-
-    Let's try an example of map/reduce functions which count the total
-    number of documents at each revision count (x documents at revision
-    "1", y documents at revision "2", and so on). Add a few documents to the
-    database, then enter the following functions as a view:
-
-    .. code-block:: erlang
-
-        %% Map Function
-        fun({Doc}) ->
-            <<K,_/binary>> = proplists:get_value(<<"_rev">>, Doc, null),
-            V = proplists:get_value(<<"_id">>, Doc, null),
-            Emit(<<K>>, V)
-        end.
-
-        %% Reduce Function
-        fun(Keys, Values, ReReduce) -> length(Values) end.
-
-    If all has gone well, after running the view you should see a list of the
-    total number of documents at each revision number.
-
-    Additional examples are on the `users@couchdb.apache.org mailing list
-    <https://lists.apache.org/thread.html/9b5f2837bd32189385bb82eee44aec243f2ecacc6e907ffe0e1e03d3@1360091211@%3Cuser.couchdb.apache.org%3E>`_.
-
-.. _config/search:
-
-Search
-======
-
-CouchDB's search subsystem can be configured via the ``dreyfus`` configuration section.
-
-.. config:section:: dreyfus :: Search Subsystem Configuration
-
-    .. config:option:: name :: Clouseau JVM node name and location
-
-        The name and location of the Clouseau Java service required to enable Search
-        functionality. Defaults to ``clouseau@127.0.0.1``.
-
-    .. config:option:: retry_limit :: Maximum number of connection retries
-
-        CouchDB will try to reconnect to Clouseau using a bounded exponential backoff with
-        the following number of iterations. Defaults to ``5``.
-
-    .. config:option:: limit :: Default result set limit for global search
-
-        The number of results returned from a global search query if no limit is
-        specified. Defaults to ``25``.
-
-    .. config:option:: limit_partitions :: Default result set limit for partitioned DBs
-
-        The number of results returned from a search on a partition of a database if no
-        limit is specified. Defaults to ``2000``.
-
-    .. config:option:: max_limit :: Maximum result set for global search
-
-        The maximum number of results that can be returned from a global search query (or
-        any search query on a database without user-defined partitions). Attempts to set
-        ``?limit=N`` higher than this value will be rejected. Defaults to ``200``.
-
-    .. config:option:: max_limit_partitions :: Maximum result set for partitioned search
-
-        The maximum number of results that can be returned when searching a partition of a
-        database. Attempts to set ``?limit=N`` higher than this value will be rejected. If
-        this config setting is not defined, CouchDB will use the value of ``max_limit``
-        instead. If neither is defined, the default is ``2000``.
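-
-    Putting the options above together, an explicit ``[dreyfus]`` section
-    using the defaults described in this section might look like this::
-
-        [dreyfus]
-        name = clouseau@127.0.0.1
-        retry_limit = 5
-        limit = 25
-        limit_partitions = 2000
-        max_limit = 200
-        max_limit_partitions = 2000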
-
-.. _config/mango:
-
-Mango
-=====
-
-Mango is the Query Engine that services the :ref:`_find <api/db/_find>` endpoint.
-
-.. config:section:: mango :: Mango Configuration
-
-    .. config:option:: index_all_disabled :: Disable "index all fields" behaviour
-
-        Set to ``true`` to disable the "index all fields" text index. This can lead
-        to out of memory issues when there are documents with nested array fields.
-        Defaults to ``false``::
-
-            [mango]
-            index_all_disabled = false
-
-    .. config:option:: default_limit :: Default limit value for Mango queries.
-
-        Sets the default number of results that will be returned in a
-        :ref:`_find <api/db/_find>` response. Individual requests can override this
-        by setting ``limit`` directly in the query parameters.
-        Defaults to ``25``::
-
-            [mango]
-            default_limit = 25
-
-    .. config:option:: index_scan_warning_threshold :: Ratio threshold that generates \
-        an index scan warning
-
-        This sets the ratio between the number of documents scanned and the
-        number of results matched above which a warning is included in the
-        :ref:`_find <api/db/_find>` response. For example, if a query requires
-        reading 100 documents to return 10 rows, a warning will be generated
-        if this value is ``10``.
-
-        Defaults to ``10``. Setting the value to ``0`` disables the warning::
-
-            [mango]
-            index_scan_warning_threshold = 10
diff --git a/src/config/replicator.rst b/src/config/replicator.rst
deleted file mode 100644
index 1f94efe..0000000
--- a/src/config/replicator.rst
+++ /dev/null
@@ -1,309 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. highlight:: ini
-
-==========
-Replicator
-==========
-
-.. _config/replicator:
-
-Replicator Database Configuration
-=================================
-
-.. config:section:: replicator :: Replicator Database Configuration
-
-    .. config:option:: max_jobs :: Maximum replications jobs
-
-        .. versionadded:: 2.1
-
-        Number of replication jobs to run concurrently.
-        This value is the threshold that triggers the automatic replication
-        scheduler.
-        The system checks every ``interval`` milliseconds how many replication
-        jobs are running, and if there are more than ``max_jobs`` active jobs,
-        the scheduler will pause-and-restart up to ``max_churn`` jobs in the
-        scheduler queue.
-        Setting this value too high could cause performance issues, while setting
-        it too low could mean replication jobs might not have enough time to make
-        progress before getting unscheduled again.
-        This parameter can be adjusted at runtime and will take effect during
-        the next rescheduling cycle::
-
-             [replicator]
-             max_jobs = 500
-
-    .. config:option:: interval :: Checking interval of replication jobs
-
-        .. versionadded:: 2.1
-
-        Scheduling interval in milliseconds.
-        During each reschedule cycle the scheduler might start or stop up to ``max_churn``
-        number of jobs::
-
-             [replicator]
-             interval = 60000
-
-    .. config:option:: max_churn :: Maximum number of jobs to start and stop
-
-        .. versionadded:: 2.1
-
-        Maximum number of replication jobs to start and stop during rescheduling.
-        This parameter, along with ``interval``, defines the rate of job replacement.
-        During startup, however, a much larger number of jobs could be started
-        (up to ``max_jobs``) in a short period of time::
-
-             [replicator]
-             max_churn = 20
-
-    .. config:option:: max_history :: Maximum number of events recorded for each job
-
-        Maximum number of events recorded for each job. This parameter defines
-        an upper bound on the consecutive failure count for a job, and in turn
-        the maximum backoff factor used when determining the delay before the job
-        is restarted. The longer the crash count history, the longer the
-        possible delay::
-
-             [replicator]
-             max_history = 20
-
-    .. config:option:: update_docs :: Update replication document with error and \
-        triggered states
-
-        .. versionadded:: 2.1
-
-        When set to ``true``, the replicator will update the replication
-        document with error and triggered states. This approximates pre-2.1
-        replicator behavior::
-
-             [replicator]
-             update_docs = false
-
-    .. config:option:: worker_batch_size :: Batch size of workers
-
-        With lower batch sizes, checkpoints are done more frequently. Lower
-        batch sizes also reduce the total amount of RAM used::
-
-            [replicator]
-            worker_batch_size = 500
-
-    .. config:option:: worker_processes :: Number of worker processes
-
-        More worker processes can give higher network throughput but can also
-        imply more disk and network IO::
-
-            [replicator]
-            worker_processes = 4
-
-    .. config:option:: http_connections :: Maximum number of HTTP connections
-
-        Maximum number of HTTP connections per replication::
-
-            [replicator]
-            http_connections = 20
-
-    .. config:option:: connection_timeout :: Per replication connection timeout
-
-        HTTP connection timeout per replication.
-        This is divided by three (3) when the replicator makes changes feed requests.
-        Even for very fast/reliable networks it might need to be increased if
-        a remote database is too busy::
-
-            [replicator]
-            connection_timeout = 30000
-
-    .. config:option:: retries_per_request :: Number of retries per request
-
-        .. versionchanged:: 2.1.1
-
-        If a request fails, the replicator will retry it up to N times. The
-        default value for N is 5 (before version 2.1.1 it was 10). The requests
-        are retried with a doubling exponential backoff starting at 0.25
-        seconds. So by default requests would be retried at 0.25, 0.5, 1, 2, and
-        4 second intervals. When the number of retries is exhausted, the whole
-        replication job is stopped and will be retried again later::
-
-            [replicator]
-            retries_per_request = 5
-
-    .. config:option:: socket_options :: Erlang socket options
-
-        Some socket options that might boost performance in some scenarios:
-
-        - ``{nodelay, boolean()}``
-        - ``{sndbuf, integer()}``
-        - ``{recbuf, integer()}``
-        - ``{priority, integer()}``
-
-        See the `inet`_ Erlang module's man page for the full list of options::
-
-            [replicator]
-            socket_options = [{keepalive, true}, {nodelay, false}]
-
-        .. _inet: http://www.erlang.org/doc/man/inet.html#setopts-2
-
-    .. config:option:: checkpoint_interval :: Replication checkpoint interval
-
-        .. versionadded:: 1.6
-
-        Defines the replication checkpoint interval in milliseconds. The
-        :ref:`Replicator <replicator>` will make :get:`requests </{db}>` to the
-        source database at the specified interval::
-
-            [replicator]
-            checkpoint_interval = 5000
-
-        Lower intervals may be useful for frequently changing data, while
-        higher values will lower bandwidth and make fewer requests for
-        infrequently updated databases.
-
-    .. config:option:: use_checkpoints :: Use checkpoints during replication
-
-        .. versionadded:: 1.6
-
-        If ``use_checkpoints`` is set to ``true``, CouchDB will make
-        checkpoints during replication and at the completion of replication.
-        CouchDB can efficiently resume replication from any of these
-        checkpoints::
-
-            [replicator]
-            use_checkpoints = true
-
-        .. note::
-            Checkpoints are stored in :ref:`local documents <api/local>`
-            on both the source and target databases (which requires write
-            access).
-
-        .. warning::
-            Disabling checkpoints is **not recommended** as CouchDB will scan
-            the Source database's changes feed from the beginning.
-
-    .. config:option:: cert_file :: Path to user PEM certificate file
-
-        Path to a file containing the user's certificate::
-
-            [replicator]
-            cert_file = /full/path/to/server_cert.pem
-
-    .. config:option:: key_file :: Path to private user PEM file
-
-        Path to file containing user's private PEM encoded key::
-
-            [replicator]
-            key_file = /full/path/to/server_key.pem
-
-    .. config:option:: password :: Optional password for protected key file
-
-        String containing the user's password. Only used if the private key file
-        is password protected::
-
-            [replicator]
-            password = somepassword
-
-    .. config:option:: verify_ssl_certificates :: Check peer certificates
-
-        Set to ``true`` to validate peer certificates::
-
-            [replicator]
-            verify_ssl_certificates = false
-
-    .. config:option:: ssl_trusted_certificates_file :: Trusted peer certificates
-
-        File containing a list of peer trusted certificates (in the PEM
-        format)::
-
-            [replicator]
-            ssl_trusted_certificates_file = /etc/ssl/certs/ca-certificates.crt
-
-    .. config:option:: ssl_certificate_max_depth :: Maximum peer certificate \
-        depth checking size
-
-        Maximum peer certificate depth (must be set even if certificate
-        validation is off)::
-
-            [replicator]
-            ssl_certificate_max_depth = 3
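-
-        A complete TLS client setup combining the certificate options above
-        might look like this (the paths shown are illustrative)::
-
-            [replicator]
-            cert_file = /full/path/to/server_cert.pem
-            key_file = /full/path/to/server_key.pem
-            verify_ssl_certificates = true
-            ssl_trusted_certificates_file = /etc/ssl/certs/ca-certificates.crt
-            ssl_certificate_max_depth = 3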
-
-    .. config:option:: auth_plugins :: List of replicator client authentication plugins
-
-        .. versionadded:: 2.2
-
-        List of replicator client authentication plugins. Plugins will
-        be tried in order and the first to initialize successfully will
-        be used. By default there are two plugins available:
-        `couch_replicator_auth_session` implementing session (cookie)
-        authentication, and `couch_replicator_auth_noop` implementing basic
-        authentication. For backwards compatibility, the no-op plugin should be used at
-        the end of the plugin list::
-
-          [replicator]
-          auth_plugins = couch_replicator_auth_session,couch_replicator_auth_noop
-
-    .. config:option:: usage_coeff :: Usage coefficient decay
-
-        .. versionadded:: 3.2.0
-
-        Usage coefficient decays historic fair share usage every
-        scheduling cycle. The value must be between 0.0 and 1.0. Lower
-        values make historic usage decay more quickly, while higher values
-        mean it is remembered longer::
-
-            [replicator]
-            usage_coeff = 0.5
-
-    .. config:option:: priority_coeff :: Priority coefficient decays
-
-        .. versionadded:: 3.2.0
-
-        Priority coefficient decays all the job priorities such that they slowly
-        drift towards the front of the run queue. This coefficient defines a maximum
-        time window over which this algorithm would operate. For example, if this
-        value is too small (0.1), after a few cycles quite a few jobs would end up at
-        priority 0, and would render this algorithm useless. The default value of
-        0.98 is picked such that if a job ran for one scheduler cycle, then didn't
-        get to run for 7 hours, it would still have priority > 0. 7 hours was picked
-        as it was close enough to 8 hours, which is the default maximum error backoff
-        interval::
-
-            [replicator]
-            priority_coeff = 0.98
-
-.. _config/replicator.shares:
-
-Fair Share Replicator Share Allocation
-======================================
-
-.. config:section:: replicator.shares :: Per-Database Fair Share Allocation
-
-    .. config:option:: $replicator_db :: Value for a replicator database
-
-        .. versionadded:: 3.2.0
-
-        Fair share configuration section. Higher share values result in a
-        higher chance that jobs from that db get to run. The default
-        value is 100, minimum is 1 and maximum is 1000. The
-        configuration may be set even if the database does not exist.
-
-        In this context the option ``$replicator_db`` acts as a placeholder
-        for your replicator database name. The default replicator database is
-        ``_replicator``. Additional replicator databases can be created. To be
-        recognized as such by the system, their database names should end with
-        ``/_replicator``. See the :ref:`Replicator Database <replicator>`
-        section for more info.
-
-        ::
-
-            [replicator.shares]
-            _replicator = 50
-            foo/_replicator = 25
-            bar/_replicator = 25
diff --git a/src/config/resharding.rst b/src/config/resharding.rst
deleted file mode 100644
index 91531aa..0000000
--- a/src/config/resharding.rst
+++ /dev/null
@@ -1,108 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. highlight:: ini
-
-==========
-Resharding
-==========
-
-.. _config/reshard:
-
-Resharding Configuration
-========================
-
-.. config:section:: resharding :: Resharding Configuration
-
-    .. config:option:: max_jobs :: Maximum resharding jobs per node
-
-        Maximum number of resharding jobs per cluster node. This includes
-        completed, failed, and running jobs. If a job appears in the
-        ``_reshard/jobs`` HTTP API results, it is counted towards the limit.
-        When more than ``max_jobs`` jobs have been created, subsequent requests
-        will start to fail with the ``max_jobs_exceeded`` error::
-
-             [reshard]
-             max_jobs = 48
-
-    .. config:option:: max_history :: Maximum size of the event log
-
-        Each resharding job maintains a timestamped event log. This setting
-        limits the maximum size of that log::
-
-             [reshard]
-             max_history = 20
-
-    .. config:option:: max_retries :: Maximum number of retries before failing \
-        resharding job
-
-        How many times to retry shard splitting steps if they fail. For
-        example, if indexing or topping off fails, it will be retried up to
-        this many times before the whole resharding job fails::
-
-             [reshard]
-             max_retries = 1
-
-    .. config:option:: retry_interval_sec :: Wait time between resharding retries
-
-        How long to wait between subsequent retries::
-
-             [reshard]
-             retry_interval_sec = 10
-
-    .. config:option:: delete_source :: Delete source after resharding
-
-        Indicates if the source shard should be deleted after resharding has
-        finished. By default, it is ``true`` as that would recover the space
-        utilized by the shard. When debugging or when extra safety is required,
-        this can be switched to ``false``::
-
-             [reshard]
-             delete_source = true
-
-    .. config:option:: update_shard_map_timeout_sec :: Shard map update waiting time
-
-        How many seconds to wait for the shard map update operation to
-        complete. If there is a large number of shard db changes waiting to
-        finish replicating, it might be beneficial to increase this timeout::
-
-            [reshard]
-            update_shard_map_timeout_sec = 60
-
-    .. config:option:: source_close_timeout_sec :: Source shard wait time before close
-
-        How many seconds to wait for the source shard to close. "Close" in this
-        context means that client requests which keep the database open have
-        all finished::
-
-            [reshard]
-            source_close_timeout_sec = 600
-
-    .. config:option:: require_node_param :: Require node parameter when creating \
-        resharding job
-
-        Require users to specify a ``node`` parameter when creating resharding
-        jobs. This can be used as a safety check to avoid inadvertently
-        starting too many resharding jobs by accident::
-
-            [reshard]
-            require_node_param = false
-
-    .. config:option:: require_range_param :: Require range parameter when creating \
-        resharding job
-
-        Require users to specify a ``range`` parameter when creating resharding
-        jobs. This can be used as a safety check to avoid inadvertently
-        starting too many resharding jobs by accident::
-
-            [reshard]
-            require_range_param = false
diff --git a/src/contributing.rst b/src/contributing.rst
deleted file mode 100644
index 17f66d5..0000000
--- a/src/contributing.rst
+++ /dev/null
@@ -1,218 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _contributing:
-
-==================================
-Contributing to this Documentation
-==================================
-
-The documentation lives in its own source tree. We'll start by forking and
-cloning the CouchDB documentation GitHub mirror. That will allow us to send the
-contribution to CouchDB with a pull request.
-
-If you don't have a GitHub account yet, now is a good time to get one; they are
-free. If you don't want to use GitHub, there are alternative ways of
-contributing back, which we'll cover another time.
-
-Go to https://github.com/apache/couchdb-documentation and click the "fork"
-button in the top right. This will create a fork of CouchDB in your GitHub
-account. If your account is `username`, your fork lives at
-https://github.com/username/couchdb-documentation. The header shows your
-"GitHub Clone URL". Copy that and open a terminal:
-
-.. code-block:: bash
-
-    $ git clone https://github.com/username/couchdb-documentation.git
-    $ cd couchdb-documentation
-    $ subl .
-
-I'm opening the whole CouchDB documentation source tree in my favourite editor.
-It gives me the usual directory listing:
-
-.. code-block:: bash
-
-    ebin/
-    ext/
-    .git/
-    .gitignore
-    images/
-    LICENSE
-    make.bat
-    Makefile
-    NOTICE
-    rebar.config
-    src/
-    static/
-    templates/
-    themes/
-    .travis.yml
-
-The documentation sources live in `src`; you can safely ignore all
-the other files and directories.
-
-First we should determine where we want to document this inside the
-documentation. We can look through http://docs.couchdb.org/en/latest/
-for inspiration. The `JSON Structure Reference`_ looks like a fine place to
-write this up.
-
-.. _JSON Structure Reference: http://docs.couchdb.org/en/latest/json-structure.html
-
-The current state includes mostly tables describing the JSON structure (after
-all, that's the title of this chapter), but some prose about the number
-representation can't hurt. For future reference, since the topic in the thread
-includes views and different encoding in views (as opposed to the storage
-engine), we should remember to make a note in the views documentation as well,
-but we'll leave this for later.
-
-Let's try and find the source file that builds the file
-http://docs.couchdb.org/en/latest/json-structure.html -- we are in luck, under
-`share/doc/src` we find the file `json-structure.rst`. That looks promising.
-`.rst` stands for ReStructured Text (see
-http://thomas-cokelaer.info/tutorials/sphinx/rest_syntax.html
-for a markup reference), which is an ASCII format for writing
-documents, documentation in this case. Let's have a look and open it.
-
-We see ASCII tables with some additional formatting, all looking like the
-final HTML. So far so easy. For now, let's just add to the bottom of this. We
-can worry about organising this better later.
-
-We start by adding a new headline::
-
-    Number Handling
-    ===============
-
-Now we paste in the rest of the main email of the thread. It is mostly text,
-but it includes some code listings. Let's mark them up. We'll turn::
-
-    ejson:encode(ejson:decode(<<"1.1">>)).
-    <<"1.1000000000000000888">>
-
-Into::
-
-    .. code-block:: erlang
-
-        ejson:encode(ejson:decode(<<"1.1">>)).
-        <<"1.1000000000000000888">>
-
-And we follow along with the other code samples. We turn::
-
-    Spidermonkey
-
-    $ js -h 2>&1 | head -n 1
-    JavaScript-C 1.8.5 2011-03-31
-    $ js
-    js> JSON.stringify(JSON.parse("1.01234567890123456789012345678901234567890"))
-    "1.0123456789012346"
-    js> var f = JSON.stringify(JSON.parse("1.01234567890123456789012345678901234567890"))
-    js> JSON.stringify(JSON.parse(f))
-    "1.0123456789012346"
-
-into::
-
-    Spidermonkey::
-
-        $ js -h 2>&1 | head -n 1
-        JavaScript-C 1.8.5 2011-03-31
-        $ js
-        js> JSON.stringify(JSON.parse("1.01234567890123456789012345678901234567890"))
-        "1.0123456789012346"
-        js> var f = JSON.stringify(JSON.parse("1.01234567890123456789012345678901234567890"))
-        js> JSON.stringify(JSON.parse(f))
-        "1.0123456789012346"
-
-And then follow all the other ones.
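The behaviour both snippets document is ordinary IEEE-754 double precision. As a quick aside (an illustration, not part of the chapter being edited), the same effect can be reproduced in Python, whose `Decimal` type can display the exact binary value behind the literal `1.1`:

```python
from decimal import Decimal

# 1.1 has no exact binary representation; constructing a Decimal from the
# float reveals the nearest IEEE-754 double that actually gets stored:
print(Decimal(1.1))
# 1.100000000000000088817841970012523233890533447265625
#
# This is why round-tripping "1.1" through a decoder/encoder can yield
# strings like "1.1000000000000000888", as in the ejson example above.
```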
-
-I cleaned up the text a little bit to make it sound more like a documentation
-entry as opposed to a post on a mailing list.
-
-The next step would be to validate that we got all the markup right. I'll
-leave this for later. For now we'll contribute our change back to CouchDB.
-
-First, we commit our changes::
-
-    $ git commit -am 'document number encoding'
-    [main a84b2cf] document number encoding
-    1 file changed, 199 insertions(+)
-
-Then we push the commit to our CouchDB fork::
-
-    $ git push origin main
-
-Next, we go back to our GitHub page
-https://github.com/username/couchdb-documentation and click the "Pull Request"
-button. Fill in the description with something useful and hit the
-"Send Pull Request" button.
-
-And we're done!
-
-Style Guidelines for this Documentation
-=======================================
-
-When you make a change to the documentation, you should make sure that you
-follow the style. Look through some files and you will see that the style is
-quite straightforward. If you do not know if your formatting is in compliance
-with the style, ask yourself the following question::
-
-    Is it needed for correct syntax?
-
-If the answer is ``No.`` then it is probably not.
-
-These guidelines strive to be simple, without contradictions and exceptions. The
-best style is the one that is followed because it seems to be the natural way of
-doing it.
-
-The guidelines
---------------
-
-The guidelines are in descending priority.
-
-#. Syntax
-
-   * Correct syntax is always more important than style. This includes
-     configuration files, HTML responses, etc.
-
-#. Encoding
-
-   * All files are ``UTF-8``.
-
-#. Line ending
-
-   * All lines end with ``\n``.
-   * No trailing whitespace.
-
-#. Line length
-
-   * The maximum line length is ``90`` characters.
-
-#. Links
-
-   * All internal links are relative.
-
-#. Indentation
-
-   * ``4`` spaces.
-
-#. Titles
-
-   * The highest level title in a file is overlined and underlined with ``=``.
-   * Lower level titles are underlined with the following characters in
-     descending order::
-
-        = - ^ * + # ` : . " ~ _
-
-   * The overline and underline must match the title length.
-
-#. Empty lines
-
-   * No empty line at the end of the file.
-   * Lists may separate each item with an empty line.
diff --git a/src/cve/2010-0009.rst b/src/cve/2010-0009.rst
deleted file mode 100644
index 8dc47a8..0000000
--- a/src/cve/2010-0009.rst
+++ /dev/null
@@ -1,53 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _cve/2010-0009:
-
-=========================================================
-CVE-2010-0009: Apache CouchDB Timing Attack Vulnerability
-=========================================================
-
-:Date: 31.03.2010
-
-:Affected: Apache CouchDB 0.8.0 to 0.10.1
-
-:Severity: Important
-
-:Vendor: The Apache Software Foundation
-
-Description
-===========
-
-Apache CouchDB versions prior to version :ref:`0.11.0 <release/0.11.0>` are
-vulnerable to timing attacks, also known as side-channel information leakage,
-due to using simple break-on-inequality string comparisons when verifying hashes
-and passwords.
-
-Mitigation
-==========
-
-All users should upgrade to CouchDB :ref:`0.11.0 <release/0.11.0>`.
-Upgrades from the :ref:`0.10.x <release/0.10.x>` series should be seamless.
-Users on earlier versions should consult with
-:ref:`upgrade notes <release/0.10.x/upgrade>`.
-
-Example
-=======
-
-A canonical description of the attack can be found in
-http://codahale.com/a-lesson-in-timing-attacks/
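The difference between a break-on-inequality comparison and a constant-time one can be sketched briefly. This is an illustration only (not the original advisory's code, which was Erlang); `hmac.compare_digest` is Python's standard-library constant-time helper:

```python
import hmac

def naive_equals(a: str, b: str) -> bool:
    # Returns at the first mismatching character, so the running time
    # leaks how long the matching prefix is -- the basis of the timing
    # attack described above.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equals(a: str, b: str) -> bool:
    # hmac.compare_digest examines every byte regardless of where the
    # first mismatch occurs, removing the timing side channel.
    return hmac.compare_digest(a.encode(), b.encode())
```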
-
-Credit
-======
-
-This issue was discovered by *Jason Davies* of the Apache CouchDB development
-team.
diff --git a/src/cve/2010-2234.rst b/src/cve/2010-2234.rst
deleted file mode 100644
index 674ac03..0000000
--- a/src/cve/2010-2234.rst
+++ /dev/null
@@ -1,61 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _cve/2010-2234:
-
-===============================================================
-CVE-2010-2234: Apache CouchDB Cross Site Request Forgery Attack
-===============================================================
-
-:Date: 21.02.2010
-
-:Affected: Apache CouchDB 0.8.0 to 0.11.1
-
-:Severity: Important
-
-:Vendor: The Apache Software Foundation
-
-Description
-===========
-
-Apache CouchDB versions prior to version :ref:`0.11.1 <release/0.11.1>` are
-vulnerable to `Cross Site Request Forgery`_ (CSRF) attacks.
-
-.. _Cross Site Request Forgery: http://en.wikipedia.org/wiki/Cross-site_request_forgery
-
-Mitigation
-==========
-
-All users should upgrade to CouchDB :ref:`0.11.2 <release/0.11.2>`
-or :ref:`1.0.1 <release/1.0.1>`.
-
-Upgrades from the :ref:`0.11.x <release/0.11.x>` and
-:ref:`0.10.x <release/0.10.x>` series should be seamless.
-
-Users on earlier versions should consult with upgrade notes.
-
-Example
-=======
-
-A malicious website can `POST` arbitrary JavaScript code to well
-known CouchDB installation URLs (like http://localhost:5984/)
-and make the browser execute the injected JavaScript in the
-security context of CouchDB's admin interface Futon.
-
-Additionally, and unrelated to this issue, the JSONP API has been turned
-off by default to avoid potential information leakage.
-
-Credit
-======
-
-This CSRF issue was discovered by a source that wishes to stay
-anonymous.
diff --git a/src/cve/2010-3854.rst b/src/cve/2010-3854.rst
deleted file mode 100644
index 368d0c5..0000000
--- a/src/cve/2010-3854.rst
+++ /dev/null
@@ -1,55 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _cve/2010-3854:
-
-========================================================
-CVE-2010-3854: Apache CouchDB Cross Site Scripting Issue
-========================================================
-
-:Date: 28.01.2011
-
-:Affected: Apache CouchDB 0.8.0 to 1.0.1
-
-:Severity: Important
-
-:Vendor: The Apache Software Foundation
-
-Description
-===========
-
-Apache CouchDB versions prior to version :ref:`1.0.2 <release/1.0.2>` are
-vulnerable to `Cross Site Scripting`_ (XSS) attacks.
-
-.. _Cross Site Scripting: http://en.wikipedia.org/wiki/Cross-site_scripting
-
-Mitigation
-==========
-
-All users should upgrade to CouchDB :ref:`1.0.2 <release/1.0.2>`.
-
-Upgrades from the :ref:`0.11.x <release/0.11.x>` and
-:ref:`0.10.x <release/0.10.x>` series should be seamless.
-
-Users on earlier versions should consult with upgrade notes.
-
-Example
-=======
-
-Due to inadequate validation of request parameters and cookie data in Futon,
-CouchDB's web-based administration UI, a malicious site can execute arbitrary
-code in the context of a user's browsing session.
-
-Credit
-======
-
-This XSS issue was discovered by a source that wishes to stay anonymous.
diff --git a/src/cve/2012-5641.rst b/src/cve/2012-5641.rst
deleted file mode 100644
index e387332..0000000
--- a/src/cve/2012-5641.rst
+++ /dev/null
@@ -1,75 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _cve/2012-5641:
-
-==================================================================================
-CVE-2012-5641: Information disclosure via unescaped backslashes in URLs on Windows
-==================================================================================
-
-:Date: 14.01.2013
-
-:Affected: All Windows-based releases of Apache CouchDB, up to and including
-           1.0.3, 1.1.1, and 1.2.0 are vulnerable.
-
-:Severity: Moderate
-
-:Vendor: The Apache Software Foundation
-
-Description
-===========
-
-A specially crafted request could be used to access content directly that
-would otherwise be protected by inbuilt CouchDB security mechanisms. This
-request could retrieve in binary form any CouchDB database, including the
-`_users` or `_replication` databases, or any other file that the user account
-used to run CouchDB might have read access to on the local filesystem. This
-exploit is due to a vulnerability in the included MochiWeb HTTP library.
-
-Mitigation
-==========
-
-Upgrade to a supported CouchDB release that includes this fix, such as:
-
-- :ref:`1.0.4 <release/1.0.4>`
-- :ref:`1.1.2 <release/1.1.2>`
-- :ref:`1.2.1 <release/1.2.1>`
-- :ref:`1.3.x <release/1.3.x>`
-
-All listed releases have included a specific fix for the MochiWeb component.
-
-Work-Around
-===========
-
-Users may simply exclude any file-based web serving components directly
-within their configuration file, typically in `local.ini`. On a default
-CouchDB installation, this requires amending the
-`httpd_global_handlers/favicon.ico` and `httpd_global_handlers/_utils`
-lines within `httpd_global_handlers`::
-
-    [httpd_global_handlers]
-    favicon.ico = {couch_httpd_misc_handlers, handle_welcome_req, <<"Forbidden">>}
-    _utils = {couch_httpd_misc_handlers, handle_welcome_req, <<"Forbidden">>}
-
-If additional handlers have been added, such as to support Adobe's Flash
-`crossdomain.xml` files, these would also need to be excluded.
-
-Acknowledgement
-===============
-
-The issue was found and reported by Sriram Melkote to the upstream MochiWeb
-project.
-
-References
-==========
-
-- https://github.com/melkote/mochiweb/commit/ac2bf
diff --git a/src/cve/2012-5649.rst b/src/cve/2012-5649.rst
deleted file mode 100644
index 4ac6738..0000000
--- a/src/cve/2012-5649.rst
+++ /dev/null
@@ -1,49 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _cve/2012-5649:
-
-==============================================================
-CVE-2012-5649: JSONP arbitrary code execution with Adobe Flash
-==============================================================
-
-:Date: 14.01.2013
-
-:Affected: Releases up to and including 1.0.3, 1.1.1, and 1.2.0 are vulnerable,
-           if administrators have enabled JSONP.
-
-:Severity: Moderate
-
-:Vendor: The Apache Software Foundation
-
-Description
-===========
-
-A hand-crafted JSONP callback and response can be used to run arbitrary code
-inside client-side browsers via Adobe Flash.
-
-Mitigation
-==========
-
-Upgrade to a supported CouchDB release that includes this fix, such as:
-
-- :ref:`1.0.4 <release/1.0.4>`
-- :ref:`1.1.2 <release/1.1.2>`
-- :ref:`1.2.1 <release/1.2.1>`
-- :ref:`1.3.x <release/1.3.x>`
-
-All listed releases have included a specific fix.
-
-Work-Around
-===========
-
-Disable JSONP, or simply do not enable it, since it is disabled by default.
diff --git a/src/cve/2012-5650.rst b/src/cve/2012-5650.rst
deleted file mode 100644
index 4b41917..0000000
--- a/src/cve/2012-5650.rst
+++ /dev/null
@@ -1,68 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _cve/2012-5650:
-
-==========================================================
-CVE-2012-5650: DOM based Cross-Site Scripting via Futon UI
-==========================================================
-
-:Date: 14.01.2013
-
-:Affected: Apache CouchDB releases up to and including 1.0.3, 1.1.1,
-           and 1.2.0 are vulnerable.
-
-:Severity: Moderate
-
-:Vendor: The Apache Software Foundation
-
-Description
-===========
-
-Query parameters passed into the browser-based test suite are not sanitised,
-and can be used to load external resources. An attacker may execute JavaScript
-code in the browser, using the context of the remote user.
-
-Mitigation
-==========
-
-Upgrade to a supported CouchDB release that includes this fix, such as:
-
-- :ref:`1.0.4 <release/1.0.4>`
-- :ref:`1.1.2 <release/1.1.2>`
-- :ref:`1.2.1 <release/1.2.1>`
-- :ref:`1.3.x <release/1.3.x>`
-
-All listed releases have included a specific fix.
-
-Work-Around
-===========
-
-Disable the Futon user interface completely, by adapting `local.ini` and
-restarting CouchDB::
-
-  [httpd_global_handlers]
-  _utils = {couch_httpd_misc_handlers, handle_welcome_req, <<"Forbidden">>}
-
-Or by removing the UI test suite components:
-
-- share/www/verify_install.html
-- share/www/couch_tests.html
-- share/www/custom_test.html
-
-Acknowledgement
-===============
-
-This vulnerability was discovered & reported to the Apache Software Foundation
-by `Frederik Braun`_.
-
-.. _Frederik Braun: https://frederik-braun.com/
diff --git a/src/cve/2014-2668.rst b/src/cve/2014-2668.rst
deleted file mode 100644
index 7cdb725..0000000
--- a/src/cve/2014-2668.rst
+++ /dev/null
@@ -1,52 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _cve/2014-2668:
-
-==================================================================================
-CVE-2014-2668: DoS (CPU and memory consumption) via the count parameter to /_uuids
-==================================================================================
-
-:Date: 26.03.2014
-
-:Affected: Apache CouchDB releases up to and including 1.3.1, 1.4.0,
-           and 1.5.0 are vulnerable.
-
-:Severity: Moderate
-
-:Vendor: The Apache Software Foundation
-
-Description
-===========
-
-The :ref:`api/server/uuids` resource's `count` query parameter can take an
-unreasonably huge numeric value, which leads to exhaustion of server
-resources (CPU and memory) and, as a result, to DoS.
-
-Mitigation
-==========
-
-Upgrade to a supported CouchDB release that includes this fix, such as:
-
-- :ref:`1.5.1 <release/1.5.1>`
-- :ref:`1.6.0 <release/1.6.0>`
-
-All listed releases have included a specific fix.
-
-Work-Around
-===========
-
-Disable the :ref:`api/server/uuids` handler completely, by adapting
-`local.ini` and restarting CouchDB::
-
-    [httpd_global_handlers]
-    _uuids =
diff --git a/src/cve/2017-12635.rst b/src/cve/2017-12635.rst
deleted file mode 100644
index 0f46cf5..0000000
--- a/src/cve/2017-12635.rst
+++ /dev/null
@@ -1,67 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _cve/2017-12635:
-
-==========================================================
-CVE-2017-12635: Apache CouchDB Remote Privilege Escalation
-==========================================================
-
-:Date: 14.11.2017
-
-:Affected: All Versions of Apache CouchDB
-
-:Severity: Critical
-
-:Vendor: The Apache Software Foundation
-
-Description
-===========
-
-Due to differences in CouchDB’s Erlang-based JSON parser and JavaScript-based
-JSON parser, it is possible to submit _users documents with duplicate keys for
-`roles` used for access control within the database, including the special case
-`_admin` role, that denotes administrative users. In combination with
-:ref:`CVE-2017-12636 <cve/2017-12636>` (Remote Code Execution), this can be used
-to give non-admin users access to arbitrary shell commands on the server as the
-database system user.
-
-Mitigation
-==========
-
-All users should upgrade to CouchDB :ref:`1.7.1 <release/1.7.1>` or
-:ref:`2.1.1 <release/2.1.1>`.
-
-Upgrades from previous 1.x and 2.x versions in the same series should be
-seamless.
-
-Users on earlier versions, or users upgrading from 1.x to 2.x should consult
-with upgrade notes.
-
-Example
-=======
-
-The JSON parser differences result in behaviour where, if two `roles` keys
-are present in the JSON, the second one is used for authorising the
-document write, but the first `roles` key is used for subsequent authorisation
-of the newly created user. By design, users cannot assign themselves roles.
-The vulnerability allows non-admin users to give themselves admin privileges.
-
-We addressed this issue by updating the way CouchDB parses JSON in Erlang,
-mimicking the JavaScript behaviour of picking the last key, if duplicates exist.
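The last-key-wins behaviour the fix adopts is easy to observe in any parser with the same convention. As an illustration (not the advisory's own code), Python's standard `json` module also keeps the last value for a duplicate key:

```python
import json

# A document with a duplicate "roles" key, as in the exploit above.
doc = '{"name": "eve", "roles": [], "roles": ["_admin"]}'

parsed = json.loads(doc)
# Python's json module, like the patched Erlang parser, keeps the LAST
# duplicate key -- so validation and later authorisation see the same value.
print(parsed["roles"])  # ['_admin']
```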
-
-Credit
-======
-
-This issue was discovered by `Max Justicz`_.
-
-.. _Max Justicz: https://mastodon.mit.edu/@maxj
diff --git a/src/cve/2017-12636.rst b/src/cve/2017-12636.rst
deleted file mode 100644
index 8133508..0000000
--- a/src/cve/2017-12636.rst
+++ /dev/null
@@ -1,54 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _cve/2017-12636:
-
-====================================================
-CVE-2017-12636: Apache CouchDB Remote Code Execution
-====================================================
-
-:Date: 14.11.2017
-
-:Affected: All Versions of Apache CouchDB
-
-:Severity: Critical
-
-:Vendor: The Apache Software Foundation
-
-Description
-===========
-
-CouchDB administrative users can configure the database server via HTTP(S). Some
-of the configuration options include paths for operating system-level binaries
-that are subsequently launched by CouchDB. This allows a CouchDB admin user to
-execute arbitrary shell commands as the CouchDB user, including downloading
-and executing scripts from the public internet.
-
-Mitigation
-==========
-
-All users should upgrade to CouchDB :ref:`1.7.1 <release/1.7.1>` or
-:ref:`2.1.1 <release/2.1.1>`.
-
-Upgrades from previous 1.x and 2.x versions in the same series should be
-seamless.
-
-Users on earlier versions, or users upgrading from 1.x to 2.x should consult
-with upgrade notes.
-
-Credit
-======
-
-This issue was discovered by `Joan Touzet`_ of the CouchDB Security team during
-the investigation of :ref:`CVE-2017-12635 <cve/2017-12635>`.
-
-.. _Joan Touzet: http://www.atypical.net
diff --git a/src/cve/2018-11769.rst b/src/cve/2018-11769.rst
deleted file mode 100644
index 7968116..0000000
--- a/src/cve/2018-11769.rst
+++ /dev/null
@@ -1,60 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _cve/2018-11769:
-
-====================================================
-CVE-2018-11769: Apache CouchDB Remote Code Execution
-====================================================
-
-:Date: 08.08.2018
-
-:Affected: Apache CouchDB 1.x and ≤2.1.2
-
-:Severity: Low
-
-:Vendor: The Apache Software Foundation
-
-Description
-===========
-
-CouchDB administrative users can configure the database server via HTTP(S). Due
-to insufficient validation of administrator-supplied configuration settings via
-the HTTP API, it is possible for a CouchDB administrator user to escalate their
-privileges to that of the operating system’s user under which CouchDB runs, by
-bypassing the blacklist of configuration settings that are not allowed to be
-modified via the HTTP API.
-
-This privilege escalation effectively allows a CouchDB admin user to gain
-arbitrary remote code execution, bypassing mitigations for
-:ref:`CVE-2017-12636 <cve/2017-12636>` and :ref:`CVE-2018-8007 <cve/2018-8007>`.
-
-Mitigation
-==========
-
-All users should upgrade to CouchDB :ref:`2.2.0 <release/2.2.0>`.
-
-Upgrades from previous 2.x versions in the same series should be seamless.
-
-Users still on CouchDB 1.x should be advised that the Apache CouchDB team no
-longer supports 1.x.
-
-In-place mitigation (on any 1.x release, or 2.x prior to 2.2.0) is possible by
-removing the ``_config`` route from the ``default.ini`` file, as follows:
-
-   .. code-block:: text
-
-    [httpd_global_handlers]
-    ;_config = {couch_httpd_misc_handlers, handle_config_req}
-
-or by blocking access to the `/_config` (1.x) or `/_node/*/_config` routes at a reverse
-proxy in front of the service.
diff --git a/src/cve/2018-17188.rst b/src/cve/2018-17188.rst
deleted file mode 100644
index f7bd045..0000000
--- a/src/cve/2018-17188.rst
+++ /dev/null
@@ -1,67 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _cve/2018-17188:
-
-===========================================================
-CVE-2018-17188: Apache CouchDB Remote Privilege Escalations
-===========================================================
-
-:Date: 17.12.2018
-
-:Affected: All Versions of Apache CouchDB
-
-:Severity: Medium
-
-:Vendor: The Apache Software Foundation
-
-Description
-===========
-
-Prior to CouchDB version 2.3.0, CouchDB allowed for runtime-configuration of key
-components of the database. In some cases, this led to vulnerabilities where
-CouchDB admin users could access the underlying operating system as the CouchDB
-user. Together with other vulnerabilities, it allowed full system entry for
-unauthenticated users.
-
-These vulnerabilities were fixed and disclosed in the following CVE reports:
-
-- :ref:`CVE-2018-11769: Apache CouchDB Remote Code Execution <cve/2018-11769>`
-- :ref:`CVE-2018-8007: Apache CouchDB Remote Code Execution <cve/2018-8007>`
-- :ref:`CVE-2017-12636: Apache CouchDB Remote Code Execution <cve/2017-12636>`
-- :ref:`CVE-2017-12635: Apache CouchDB Remote Privilege Escalation <cve/2017-12635>`
-
-Rather than waiting for new vulnerabilities to be discovered, and fixing them
-as they come up, the CouchDB development team decided to make changes to avoid
-this entire class of vulnerabilities.
-
-With CouchDB version 2.3.0, CouchDB can no longer configure key components at
-runtime. While some flexibility is needed for speciality configurations of
-CouchDB, this configuration was moved from runtime to start-up time, and as
-such it now requires shell access to the CouchDB server.
-
-This closes all future paths for vulnerabilities of this type.
-
-Mitigation
-==========
-
-All users should upgrade to CouchDB :ref:`2.3.0 <release/2.3.0>`.
-
-Upgrades from previous 2.x versions in the same series should be
-seamless.
-
-Users on earlier versions should consult with upgrade notes.
-
-Credit
-======
-
-This issue was discovered by the Apple Information Security team.
diff --git a/src/cve/2018-8007.rst b/src/cve/2018-8007.rst
deleted file mode 100644
index c99d6bd..0000000
--- a/src/cve/2018-8007.rst
+++ /dev/null
@@ -1,58 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _cve/2018-8007:
-
-====================================================
-CVE-2018-8007: Apache CouchDB Remote Code Execution
-====================================================
-
-:Date: 30.04.2018
-
-:Affected: All Versions of Apache CouchDB
-
-:Severity: Low
-
-:Vendor: The Apache Software Foundation
-
-Description
-===========
-
-CouchDB administrative users can configure the database server via HTTP(S). Due
-to insufficient validation of administrator-supplied configuration settings via
-the HTTP API, it is possible for a CouchDB administrator user to escalate their
-privileges to that of the operating system's user that CouchDB runs under, by
-bypassing the blacklist of configuration settings that are not allowed to be
-modified via the HTTP API.
-
-This privilege escalation effectively allows a CouchDB admin user to gain
-arbitrary remote code execution, bypassing
-:ref:`CVE-2017-12636 <cve/2017-12636>`
-
-Mitigation
-==========
-
-All users should upgrade to CouchDB :ref:`1.7.2 <release/1.7.2>` or
-:ref:`2.1.2 <release/2.1.2>`.
-
-Upgrades from previous 1.x and 2.x versions in the same series should be
-seamless.
-
-Users on earlier versions, or users upgrading from 1.x to 2.x should consult
-with upgrade notes.
-
-Credit
-======
-
-This issue was discovered by Francesco Oddo of `MDSec Labs`_.
-
-.. _MDSec Labs: https://www.mdsec.co.uk/
diff --git a/src/cve/2020-1955.rst b/src/cve/2020-1955.rst
deleted file mode 100644
index 6766f20..0000000
--- a/src/cve/2020-1955.rst
+++ /dev/null
@@ -1,60 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _cve/2020-1955:
-
-===========================================================
-CVE-2020-1955: Apache CouchDB Remote Privilege Escalation
-===========================================================
-
-:Date: 19.05.2020
-
-:Affected: 3.0.0
-
-:Severity: Medium
-
-:Vendor: The Apache Software Foundation
-
-Description
-===========
-
-CouchDB version 3.0.0 shipped with a new configuration setting that
-governs access control to the entire database server called
-`require_valid_user_except_for_up`. It was meant as an extension to the
-long-standing setting `require_valid_user`, which in turn requires that
-any and all requests to CouchDB will have to be made with valid
-credentials, effectively forbidding any anonymous requests.
-
-The new `require_valid_user_except_for_up` is an off-by-default setting
-that was meant to allow requiring valid credentials for all endpoints
-except for the `/_up` endpoint.
-
-However, the implementation of this contained an error that led to
-credentials not being enforced on any endpoint when the setting was enabled.
-
-CouchDB versions :ref:`3.0.1 <release/3.0.1>` and :ref:`3.1.0
-<release/3.1.0>` fix this issue.
-
-Mitigation
-==========
-
-Users who have not enabled `require_valid_user_except_for_up` are not
-affected.
-
-Users who have it enabled can either disable it again, or upgrade to
-CouchDB versions :ref:`3.0.1 <release/3.0.1>` and :ref:`3.1.0
-<release/3.1.0>`
-
-Credit
-======
-
-This issue was discovered by Stefan Klein.
diff --git a/src/cve/2021-38295.rst b/src/cve/2021-38295.rst
deleted file mode 100644
index db323de..0000000
--- a/src/cve/2021-38295.rst
+++ /dev/null
@@ -1,58 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _cve/2021-38295:
-
-===========================================================
-CVE-2021-38295: Apache CouchDB Privilege Escalation
-===========================================================
-
-:Date: 12.10.2021
-
-:Affected: 3.1.1 and below
-
-:Severity: Low
-
-:Vendor: The Apache Software Foundation
-
-Description
-===========
-
-A malicious user with permission to create documents in a database is able
-to attach an HTML attachment to a document. If a CouchDB admin opens that
-attachment in a browser, e.g. via the CouchDB admin interface Fauxton,
-any JavaScript code embedded in that HTML attachment will be executed within
-the security context of that admin. A similar route is available with the
-already deprecated `_show` and `_list` functionality.
-
-This *privilege escalation* vulnerability allows an attacker to add or remove
-data in any database or make configuration changes.
-
-Mitigation
-==========
-
-CouchDB :ref:`3.2.0 <release/3.2.0>` and onwards adds `Content-Security-Policy`
-headers for all attachment, `_show` and `_list` requests. This breaks certain
-niche use-cases and there are configuration options to restore the previous
-behaviour for those who need it.
-
-CouchDB :ref:`3.1.2 <release/3.1.2>` defaults to the previous behaviour, but
-adds configuration options to turn `Content-Security-Policy` headers on for
-all affected requests.
-
-Credit
-======
-
-This issue was identified by `Cory Sabol`_ of `Secure Ideas`_.
-
-.. _Secure Ideas: https://secureideas.com/
-.. _Cory Sabol: mailto:cory@secureideas.com
diff --git a/src/cve/2022-24706.rst b/src/cve/2022-24706.rst
deleted file mode 100644
index 82b2215..0000000
--- a/src/cve/2022-24706.rst
+++ /dev/null
@@ -1,65 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _cve/2022-24706:
-
-===========================================================
-CVE-2022-24706: Apache CouchDB Remote Privilege Escalation
-===========================================================
-
-:Date: 25.04.2022
-
-:Affected: 3.2.1 and below
-
-:Severity: Critical
-
-:Vendor: The Apache Software Foundation
-
-Description
-===========
-
-An attacker can access an improperly secured default installation without
-authenticating and gain admin privileges.
-
-1. CouchDB opens a random network port, bound to all available interfaces
-   in anticipation of clustered operation and/or runtime introspection. A
-   utility process called `epmd` advertises that random port to the network.
-   `epmd` itself listens on a fixed port.
-2. CouchDB packaging previously chose a default `cookie` value for single-node
-   as well as clustered installations. That cookie authenticates any
-   communication between Erlang nodes.
-
-The `CouchDB documentation has always made recommendations`_ for properly securing
-an installation, but not all users follow the advice.
-
-We recommend a firewall in front of all CouchDB installations. The full CouchDB API
-is available on registered port `5984` and this is the only port that needs to be
-exposed for a single-node install. Installations that do not expose the separate
-distribution port to external access are not vulnerable.
-
-Mitigation
-==========
-
-CouchDB :ref:`3.2.2 <release/3.2.2>` and onwards will refuse to start with
-the former default Erlang cookie value of `monster`. Installations that
-upgrade to this version are forced to choose a different value.
-
-In addition, all binary packages have been updated to bind `epmd` as well
-as the CouchDB distribution port to `127.0.0.1` and/or `::1` respectively.
-
-Credit
-======
-
-This issue was identified by `Alex Vandiver`_.
-
-.. _Alex Vandiver: mailto:alexmv@zulip.com
-.. _CouchDB documentation has always made recommendations: https://docs.couchdb.org/en/stable/setup/cluster.html
diff --git a/src/cve/index.rst b/src/cve/index.rst
deleted file mode 100644
index 8807d04..0000000
--- a/src/cve/index.rst
+++ /dev/null
@@ -1,73 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _cve:
-
-======================
-Security Issues / CVEs
-======================
-
-.. toctree::
-    :maxdepth: 1
-    :glob:
-
-    *
-
-.. _cve/report:
-
-===================================================
-Reporting New Security Problems with Apache CouchDB
-===================================================
-
-The Apache Software Foundation takes a very active stance in eliminating
-security problems and denial of service attacks against Apache CouchDB.
-
-We strongly encourage folks to report such problems to our private security
-mailing list first, before disclosing them in a public forum.
-
-Please note that the security mailing list should only be used for reporting
-undisclosed security vulnerabilities in Apache CouchDB and managing the
-process of fixing such vulnerabilities. We cannot accept regular bug reports
-or other queries at this address. All mail sent to this address that does not
-relate to an undisclosed security problem in the Apache CouchDB source code
-will be ignored.
-
-If you need to report a bug that isn't an undisclosed security vulnerability,
-please use the `bug reporting page`_.
-
-Questions about:
-
-- How to configure CouchDB securely
-- If a vulnerability applies to your particular application
-- Obtaining further information on a published vulnerability
-- Availability of patches and/or new releases
-
-should be addressed to the `users mailing list`_. Please see the `mailing
-lists page`_ for details of how to subscribe.
-
-The private security mailing address is: `security@couchdb.apache.org`_
-
-Please read `how the Apache Software Foundation handles security`_ reports to
-know what to expect.
-
-Note that all networked servers are subject to denial of service attacks,
-and we cannot promise magic workarounds to generic problems (such as a client
-streaming lots of data to your server, or re-requesting the same URL
-repeatedly). In general our philosophy is to avoid any attacks which can
-cause the server to consume resources in a non-linear relationship to the
-size of inputs.
-
-.. _bug reporting page: https://github.com/apache/couchdb/issues
-.. _mailing lists page: http://couchdb.apache.org/#mailing-list
-.. _how the Apache Software Foundation handles security: http://apache.org/security/committers.html
-.. _security@couchdb.apache.org: mailto:security@couchdb.apache.org
-.. _users mailing list: mailto:user@couchdb.apache.org
diff --git a/src/ddocs/ddocs.rst b/src/ddocs/ddocs.rst
deleted file mode 100644
index 4f2cd31..0000000
--- a/src/ddocs/ddocs.rst
+++ /dev/null
@@ -1,905 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. default-domain:: js
-
-.. _ddocs:
-
-================
-Design Documents
-================
-
-In this section we'll show how to write design documents, using the built-in
-:ref:`JavaScript Query Server <query-server/js>`.
-
-But before we start to write our first document, let's take a look at the list
-of common objects that will be used during our code journey - we'll be using
-them extensively within each function:
-
-- :ref:`Database information object <dbinfo_object>`
-- :ref:`Request object <request_object>`
-- :ref:`Response object <response_object>`
-- :ref:`UserCtx object <userctx_object>`
-- :ref:`Database Security object <security_object>`
-- :ref:`Guide to JavaScript Query Server <query-server/js>`
-
-Creation and Structure
-======================
-
-Design documents contain functions such as view and update functions. These functions
-are executed when requested.
-
-Design documents are denoted by an id field with the format ``_design/{name}``. Their
-structure follows the example below.
-
-**Example**:
-
-.. code-block:: json
-
-    {
-        "_id": "_design/example",
-        "views": {
-            "view-number-one": {
-                "map": "function (doc) {/* function code here - see below */}"
-            },
-            "view-number-two": {
-                "map": "function (doc) {/* function code here - see below */}",
-                "reduce": "function (keys, values, rereduce) {/* function code here - see below */}"
-            }
-        },
-        "updates": {
-            "updatefun1": "function(doc,req) {/* function code here - see below */}",
-            "updatefun2": "function(doc,req) {/* function code here - see below */}"
-        },
-        "filters": {
-            "filterfunction1": "function(doc, req){ /* function code here - see below */ }"
-        },
-        "validate_doc_update": "function(newDoc, oldDoc, userCtx, secObj) { /* function code here - see below */ }",
-        "language": "javascript"
-    }
-
-As you can see, a design document can include multiple functions of the same type. The
-example defines two views, both of which have a map function and one of which has a
-reduce function. It also defines two update functions and one filter function. The
-Validate Document Update function is a special case, as each design document cannot
-contain more than one of those.
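
Since a design document can hold at most one ``validate_doc_update`` function,
it deserves a quick sketch. The validator below is hypothetical (the field
names are illustrative); CouchDB rejects a write whenever the function throws
an object with a ``forbidden`` key:

```javascript
// Hypothetical validator: reject 'post' documents that lack an author.
// Throwing {forbidden: ...} makes CouchDB refuse the write with a 403.
function validate(newDoc, oldDoc, userCtx, secObj) {
    if (newDoc.type === 'post' && !newDoc.author) {
        throw({forbidden: 'posts must have an author'});
    }
}
```

In a design document this function would be stored as a string under the
``validate_doc_update`` field, like the other functions above.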
-
-.. _viewfun:
-
-View Functions
-==============
-
-Views are the primary tool used for querying and reporting on CouchDB databases.
-
-.. _mapfun:
-
-Map Functions
--------------
-
-.. function:: mapfun(doc)
-
-   :param doc: The document that is being processed
-
-Map functions accept a single document as the argument and (optionally)
-:func:`emit` key/value pairs that are stored in a view.
-
-.. code-block:: javascript
-
-    function (doc) {
-      if (doc.type === 'post' && doc.tags && Array.isArray(doc.tags)) {
-        doc.tags.forEach(function (tag) {
-          emit(tag.toLowerCase(), 1);
-        });
-      }
-    }
-
-In this example a key/value pair is emitted for each value in the `tags` array
-of a document with a `type` of "post". Note that :func:`emit` may be called many
-times for a single document, so the same document may be available under
-several different keys.
-
-Also keep in mind that each document is *sealed* to prevent the situation where
-one map function changes document state and another receives a modified version.
-
-For efficiency reasons, documents are passed to a group of map functions - each
-document is processed by a group of map functions from all views of the related
-design document. This means that if you trigger an index update for one view in
-the design document, all others will get updated too.
-
-Since version `1.1.0`, `map` supports :ref:`CommonJS <commonjs>` modules and
-the :func:`require` function.
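
In practice the shared module lives in the design document itself: code placed
under the special ``lib`` member of ``views`` can be loaded with
:func:`require`. A sketch (the module name and contents are illustrative):

```json
{
    "_id": "_design/example",
    "views": {
        "lib": {
            "helpers": "exports.normalize = function (tag) { return tag.toLowerCase(); };"
        },
        "tags": {
            "map": "function (doc) { var h = require('views/lib/helpers'); if (doc.tags) { doc.tags.forEach(function (t) { emit(h.normalize(t), 1); }); } }"
        }
    }
}
```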
-
-.. _reducefun:
-
-Reduce and Rereduce Functions
------------------------------
-
-.. function:: redfun(keys, values[, rereduce])
-
-    :param keys: Array of pairs of key-docid for related map function results.
-                 Always ``null`` if rereduce is running (has ``true`` value).
-    :param values: Array of map function result values.
-    :param rereduce: Boolean flag to indicate a rereduce run.
-
-    :return: Reduces `values`
-
-Reduce functions take two required arguments - the lists of keys and values
-produced by the related map function - and an optional third argument that
-indicates whether `rereduce` mode is active. In `rereduce` mode the values
-are the results of previous reduce calls, so there is no information about
-the related `keys` (the first argument is ``null``).
-
-Note that if the result of a `reduce` function is longer than the initial
-values list then a Query Server error will be raised. However, this behavior
-can be disabled by setting ``reduce_limit`` config option to ``false``:
-
-.. code-block:: ini
-
-    [query_server_config]
-    reduce_limit = false
-
-While disabling ``reduce_limit`` might be useful for debugging purposes,
-remember that the main task of reduce functions is to *reduce* the mapped
-result, not to make it bigger. Generally, your reduce function should converge
-rapidly to a single value - which could be an array or similar object.
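
As a minimal sketch of such a converging function, here is a hand-written
total that behaves the same in both phases; in production you would simply use
the built-in ``_sum`` (or the ``sum()`` helper the Query Server provides):

```javascript
// Hand-rolled total: values are numbers in the reduce phase and partial
// sums in the rereduce phase, so the same fold works for both.
function redfun(keys, values, rereduce) {
    return values.reduce(function (acc, v) { return acc + v; }, 0);
}
```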
-
-.. _reducefun/builtin:
-
-Built-in Reduce Functions
-^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Additionally, CouchDB has a set of built-in reduce functions. These are
-implemented in Erlang and run inside CouchDB, so they are much faster than the
-equivalent JavaScript functions.
-
-.. data:: _approx_count_distinct
-
-.. versionadded:: 2.2
-
-Approximates the number of distinct keys in a view index using a variant of the
-`HyperLogLog`_ algorithm. This algorithm enables an efficient, parallelizable
-computation of cardinality using fixed memory resources. CouchDB has configured
-the underlying data structure to have a relative error of ~2%.
-
-.. _HyperLogLog: https://en.wikipedia.org/wiki/HyperLogLog
-
-As this reducer ignores the emitted values entirely, an invocation with
-``group=true`` will simply return a value of 1 for every distinct key in the
-view. In the case of array keys, querying the view with a ``group_level``
-specified will return the number of distinct keys that share the common group
-prefix in each row. The algorithm is also cognizant of the ``startkey`` and
-``endkey`` boundaries and will return the number of distinct keys within the
-specified key range.
-
-A final note regarding Unicode collation: this reduce function uses the binary
-representation of each key in the index directly as input to the HyperLogLog
-filter. As such, it will (incorrectly) consider keys that are not byte identical
-but that compare equal according to the Unicode collation rules to be distinct
-keys, and thus has the potential to overestimate the cardinality of the key
-space if a large number of such keys exist.
-
-.. data:: _count
-
-Counts the number of values in the index with a given key. This could be
-implemented in JavaScript as:
-
-.. code-block:: javascript
-
-    // could be replaced by _count
-    function(keys, values, rereduce) {
-        if (rereduce) {
-            return sum(values);
-        } else {
-            return values.length;
-        }
-    }
-
-.. data:: _stats
-
-Computes the following quantities for numeric values associated with each key:
-``sum``, ``min``, ``max``, ``count``, and ``sumsqr``. The behavior of the
-``_stats`` function varies depending on the output of the map function. The
-simplest case is when the map phase emits a single numeric value for each key.
-In this case the ``_stats`` function is equivalent to the following JavaScript:
-
-.. code-block:: javascript
-
-    // could be replaced by _stats
-    function(keys, values, rereduce) {
-        if (rereduce) {
-            return {
-                'sum': values.reduce(function(a, b) { return a + b.sum }, 0),
-                'min': values.reduce(function(a, b) { return Math.min(a, b.min) }, Infinity),
-                'max': values.reduce(function(a, b) { return Math.max(a, b.max) }, -Infinity),
-                'count': values.reduce(function(a, b) { return a + b.count }, 0),
-                'sumsqr': values.reduce(function(a, b) { return a + b.sumsqr }, 0)
-            }
-        } else {
-            return {
-                'sum': sum(values),
-                'min': Math.min.apply(null, values),
-                'max': Math.max.apply(null, values),
-                'count': values.length,
-                'sumsqr': (function() {
-                var sumsqr = 0;
-
-                values.forEach(function (value) {
-                    sumsqr += value * value;
-                });
-
-                return sumsqr;
-                })(),
-            }
-        }
-    }
-
-The ``_stats`` function will also work with "pre-aggregated" values from a map
-phase. A map function that emits an object containing ``sum``, ``min``, ``max``,
-``count``, and ``sumsqr`` keys and numeric values for each can use the
-``_stats`` function to combine these results with the data from other documents.
-The emitted object may contain other keys (these are ignored by the reducer),
-and it is also possible to mix raw numeric values and pre-aggregated objects
-in a single view and obtain the correct aggregated statistics.
-
-Finally, ``_stats`` can operate on key-value pairs where each value is an array
-comprised of numbers or pre-aggregated objects. In this case **every** value
-emitted from the map function must be an array, and the arrays must all be the
-same length, as ``_stats`` will compute the statistical quantities above
-*independently* for each element in the array. Users who want to compute
-statistics on multiple values from a single document should either ``emit`` each
-value into the index separately, or compute the statistics for the set of values
-using the JavaScript example above and emit a pre-aggregated object.
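
A map function that wants to feed ``_stats`` pre-aggregated data could build
the object with a small helper like this (the helper name and document fields
are illustrative):

```javascript
// Builds the object shape _stats expects from a list of numbers.
function preAggregate(numbers) {
    return {
        sum: numbers.reduce(function (a, b) { return a + b; }, 0),
        min: Math.min.apply(null, numbers),
        max: Math.max.apply(null, numbers),
        count: numbers.length,
        sumsqr: numbers.reduce(function (a, b) { return a + b * b; }, 0)
    };
}

// Inside a map function one would then emit, for example:
//   emit(doc.team, preAggregate(doc.scores));
```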
-
-.. data:: _sum
-
-In its simplest variation, ``_sum`` sums the numeric values associated with each
-key, as in the following JavaScript:
-
-.. code-block:: javascript
-
-    // could be replaced by _sum
-    function(keys, values) {
-        return sum(values);
-    }
-
-As with ``_stats``, the ``_sum`` function offers a number of extended
-capabilities. The ``_sum`` function requires that map values be numbers, arrays
-of numbers, or objects. When presented with array output from a map function,
-``_sum`` will compute the sum for every element of the array. A bare numeric
-value will be treated as an array with a single element, and arrays with fewer
-elements will be treated as if they contained zeroes for every additional
-element in the longest emitted array. As an example, consider the following map
-output:
-
-.. code-block:: javascript
-
-    {"total_rows":5, "offset":0, "rows": [
-        {"id":"id1", "key":"abc", "value": 2},
-        {"id":"id2", "key":"abc", "value": [3,5,7]},
-        {"id":"id2", "key":"def", "value": [0,0,0,42]},
-        {"id":"id2", "key":"ghi", "value": 1},
-        {"id":"id1", "key":"ghi", "value": 3}
-    ]}
-
-The ``_sum`` for this output without any grouping would be:
-
-.. code-block:: javascript
-
-    {"rows": [
-        {"key":null, "value": [9,5,7,42]}
-    ]}
-
-while the grouped output would be
-
-.. code-block:: javascript
-
-    {"rows": [
-        {"key":"abc", "value": [5,5,7]},
-        {"key":"def", "value": [0,0,0,42]},
-        {"key":"ghi", "value": 4}
-    ]}
-
-This is in contrast to the behavior of the ``_stats`` function which requires
-that all emitted values be arrays of identical length if any array is emitted.
-
-It is also possible to have ``_sum`` recursively descend through an emitted
-object and compute the sums for every field in the object. Objects *cannot* be
-mixed with other data structures. Objects can be arbitrarily nested, provided
-that the values for all fields are themselves numbers, arrays of numbers, or
-objects.
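
The recursive descent can be pictured with the following rough JavaScript
model (the real reducer is implemented in Erlang; this sketch ignores the
array and mixed cases described above):

```javascript
// Rough model of _sum over emitted objects: numbers are added field by
// field, and nested objects are merged recursively.
function objSum(values) {
    return values.reduce(function (acc, value) {
        Object.keys(value).forEach(function (k) {
            if (typeof value[k] === 'number') {
                acc[k] = (acc[k] || 0) + value[k];
            } else {
                acc[k] = objSum([acc[k] || {}, value[k]]);
            }
        });
        return acc;
    }, {});
}
```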
-
-.. note::
-    **Why don't reduce functions support CommonJS modules?**
-
-    While `map` functions have limited access to stored modules through
-    :func:`require`, there is no such feature for `reduce` functions.
-    The reason lies deep inside the way `map` and `reduce`
-    functions are processed by the Query Server. Let's take a look at `map`
-    functions first:
-
-    #. CouchDB sends all `map` functions in a processed design document to the
-       Query Server.
-    #. the Query Server handles them one by one, compiles and puts them onto an
-       internal stack.
-    #. after all `map` functions have been processed, CouchDB will send the
-       remaining documents for indexing, one by one.
-    #. the Query Server receives the document object and applies it to every
-       function from the stack. The emitted results are then joined into a
-       single array and sent back to CouchDB.
-
-    Now let's see how `reduce` functions are handled:
-
-    #. CouchDB sends *as a single command* the list of available `reduce`
-       functions with the result list of key-value pairs that were previously
-       returned from the `map` functions.
-    #. the Query Server compiles the reduce functions and applies them to the
-       key-value lists. The reduced result is sent back to CouchDB.
-
-    As you may note, `reduce` functions are applied in a single shot to the map
-    results while `map` functions are applied to documents one by one. This
-    means that it's possible for `map` functions to precompile CommonJS
-    libraries and use them during the entire view processing, but for `reduce`
-    functions they would be compiled again and again for each view result
-    reduction, which would lead to performance degradation.
-
-.. _showfun:
-
-Show Functions
-==============
-
-.. warning::
-
-    Show functions are deprecated in CouchDB 3.0, and will be removed in CouchDB 4.0.
-
-.. function:: showfun(doc, req)
-
-    :param doc: The document that is being processed; may be omitted.
-    :param req: :ref:`Request object <request_object>`.
-
-    :return: :ref:`Response object <response_object>`
-    :rtype: object or string
-
-Show functions are used to represent documents in various formats, commonly as
-HTML pages with nice formatting. They can also be used to run server-side
-functions without requiring a pre-existing document.
-
-A basic show function could look like this:
-
-.. code-block:: javascript
-
-    function(doc, req){
-        if (doc) {
-            return "Hello from " + doc._id + "!";
-        } else {
-            return "Hello, world!";
-        }
-    }
-
-There is also a simpler way to return JSON-encoded data:
-
-.. code-block:: javascript
-
-    function(doc, req){
-        return {
-            'json': {
-                'id': doc['_id'],
-                'rev': doc['_rev']
-            }
-        }
-    }
-
-and even files (this one is the CouchDB logo):
-
-.. code-block:: javascript
-
-    function(doc, req){
-        return {
-            'headers': {
-                'Content-Type' : 'image/png',
-            },
-            'base64': ''.concat(
-                'iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAMAAAAoLQ9TAAAAsV',
-                'BMVEUAAAD////////////////////////5ur3rEBn////////////////wDBL/',
-                'AADuBAe9EB3IEBz/7+//X1/qBQn2AgP/f3/ilpzsDxfpChDtDhXeCA76AQH/v7',
-                '/84eLyWV/uc3bJPEf/Dw/uw8bRWmP1h4zxSlD6YGHuQ0f6g4XyQkXvCA36MDH6',
-                'wMH/z8/yAwX64ODeh47BHiv/Ly/20dLQLTj98PDXWmP/Pz//39/wGyJ7Iy9JAA',
-                'AADHRSTlMAbw8vf08/bz+Pv19jK/W3AAAAg0lEQVR4Xp3LRQ4DQRBD0QqTm4Y5',
-                'zMxw/4OleiJlHeUtv2X6RbNO1Uqj9g0RMCuQO0vBIg4vMFeOpCWIWmDOw82fZx',
-                'vaND1c8OG4vrdOqD8YwgpDYDxRgkSm5rwu0nQVBJuMg++pLXZyr5jnc1BaH4GT',
-                'LvEliY253nA3pVhQqdPt0f/erJkMGMB8xucAAAAASUVORK5CYII=')
-        }
-    }
-
-But what if you need to represent data in different formats via a single
-function? The :func:`registerType` and :func:`provides` functions are your
-best friends in that case:
-
-.. code-block:: javascript
-
-    function(doc, req){
-        provides('json', function(){
-            return {'json': doc}
-        });
-        provides('html', function(){
-            return '<pre>' + toJSON(doc) + '</pre>'
-        })
-        provides('xml', function(){
-            return {
-                'headers': {'Content-Type': 'application/xml'},
-                'body' : ''.concat(
-                    '<?xml version="1.0" encoding="utf-8"?>\n',
-                    '<doc>',
-                    (function(){
-                        var escape = function(s){
-                            return s.replace(/&/g, '&amp;')
-                                    .replace(/</g, '&lt;')
-                                    .replace(/>/g, '&gt;')
-                                    .replace(/"/g, '&quot;');
-                        };
-                        var content = '';
-                        for(var key in doc){
-                            if(!doc.hasOwnProperty(key)) continue;
-                            var value = escape(toJSON(doc[key]));
-                            var tag = escape(key);
-                            content += ''.concat(
-                                '<' + tag + '>',
-                                value,
-                                '</' + tag + '>'
-                            )
-                        }
-                        }
-                        return content;
-                    })(),
-                    '</doc>'
-                )
-            }
-        })
-        registerType('text-json', 'text/json')
-        provides('text-json', function(){
-            return toJSON(doc);
-        })
-    }
-
-This function can return an `html`, `json`, `xml`, or custom `text json`
-representation of the same document object, with the same processing rules.
-The `xml` provider would probably need more care to correctly handle nested
-objects and keys with invalid characters, but you get the idea!
-
-.. seealso::
-    CouchDB Guide:
-        - `Show Functions <http://guide.couchdb.org/editions/1/en/show.html>`_
-
-.. _listfun:
-
-List Functions
-==============
-
-.. warning::
-
-    List functions are deprecated in CouchDB 3.0, and will be removed in CouchDB 4.0.
-
-.. function:: listfun(head, req)
-
-    :param head: :ref:`view_head_info_object`
-    :param req: :ref:`Request object <request_object>`.
-
-    :return: Last chunk.
-    :rtype: string
-
-While :ref:`showfun` are used to customize document presentation, :ref:`listfun`
-are used for the same purpose, but on :ref:`viewfun` results.
-
-The following list function formats the view and represents it as a very simple
-HTML page:
-
-.. code-block:: javascript
-
-    function(head, req){
-        start({
-            'headers': {
-                'Content-Type': 'text/html'
-            }
-        });
-        send('<html><body><table>');
-        send('<tr><th>ID</th><th>Key</th><th>Value</th></tr>');
-        while(row = getRow()){
-            send(''.concat(
-                '<tr>',
-                '<td>' + toJSON(row.id) + '</td>',
-                '<td>' + toJSON(row.key) + '</td>',
-                '<td>' + toJSON(row.value) + '</td>',
-                '</tr>'
-            ));
-        }
-        send('</table></body></html>');
-    }
-
-Templates and styles could obviously be used to present data in a nicer fashion,
-but this is an excellent starting point. Note that you may also use
-:func:`registerType` and :func:`provides` functions in a similar way as for
-:ref:`showfun`! However, note that :func:`provides` expects the return value to
-be a string when used inside a list function, so you'll need to use
-:func:`start` to set any custom headers and stringify your JSON before
-returning it.
-
-.. seealso::
-    CouchDB Guide:
-        - `Transforming Views with List Functions
-          <http://guide.couchdb.org/draft/transforming.html>`_
-
-.. _updatefun:
-
-Update Functions
-================
-
-.. function:: updatefun(doc, req)
-
-    :param doc: The document that is being processed.
-    :param req: :ref:`request_object`
-
-    :returns: Two-element array: the first element is the (updated or new)
-      document, which is committed to the database. If the first element
-      is ``null`` no document will be committed to the database.
-      If you are updating an existing document, it should already have an
-      ``_id`` set, and if you are creating a new document, make sure to set its
-      ``_id`` to something, either generated based on the input or the
-      ``req.uuid`` provided. The second element is the response that will
-      be sent back to the caller.
-
-Update handlers are functions that clients can request to invoke server-side
-logic that will create or update a document. This feature allows a range of use
-cases such as providing a server-side last modified timestamp, updating
-individual fields in a document without first getting the latest revision, etc.
-
-When the request to an update handler includes a document ID in the URL, the
-server will provide the function with the most recent version of that document.
-You can provide any other values needed by the update handler function via the
-``POST``/``PUT`` entity body or query string parameters of the request.
-
-A basic example that demonstrates all use-cases of update handlers:
-
-.. code-block:: javascript
-
-    function(doc, req){
-        if (!doc){
-            if ('id' in req && req['id']){
-                // create new document
-                return [{'_id': req['id']}, 'New World']
-            }
-            // change nothing in database
-            return [null, 'Empty World']
-        }
-        doc['world'] = 'hello';
-        doc['edited_by'] = req['userCtx']['name']
-        return [doc, 'Edited World!']
-    }
-
-.. _filterfun:
-
-Filter Functions
-================
-
-.. function:: filterfun(doc, req)
-
-    :param doc: The document that is being processed
-    :param req: :ref:`request_object`
-    :return: Boolean value: ``true`` means that `doc` passes the filter rules,
-      ``false`` means that it does not.
-
-Filter functions mostly act like :ref:`showfun` and :ref:`listfun`: they
-format, or rather *filter*, the :ref:`changes feed<changes>`.
-
-Classic Filters
----------------
-
-By default the changes feed emits all document changes in the database. But if
-you're only waiting for certain changes, processing every document is
-inefficient.
-
-Filters are special design document functions that allow the changes feed to
-emit only specific documents that pass filter rules.
-
-Let's assume that our database is a mailbox and we need to handle only new mail
-events (documents with the status `new`). Our filter function would look like
-this:
-
-.. code-block:: javascript
-
-    function(doc, req){
-        // we need only `mail` documents
-        if (doc.type != 'mail'){
-            return false;
-        }
-        // we're interested only in `new` ones
-        if (doc.status != 'new'){
-            return false;
-        }
-        return true; // passed!
-    }
-
-Filter functions must return ``true`` if a document passed all the rules.  Now,
-if you apply this function to the changes feed it will emit only changes about
-"new mails"::
-
-    GET /somedatabase/_changes?filter=mailbox/new_mail HTTP/1.1
-
-.. code-block:: javascript
-
-    {"results":[
-    {"seq":"1-g1AAAAF9eJzLYWBg4MhgTmHgz8tPSTV0MDQy1zMAQsMcoARTIkOS_P___7MymBMZc4EC7MmJKSmJqWaYynEakaQAJJPsoaYwgE1JM0o1TjQ3T2HgLM1LSU3LzEtNwa3fAaQ_HqQ_kQG3qgSQqnoCqvJYgCRDA5ACKpxPWOUCiMr9hFUegKi8T1jlA4hKkDuzAC2yZRo","id":"df8eca9da37dade42ee4d7aa3401f1dd","changes":[{"rev":"1-c2e0085a21d34fa1cecb6dc26a4ae657"}]},
-    {"seq":"9-g1AAAAIreJyVkEsKwjAURUMrqCOXoCuQ5MU0OrI70XyppcaRY92J7kR3ojupaSPUUgqWwAu85By4t0AITbJYo5k7aUNSAnyJ_SGFf4gEkvOyLPMsFtHRL8ZKaC1M0v3eq5ALP-X2a0G1xYKhgnONpmenjT04o_v5tOJ3LV5itTES_uP3FX9ppcAACaVsQAo38hNd_eVFt8ZklVljPqSPYLoH06PJhG0Cxq7-yhQcz-B4_fQCjFuqBjjewVF3E9cORoExSrpU_gHBTo5m","id":"df8eca9da37dade42ee4d7aa34024714","changes":[{"rev":"1-29d748a6e87b43db967fe338bcb08d74"}]},
-    ],
-    "last_seq":"10-g1AAAAIreJyVkEsKwjAURR9tQR25BF2B5GMaHdmdaNIk1FLjyLHuRHeiO9Gd1LQRaimFlsALvOQcuLcAgGkWKpjbs9I4wYSvkDu4cA-BALkoyzLPQhGc3GKSCqWEjrvfexVy6abc_SxQWwzRVHCuYHaxSpuj1aqfTyp-3-IlSrdakmH8oeKvrRSIkJhSNiKFjdyEm7uc6N6YTKo3iI_pw5se3vRsMiETE23WgzJ5x8s73n-9EMYNTUc4Pt5RdxPVDkYJYxR3qfwLwW6OZw"}
-
-Note that the value of ``last_seq`` is `10-..`, but we received only two
-records. The remaining changes were for documents that did not pass our
-filter.
-
-We probably need to filter the changes feed of our mailbox by more than a single
-status value. We're also interested in statuses like "spam" to update
-spam-filter heuristic rules, "outgoing" to let a mail daemon actually send
-mails, and so on. Creating many similar functions that do essentially the
-same work isn't a good idea - so we need a dynamic filter.
-
-You may have noticed that filter functions take a second argument named
-:ref:`request <request_object>`. This allows the creation of dynamic filters
-based on query parameters, :ref:`user context <userctx_object>` and more.
-
-The dynamic version of our filter looks like this:
-
-.. code-block:: javascript
-
-    function(doc, req){
-        // we need only `mail` documents
-        if (doc.type != 'mail'){
-            return false;
-        }
-        // we're interested only in requested status
-        if (doc.status != req.query.status){
-            return false;
-        }
-        return true; // passed!
-    }
-
-Now we can pass the `status` query parameter in the request to let our
-filter match only the required documents::
-
-    GET /somedatabase/_changes?filter=mailbox/by_status&status=new HTTP/1.1
-
-.. code-block:: javascript
-
-    {"results":[
-    {"seq":"1-g1AAAAF9eJzLYWBg4MhgTmHgz8tPSTV0MDQy1zMAQsMcoARTIkOS_P___7MymBMZc4EC7MmJKSmJqWaYynEakaQAJJPsoaYwgE1JM0o1TjQ3T2HgLM1LSU3LzEtNwa3fAaQ_HqQ_kQG3qgSQqnoCqvJYgCRDA5ACKpxPWOUCiMr9hFUegKi8T1jlA4hKkDuzAC2yZRo","id":"df8eca9da37dade42ee4d7aa3401f1dd","changes":[{"rev":"1-c2e0085a21d34fa1cecb6dc26a4ae657"}]},
-    {"seq":"9-g1AAAAIreJyVkEsKwjAURUMrqCOXoCuQ5MU0OrI70XyppcaRY92J7kR3ojupaSPUUgqWwAu85By4t0AITbJYo5k7aUNSAnyJ_SGFf4gEkvOyLPMsFtHRL8ZKaC1M0v3eq5ALP-X2a0G1xYKhgnONpmenjT04o_v5tOJ3LV5itTES_uP3FX9ppcAACaVsQAo38hNd_eVFt8ZklVljPqSPYLoH06PJhG0Cxq7-yhQcz-B4_fQCjFuqBjjewVF3E9cORoExSrpU_gHBTo5m","id":"df8eca9da37dade42ee4d7aa34024714","changes":[{"rev":"1-29d748a6e87b43db967fe338bcb08d74"}]},
-    ],
-    "last_seq":"10-g1AAAAIreJyVkEsKwjAURR9tQR25BF2B5GMaHdmdaNIk1FLjyLHuRHeiO9Gd1LQRaimFlsALvOQcuLcAgGkWKpjbs9I4wYSvkDu4cA-BALkoyzLPQhGc3GKSCqWEjrvfexVy6abc_SxQWwzRVHCuYHaxSpuj1aqfTyp-3-IlSrdakmH8oeKvrRSIkJhSNiKFjdyEm7uc6N6YTKo3iI_pw5se3vRsMiETE23WgzJ5x8s73n-9EMYNTUc4Pt5RdxPVDkYJYxR3qfwLwW6OZw"}
-
-We can easily change the filter's behavior with::
-
-    GET /somedatabase/_changes?filter=mailbox/by_status&status=spam HTTP/1.1
-
-.. code-block:: javascript
-
-    {"results":[
-    {"seq":"6-g1AAAAIreJyVkM0JwjAYQD9bQT05gk4gaWIaPdlNNL_UUuPJs26im-gmuklMjVClFFoCXyDJe_BSAsA4jxVM7VHpJEswWyC_ktJfRBzEzDlX5DGPDv5gJLlSXKfN560KMfdTbL4W-FgM1oQzpmByskqbvdWqnc8qfvvHCyTXWuBu_K7iz38VCOOUENqjwg79hIvfvOhamQahROoVYn3-I5huwXSvm5BJsTbLTk3B8QiO58-_YMoMkT0cr-BwdRElmFKSNKniDcAcjmM","id":"8960e91220798fc9f9d29d24ed612e0d","changes":[{"rev":"3-cc6ff71af716ddc2ba114967025c0ee0"}]},
-    ],
-    "last_seq":"10-g1AAAAIreJyVkEsKwjAURR9tQR25BF2B5GMaHdmdaNIk1FLjyLHuRHeiO9Gd1LQRaimFlsALvOQcuLcAgGkWKpjbs9I4wYSvkDu4cA-BALkoyzLPQhGc3GKSCqWEjrvfexVy6abc_SxQWwzRVHCuYHaxSpuj1aqfTyp-3-IlSrdakmH8oeKvrRSIkJhSNiKFjdyEm7uc6N6YTKo3iI_pw5se3vRsMiETE23WgzJ5x8s73n-9EMYNTUc4Pt5RdxPVDkYJYxR3qfwLwW6OZw"}
-
-Combining filters with a `continuous` feed lets you build powerful
-event-driven systems.
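-
-For example, combining the filter with a `continuous` feed subscribes the
-client to matching changes as they happen::
-
-    GET /somedatabase/_changes?feed=continuous&filter=mailbox/by_status&status=new HTTP/1.1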
-
-.. _viewfilter:
-
-View Filters
-------------
-
-View filters are the same as classic filters above, with one small difference:
-they use the view's `map` function, rather than a separate `filter` function,
-to filter the changes feed. Each time the `map` function emits a key-value
-pair, a change is returned. This avoids writing filter functions that mostly
-duplicate the work of an existing view.
-
-To use them just pass `filter=_view` and `view=designdoc/viewname` as request
-parameters to the :ref:`changes feed<changes>`::
-
-    GET /somedatabase/_changes?filter=_view&view=designdoc/viewname HTTP/1.1
-
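-A map function used this way acts as the filter through its ``emit`` calls:
-a document is included in the feed only if its `map` invocation emits at
-least one key-value pair. A hypothetical view equivalent to the classic
-``new_mail`` filter above could look like this:
-
-.. code-block:: javascript
-
-    function(doc) {
-        // emit only new mail documents; all others are filtered out
-        if (doc.type === 'mail' && doc.status === 'new') {
-            emit(doc._id, null);
-        }
-    }
-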
-.. note::
-    Since view filters use `map` functions as filters, they can't show any
-    dynamic behavior since :ref:`request object<request_object>` is not
-    available.
-
-.. seealso::
-    CouchDB Guide:
-        - `Guide to filter change notification
-          <http://guide.couchdb.org/draft/notifications.html#filters>`_
-
-.. _vdufun:
-
-Validate Document Update Functions
-==================================
-
-.. function:: validatefun(newDoc, oldDoc, userCtx, secObj)
-
-    :param newDoc: The new version of the document that will be stored.
-    :param oldDoc: The previous version of the document that is already stored.
-    :param userCtx: :ref:`userctx_object`
-    :param secObj: :ref:`security_object`
-
-    :throws: ``forbidden`` error to gracefully prevent document storing.
-    :throws: ``unauthorized`` error to prevent storage and allow the user to
-      re-auth.
-
-A design document may contain a function named `validate_doc_update`
-which can be used to prevent invalid or unauthorized document update requests
-from being stored.  The function is passed the new document from the update
-request, the current document stored in the database, a :ref:`userctx_object`
-containing information about the user writing the document (if present), and
-a :ref:`security_object` with lists of database security roles.
-
-Validation functions typically examine the structure of the new document to
-ensure that required fields are present and to verify that the requesting user
-should be allowed to make changes to the document properties.  For example,
-an application may require that a user must be authenticated in order to create
-a new document or that specific document fields be present when a document
-is updated. The validation function can abort the pending document write
-by throwing one of two error objects:
-
-.. code-block:: javascript
-
-    // user is not authorized to make the change but may re-authenticate
-    throw({ unauthorized: 'Error message here.' });
-
-    // change is not allowed
-    throw({ forbidden: 'Error message here.' });
-
-Document validation is optional, and each design document in the database may
-have at most one validation function.  When a write request is received for
-a given database, the validation function in each design document in that
-database is called in an unspecified order.  If any of the validation functions
-throw an error, the write will not succeed.
-
-**Example**: The ``_design/_auth`` ddoc from the `_users` database uses a
-validation function to ensure that documents contain some required fields and
-are only modified by a user with the ``_admin`` role:
-
-.. code-block:: javascript
-
-    function(newDoc, oldDoc, userCtx, secObj) {
-        if (newDoc._deleted === true) {
-            // allow deletes by admins and matching users
-            // without checking the other fields
-            if ((userCtx.roles.indexOf('_admin') !== -1) ||
-                (userCtx.name == oldDoc.name)) {
-                return;
-            } else {
-                throw({forbidden: 'Only admins may delete other user docs.'});
-            }
-        }
-
-        if ((oldDoc && oldDoc.type !== 'user') || newDoc.type !== 'user') {
-            throw({forbidden : 'doc.type must be user'});
-        } // we only allow user docs for now
-
-        if (!newDoc.name) {
-            throw({forbidden: 'doc.name is required'});
-        }
-
-        if (!newDoc.roles) {
-            throw({forbidden: 'doc.roles must exist'});
-        }
-
-        if (!isArray(newDoc.roles)) {
-            throw({forbidden: 'doc.roles must be an array'});
-        }
-
-        if (newDoc._id !== ('org.couchdb.user:' + newDoc.name)) {
-            throw({
-                forbidden: 'Doc ID must be of the form org.couchdb.user:name'
-            });
-        }
-
-        if (oldDoc) { // validate all updates
-            if (oldDoc.name !== newDoc.name) {
-                throw({forbidden: 'Usernames can not be changed.'});
-            }
-        }
-
-        if (newDoc.password_sha && !newDoc.salt) {
-            throw({
-                forbidden: 'Users with password_sha must have a salt. ' +
-                    'See /_utils/script/couch.js for example code.'
-            });
-        }
-
-        var is_server_or_database_admin = function(userCtx, secObj) {
-            // see if the user is a server admin
-            if(userCtx.roles.indexOf('_admin') !== -1) {
-                return true; // a server admin
-            }
-
-            // see if the user is a database admin specified by name
-            if(secObj && secObj.admins && secObj.admins.names) {
-                if(secObj.admins.names.indexOf(userCtx.name) !== -1) {
-                    return true; // database admin
-                }
-            }
-
-            // see if the user is a database admin specified by role
-            if(secObj && secObj.admins && secObj.admins.roles) {
-                var db_roles = secObj.admins.roles;
-                for(var idx = 0; idx < userCtx.roles.length; idx++) {
-                    var user_role = userCtx.roles[idx];
-                    if(db_roles.indexOf(user_role) !== -1) {
-                        return true; // role matches!
-                    }
-                }
-            }
-
-            return false; // default to no admin
-        };
-
-        if (!is_server_or_database_admin(userCtx, secObj)) {
-            if (oldDoc) { // validate non-admin updates
-                if (userCtx.name !== newDoc.name) {
-                    throw({
-                        forbidden: 'You may only update your own user document.'
-                    });
-                }
-                // validate role updates
-                var oldRoles = oldDoc.roles.sort();
-                var newRoles = newDoc.roles.sort();
-
-                if (oldRoles.length !== newRoles.length) {
-                    throw({forbidden: 'Only _admin may edit roles'});
-                }
-
-                for (var i = 0; i < oldRoles.length; i++) {
-                    if (oldRoles[i] !== newRoles[i]) {
-                        throw({forbidden: 'Only _admin may edit roles'});
-                    }
-                }
-            } else if (newDoc.roles.length > 0) {
-                throw({forbidden: 'Only _admin may set roles'});
-            }
-        }
-
-        // no system roles in users db
-        for (var i = 0; i < newDoc.roles.length; i++) {
-            if (newDoc.roles[i][0] === '_') {
-                throw({
-                    forbidden:
-                    'No system roles (starting with underscore) in users db.'
-                });
-            }
-        }
-
-        // no system names as names
-        if (newDoc.name[0] === '_') {
-            throw({forbidden: 'Username may not start with underscore.'});
-        }
-
-        var badUserNameChars = [':'];
-
-        for (var i = 0; i < badUserNameChars.length; i++) {
-            if (newDoc.name.indexOf(badUserNameChars[i]) >= 0) {
-                throw({forbidden: 'Character `' + badUserNameChars[i] +
-                        '` is not allowed in usernames.'});
-            }
-        }
-    }
-
-.. note::
-    The ``return`` statement is used only to exit the function; it has no
-    impact on the validation process.
-
-.. seealso::
-    CouchDB Guide:
-        - `Validation Functions
-          <http://guide.couchdb.org/editions/1/en/validation.html>`_
diff --git a/src/ddocs/index.rst b/src/ddocs/index.rst
deleted file mode 100644
index ad11d85..0000000
--- a/src/ddocs/index.rst
+++ /dev/null
@@ -1,47 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _indexes:
-
-================
-Design Documents
-================
-
-CouchDB supports special documents within databases known as "design
-documents". These documents, mostly driven by JavaScript you write, are used
-to build indexes, validate document updates, format query results, and filter
-replications.
-
-.. toctree::
-    :maxdepth: 2
-
-    ddocs
-    views/index
-    search
-
-*Note*: Previously, the functionality provided by CouchDB's design documents,
-in combination with document attachments, was referred to as "CouchApps." The
-general principle was that entire web applications could be hosted in CouchDB,
-without need for an additional application server.
-
-Use of CouchDB as a combined standalone database and application server is no
-longer recommended. A pure CouchDB web server application stack has
-significant limitations, including but not limited to the lack of
-fully-fledged fine-grained security, robust templating and scaffolding,
-complete developer tooling, and most importantly, a thriving ecosystem of
-developers, modules and frameworks to choose from.
-
-The developers of CouchDB believe that web developers should pick "the right
-tool for the right job". Use CouchDB as your database layer, in conjunction
-with any number of other server-side web application frameworks, such as the
-entire Node.js ecosystem, Python's Django and Flask, PHP's Drupal, Java's
-Apache Struts, and more.
diff --git a/src/ddocs/search.rst b/src/ddocs/search.rst
deleted file mode 100644
index 5dbc990..0000000
--- a/src/ddocs/search.rst
+++ /dev/null
@@ -1,1054 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _ddoc/search:
-
-======
-Search
-======
-
-Search indexes enable you to query a database by using the
-`Lucene Query Parser Syntax. <http://lucene.apache.org/core/4_3_0/queryparser/org/apache/lucene/queryparser/classic/package-summary.html#Overview>`_
-A search index uses one or more fields from your documents. You can use a
-search index to run queries, find documents based on the content they contain,
-or work with groups, facets, or geographical searches.
-
-.. warning::
-    Search cannot function unless it has a functioning, cluster-connected
-    Clouseau instance. See :ref:`Search Plugin Installation <install/search>`
-    for details.
-
-To create a search index, you add a JavaScript function to a design document in the
-database. An index builds after processing one search request or after the server detects
-a document update. The ``index`` function takes the following parameters:
-
-1.  Field name - The name of the field you want to use when you query the
-    index. If you set this parameter to ``default``, then this field is
-    queried if no field is specified in the query syntax.
-
-2.  Data that you want to index, for example, ``doc.address.country``.
-
-3.  (Optional) The third parameter includes the following fields: ``boost``,
-    ``facet``, ``index``, and ``store``. These fields are described in more
-    detail later.
-
-By default, a search index response returns 25 rows. The number of rows
-returned can be changed by using the ``limit`` parameter. Each response
-includes a ``bookmark`` field. You can include the value of the ``bookmark``
-field in later queries to page through the responses.
-
-*Example design document that defines a search index:*
-
-.. code-block:: javascript
-
-    {
-        "_id": "_design/search_example",
-        "indexes": {
-            "animals": {
-                "index": "function(doc){ ... }"
-            }
-        }
-    }
-
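-With a design document like this in place, you could query the ``animals``
-index and page through the results (assuming its index function indexes a
-``class`` field); for example::
-
-    GET /db/_design/search_example/_search/animals?q=class:mammal&limit=10 HTTP/1.1
-
-Passing the ``bookmark`` value from one response as the ``bookmark`` query
-parameter of the next request returns the following page of results.
-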
-A search index will inherit the partitioning type from the ``options.partitioned`` field
-of the design document that contains it.
-
-Index functions
-===============
-
-Attempting to index by using a data field that does not exist fails. To avoid
-this problem, use the appropriate
-:ref:`guard clause <ddoc/search/index_guard_clauses>`.
-
-.. note::
-    Your indexing functions operate in a memory-constrained environment
-    where the document itself forms a part of the memory that is used
-    in that environment. Your code's stack and document must fit inside this
-    memory. In other words, a document must be loaded in order to be indexed.
-    Documents are limited to a maximum size of 64 MB.
-
-.. note::
-    Within a search index, do not index the same field name with more than one data
-    type. If the same field name is indexed with different data types in the same search
-    index function, you might get an error when querying the search index that says the
-    field "was indexed without position data." For example, do not include both of these
-    lines in the same search index function, as they index the ``myfield`` field as two
-    different data types: a string ``"this is a string"`` and a number ``123``.
-
-.. code-block:: javascript
-
-    index("myfield", "this is a string");
-    index("myfield", 123);
-
-The function that is contained in the index field is a JavaScript function
-that is called for each document in the database. The function takes the
-document as a parameter, extracts some data from it, and then calls the
-built-in ``index`` function to index that data.
-
-The ``index`` function takes three parameters, where the third parameter is optional.
-
-The first parameter is the name of the field you intend to use when querying the index,
-and which is specified in the Lucene syntax portion of subsequent queries.
-An example appears in the following query:
-
-.. code-block:: javascript
-
-    query=color:red
-
-The Lucene field name ``color`` is the first parameter of the ``index`` function.
-
-The ``query`` parameter can be abbreviated to ``q``,
-so another way of writing the query is as follows:
-
-.. code-block:: javascript
-
-    q=color:red
-
-If the special value ``"default"`` is used when you define the name,
-you do not have to specify a field name at query time.
-The effect is that the query can be simplified:
-
-.. code-block:: javascript
-
-    query=red
-
-The second parameter is the data to be indexed. Keep the following information
-in mind when you index your data:
-
-- This data must be a string, number, or boolean. Other types will cause
-  the index function call to throw an error.
-
-- If an error is thrown when running your function, for this reason or others,
-  the document will not be added to that search index.
-
-The third, optional, parameter is a JavaScript object with the following fields:
-
-*Index function (optional parameter)*
-
-* **boost** - A number that specifies the relevance in search results. Content
-  that is indexed with a boost value greater than 1 is more relevant than
-  content indexed without a boost value; content with a boost value less than
-  1 is less relevant. The value is a positive floating-point number. Default
-  is 1 (no boosting).
-
-* **facet** - Creates a faceted index. See :ref:`Faceting <ddoc/search/faceting>`.
-  Values are ``true`` or ``false``. Default is ``false``.
-
-* **index** - Whether the data is indexed, and if so, how. If set to ``false``, the data
-  cannot be used for searches, but can still be retrieved from the index if ``store`` is
-  set to ``true``. See :ref:`Analyzers <ddoc/search/analyzers>`.
-  Values are ``true`` or ``false``. Default is ``true``.
-
-* **store** - If ``true``, the value is returned in the search result; otherwise,
-  the value is not returned. Values are ``true`` or ``false``. Default is ``false``.
-
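-For example, a hypothetical call that both boosts and stores a ``title``
-field would combine these options:
-
-.. code-block:: javascript
-
-    // matches on "title" rank higher, and the field's value is
-    // returned in the search results
-    index("title", doc.title, {"boost": 2.0, "store": true});
-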
-.. note::
-
-    If you do not set the ``store`` parameter,
-    the index data results for the document are not returned in response to a query.
-
-*Example search index function:*
-
-.. code-block:: javascript
-
-    function(doc) {
-        index("default", doc._id);
-        if (doc.min_length) {
-            index("min_length", doc.min_length, {"store": true});
-        }
-        if (doc.diet) {
-            index("diet", doc.diet, {"store": true});
-        }
-        if (doc.latin_name) {
-            index("latin_name", doc.latin_name, {"store": true});
-        }
-        if (doc.class) {
-            index("class", doc.class, {"store": true});
-        }
-    }
-
-.. _ddoc/search/index_guard_clauses:
-
-Index guard clauses
--------------------
-
-The ``index`` function requires the name of the data field to index as the second
-parameter. However, if that data field does not exist for the document, an error occurs.
-The solution is to use an appropriate 'guard clause' that checks if the field exists, and
-contains the expected type of data, *before* any attempt to create the corresponding
-index.
-
-*Example of checking whether the index data field exists:*
-
-.. code-block:: javascript
-
-    if (doc.min_length) {
-        index("min_length", doc.min_length, {"store": true});
-    }
-
-You might use the JavaScript ``typeof`` operator to implement the guard clause
-test. If the field exists *and* has the expected type, the correct type name is
-returned, so the guard clause test succeeds and it is safe to use the index
-function. If the field does *not* exist, you would not get back the expected
-type of the field, so you would not attempt to index the field.
-
-JavaScript treats a tested value as false if it is one of the following:
-
-* ``undefined``
-* ``null``
-* The number ``+0``
-* The number ``-0``
-* ``NaN`` (not a number)
-* ``""`` (the empty string)
-
-*Using a guard clause to check whether the required data field exists, and holds a number,
-before an attempt to index:*
-
-.. code-block:: javascript
-
-    if (typeof(doc.min_length) === 'number') {
-        index("min_length", doc.min_length, {"store": true});
-    }
-
-Use a generic guard clause test to ensure that the type of the candidate data field is
-defined.
-
-*Example of a 'generic' guard clause:*
-
-.. code-block:: javascript
-
-    if (typeof(doc.min_length) !== 'undefined') {
-        // The field exists, and does have a type, so we can proceed to index using it.
-        ...
-    }
-
-.. _ddoc/search/analyzers:
-
-Analyzers
-=========
-
-Analyzers are settings that define how to recognize terms within text. Analyzers can be
-helpful if you need to
-:ref:`index multiple languages <ddoc/search/language-specific-analyzers>`.
-
-Here's the list of generic analyzers, and their descriptions, that are supported by
-search:
-
-- ``classic`` - The standard Lucene analyzer, circa release 3.1.
-- ``email`` - Like the ``standard`` analyzer, but tries harder to
-  match an email address as a complete token.
-- ``keyword`` - Input is not tokenized at all.
-- ``simple`` - Divides text at non-letters.
-- ``standard`` - The default analyzer. It implements the Word Break
-  rules from the `Unicode Text Segmentation algorithm <http://www.unicode.org/reports/tr29/>`_.
-- ``whitespace`` - Divides text at white space boundaries.
-
-*Example analyzer document:*
-
-.. code-block:: javascript
-
-    {
-        "_id": "_design/analyzer_example",
-        "indexes": {
-            "INDEX_NAME": {
-                "index": "function (doc) { ... }",
-                "analyzer": "$ANALYZER_NAME"
-            }
-        }
-    }
-
-.. _ddoc/search/language-specific-analyzers:
-
-Language-specific analyzers
----------------------------
-
-These analyzers omit common words in the specific language,
-and many also `remove prefixes and suffixes <http://en.wikipedia.org/wiki/Stemming>`_.
-The name of the language is also the name of the analyzer. See
-`package org.apache.lucene.analysis <https://lucene.apache.org/core/4_6_1/core/org/apache/lucene/analysis/package-summary.html>`_
-for more information.
-
-+----------------+----------------------------------------------------------+
-| Language       | Analyzer                                                 |
-+================+==========================================================+
-| ``arabic``     | org.apache.lucene.analysis.ar.ArabicAnalyzer             |
-+----------------+----------------------------------------------------------+
-| ``armenian``   | org.apache.lucene.analysis.hy.ArmenianAnalyzer           |
-+----------------+----------------------------------------------------------+
-| ``basque``     | org.apache.lucene.analysis.eu.BasqueAnalyzer             |
-+----------------+----------------------------------------------------------+
-| ``bulgarian``  | org.apache.lucene.analysis.bg.BulgarianAnalyzer          |
-+----------------+----------------------------------------------------------+
-| ``brazilian``  | org.apache.lucene.analysis.br.BrazilianAnalyzer          |
-+----------------+----------------------------------------------------------+
-| ``catalan``    | org.apache.lucene.analysis.ca.CatalanAnalyzer            |
-+----------------+----------------------------------------------------------+
-| ``cjk``        | org.apache.lucene.analysis.cjk.CJKAnalyzer               |
-+----------------+----------------------------------------------------------+
-| ``chinese``    | org.apache.lucene.analysis.cn.smart.SmartChineseAnalyzer |
-+----------------+----------------------------------------------------------+
-| ``czech``      | org.apache.lucene.analysis.cz.CzechAnalyzer              |
-+----------------+----------------------------------------------------------+
-| ``danish``     | org.apache.lucene.analysis.da.DanishAnalyzer             |
-+----------------+----------------------------------------------------------+
-| ``dutch``      | org.apache.lucene.analysis.nl.DutchAnalyzer              |
-+----------------+----------------------------------------------------------+
-| ``english``    | org.apache.lucene.analysis.en.EnglishAnalyzer            |
-+----------------+----------------------------------------------------------+
-| ``finnish``    | org.apache.lucene.analysis.fi.FinnishAnalyzer            |
-+----------------+----------------------------------------------------------+
-| ``french``     | org.apache.lucene.analysis.fr.FrenchAnalyzer             |
-+----------------+----------------------------------------------------------+
-| ``german``     | org.apache.lucene.analysis.de.GermanAnalyzer             |
-+----------------+----------------------------------------------------------+
-| ``greek``      | org.apache.lucene.analysis.el.GreekAnalyzer              |
-+----------------+----------------------------------------------------------+
-| ``galician``   | org.apache.lucene.analysis.gl.GalicianAnalyzer           |
-+----------------+----------------------------------------------------------+
-| ``hindi``      | org.apache.lucene.analysis.hi.HindiAnalyzer              |
-+----------------+----------------------------------------------------------+
-| ``hungarian``  | org.apache.lucene.analysis.hu.HungarianAnalyzer          |
-+----------------+----------------------------------------------------------+
-| ``indonesian`` | org.apache.lucene.analysis.id.IndonesianAnalyzer         |
-+----------------+----------------------------------------------------------+
-| ``irish``      | org.apache.lucene.analysis.ga.IrishAnalyzer              |
-+----------------+----------------------------------------------------------+
-| ``italian``    | org.apache.lucene.analysis.it.ItalianAnalyzer            |
-+----------------+----------------------------------------------------------+
-| ``japanese``   | org.apache.lucene.analysis.ja.JapaneseAnalyzer           |
-+----------------+----------------------------------------------------------+
-| ``japanese``   | org.apache.lucene.analysis.ja.JapaneseTokenizer          |
-+----------------+----------------------------------------------------------+
-| ``latvian``    | org.apache.lucene.analysis.lv.LatvianAnalyzer            |
-+----------------+----------------------------------------------------------+
-| ``norwegian``  | org.apache.lucene.analysis.no.NorwegianAnalyzer          |
-+----------------+----------------------------------------------------------+
-| ``persian``    | org.apache.lucene.analysis.fa.PersianAnalyzer            |
-+----------------+----------------------------------------------------------+
-| ``polish``     | org.apache.lucene.analysis.pl.PolishAnalyzer             |
-+----------------+----------------------------------------------------------+
-| ``portuguese`` | org.apache.lucene.analysis.pt.PortugueseAnalyzer         |
-+----------------+----------------------------------------------------------+
-| ``romanian``   | org.apache.lucene.analysis.ro.RomanianAnalyzer           |
-+----------------+----------------------------------------------------------+
-| ``russian``    | org.apache.lucene.analysis.ru.RussianAnalyzer            |
-+----------------+----------------------------------------------------------+
-| ``spanish``    | org.apache.lucene.analysis.es.SpanishAnalyzer            |
-+----------------+----------------------------------------------------------+
-| ``swedish``    | org.apache.lucene.analysis.sv.SwedishAnalyzer            |
-+----------------+----------------------------------------------------------+
-| ``thai``       | org.apache.lucene.analysis.th.ThaiAnalyzer               |
-+----------------+----------------------------------------------------------+
-| ``turkish``    | org.apache.lucene.analysis.tr.TurkishAnalyzer            |
-+----------------+----------------------------------------------------------+
-
-.. note::
-
-    The ``japanese`` analyzer, org.apache.lucene.analysis.ja.JapaneseTokenizer,
-    includes DEFAULT_MODE and defaultStopTags.
-
-.. note::
-
-    Language-specific analyzers are optimized for the specified language. You cannot
-    combine a generic analyzer with a language-specific analyzer. Instead, you might use a
-    :ref:`per field analyzer <ddoc/search/per-field-analyzers>` to select different
-    analyzers for different fields within the documents.
-
-.. _ddoc/search/per-field-analyzers:
-
-Per-field analyzers
--------------------
-
-The ``perfield`` analyzer configures multiple analyzers for different fields.
-
-*Example of defining different analyzers for different fields:*
-
-.. code-block:: javascript
-
-    {
-        "_id": "_design/analyzer_example",
-        "indexes": {
-            "INDEX_NAME": {
-                "analyzer": {
-                    "name": "perfield",
-                    "default": "english",
-                    "fields": {
-                        "spanish": "spanish",
-                        "german": "german"
-                    }
-                },
-                "index": "function (doc) { ... }"
-            }
-        }
-    }
-
-Stop words
-----------
-
-Stop words are words that do not get indexed. You define them within a design document by
-turning the analyzer string into an object.
-
-.. note::
-
-    The ``keyword``, ``simple``, and ``whitespace`` analyzers do not support stop words.
-
-The default stop words for the ``standard`` analyzer are included below:
-
-.. code-block:: javascript
-
-    "a", "an", "and", "are", "as", "at", "be", "but", "by", "for", "if",
-    "in", "into", "is", "it", "no", "not", "of", "on", "or", "such",
-    "that", "the", "their", "then", "there", "these", "they", "this",
-    "to", "was", "will", "with"
-
-*Example of defining non-indexed ('stop') words:*
-
-.. code-block:: javascript
-
-    {
-        "_id": "_design/stop_words_example",
-        "indexes": {
-            "INDEX_NAME": {
-                "analyzer": {
-                    "name": "portuguese",
-                    "stopwords": [
-                        "foo",
-                        "bar",
-                        "baz"
-                    ]
-                },
-                "index": "function (doc) { ... }"
-            }
-        }
-    }
-
-Testing analyzer tokenization
------------------------------
-
-You can test the results of analyzer tokenization by posting sample data to the
-``_search_analyze`` endpoint.
-
-*Example of using HTTP to test the keyword analyzer:*
-
-.. code-block:: http
-
-    POST /_search_analyze HTTP/1.1
-    Content-Type: application/json
-
-    {"analyzer":"keyword", "text":"ablanks@renovations.com"}
-
-*Example of using the command line to test the keyword analyzer:*
-
-.. code-block:: sh
-
-    curl 'https://$HOST:5984/_search_analyze' -H 'Content-Type: application/json' \
-        -d '{"analyzer":"keyword", "text":"ablanks@renovations.com"}'
-
-*Result of testing the keyword analyzer:*
-
-.. code-block:: javascript
-
-    {
-        "tokens": [
-            "ablanks@renovations.com"
-        ]
-    }
-
-*Example of using HTTP to test the standard analyzer:*
-
-.. code-block:: http
-
-    POST /_search_analyze HTTP/1.1
-    Content-Type: application/json
-
-    {"analyzer":"standard", "text":"ablanks@renovations.com"}
-
-*Example of using the command line to test the standard analyzer:*
-
-.. code-block:: sh
-
-    curl 'https://$HOST:5984/_search_analyze' -H 'Content-Type: application/json' \
-        -d '{"analyzer":"standard", "text":"ablanks@renovations.com"}'
-
-*Result of testing the standard analyzer:*
-
-.. code-block:: javascript
-
-    {
-        "tokens": [
-            "ablanks",
-            "renovations.com"
-        ]
-    }
-
-Queries
-=======
-
-After you create a search index, you can query it.
-
-- Issue a partition query using:
-  ``GET /$DATABASE/_partition/$PARTITION_KEY/_design/$DDOC/_search/$INDEX_NAME``
-- Issue a global query using:
-  ``GET /$DATABASE/_design/$DDOC/_search/$INDEX_NAME``
-
-Specify your search by using the ``query`` parameter.
-
-*Example of using HTTP to query a partitioned index:*
-
-.. code-block:: http
-
-    GET /$DATABASE/_partition/$PARTITION_KEY/_design/$DDOC/_search/$INDEX_NAME?include_docs=true&query="*:*"&limit=1 HTTP/1.1
-    Content-Type: application/json
-
-*Example of using HTTP to query a global index:*
-
-.. code-block:: http
-
-    GET /$DATABASE/_design/$DDOC/_search/$INDEX_NAME?include_docs=true&query="*:*"&limit=1 HTTP/1.1
-    Content-Type: application/json
-
-*Example of using the command line to query a partitioned index:*
-
-.. code-block:: sh
-
-    curl 'https://$HOST:5984/$DATABASE/_partition/$PARTITION_KEY/_design/$DDOC/_search/$INDEX_NAME?include_docs=true&query="*:*"&limit=1'
-
-*Example of using the command line to query a global index:*
-
-.. code-block:: sh
-
-    curl 'https://$HOST:5984/$DATABASE/_design/$DDOC/_search/$INDEX_NAME?include_docs=true&query="*:*"&limit=1'
-
-.. _ddoc/search/query_parameters:
-
-Query Parameters
-----------------
-
-A full list of query parameters can be found in the
-:ref:`API Reference <api/ddoc/search>`.
-
-You must enable :ref:`faceting <ddoc/search/faceting>` before you can use the
-following parameters:
-
-- ``counts``
-- ``drilldown``
-- ``ranges``
-
-.. note::
-    Do not combine the ``bookmark`` and ``stale`` options. These options
-    constrain the choice of shard replicas to use for the response. When used
-    together, the options might cause problems when contact is attempted
-    with replicas that are slow or not available.
-
-Relevance
----------
-
-When a query returns more than one result, the results can be sorted. By default, the
-sort order is determined by 'relevance'.
-
-Relevance is measured according to
-`Apache Lucene Scoring <https://lucene.apache.org/core/3_6_0/scoring.html>`_.
-As an example, if you search a simple database for the word ``example``, two documents
-might contain the word. If one document mentions the word ``example`` 10 times, but the
-second document mentions it only twice, then the first document is considered to be more
-'relevant'.
-
-If you do not provide a ``sort`` parameter, relevance is used by default. The highest
-scoring matches are returned first.
-
-If you provide a ``sort`` parameter, then matches are returned in that order, ignoring
-relevance.
-
-If you want to use a ``sort`` parameter, and also include ordering by relevance in your
-search results, use the special fields ``-<score>`` or ``<score>`` within the ``sort``
-parameter.
-
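As a sketch, a query body that sorts primarily on a hypothetical ``year`` field and
breaks ties by descending relevance could pass both in the ``sort`` parameter
(``year`` is an illustrative field name; ``<number>`` is its type annotation):

```javascript
{
    "q": "class:mammal",
    "sort": ["year<number>", "-<score>"]
}
```
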
-POSTing search queries
-----------------------
-
-Instead of using the ``GET`` HTTP method, you can also use ``POST``. The main advantage of
-``POST`` queries is that they can have a request body, so you can specify the request as a
-JSON object. Each parameter in the query string of a ``GET`` request corresponds to a
-field in the JSON object in the request body.
-
-*Example of using HTTP to POST a search request:*
-
-.. code-block:: http
-
-    POST /db/_design/ddoc/_search/searchname HTTP/1.1
-    Content-Type: application/json
-
-*Example of using the command line to POST a search request:*
-
-.. code-block:: sh
-
-    curl 'https://$HOST:5984/db/_design/ddoc/_search/searchname' -X POST -H 'Content-Type: application/json' -d @search.json
-
-*Example JSON document that contains a search request:*
-
-.. code-block:: javascript
-
-    {
-        "q": "index:my query",
-        "sort": "foo",
-        "limit": 3
-    }
-
-Query syntax
-============
-
-The CouchDB search query syntax is based on the
-`Lucene syntax <http://lucene.apache.org/core/4_3_0/queryparser/org/apache/lucene/queryparser/classic/package-summary.html#Overview>`_.
-Search queries take the form of ``name:value`` unless the name is omitted, in which case
-they use the default field, as demonstrated in the following examples:
-
-*Example search query expressions:*
-
-.. code-block:: text
-
-    // Birds
-    class:bird
-
-.. code-block:: text
-
-    // Animals that begin with the letter "l"
-    l*
-
-.. code-block:: text
-
-    // Carnivorous birds
-    class:bird AND diet:carnivore
-
-.. code-block:: text
-
-    // Herbivores that start with letter "l"
-    l* AND diet:herbivore
-
-.. code-block:: text
-
-    // Medium-sized herbivores
-    min_length:[1 TO 3] AND diet:herbivore
-
-.. code-block:: text
-
-    // Herbivores that are 2m long or less
-    diet:herbivore AND min_length:[-Infinity TO 2]
-
-.. code-block:: text
-
-    // Mammals that are at least 1.5m long
-    class:mammal AND min_length:[1.5 TO Infinity]
-
-.. code-block:: text
-
-    // Find "Meles meles"
-    latin_name:"Meles meles"
-
-.. code-block:: text
-
-    // Mammals that are herbivores or omnivores
-    diet:(herbivore OR omnivore) AND class:mammal
-
-.. code-block:: text
-
-    // Return all results
-    *:*
-
-Queries over multiple fields can be logically combined, and groups and fields can be
-further grouped. The available logical operators are case-sensitive and are ``AND``,
-``+``, ``OR``, ``NOT`` and ``-``. Range queries can run over strings or numbers.
-
-If you want a fuzzy search, you can run a query with ``~`` to find terms like the search
-term. For instance, ``look~`` finds the terms ``book`` and ``took``.
-
-.. note::
-    If the lower and upper bounds of a range query are both strings that
-    contain only numeric digits, the bounds are treated as numbers not as
-    strings. For example, if you search by using the query
-    ``mod_date:["20170101" TO "20171231"]``, the results include documents
-    for which ``mod_date`` is between the numeric values 20170101 and
-    20171231, not between the strings "20170101" and "20171231".
-
-You can alter the importance of a search term by adding ``^`` and a positive number. This
-alteration makes matches containing the term more or less relevant, proportional to the
-power of the boost value. The default value is 1, which means no increase or decrease in
-the strength of the match. A decimal value between 0 and 1 reduces importance, making
-the match weaker. A value greater than 1 increases importance, making the match
-stronger.
-
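For instance, the following sketch (field names illustrative) makes matches on the
``name`` field count more heavily than matches on the ``description`` field:

```text
// Boost name matches, weaken description matches
name:cat^4 OR description:cat^0.5
```
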
-Wildcard searches are supported, for both single (``?``) and multiple (``*``) character
-searches. For example, ``dat?`` would match ``date`` and ``data``, whereas ``dat*`` would
-match ``date``, ``data``, ``database``, and ``dates``. A wildcard cannot be the first
-character of a search term.
-
-Use ``*:*`` to return all results.
-
-If the search query does *not* specify the ``"group_field"`` argument, the response
-contains a bookmark. If this bookmark is later provided as a URL parameter, the response
-skips the rows that were seen already, making it quick and easy to get the next set of
-results.
-
-.. note::
-    The response never includes a bookmark if the ``"group_field"``
-    parameter is included in the search query.
-    See :ref:`group_field parameter <api/ddoc/search>`.
-
-.. note::
-    The ``group_field``, ``group_limit``, and ``group_sort`` options
-    are only available when making global queries.
-
-The following characters require escaping if you want to search on them:
-
-.. code-block:: sh
-
-    + - && || ! ( ) { } [ ] ^ " ~ * ? : \ /
-
-To escape one of these characters, use a preceding backslash character (``\``).
-
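As an illustrative sketch (not part of CouchDB itself), a small helper can escape each
reserved character with a backslash before user input is embedded in a query string:

```javascript
// Escape Lucene query-syntax characters in user-supplied input.
// "&&" and "||" are two-character operators, but escaping each
// character individually is sufficient.
function escapeLucene(input) {
    return input.replace(/([+\-&|!(){}\[\]^"~*?:\\\/])/g, "\\$1");
}

console.log(escapeLucene("what? 2+2")); // prints: what\? 2\+2
```
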
-The response to a search query contains an ``order`` field for each of the results. The
-``order`` field is an array where the first element is the field or fields that are
-specified in the ``sort`` parameter. See the
-:ref:`sort parameter <api/ddoc/search>`. If no ``sort`` parameter is included
-in the query, then the ``order`` field contains the `Lucene relevance score
-<https://lucene.apache.org/core/3_6_0/scoring.html>`_. If you use the 'sort by distance'
-feature as described in :ref:`geographical searches <ddoc/search/geographical_searches>`,
-then the first element is the distance from a point. The distance is measured by using
-either kilometers or miles.
-
-.. note::
-    The second element in the order array can be ignored.
-    It is used for troubleshooting purposes only.
-
-.. _ddoc/search/faceting:
-
-Faceting
---------
-
-CouchDB Search also supports faceted searching, enabling discovery of aggregate
-information about matches quickly and easily. You can match all documents by using the
-special ``?q=*:*`` query syntax, and use the returned facets to refine your query. To
-indicate that a field must be indexed for faceted queries, set ``{"facet": true}`` in its
-options.
-
-*Example of search query, specifying that faceted search is enabled:*
-
-.. code-block:: javascript
-
-    function(doc) {
-        index("type", doc.type, {"facet": true});
-        index("price", doc.price, {"facet": true});
-    }
-
-To use facets, all the documents in the index must include all the fields that have
-faceting enabled. If a document is missing one of these fields, you receive a
-``bad_request`` error with the reason "The ``field_name`` does not exist". If not every
-document contains all the facet fields, either create a separate index for each field,
-or index only those documents that contain them all. You can verify that the fields
-exist in each document with a single ``if`` statement.
-
-*Example if statement to verify that the required fields exist in each document:*
-
-.. code-block:: javascript
-
-    if (typeof doc.town == "string" && typeof doc.name == "string") {
-        index("town", doc.town, {facet: true});
-        index("name", doc.name, {facet: true});
-    }
-
-Counts
-------
-
-.. note::
-    The ``counts`` option is only available when making global queries.
-
-The ``counts`` facet syntax takes a list of fields, and returns the number of query
-results for each unique value of each named field.
-
-.. note::
-    The ``count`` operation works only if the indexed values are strings.
-    The indexed values cannot be mixed types. For example,
-    if 100 strings are indexed, and one number,
-    then the index cannot be used for ``count`` operations.
-    You can check the type by using the ``typeof`` operator, and convert it
-    by using the ``parseInt``,
-    ``parseFloat``, or ``.toString()`` functions.
-
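As an illustrative sketch, the conversion that the note describes can be wrapped in a
small helper before calling ``index`` (the function name here is hypothetical, not a
CouchDB API):

```javascript
// Normalise a value to a string so a counts facet sees a single type.
function toFacetString(value) {
    if (typeof value === "string") {
        return value;
    }
    // numbers and other types are converted explicitly
    return value.toString();
}

console.log(toFacetString(42));     // prints: 42
console.log(toFacetString("sofa")); // prints: sofa
```
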
-*Example of a query using the counts facet syntax:*
-
-.. code-block:: http
-
-    ?q=*:*&counts=["type"]
-
-*Example response after using of the counts facet syntax:*
-
-.. code-block:: javascript
-
-    {
-        "total_rows":100000,
-        "bookmark":"g...",
-        "rows":[...],
-        "counts":{
-            "type":{
-                "sofa": 10,
-                "chair": 100,
-                "lamp": 97
-            }
-        }
-    }
-
-Drilldown
--------------
-
-.. note::
-    The ``drilldown`` option is only available when making global queries.
-
-You can restrict results to documents with a dimension equal to the specified label.
-Restrict the results by adding ``drilldown=["dimension","label"]`` to a search query. You
-can include multiple ``drilldown`` parameters to restrict results along multiple
-dimensions.
-
-.. code-block:: http
-
-    GET /things/_design/inventory/_search/fruits?q=*:*&drilldown=["state","old"]&drilldown=["item","apple"]&include_docs=true HTTP/1.1
-
-For better language interoperability, you can achieve the same by supplying a list of lists:
-
-.. code-block:: http
-
-    GET /things/_design/inventory/_search/fruits?q=*:*&drilldown=[["state","old"],["item","apple"]]&include_docs=true HTTP/1.1
-
-You can also supply a list of lists for ``drilldown`` in bodies of POST requests.
-
-Note that multiple values for a single key in a ``drilldown`` parameter are combined
-with an ``OR`` relation, while multiple keys are combined with an ``AND`` relation.
-
-Using a ``drilldown`` parameter is similar to using ``key:value`` in the ``q`` parameter,
-but the ``drilldown`` parameter returns values that the analyzer might skip.
-
-For example, if the analyzer did not index a stop word like ``"a"``, using ``drilldown``
-returns it when you specify ``drilldown=["key","a"]``.
-
-Ranges
-------
-
-.. note::
-    The ``ranges`` option is only available when making global queries.
-
-The ``range`` facet syntax reuses the standard Lucene syntax for ranges to return counts
-of results that fit into each specified category. Inclusive range queries are denoted by
-brackets (``[``, ``]``). Exclusive range queries are denoted by curly brackets (``{``,
-``}``).
-
-.. note::
-    The ``range`` operation works only if the indexed values are numbers. The indexed
-    values cannot be mixed types. For example, if 100 strings are indexed, and one number,
-    then the index cannot be used for ``range`` operations. You can check the type by
-    using the ``typeof`` operator, and convert it by using the ``parseInt``,
-    ``parseFloat``, or ``.toString()`` functions.
-
-*Example of a request that uses faceted search for matching ranges:*
-
-.. code-block:: http
-
-    ?q=*:*&ranges={"price":{"cheap":"[0 TO 100]","expensive":"{100 TO Infinity}"}}
-
-*Example results after a ranges check on a faceted search:*
-
-.. code-block:: javascript
-
-    {
-        "total_rows":100000,
-        "bookmark":"g...",
-        "rows":[...],
-        "ranges": {
-            "price": {
-                "expensive": 278682,
-                "cheap": 257023
-            }
-        }
-    }
-
-.. _ddoc/search/geographical_searches:
-
-Geographical searches
-=====================
-
-In addition to searching by the content of textual fields, you can also sort your results
-by their distance from a geographic coordinate using Lucene's built-in geospatial
-capabilities.
-
-To sort your results in this way, you must index two numeric fields, representing the
-longitude and latitude.
-
-You can then query by using the special ``<distance...>`` sort field, which takes five
-parameters:
-
-- Longitude field name: The name of your longitude field (``mylon`` in the example).
-
-- Latitude field name: The name of your latitude field (``mylat`` in the example).
-
-- Longitude of origin: The longitude of the place you want to sort by distance from.
-
-- Latitude of origin: The latitude of the place you want to sort by distance from.
-
-- Units: The units to use: ``km`` for kilometers or ``mi`` for miles.
-  The distance is returned in the order field.
-
-You can combine sorting by distance with any other search query, such as range searches on
-the latitude and longitude, or queries that involve non-geographical information.
-
-That way, you can search in a bounding box, and narrow down the search with extra
-criteria.
-
-*Example geographical data:*
-
-.. code-block:: javascript
-
-    {
-        "name":"Aberdeen, Scotland",
-        "lat":57.15,
-        "lon":-2.15,
-        "type":"city"
-    }
-
-*Example of a design document that contains a search index for the geographic data:*
-
-.. code-block:: javascript
-
-    function(doc) {
-        if (doc.type && doc.type == 'city') {
-            index('city', doc.name, {'store': true});
-            index('lat', doc.lat, {'store': true});
-            index('lon', doc.lon, {'store': true});
-        }
-    }
-
-*An example of using HTTP for a query that sorts cities in the northern hemisphere by
-their distance to New York:*
-
-.. code-block:: http
-
-    GET /examples/_design/cities-designdoc/_search/cities?q=lat:[0+TO+90]&sort="<distance,lon,lat,-74.0059,40.7127,km>" HTTP/1.1
-
-*An example of using the command line for a query that sorts cities in the northern
-hemisphere by their distance to New York:*
-
-.. code-block:: sh
-
-    curl 'https://$HOST:5984/examples/_design/cities-designdoc/_search/cities?q=lat:[0+TO+90]&sort="<distance,lon,lat,-74.0059,40.7127,km>"'
-
-*Example (abbreviated) response, containing a list of northern hemisphere
-cities sorted by distance to New York:*
-
-.. code-block:: javascript
-
-    {
-        "total_rows": 205,
-        "bookmark": "g1A...XIU",
-        "rows": [
-            {
-                "id": "city180",
-                "order": [
-                    8.530665755719783,
-                    18
-                ],
-                "fields": {
-                    "city": "New York, N.Y.",
-                    "lat": 40.78333333333333,
-                    "lon": -73.96666666666667
-                }
-            },
-            {
-                "id": "city177",
-                "order": [
-                    13.756343205985946,
-                    17
-                ],
-                "fields": {
-                    "city": "Newark, N.J.",
-                    "lat": 40.733333333333334,
-                    "lon": -74.16666666666667
-                }
-            },
-            {
-                "id": "city178",
-                "order": [
-                    113.53603438866077,
-                    26
-                ],
-                "fields": {
-                    "city": "New Haven, Conn.",
-                    "lat": 41.31666666666667,
-                    "lon": -72.91666666666667
-                }
-            }
-        ]
-    }
-
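The first element of each ``order`` array can be reproduced with a standard haversine
computation. A plain JavaScript sketch (not a CouchDB API; the mean Earth radius of
6371 km is an assumption and may differ slightly from the constant Lucene uses):

```javascript
// Haversine great-circle distance in kilometres.
function haversineKm(lat1, lon1, lat2, lon2) {
    const R = 6371; // assumed mean Earth radius in km
    const toRad = (deg) => deg * Math.PI / 180;
    const dLat = toRad(lat2 - lat1);
    const dLon = toRad(lon2 - lon1);
    const a = Math.sin(dLat / 2) ** 2 +
        Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) *
        Math.sin(dLon / 2) ** 2;
    return 2 * R * Math.asin(Math.sqrt(a));
}

// Distance from the query origin (New York) to the first result:
console.log(haversineKm(40.7127, -74.0059, 40.78333333333333, -73.96666666666667));
```

The printed distance is close to the ``8.53`` km value in the first ``order`` array
above.
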
-Highlighting search terms
-=========================
-
-Sometimes it is useful to get the context in which a search term was mentioned so that
-you can display highlighted excerpts to a user.
-
-To get these excerpts, add the ``highlight_fields`` parameter to the search query,
-specifying the field names for which you would like excerpts with the search term
-highlighted.
-
-By default, the search term is placed in ``<em>`` tags to highlight it, but the highlight
-can be overridden by using the ``highlight_pre_tag`` and ``highlight_post_tag``
-parameters.
-
-The length of the fragments is 100 characters by default. A different length can be
-requested with the ``highlight_size`` parameter.
-
-The ``highlight_number`` parameter controls the number of fragments that are returned,
-and defaults to 1.
-
-In the response, a ``highlights`` field is added, with one subfield per field name.
-
-For each field, you receive an array of fragments with the search term highlighted.
-
-.. note::
-    For highlighting to work, store the field in the index by
-    using the ``store: true`` option.
-
-*Example of using HTTP to search with highlighting enabled:*
-
-.. code-block:: http
-
-    GET /movies/_design/searches/_search/movies?q=movie_name:Azazel&highlight_fields=["movie_name"]&highlight_pre_tag="**"&highlight_post_tag="**"&highlight_size=30&highlight_number=2 HTTP/1.1
-    Authorization: ...
-
-*Example of using the command line to search with
-highlighting enabled:*
-
-.. code-block:: sh
-
-    curl "https://$HOST:5984/movies/_design/searches/_search/movies?q=movie_name:Azazel&highlight_fields=\[\"movie_name\"\]&highlight_pre_tag=\"**\"&highlight_post_tag=\"**\"&highlight_size=30&highlight_number=2"
-
-*Example of highlighted search results:*
-
-.. code-block:: javascript
-
-    {
-        "highlights": {
-            "movie_name": [
-                " on the Azazel Orient Express",
-                " Azazel manuals, you"
-            ]
-        }
-    }
diff --git a/src/ddocs/views/collation.rst b/src/ddocs/views/collation.rst
deleted file mode 100644
index f168275..0000000
--- a/src/ddocs/views/collation.rst
+++ /dev/null
@@ -1,264 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _views/collation:
-
-===============
-Views Collation
-===============
-
-Basics
-======
-
-View functions specify a key and a value to be returned for each row. CouchDB
-collates the view rows by this key. In the following example, the ``LastName``
-property serves as the key, thus the result will be sorted by ``LastName``:
-
-.. code-block:: javascript
-
-    function(doc) {
-        if (doc.Type == "customer") {
-            emit(doc.LastName, {FirstName: doc.FirstName, Address: doc.Address});
-        }
-    }
-
-CouchDB allows arbitrary JSON structures to be used as keys. You can use JSON
-arrays as keys for fine-grained control over sorting and grouping.
-
-Examples
-========
-
-The following clever trick would return both customer and order documents.
-The key is composed of a customer ``_id`` and a sorting token. Because the key
-for order documents begins with the ``_id`` of a customer document, all the
-orders will be sorted by customer. Because the sorting token for customers is
-lower than the token for orders, the customer document will come before the
-associated orders. The values 0 and 1 for the sorting token are arbitrary.
-
-.. code-block:: javascript
-
-    function(doc) {
-        if (doc.Type == "customer") {
-            emit([doc._id, 0], null);
-        } else if (doc.Type == "order") {
-            emit([doc.customer_id, 1], null);
-        }
-    }
-
-To list a specific customer with ``_id`` XYZ, and all of that customer's orders,
-limit the startkey and endkey ranges to cover only documents for that customer's
-``_id``::
-
-    startkey=["XYZ"]&endkey=["XYZ", {}]
-
-It is not recommended to emit the document itself in the view. Instead, to
-include the bodies of the documents when requesting the view, request the view
-with ``?include_docs=true``.
-
-Sorting by Dates
-================
-
-It may be convenient to store date attributes in a human-readable format
-(i.e. as a `string`), but still sort by date. This can be done by converting
-the date to a `number` in the :js:func:`emit` function. For example, given
-a document with a created_at attribute of ``'Wed Jul 23 16:29:21 +0100 2013'``,
-the following emit function would sort by date:
-
-.. code-block:: javascript
-
-    emit(new Date(doc.created_at).getTime(), null);
-
-Alternatively, if you use a date format which sorts lexicographically,
-such as ``"2013/06/09 13:52:11 +0000"`` you can just
-
-.. code-block:: javascript
-
-    emit(doc.created_at, null);
-
-and avoid the conversion. As a bonus, this date format is compatible with the
-JavaScript date parser, so you can use ``new Date(doc.created_at)`` in your
-client side JavaScript to make date sorting easy in the browser.
-
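The equivalence can be checked directly: for the ``"YYYY/MM/DD hh:mm:ss"`` format,
plain string order and date order agree. A small sketch using the example dates:

```javascript
// Lexicographic order of "YYYY/MM/DD hh:mm:ss" strings matches
// chronological order, so the plain emit sorts correctly.
const dates = [
    "2009/02/17 21:13:39",
    "2009/01/15 15:52:20",
    "2009/01/30 18:04:11"
];
const lexicographic = [...dates].sort();
const chronological = [...dates].sort((a, b) => new Date(a) - new Date(b));
console.log(lexicographic.join() === chronological.join()); // prints: true
```
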
-String Ranges
-=============
-
-If you need start and end keys that encompass every string with a given prefix,
-it is better to use a high value Unicode character, than to use a ``'ZZZZ'``
-suffix.
-
-That is, rather than::
-
-    startkey="abc"&endkey="abcZZZZZZZZZ"
-
-You should use::
-
-    startkey="abc"&endkey="abc\ufff0"
-
-Collation Specification
-=======================
-
-This section is based on the view_collation function in `view_collation.js`_:
-
-.. _view_collation.js: https://github.com/apache/couchdb/blob/main/test/javascript/tests/view_collation.js
-
-.. code-block:: javascript
-
-    // special values sort before all other types
-    null
-    false
-    true
-
-    // then numbers
-    1
-    2
-    3.0
-    4
-
-    // then text, case sensitive
-    "a"
-    "A"
-    "aa"
-    "b"
-    "B"
-    "ba"
-    "bb"
-
-    // then arrays. compared element by element until different.
-    // Longer arrays sort after their prefixes
-    ["a"]
-    ["b"]
-    ["b","c"]
-    ["b","c", "a"]
-    ["b","d"]
-    ["b","d", "e"]
-
-    // then object, compares each key value in the list until different.
-    // larger objects sort after their subset objects.
-    {a:1}
-    {a:2}
-    {b:1}
-    {b:2}
-    {b:2, a:1} // Member order does matter for collation.
-               // CouchDB preserves member order
-               // but doesn't require that clients will.
-               // this test might fail if used with a js engine
-               // that doesn't preserve order
-    {b:2, c:2}
-
-Comparison of strings is done using `ICU`_ which implements the
-`Unicode Collation Algorithm`_, giving a dictionary sorting of keys.
-This can give surprising results if you were expecting ASCII ordering.
-Note that:
-
-- All symbols sort before numbers and letters (even the "high" symbols like
-  tilde, ``0x7e``)
-
-- Differing sequences of letters are compared without regard to case, so
-  ``a < aa`` but also ``A < aa`` and ``a < AA``
-
-- Identical sequences of letters are compared with regard to case, with
-  lowercase before uppercase, so ``a < A``
-
-.. _ICU: http://site.icu-project.org/
-.. _Unicode Collation Algorithm: https://www.unicode.org/reports/tr10/
-
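JavaScript's ``Intl.Collator`` is also UCA-based, so the rules above can be sketched in
plain Node (assuming the default ``en`` collator matches ICU's defaults, which may not
hold in every environment):

```javascript
// UCA collation: lowercase before uppercase for identical letters,
// and a shorter sequence before the longer sequences it prefixes.
const collator = new Intl.Collator("en");
const sorted = ["b", "A", "aa", "a"].sort(collator.compare);
console.log(sorted.join(" < ")); // a < A < aa < b
```
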
-You can demonstrate the collation sequence for 7-bit ASCII characters like this:
-
-.. code-block:: ruby
-
-    require 'rubygems'
-    require 'restclient'
-    require 'json'
-
-    DB="http://127.0.0.1:5984/collator"
-
-    RestClient.delete DB rescue nil
-    RestClient.put "#{DB}",""
-
-    (32..126).each do |c|
-        RestClient.put "#{DB}/#{c.to_s(16)}", {"x"=>c.chr}.to_json
-    end
-
-    RestClient.put "#{DB}/_design/test", <<EOS
-    {
-        "views":{
-            "one":{
-                "map":"function (doc) { emit(doc.x,null); }"
-            }
-        }
-    }
-    EOS
-
-    puts RestClient.get("#{DB}/_design/test/_view/one")
-
-This shows the collation sequence to be::
-
-    ` ^ _ - , ; : ! ? . ' " ( ) [ ] { } @ * / \ & # % + < = > | ~ $ 0 1 2 3 4 5 6 7 8 9
-    a A b B c C d D e E f F g G h H i I j J k K l L m M n N o O p P q Q r R s S t T u U v V w W x X y Y z Z
-
-Key ranges
-----------
-
-Take special care when querying key ranges. For example: the query::
-
-    startkey="Abc"&endkey="AbcZZZZ"
-
-will match "ABC" and "abc1", but not "abc". This is because UCA sorts as::
-
-    abc < Abc < ABC < abc1 < AbcZZZZ
-
-For most applications, to avoid problems you should lowercase the `startkey`::
-
-    startkey="abc"&endkey="abcZZZZZZZZ"
-
-will match all keys starting with ``[aA][bB][cC]``
-
-Complex keys
-------------
-
-The query ``startkey=["foo"]&endkey=["foo",{}]`` will match most array keys
-with "foo" in the first element, such as ``["foo","bar"]`` and
-``["foo",["bar","baz"]]``. However, it will not match ``["foo",{"an":"object"}]``.
-
-_all_docs
-=========
-
-The :ref:`_all_docs <api/db/all_docs>` view is a special case because it uses
-ASCII collation for doc ids, not UCA::
-
-    startkey="_design/"&endkey="_design/ZZZZZZZZ"
-
-will not find ``_design/abc`` because `'Z'` comes before `'a'` in the ASCII
-sequence. A better solution is::
-
-    startkey="_design/"&endkey="_design0"
-
-Raw collation
-=============
-
-To squeeze a little more performance out of views, you can specify
-``"options":{"collation":"raw"}`` within the view definition for native Erlang
-collation, especially if you don't require UCA. This gives a different collation
-sequence:
-
-.. code-block:: javascript
-
-    1
-    false
-    null
-    true
-    {"a":"a"},
-    ["a"]
-    "a"
-
-Beware that ``{}`` is no longer a suitable "high" key sentinel value. Use a
-string like ``"\ufff0"`` instead.
diff --git a/src/ddocs/views/index.rst b/src/ddocs/views/index.rst
deleted file mode 100644
index fa9c2ac..0000000
--- a/src/ddocs/views/index.rst
+++ /dev/null
@@ -1,29 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _views:
-
-==============
-Guide to Views
-==============
-
-Views are the primary tool used for querying and reporting on CouchDB documents.
-Here you'll learn how they work and how to use them to build effective
-applications with CouchDB.
-
-.. toctree::
-
-    intro
-    collation
-    joins
-    nosql
-    pagination
diff --git a/src/ddocs/views/intro.rst b/src/ddocs/views/intro.rst
deleted file mode 100644
index 2b44374..0000000
--- a/src/ddocs/views/intro.rst
+++ /dev/null
@@ -1,740 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _views/intro:
-
-=====================
-Introduction to Views
-=====================
-
-Views are useful for many purposes:
-
-- Filtering the documents in your database to find those relevant to a
-  particular process.
-- Extracting data from your documents and presenting it in a specific order.
-- Building efficient indexes to find documents by any value or structure that
-  resides in them.
-- Using these indexes to represent relationships among documents.
-- Finally, making all sorts of calculations on the data in your documents.
-  For example, if documents represent your company’s financial transactions,
-  a view can answer the question of what the spending was in the last week,
-  month, or year.
-
-What Is a View?
-===============
-
-Let’s go through the different use cases. First is extracting data that you
-might need for a special purpose in a specific order. For a front page, we want
-a list of blog post titles sorted by date. We’ll work with a set of example
-documents as we walk through how views work:
-
-.. code-block:: javascript
-
-    {
-        "_id":"biking",
-        "_rev":"AE19EBC7654",
-
-        "title":"Biking",
-        "body":"My biggest hobby is mountainbiking. The other day...",
-        "date":"2009/01/30 18:04:11"
-    }
-
-.. code-block:: javascript
-
-    {
-        "_id":"bought-a-cat",
-        "_rev":"4A3BBEE711",
-
-        "title":"Bought a Cat",
-        "body":"I went to the pet store earlier and brought home a little kitty...",
-        "date":"2009/02/17 21:13:39"
-    }
-
-.. code-block:: javascript
-
-    {
-        "_id":"hello-world",
-        "_rev":"43FBA4E7AB",
-
-        "title":"Hello World",
-        "body":"Well hello and welcome to my new blog...",
-        "date":"2009/01/15 15:52:20"
-    }
-
-Three will do for the example. Note that the documents are sorted by "_id",
-which is how they are stored in the database. Now we define a view.
-Bear with us without an explanation while we show you some code:
-
-.. code-block:: javascript
-
-    function(doc) {
-        if(doc.date && doc.title) {
-            emit(doc.date, doc.title);
-        }
-    }
-
-This is a `map function`, and it is written in JavaScript. If you are not
-familiar with JavaScript but have used C or any other C-like language such as
-Java, PHP, or C#, this should look familiar. It is a simple function definition.
-
-You provide CouchDB with view functions as strings stored inside the ``views``
-field of a design document. To create this view you can use this command:
-
-.. code-block:: console
-
-    curl -X PUT http://admin:password@127.0.0.1:5984/db/_design/my_ddoc \
-         -d '{"views":{"my_filter":{"map":
-             "function(doc) { if(doc.date && doc.title) { emit(doc.date, doc.title); }}"}}}'
-
-You don’t run the JavaScript function yourself. Instead, when you
-`query your view`, CouchDB takes the source code and runs it for you on every
-document in the database your view was defined in. You `query your view` to
-retrieve the `view result` using the following command:
-
-.. code-block:: console
-
-    curl -X GET http://admin:password@127.0.0.1:5984/db/_design/my_ddoc/_view/my_filter
-
-All map functions have a single parameter, ``doc``. This is a single document in
-the database. Our map function checks whether our document has a ``date`` and
-a ``title`` attribute — luckily, all of our documents have them — and then calls
-the built-in :js:func:`emit` function with these two attributes as arguments.
-
-The :js:func:`emit` function always takes two arguments: the first is ``key``,
-and the second is ``value``. The ``emit(key, value)`` function creates an entry
-in our `view result`. One more thing: the ``emit()`` function can be called
-multiple times in the map function to create multiple entries in the view
-results from a single document, but we are not doing that yet.
-
-CouchDB takes whatever you pass into the emit() function and puts it into a list
-(see Table 1, “View results” below). Each row in that list includes the `key`
-and `value`. More importantly, the list is sorted by key (by ``doc.date``
-in our case). The most important feature of a view result is that it is sorted
-by `key`. We will come back to that over and over again to do neat things. Stay
-tuned.
-
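As a minimal sketch (plain JavaScript, not CouchDB's actual indexer), the map-then-sort step above can be simulated with the three example posts:

```javascript
// Sketch only: simulate CouchDB applying the map function to every document
// and keeping the emitted rows sorted by key.
const docs = [
  { _id: "biking", title: "Biking", date: "2009/01/30 18:04:11" },
  { _id: "bought-a-cat", title: "Bought a Cat", date: "2009/02/17 21:13:39" },
  { _id: "hello-world", title: "Hello World", date: "2009/01/15 15:52:20" }
];

const rows = [];
function emit(key, value) { rows.push({ key: key, value: value }); }

// The map function from the text.
function map(doc) {
  if (doc.date && doc.title) {
    emit(doc.date, doc.title);
  }
}

docs.forEach(function (doc) { map(doc); });
// CouchDB keeps the view index sorted by key; plain string comparison
// reproduces the date order here.
rows.sort(function (a, b) { return a.key < b.key ? -1 : a.key > b.key ? 1 : 0; });

console.log(rows.map(function (r) { return r.value; }));
// → [ 'Hello World', 'Biking', 'Bought a Cat' ]
```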
-Table 1. View results:
-
-+-----------------------+------------------+
-|         Key           |      Value       |
-+=======================+==================+
-| "2009/01/15 15:52:20" | "Hello World"    |
-+-----------------------+------------------+
-| "2009/01/30 18:04:11" | "Biking"         |
-+-----------------------+------------------+
-| "2009/02/17 21:13:39" | "Bought a Cat"   |
-+-----------------------+------------------+
-
-When you query your view, CouchDB takes the source code and runs it for you on
-every document in the database. If you have a lot of documents, that takes
-quite a bit of time and you might wonder if it is not horribly inefficient
-to do this. Yes, it would be, but CouchDB is designed to avoid any extra costs:
-it only runs through all documents once, when you first query your view.
-If a document is changed, the map function is only run once, to recompute
-the keys and values for that single document.
-
-The view result is stored in a B-tree, just like the structure that is
-responsible for holding your documents. View B-trees are stored in their
-own file, so that for high-performance CouchDB usage, you can keep views on
-their own disk. The B-tree provides very fast lookups of rows by key, as well
-as efficient streaming of rows in a key range. In our example, a single view
-can answer all questions that involve time: “Give me all the blog posts from
-last week” or “last month” or “this year.” Pretty neat.
-
-When we query our view, we get back a list of all documents sorted by date.
-Each row also includes the post title so we can construct links to posts.
-Table 1 is just a graphical representation of the view result.
-The actual result is JSON-encoded and contains a little more metadata:
-
-.. code-block:: javascript
-
-    {
-        "total_rows": 3,
-        "offset": 0,
-        "rows": [
-            {
-                "key": "2009/01/15 15:52:20",
-                "id": "hello-world",
-                "value": "Hello World"
-            },
-
-            {
-                "key": "2009/01/30 18:04:11",
-                "id": "biking",
-                "value": "Biking"
-            },
-
-            {
-                "key": "2009/02/17 21:13:39",
-                "id": "bought-a-cat",
-                "value": "Bought a Cat"
-            }
-
-        ]
-    }
-
-Now, the actual result is not as nicely formatted and doesn’t include any
-superfluous whitespace or newlines, but this is better for you (and us!)
-to read and understand. Where does that "id" member in the result rows come
-from? That wasn’t there before. That’s because we omitted it earlier to avoid
-confusion. CouchDB automatically includes the document ID of the document that
-created the entry in the view result. We’ll use this as well when constructing
-links to the blog post pages.
-
-.. warning::
-
-    Do not emit the entire document as the value of your ``emit(key, value)``
-    statement unless you're sure you know you want it. This stores an entire
-    additional copy of your document in the view's secondary index. Views with
-    ``emit(key, doc)`` take longer to update, longer to write to disk, and
-    consume significantly more disk space. The only advantage is that they
-    are faster to query than using the ``?include_docs=true`` parameter when
-    querying a view.
-
-    Consider the trade-offs before emitting the entire document. Often it is
-    sufficient to emit only a portion of the document, or just a single key /
-    value pair, in your views.
-
-Efficient Lookups
-=================
-
-Let’s move on to the second use case for views: “building efficient indexes to
-find documents by any value or structure that resides in them.” We already
-explained the efficient indexing, but we skipped a few details. This is a good
-time to finish this discussion as we are looking at map functions that are a
-little more complex.
-
-First, back to the B-trees! We explained that the B-tree that backs the
-key-sorted view result is built only once, when you first query a view,
-and all subsequent queries will just read the B-tree instead of executing
-the map function for all documents again. What happens, though, when you change
-a document, add a new one, or delete one? Easy: CouchDB is smart enough
-to find the rows in the view result that were created by a specific document.
-It marks them invalid so that they no longer show up in view results.
-If the document was deleted, we’re good — the resulting B-tree reflects the
-state of the database. If a document got updated, the new document is run
-through the map function and the resulting new lines are inserted into
-the B-tree at the correct spots. New documents are handled in the same way.
-The B-tree is a very efficient data structure for our needs, and the crash-only
-design of CouchDB databases is carried over to the view indexes as well.
-
-To add one more point to the efficiency discussion: usually multiple documents
-are updated between view queries. The mechanism explained in the previous
-paragraph gets applied to all changes in the database since the last time
-the view was queried in a batch operation, which makes things even faster and
-is generally a better use of your resources.
-
-Find One
---------
-
-On to more complex map functions. We said “find documents by any value or
-structure that resides in them.” We already explained how to extract a value
-by which to sort a list of view rows (our date field). The same mechanism is used
-for fast lookups. The URI to query to get a view’s result is
-``/database/_design/designdocname/_view/viewname``. This gives you a list of all
-rows in the view. We have only three documents, so things are small, but with
-thousands of documents, this can get long. You can add view parameters to the
-URI to constrain the result set. Say we know the date of a blog post.
-To find a single document, we would use
-``/blog/_design/docs/_view/by_date?key="2009/01/30 18:04:11"``
-to get the “Biking” blog post. Remember that you can place whatever you like
-in the key parameter to the emit() function. Whatever you put in there, we can
-now use to look up exactly — and fast.
-
-Note that in the case where multiple rows have the same key (perhaps we design
-a view where the key is the name of the post’s author), key queries can return
-more than one row.
-
-Find Many
----------
-
-We talked about “getting all posts for last month.” If it’s February now,
-this is as easy as::
-
-    /blog/_design/docs/_view/by_date?startkey="2010/01/01 00:00:00"&endkey="2010/02/00 00:00:00"
-
-The ``startkey`` and ``endkey`` parameters specify an inclusive range on which
-we can search.
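The inclusive-range behavior can be sketched in plain JavaScript (an illustration, not CouchDB's implementation), using the keys from Table 1:

```javascript
// Sketch only: an inclusive startkey/endkey scan over rows that are already
// sorted by key, like CouchDB's by_date index.
const rows = [
  { key: "2009/01/15 15:52:20", value: "Hello World" },
  { key: "2009/01/30 18:04:11", value: "Biking" },
  { key: "2009/02/17 21:13:39", value: "Bought a Cat" }
];

function rangeScan(rows, startkey, endkey) {
  // Both bounds are inclusive, matching CouchDB's defaults.
  return rows.filter(function (r) {
    return r.key >= startkey && r.key <= endkey;
  });
}

// All posts from January 2009:
const january = rangeScan(rows, "2009/01/01 00:00:00", "2009/02/00 00:00:00");
console.log(january.map(function (r) { return r.value; }));
// → [ 'Hello World', 'Biking' ]
```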
-
-To make things a little nicer and to prepare for a future example, we are going
-to change the format of our date field. Instead of a string, we are going to use
-an array, where individual members are part of a timestamp in decreasing
-significance. This sounds fancy, but it is rather easy. Instead of::
-
-    {
-        "date": "2009/01/31 00:00:00"
-    }
-
-we use::
-
-    {
-        "date": [2009, 1, 31, 0, 0, 0]
-    }
-
-Our map function does not have to change for this, but our view result looks
-a little different:
-
-Table 2. New view results:
-
-+---------------------------+------------------+
-|            Key            |      Value       |
-+===========================+==================+
-| [2009, 1, 15, 15, 52, 20] | "Hello World"    |
-+---------------------------+------------------+
-| [2009, 1, 30, 18, 4, 11]  | "Biking"         |
-+---------------------------+------------------+
-| [2009, 2, 17, 21, 13, 39] | "Bought a Cat"   |
-+---------------------------+------------------+
-
-And our queries change to::
-
-    /blog/_design/docs/_view/by_date?startkey=[2010, 1, 1, 0, 0, 0]&endkey=[2010, 2, 1, 0, 0, 0]
-
-For all you care, this is just a change in syntax, not meaning. But it shows
-you the power of views. Not only can you construct an index with scalar values
-like strings and integers, you can also use JSON structures as keys for your
-views. Say we tag our documents with a list of tags and want to see all tags,
-but we don’t care for documents that have not been tagged.
-
-.. code-block:: javascript
-
-    {
-        ...
-        tags: ["cool", "freak", "plankton"],
-        ...
-    }
-
-.. code-block:: javascript
-
-    {
-        ...
-        tags: [],
-        ...
-    }
-
-.. code-block:: javascript
-
-    function(doc) {
-        if(doc.tags && doc.tags.length > 0) {
-            for(var idx in doc.tags) {
-                emit(doc.tags[idx], null);
-            }
-        }
-    }
-
-This shows a few new things. You can have conditions on structure
-(``if(doc.tags && doc.tags.length > 0)``) instead of just values. This is
-also an example of how a map function calls :js:func:`emit` multiple times
-per document.
-And finally, you can pass null instead of a value to the value parameter.
-The same is true for the key parameter. We’ll see in a bit how that is useful.
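The multiple-emit behavior can be sketched like this (plain JavaScript, with a hypothetical test harness standing in for CouchDB):

```javascript
// Sketch only: one tagged document contributes several rows; the untagged
// document contributes none.
const docs = [
  { _id: "tagged", tags: ["cool", "freak", "plankton"] },
  { _id: "untagged", tags: [] }
];

const rows = [];
function emit(key, value) { rows.push({ key: key, value: value }); }

function map(doc) {
  if (doc.tags && doc.tags.length > 0) {
    doc.tags.forEach(function (tag) { emit(tag, null); });
  }
}

docs.forEach(function (doc) { map(doc); });
console.log(rows.length); // → 3, all from the tagged document
console.log(rows[0]);     // → { key: 'cool', value: null }
```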
-
-Reversed Results
-----------------
-
-To retrieve view results in reverse order, use the ``descending=true`` query
-parameter. If you are using a ``startkey`` parameter, you will find that CouchDB
-returns different rows or no rows at all. What’s up with that?
-
-It’s pretty easy to understand when you see how view query options work under
-the hood. A view is stored in a tree structure for fast lookups. Whenever you
-query a view, this is how CouchDB operates:
-
-#. Starts reading at the top, or at the position that ``startkey`` specifies,
-   if present.
-#. Returns one row at a time until the end or until it hits ``endkey``,
-   if present.
-
-If you specify ``descending=true``, the reading direction is reversed,
-not the sort order of the rows in the view. The same two-step procedure
-is still followed.
-
-Say you have a view result that looks like this:
-
-+-----+-------+
-| Key | Value |
-+=====+=======+
-|  0  | "foo" |
-+-----+-------+
-|  1  | "bar" |
-+-----+-------+
-|  2  | "baz" |
-+-----+-------+
-
-Here are potential query options: ``?startkey=1&descending=true``. What will
-CouchDB do? See #1 above: it jumps to ``startkey``, which is the row with the
-key ``1``, and starts reading backward until it hits the end of the view.
-So the particular result would be:
-
-+-----+-------+
-| Key | Value |
-+=====+=======+
-|  1  | "bar" |
-+-----+-------+
-|  0  | "foo" |
-+-----+-------+
-
-This is very likely not what you want. To get the rows with the keys ``1``
-and ``2`` in reverse order, you need to switch from ``startkey`` to ``endkey``:
-``endkey=1&descending=true``:
-
-+-----+-------+
-| Key | Value |
-+=====+=======+
-|  2  | "baz" |
-+-----+-------+
-|  1  | "bar" |
-+-----+-------+
-
-Now that looks a lot better. CouchDB started reading at the bottom of the view
-and went backward until it hit ``endkey``.
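The two-step read can be sketched as follows (a simplification in plain JavaScript; the real B-tree never materializes the whole view):

```javascript
// Sketch only: reverse the reading direction first, then apply startkey and
// endkey relative to that direction.
const view = [
  { key: 0, value: "foo" },
  { key: 1, value: "bar" },
  { key: 2, value: "baz" }
];

function query(view, opts) {
  let rows = view.slice();
  if (opts.descending) rows.reverse(); // direction, not sort order
  return rows.filter(function (r) {
    if (opts.startkey !== undefined &&
        (opts.descending ? r.key > opts.startkey : r.key < opts.startkey)) return false;
    if (opts.endkey !== undefined &&
        (opts.descending ? r.key < opts.endkey : r.key > opts.endkey)) return false;
    return true;
  });
}

console.log(query(view, { startkey: 1, descending: true }));
// → rows with keys 1, 0 (probably not what you want)
console.log(query(view, { endkey: 1, descending: true }));
// → rows with keys 2, 1
```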
-
-The View to Get Comments for Posts
-==================================
-
-We use an array key here to support the ``group_level`` reduce query parameter.
-CouchDB’s views are stored in the B-tree file structure. Because of the way
-B-trees are structured, we can cache the intermediate reduce results in the
-non-leaf nodes of the tree, so reduce queries can be computed along arbitrary
-key ranges in logarithmic time. See Figure 1, “Comments map function”.
-
-In the blog app, we use ``group_level`` reduce queries to compute the count of
-comments both on a per-post and total basis, achieved by querying the same view
-index with different methods. With some array keys, and assuming each key has
-the value ``1``:
-
-.. code-block:: javascript
-
-  ["a","b","c"]
-  ["a","b","e"]
-  ["a","c","m"]
-  ["b","a","c"]
-  ["b","a","g"]
-
-the reduce view:
-
-.. code-block:: javascript
-
-    function(keys, values, rereduce) {
-        return sum(values);
-    }
-
-or:
-
-.. code-block:: javascript
-
-    _sum
-
-which is a built-in CouchDB reduce function (the others are ``_count`` and
-``_stats``). ``_sum`` here returns the total number of rows between the start
-and end key. So with ``startkey=["a","b"]&endkey=["b"]`` (which includes the
-first three of the above keys) the result would equal ``3``. The effect is to
-count rows. If you’d like to count rows without depending on the row value,
-you can switch on the ``rereduce`` parameter:
-
-.. code-block:: javascript
-
-    function(keys, values, rereduce) {
-        if (rereduce) {
-            return sum(values);
-        } else {
-            return values.length;
-        }
-    }
-
-.. note::
-    The JavaScript function above could be effectively replaced by the built-in
-    ``_count``.
-
-.. figure:: ../../../images/views-intro-01.png
-    :align: center
-    :scale: 50 %
-    :alt:  Comments map function
-
-    Figure 1. Comments map function
-
-This is the reduce view used by the example app to count comments, while
-utilizing the map to output the comments, which are more useful than just
-``1`` over and over. It pays to spend some time playing around with map and
-reduce functions. Fauxton is OK for this, but it doesn’t give full access to
-all the query parameters. Writing your own test code for views in your language
-of choice is a great way to explore the nuances and capabilities of CouchDB’s
-incremental MapReduce system.
-
-Anyway, with a ``group_level`` query, you’re basically running a series of
-reduce range queries: one for each group that shows up at the level you query.
-Let’s reprint the key list from earlier, grouped at level ``1``:
-
-.. code-block:: javascript
-
-    ["a"]   3
-    ["b"]   2
-
-And at ``group_level=2``:
-
-.. code-block:: javascript
-
-    ["a","b"]   2
-    ["a","c"]   1
-    ["b","a"]   2
-
-Using the parameter ``group=true`` makes it behave as though it were
-``group_level=999``, so in the case of our current example, it would give the
-number ``1`` for each key, as there are no exactly duplicated keys.
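The grouping behavior can be sketched like this (a plain JavaScript illustration; CouchDB computes these sums inside the B-tree rather than over a flat list):

```javascript
// Sketch only: group_level truncates each array key to the given length,
// then applies the reduce (_sum here) within each group.
const rows = [
  { key: ["a", "b", "c"], value: 1 },
  { key: ["a", "b", "e"], value: 1 },
  { key: ["a", "c", "m"], value: 1 },
  { key: ["b", "a", "c"], value: 1 },
  { key: ["b", "a", "g"], value: 1 }
];

function groupLevel(rows, level) {
  const groups = {};
  rows.forEach(function (r) {
    const k = JSON.stringify(r.key.slice(0, level));
    groups[k] = (groups[k] || 0) + r.value; // the built-in _sum
  });
  return groups;
}

console.log(groupLevel(rows, 1)); // → { '["a"]': 3, '["b"]': 2 }
console.log(groupLevel(rows, 2)); // → { '["a","b"]': 2, '["a","c"]': 1, '["b","a"]': 2 }
```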
-
-Reduce/Rereduce
-===============
-
-We briefly talked about the ``rereduce`` parameter to the reduce function.
-We’ll explain what’s up with it in this section. By now, you should have learned
-that your view result is stored in a B-tree index structure for efficiency.
-The existence and use of the ``rereduce`` parameter is tightly coupled to how
-the B-tree index works.
-
-Consider the following map result:
-
-.. code-block:: javascript
-
-    "afrikaans", 1
-    "afrikaans", 1
-    "chinese", 1
-    "chinese", 1
-    "chinese", 1
-    "chinese", 1
-    "french", 1
-    "italian", 1
-    "italian", 1
-    "spanish", 1
-    "vietnamese", 1
-    "vietnamese", 1
-
-Example 1. Example view result (mmm, food)
-
-When we want to find out how many dishes there are per origin, we can reuse
-the simple reduce function shown earlier:
-
-.. code-block:: javascript
-
-    function(keys, values, rereduce) {
-        return sum(values);
-    }
-
-Figure 2, “The B-tree index” shows a simplified version of what the B-tree index
-looks like. We abbreviated the key strings.
-
-.. figure:: ../../../images/views-intro-02.png
-    :align: center
-    :alt: The B-tree index
-
-    Figure 2. The B-tree index
-
-The view result is what computer science grads call a “pre-order” walk through
-the tree. We look at each element in each node starting from the left. Whenever
-we see that there is a subnode to descend into, we descend and start reading
-the elements in that subnode. When we have walked through the entire tree,
-we’re done.
-
-You can see that CouchDB stores both keys and values inside each leaf node.
-In our case, it is simply always ``1``, but you might have a value where you
-count other results and then all rows have a different value. What’s important
-is that CouchDB runs all elements that are within a node into the reduce
-function (setting the ``rereduce`` parameter to false) and stores the result
-inside the parent node along with the edge to the subnode. In our case, each
-edge has a 3 representing the reduce value for the node it points to.
-
-.. note::
-    In reality, nodes have more than 1,600 elements in them. CouchDB computes
-    the result for all the elements in multiple iterations over the elements in
-    a single node, not all at once (which would be disastrous for memory
-    consumption).
-
-Now let’s see what happens when we run a query. We want to know how many
-"chinese" entries we have. The query option is simple: ``?key="chinese"``.
-See Figure 3, “The B-tree index reduce result”.
-
-.. figure:: ../../../images/views-intro-03.png
-    :align: center
-    :alt: The B-tree index reduce result
-
-    Figure 3. The B-tree index reduce result
-
-CouchDB detects that all values in the subnode include the "chinese" key.
-It concludes that it can take the cached reduce value ``3`` associated with that
-node to compute the final result. It then finds the node to its left and sees that it’s
-a node with keys outside the requested range (``key=`` requests a range where
-the beginning and the end are the same value). It concludes that it has to use
-the "chinese" element’s value and the other node’s value and run them through
-the reduce function with the ``rereduce`` parameter set to true.
-
-The reduce function effectively calculates 3 + 1 at query time and returns the
-desired result. The following pseudocode shows the last invocation of the
-reduce function with actual values:
-
-.. code-block:: javascript
-
-    function(null, [3, 1], true) {
-        return sum([3, 1]);
-    }
-
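The two phases can be sketched end to end in plain JavaScript (the node boundaries here are invented for illustration):

```javascript
// Sketch only: reduce runs once per node with rereduce=false over map rows,
// then again over the cached node results with rereduce=true.
function sum(values) {
  return values.reduce(function (a, b) { return a + b; }, 0);
}

function reduce(keys, values, rereduce) {
  return sum(values); // works for both phases, like the example in the text
}

// Three "chinese" rows happen to share one leaf node:
const nodeResult = reduce([["chinese"], ["chinese"], ["chinese"]], [1, 1, 1], false);
console.log(nodeResult); // → 3

// At query time the cached 3 is combined with the remaining row:
const total = reduce(null, [nodeResult, 1], true);
console.log(total); // → 4
```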
-Now, we said your reduce function must actually reduce your values. If you look
-at the B-tree, it should become obvious what happens when you don’t reduce your
-values. Consider the following map result and reduce function. This time we
-want to get a list of all the unique labels in our view:
-
-.. code-block:: javascript
-
-    "abc", "afrikaans"
-    "cef", "afrikaans"
-    "fhi", "chinese"
-    "hkl", "chinese"
-    "ino", "chinese"
-    "lqr", "chinese"
-    "mtu", "french"
-    "owx", "italian"
-    "qza", "italian"
-    "tdx", "spanish"
-    "xfg", "vietnamese"
-    "zul", "vietnamese"
-
-We don’t care about the key here; we only want a list of all the labels we
-have. Our reduce function removes duplicates:
-
-.. code-block:: javascript
-
-    function(keys, values, rereduce) {
-        var unique_labels = {};
-        values.forEach(function(label) {
-            if(!unique_labels[label]) {
-                unique_labels[label] = true;
-            }
-        });
-
-        return unique_labels;
-    }
-
-This translates to Figure 4, “An overflowing reduce index”.
-
-We hope you get the picture. The way the B-tree storage works means that if you
-don’t actually reduce your data in the reduce function, you end up having
-CouchDB copy huge amounts of data around that grow linearly, if not faster,
-with the number of rows in your view.
-
-CouchDB will be able to compute the final result, but only for views with a few
-rows. Anything larger will experience a ridiculously slow view build time.
-To help with that, CouchDB since version 0.10.0 will throw an error if your
-reduce function does not reduce its input values.
-
-.. figure:: ../../../images/views-intro-04.png
-    :align: center
-    :alt: An overflowing reduce index
-
-    Figure 4. An overflowing reduce index
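A common fix (our suggestion, not part of the text above) is to move the labels into the key and let grouping do the deduplication, so every reduce output stays a small scalar. Sketched in plain JavaScript:

```javascript
// Sketch only: emit each label as a key, reduce with a _count-style function,
// and query with group=true; every unique label becomes its own group with
// a small scalar count.
const emitted = ["afrikaans", "afrikaans", "chinese", "chinese", "chinese",
                 "chinese", "french", "italian", "italian", "spanish",
                 "vietnamese", "vietnamese"];

// group=true behaviour with the built-in _count:
const counts = {};
emitted.forEach(function (label) {
  counts[label] = (counts[label] || 0) + 1;
});

console.log(Object.keys(counts).length); // → 6 unique labels
console.log(counts.chinese);             // → 4
```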
-
-One vs. Multiple Design Documents
-=================================
-
-A common question is: when should I split multiple views into multiple design
-documents, or keep them together?
-
-Each view you create corresponds to one B-tree. All views in a single design
-document will live in the same set of index files on disk (one file per
-database shard; in 2.0+ by default, 8 files per node).
-
-The most practical consideration for separating views into separate documents
-is how often you change those views. Views that change often, and are in the
-same design document as other views, will invalidate those other views'
-indexes when the design document is written, forcing them all to rebuild from
-scratch. Obviously you will want to avoid this in production!
-
-However, when you have multiple views with the same map function in the same
-design document, CouchDB will optimize and only calculate that map function
-once. This lets you have two views with different *reduce* functions (say,
-one with ``_sum`` and one with ``_stats``) but build only a single copy
-of the mapped index. It also saves disk space and the time to write multiple
-copies to disk.
-
-Another benefit of having multiple views in the same design document is that
-the index files can keep a single index of backwards references from docids
-to rows. CouchDB needs these "back refs" to invalidate rows in a view when a
-document is deleted (otherwise, a delete would force a total rebuild!).
-
-One other consideration is that each separate design document will spawn
-another (set of) ``couchjs`` processes to generate the view, one per shard.
-Depending on the number of cores on your server(s), this may be efficient
-(using all of the idle cores you have) or inefficient (overloading the CPU on
-your servers). The exact situation will depend on your deployment architecture.
-
-So, should you use one or multiple design documents? The choice is yours.
-
-Lessons Learned
-===============
-
-- If you don’t use the key field in the map function, you are probably doing it
-  wrong.
-- If you are trying to make a list of values unique in the reduce functions,
-  you are probably doing it wrong.
-- If you don’t reduce your values to a single scalar value or a small
-  fixed-sized object or array with a fixed number of scalar values of small
-  sizes, you are probably doing it wrong.
-
-Wrapping Up
-===========
-
-Map functions are side effect–free functions that take a document as argument
-and `emit` key/value pairs. CouchDB stores the emitted rows by constructing a
-sorted B-tree index, so row lookups by key, as well as streaming operations
-across a range of rows, can be accomplished in a small memory and processing
-footprint, while writes avoid seeks. Generating a view takes ``O(N)``, where
-``N`` is the total number of rows in the view. However, querying a view is very
-quick, as the B-tree remains shallow even when it contains many, many keys.
-
-Reduce functions operate on the sorted rows emitted by map view functions.
-CouchDB’s reduce functionality takes advantage of one of the fundamental
-properties of B-tree indexes: for every leaf node (a sorted row), there is a
-chain of internal nodes reaching back to the root. Each leaf node in the B-tree
-carries a few rows (on the order of tens, depending on row size), and each
-internal node may link to a few leaf nodes or other internal nodes.
-
-The reduce function is run on every node in the tree in order to calculate
-the final reduce value. The end result is a reduce function that can be
-incrementally updated upon changes to the map function, while recalculating
-the reduction values for a minimum number of nodes. The initial reduction is
-calculated once per each node (inner and leaf) in the tree.
-
-When run on leaf nodes (which contain actual map rows), the reduce function’s
-third parameter, ``rereduce``, is false. The arguments in this case are the keys
-and values as output by the map function. The function returns a single
-reduction value, which is stored on the inner node that a working set of leaf
-nodes have in common, and is used as a cache in future reduce calculations.
-
-When the reduce function is run on inner nodes, the ``rereduce`` flag is
-``true``. This allows the function to account for the fact that it will be
-receiving its own prior output. When ``rereduce`` is true, the values passed to
-the function are intermediate reduction values as cached from previous
-calculations. When the tree is more than two levels deep, the `rereduce` phase
-is repeated, consuming chunks of the previous level’s output until the final
-reduce value is calculated at the root node.
-
-A common mistake new CouchDB users make is attempting to construct complex
-aggregate values with a reduce function. Full reductions should result in a
-scalar value, like 5, and not, for instance, a JSON hash with a set of unique
-keys and the count of each. The problem with this approach is that you’ll end
-up with a very large final value. The number of unique keys can be nearly as
-large as the number of total keys, even for a large set. It is fine to combine
-a few scalar calculations into one reduce function; for instance, to find the
-total, average, and standard deviation of a set of numbers in a single function.
-
-If you’re interested in pushing the edge of CouchDB’s incremental reduce
-functionality, have a look at `Google’s paper on Sawzall`_, which gives examples
-of some of the more exotic reductions that can be accomplished in a system with
-similar constraints.
-
-.. _Google’s paper on Sawzall: http://research.google.com/archive/sawzall.html
diff --git a/src/ddocs/views/joins.rst b/src/ddocs/views/joins.rst
deleted file mode 100644
index df30078..0000000
--- a/src/ddocs/views/joins.rst
+++ /dev/null
@@ -1,431 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _views/joins:
-
-================
-Joins With Views
-================
-
-Linked Documents
-================
-
-If your :ref:`map function <mapfun>` emits an object value which has
-``{'_id': XXX}`` and you :ref:`query view <api/ddoc/view>` with
-``include_docs=true`` parameter, then CouchDB will fetch the document with id
-``XXX`` rather than the document which was processed to emit the key/value pair.
-
-This means that if one document contains the ids of other documents, it can
-cause those documents to be fetched in the view too, adjacent to the same key
-if required.
-
-For example, if you have the following hierarchically-linked documents:
-
-.. code-block:: javascript
-
-    [
-        { "_id": "11111" },
-        { "_id": "22222", "ancestors": ["11111"], "value": "hello" },
-        { "_id": "33333", "ancestors": ["22222","11111"], "value": "world" }
-    ]
-
-You can emit the values with the ancestor documents adjacent to them in the view
-like this:
-
-.. code-block:: javascript
-
-    function(doc) {
-        if (doc.value) {
-            emit([doc.value, 0], null);
-            if (doc.ancestors) {
-                for (var i in doc.ancestors) {
-                    emit([doc.value, Number(i)+1], {_id: doc.ancestors[i]});
-                }
-            }
-        }
-    }
-
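As a quick sanity check (a plain JavaScript sketch, not CouchDB itself), the map function emits five rows for these three documents:

```javascript
// Sketch only: apply the ancestors map function to the three linked docs.
const docs = [
  { _id: "11111" },
  { _id: "22222", ancestors: ["11111"], value: "hello" },
  { _id: "33333", ancestors: ["22222", "11111"], value: "world" }
];

const rows = [];
function emit(key, value) { rows.push({ key: key, value: value }); }

function map(doc) {
  if (doc.value) {
    emit([doc.value, 0], null);
    if (doc.ancestors) {
      for (var i = 0; i < doc.ancestors.length; i++) {
        // The {_id: ...} value is what makes include_docs fetch the ancestor.
        emit([doc.value, i + 1], { _id: doc.ancestors[i] });
      }
    }
  }
}

docs.forEach(function (doc) { map(doc); });
console.log(rows.length);   // → 5, matching total_rows in the result
console.log(rows[1].value); // → { _id: '11111' }
```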
-The result you get is:
-
-.. code-block:: javascript
-
-    {
-        "total_rows": 5,
-        "offset": 0,
-        "rows": [
-            {
-                "id": "22222",
-                "key": [
-                    "hello",
-                    0
-                ],
-                "value": null,
-                "doc": {
-                    "_id": "22222",
-                    "_rev": "1-0eee81fecb5aa4f51e285c621271ff02",
-                    "ancestors": [
-                        "11111"
-                    ],
-                    "value": "hello"
-                }
-            },
-            {
-                "id": "22222",
-                "key": [
-                    "hello",
-                    1
-                ],
-                "value": {
-                    "_id": "11111"
-                },
-                "doc": {
-                    "_id": "11111",
-                    "_rev": "1-967a00dff5e02add41819138abb3284d"
-                }
-            },
-            {
-                "id": "33333",
-                "key": [
-                    "world",
-                    0
-                ],
-                "value": null,
-                "doc": {
-                    "_id": "33333",
-                    "_rev": "1-11e42b44fdb3d3784602eca7c0332a43",
-                    "ancestors": [
-                        "22222",
-                        "11111"
-                    ],
-                    "value": "world"
-                }
-            },
-            {
-                "id": "33333",
-                "key": [
-                    "world",
-                    1
-                ],
-                "value": {
-                    "_id": "22222"
-                },
-                "doc": {
-                    "_id": "22222",
-                    "_rev": "1-0eee81fecb5aa4f51e285c621271ff02",
-                    "ancestors": [
-                        "11111"
-                    ],
-                    "value": "hello"
-                }
-            },
-            {
-                "id": "33333",
-                "key": [
-                    "world",
-                    2
-                ],
-                "value": {
-                    "_id": "11111"
-                },
-                "doc": {
-                    "_id": "11111",
-                    "_rev": "1-967a00dff5e02add41819138abb3284d"
-                }
-            }
-        ]
-    }
-
-which makes it very cheap to fetch a document plus all its ancestors in one
-query.
-
-Note that the ``"id"`` in the row is still that of the originating document.
-The only difference is that ``include_docs`` fetches a different doc.
-
-The current revision of the document is resolved at query time, not at the time
-the view is generated. This means that if a new revision of the linked document
-is added later, it will appear in view queries even though the view itself
-hasn't changed. To force a specific revision of a linked document to be used,
-emit a ``"_rev"`` property as well as ``"_id"``.
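To make the mechanics concrete, here is a minimal JavaScript sketch (a toy model run locally, not CouchDB internals) of how ``include_docs=true`` swaps in the linked document when a row's value carries an ``_id``:

```javascript
// Toy in-memory "database" using the three example documents.
var docsById = {
    "11111": { _id: "11111" },
    "22222": { _id: "22222", ancestors: ["11111"], value: "hello" },
    "33333": { _id: "33333", ancestors: ["22222", "11111"], value: "world" }
};

// The map function from the text.
function map(doc, emit) {
    if (doc.value) {
        emit([doc.value, 0], null);
        if (doc.ancestors) {
            for (var i = 0; i < doc.ancestors.length; i++) {
                emit([doc.value, i + 1], { _id: doc.ancestors[i] });
            }
        }
    }
}

function queryWithIncludeDocs() {
    var rows = [];
    Object.keys(docsById).forEach(function(id) {
        map(docsById[id], function(key, value) {
            // include_docs=true: fetch the linked _id if the value names one,
            // otherwise the document that emitted the row.
            var fetched = (value && value._id) ? value._id : id;
            rows.push({ id: id, key: key, value: value, doc: docsById[fetched] });
        });
    });
    // Keys here are all strings/small integers, so a string compare of the
    // JSON encoding is a good-enough stand-in for CouchDB's collation.
    rows.sort(function(a, b) {
        return JSON.stringify(a.key) < JSON.stringify(b.key) ? -1 : 1;
    });
    return rows;
}

var rows = queryWithIncludeDocs();
```

Running this yields the same five rows as the server response above: each row's ``id`` is still the emitting document, while ``doc`` holds the linked ancestor where one was named.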
-
-Using View Collation
-====================
-
-:Author: Christopher Lenz
-:Date: 2007-10-05
-:Source: http://www.cmlenz.net/archives/2007/10/couchdb-joins
-
-Just today, there was a discussion on IRC on how you'd go about modeling a
-simple blogging system with “post” and “comment” entities, where any blog
-post might have N comments. If you were using an SQL database, you'd obviously
-have two tables with foreign keys and you'd be using joins. (At least until you
-needed to add some `denormalization`_.)
-
-.. _denormalization: http://en.wikipedia.org/wiki/Denormalization
-
-But what would the “obvious” approach in CouchDB look like?
-
-Approach #1: Comments Inlined
------------------------------
-
-A simple approach would be to have one document per blog post, and store the
-comments inside that document:
-
-.. code-block:: javascript
-
-    {
-        "_id": "myslug",
-        "_rev": "123456",
-        "author": "john",
-        "title": "My blog post",
-        "content": "Bla bla bla …",
-        "comments": [
-            {"author": "jack", "content": "…"},
-            {"author": "jane", "content": "…"}
-        ]
-    }
-
-.. note::
-    Of course the model of an actual blogging system would be more extensive:
-    you'd have tags, timestamps, and so on. This is just to demonstrate the
-    basics.
-
-The obvious advantage of this approach is that the data that belongs together
-is stored in one place. Delete the post, and you automatically delete the
-corresponding comments, and so on.
-
-You may be thinking that putting the comments inside the blog post document
-would not allow us to query for the comments themselves, but you'd be wrong.
-You could trivially write a CouchDB view that would return all comments across
-all blog posts, keyed by author:
-
-.. code-block:: javascript
-
-    function(doc) {
-        for (var i in doc.comments) {
-            emit(doc.comments[i].author, doc.comments[i].content);
-        }
-    }
-
-Now you could list all comments by a particular user by invoking the view and
-passing it a ``?key="username"`` query string parameter.
-
-However, this approach has a drawback that can be quite significant for many
-applications: To add a comment to a post, you need to:
-
-- Fetch the blog post document
-- Add the new comment to the JSON structure
-- Send the updated document to the server
-
-Now if you have multiple client processes adding comments at roughly the same
-time, some of them will get a `HTTP 409 Conflict` error on step 3 (that's
-optimistic concurrency in action). For some applications this makes sense, but
-in many other apps, you'd want to append new related data regardless of whether
-other data has been added in the meantime.
-
-The only way to allow non-conflicting addition of related data is by putting
-that related data into separate documents.
-
-Approach #2: Comments Separate
-------------------------------
-
-Using this approach you'd have one document per blog post, and one document per
-comment. The comment documents would have a “backlink” to the post they belong
-to.
-
-The blog post document would look similar to the above, minus the comments
-property. Also, we'd now have a type property on all our documents so that we
-can tell the difference between posts and comments:
-
-.. code-block:: javascript
-
-    {
-        "_id": "myslug",
-        "_rev": "123456",
-        "type": "post",
-        "author": "john",
-        "title": "My blog post",
-        "content": "Bla bla bla …"
-    }
-
-The comments themselves are stored in separate documents, which also have a type
-property (this time with the value “comment”), and additionally feature a post
-property containing the ID of the post document they belong to:
-
-.. code-block:: javascript
-
-    {
-        "_id": "ABCDEF",
-        "_rev": "123456",
-        "type": "comment",
-        "post": "myslug",
-        "author": "jack",
-        "content": "…"
-    }
-
-.. code-block:: javascript
-
-    {
-        "_id": "DEFABC",
-        "_rev": "123456",
-        "type": "comment",
-        "post": "myslug",
-        "author": "jane",
-        "content": "…"
-    }
-
-To list all comments per blog post, you'd add a simple view, keyed by blog post
-ID:
-
-.. code-block:: javascript
-
-    function(doc) {
-        if (doc.type == "comment") {
-            emit(doc.post, {author: doc.author, content: doc.content});
-        }
-    }
-
-And you'd invoke that view passing it a ``?key="post_id"`` query string
-parameter.
-
-Viewing all comments by author is just as easy as before:
-
-.. code-block:: javascript
-
-    function(doc) {
-        if (doc.type == "comment") {
-            emit(doc.author, {post: doc.post, content: doc.content});
-        }
-    }
-
-So this is better in some ways, but it also has a disadvantage.
-Imagine you want to display a blog post with all the associated comments on the
-same web page. With our first approach, we needed just a single request to the
-CouchDB server, namely a ``GET`` request to the document. With this second
-approach, we need two requests: a ``GET`` request to the post document, and a
-``GET`` request to the view that returns all comments for the post.
-
-That is okay, but not quite satisfactory. Just imagine you wanted to add
-threaded comments: you'd now need an additional fetch per comment. What we'd
-probably want then would be a way to join the blog post and the various comments
-together to be able to retrieve them with a single HTTP request.
-
-This was when Damien Katz, the author of CouchDB, chimed in to the discussion
-on IRC to show us the way.
-
-Optimization: Using the Power of View Collation
------------------------------------------------
-
-Obvious to Damien, but not at all obvious to the rest of us: it's fairly simple
-to make a view that includes both the content of the blog post document, and
-the content of all the comments associated with that post. The way you do that
-is by using `complex keys`. Until now we've been using simple string values for
-the view keys, but in fact they can be arbitrary JSON values, so let's make
-some use of that:
-
-.. code-block:: javascript
-
-    function(doc) {
-        if (doc.type == "post") {
-            emit([doc._id, 0], null);
-        } else if (doc.type == "comment") {
-            emit([doc.post, 1], null);
-        }
-    }
-
-Okay, this may be confusing at first. Let's take a step back and look at what
-views in CouchDB are really about.
-
-CouchDB views are basically highly efficient on-disk dictionaries that map keys
-to values, where the key is automatically indexed and can be used to filter
-and/or sort the results you get back from your views. When you “invoke” a view,
-you can say that you're only interested in a subset of the view rows by
-specifying a ``?key=foo`` query string parameter. Or you can specify
-``?startkey=foo`` and/or ``?endkey=bar`` query string parameters to fetch rows
-over a range of keys. Finally, by adding ``?include_docs=true`` to the query,
-the result will include the full body of each emitted document.
-
-It's also important to note that keys are always used for collating (i.e.
-sorting) the rows. CouchDB has well-defined rules for comparing arbitrary JSON
-values for collation. For example, the JSON value
-``["foo", 2]`` is sorted after (considered “greater than”) the values
-``["foo"]`` or ``["foo", 1, "bar"]``, but before e.g. ``["foo", 2, "bar"]``.
-This feature enables a whole class of tricks that are rather non-obvious...
-
-.. seealso::
-
-    :ref:`views/collation`
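The collation examples above can be checked with a small comparator. This is a simplified sketch covering only the types used here (numbers, strings, arrays); the real CouchDB rules also order null, booleans, and objects, so treat this as illustrative rather than a faithful reimplementation:

```javascript
// Rank of each JSON type in (simplified) CouchDB collation order:
// numbers sort before strings, strings before arrays.
function typeRank(v) {
    if (typeof v === "number") return 0;
    if (typeof v === "string") return 1;
    return 2; // arrays
}

function collate(a, b) {
    if (Array.isArray(a) && Array.isArray(b)) {
        var len = Math.min(a.length, b.length);
        for (var i = 0; i < len; i++) {
            var c = collate(a[i], b[i]);
            if (c !== 0) return c;
        }
        // All shared elements equal: the shorter array sorts first.
        return a.length - b.length;
    }
    var ra = typeRank(a), rb = typeRank(b);
    if (ra !== rb) return ra - rb;
    return a < b ? -1 : a > b ? 1 : 0;
}
```

With this comparator, ``["foo", 2]`` sorts after both ``["foo"]`` and ``["foo", 1, "bar"]``, but before ``["foo", 2, "bar"]``, matching the examples in the text.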
-
-With that in mind, let's return to the view function above. First note that,
-unlike the previous view functions we've used here, this view handles both
-"post" and "comment" documents, and both of them end up as rows in the same
-view. Also, the key in this view is not just a simple string, but an array.
-The first element in that array is always the ID of the post, regardless of
-whether we're processing an actual post document, or a comment associated with
-a post. The second element is 0 for post documents, and 1 for comment documents.
-
-Let's assume we have two blog posts in our database. Without limiting the view
-results via ``key``, ``startkey``, or ``endkey``, we'd get back something like
-the following:
-
-.. code-block:: javascript
-
-    {
-        "total_rows": 5, "offset": 0, "rows": [{
-                "id": "myslug",
-                "key": ["myslug", 0],
-                "value": null
-            }, {
-                "id": "ABCDEF",
-                "key": ["myslug", 1],
-                "value": null
-            }, {
-                "id": "DEFABC",
-                "key": ["myslug", 1],
-                "value": null
-            }, {
-                "id": "other_slug",
-                "key": ["other_slug", 0],
-                "value": null
-            }, {
-                "id": "CDEFAB",
-                "key": ["other_slug", 1],
-                "value": null
-            }
-        ]
-    }
-
-.. note::
-    With ``include_docs=true``, each row would additionally contain a ``doc``
-    member holding the complete JSON encoding of the corresponding document.
-
-Now, to get a specific blog post and all associated comments, we'd invoke that
-view with the query string::
-
-    ?startkey=["myslug"]&endkey=["myslug", 2]&include_docs=true
-
-We'd get back the first three rows, those that belong to the ``myslug`` post,
-but not the others, along with the full bodies of each document. Et voila, we
-now have the data we need to display a post with all associated comments,
-retrieved via a single ``GET`` request.
-
-You may be asking what the 0 and 1 parts of the keys are for. They're simply
-to ensure that the post document is always sorted before the associated
-comment documents. So when you get back the results from this view for a
-specific post, you'll know that the first row contains the data for the blog
-post itself, and the remaining rows contain the comment data.
-
-One remaining problem with this model is that comments are not ordered, but
-that's simply because we don't have date/time information associated with them.
-If we had, we'd add the timestamp as third element of the key array, probably
-as ISO date/time strings. Now we would continue using the query string
-``?startkey=["myslug"]&endkey=["myslug", 2]&include_docs=true`` to fetch the
-blog post and all associated comments, only now they'd be in chronological
-order.
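A quick local sketch shows why an ISO timestamp as the third key element works: for keys made of strings and small integers like these, comparing the JSON encodings as strings approximates the collation order well enough for illustration (the timestamps here are invented):

```javascript
// Keys as they might be emitted: the post row plus two comment rows,
// in arbitrary (document-processing) order.
var keys = [
    ["myslug", 1, "2023-05-02T10:00:00Z"],
    ["myslug", 0],
    ["myslug", 1, "2023-05-01T09:30:00Z"]
];

// Sort the way the view index would present them. A plain string compare of
// the JSON encoding works for this all-string/single-digit example; it is NOT
// a general substitute for CouchDB collation.
keys.sort(function(a, b) {
    return JSON.stringify(a) < JSON.stringify(b) ? -1 : 1;
});
```

After sorting, the post row (``["myslug", 0]``) comes first and the comments follow in chronological order, because ISO 8601 strings sort chronologically.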
diff --git a/src/ddocs/views/nosql.rst b/src/ddocs/views/nosql.rst
deleted file mode 100644
index ceff7ad..0000000
--- a/src/ddocs/views/nosql.rst
+++ /dev/null
@@ -1,529 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _views/nosql:
-
-=============================
-View Cookbook for SQL Jockeys
-=============================
-
-This is a collection of some common SQL queries and how to get the same result
-in CouchDB. The key to remember here is that CouchDB does not work like an SQL
-database at all, and that best practices from the SQL world do not translate
-well or at all to CouchDB. This document’s “cookbook” assumes that you are
-familiar with the CouchDB basics such as creating and updating databases and
-documents.
-
-Using Views
-===========
-
-How you would do this in SQL::
-
-    CREATE TABLE
-
-or::
-
-    ALTER TABLE
-
-How you can do this in CouchDB:
-
-Using views is a two-step process. First you define a view; then you query it.
-This is analogous to defining a table structure (with indexes) using
-``CREATE TABLE`` or ``ALTER TABLE`` and querying it using an SQL query.
-
-Defining a View
----------------
-
-Defining a view is done by creating a special document in a CouchDB database.
-The only real specialness is the ``_id`` of the document, which starts with
-``_design/`` — for example, ``_design/application``. Other than that, it is just a
-regular CouchDB document. To make sure CouchDB understands that you are defining
-a view, you need to prepare the contents of that design document in a special
-format. Here is an example:
-
-.. code-block:: javascript
-
-    {
-        "_id": "_design/application",
-        "_rev": "1-C1687D17",
-        "views": {
-            "viewname": {
-                "map": "function(doc) { ... }",
-                "reduce": "function(keys, values) { ... }"
-            }
-        }
-    }
-
-We are defining a view `viewname`. The definition of the view consists of two
-functions: the map function and the reduce function. Specifying a reduce
-function is optional. We’ll look at the nature of the functions later. Note that
-`viewname` can be whatever you like: ``users``, ``by-name``, or ``by-date`` are
-just some examples.
-
-A single design document can also include multiple view definitions, each
-identified by a unique name:
-
-.. code-block:: javascript
-
-    {
-        "_id": "_design/application",
-        "_rev": "1-C1687D17",
-        "views": {
-            "viewname": {
-                "map": "function(doc) { ... }",
-                "reduce": "function(keys, values) { ... }"
-            },
-            "anotherview": {
-                "map": "function(doc) { ... }",
-                "reduce": "function(keys, values) { ... }"
-            }
-        }
-    }
-
-Querying a View
----------------
-
-The name of the design document and the name of the view are significant for
-querying the view. To query the view `viewname`, you perform an HTTP ``GET``
-request to the following URI::
-
-    /database/_design/application/_view/viewname
-
-``database`` is the name of the database you created your design document in. Next
-up is the design document name, and then the view name prefixed with ``_view/``.
-To query `anotherview`, replace `viewname` in that URI with `anotherview`.
-If you want to query a view in a different design document, adjust the design
-document name.
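Since view keys are JSON values, query parameters must be JSON-encoded and then URL-encoded when you build the request URI. A small helper sketch (the names ``mydb``, ``application``, and ``viewname`` are just the placeholders used in this section):

```javascript
// Build a view query URL. Each parameter value is JSON-encoded (keys are
// JSON values) and then percent-encoded for the query string.
function viewUrl(db, ddoc, view, params) {
    var qs = Object.keys(params || {}).map(function(k) {
        return k + "=" + encodeURIComponent(JSON.stringify(params[k]));
    }).join("&");
    return "/" + db + "/_design/" + ddoc + "/_view/" + view +
           (qs ? "?" + qs : "");
}

var url = viewUrl("mydb", "application", "viewname", { key: "searchterm" });
// url is "/mydb/_design/application/_view/viewname?key=%22searchterm%22"
```

Note the ``%22`` around the search term: a string key must arrive at the server as a JSON string, quotes included.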
-
-MapReduce Functions
--------------------
-
-MapReduce is a concept that solves problems by applying a two-step process,
-aptly named the map phase and the reduce phase. The map phase looks at all
-documents in CouchDB separately one after the other and creates a `map result`.
-The map result is an ordered list of key/value pairs. Both key and value can
-be specified by the user writing the map function. A map function may call the
-built-in ``emit(key, value)`` function 0 to N times per document, creating a row
-in the map result per invocation.
-
-CouchDB is smart enough to run a map function only once for every document, even
-on subsequent queries on a view. Only changes to documents or new documents need
-to be processed anew.
-
-Map functions
--------------
-
-Map functions run in isolation for every document. They can’t modify the
-document, and they can’t talk to the outside world—they can’t have side effects.
-This is required so that CouchDB can guarantee correct results without having
-to recalculate a complete result when only one document gets changed.
-
-The map result looks like this:
-
-.. code-block:: javascript
-
-    {"total_rows":3,"offset":0,"rows":[
-    {"id":"fc2636bf50556346f1ce46b4bc01fe30","key":"Lena","value":5},
-    {"id":"1fb2449f9b9d4e466dbfa47ebe675063","key":"Lisa","value":4},
-    {"id":"8ede09f6f6aeb35d948485624b28f149","key":"Sarah","value":6}
-    ]}
-
-It is a list of rows sorted by the value of key. The id is added automatically
-and refers back to the document that created this row. The value is the data
-you’re looking for. For example purposes, it’s the girl’s age.
-
-The map function that produces this result is:
-
-.. code-block:: javascript
-
-    function(doc) {
-        if(doc.name && doc.age) {
-            emit(doc.name, doc.age);
-        }
-    }
-
-It includes the if statement as a sanity check to ensure that we’re operating
-on the right fields and calls the emit function with the name and age as the key
-and value.
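The map phase is easy to emulate locally: run the map function over every document, collect the emitted rows, and sort them by key. A sketch over three example documents (the ``_id`` values here are made up, unlike the real hashes in the server output above):

```javascript
// Hypothetical documents standing in for the ones in the example output.
var docs = [
    { _id: "doc-lena", name: "Lena", age: 5 },
    { _id: "doc-sarah", name: "Sarah", age: 6 },
    { _id: "doc-lisa", name: "Lisa", age: 4 }
];

// The map function from the text.
function map(doc, emit) {
    if (doc.name && doc.age) {
        emit(doc.name, doc.age);
    }
}

// Collect one row per emit() call, tagged with the emitting document's ID,
// then sort by key as the view index would.
var rows = [];
docs.forEach(function(doc) {
    map(doc, function(key, value) {
        rows.push({ id: doc._id, key: key, value: value });
    });
});
rows.sort(function(a, b) { return a.key < b.key ? -1 : 1; });
```

The result has the same shape as the server response: rows ordered by key (Lena, Lisa, Sarah), each carrying the ID of the document that produced it.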
-
-Look Up by Key
-==============
-
-How you would do this in SQL::
-
-    SELECT field FROM table WHERE value="searchterm"
-
-How you can do this in CouchDB:
-
-Use case: get a result (which can be a record or set of records) associated
-with a key ("searchterm").
-
-To look something up quickly, regardless of the storage mechanism, an index is
-needed. An index is a data structure optimized for quick search and retrieval.
-CouchDB’s map result is stored in such an index, which happens to be a B+ tree.
-
-To look up a value by "searchterm", we need to put all values into the key of a
-view. All we need is a simple map function:
-
-.. code-block:: javascript
-
-    function(doc) {
-        if(doc.value) {
-            emit(doc.value, null);
-        }
-    }
-
-This creates a list of documents that have a value field sorted by the data in
-the value field. To find all the records that match "searchterm", we query the
-view and specify the search term as a query parameter::
-
-    /database/_design/application/_view/viewname?key="searchterm"
-
-Consider the documents from the previous section, and say we’re indexing on the
-age field of the documents to find all the five-year-olds:
-
-.. code-block:: javascript
-
-    function(doc) {
-        if(doc.age && doc.name) {
-            emit(doc.age, doc.name);
-        }
-    }
-
-Query::
-
-    /ladies/_design/ladies/_view/age?key=5
-
-Result:
-
-.. code-block:: javascript
-
-    {"total_rows":3,"offset":1,"rows":[
-    {"id":"fc2636bf50556346f1ce46b4bc01fe30","key":5,"value":"Lena"}
-    ]}
-
-Easy.
-
-Note that you don't have to emit a meaningful value. The view result includes
-the associated document ID in every row. We can use it to look up more data
-from the document itself. We can also use the ``?include_docs=true`` parameter
-to have CouchDB fetch the individual documents for us.
-
-Look Up by Prefix
-=================
-
-How you would do this in SQL::
-
-    SELECT field FROM table WHERE value LIKE "searchterm%"
-
-How you can do this in CouchDB:
-
-Use case: find all documents that have a field value that starts with
-`searchterm`. For example, say you stored a MIME type (like `text/html` or
-`image/jpg`) for each document and now you want to find all documents that are
-images according to the MIME type.
-
-The solution is very similar to the previous example: all we need is a map
-function that is a little more clever than the first one. But first, an example
-document:
-
-.. code-block:: javascript
-
-    {
-        "_id": "Hugh Laurie",
-        "_rev": "1-9fded7deef52ac373119d05435581edf",
-        "mime-type": "image/jpg",
-        "description": "some dude"
-    }
-
-The clue lies in extracting the prefix that we want to search for from our
-document and putting it into our view index. We use a regular expression to
-match our prefix:
-
-.. code-block:: javascript
-
-    function(doc) {
-        if(doc["mime-type"]) {
-            // from the start (^) match everything that is not a slash ([^\/]+)
-            // until we find a slash (\/). Slashes need to be escaped with a
-            // backslash (\/). match() returns an array (or null), so we emit
-            // its first element, the matched prefix string.
-            var prefix = doc["mime-type"].match(/^[^\/]+\//);
-            if(prefix) {
-                emit(prefix[0], null);
-            }
-        }
-    }
-
-We can now query this view with our desired MIME type prefix and not only find
-all images, but also text, video, and all other formats::
-
-    /files/_design/finder/_view/by-mime-type?key="image/"
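One detail worth checking in isolation: ``String.prototype.match`` returns an array (or ``null`` when nothing matches), so the matched prefix string itself is the array's first element. A standalone sketch (the function name is illustrative):

```javascript
// Extract the MIME type's major-type prefix, e.g. "image/" from "image/jpg".
// match() yields an array like ["image/"] on success, or null otherwise.
function mimePrefix(mimeType) {
    var m = mimeType.match(/^[^\/]+\//);
    return m ? m[0] : null;
}
```

Emitting the plain string (rather than the match array) is what makes the ``key="image/"`` query above line up with the indexed keys.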
-
-Aggregate Functions
-===================
-
-How you would do this in SQL::
-
-    SELECT COUNT(field) FROM table
-
-How you can do this in CouchDB:
-
-Use case: calculate a derived value from your data.
-
-We haven’t explained reduce functions yet. Reduce functions are similar to
-aggregate functions in SQL. They compute a value over multiple documents.
-
-To explain the mechanics of reduce functions, we’ll create one that doesn’t make
-a whole lot of sense. But this example is easy to understand. We’ll explore more
-useful reductions later.
-
-Reduce functions operate on the output of the map function (also called the map
-result or intermediate result). The reduce function’s job, unsurprisingly, is to
-reduce the list that the map function produces.
-
-Here’s what our summing reduce function looks like:
-
-.. code-block:: javascript
-
-    function(keys, values) {
-        var sum = 0;
-        for(var idx in values) {
-            sum = sum + values[idx];
-        }
-        return sum;
-    }
-
-Here’s an alternate, more idiomatic JavaScript version:
-
-.. code-block:: javascript
-
-    function(keys, values) {
-        var sum = 0;
-        values.forEach(function(element) {
-            sum = sum + element;
-        });
-        return sum;
-    }
-
-.. note::
-    Don't miss effective built-in :ref:`reduce functions <reducefun>` like
-    ``_sum`` and ``_count``
-
-This reduce function takes two arguments: a list of keys and a list of values.
-For our summing purposes we can ignore the keys list and consider only the
-values list. We loop over the list and add each item to a running total that
-we return at the end of the function.
-
-You’ll see one difference between the map and the reduce function. The map
-function uses ``emit()`` to create its result, whereas the reduce function
-returns a value.
-
-For example, from a list of integer values that specify the age, calculate the
-sum of all years of life for the news headline,
-`“786 life years present at event.”` A little contrived, but very simple and
-thus good for demonstration purposes. Consider the documents and the map view we
-used earlier in this document.
-
-The reduce function to calculate the total age of all girls is:
-
-.. code-block:: javascript
-
-    function(keys, values) {
-        return sum(values);
-    }
-
-Note that, unlike the two earlier versions, this one uses CouchDB’s predefined
-:js:func:`sum` function. It does the same thing as the other two, but it is
-such a common piece of code that CouchDB ships it built in.
-
-The result for our reduce view now looks like this:
-
-.. code-block:: javascript
-
-    {"rows":[
-        {"key":null,"value":15}
-    ]}
-
-The total sum of all age fields in all our documents is 15. Just what we wanted.
-The key member of the result object is null, as we can’t know anymore which
-documents took part in the creation of the reduced result. We’ll cover more
-advanced reduce cases later on.
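The whole pipeline for this example can be replayed locally: map every document, then hand all emitted keys and values to the reduce function at once. With no grouping, everything reduces together and the result key is null, just as in the server response (a toy model, not CouchDB's actual incremental reduce, which also handles rereduce over partial results):

```javascript
var docs = [
    { name: "Lena", age: 5 },
    { name: "Lisa", age: 4 },
    { name: "Sarah", age: 6 }
];

// Map and reduce functions as used earlier in this document.
function map(doc, emit) {
    if (doc.name && doc.age) emit(doc.name, doc.age);
}

function reduce(keys, values) {
    var sum = 0;
    values.forEach(function(element) { sum = sum + element; });
    return sum;
}

// Run the map phase, collecting key/value pairs.
var mapped = [];
docs.forEach(function(doc) {
    map(doc, function(key, value) { mapped.push({ key: key, value: value }); });
});

// Reduce over all rows at once: one output row with a null key.
var result = {
    key: null,
    value: reduce(mapped.map(function(r) { return r.key; }),
                  mapped.map(function(r) { return r.value; }))
};
```

The single output row ``{"key": null, "value": 15}`` matches the reduce view result shown above.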
-
-As a rule of thumb, the reduce function should reduce to a single scalar value.
-That is, an integer; a string; or a small, fixed-size list or object that
-includes an aggregated value (or values) from the values argument.
-It should never just return the input ``values`` list unchanged. CouchDB will
-give you a warning if you try to use reduce “the wrong way”:
-
-.. code-block:: javascript
-
-    {
-        "error":"reduce_overflow_error",
-        "message":"Reduce output must shrink more rapidly: Current output: ..."
-    }
-
-Get Unique Values
-=================
-
-How you would do this in SQL::
-
-    SELECT DISTINCT field FROM table
-
-How you can do this in CouchDB:
-
-Getting unique values is not as easy as adding a keyword. But a reduce view and
-a special query parameter give us the same result. Let’s say you want a list of
-tags that your users have tagged themselves with, without any duplicates.
-
-First, let’s look at the source documents. We omit the ``_id`` and ``_rev``
-attributes here:
-
-.. code-block:: javascript
-
-    {
-        "name":"Chris",
-        "tags":["mustache", "music", "couchdb"]
-    }
-
-.. code-block:: javascript
-
-    {
-        "name":"Noah",
-        "tags":["hypertext", "philosophy", "couchdb"]
-    }
-
-.. code-block:: javascript
-
-    {
-        "name":"Jan",
-        "tags":["drums", "bike", "couchdb"]
-    }
-
-Next, we need a list of all tags. A map function will do the trick:
-
-.. code-block:: javascript
-
-    function(doc) {
-        if(doc.name && doc.tags) {
-            doc.tags.forEach(function(tag) {
-                emit(tag, null);
-            });
-        }
-    }
-
-The result will look like this:
-
-.. code-block:: javascript
-
-    {"total_rows":9,"offset":0,"rows":[
-    {"id":"3525ab874bc4965fa3cda7c549e92d30","key":"bike","value":null},
-    {"id":"3525ab874bc4965fa3cda7c549e92d30","key":"couchdb","value":null},
-    {"id":"53f82b1f0ff49a08ac79a9dff41d7860","key":"couchdb","value":null},
-    {"id":"da5ea89448a4506925823f4d985aabbd","key":"couchdb","value":null},
-    {"id":"3525ab874bc4965fa3cda7c549e92d30","key":"drums","value":null},
-    {"id":"53f82b1f0ff49a08ac79a9dff41d7860","key":"hypertext","value":null},
-    {"id":"da5ea89448a4506925823f4d985aabbd","key":"music","value":null},
-    {"id":"da5ea89448a4506925823f4d985aabbd","key":"mustache","value":null},
-    {"id":"53f82b1f0ff49a08ac79a9dff41d7860","key":"philosophy","value":null}
-    ]}
-
-As promised, these are all the tags, including duplicates. Since each document
-gets run through the map function in isolation, it cannot know if the same key
-has been emitted already. At this stage, we need to live with that. To achieve
-uniqueness, we need a reduce:
-
-.. code-block:: javascript
-
-    function(keys, values) {
-        return true;
-    }
-
-This reduce doesn’t do anything, but it allows us to specify a special query
-parameter when querying the view::
-
-    /dudes/_design/dude-data/_view/tags?group=true
-
-CouchDB replies:
-
-.. code-block:: javascript
-
-    {"rows":[
-    {"key":"bike","value":true},
-    {"key":"couchdb","value":true},
-    {"key":"drums","value":true},
-    {"key":"hypertext","value":true},
-    {"key":"music","value":true},
-    {"key":"mustache","value":true},
-    {"key":"philosophy","value":true}
-    ]}
-
-In this case, we can ignore the value part because it is always true, but the
-result includes a list of all our tags and no duplicates!
-
-With a small change we can put the reduce to good use, too. Let’s count how
-many times each tag occurs across all documents. To calculate the tag
-frequency, we just use the summing we already learned about. In the map
-function, we emit a 1 instead of null:
-
-.. code-block:: javascript
-
-    function(doc) {
-        if(doc.name && doc.tags) {
-            doc.tags.forEach(function(tag) {
-                emit(tag, 1);
-            });
-        }
-    }
-
-In the reduce function, we return the sum of all values:
-
-.. code-block:: javascript
-
-    function(keys, values) {
-        return sum(values);
-    }
-
-Now, if we query the view with the ``?group=true`` parameter, we get back the
-count for each tag:
-
-.. code-block:: javascript
-
-    {"rows":[
-    {"key":"bike","value":1},
-    {"key":"couchdb","value":3},
-    {"key":"drums","value":1},
-    {"key":"hypertext","value":1},
-    {"key":"music","value":1},
-    {"key":"mustache","value":1},
-    {"key":"philosophy","value":1}
-    ]}
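What ``?group=true`` does can be modeled locally: group the map rows by key, then apply the reduce per group. Since map emits a 1 per tag and reduce sums, grouping collapses to a per-tag counter (a sketch of the semantics, not of CouchDB's actual tree-based grouped reduce):

```javascript
var docs = [
    { name: "Chris", tags: ["mustache", "music", "couchdb"] },
    { name: "Noah",  tags: ["hypertext", "philosophy", "couchdb"] },
    { name: "Jan",   tags: ["drums", "bike", "couchdb"] }
];

// map: emit(tag, 1); group=true + sum reduce: add up the 1s per distinct key.
var counts = {};
docs.forEach(function(doc) {
    if (doc.name && doc.tags) {
        doc.tags.forEach(function(tag) {
            counts[tag] = (counts[tag] || 0) + 1;
        });
    }
});
```

The result mirrors the grouped reply above: seven distinct tags, with ``couchdb`` counted three times and every other tag once.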
-
-Enforcing Uniqueness
-====================
-
-How you would do this in SQL::
-
-    UNIQUE KEY(column)
-
-How you can do this in CouchDB:
-
-Use case: your applications require that a certain value exists only once in a
-database.
-
-This is an easy one: within a CouchDB database, each document must have a
-unique ``_id`` field. If you require unique values in a database, just assign
-them to a document’s ``_id`` field and CouchDB will enforce uniqueness for you.
-
-There’s one caveat, though: in the distributed case, when you are running more
-than one CouchDB node that accepts write requests, uniqueness can be guaranteed
-only per node or outside of CouchDB. CouchDB will allow two identical IDs to be
-written to two different nodes. On replication, CouchDB will detect a conflict
-and flag the document accordingly.
diff --git a/src/ddocs/views/pagination.rst b/src/ddocs/views/pagination.rst
deleted file mode 100644
index e0e3d43..0000000
--- a/src/ddocs/views/pagination.rst
+++ /dev/null
@@ -1,267 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _views/pagination:
-
-=================
-Pagination Recipe
-=================
-
-This recipe explains how to paginate over view results.
-Pagination is a user interface (UI) pattern that allows the display of a
-large number of rows (`the result set`) without loading all the rows into the
-UI at once. A fixed-size subset, the `page`, is displayed along with next and
-previous links or buttons that can move the `viewport` over the result set to
-an adjacent page.
-
-We assume you’re familiar with creating and querying documents and views as
-well as the multiple view query options.
-
-Example Data
-============
-
-To have some data to work with, we’ll create a list of bands,
-one document per band::
-
-    { "name":"Biffy Clyro" }
-
-    { "name":"Foo Fighters" }
-
-    { "name":"Tool" }
-
-    { "name":"Nirvana" }
-
-    { "name":"Helmet" }
-
-    { "name":"Tenacious D" }
-
-    { "name":"Future of the Left" }
-
-    { "name":"A Perfect Circle" }
-
-    { "name":"Silverchair" }
-
-    { "name":"Queens of the Stone Age" }
-
-    { "name":"Kerub" }
-
-A View
-=======
-
-We need a simple map function that gives us an alphabetical list of band
-names. This should be easy, but we’re adding extra smarts to filter out “The”
-and “A” in front of band names to put them into the right position:
-
-.. code-block:: javascript
-
-    function(doc) {
-        if(doc.name) {
-            var name = doc.name.replace(/^(A|The) /, "");
-            emit(name, null);
-        }
-    }
-
-The view's result is an alphabetical list of band names. Now say we want to
-display band names five at a time and have a link pointing to the next five
-names that make up one page, and a link for the previous five,
-if we’re not on the first page.
-
-We learned how to use the ``startkey``, ``limit``, and ``skip`` parameters in
-earlier documents. We’ll use these again here. First, let’s have a look at
-the full result set:
-
-.. code-block:: javascript
-
-    {"total_rows":11,"offset":0,"rows":[
-        {"id":"a0746072bba60a62b01209f467ca4fe2","key":"Biffy Clyro","value":null},
-        {"id":"b47d82284969f10cd1b6ea460ad62d00","key":"Foo Fighters","value":null},
-        {"id":"d7ab24bb3489a9010c7d1a2087a4a9e4","key":"Future of the Left","value":null},
-        {"id":"ad2f85ef87f5a9a65db5b3a75a03cd82","key":"Helmet","value":null},
-        {"id":"67373171d0f626b811bdc34e92e77901","key":"Kerub","value":null},
-        {"id":"a2f31cfa68118a6ae9d35444fcb1a3cf","key":"Nirvana","value":null},
-        {"id":"3e1b84630c384f6aef1a5c50a81e4a34","key":"Perfect Circle","value":null},
-        {"id":"84a371a7b8414237fad1b6aaf68cd16a","key":"Queens of the Stone Age","value":null},
-        {"id":"dcdaf08242a4be7da1a36e25f4f0b022","key":"Silverchair","value":null},
-        {"id":"45ccde324611f86ad4932555dea7fce0","key":"Tenacious D","value":null},
-        {"id":"fd590d4ad53771db47b0406054f02243","key":"Tool","value":null}
-    ]}
-
-Setup
-=====
-
-The mechanics of paging are very simple:
-
-- Display the first page
-- If there are more rows to show, show a next link
-- Draw the subsequent page
-- If this is not the first page, show a previous link
-- If there are more rows to show, show a next link
-
-Or in a pseudo-JavaScript snippet:
-
-.. code-block:: javascript
-
-    var result = new Result();
-    var page = result.getPage();
-
-    page.display();
-
-    if(result.hasPrev()) {
-        page.display_link('prev');
-    }
-
-    if(result.hasNext()) {
-        page.display_link('next');
-    }
-
-Paging
-======
-
-To get the first five rows from the view result, you use the ``?limit=5``
-query parameter::
-
-    curl -X GET http://127.0.0.1:5984/artists/_design/artists/_view/by-name?limit=5
-
-The result:
-
-.. code-block:: javascript
-
-    {"total_rows":11,"offset":0,"rows":[
-        {"id":"a0746072bba60a62b01209f467ca4fe2","key":"Biffy Clyro","value":null},
-        {"id":"b47d82284969f10cd1b6ea460ad62d00","key":"Foo Fighters","value":null},
-        {"id":"d7ab24bb3489a9010c7d1a2087a4a9e4","key":"Future of the Left","value":null},
-        {"id":"ad2f85ef87f5a9a65db5b3a75a03cd82","key":"Helmet","value":null},
-        {"id":"67373171d0f626b811bdc34e92e77901","key":"Kerub","value":null}
-    ]}
-
-By comparing the ``total_rows`` value to the ``offset`` plus the number of
-rows returned, we can determine if there are more pages to display. We also
-know by the ``offset`` member that we are on the first page. We can
-calculate the value for ``skip=`` to get the results for the next page:
-
-.. code-block:: javascript
-
-    var rows_per_page = 5;
-    var page = (offset / rows_per_page) + 1; // == 1 for the first page
-    var skip = page * rows_per_page; // == 5 to get the second page, 10 for the third ...
-
-So we query CouchDB with::
-
-    curl -X GET 'http://127.0.0.1:5984/artists/_design/artists/_view/by-name?limit=5&skip=5'
-
-Note we have to use ``'`` (single quotes) to escape the ``&`` character that is
-special to the shell we execute curl in.
-
-The result:
-
-.. code-block:: javascript
-
-    {"total_rows":11,"offset":5,"rows":[
-        {"id":"a2f31cfa68118a6ae9d35444fcb1a3cf","key":"Nirvana","value":null},
-        {"id":"3e1b84630c384f6aef1a5c50a81e4a34","key":"Perfect Circle","value":null},
-        {"id":"84a371a7b8414237fad1b6aaf68cd16a","key":"Queens of the Stone Age",
-        "value":null},
-        {"id":"dcdaf08242a4be7da1a36e25f4f0b022","key":"Silverchair","value":null},
-        {"id":"45ccde324611f86ad4932555dea7fce0","key":"Tenacious D","value":null}
-    ]}
-
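The ``limit``/``skip`` arithmetic above can be wrapped in a small helper. This is a minimal sketch: the database and view names follow the earlier ``curl`` examples, and ``pageQuery`` itself is illustrative, not a CouchDB API:

```javascript
// Build the view query string that fetches a given 1-based page,
// using the limit/skip scheme described above.
function pageQuery(page, rowsPerPage) {
    var skip = (page - 1) * rowsPerPage; // page 1 -> skip=0, page 2 -> skip=5 ...
    return "artists/_design/artists/_view/by-name" +
        "?limit=" + rowsPerPage + "&skip=" + skip;
}
```

Remember to single-quote the resulting path in the shell, since it contains an ``&``.
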
-Implementing the ``hasPrev()`` and ``hasNext()`` methods is pretty
-straightforward:
-
-.. code-block:: javascript
-
-    function hasPrev()
-    {
-        return page > 1;
-    }
-
-    function hasNext()
-    {
-        var last_page = Math.ceil(total_rows / rows_per_page);
-        return page != last_page;
-    }
-
-Paging (Alternate Method)
-=========================
-
-The method described above performed poorly with large skip values until
-CouchDB 1.2. Additionally, some use cases may call for the following
-alternate method even with newer versions of CouchDB. One such case is when
-duplicate results should be prevented. Using skip alone it is possible for
-new documents to be inserted during pagination which could change the offset
-of the start of the subsequent page.
-
-A correct solution is not much harder. Instead of slicing the result set
-into equally sized pages, we look at 10 rows at a time and use ``startkey`` to
-jump to the next 10 rows. We even use skip, but only with the value 1.
-
-Here is how it works:
-
-- Request `rows_per_page + 1` rows from the view
-- Display `rows_per_page` rows; store the extra row as ``next_startkey`` and
-  ``next_startkey_docid``
-- As page information, keep ``startkey`` and ``next_startkey``
-- Use the ``next_*`` values to create the next link, and use the others to
-  create the previous link
-
-The trick to finding the next page is pretty simple. Instead of requesting 10
-rows for a page, you request 11 rows, but display only 10 and use the values
-in the 11th row as the ``startkey`` for the next page. Populating the link to
-the previous page is as simple as carrying the current ``startkey`` over to the
-next page. If there’s no previous ``startkey``, we are on the first page. We
-stop displaying the link to the next page if we get `rows_per_page` or fewer
-rows back. This is called linked list pagination, as we go from page to
-page, or list item to list item, instead of jumping directly to a
-pre-computed page. There is one caveat, though. Can you spot it?
-
-CouchDB view keys do not have to be unique; you can have multiple index
-entries with the same key. What if you have more index entries for a key than rows that
-should be on a page? ``startkey`` jumps to the first row, and you’d be screwed
-if CouchDB didn’t have an additional parameter for you to use. All view keys
-with the same value are internally sorted by `docid`, that is, the ID of
-the document that created that view row. You can use the ``startkey_docid``
-and ``endkey_docid`` parameters to get subsets of these rows. For
-pagination, we still don’t need ``endkey_docid``, but ``startkey_docid`` is very
-handy. In addition to ``startkey`` and ``limit``, you also use
-``startkey_docid`` for pagination if, and only if, the extra row you fetch to
-find the next page has the same key as the current ``startkey``.
-
-It is important to note that the ``*_docid`` parameters only work in addition to
-the ``*key`` parameters and are only useful to further narrow down the result set
-of a view for a single key. They do not work on their own (the one exception
-being the built-in :ref:`_all_docs view <api/db/all_docs>`  that already sorts
-by document ID).
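
The extra-row bookkeeping described above can be sketched in plain JavaScript. The helper and its names are illustrative, not part of CouchDB; ``rows`` is assumed to be the ``rows`` array of a view response that requested ``rowsPerPage + 1`` rows:

```javascript
// Split a view response that requested one extra row (limit = rowsPerPage + 1)
// into the rows to display and the parameters for the next-page link.
function nextPageParams(rows, rowsPerPage) {
    if (rows.length <= rowsPerPage) {
        return { rows: rows, next: null }; // last page, no next link
    }
    var display = rows.slice(0, rowsPerPage);
    var extra = rows[rowsPerPage];
    var next = { startkey: extra.key };
    // Carry the docid only when the boundary key repeats, so that startkey
    // alone would be ambiguous (rows with equal keys sort by docid).
    if (extra.key === display[display.length - 1].key) {
        next.startkey_docid = extra.id;
    }
    return { rows: display, next: next };
}
```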
-
-The advantage of this approach is that all the key operations can be
-performed on the super-fast B-tree index behind the view. Looking up a page
-doesn’t include scanning through hundreds and thousands of rows unnecessarily.
-
-Jump to Page
-============
-
-One drawback of the linked list style pagination is that you can’t
-pre-compute the rows for a particular page from the page number and the rows
-per page. Jumping to a specific page doesn’t really work. Our gut reaction,
-if that concern is raised, is, “Not even Google is doing that!” and we tend
-to get away with it. Google always pretends on the first page to find 10 more
-pages of results. Only if you click on the second page (something very few
-people actually do) might Google display a reduced set of pages. If you page
-through the results, you get links for the previous and next 10 pages,
-but no more. Pre-computing the necessary ``startkey`` and ``startkey_docid``
-for 20 pages is a feasible operation and a pragmatic optimization, compared
-to pre-computing the rows for every page of a result set that is potentially
-tens of thousands of rows long, or more.
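
Pre-computing those page boundaries can be sketched as follows. This is illustrative only; ``rows`` is assumed to be a key-ordered slice of the view result (for example, the first ``20 * rows_per_page`` rows):

```javascript
// For each page, record the startkey/startkey_docid of its first row so
// that a "jump to page N" link can be emitted without walking the pages.
function precomputePages(rows, rowsPerPage) {
    var pages = [];
    for (var i = 0; i < rows.length; i += rowsPerPage) {
        pages.push({ startkey: rows[i].key, startkey_docid: rows[i].id });
    }
    return pages;
}
```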
-
-If you really do need to jump to a page over the full range of documents (we
-have seen applications that require that), you can still maintain an integer
-value index as the view index and take a hybrid approach at solving pagination.
diff --git a/src/docs.app.src b/src/docs.app.src
deleted file mode 100644
index 737b9c0..0000000
--- a/src/docs.app.src
+++ /dev/null
@@ -1,18 +0,0 @@
-% Licensed under the Apache License, Version 2.0 (the "License"); you may not
-% use this file except in compliance with the License. You may obtain a copy of
-% the License at
-%
-%   http://www.apache.org/licenses/LICENSE-2.0
-%
-% Unless required by applicable law or agreed to in writing, software
-% distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-% WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-% License for the specific language governing permissions and limitations under
-% the License.
-
-{application, docs, [
-    {description, "CouchDB Documentation"},
-    {vsn, git},
-    {registered, []},
-    {applications, [kernel, stdlib]}
-]}.
diff --git a/src/experimental.rst b/src/experimental.rst
deleted file mode 100644
index c5dc4d4..0000000
--- a/src/experimental.rst
+++ /dev/null
@@ -1,40 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _experimental:
-
-=====================
-Experimental Features
-=====================
-
-This is a list of experimental features in CouchDB. They are included in
-a release because the development team is requesting feedback from the
-larger developer community. As such, please play around with these
-features and send us feedback, thanks!
-
-Use at your own risk! Do not rely on these features for critical applications.
-
-Content-Security-Policy (CSP) Header Support for /_utils (Fauxton)
-==================================================================
-
-This works out of the box with Fauxton. In your config you can enable the
-feature and change the default header that is sent for everything in
-``/_utils``:
-
-.. code-block:: ini
-
-        [csp]
-        enable = true
-
-Then restart CouchDB.
-
-Have fun!
diff --git a/src/fauxton/index.rst b/src/fauxton/index.rst
deleted file mode 100644
index 6fe0335..0000000
--- a/src/fauxton/index.rst
+++ /dev/null
@@ -1,21 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _fauxton:
-
-=======
-Fauxton
-=======
-
-.. toctree::
-
-    install
diff --git a/src/fauxton/install.rst b/src/fauxton/install.rst
deleted file mode 100644
index 55cff8b..0000000
--- a/src/fauxton/install.rst
+++ /dev/null
@@ -1,83 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _fauxton/install:
-
-=============
-Fauxton Setup
-=============
-
-Fauxton is included with CouchDB 2.0, so make sure CouchDB is running, then go to::
-
-    http://127.0.0.1:5984/_utils/
-
-You can also upgrade to the latest version of Fauxton by using npm::
-
-    $ npm install -g fauxton
-    $ fauxton
-
-(Recent versions of `node.js`_ and `npm`_ are required.)
-
-.. _node.js: http://nodejs.org/
-.. _npm: https://npmjs.org/doc/README.html
-
-Fauxton Visual Guide
-====================
-You can find the Visual Guide here:
-    http://couchdb.apache.org/fauxton-visual-guide
-
-Development Server
-==================
-
-Using the dev server is the easiest way to use Fauxton, especially when developing for it::
-
-    $ git clone https://github.com/apache/couchdb-fauxton.git
-    $ npm install && npm run dev
-
-Understanding Fauxton Code Layout
-=================================
-
-Each bit of functionality is its own separate module or addon.
-
-All core modules are stored under `app/modules` and any addons that are
-optional are under `app/addons`.
-
-We use `backbone.js`_ and `Backbone.layoutmanager`_ quite heavily, so it is
-best to get an idea of how they work. Read through a couple of the modules
-and addons to get an idea of how they fit together.
-
-Two good starting points are `app/addons/config` and `app/modules/databases`.
-
-Each module must have a `base.js` file; this is read and compiled when
-Fauxton is deployed.
-
-The `resource.js` file is usually for your ``Backbone.Models`` and
-``Backbone.Collections``, and `view.js` for your ``Backbone.Views``.
-
-The `routes.js` file registers a URL path for your view, along with the
-layout, data, breadcrumbs, and API endpoint required for the view.
-
-.. _backbone.js: http://backbonejs.org/
-.. _Backbone.layoutmanager: https://github.com/tbranyen/backbone.layoutmanager
-
-ToDo items
-----------
-
-Check out `JIRA`_ or `GitHub Issues`_ for a list of items to do.
-
-.. _JIRA: https://issues.apache.org/jira/browse/COUCHDB/component/12320406
-.. _GitHub Issues: https://github.com/apache/couchdb-fauxton/issues
diff --git a/src/index.rst b/src/index.rst
deleted file mode 100644
index 9251a31..0000000
--- a/src/index.rst
+++ /dev/null
@@ -1,59 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. This file exists solely to hold the top-level Table of Contents.
-
-==============
-Apache CouchDB
-==============
-
-.. toctree::
-    :caption: User Guides
-    :maxdepth: 3
-    :numbered:
-
-    intro/index
-    replication/index
-    ddocs/index
-    best-practices/index
-
-.. toctree::
-    :caption: Administration Guides
-    :maxdepth: 3
-    :numbered:
-
-    install/index
-    setup/index
-    config/index
-    cluster/index
-    maintenance/index
-    fauxton/index
-    experimental
-
-.. toctree::
-    :caption: Reference Guides
-    :maxdepth: 3
-    :numbered:
-
-    api/index
-    json-structure
-    query-server/index
-    partitioned-dbs/index
-
-.. toctree::
-    :caption: Other
-    :numbered:
-
-    whatsnew/index
-    cve/index
-    License <about>
-    contributing
diff --git a/src/install/docker.rst b/src/install/docker.rst
deleted file mode 100644
index 1787d12..0000000
--- a/src/install/docker.rst
+++ /dev/null
@@ -1,43 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _install/docker:
-
-=======================
-Installation via Docker
-=======================
-
-Apache CouchDB provides 'convenience binary' Docker images through
-Docker Hub at ``apache/couchdb``. This is our upstream release; it
-is usually mirrored downstream at Docker's top-level ``couchdb``
-as well.
-
-The following tags are always available on the image:
-
-* ``latest``: always the latest version
-* ``3``: always the latest 3.x version
-* ``2``: always the latest 2.x version
-* ``1``, ``1.7``, ``1.7.2``: CouchDB 1.7.2 (convenience only; no longer supported)
-* ``1-couchperuser``, ``1.7-couchperuser``, ``1.7.2-couchperuser``: CouchDB
-  1.7.2 with couchperuser plugin (convenience only; no longer supported)
-
-These images expose CouchDB on port ``5984`` of the container, run everything
-as user ``couchdb`` (uid ``5984``), and support use of a Docker volume for data
-at ``/opt/couchdb/data``.
-
-**Your installation is not complete. Be sure to complete the**
-:ref:`Setup <setup>` **steps for a single node or clustered installation.**
-
-Further details on the Docker configuration are available in our
-`couchdb-docker git repository`_.
-
-.. _couchdb-docker git repository: https://github.com/apache/couchdb-docker
diff --git a/src/install/freebsd.rst b/src/install/freebsd.rst
deleted file mode 100644
index 8e5cc87..0000000
--- a/src/install/freebsd.rst
+++ /dev/null
@@ -1,86 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _install/freebsd:
-
-=======================
-Installation on FreeBSD
-=======================
-
-Installation from ports
-=======================
-
-.. code-block:: text
-
-    cd /usr/ports/databases/couchdb
-    make install clean
-
-This will install CouchDB from the ports collection.
-
-.. note::
-   Be sure to :ref:`create an admin user<config/admins>` before starting
-   CouchDB for the first time!
-
-Start script
-------------
-
-The following options for ``/etc/rc.conf`` or ``/etc/rc.conf.local`` are
-supported by the start script (defaults shown)::
-
-    couchdb_enable="NO"
-    couchdb_enablelogs="YES"
-    couchdb_user="couchdb"
-
-After enabling the couchdb rc service use the following command to start CouchDB::
-
-    /usr/local/etc/rc.d/couchdb start
-
-This script responds to the arguments `start`, `stop`, `status`, `rcvar`, etc.
-
-The start script will also use settings from the following config files:
-
-- /usr/local/etc/couchdb/default.ini
-- /usr/local/etc/couchdb/local.ini
-
-Administrators should use ``default.ini`` as reference and only modify the
-``local.ini`` file.
-
-Post install
-------------
-**Your installation is not complete. Be sure to complete the**
-:ref:`Setup <setup>` **steps for a single node or clustered installation.**
-
-In case the install script fails to create the non-interactive user "couchdb"
-used for the database, the user needs to be created manually.
-
-You can use the ``pw`` command to add a user "couchdb" in group "couchdb":
-
-.. code-block:: text
-
-    pw user add couchdb
-    pw user mod couchdb -c 'CouchDB, time to relax' -s /usr/sbin/nologin -d /var/lib/couchdb
-    pw group add couchdb
-
-The user is added to ``/etc/passwd`` and should look similar to the following:
-
-.. code-block:: text
-
-    shell#  grep couchdb /etc/passwd
-    couchdb:*:1013:1013:CouchDB, time to relax:/var/lib/couchdb/:/usr/sbin/nologin
-
-To change any of these settings, please refrain from editing `/etc/passwd` and
-instead use ``pw user mod ...`` or ``vipw``. Make sure that the user has no
-shell, but instead uses ``/usr/sbin/nologin``. The '*' in the second field means
-that this user cannot log in via password authentication. For details use
-`man 5 passwd`_.
-
-.. _man 5 passwd: http://linux.die.net/man/5/passwd
diff --git a/src/install/index.rst b/src/install/index.rst
deleted file mode 100644
index fc8f6f6..0000000
--- a/src/install/index.rst
+++ /dev/null
@@ -1,31 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _install:
-
-============
-Installation
-============
-
-.. toctree::
-    :maxdepth: 2
-
-    unix
-    windows
-    mac
-    freebsd
-    docker
-    snap
-    kubernetes
-    search
-    upgrading
-    troubleshooting
diff --git a/src/install/kubernetes.rst b/src/install/kubernetes.rst
deleted file mode 100644
index 01241ca..0000000
--- a/src/install/kubernetes.rst
+++ /dev/null
@@ -1,35 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _install/kubernetes:
-
-==========================
-Installation on Kubernetes
-==========================
-
-Apache CouchDB provides a `Helm chart`_ to enable deployment to
-Kubernetes.
-
-To install the chart with the release name ``my-release``:
-
-.. code-block:: sh
-
-    helm repo add couchdb https://apache.github.io/couchdb-helm
-
-    helm repo update
-
-    helm install --name my-release couchdb/couchdb
-
-Further details on the configuration options are available in
-the `Helm chart`_ readme.
-
-.. _Helm chart: https://hub.helm.sh/charts/couchdb/couchdb
diff --git a/src/install/mac.rst b/src/install/mac.rst
deleted file mode 100644
index 4b6a581..0000000
--- a/src/install/mac.rst
+++ /dev/null
@@ -1,82 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _install/mac:
-
-=====================
-Installation on macOS
-=====================
-
-.. _install/mac/binary:
-
-Installation using the Apache CouchDB native application
-========================================================
-
-The easiest way to run CouchDB on macOS is through the native macOS
-application. Just follow the instructions below:
-
-#. `Download Apache CouchDB for macOS`_.
-   Old releases are available at `archive`_.
-#. Double-click the Zip file
-#. Drag and drop the Apache CouchDB.app into your Applications folder
-
-.. _Download Apache CouchDB for macOS: http://couchdb.apache.org/#download
-.. _archive: http://archive.apache.org/dist/couchdb/binary/mac/
-
-That's all; CouchDB is now installed on your Mac:
-
-#. Run Apache CouchDB application
-#. `Open up Fauxton`_, the CouchDB admin interface
-#. Verify the install by clicking on `Verify`, then `Verify Installation`.
-#. **Your installation is not complete. Be sure to complete the**
-   :ref:`Setup <setup>` **steps for a single node or clustered installation.**
-#. Time to Relax!
-
-.. _Open up Fauxton: http://localhost:5984/_utils
-
-.. _install/mac/homebrew:
-
-Installation with Homebrew
-==========================
-
-CouchDB can be installed via `Homebrew`_.  Fetch the newest version of Homebrew
-and all formulae and install CouchDB with the following commands:
-
-.. code:: bash
-
-   brew update
-   brew install couchdb
-
-.. _Homebrew: http://brew.sh/
-
-Installation from source
-========================
-
-Installation on macOS is possible from source. Download the `source tarball`_,
-extract it, and follow the instructions in the ``INSTALL.Unix.md`` file.
-
-.. _source tarball: http://couchdb.apache.org/#download
-
-Running as a Daemon
--------------------
-
-CouchDB itself no longer ships with any daemonization scripts.
-
-The CouchDB team recommends `runit <http://smarden.org/runit/>`_ to
-run CouchDB persistently and reliably. Configuration of runit is
-straightforward; if you have questions, reach out to the CouchDB
-user mailing list.
-
-Naturally, you can configure launchd or other init daemons to launch CouchDB
-and keep it running using standard configuration files.
-
-Consult your system documentation for more information.
diff --git a/src/install/search.rst b/src/install/search.rst
deleted file mode 100644
index d214c90..0000000
--- a/src/install/search.rst
+++ /dev/null
@@ -1,111 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _install/search:
-
-==========================
-Search Plugin Installation
-==========================
-
-.. versionadded:: 3.0
-
-.. highlight:: ini
-
-CouchDB can build and query full-text search indexes using an external Java
-service that embeds `Apache Lucene <http://lucene.apache.org>`_. Typically, this
-service is installed on the same host as CouchDB and communicates with it over
-the loopback network.
-
-The search plugin is runtime-compatible with Java JDKs 6, 7 and 8. Building a
-release from source requires JDK 6. **It will not work with any newer version of
-Java.** Sorry about that.
-
-Installation of Binary Packages
-===============================
-
-Binary packages that bundle all the necessary dependencies of the search plugin are
-available on `GitHub`_.  The files in each release should be unpacked into a directory on
-the Java classpath. If you do not have a classpath already set, or you wish to explicitly
-set the classpath location for Clouseau, then add the line::
-
-    -classpath '/path/to/clouseau/*'
-
-to the server command below. If Clouseau is installed in ``/opt/clouseau``, the line would be::
-
-    -classpath '/opt/clouseau/*'
-
-The service expects to find a couple of configuration files
-conventionally called ``clouseau.ini`` and ``log4j.properties`` with the following
-content:
-
-**clouseau.ini**::
-
-    [clouseau]
-
-    ; the name of the Erlang node created by the service, leave this unchanged
-    name=clouseau@127.0.0.1
-
-    ; set this to the same distributed Erlang cookie used by the CouchDB nodes
-    cookie=monster
-
-    ; the path where you would like to store the search index files
-    dir=/path/to/index/storage
-
-    ; the number of search indexes that can be open simultaneously
-    max_indexes_open=500
-
-**log4j.properties**::
-
-    log4j.rootLogger=debug, CONSOLE
-    log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
-    log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
-    log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} %c [%p] %m%n
-
-Once these files are in place the service can be started with an invocation like
-the following::
-
-    java -server \
-         -Xmx2G \
-         -Dsun.net.inetaddr.ttl=30 \
-         -Dsun.net.inetaddr.negative.ttl=30 \
-         -Dlog4j.configuration=file:/path/to/log4j.properties \
-         -XX:OnOutOfMemoryError="kill -9 %p" \
-         -XX:+UseConcMarkSweepGC \
-         -XX:+CMSParallelRemarkEnabled \
-         com.cloudant.clouseau.Main \
-         /path/to/clouseau.ini
-
-Chef
-====
-
-The CouchDB `cookbook`_ can build the search plugin from source and install it
-on a server alongside CouchDB.
-
-Kubernetes
-==========
-
-Users running CouchDB on Kubernetes via the `Helm chart`_ can add the search
-service to each CouchDB Pod by setting ``enableSearch: true`` in the chart
-values.
-
-Additional Details
-==================
-
-The :ref:`Search User Guide <ddoc/search>` provides detailed information on
-creating and querying full-text indexes using this plugin.
-
-The source code for the plugin and additional configuration documentation is
-available on GitHub at https://github.com/cloudant-labs/clouseau.
-
-.. _GitHub: https://github.com/cloudant-labs/clouseau/releases
-.. _cookbook: https://supermarket.chef.io/cookbooks/couchdb
-.. _Helm chart: https://github.com/apache/couchdb-helm
diff --git a/src/install/snap.rst b/src/install/snap.rst
deleted file mode 100644
index 904ce65..0000000
--- a/src/install/snap.rst
+++ /dev/null
@@ -1,47 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _install/snap:
-
-=====================
-Installation via Snap
-=====================
-
-.. highlight:: sh
-
-Apache CouchDB provides 'convenience binary' Snap builds through the
-Ubuntu snapcraft repository under the name ``couchdb``. Only snaps built
-from official stable CouchDB releases (``2.0``, ``2.1``, etc.) are available
-through this channel. There are separate snap channels for each major
-release stream, e.g. ``2.x``, ``3.x``, as well as a ``latest`` stream.
-
-After `installing snapd`_, the CouchDB snap can be installed via::
-
-    $ sudo snap install couchdb
-
-CouchDB will be installed at ``/snap/couchdb``. Data will be stored at
-``/var/snap/couchdb/``.
-
-Please note that all other file system paths are **relative to the snap
-chroot** instead of the system root. In addition, the exact path
-depends on your system. For example, where you would normally
-reference ``/opt/couchdb/etc/local.ini``, under snap this could live at
-``/snap/couchdb/5/opt/couchdb/etc/local.ini``.
-
-**Your installation is not complete. Be sure to complete the**
-:ref:`Setup <setup>` **steps for a single node or clustered installation.**
-
-Further details on the snap build process are available in our
-`couchdb-pkg git repository`_.
-
-.. _installing snapd: https://snapcraft.io/docs/core/install
-.. _couchdb-pkg git repository: https://github.com/apache/couchdb-pkg
diff --git a/src/install/troubleshooting.rst b/src/install/troubleshooting.rst
deleted file mode 100644
index 368d71d..0000000
--- a/src/install/troubleshooting.rst
+++ /dev/null
@@ -1,358 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _install/troubleshooting:
-
-===============================
-Troubleshooting an Installation
-===============================
-
-First Install
-=============
-
-If your CouchDB doesn't start after you've just installed it, check the following
-things:
-
-- On UNIX-like systems, this is usually a permissions issue. Ensure
-  that you've followed the :ref:`install/unix/security`
-  ``chown``/``chmod`` commands. This problem is indicated by the presence of
-  the keyword ``eacces`` somewhere in the error output from CouchDB itself.
-- Some Linux distributions split up Erlang into multiple packages. For your
-  distribution, check that you **really** installed all the required Erlang
-  modules. This varies from platform to platform, so you'll just have to
-  work it out for yourself. For example, on recent versions of Ubuntu/Debian,
-  the ``erlang`` package includes all Erlang modules.
-- Confirm that Erlang itself starts up with crypto (SSL) support:
-
-.. code-block:: text
-
-    ## what version of erlang are you running? Ensure it is supported
-    erl -noshell -eval 'io:put_chars(erlang:system_info(otp_release)).' -s erlang halt
-    ## are the erlang crypto (SSL) libraries working?
-    erl -noshell -eval 'case application:load(crypto) of ok -> io:put_chars("yay_crypto\n") ; _ -> exit(no_crypto) end.' -s init stop
-
-- Next, identify where your Erlang CouchDB libraries are installed. This will
-  typically be the lib/ subdirectory of the release that you have installed.
-- Use this to start up Erlang with the CouchDB libraries in its path:
-
-.. code-block:: text
-
-    erl -env ERL_LIBS $ERL_LIBS:/path/to/couchdb/lib -couch_ini -s crypto
-
-- In that Erlang shell, let's check that the key libraries are running. The
-  ``%%`` lines are comments, so you can skip them:
-
-.. code-block:: text
-
-    %% test SSL support. If this fails, ensure you have the OTP erlang-crypto library installed
-    crypto:md5_init().
-
-    %% test Snappy compression. If this fails, check your CouchDB configure script output or alternatively
-    %% if your distro comes with erlang-snappy make sure you're using only the CouchDB supplied version
-    snappy:compress("gogogogogogogogogogogogogogo").
-
-    %% test the CouchDB JSON encoder. CouchDB uses different encoders in each release, this one matches
-    %% what is used in 2.0.x.
-    jiffy:decode(jiffy:encode(<<"[1,2,3,4,5]">>)).
-
-    %% this is how you quit the erlang shell.
-    q().
-
-- The output should resemble this, or an error will be thrown:
-
-.. code-block:: text
-
-    Erlang/OTP 17 [erts-6.2] [source] [64-bit] [smp:2:2] [async-threads:10] [kernel-poll:false]
-
-    Eshell V6.2  (abort with ^G)
-    1> crypto:md5_init().
-    <<1,35,69,103,137,171,205,239,254,220,186,152,118,84,50,
-      16,0,0,0,0,0,0,0,0,0,0,0,0,0,...>>
-    2> snappy:compress("gogogogogogogogogogogogogogo").
-    {ok,<<28,4,103,111,102,2,0>>}
-    3> jiffy:decode(jiffy:encode(<<"[1,2,3,4,5]">>)).
-    <<"[1,2,3,4,5]">>
-    4> q().
-
-- At this point the only remaining dependencies are your system's Unicode
-  support library (ICU) and the SpiderMonkey JavaScript VM from Mozilla. Make
-  sure that ``LD_LIBRARY_PATH``, or its equivalent on non-Linux systems
-  (``DYLD_LIBRARY_PATH`` on macOS), makes these available to CouchDB.
-  Linux example running as a normal user:
-
-.. code-block:: text
-
-    LD_LIBRARY_PATH=/usr/local/lib:/usr/local/spidermonkey/lib couchdb
-
-  Linux example running as the couchdb user:
-
-.. code-block:: text
-
-    echo LD_LIBRARY_PATH=/usr/local/lib:/usr/local/spidermonkey/lib couchdb | sudo -u couchdb sh
-
-- If you receive an error message including the key word ``eaddrinuse``,
-  such as this:
-
-.. code-block:: text
-
-    Failure to start Mochiweb: eaddrinuse
-
-  edit your ``etc/default.ini`` or ``etc/local.ini`` file and change the
-  ``[chttpd] port = 5984`` line to an available port.
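As a sketch, the port change can be scripted; the ``/tmp/local.ini`` copy below is only an illustration, and port ``5985`` is an assumed-free alternative — on a real system you would edit ``etc/local.ini`` in place and restart CouchDB:

```shell
# Illustration only: operate on a throwaway copy of local.ini in /tmp,
# not on a real CouchDB installation.
cat > /tmp/local.ini <<'EOF'
[chttpd]
port = 5984
EOF

# switch to an assumed-free port, 5985 (GNU sed's -i edits in place)
sed -i 's/^port = 5984$/port = 5985/' /tmp/local.ini
grep '^port' /tmp/local.ini
```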
-
-- If you receive an error including the string:
-
-.. code-block:: text
-
-    … OS Process Error … {os_process_error,{exit_status,127}}
-
-  then it is likely that your SpiderMonkey JavaScript VM installation is not
-  correct. Please recheck your build dependencies and try again.
-
-- If you receive an error including the string:
-
-.. code-block:: text
-
-    … OS Process Error … {os_process_error,{exit_status,139}}
-
-  then SELinux is blocking access to certain areas of the file system. You
-  must re-configure SELinux, or you can fully disable SELinux using the
-  command:
-
-.. code-block:: text
-
-    setenforce 0
-
-- If you are still not able to get CouchDB to start at this point, keep
-  reading.
-
-Quick Build
-===========
-
-Having problems getting CouchDB to run for the first time? Follow this simple
-procedure and report back to the user mailing list or IRC with the output
-of each step. Please put the output of these steps into a paste service (such
-as https://paste.ee/) rather than including the output of your entire
-run in IRC or the mailing list directly.
-
-1. Note down the name and version of your operating system and your processor
-   architecture.
-
-2. Note down the installed versions of CouchDB's dependencies.
-
-3. Follow the checkout instructions to get a fresh copy of CouchDB's trunk.
-
-4. Configure from the couchdb directory:
-
-.. code-block:: text
-
-    ./configure
-
-5. Build the release:
-
-.. code-block:: text
-
-    make release
-
-6. Run the couchdb command and log the output:
-
-.. code-block:: text
-
-    cd rel/couchdb
-    bin/couchdb
-
-7. Use your system's kernel trace tool and log the output of the above command.
-
-   a) For example, linux systems should use ``strace``:
-
-.. code-block:: text
-
-    strace bin/couchdb 2> strace.out
-
-8. Report back to the mailing list (or IRC) with the output of each step.
-
-Upgrading
-=========
-
-Are you upgrading from CouchDB 1.x? Install CouchDB into a fresh directory.
-CouchDB's directory layout has changed and may be confused by libraries
-present from previous releases.
-
-Runtime Errors
-==============
-
-Erlang stack trace contains ``system_limit``, ``open_port``, or ``emfile``
---------------------------------------------------------------------------
-Modern Erlang has a default limit of 65536 ports (8196 on Windows), where each
-open file handle, tcp connection, and linked-in driver uses one port. OSes have
-different soft and hard limits on the number of open handles per process, often
-as low as 1024 or 4096 files. You've probably exceeded this.
-
-There are two settings that need changing to increase this value. Consult your
-OS documentation for how to increase the limit for your process. Under Linux
-and systemd, this setting can be adjusted via ``systemctl edit couchdb`` and
-adding the lines:
-
-.. code-block:: ini
-
-    [Service]
-    LimitNOFILE=65536
-
-to the file in the editor.
-
-To increase this value higher than 65536, you must also add the Erlang ``+Q``
-parameter to your ``etc/vm.args`` file by adding the line:
-
-.. code-block:: text
-
-    +Q 102400
-
-The old ``ERL_MAX_PORTS`` environment variable is ignored by the version of
-Erlang supplied with CouchDB.
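Before raising anything, it is worth checking the limits currently in effect for the shell that will launch CouchDB. A minimal sketch (the numbers on your system will differ):

```shell
# soft and hard limits on open file descriptors for the current shell
soft=$(ulimit -Sn)
hard=$(ulimit -Hn)
echo "soft=$soft hard=$hard"
```

If the soft limit reported here is below what CouchDB needs, apply the ``LimitNOFILE`` override described above and restart the service.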
-
-Lots of memory being used on startup
-------------------------------------
-Is your CouchDB using a lot of memory (several hundred MB) on startup? This one
-seems to especially affect Dreamhost installs. It's really an issue with the
-Erlang VM pre-allocating data structures when ulimit is very large or
-unlimited. A detailed discussion can be found on the erlang-questions list,
-but the short answer is that you should decrease ``ulimit -n`` or lower the
-``vm.args`` parameter ``+Q`` to something reasonable like 1024.
-
-function raised exception (Cannot encode 'undefined' value as JSON)
--------------------------------------------------------------------
-If you see this in the CouchDB error logs, the JavaScript code you are using
-for either a map or reduce function is referencing an object member that is
-not defined in at least one document in your database. Consider this
-document:
-
-.. code-block:: text
-
-    {
-      "_id":"XYZ123",
-      "_rev":"1BB2BB",
-      "field":"value"
-    }
-
-and this map function:
-
-.. code-block:: javascript
-
-    function(doc) {
-      emit(doc.name, doc.address);
-    }
-
-This will fail on the above document, as it does not contain a ``name`` or
-``address`` member. Instead, use guarding to make sure the function only
-accesses members when they exist in a document:
-
-.. code-block:: javascript
-
-    function(doc) {
-      if(doc.name && doc.address) {
-        emit(doc.name, doc.address);
-      }
-    }
-
-While the above guard will work in most cases, it's worth bearing JavaScript's
-notion of "falsy" values in mind. Testing against a property with a
-value of 0 (zero), ``''`` (empty string), ``false`` or ``null`` will return
-false. If this is undesired, a guard of the form ``if (doc.foo !== undefined)``
-should do the trick.
-
-This error can also be caused if a reduce function does not return a value. For
-example, this reduce function will cause an error:
-
-.. code-block:: javascript
-
-    function(key, values) {
-      sum(values);
-    }
-
-The function needs to return a value:
-
-.. code-block:: javascript
-
-    function(key, values) {
-      return sum(values);
-    }
-
-erlang stack trace contains ``bad_utf8_character_code``
--------------------------------------------------------
-
-CouchDB 1.1.1 and later contain stricter handling of UTF8 encoding. If you are
-replicating from older versions to newer versions, then this error may occur
-during replication.
-
-A number of work-arounds exist; the simplest is to do an in-place upgrade of
-the relevant CouchDB and then compact prior to replicating.
-
-Alternatively, if the number of documents impacted is small, use filtered
-replication to exclude only those documents.
-
-FIPS mode
----------
-
-Operating systems can be configured to disallow the use of OpenSSL MD5 hash
-functions in order to prevent use of MD5 for cryptographic purposes. CouchDB
-makes use of MD5 hashes for verifying the integrity of data (and not for
-cryptography) and will not run without the ability to use MD5 hashes.
-
-The message below indicates that the operating system is running in "FIPS mode,"
-which, among other restrictions, does not allow the use of OpenSSL's
-MD5 functions:
-
-.. code-block:: text
-
-    md5_dgst.c(82): OpenSSL internal error, assertion failed: Digest MD5 forbidden in FIPS mode!
-    [os_mon] memory supervisor port (memsup): Erlang has closed
-    [os_mon] cpu supervisor port (cpu_sup): Erlang has closed
-    Aborted
-
-A workaround for this is provided with the ``--erlang-md5`` compile flag. Use of
-the flag results in CouchDB substituting the OpenSSL MD5 function calls with
-equivalent calls to Erlang's built-in ``erlang:md5`` function. Note that there
-may be a performance penalty associated with this workaround.
-
-Because CouchDB does not make use of MD5 hashes for cryptographic purposes, this
-workaround does not defeat the purpose of "FIPS mode," provided that the system
-owner is aware of and consents to its use.
-
-Debugging startup
------------------
-If you've compiled from scratch and are having problems getting CouchDB to even
-start up, you may want to see more detail. Start by enabling logging at the debug
-level:
-
-.. code-block:: ini
-
-    [log]
-    level = debug
-
-You can then pass the ``-init_debug +W i +v +V -emu_args`` flags in the ``ERL_FLAGS``
-environment variable to turn on additional debugging information that CouchDB
-developers can use to help you.
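For example, the flags can be exported before launching CouchDB; this is a sketch, and the ``bin/couchdb`` path at the end is an assumed install location:

```shell
# pass extra emulator flags to CouchDB through the environment
export ERL_FLAGS="-init_debug +W i +v +V -emu_args"
echo "$ERL_FLAGS"
# a real run would then start CouchDB from its install directory, e.g.:
#   bin/couchdb
```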
-
-Then, reach out to the CouchDB development team using the links provided on the
-`CouchDB home page <https://couchdb.apache.org/>`_ for assistance.
-
-macOS Known Issues
-==================
-undefined error, exit_status 134
---------------------------------
-
-Sometimes the ``Verify Installation`` step fails with an ``undefined`` error.
-This can be caused by a missing dependency on macOS.
-In the logs, you will find ``couchdb exit_status,134``.
-
-Installing the missing ``nspr`` via ``brew install nspr`` resolves the issue.
-(see: https://github.com/apache/couchdb/issues/979)
diff --git a/src/install/unix.rst b/src/install/unix.rst
deleted file mode 100644
index 2623d96..0000000
--- a/src/install/unix.rst
+++ /dev/null
@@ -1,453 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _install/unix:
-
-=================================
-Installation on Unix-like systems
-=================================
-
-.. warning::
-    CouchDB 3.0+ will not run without an admin user being created first.
-    Be sure to :ref:`create an admin user<config/admins>` before starting
-    CouchDB!
-
-.. _install/unix/binary:
-
-Installation using the Apache CouchDB convenience binary packages
-=================================================================
-
-If you are running one of the following operating systems, the easiest way
-to install CouchDB is to use the convenience binary packages:
-
-* CentOS/RHEL 7
-* CentOS/RHEL 8
-* Debian 10 (buster)
-* Debian 11 (bullseye)
-* Ubuntu 18.04 (bionic)
-* Ubuntu 20.04 (focal)
-
-These RedHat-style rpm packages and Debian-style deb packages will install CouchDB at
-``/opt/couchdb`` and ensure CouchDB is run at system startup by the appropriate init
-subsystem (SysV-style initd or systemd).
-
-The Debian-style deb packages *also* pre-configure CouchDB as a standalone or clustered
-node, prompting for the address to which it will bind and for an admin user password.
-Responses to these prompts may be pre-seeded using standard ``debconf`` tools. Further
-details are in the `README.Debian`_ file.
-
-.. _README.Debian: https://github.com/apache/couchdb-pkg/blob/main/debian/README.Debian
-
-For distributions lacking a compatible SpiderMonkey library, Apache CouchDB
-also provides packages for the 1.8.5 version.
-
-Enabling the Apache CouchDB package repository
-----------------------------------------------
-
-.. highlight:: sh
-
-**Debian or Ubuntu**: Run the following commands::
-
-    sudo apt update && sudo apt install -y curl apt-transport-https gnupg
-    curl https://couchdb.apache.org/repo/keys.asc | gpg --dearmor | sudo tee /usr/share/keyrings/couchdb-archive-keyring.gpg >/dev/null 2>&1
-    source /etc/os-release
-    echo "deb [signed-by=/usr/share/keyrings/couchdb-archive-keyring.gpg] https://apache.jfrog.io/artifactory/couchdb-deb/ ${VERSION_CODENAME} main" \
-        | sudo tee /etc/apt/sources.list.d/couchdb.list >/dev/null
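The ``${VERSION_CODENAME}`` expansion above comes from sourcing ``/etc/os-release``. As a sketch, the repository line can be previewed before writing anything under ``/etc``; the ``bullseye`` fallback below is only an assumption for hosts without a codename:

```shell
# preview the apt source line without touching /etc
if [ -r /etc/os-release ]; then . /etc/os-release; fi
CODENAME="${VERSION_CODENAME:-bullseye}"   # assumed fallback for non-Debian hosts
LINE="deb [signed-by=/usr/share/keyrings/couchdb-archive-keyring.gpg] https://apache.jfrog.io/artifactory/couchdb-deb/ ${CODENAME} main"
echo "$LINE"
```

Once the line looks right for your distribution, pipe it through ``sudo tee`` as shown above.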
-
-**RedHat or CentOS**: Run the following commands::
-
-    sudo yum install -y yum-utils
-    sudo yum-config-manager --add-repo https://couchdb.apache.org/repo/couchdb.repo
-
-Installing the Apache CouchDB packages
---------------------------------------
-
-.. highlight:: sh
-
-**Debian or Ubuntu**: Run the following commands::
-
-    sudo apt update
-    sudo apt install -y couchdb
-
-Debian/Ubuntu installs from binaries can be pre-configured for single node or
-clustered installations. For clusters, multiple nodes will still need to be
-joined together and configured consistently across all machines; **follow the**
-:ref:`Cluster Setup <setup/cluster>` **walkthrough** to complete the process.
-
-**RedHat/CentOS**: Run the command::
-
-    sudo yum install -y couchdb
-
-Once installed, :ref:`create an admin user<config/admins>` by hand before
-starting CouchDB, if your installer didn't do this for you already.
-
-You can now start the service.
-
-**Your installation is not complete. Be sure to complete the**
-:ref:`Setup <setup>` **steps for a single node or clustered installation.**
-
-**Relax!** CouchDB is installed and running.
-
-GPG keys used for signing the CouchDB repositories
---------------------------------------------------
-
-As of 2021.04.25, the *repository* signing key for both types of supported packages
-is::
-
-    pub   rsa8192 2015-01-19 [SC]
-          390EF70BB1EA12B2773962950EE62FB37A00258D
-    uid           The Apache Software Foundation (Package repository signing key) <root@apache.org>
-
-As of 2021.04.25, the *package* signing key (only used for ``rpm`` packages) is::
-
-    pub   rsa4096 2017-07-28 [SC] [expires: 2022-07-27]
-          2EC788AE3F239FA13E82D215CDE711289384AE37
-    uid           Joan Touzet (Apache Code Signing Key) <wohali@apache.org>
-
-As of 2021.11.13, the *package* signing key (only used for ``rpm`` packages) is::
-
-     pub   rsa4096 2019-09-05 [SC] [expires: 2039-01-02]
-           0BD7A98499C4AB41C910EE65FC04DFBC9657A78E
-     uid           Nicolae Vatamaniuc <vatamane@apache.org>
-     uid           default <vatamane@gmail.com>
-
-All are available from most popular GPG key servers. The ``rpm``
-signing keys should be listed in the `KEYS
-<https://downloads.apache.org/couchdb/KEYS>`_ list as well.
-
-Installation from source
-========================
-
-The remainder of this document describes the steps required to install CouchDB
-directly from source code.
-
-This guide and the INSTALL.Unix document in the official tarball release
-are the canonical sources of installation information. However, many
-systems have gotchas that you need to be aware of. In addition, dependencies
-frequently change as distributions update their archives.
-
-.. _install/unix/dependencies:
-
-Dependencies
-============
-
-You should have the following installed:
-
-* `Erlang OTP (20.x >= 20.3.8.11, 21.x >= 21.2.3, 22.x >= 22.0.5, 23.x, 24.x) <http://erlang.org/>`_
-* `ICU                          <http://icu-project.org/>`_
-* `OpenSSL                      <http://www.openssl.org/>`_
-* `Mozilla SpiderMonkey (1.8.5, 60, 68, 78, 91) <https://spidermonkey.dev/>`_
-* `GNU Make                     <http://www.gnu.org/software/make/>`_
-* `GNU Compiler Collection      <http://gcc.gnu.org/>`_
-* `libcurl                      <http://curl.haxx.se/libcurl/>`_
-* `help2man                     <http://www.gnu.org/s/help2man/>`_
-* `Python (>=3.6) for docs and tests      <http://python.org/>`_
-* `Python Sphinx (>=1.1.3)      <http://pypi.python.org/pypi/Sphinx>`_
-
-You will only need libcurl if you plan to run the JavaScript test suite, and
-help2man is only needed if you plan on installing the CouchDB man pages.
-Sphinx is only required for building the online documentation.
-Documentation build can be disabled by adding the ``--disable-docs`` flag to
-the ``configure`` script.
-
-Debian-based Systems
---------------------
-
-You can install the dependencies by running::
-
-    sudo apt-get --no-install-recommends -y install \
-        build-essential pkg-config erlang \
-        libicu-dev libmozjs185-dev libcurl4-openssl-dev
-
-Be sure to update the version numbers to match your system's available
-packages.
-
-RedHat-based (Fedora, CentOS, RHEL) Systems
--------------------------------------------
-
-You can install the dependencies by running::
-
-    sudo yum install autoconf autoconf-archive automake \
-        curl-devel erlang-asn1 erlang-erts erlang-eunit gcc-c++ \
-        erlang-os_mon erlang-xmerl erlang-erl_interface help2man \
-        libicu-devel libtool perl-Test-Harness
-
-Warning: To build a CouchDB release, the erlang-reltool package is required,
-yet on CentOS/RHEL this package depends on erlang-wx, which pulls in wxGTK
-and several X11 libraries. If CouchDB is being built on a console-only
-server, it might be a good idea to install this package in a separate step
-from the rest of the dependencies, so that it and all its dependencies
-can be removed using the ``yum history`` tool after the release is built.
-(reltool is needed only while building the release, not for running CouchDB.)
-
-The package can be installed by running::
-
-    sudo yum install erlang-reltool
-
-Fedora 36
----------
-
-On Fedora 36, you may need these packages in addition to the ones listed above:
-
-* `mozjs91-devel`
-* `erlang-rebar`
-
-If the system contains dangling links to Erlang chunk files, the compiler will
-abort. They can be deleted with the following command::
-
-    find -L /usr/lib64/erlang/lib/ -type l -name chunks | xargs rm -f
-
-Fauxton does not build with the Node.js version (v16) shipped by the system.
-Node.js v12.22.12 can be installed instead via::
-
-    wget https://nodejs.org/download/release/v12.22.12/node-v12.22.12-linux-x64.tar.gz
-    mkdir -p /usr/local/lib/nodejs
-    tar -xvf node-v12.22.12-linux-x64.tar.gz -C /usr/local/lib/nodejs
-    export PATH=/usr/local/lib/nodejs/node-v12.22.12-linux-x64/bin:$PATH
-
-Note that due to a problem with the Python package sphinx-build, it is not
-possible to compile the documentation on Fedora 36. You can skip compiling the
-documentation via::
-
-    ./configure --disable-docs --spidermonkey-version 91
-
-Mac OS X
---------
-
-Follow :ref:`install/mac/homebrew` reference for Mac App installation.
-
-If you are installing from source, you will need to install the Command
-Line Tools::
-
-    xcode-select --install
-
-You can then install the other dependencies by running::
-
-    brew install autoconf autoconf-archive automake libtool \
-        erlang icu4c spidermonkey curl pkg-config
-
-You will need `Homebrew` installed to use the ``brew`` command.
-
-Some versions of Mac OS X ship a problematic OpenSSL library. If
-you're experiencing troubles with CouchDB crashing intermittently with
-a segmentation fault or a bus error, you will need to install your own
-version of OpenSSL. See the wiki, mentioned above, for more information.
-
-.. seealso::
-
-    * `Homebrew <http://mxcl.github.com/homebrew/>`_
-
-FreeBSD
--------
-
-FreeBSD requires the use of GNU Make. Where ``make`` is specified in this
-documentation, substitute ``gmake``.
-
-You can install this by running::
-
-    pkg install gmake
-
-Installing
-==========
-
-Once you have satisfied the dependencies you should run::
-
-    ./configure
-
-If you wish to customize the installation, pass ``--help`` to this script.
-
-If everything was successful you should see the following message::
-
-    You have configured Apache CouchDB, time to relax.
-
-Relax.
-
-To build CouchDB you should run::
-
-    make release
-
-Try ``gmake`` if ``make`` is giving you any problems.
-
-If include paths or other compiler options must be specified, they can be
-passed to rebar, which compiles CouchDB, with the ``ERL_CFLAGS`` environment
-variable. Likewise, options may be passed to the linker with the
-``ERL_LDFLAGS`` environment variable::
-
-    make release ERL_CFLAGS="-I/usr/local/include/js -I/usr/local/lib/erlang/usr/include"
-
-If everything was successful you should see the following message::
-
-    ... done
-    You can now copy the rel/couchdb directory anywhere on your system.
-    Start CouchDB with ./bin/couchdb from within that directory.
-
-Relax.
-
-Note: a fully-fledged ``./configure`` with the usual GNU Autotools options
-for package managers and a corresponding ``make install`` are in
-development, but not part of the 2.0.0 release.
-
-.. _install/unix/security:
-
-User Registration and Security
-==============================
-
-For OS X, in the steps below, substitute ``/Users/couchdb`` for
-``/home/couchdb``.
-
-You should create a special ``couchdb`` user for CouchDB.
-
-On many Unix-like systems you can run::
-
-    adduser --system \
-            --shell /bin/bash \
-            --group --gecos \
-            "CouchDB Administrator" couchdb
-
-On Mac OS X you can use the Workgroup Manager to create users up to version
-10.9, and dscl or sysadminctl after version 10.9. Search Apple's support
-site to find the documentation appropriate for your system. As of recent
-versions of OS X, this functionality is also included in Server.app,
-available through the App Store only as part of OS X Server.
-
-You must make sure that the user has a working POSIX shell and a writable
-home directory.
-
-You can test this by:
-
-* Trying to log in as the ``couchdb`` user
-* Running ``pwd`` and checking the present working directory
-
-As a recommendation, copy the ``rel/couchdb`` directory into
-``/home/couchdb`` or ``/Users/couchdb``.
-
-Ex: copy the built couchdb release to the new user's home directory::
-
-    cp -R /path/to/couchdb/rel/couchdb /home/couchdb
-
-Change the ownership of the CouchDB directories by running::
-
-    chown -R couchdb:couchdb /home/couchdb
-
-Change the permission of the CouchDB directories by running::
-
-    find /home/couchdb -type d -exec chmod 0770 {} \;
-
-Update the permissions for your ini files::
-
-    chmod 0644 /home/couchdb/etc/*
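The permission commands above can be rehearsed on a throwaway tree first. This is a sketch: ``/tmp/couchdb-demo`` is an arbitrary path, the ``chown`` step is skipped because it needs root, and GNU ``stat`` is assumed:

```shell
# build a miniature copy of the layout
mkdir -p /tmp/couchdb-demo/etc /tmp/couchdb-demo/data
touch /tmp/couchdb-demo/etc/local.ini

# directories 0770, ini files 0644, as in the instructions above
find /tmp/couchdb-demo -type d -exec chmod 0770 {} \;
chmod 0644 /tmp/couchdb-demo/etc/*

# show the resulting octal modes
stat -c '%a %n' /tmp/couchdb-demo /tmp/couchdb-demo/etc/local.ini
```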
-
-First Run
-=========
-
-.. note::
-    Be sure to :ref:`create an admin user<config/admins>` before trying to
-    start CouchDB!
-
-You can start the CouchDB server by running::
-
-    sudo -i -u couchdb /home/couchdb/bin/couchdb
-
-This uses the ``sudo`` command to run the ``couchdb`` command as the
-``couchdb`` user.
-
-When CouchDB starts, it should eventually display the following message::
-
-    {database_does_not_exist,[{mem3_shards,load_shards_from_db,"_users" ...
-
-Don't be afraid, we will fix this in a moment.
-
-To check that everything has worked, point your web browser to::
-
-    http://127.0.0.1:5984/_utils/index.html
-
-From here you should verify your installation by pointing your web browser to::
-
-    http://localhost:5984/_utils/index.html#verifyinstall
-
-**Your installation is not complete. Be sure to complete the**
-:ref:`Setup <setup>` **steps for a single node or clustered installation.**
-
-Running as a Daemon
-===================
-
-CouchDB no longer ships with any daemonization scripts.
-
-The CouchDB team recommends `runit <http://smarden.org/runit/>`_ to
-run CouchDB persistently and reliably. According to the official site:
-
-    *runit* is a cross-platform Unix init scheme with service supervision,
-    a replacement for sysvinit, and other init schemes. It runs on
-    GNU/Linux, \*BSD, MacOSX, Solaris, and can easily be adapted to
-    other Unix operating systems.
-
-Configuration of runit is straightforward; if you have questions, contact
-the CouchDB `user mailing list <http://mail-archives.apache.org/mod_mbox/couchdb-user/>`_
-or `IRC-channel #couchdb <http://webchat.freenode.net/?channels=#couchdb>`_
-in FreeNode network.
-
-Let's consider configuring runit on Ubuntu 18.04. The following
-steps should be considered only as an example. Details will vary
-by operating system and distribution. Check your system's package
-management tools for specifics.
-
-Install runit::
-
-    sudo apt-get install runit
-
-Create a directory where logs will be written::
-
-    sudo mkdir /var/log/couchdb
-    sudo chown couchdb:couchdb /var/log/couchdb
-
-Create directories that will contain runit configuration for CouchDB::
-
-    sudo mkdir /etc/sv/couchdb
-    sudo mkdir /etc/sv/couchdb/log
-
-Create /etc/sv/couchdb/log/run script::
-
-    #!/bin/sh
-    exec svlogd -tt /var/log/couchdb
-
-This script determines where and how logs will be written.
-See ``man svlogd`` for more details.
-
-Create /etc/sv/couchdb/run::
-
-    #!/bin/sh
-    export HOME=/home/couchdb
-    exec 2>&1
-    exec chpst -u couchdb /home/couchdb/bin/couchdb
-
-This script determines how CouchDB will be launched.
-Feel free to add any additional arguments and environment
-variables here if necessary.
-
-Make scripts executable::
-
-    sudo chmod u+x /etc/sv/couchdb/log/run
-    sudo chmod u+x /etc/sv/couchdb/run
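The service directory can also be rehearsed in a scratch location before linking it into ``/etc/service``. This is a sketch under an arbitrary assumed path, ``/tmp/sv-demo``:

```shell
# recreate the two run scripts from above in a throwaway directory
mkdir -p /tmp/sv-demo/couchdb/log
printf '#!/bin/sh\nexec svlogd -tt /var/log/couchdb\n' \
    > /tmp/sv-demo/couchdb/log/run
printf '#!/bin/sh\nexport HOME=/home/couchdb\nexec 2>&1\nexec chpst -u couchdb /home/couchdb/bin/couchdb\n' \
    > /tmp/sv-demo/couchdb/run

# both scripts must be executable for runit to pick them up
chmod u+x /tmp/sv-demo/couchdb/log/run /tmp/sv-demo/couchdb/run
ls -l /tmp/sv-demo/couchdb
```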
-
-Then run::
-
-    sudo ln -s /etc/sv/couchdb/ /etc/service/couchdb
-
-In a few seconds runit will discover a new symlink and start CouchDB.
-You can control CouchDB service like this::
-
-    sudo sv status couchdb
-    sudo sv stop couchdb
-    sudo sv start couchdb
-
-CouchDB will now also start automatically shortly after the system boots.
-
-You can also configure systemd, launchd or SysV-init daemons to launch
-CouchDB and keep it running using standard configuration files. Consult
-your system documentation for more information.
diff --git a/src/install/upgrading.rst b/src/install/upgrading.rst
deleted file mode 100644
index 97364a7..0000000
--- a/src/install/upgrading.rst
+++ /dev/null
@@ -1,83 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _install/upgrading:
-
-=====================================
-Upgrading from prior CouchDB releases
-=====================================
-
-Important Notes
-===============
-
-* **Always back up your** ``data/`` **and** ``etc/`` **directories prior to
-  upgrading CouchDB.**
-* We recommend that you overwrite your ``etc/default.ini`` file with the
-  version provided by the new release. New defaults sometimes contain
-  mandatory changes to enable default functionality. Always place your
-  customizations in ``etc/local.ini`` or any ``etc/local.d/*.ini`` file.
-
-Upgrading from CouchDB 2.x
-==========================
-
-If you are coming from a prior release of CouchDB 2.x, upgrading is simple.
-
-Standalone (single) node upgrades
----------------------------------
-
-If you are running a standalone (single) CouchDB node:
-
-#. Plan for downtime.
-#. Backup everything.
-#. Check for new recommended settings in the shipped ``etc/local.ini`` file,
-   and merge any changes desired into your own local settings file(s).
-#. Stop CouchDB.
-#. Upgrade CouchDB in place.
-#. Be sure to :ref:`create an admin user<config/admins>` if you do not have
-   one. CouchDB 3.0+ **requires** an admin user to start (the admin party has
-   ended).
-#. Start CouchDB.
-#. Relax! You're done.
-
-Cluster upgrades
-----------------
-
-CouchDB 2.x and 3.x are explicitly designed to allow "mixed clusters" during
-the upgrade process. This allows you to perform a rolling restart across
-a cluster, upgrading one node at a time, for a *zero downtime upgrade*.
-The process is also entirely scriptable within your configuration
-management tool of choice.
-
-We're proud of this feature, and you should be, too!
-
-If you are running a CouchDB cluster:
-
-#. Backup everything.
-#. Check for new recommended settings in the shipped ``etc/local.ini`` file,
-   and merge any changes desired into your own local settings file(s),
-   staging these changes to occur as you upgrade the node.
-#. Stop CouchDB on a single node.
-#. Upgrade that CouchDB install in place.
-#. Start CouchDB.
-#. Double-check that the node has re-joined the cluster through the
-   :ref:`/_membership<api/server/membership>` endpoint. If your load balancer has
-   health check functionality driven by the :ref:`/_up<api/server/up>` endpoint,
-   check whether it thinks the node is healthy as well.
-#. Repeat the last 4 steps on the remaining nodes in the cluster.
-#. Relax! You're done.
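The membership check in step 6 can be sketched in code. A minimal Python sketch, where ``membership`` stands for the parsed JSON body of a ``/_membership`` response (the field names ``all_nodes`` and ``cluster_nodes`` match that endpoint):

```python
def node_rejoined(membership, node):
    # A healthy node appears both in all_nodes (nodes this node knows about)
    # and in cluster_nodes (nodes that are members of the cluster).
    return (node in membership.get("all_nodes", [])
            and node in membership.get("cluster_nodes", []))

# Example shape of a /_membership body after the rolling restart.
membership = {
    "all_nodes": ["couchdb@node1", "couchdb@node2", "couchdb@node3"],
    "cluster_nodes": ["couchdb@node1", "couchdb@node2", "couchdb@node3"],
}
print(node_rejoined(membership, "couchdb@node2"))  # True
```

A configuration-management tool could run a check like this after each node's upgrade before moving on to the next.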
-
-Upgrading from CouchDB 1.x
-==========================
-
-To upgrade from CouchDB 1.x, first upgrade to a version of CouchDB 2.x.  You
-will need to convert all databases to CouchDB 2.x format first; see the Upgrade
-Notes there for instructions. Then, upgrade to CouchDB 3.x.
diff --git a/src/install/windows.rst b/src/install/windows.rst
deleted file mode 100644
index 1f41c32..0000000
--- a/src/install/windows.rst
+++ /dev/null
@@ -1,104 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _install/windows:
-
-=======================
-Installation on Windows
-=======================
-
-There are two ways to install CouchDB on Windows.
-
-Installation from binaries
-==========================
-
-This is the simplest way to go.
-
-.. warning::
-    Windows 8, 8.1, and 10 require the `.NET Framework v3.5`_ to be installed.
-
-#. Get `the latest Windows binaries`_ from the `CouchDB web site`_.
-   Old releases are available at `archive`_.
-
-#. Follow the installation wizard steps. **Be sure to install CouchDB to a
-   path with no spaces, such as** ``C:\CouchDB``.
-
-#. **Your installation is not complete. Be sure to complete the**
-   :ref:`Setup <setup>` **steps for a single node or clustered installation.**
-
-#. `Open up Fauxton`_
-
-#. It's time to Relax!
-
-.. note::
-    In some cases you might be asked to reboot Windows to complete the
-    installation process, because CouchDB may use different Microsoft
-    Visual C++ runtimes than those already installed.
-
-.. note::
-    **Upgrading note**
-
-    It's recommended to uninstall the previous CouchDB version before
-    upgrading, especially if the new one is built against a different Erlang
-    release. The reason is simple: there may be leftover libraries with
-    alternative or incompatible versions from the old Erlang release that may
-    create conflicts, errors and weird crashes.
-
-    In this case, make sure you back up your `local.ini` config and CouchDB
-    database/index files.
-
-.. _Open up Fauxton: http://localhost:5984/_utils
-.. _CouchDB web site: http://couchdb.apache.org/
-.. _archive: http://archive.apache.org/dist/couchdb/binary/win/
-.. _the latest Windows binaries: http://couchdb.apache.org/#download
-.. _.NET Framework v3.5: https://docs.microsoft.com/en-us/dotnet/framework/install/dotnet-35-windows-10
-
-.. _install/windows/silent:
-
-Silent Install
---------------
-
-The Windows installer supports silent installs. Here are some sample commands, supporting
-the new features of the 3.0 installer.
-
-Install CouchDB without a service, but with an admin user:password of ``admin:hunter2``:
-
-.. code-block:: batch
-
-    msiexec /i apache-couchdb-3.0.0.msi /quiet ADMINUSER=admin ADMINPASSWORD=hunter2 /norestart
-
-The same as above, but also install and launch CouchDB as a service:
-
-.. code-block:: batch
-
-    msiexec /i apache-couchdb-3.0.0.msi /quiet INSTALLSERVICE=1 ADMINUSER=admin ADMINPASSWORD=hunter2 /norestart
-
-Unattended uninstall of CouchDB from the target directory `D:\CouchDB`:
-
-.. code-block:: batch
-
-    msiexec /x apache-couchdb-3.0.0.msi INSTALLSERVICE=1 APPLICATIONFOLDER="D:\CouchDB" ADMINUSER=admin ADMINPASSWORD=hunter2 /quiet /norestart
-
-Unattended uninstall if the installer file is unavailable:
-
-.. code-block:: batch
-
-    msiexec /x {4CD776E0-FADF-4831-AF56-E80E39F34CFC} /quiet /norestart
-
-Add ``/l* log.txt`` to any of the above to generate a useful logfile for debugging.
-
-Installation from sources
-=========================
-
-.. seealso::
-    `Glazier: Automate building of CouchDB from source on Windows
-    <https://github.com/apache/couchdb-glazier>`_
diff --git a/src/intro/api.rst b/src/intro/api.rst
deleted file mode 100644
index f18bb5d..0000000
--- a/src/intro/api.rst
+++ /dev/null
@@ -1,747 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _intro/api:
-
-============
-The Core API
-============
-
-This document explores the CouchDB API in minute detail. It shows all the
-nitty-gritty and clever bits. We show you best practices and guide you around
-common pitfalls.
-
-We start out by revisiting the basic operations we ran in the previous document
-:ref:`intro/tour`, looking behind the scenes. We also show what Fauxton needs to
-do behind its user interface to give us the nice features we saw earlier.
-
-This document is both an introduction to the core CouchDB API and a
-reference. If you can't remember how to run a particular request or why some
-parameters are needed, you can always come back here and look things up (we
-are probably the heaviest users of this document).
-
-While explaining the API bits and pieces, we sometimes need to take a larger
-detour to explain the reasoning for a particular request. This is a good
-opportunity for us to tell you why CouchDB works the way it does.
-
-The API can be subdivided into the following sections. We'll explore them
-individually:
-
-.. contents::
-    :depth: 1
-    :local:
-
-Server
-======
-
-This one is basic and simple. It can serve as a sanity check to see if
-CouchDB is running at all. It can also act as a safety guard for libraries
-that require a certain version of CouchDB. We're using the `curl`_ utility
-again::
-
-  curl http://127.0.0.1:5984/
-
-CouchDB replies, all excited to get going:
-
-.. code-block:: javascript
-
-    {
-      "couchdb": "Welcome",
-      "version": "3.0.0",
-      "git_sha": "83bdcf693",
-      "uuid": "56f16e7c93ff4a2dc20eb6acc7000b71",
-      "features": [
-        "access-ready",
-        "partitioned",
-        "pluggable-storage-engines",
-        "reshard",
-        "scheduler"
-      ],
-      "vendor": {
-        "name": "The Apache Software Foundation"
-      }
-    }
-
-You get back a JSON string that, if parsed into a native object or data
-structure of your programming language, gives you access to the welcome
-string and version information.
-
-This is not terribly useful, but it illustrates nicely the way CouchDB
-behaves. You send an HTTP request and you receive a JSON string in the HTTP
-response as a result.
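Parsed into a native structure, the welcome string is easy to inspect. A minimal Python sketch, using a literal in place of the HTTP body shown above (a real client would read it from the response):

```python
import json

# The welcome response from GET /, parsed from its JSON text.
body = '''{"couchdb": "Welcome", "version": "3.0.0",
           "features": ["partitioned", "reshard", "scheduler"]}'''
info = json.loads(body)

print(info["version"])                # 3.0.0
print("reshard" in info["features"])  # True

# A library might gate its behavior on a minimum server version:
major = int(info["version"].split(".")[0])
assert major >= 3, "this client requires CouchDB 3.x or later"
```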
-
-.. _curl: http://curl.haxx.se/
-
-Databases
-=========
-
-Now let's do something a little more useful: *create databases*.
-For the strict, CouchDB is a *database management system* (DMS). That means it
-can hold multiple databases. A database is a bucket that holds "related data".
-We'll explore later what that means exactly. In practice, the terminology is
-overlapping -- often people refer to a DMS as "a database" and also a database
-within the DMS as "a database." We might follow that slight oddity, so don't
-get confused by it. In general, it should be clear from the context if we are
-talking about the whole of CouchDB or a single database within CouchDB.
-
-Now let's make one! We want to store our favorite music albums,
-and we creatively give our database the name albums. Note that we're now
-using the ``-X`` option again to tell curl to send a :method:`PUT` request
-instead of the default :method:`GET` request::
-
-    curl -X PUT http://admin:password@127.0.0.1:5984/albums
-
-CouchDB replies:
-
-.. code-block:: javascript
-
-    {"ok":true}
-
-That's it. You created a database and CouchDB told you that all went well.
-What happens if you try to create a database that already exists? Let's try
-to create that database again::
-
-    curl -X PUT http://admin:password@127.0.0.1:5984/albums
-
-CouchDB replies:
-
-.. code-block:: javascript
-
-    {"error":"file_exists","reason":"The database could not be created, the file already exists."}
-
-We get back an error. This is pretty convenient. We also learn a little bit
-about how CouchDB works. CouchDB stores each database in a single file.
-Very simple.
-
-Let's create another database, this time with curl's ``-v`` (for "verbose")
-option. The verbose option tells curl to show us not only the essentials --
-the HTTP response body -- but all the underlying request and response details::
-
-    curl -vX PUT http://admin:password@127.0.0.1:5984/albums-backup
-
-curl elaborates::
-
-    * About to connect() to 127.0.0.1 port 5984 (#0)
-    *   Trying 127.0.0.1... connected
-    * Connected to 127.0.0.1 (127.0.0.1) port 5984 (#0)
-    > PUT /albums-backup HTTP/1.1
-    > User-Agent: curl/7.16.3 (powerpc-apple-darwin9.0) libcurl/7.16.3 OpenSSL/0.9.7l zlib/1.2.3
-    > Host: 127.0.0.1:5984
-    > Accept: */*
-    >
-    < HTTP/1.1 201 Created
-    < Server: CouchDB (Erlang/OTP)
-    < Date: Sun, 05 Jul 2009 22:48:28 GMT
-    < Content-Type: text/plain;charset=utf-8
-    < Content-Length: 12
-    < Cache-Control: must-revalidate
-    <
-    {"ok":true}
-    * Connection #0 to host 127.0.0.1 left intact
-    * Closing connection #0
-
-What a mouthful. Let's step through this line by line to understand what's
-going on and find out what's important. Once you've seen this output a few
-times, you'll be able to spot the important bits more easily. ::
-
-    * About to connect() to 127.0.0.1 port 5984 (#0)
-
-This is curl telling us that it is going to establish a TCP connection to the
-CouchDB server we specified in our request URI. Not at all important,
-except when debugging networking issues. ::
-
-    *   Trying 127.0.0.1... connected
-    * Connected to 127.0.0.1 (127.0.0.1) port 5984 (#0)
-
-curl tells us it successfully connected to CouchDB. Again,
-not important if you aren't trying to find problems with your network.
-
-The following lines are prefixed with ``>`` and ``<`` characters.
-The ``>`` means the line was sent to CouchDB verbatim (without the actual
-``>``). The ``<`` means the line was sent back to curl by CouchDB. ::
-
-    > PUT /albums-backup HTTP/1.1
-
-This initiates an HTTP request. Its *method* is :method:`PUT`, the *URI* is
-``/albums-backup``, and the HTTP version is ``HTTP/1.1``. There is also
-``HTTP/1.0``, which is simpler in some cases, but for all practical reasons
-you should be using ``HTTP/1.1``.
-
-Next, we see a number of *request headers*. These are used to provide
-additional details about the request to CouchDB. ::
-
-    > User-Agent: curl/7.16.3 (powerpc-apple-darwin9.0) libcurl/7.16.3 OpenSSL/0.9.7l zlib/1.2.3
-
-The User-Agent header tells CouchDB which piece of client software is doing
-the HTTP request. We don't learn anything new: it's curl. This header is
-often useful in web development when there are known errors in client
-implementations that a server might want to prepare the response for.
-It also helps to determine which platform a user is on. This information
-can be used for technical and statistical reasons. For CouchDB, the
-:header:`User-Agent` header is irrelevant. ::
-
-    > Host: 127.0.0.1:5984
-
-The :header:`Host` header is required by ``HTTP 1.1``. It tells the server
-the hostname that came with the request. ::
-
-    > Accept: */*
-
-The :header:`Accept` header tells CouchDB that curl accepts any media type.
-We'll look into why this is useful a little later. ::
-
-    >
-
-An empty line denotes that the request headers are now finished and the rest
-of the request contains data we're sending to the server. In this case,
-we're not sending any data, so the rest of the curl output is dedicated to
-the HTTP response. ::
-
-    < HTTP/1.1 201 Created
-
-The first line of CouchDB's HTTP response includes the HTTP version
-information (again, to acknowledge that the requested version could be
-processed), an HTTP *status code*, and a *status code message*.
-Different requests trigger different response codes. There's a whole range of
-them telling the client (curl in our case) what effect the request had on the
-server. Or, if an error occurred, what kind of error. :rfc:`2616` (the HTTP 1.1
-specification) defines clear behavior for response codes. CouchDB fully
-follows the RFC.
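Acting on status codes programmatically is common, and the ranges RFC 2616 defines make it mechanical. A small Python sketch of the buckets (the example codes are real CouchDB responses seen in this section):

```python
def is_error(status):
    # 4xx are client errors, 5xx are server errors; anything >= 400 failed.
    return status >= 400

def classify(status):
    if 200 <= status < 300:
        return "success"       # e.g. 201 Created for a new database
    if 400 <= status < 500:
        return "client error"
    if 500 <= status < 600:
        return "server error"
    return "other"

print(classify(201))   # success
print(is_error(404))   # True
```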
-
-The :statuscode:`201` status code tells the client that the resource
-the request was made against was successfully created. No surprise here,
-but if you remember that we got an error message when we tried to create this
-database twice, you now know that this response could include a different
-response code. Acting upon responses based on response codes is a common
-practice. For example, all response codes of :statuscode:`400` or larger
-tell you that some error occurred. If you want to shortcut your logic and
-immediately deal with the error, you can simply check for a response code
-of ``400`` or greater. ::
-
-    < Server: CouchDB (Erlang/OTP)
-
-The :header:`Server` header is good for diagnostics. It tells us which
-CouchDB version and which underlying Erlang version we are talking to.
-In general, you can ignore this header, but it is good to know it's there if
-you need it. ::
-
-    < Date: Sun, 05 Jul 2009 22:48:28 GMT
-
-The :header:`Date` header tells you the time of the server. Since client
-and server time are not necessarily synchronized, this header is purely
-informational. You shouldn't build any critical application logic on top
-of this! ::
-
-    < Content-Type: text/plain;charset=utf-8
-
-The :header:`Content-Type` header tells you which MIME type
-the HTTP response body is and its encoding. We already know CouchDB returns
-JSON strings. The appropriate :header:`Content-Type` header is
-:mimetype:`application/json`. Why do we see :mimetype:`text/plain`?
-This is where pragmatism wins over purity. Sending an
-:mimetype:`application/json` :header:`Content-Type` header will make
-a browser offer you the returned JSON for download instead of
-just displaying it. Since it is extremely useful to be able to test CouchDB
-from a browser, CouchDB sends a :mimetype:`text/plain` content type, so all
-browsers will display the JSON as text.
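This fallback is simple to model. A hedged Python sketch of the decision described here (illustrative only, not CouchDB's actual server code):

```python
def response_content_type(accept):
    # If the client explicitly accepts JSON, reply with the proper MIME type;
    # otherwise fall back to text/plain so browsers display the JSON inline.
    if "application/json" in accept:
        return "application/json"
    return "text/plain;charset=utf-8"

print(response_content_type("*/*"))               # text/plain;charset=utf-8
print(response_content_type("application/json"))  # application/json
```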
-
-.. note::
-    There are some extensions that make your browser JSON-aware,
-    but they are not installed by default. For more information, look at
-    the popular `JSONView`_ extension, available for both Firefox and Chrome.
-
-    .. _JSONView: http://jsonview.com/
-
-Do you remember the :header:`Accept` request header and how it is set to
-``*/*`` to express interest in any MIME type? If you send ``Accept:
-application/json`` in your request, CouchDB knows that you can deal with a pure
-JSON response with the proper :header:`Content-Type` header and will
-use it instead of :mimetype:`text/plain`. ::
-
-    < Content-Length: 12
-
-The :header:`Content-Length` header simply tells us how many bytes
-the response body has. ::
-
-    < Cache-Control: must-revalidate
-
-This :header:`Cache-Control` header tells you, or any proxy server between
-CouchDB and you, not to cache this response. ::
-
-    <
-
-This empty line tells us we're done with the response headers and what
-follows now is the response body.
-
-.. code-block:: javascript
-
-    {"ok":true}
-
-We've seen this before. ::
-
-    * Connection #0 to host 127.0.0.1 left intact
-    * Closing connection #0
-
-The last two lines are curl telling us that it kept the TCP connection it
-opened in the beginning open for a moment, but then closed it after it
-received the entire response.
-
-Throughout the documents, we'll show more requests with the ``-v`` option,
-but we'll omit some of the headers we've seen here and include only those
-that are important for the particular request.
-
-Creating databases is all fine, but how do we get rid of one? Easy -- just
-change the HTTP method::
-
-    curl -vX DELETE http://admin:password@127.0.0.1:5984/albums-backup
-
-This deletes a CouchDB database. The request will remove the file that the
-database contents are stored in. There is no *"Are you sure?"* safety net or
-any *"Empty the trash"* magic you've got to do to delete a database. Use this
-command with care. Your data will be deleted without a chance to bring it
-back easily if you don't have a backup copy.
-
-This section went knee-deep into HTTP and set the stage for discussing the
-rest of the core CouchDB API. Next stop: documents.
-
-Documents
-=========
-
-.. _GUID: http://en.wikipedia.org/wiki/Globally_unique_identifier
-.. _UUID: http://en.wikipedia.org/wiki/Universally_unique_identifier
-
-Documents are CouchDB's central data structure. The idea behind a document
-is, unsurprisingly, that of a real-world document -- a sheet of paper such as
-an invoice, a recipe, or a business card. We already learned that CouchDB uses
-the JSON format to store documents. Let's see how this storing works at the
-lowest level.
-
-Each document in CouchDB has an *ID*. This ID is unique per database. You are
-free to choose any string to be the ID, but for best results we recommend a
-`UUID`_ (or `GUID`_), i.e., a Universally (or Globally) Unique IDentifier.
-UUIDs are random numbers that have such a low collision probability that
-everybody can make thousands of UUIDs a minute for millions of years without
-ever creating a duplicate. This is a great way to ensure two independent people
-cannot create two different documents with the same ID. Why should you care
-what somebody else is doing? For one, that somebody else could be you at a
-later time or on a different computer; secondly, CouchDB replication lets you
-share documents with others and using UUIDs ensures that it all works.
-But more on that later; let's make some documents::
-
-    curl -X PUT http://admin:password@127.0.0.1:5984/albums/6e1295ed6c29495e54cc05947f18c8af -d '{"title":"There is Nothing Left to Lose","artist":"Foo Fighters"}'
-
-CouchDB replies:
-
-.. code-block:: javascript
-
-    {"ok":true,"id":"6e1295ed6c29495e54cc05947f18c8af","rev":"1-2902191555"}
-
-The curl command appears complex, but let's break it down.
-First, ``-X PUT`` tells curl to make a :method:`PUT` request.
-It is followed by the URL that specifies your CouchDB IP address and port.
-The resource part of the URL ``/albums/6e1295ed6c29495e54cc05947f18c8af``
-specifies the location of a document inside our albums database.
-The wild collection of numbers and characters is a UUID. This UUID is your
-document's ID. Finally, the ``-d`` flag tells curl to use the following
-string as the body for the :method:`PUT` request. The string is a simple JSON
-structure including ``title`` and ``artist`` attributes with their respective
-values.
-
-.. note::
-    If you don't have a UUID handy, you can ask CouchDB to give you one (in
-    fact, that is what we did just now without showing you). Simply send a
-    :get:`/_uuids` request::
-
-        curl -X GET http://127.0.0.1:5984/_uuids
-
-    CouchDB replies:
-
-    .. code-block:: javascript
-
-        {"uuids":["6e1295ed6c29495e54cc05947f18c8af"]}
-
-    Voilà, a UUID. If you need more than one, you can pass in the ``?count=10``
-    HTTP parameter to request 10 UUIDs, or really, any number you need.
-
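If you would rather generate IDs client-side, any standard UUID library produces values in the same spirit as ``/_uuids``. A Python sketch (note that ``uuid4`` here is purely random, while CouchDB's own default UUID algorithm differs in detail; any unique string works as a document ID):

```python
import uuid

# Generate a document ID locally instead of asking GET /_uuids.
doc_id = uuid.uuid4().hex  # 32 lowercase hex characters, no dashes

print(len(doc_id))                                   # 32
print(all(c in "0123456789abcdef" for c in doc_id))  # True
```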
-To double-check that CouchDB isn't lying about having saved your document (it
-usually doesn't), try to retrieve it by sending a GET request::
-
-    curl -X GET http://admin:password@127.0.0.1:5984/albums/6e1295ed6c29495e54cc05947f18c8af
-
-We hope you see a pattern here. Everything in CouchDB has an address, a URI,
-and you use the different HTTP methods to operate on these URIs.
-
-CouchDB replies:
-
-.. code-block:: javascript
-
-    {"_id":"6e1295ed6c29495e54cc05947f18c8af","_rev":"1-2902191555","title":"There is Nothing Left to Lose","artist":"Foo Fighters"}
-
-This looks a lot like the document you asked CouchDB to save, which is good.
-But you should notice that CouchDB added two fields to your JSON structure.
-The first is ``_id``, which holds the UUID we asked CouchDB to save our document
-under. We always know the ID of a document if it is included, which is very
-convenient.
-
-The second field is ``_rev``. It stands for *revision*.
-
-Revisions
----------
-
-If you want to change a document in CouchDB, you don't tell it to go and find
-a field in a specific document and insert a new value. Instead, you load
-the full document out of CouchDB, make your changes in the JSON structure
-(or object, when you are doing actual programming), and save the entire new
-revision (or version) of that document back into CouchDB. Each revision is
-identified by a new ``_rev`` value.
-
-If you want to update or delete a document, CouchDB expects you to include
-the ``_rev`` field of the revision you wish to change. When CouchDB accepts
-the change, it will generate a new revision number. This mechanism ensures that,
-in case somebody else made a change without you knowing before you got to
-request the document update, CouchDB will not accept your update because you
-are likely to overwrite data you didn't know existed. Or simplified: whoever
-saves a change to a document first, wins. Let's see what happens if we don't
-provide a ``_rev`` field (which is equivalent to providing an outdated value)::
-
-    curl -X PUT http://admin:password@127.0.0.1:5984/albums/6e1295ed6c29495e54cc05947f18c8af \
-         -d '{"title":"There is Nothing Left to Lose","artist":"Foo Fighters","year":"1997"}'
-
-CouchDB replies:
-
-.. code-block:: javascript
-
-    {"error":"conflict","reason":"Document update conflict."}
-
-If you see this, add the latest revision number of your document to the JSON
-structure::
-
-    curl -X PUT http://admin:password@127.0.0.1:5984/albums/6e1295ed6c29495e54cc05947f18c8af \
-         -d '{"_rev":"1-2902191555","title":"There is Nothing Left to Lose","artist":"Foo Fighters","year":"1997"}'
-
-Now you see why it was handy that CouchDB returned that ``_rev`` when we made
-the initial request. CouchDB replies:
-
-.. code-block:: javascript
-
-    {"ok":true,"id":"6e1295ed6c29495e54cc05947f18c8af","rev":"2-8aff9ee9d06671fa89c99d20a4b3ae"}
-
-CouchDB accepted your write and also generated a new revision number.
-The revision number is the *MD5 hash* of the transport representation of a
-document with an ``N-`` prefix denoting the number of times a document got
-updated. This is useful for replication. See :ref:`replication/conflicts` for
-more information.
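The structure of a ``_rev`` token is easy to pick apart. A Python sketch, using an example value from this section:

```python
def parse_rev(rev):
    # A _rev looks like "N-hash": N counts how many times the document has
    # been updated; the hash is derived from the document's content.
    gen, _, digest = rev.partition("-")
    return int(gen), digest

gen, digest = parse_rev("2-8aff9ee9d06671fa89c99d20a4b3ae")
print(gen)     # 2 -> this is the second revision of the document
print(digest)  # 8aff9ee9d06671fa89c99d20a4b3ae
```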
-
-There are multiple reasons why CouchDB uses this revision system,
-which is also called Multi-Version Concurrency Control (`MVCC`_). They all work
-hand-in-hand, and this is a good opportunity to explain some of them.
-
-.. _MVCC: http://en.wikipedia.org/wiki/Multiversion_concurrency_control
-
-One of the aspects of the HTTP protocol that CouchDB uses is that it is
-stateless. What does that mean? When talking to CouchDB you need to make
-requests. Making a request includes opening a network connection to CouchDB,
-exchanging bytes, and closing the connection. This is done every time you
-make a request. Other protocols allow you to open a connection, exchange bytes,
-keep the connection open, exchange more bytes later -- maybe depending on the
-bytes you exchanged at the beginning -- and eventually close the connection.
-Holding a connection open for later use requires the server to do extra work.
-One common pattern is that for the lifetime of a connection, the client has
-a consistent and static view of the data on the server. Managing huge amounts
-of parallel connections is a significant amount of work. HTTP connections are
-usually short-lived, and making the same guarantees is a lot easier.
-As a result, CouchDB can handle many more concurrent connections.
-
-Another reason CouchDB uses MVCC is that this model is simpler conceptually
-and, as a consequence, easier to program. CouchDB uses less code to make this
-work, and less code is always good because the ratio of defects per line of
-code is roughly constant.
-
-The revision system also has positive effects on replication and storage
-mechanisms, but we'll explore these later in the documents.
-
-.. warning::
-    The terms *version* and *revision* might sound familiar (if you are
-    programming without version control, stop reading this guide right now and
-    start learning one of the popular systems). Using new versions for document
-    changes works a lot like version control, but there's an important
-    difference: **CouchDB does not guarantee that older versions are kept
-    around. Don't use the ``_rev`` token in CouchDB as a revision control system
-    for your documents.**
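The "whoever saves first, wins" rule can be simulated without a server. A toy in-memory Python sketch of the check CouchDB performs on updates (illustrative only, not the real storage code; the ``-stub`` suffix stands in for the content hash):

```python
class TinyStore:
    """Toy document store enforcing CouchDB-style optimistic concurrency."""
    def __init__(self):
        self.docs = {}  # doc_id -> (generation, body)

    def put(self, doc_id, body, rev=None):
        current = self.docs.get(doc_id)
        if current is not None:
            gen = current[0]
            if rev != f"{gen}-stub":   # stale or missing _rev: reject
                return {"error": "conflict"}
            new_gen = gen + 1
        else:
            new_gen = 1
        self.docs[doc_id] = (new_gen, body)
        return {"ok": True, "rev": f"{new_gen}-stub"}

store = TinyStore()
r1 = store.put("doc1", {"title": "There is Nothing Left to Lose"})
print(r1["rev"])                                       # 1-stub
print(store.put("doc1", {"year": 1997}))               # conflict: no _rev
print(store.put("doc1", {"year": 1997}, rev=r1["rev"]))
```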
-
-Documents in Detail
--------------------
-
-Now let's have a closer look at our document creation requests with the curl
-``-v`` flag that was helpful when we explored the database API earlier.
-This is also a good opportunity to create more documents that we can use in
-later examples.
-
-We'll add some more of our favorite music albums. Get a fresh UUID from the
-``/_uuids`` resource. If you don't remember how that works, you can look it up
-a few pages back. ::
-
-    curl -vX PUT http://admin:password@127.0.0.1:5984/albums/70b50bfa0a4b3aed1f8aff9e92dc16a0 \
-         -d '{"title":"Blackened Sky","artist":"Biffy Clyro","year":2002}'
-
-.. note::
-    By the way, if you happen to know more information about your favorite
-    albums, don't hesitate to add more properties. And don't worry about not
-    knowing all the information for all the albums. CouchDB's schema-less
-    documents can contain whatever you know. After all, you should relax and not
-    worry about data.
-
-Now with the ``-v`` option, CouchDB's reply (with only the important bits shown)
-looks like this::
-
-    > PUT /albums/70b50bfa0a4b3aed1f8aff9e92dc16a0 HTTP/1.1
-    >
-    < HTTP/1.1 201 Created
-    < Location: http://127.0.0.1:5984/albums/70b50bfa0a4b3aed1f8aff9e92dc16a0
-    < ETag: "1-e89c99d29d06671fa0a4b3ae8aff9e"
-    <
-    {"ok":true,"id":"70b50bfa0a4b3aed1f8aff9e92dc16a0","rev":"1-e89c99d29d06671fa0a4b3ae8aff9e"}
-
-We're getting back the :statuscode:`201` HTTP status code in the response
-headers, as we saw earlier when we created a database. The :header:`Location`
-header gives us a full URL to our newly created document. And there's a new
-header. An :header:`ETag` in HTTP-speak identifies a specific version of a
-resource. In this case, it identifies a specific version (the first one) of our
-new document. Sound familiar? Yes, conceptually, an :header:`ETag` is the same
-as a CouchDB document revision number, and it shouldn't come as a surprise that
-CouchDB uses revision numbers for ETags. ETags are useful for caching
-infrastructures.
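Because ETags are revision numbers, a cache can revalidate cheaply. A hedged Python sketch of the conditional-GET logic a cache and server perform together (illustrative, not CouchDB's actual code):

```python
def conditional_get(current_rev, if_none_match=None):
    # If the client's cached ETag still matches the document's current
    # revision, the server can answer 304 Not Modified with no body.
    if if_none_match is not None and if_none_match.strip('"') == current_rev:
        return 304, None
    return 200, {"_rev": current_rev, "title": "Blackened Sky"}

rev = "1-e89c99d29d06671fa0a4b3ae8aff9e"
print(conditional_get(rev, '"%s"' % rev))  # (304, None)
status, body = conditional_get(rev)        # cache miss: full 200 response
print(status)                              # 200
```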
-
-Attachments
------------
-
-CouchDB documents can have attachments just like an email message can have
-attachments. An attachment is identified by a name and includes its MIME type
-(or :header:`Content-Type`) and the number of bytes the attachment
-contains. Attachments can be any data. It is easiest to think about attachments
-as files attached to a document. These files can be text, images, Word
-documents, music, or movie files. Let's make one.
-
-Attachments get their own URL where you can upload data. Say we want to add
-the album artwork to the ``6e1295ed6c29495e54cc05947f18c8af`` document
-(*"There is Nothing Left to Lose"*), and let's also say the artwork is in a file
-`artwork.jpg` in the current directory::
-
-    curl -vX PUT http://admin:password@127.0.0.1:5984/albums/6e1295ed6c29495e54cc05947f18c8af/artwork.jpg?rev=2-2739352689 \
-         --data-binary @artwork.jpg -H "Content-Type:image/jpg"
-
-.. note::
-    The ``--data-binary`` ``@`` option tells curl to read a file's contents into
-    the HTTP request body. We're using the ``-H`` option to tell CouchDB that
-    we're uploading a JPEG file. CouchDB will keep this information around and
-    will send the appropriate header when requesting this attachment; in case of
-    an image like this, a browser will render the image instead of offering you
-    the data for download. This will come in handy later. Note that you need
-    to provide the current revision number of the document you're attaching
-    the artwork to, just as if you were updating the document. After all,
-    attaching some data changes the document.
-
-You should now see your artwork image if you point your browser to
-http://127.0.0.1:5984/albums/6e1295ed6c29495e54cc05947f18c8af/artwork.jpg
-
-If you request the document again, you'll see a new member::
-
-    curl http://admin:password@127.0.0.1:5984/albums/6e1295ed6c29495e54cc05947f18c8af
-
-CouchDB replies:
-
-.. code-block:: javascript
-
-    {
-        "_id": "6e1295ed6c29495e54cc05947f18c8af",
-        "_rev": "3-131533518",
-        "title": "There is Nothing Left to Lose",
-        "artist": "Foo Fighters",
-        "year": "1997",
-        "_attachments": {
-            "artwork.jpg": {
-                "stub": true,
-                "content_type": "image/jpg",
-                "length": 52450
-            }
-        }
-    }
-
-``_attachments`` is a JSON object that maps attachment names to objects
-containing the attachment metadata. ``stub=true`` tells us that this entry is
-just the metadata. If we use the ``?attachments=true`` HTTP option when
-requesting this document, we'd get a `Base64`_ encoded string containing the
-attachment data.
-
-.. _Base64: http://en.wikipedia.org/wiki/Base64
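The inline form is just Base64 text. A Python sketch of decoding what ``?attachments=true`` would return (the ``data`` value here is built from made-up placeholder bytes, not a real image):

```python
import base64

# With ?attachments=true, each attachment carries a "data" field holding
# the raw bytes Base64-encoded, instead of "stub": true.
attachment = {
    "content_type": "image/jpg",
    "data": base64.b64encode(b"\xff\xd8\xff fake-jpeg-bytes").decode("ascii"),
}
raw = base64.b64decode(attachment["data"])
print(raw[:3])  # b'\xff\xd8\xff' (a JPEG file's magic bytes)
```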
-
-We'll have a look at more document request options later as we explore more
-features of CouchDB, such as replication, which is the next topic.
-
-Replication
-===========
-
-CouchDB replication is a mechanism to synchronize databases. Much like `rsync`_
-synchronizes two directories locally or over a network, replication synchronizes
-two databases locally or remotely.
-
-.. _rsync: http://en.wikipedia.org/wiki/Rsync
-
-In a simple :method:`POST` request, you tell CouchDB the *source* and the
-*target* of a replication and CouchDB will figure out which documents and new
-document revisions are on *source* that are not yet on *target*, and will
-proceed to move the missing documents and revisions over.
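Conceptually, the replicator's first job is a diff: compare the revisions the source holds against what the target already has, and copy only what is missing. A toy sketch of that comparison, assuming simple dicts of doc ID to known revisions (this is an illustration of the idea, not CouchDB's internal protocol):

```python
# Toy model: each database maps doc ID -> set of revisions it knows about.
source = {"doc-a": {"1-abc", "2-def"}, "doc-b": {"1-xyz"}}
target = {"doc-a": {"1-abc"}}

def missing_revs(source, target):
    """Return {doc_id: revisions present on source but absent on target}."""
    missing = {}
    for doc_id, revs in source.items():
        absent = revs - target.get(doc_id, set())
        if absent:
            missing[doc_id] = absent
    return missing

# doc-a needs rev 2-def copied over; doc-b is entirely new to the target.
to_copy = missing_revs(source, target)
```

Everything already present on the target is skipped, which is what makes replication incremental.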
-
-We'll take an in-depth look at replication in the document
-:ref:`replication/intro`; in this document, we'll just show you how to use it.
-
-First, we'll create a target database. Note that CouchDB won't automatically
-create a target database for you, and will return a replication failure if
-the target doesn't exist (likewise for the source, but that mistake isn't as
-easy to make)::
-
-    curl -X PUT http://admin:password@127.0.0.1:5984/albums-replica
-
-Now we can use the database `albums-replica` as a replication target::
-
-    curl -vX POST http://admin:password@127.0.0.1:5984/_replicate \
-         -d '{"source":"http://127.0.0.1:5984/albums","target":"http://127.0.0.1:5984/albums-replica"}' \
-         -H "Content-Type: application/json"
-
-.. note::
-    As of CouchDB 2.0.0, fully qualified URLs are required for both the
-    replication ``source`` and ``target`` parameters.
-
-.. note::
-    CouchDB supports the option ``"create_target":true`` placed in the JSON
-    POSTed to the :ref:`_replicate <api/server/replicate>` URL. It implicitly
-    creates the target database if it doesn't exist.
-
-CouchDB replies (this time we formatted the output so you can read it more
-easily):
-
-.. code-block:: javascript
-
-    {
-        "history": [
-            {
-                "start_last_seq": 0,
-                "missing_found": 2,
-                "docs_read": 2,
-                "end_last_seq": 5,
-                "missing_checked": 2,
-                "docs_written": 2,
-                "doc_write_failures": 0,
-                "end_time": "Sat, 11 Jul 2009 17:36:21 GMT",
-                "start_time": "Sat, 11 Jul 2009 17:36:20 GMT"
-            }
-        ],
-        "source_last_seq": 5,
-        "session_id": "924e75e914392343de89c99d29d06671",
-        "ok": true
-    }
-
-CouchDB maintains a *session history* of replications. The response for a
-replication request contains the history entry for this *replication session*.
-It is also worth noting that the replication request stays open until
-replication finishes. If you have a lot of documents, it'll take a while until
-they are all replicated, and you won't get the response back until then.
-It is important to note that replication copies the database only as it
-was at the point in time when replication was started, so any additions,
-modifications, or deletions made after that point will not be replicated.
-
-We'll punt on the details again -- the ``"ok": true`` at the end tells us all
-went well. If you now have a look at the albums-replica database,
-you should see all the documents that you created in the albums database.
-Neat, eh?
-
-What you just did is called local replication in CouchDB terms. You created a
-local copy of a database. This is useful for backups or to keep snapshots of
-a specific state of your data around for later. You might want to do this
-if you are developing your applications but want to be able to roll back to
-a stable version of your code and data.
-
-There are more types of replication useful in other situations. The source
-and target members of our replication request are actually links (like in
-HTML) and so far we've seen links relative to the server we're working on
-(hence local). You can also specify a remote database as the target::
-
-    curl -vX POST http://admin:password@127.0.0.1:5984/_replicate \
-         -d '{"source":"http://127.0.0.1:5984/albums","target":"http://example.org:5984/albums-replica"}' \
-         -H "Content-Type:application/json"
-
-Using a *local source* and a *remote target* database is called *push
-replication*. We're pushing changes to a remote server.
-
-.. note::
-    Since we don't have a second CouchDB server around just yet, we'll just use
-    the absolute address of our single server, but you should be able to infer
-    from this that you can put any remote server in there.
-
-This is great for sharing local changes with remote servers or buddies next
-door.
-
-You can also use a *remote source* and a *local target* to do a *pull
-replication*. This is great for getting the latest changes from a server that
-is used by others::
-
-    curl -vX POST http://admin:password@127.0.0.1:5984/_replicate \
-         -d '{"source":"http://example.org:5984/albums-replica","target":"http://127.0.0.1:5984/albums"}' \
-         -H "Content-Type:application/json"
-
-Finally, you can run remote replication, which is mostly useful for management
-operations::
-
-    curl -vX POST http://admin:password@127.0.0.1:5984/_replicate \
-         -d '{"source":"http://example.org:5984/albums","target":"http://example.org:5984/albums-replica"}' \
-         -H "Content-Type: application/json"
-
-.. note::
-    **CouchDB and REST**
-
-    CouchDB prides itself on having a `RESTful`_ API, but these replication
-    requests don't look very RESTy to the trained eye. What's up with that?
-    While CouchDB's core database, document, and attachment API are RESTful,
-    not all of CouchDB's API is. The replication API is one example. There are
-    more, as we'll see later in the documents.
-
-    Why are there RESTful and non-RESTful APIs mixed up here? Have the
-    developers been too lazy to go REST all the way? Remember, REST is an
-    architectural style that lends itself to certain architectures (such as the
-    CouchDB document API). But it is not one-size-fits-all. Triggering an
-    event like replication does not make a whole lot of sense in the REST world.
-    It is more like a traditional remote procedure call. And there is nothing
-    wrong with this.
-
-    We very much believe in the "use the right tool for the job" philosophy,
-    and REST does not fit every job. For support, we refer to Leonard Richardson
-    and Sam Ruby who wrote `RESTful Web Services`_ (O'Reilly), as they share our
-    view.
-
-    .. _RESTful: http://en.wikipedia.org/wiki/Representational_state_transfer
-    .. _RESTful Web Services: http://oreilly.com/catalog/9780596529260
-
-Wrapping Up
-===========
-
-This is still not the full CouchDB API, but we discussed the essentials in
-great detail. We're going to fill in the blanks as we go. For now, we believe
-you're ready to start building CouchDB applications.
-
-.. seealso::
-    :ref:`Complete HTTP API Reference <api>`:
-
-    - :ref:`Server API Reference <api/server>`
-    - :ref:`Database API Reference <api/database>`
-    - :ref:`Document API Reference <api/document>`
-    - :ref:`Replication API <api/server/replicate>`
diff --git a/src/intro/consistency.rst b/src/intro/consistency.rst
deleted file mode 100644
index 6d7f06c..0000000
--- a/src/intro/consistency.rst
+++ /dev/null
@@ -1,443 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _intro/consistency:
-
-====================
-Eventual Consistency
-====================
-
-In the previous document :ref:`intro/why`, we saw that CouchDB's flexibility
-allows us to evolve our data as our applications grow and change. In this topic,
-we'll explore how working "with the grain" of CouchDB promotes simplicity in
-our applications and helps us naturally build scalable, distributed systems.
-
-Working with the Grain
-======================
-
-A *distributed system* is a system that operates robustly over a wide network.
-A particular feature of network computing is that network links can
-potentially disappear, and there are plenty of strategies for managing this
-type of network segmentation. CouchDB differs from others by accepting
-eventual consistency, as opposed to putting absolute consistency ahead of raw
-availability, like `RDBMS`_ or `Paxos`_. What these systems have in common is
-an awareness that data acts differently when many people are accessing it
-simultaneously. Their approaches differ when it comes to which aspects of
-*consistency*, *availability*, or *partition* tolerance they prioritize.
-
-Engineering distributed systems is tricky. Many of the caveats and "gotchas"
-you will face over time aren't immediately obvious. We don't have all the
-solutions, and CouchDB isn't a panacea, but when you work with CouchDB's
-grain rather than against it, the path of least resistance leads you to
-naturally scalable applications.
-
-Of course, building a distributed system is only the beginning. A website
-with a database that is available only half the time is next to worthless.
-Unfortunately, the traditional relational database approach to consistency
-makes it very easy for application programmers to rely on global state,
-global clocks, and other high availability no-nos, without even realizing
-that they're doing so. Before examining how CouchDB promotes scalability,
-we'll look at the constraints faced by a distributed system. After we've seen
-the problems that arise when parts of your application can't rely on being
-in constant contact with each other, we'll see that CouchDB provides an
-intuitive and useful way for modeling applications around high availability.
-
-.. _RDBMS: http://en.wikipedia.org/wiki/Relational_database_management_system
-.. _Paxos: http://en.wikipedia.org/wiki/Paxos_%28computer_science%29
-
-.. _cap:
-
-The CAP Theorem
-===============
-
-The CAP theorem frames the trade-offs involved in distributing
-application logic across networks. CouchDB's solution uses replication to
-propagate application changes across participating nodes. This is a
-fundamentally different approach from consensus algorithms and relational
-databases, which operate at different intersections of consistency,
-availability, and partition tolerance.
-
-The CAP theorem, shown in :ref:`intro/consistency-01`,
-identifies three distinct concerns:
-
-- **Consistency**:
-  All database clients see the same data, even with concurrent updates.
-- **Availability**:
-  All database clients are able to access some version of the data.
-- **Partition tolerance**:
-  The database can be split over multiple servers.
-
-Pick two.
-
-.. _intro/consistency-01:
-
-.. figure:: ../../images/intro-consistency-01.png
-    :align: center
-    :alt: The CAP theorem
-
-    Figure 1. The CAP theorem
-
-When a system grows large enough that a single database node is unable to
-handle the load placed on it, a sensible solution is to add more servers.
-When we add nodes, we have to start thinking about how to partition data
-between them. Do we have a few databases that share exactly the same data?
-Do we put different sets of data on different database servers?
-Do we let only certain database servers write data and let others handle
-the reads?
-
-Regardless of which approach we take, the one problem we'll keep bumping into
-is that of keeping all these database servers in sync. If you write some
-information to one node, how are you going to make sure that a read request
-to another database server reflects this newest information? These events
-might be milliseconds apart. Even with a modest collection of database
-servers, this problem can become extremely complex.
-
-When it's absolutely critical that all clients see a consistent view of the
-database, the users of one node will have to wait for any other nodes to come
-into agreement before being able to read or write to the database.
-In this instance, we see that availability takes a backseat to consistency.
-However, there are situations where availability trumps consistency:
-
-    Each node in a system should be able to make decisions purely based on
-    local state. If you need to do something under high load with failures
-    occurring and you need to reach agreement, you're lost. If you're
-    concerned about scalability, any algorithm that forces you to run
-    agreement will eventually become your bottleneck. Take that as a given.
-
-    -- Werner Vogels, Amazon CTO and Vice President
-
-If availability is a priority, we can let clients write data to one node of
-the database without waiting for other nodes to come into agreement.
-If the database knows how to take care of reconciling these operations between
-nodes, we achieve a sort of "eventual consistency" in exchange for high
-availability. This is a surprisingly applicable trade-off for many applications.
-
-Unlike traditional relational databases, where each action performed is
-necessarily subject to database-wide consistency checks,
-CouchDB makes it really simple to build applications that sacrifice immediate
-consistency for the huge performance improvements that come with simple
-distribution.
-
-Local Consistency
-=================
-
-Before we attempt to understand how CouchDB operates in a cluster,
-it's important that we understand the inner workings of a single CouchDB node.
-The CouchDB API is designed to provide a convenient but thin wrapper around
-the database core. By taking a closer look at the structure of the database
-core, we'll have a better understanding of the API that surrounds it.
-
-The Key to Your Data
---------------------
-
-At the heart of CouchDB is a powerful *B-tree* storage engine.
-A B-tree is a sorted data structure that allows for searches, insertions,
-and deletions in logarithmic time. As :ref:`intro/consistency-02`
-illustrates, CouchDB uses this B-tree storage engine for all internal data,
-documents, and views. If we understand one, we will understand them all.
-
-.. _intro/consistency-02:
-
-.. figure:: ../../images/intro-consistency-02.png
-    :align: center
-    :alt: Anatomy of a view request
-
-    Figure 2. Anatomy of a view request
-
-CouchDB uses MapReduce to compute the results of a view. MapReduce makes use
-of two functions, "map" and "reduce", which are applied to each document in
-isolation. Being able to isolate these operations means that view computation
-lends itself to parallel and incremental computation. More important,
-because these functions produce key/value pairs, CouchDB is able to insert
-them into the B-tree storage engine, sorted by key. Lookups by key,
-or key range, are extremely efficient operations with a B-tree,
-described in `big O` notation as ``O(log N)`` and ``O(log N + K)``,
-respectively.
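The key property the text relies on, efficient lookup by a single key or a key range over sorted keys, can be sketched with Python's ``bisect`` over a sorted list. This is a stand-in for the B-tree, not CouchDB's actual storage code, but the complexity bounds are the same:

```python
import bisect

# Sorted (key, value) pairs, as a view index would store them.
keys = ["apple", "banana", "cherry", "date", "fig"]
values = [1, 2, 3, 4, 5]

def lookup(key):
    """O(log N) lookup of a single key."""
    i = bisect.bisect_left(keys, key)
    if i < len(keys) and keys[i] == key:
        return values[i]
    return None

def lookup_range(lo, hi):
    """O(log N + K) lookup of all K entries with lo <= key <= hi."""
    start = bisect.bisect_left(keys, lo)
    end = bisect.bisect_right(keys, hi)
    return list(zip(keys[start:end], values[start:end]))
```

A range query does two binary searches to find the boundaries, then walks the K entries in between, which is why it costs ``O(log N + K)`` rather than a full scan.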
-
-In CouchDB, we access documents and view results by key or key range.
-This is a direct mapping to the underlying operations performed on CouchDB's
-B-tree storage engine. Along with document inserts and updates,
-this direct mapping is the reason we describe CouchDB's API as being a thin
-wrapper around the database core.
-
-Being able to access results by key alone is a very important restriction
-because it allows us to make huge performance gains. As well as the massive
-speed improvements, we can partition our data over multiple nodes,
-without affecting our ability to query each node in isolation.
-`BigTable`_, `Hadoop`_, `SimpleDB`_, and `memcached`_ restrict object lookups
-by key for exactly these reasons.
-
-.. _BigTable: http://en.wikipedia.org/wiki/BigTable
-.. _Hadoop: http://hadoop.apache.org
-.. _SimpleDB: http://aws.amazon.com/simpledb/
-.. _memcached: http://memcached.org
-
-No Locking
-----------
-
-A table in a relational database is a single data structure. If you want to
-modify a table -- say, update a row -- the database system must ensure
-that nobody else is trying to update that row and that nobody can read from
-that row while it is being updated. The common way to handle this uses what's
-known as a lock. If multiple clients want to access a table, the first client
-gets the lock, making everybody else wait. When the first client's request is
-processed, the next client is given access while everybody else waits,
-and so on. This serial execution of requests, even when they arrived in
-parallel, wastes a significant amount of your server's processing power.
-Under high load, a relational database can spend more time figuring out who
-is allowed to do what, and in which order, than it does doing any actual work.
-
-.. note::
-    Modern relational databases also avoid locks by implementing MVCC under
-    the hood, but they hide it from the end user, who must still coordinate
-    concurrent changes to single rows or fields.
-
-Instead of locks, CouchDB uses `Multi-Version Concurrency Control` (MVCC) to
-manage concurrent access to the database. :ref:`intro/consistency-03`
-illustrates the differences between MVCC and traditional locking mechanisms.
-MVCC means that CouchDB can run at full speed, all the time,
-even under high load. Requests are run in parallel, making excellent use of
-every last drop of processing power your server has to offer.
-
-.. _intro/consistency-03:
-
-.. figure:: ../../images/intro-consistency-03.png
-    :align: center
-    :alt: MVCC means no locking
-
-    Figure 3. MVCC means no locking
-
-Documents in CouchDB are versioned, much like they would be in a regular
-version control system such as `Subversion`_. If you want to change
-a value in a document, you create an entire new version of that document
-and save it over the old one. After doing this, you end up with two versions
-of the same document, one old and one new.
-
-How does this offer an improvement over locks? Consider a set of requests
-wanting to access a document. The first request reads the document.
-While this is being processed, a second request changes the document.
-Since the second request includes a completely new version of the document,
-CouchDB can simply append it to the database without having to wait for the
-read request to finish.
-
-When a third request wants to read the same document, CouchDB will point it
-to the new version that has just been written. During this whole process,
-the first request could still be reading the original version.
-
-A read request will always see the most recent snapshot of your database at
-the time of the beginning of the request.
-
-.. _Subversion: http://subversion.apache.org/
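The behavior described above, where a writer appends a new version while earlier readers keep seeing the snapshot they started with, can be sketched as an append-only version list. This is a conceptual model of MVCC, not CouchDB's on-disk format:

```python
class VersionedDoc:
    """Append-only document: writes add versions, reads pin a snapshot."""

    def __init__(self, body):
        self.versions = [body]

    def write(self, body):
        self.versions.append(body)     # appending never blocks readers

    def snapshot(self):
        return len(self.versions) - 1  # index of the latest version

    def read(self, snap):
        return self.versions[snap]     # each reader sees its own snapshot

doc = VersionedDoc({"title": "There is Nothing Left to Lose"})
snap = doc.snapshot()                  # the first request starts reading here
doc.write({"title": "There is Nothing Left to Lose", "year": "1997"})
old = doc.read(snap)                   # the first request still sees version 0
new = doc.read(doc.snapshot())         # a later request sees the new version
```

The write completed without waiting for the in-flight read, and the read was never exposed to a half-written document.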
-
-Validation
-==========
-
-As application developers, we have to think about what sort of input we
-should accept and what we should reject. The expressive power to do this type
-of validation over complex data within a traditional relational database
-leaves a lot to be desired. Fortunately, CouchDB provides a powerful way to
-perform per-document validation from within the database.
-
-CouchDB can validate documents using JavaScript functions similar to those
-used for MapReduce. Each time you try to modify a document,
-CouchDB will pass the validation function a copy of the existing document,
-a copy of the new document, and a collection of additional information,
-such as user authentication details. The validation function now has the
-opportunity to approve or deny the update.
-
-By working with the grain and letting CouchDB do this for us,
-we save ourselves a tremendous amount of CPU cycles that would otherwise have
-been spent serializing object graphs from SQL, converting them into domain
-objects, and using those objects to do application-level validation.
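In CouchDB the validation function itself is JavaScript stored in a design document; the shape of the check it performs can be sketched in Python. The field names (``title``, ``owner``) and rules here are invented for illustration:

```python
def validate(new_doc, old_doc, user_ctx):
    """Approve or deny an update; raising an error rejects it.

    Mirrors the inputs a CouchDB validation function receives: the new
    document, the existing document (None on creation), and user details.
    """
    if "title" not in new_doc:
        raise ValueError("document must have a title")
    if old_doc is not None and user_ctx.get("name") != old_doc.get("owner"):
        raise ValueError("only the owner may modify this document")
    return True

# A well-formed creation by any user is accepted.
ok = validate({"title": "X", "owner": "alice"}, None, {"name": "alice"})
```

Because the database runs this check on every write, clients cannot bypass it the way they could bypass application-level validation.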
-
-Distributed Consistency
-=======================
-
-Maintaining consistency within a single database node is relatively easy for
-most databases. The real problems start to surface when you try to maintain
-consistency between multiple database servers. If a client makes a write
-operation on server `A`, how do we make sure that this is consistent with
-server `B`, or `C`, or `D`? For relational databases, this is a very complex
-problem with entire books devoted to its solution. You could use
-multi-master, single-master, partitioning, sharding, write-through caches,
-and all sorts of other complex techniques.
-
-Incremental Replication
-=======================
-
-CouchDB's operations take place within the context of a single document.
-As CouchDB achieves eventual consistency between multiple databases by using
-incremental replication, you no longer have to worry about your database
-servers being able to stay in constant communication. Incremental replication
-is a process where document changes are periodically copied between servers.
-We are able to build what's known as a *shared nothing* cluster of databases
-where each node is independent and self-sufficient, leaving no single point
-of contention across the system.
-
-Need to scale out your CouchDB database cluster? Just throw in another server.
-
-As illustrated in :ref:`intro/consistency-04`, with CouchDB's incremental
-replication, you can synchronize your data between any two databases however
-you like and whenever you like. After replication, each database is able
-to work independently.
-
-You could use this feature to synchronize database servers within a cluster
-or between data centers using a job scheduler such as cron,
-or you could use it to synchronize data with your laptop for offline work as
-you travel. Each database can be used in the usual fashion,
-and changes between databases can be synchronized later in both directions.
-
-.. _intro/consistency-04:
-
-.. figure:: ../../images/intro-consistency-04.png
-    :align: center
-    :alt: Incremental replication between CouchDB nodes
-
-    Figure 4. Incremental replication between CouchDB nodes
-
-What happens when you change the same document in two different databases and
-want to synchronize these with each other? CouchDB's replication system
-comes with automatic conflict detection and resolution. When CouchDB detects
-that a document has been changed in both databases, it flags this document
-as being in conflict, much like it would be in a regular version control
-system.
-
-This isn't as troublesome as it might first sound. When two versions of a
-document conflict during replication, the winning version is saved as the
-most recent version in the document's history. Instead of throwing the losing
-version away, as you might expect, CouchDB saves this as a previous version
-in the document's history, so that you can access it if you need to. This
-happens automatically and consistently, so both databases will make exactly
-the same choice.
-
-It is up to you to handle conflicts in a way that makes sense for your
-application. You can leave the chosen document versions in place,
-revert to the older version, or try to merge the two versions and save the
-result.
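Both databases must pick the same winner without talking to each other, so the choice has to be a pure function of the conflicting revisions themselves. A sketch of such a deterministic pick; CouchDB's real algorithm compares revision history length and then the revision identifiers, and this simplified version only captures that idea:

```python
def pick_winner(revs):
    """Deterministically choose a winning revision from a conflict set.

    Sorting by (generation number, revision suffix) means every replica,
    given the same conflicting revisions, arrives at the same winner.
    """
    def sort_key(rev):
        gen, _, suffix = rev.partition("-")
        return (int(gen), suffix)
    return max(revs, key=sort_key)

conflicts = ["2-aaa", "2-bbb", "1-zzz"]
winner = pick_winner(conflicts)
losers = sorted(set(conflicts) - {winner})  # kept in history, not discarded
```

No coordination is needed: the same inputs always yield the same winner, regardless of the order in which the revisions arrived.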
-
-Case Study
-==========
-
-Greg Borenstein, a friend and coworker, built a small library for converting
-Songbird playlists to JSON objects and decided to store these in CouchDB as
-part of a backup application. The completed software uses CouchDB's MVCC and
-document revisions to ensure that Songbird playlists are backed up robustly
-between nodes.
-
-.. note::
-    `Songbird`_ is a free software media player with an integrated web browser,
-    based on the Mozilla XULRunner platform. Songbird is available for Microsoft
-    Windows, Apple Mac OS X, Solaris, and Linux.
-
-    .. _Songbird: http://en.wikipedia.org/wiki/Songbird_%28software%29
-
-Let's examine the workflow of the Songbird backup application,
-first as a user backing up from a single computer, and then using Songbird to
-synchronize playlists between multiple computers. We'll see how document
-revisions turn what could have been a hairy problem into something that *just
-works*.
-
-The first time we use this backup application, we feed our playlists to the
-application and initiate a backup. Each playlist is converted to a JSON
-object and handed to a CouchDB database. As illustrated in
-:ref:`intro/consistency-05`, CouchDB hands back the document ID and
-revision of each playlist as it's saved to the database.
-
-.. _intro/consistency-05:
-
-.. figure:: ../../images/intro-consistency-05.png
-    :align: center
-    :alt: Backing up to a single database
-
-    Figure 5. Backing up to a single database
-
-After a few days, we find that our playlists have been updated and we want to
-back up our changes. After we have fed our playlists to the backup
-application, it fetches the latest versions from CouchDB,
-along with the corresponding document revisions. When the application hands
-back the new playlist document, CouchDB requires that the document revision
-is included in the request.
-
-CouchDB then makes sure that the document revision handed to it in the
-request matches the current revision held in the database. Because CouchDB
-updates the revision with every modification, if these two are out of sync it
-suggests that someone else has made changes to the document between the time
-we requested it from the database and the time we sent our updates. Making
-changes to a document after someone else has modified it without first
-inspecting those changes is usually a bad idea.
-
-Forcing clients to hand back the correct document revision is the heart of
-CouchDB's optimistic concurrency.
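The revision check at the heart of this optimistic concurrency can be sketched as a compare-and-swap on the stored revision. This is a toy in-memory model with integer revisions, not the HTTP API or CouchDB's hashed revision strings:

```python
class ConflictError(Exception):
    pass

class Database:
    """Toy store: every update must quote the revision it was based on."""

    def __init__(self):
        self.docs = {}  # doc_id -> (rev_number, body)

    def put(self, doc_id, body, based_on_rev=None):
        current = self.docs.get(doc_id)
        current_rev = current[0] if current else None
        if based_on_rev != current_rev:
            raise ConflictError(f"expected rev {current_rev}")
        new_rev = (current_rev or 0) + 1
        self.docs[doc_id] = (new_rev, body)
        return new_rev

db = Database()
rev1 = db.put("playlist", {"songs": ["tango-1"]})
rev2 = db.put("playlist", {"songs": ["tango-1", "tango-2"]}, based_on_rev=rev1)
# A second writer still holding rev1 would now get a conflict.
```

The stale writer is forced to fetch the current version, look at what changed, and resubmit, which is exactly the workflow the backup application follows.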
-
-We have a laptop we want to keep synchronized with our desktop computer.
-With all our playlists on our desktop, the first step is to
-"restore from backup" onto our laptop. This is the first time we've done this,
-so afterward our laptop should hold an exact replica of our desktop playlist
-collection.
-
-After editing our Argentine Tango playlist on our laptop to add a few new
-songs we've purchased, we want to save our changes. The backup application
-replaces the playlist document in our laptop CouchDB database and a new
-document revision is generated. A few days later, we remember our new songs
-and want to copy the playlist across to our desktop computer. As illustrated
-in :ref:`intro/consistency-06`, the backup application copies the new document
-and the new revision to the desktop CouchDB database. Both CouchDB databases
-now have the same document revision.
-
-.. _intro/consistency-06:
-
-.. figure:: ../../images/intro-consistency-06.png
-    :align: center
-    :alt: Synchronizing between two databases
-
-    Figure 6. Synchronizing between two databases
-
-Because CouchDB tracks document revisions, it ensures that updates like these
-will work only if they are based on current information. If we had made
-modifications to the playlist backups between synchronization,
-things wouldn't go as smoothly.
-
-We back up some changes on our laptop and forget to synchronize. A few days
-later, we're editing playlists on our desktop computer, make a backup,
-and want to synchronize this to our laptop. As illustrated in
-:ref:`intro/consistency-07`, when our backup application tries to replicate
-between the two databases, CouchDB sees that the changes being sent from our
-desktop computer are modifications of out-of-date documents and helpfully
-informs us that there has been a conflict.
-
-Recovering from this error is easy to accomplish from an application
-perspective. Just download CouchDB's version of the playlist and provide an
-opportunity to merge the changes or save local modifications into a new
-playlist.
-
-.. _intro/consistency-07:
-
-.. figure:: ../../images/intro-consistency-07.png
-    :align: center
-    :alt: Synchronization conflicts between two databases
-
-    Figure 7. Synchronization conflicts between two databases
-
-Wrapping Up
-===========
-
-CouchDB's design borrows heavily from web architecture and the lessons
-learned deploying massively distributed systems on that architecture.
-By understanding why this architecture works the way it does,
-and by learning to spot which parts of your application can be easily
-distributed and which parts cannot, you'll enhance your ability to design
-distributed and scalable applications, with CouchDB or without it.
-
-We've covered the main issues surrounding CouchDB's consistency model and
-hinted at some of the benefits to be had when you work *with* CouchDB and not
-against it. But enough theory -- let's get up and running and see what all the
-fuss is about!
diff --git a/src/intro/curl.rst b/src/intro/curl.rst
deleted file mode 100644
index 6c08d6a..0000000
--- a/src/intro/curl.rst
+++ /dev/null
@@ -1,145 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _intro/curl:
-
-==============================
-cURL: Your Command Line Friend
-==============================
-
-The ``curl`` utility is a command line tool available on Unix, Linux, Mac OS X,
-Windows, and many other platforms. ``curl`` provides easy access to the HTTP
-protocol (among others) directly from the command line and is therefore an
-ideal way of interacting with CouchDB over the HTTP REST API.
-
-For simple ``GET`` requests you can supply the URL of the request. For example,
-to get basic server information:
-
-.. code-block:: bash
-
-    shell> curl http://admin:password@127.0.0.1:5984
-
-This returns the server information (formatted in the output below for
-clarity):
-
-.. code-block:: json
-
-    {
-      "couchdb": "Welcome",
-      "version": "3.0.0",
-      "git_sha": "83bdcf693",
-      "uuid": "56f16e7c93ff4a2dc20eb6acc7000b71",
-      "features": [
-        "access-ready",
-        "partitioned",
-        "pluggable-storage-engines",
-        "reshard",
-        "scheduler"
-      ],
-      "vendor": {
-        "name": "The Apache Software Foundation"
-      }
-    }
-
-.. note::
-    For some URLs, especially those that include special characters such as
-    ampersand, exclamation mark, or question mark, you should quote the URL you
-    are specifying on the command line. For example:
-
-    .. code-block:: bash
-
-        shell> curl 'http://couchdb:5984/_uuids?count=5'
-
-.. note::
-    On Microsoft Windows, use double-quotes anywhere you see single-quotes in
-    the following examples. Use doubled double-quotes ("") anywhere you see
-    single double-quotes. For example, if you see:
-
-    .. code-block:: bash
-
-        shell> curl -X PUT 'http://127.0.0.1:5984/demo/doc' -d '{"motto": "I love gnomes"}'
-
-    you should replace it with:
-
-    .. code-block:: bash
-
-        shell> curl -X PUT "http://127.0.0.1:5984/demo/doc" -d "{""motto"": ""I love gnomes""}"
-
-    If you prefer, ``^"`` and ``\"`` may be used to escape the double-quote
-    character in quoted strings instead.
-
-You can explicitly set the HTTP command using the ``-X`` command line option.
-For example, when creating a database, you set the name of the database in the
-URL you send using a PUT request:
-
-.. code-block:: bash
-
-    shell> curl -X PUT http://user:pass@127.0.0.1:5984/demo
-    {"ok":true}
-
-But to obtain the database information you use a ``GET`` request (with
-the return information formatted for clarity):
-
-.. code-block:: bash
-
-    shell> curl -X GET http://user:pass@127.0.0.1:5984/demo
-    {
-        "compact_running" : false,
-        "doc_count" : 0,
-        "db_name" : "demo",
-        "purge_seq" : 0,
-        "committed_update_seq" : 0,
-        "doc_del_count" : 0,
-        "disk_format_version" : 5,
-        "update_seq" : 0,
-        "instance_start_time" : "0",
-        "disk_size" : 79
-    }
-
-For certain operations, you must specify the content type of the request,
-which you do by setting the ``Content-Type`` header using the ``-H``
-command-line option:
-
-.. code-block:: bash
-
-    shell> curl -H 'Content-Type: application/json' http://127.0.0.1:5984/_uuids
-
-You can also submit 'payload' data, that is, data in the body of the HTTP
-request, using the ``-d`` option. This is useful if you need to submit JSON
-structures, for example document data, as part of the request. For example, to
-submit a simple document to the ``demo`` database:
-
-.. code-block:: bash
-
-    shell> curl -H 'Content-Type: application/json' \
-                -X POST http://user:pass@127.0.0.1:5984/demo \
-                -d '{"company": "Example, Inc."}'
-    {"ok":true,"id":"8843faaf0b831d364278331bc3001bd8",
-     "rev":"1-33b9fbce46930280dab37d672bbc8bb9"}
-
-In the above example, the argument after the ``-d`` option is the JSON of the
-document we want to submit.
-
-The document can be accessed by using the automatically generated document ID
-that was returned:
-
-.. code-block:: bash
-
-    shell> curl -X GET http://user:pass@127.0.0.1:5984/demo/8843faaf0b831d364278331bc3001bd8
-    {"_id":"8843faaf0b831d364278331bc3001bd8",
-     "_rev":"1-33b9fbce46930280dab37d672bbc8bb9",
-     "company":"Example, Inc."}
-
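In a script you would capture the POST response and pull the generated ID and revision out of it rather than copying them by hand. A sketch, using the response shown above as a canned value (in practice you would capture it with ``resp=$(curl ...)``; ``jq`` works equally well if installed):

```shell
# Hypothetical captured response; in practice: resp=$(curl -X POST ... -d '...').
resp='{"ok":true,"id":"8843faaf0b831d364278331bc3001bd8","rev":"1-33b9fbce46930280dab37d672bbc8bb9"}'
# Extract the generated document ID and revision with python3.
id=$(printf '%s' "$resp" | python3 -c 'import sys, json; print(json.load(sys.stdin)["id"])')
rev=$(printf '%s' "$resp" | python3 -c 'import sys, json; print(json.load(sys.stdin)["rev"])')
echo "GET /demo/$id"
```

The ``rev`` value is what you would pass back (as ``_rev`` or ``?rev=``) when updating or deleting the document.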
-The API samples in the :ref:`api/basics` show the HTTP command, URL and any
-payload information that needs to be submitted (and the expected return value).
-All of these examples can be reproduced using ``curl`` with the command-line
-examples shown above.
diff --git a/src/intro/index.rst b/src/intro/index.rst
deleted file mode 100644
index a01fb1e..0000000
--- a/src/intro/index.rst
+++ /dev/null
@@ -1,52 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _intro:
-
-============
-Introduction
-============
-
-CouchDB is a database that completely embraces the web. Store your data with
-JSON documents. Access your documents with your web browser, :ref:`via HTTP
-<api/basics>`. :ref:`Query <api/doc>`, :ref:`combine <views>`, and
-:ref:`transform <listfun>` your documents with :ref:`JavaScript
-<query-server/js>`. CouchDB works well with modern web and mobile apps.  You
-can distribute your data, efficiently using CouchDB’s :ref:`incremental
-replication <replication/intro>`. CouchDB supports master-master setups with
-:ref:`automatic conflict <replication/conflicts>` detection.
-
-CouchDB comes with a suite of features, such as on-the-fly document
-transformation and real-time :ref:`change notifications <changes>`, that make
-web development a breeze. It even comes with an easy to use web administration
-console, served directly out of CouchDB! We care a lot about `distributed
-scaling`_.  CouchDB is highly available and partition tolerant, but is also
-:ref:`eventually consistent <intro/consistency>`. And we care *a lot* about
-your data.  CouchDB has a fault-tolerant storage engine that puts the safety
-of your data first.
-
-In this section you'll learn about every basic bit of CouchDB, see what
-concepts and technologies it is built upon, and walk through a short tutorial
-that teaches you how to use CouchDB.
-
-.. _distributed scaling: http://en.wikipedia.org/wiki/CAP_theorem
-
-.. toctree::
-    :maxdepth: 2
-
-    overview
-    why
-    consistency
-    curl
-    security
-    tour
-    api
diff --git a/src/intro/overview.rst b/src/intro/overview.rst
deleted file mode 100644
index b58ce0e..0000000
--- a/src/intro/overview.rst
+++ /dev/null
@@ -1,361 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _intro/overview:
-
-==================
-Technical Overview
-==================
-
-Document Storage
-================
-
-A CouchDB server hosts named databases, which store **documents**.
-Each document is uniquely named in the database, and CouchDB provides
-a `RESTful`_ :ref:`HTTP API <api/basics>` for reading and updating (add, edit,
-delete) database documents.
-
-Documents are the primary unit of data in CouchDB and consist of any number
-of fields and attachments. Documents also include metadata that’s maintained
-by the database system. Document fields are uniquely named and contain values
-of :ref:`varying types <json>` (text, number, boolean, lists, etc),
-and there is no set limit to text size or element count.
-
-The CouchDB document update model is lockless and optimistic.
-Document edits are made by client applications loading documents,
-applying changes, and saving them back to the database. If another client
-editing the same document saves their changes first, the client gets an edit
-conflict error on save. To resolve the update conflict, the latest document
-version can be opened, the edits reapplied and the update tried again.
-
-Single document updates (add, edit, delete) are all or nothing, either succeeding
-entirely or failing completely. The database never contains partially saved
-or edited documents.
-
-.. _RESTful: http://en.wikipedia.org/wiki/REST
-
-ACID Properties
-===============
-
-The CouchDB file layout and commitment system features all `Atomic Consistent
-Isolated Durable` (`ACID`_) properties. On-disk, CouchDB never overwrites
-committed data or associated structures, ensuring the database file is always
-in a consistent state. This is a "crash-only" design: the CouchDB server
-does not go through a shutdown process; it is simply terminated.
-
-Document updates (add, edit, delete) are serialized, except for binary blobs
-which are written concurrently. Database readers are never locked out and
-never have to wait on writers or other readers. Any number of clients can be
-reading documents without being locked out or interrupted by concurrent
-updates, even on the same document. CouchDB read operations use a
-`Multi-Version Concurrency Control` (`MVCC`_) model where each client sees a
-consistent snapshot of the database from the beginning to the end of the read
-operation. This means that CouchDB can guarantee transactional semantics on
-a per-document basis.
-
-Documents are indexed in `B-trees`_ by their name (DocID) and a Sequence ID.
-Each update to a database instance generates a new sequential number.
-Sequence IDs are used later for incrementally finding changes in a database.
-These B-tree indexes are updated simultaneously when documents are saved or
-deleted. The index updates always occur at the end of the file (append-only
-updates).
-
-Documents have the advantage of data being conveniently packaged for storage,
-rather than being split across numerous tables and rows as in most other
-database systems. When documents are committed to disk, the document fields
-and metadata are packed into buffers, sequentially one document after another
-(helpful later for efficient building of views).
-
-When CouchDB documents are updated, all data and associated indexes are
-flushed to disk and the transactional commit always leaves the database
-in a completely consistent state. Commits occur in two steps:
-
-#. All document data and associated index updates are synchronously flushed
-   to disk.
-
-#. The updated database header is written in two consecutive, identical chunks
-   to make up the first 4k of the file, and then synchronously flushed to disk.
-
-In the event of an OS crash or power failure during step 1,
-the partially flushed updates are simply forgotten on restart. If such a
-crash happens during step 2 (committing the header), a surviving copy of the
-previous identical headers will remain, ensuring coherency of all previously
-committed data. Excepting the header area, consistency checks or fix-ups
-after a crash or a power failure are never necessary.
-
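The two-step commit above can be sketched as a toy append-only file: this is an illustration of why the duplicated header makes torn writes safe, not CouchDB's actual on-disk format.

```shell
# Toy model of the append-only commit (illustration only, not the real file format).
db=$(mktemp)
# Step 1: document data and index updates are appended and flushed.
printf 'doc={"a":1}\nindex=a\n' >> "$db"
# Step 2: the header is written as two identical copies, so a crash that
# tears one of the two writes still leaves one valid copy on disk.
printf 'header:seq=1\nheader:seq=1\n' >> "$db"
count=$(grep -c '^header:' "$db")
echo "$count"
rm -f "$db"
```

On restart, a reader that finds one corrupt header copy simply falls back to the surviving identical copy, which describes the last fully committed state.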
-.. _ACID: http://en.wikipedia.org/wiki/ACID
-.. _MVCC: http://en.wikipedia.org/wiki/Multiversion_concurrency_control
-.. _B-trees: http://en.wikipedia.org/wiki/B-tree
-
-Compaction
-==========
-
-Wasted space is recovered by occasional compaction. On schedule, or when the
-database file exceeds a certain amount of wasted space, the compaction process
-clones all the active data to a new file and then discards the old file.
-The database remains completely online the entire time and all updates and
-reads are allowed to complete successfully. The old database file is deleted
-only when all the data has been copied and all users transitioned to the new
-file.
-
-Views
-=====
-
-ACID properties only deal with storage and updates, but we also need the ability
-to show our data in interesting and useful ways. Unlike SQL databases where
-data must be carefully decomposed into tables, data in CouchDB is stored in
-semi-structured documents. CouchDB documents are flexible and each has its
-own implicit structure, which alleviates the most difficult problems and
-pitfalls of bi-directionally replicating table schemas and their contained data.
-
-But beyond acting as a fancy file server, a simple document model for data
-storage and sharing is too simple to build real applications on -- it simply
-doesn't do enough of the things we want and expect. We want to slice and dice
-and see our data in many different ways. What is needed is a way to filter,
-organize and report on data that hasn't been decomposed into tables.
-
-.. seealso::
-    :ref:`views`
-
-View Model
-----------
-
-To address this problem of adding structure back to unstructured and
-semi-structured data, CouchDB integrates a view model. Views are the method
-of aggregating and reporting on the documents in a database, and are built
-on-demand to aggregate, join and report on database documents. Because views
-are built dynamically and don’t affect the underlying document, you can have
-as many different view representations of the same data as you like.
-
-View definitions are strictly virtual and only display the documents from the
-current database instance, making them separate from the data they display
-and compatible with replication. CouchDB views are defined inside special
-**design documents** and can replicate across database instances like
-regular documents, so that not only data replicates in CouchDB,
-but entire application designs replicate too.
-
-JavaScript View Functions
--------------------------
-
-Views are defined using JavaScript functions acting as the map part in a
-`map-reduce system`_. A :ref:`view function <viewfun>` takes a CouchDB document
-as an argument and then does whatever computation it needs to do to determine
-the data that is to be made available through the view, if any.
-It can add multiple rows to the view based on a single document,
-or it can add no rows at all.
-
-.. _map-reduce system: http://en.wikipedia.org/wiki/MapReduce
-
-.. seealso::
-    :ref:`viewfun`
-
-View Indexes
-------------
-
-Views are a dynamic representation of the actual document contents of a
-database, and CouchDB makes it easy to create useful views of data.
-But generating a view of a database with hundreds of thousands or millions of
-documents is time- and resource-consuming; it's not something the system
-should do from scratch each time.
-
-To keep view querying fast, the view engine maintains indexes of its views,
-and incrementally updates them to reflect changes in the database.
-CouchDB’s core design is largely optimized around the need for efficient,
-incremental creation of views and their indexes.
-
-Views and their functions are defined inside special "design" documents,
-and a design document may contain any number of uniquely named view functions.
-When a user opens a view and its index is automatically updated, all the views
-in the same design document are indexed as a single group.
-
-The view builder uses the database sequence ID to determine if the view group
-is fully up-to-date with the database. If not, the view engine examines
-all database documents (in packed sequential order) changed since the last
-refresh. Documents are read in the order they occur in the disk file,
-reducing the frequency and cost of disk head seeks.
-
-The views can be read and queried simultaneously while also being refreshed.
-If a client is slowly streaming out the contents of a large view,
-the same view can be concurrently opened and refreshed for another client
-without blocking the first client. This is true for any number of
-simultaneous client readers, who can read and query the view while the index
-is concurrently being refreshed for other clients without causing problems
-for the readers.
-
-As documents are processed by the view engine through your 'map' and 'reduce'
-functions, their previous row values are removed from the view indexes, if
-they exist. If the document is selected by a view function, the function results
-are inserted into the view as a new row.
-
-When view index changes are written to disk, the updates are always appended
-at the end of the file, serving to both reduce disk head seek times during
-disk commits and to ensure crashes and power failures can not cause
-corruption of indexes. If a crash occurs while updating a view index,
-the incomplete index updates are simply lost and rebuilt incrementally from
-its previously committed state.
-
-Security and Validation
-=======================
-
-To control who can read and update documents, CouchDB has a simple reader
-access and update validation model that can be extended to implement custom
-security models.
-
-.. seealso::
-    :ref:`api/db/security`
-
-Administrator Access
---------------------
-
-CouchDB database instances have administrator accounts. Administrator
-accounts can create other administrator accounts and update design documents.
-Design documents are special documents containing view definitions and other
-special formulas, as well as regular fields and blobs.
-
-Update Validation
------------------
-
-As documents are written to disk, they can be validated dynamically by
-JavaScript functions for both security and data validation. When the document
-passes all the formula validation criteria, the update is allowed to continue.
-If the validation fails, the update is aborted and the user client gets an
-error response.
-
-Both the user's credentials and the updated document are given as inputs to
-the validation formula, and can be used to implement custom security models
-by validating a user's permissions to update a document.
-
-A basic "author only" update document model is trivial to implement,
-where document updates are validated to check if the user is listed in an
-"author" field in the existing document. More dynamic models are also possible,
-like checking a separate user account profile for permission settings.
-
-The update validations are enforced for both live usage and replicated
-updates, ensuring security and data validation in a shared, distributed system.
-
-.. seealso::
-    :ref:`vdufun`
-
-Distributed Updates and Replication
-===================================
-
-CouchDB is a peer-based distributed database system. It allows users and servers
-to access and update the same shared data while disconnected. Those changes can
-then be replicated bi-directionally later.
-
-The CouchDB document storage, view and security models are designed to work
-together to make true bi-directional replication efficient and reliable.
-Both documents and designs can replicate, allowing full database applications
-(including application design, logic and data) to be replicated to laptops
-for offline use, or replicated to servers in remote offices where slow or
-unreliable connections make sharing data difficult.
-
-The replication process is incremental. At the database level,
-replication only examines documents updated since the last replication.
-If replication fails at any step, due to network
-problems or crash for example, the next replication restarts at the last
-checkpoint.
-
-Partial replicas can be created and maintained. Replication can be filtered
-by a JavaScript function, so that only particular documents or those meeting
-specific criteria are replicated. This can allow users to take subsets of a
-large shared database application offline for their own use, while maintaining
-normal interaction with the application and that subset of data.
-
-Conflicts
----------
-
-Conflict detection and management are key issues for any distributed edit
-system. The CouchDB storage system treats edit conflicts as a common state,
-not an exceptional one. The conflict handling model is simple and
-"non-destructive" while preserving single document semantics and allowing for
-decentralized conflict resolution.
-
-CouchDB allows for any number of conflicting documents to exist
-simultaneously in the database, with each database instance deterministically
-deciding which document is the "winner" and which are conflicts. Only the
-winning document can appear in views, while "losing" conflicts are still
-accessible and remain in the database until deleted or purged during
-database compaction. Because conflict documents are still regular documents,
-they replicate just like regular documents and are subject to the same
-security and validation rules.
-
-When distributed edit conflicts occur, every database replica sees the same
-winning revision and each has the opportunity to resolve the conflict.
-Resolving conflicts can be done manually or, depending on the nature of the
-data and the conflict, by automated agents. The system makes decentralized
-conflict resolution possible while maintaining single document database
-semantics.
-
-Conflict management continues to work even if multiple disconnected users or
-agents attempt to resolve the same conflicts. If resolved conflicts result in
-more conflicts, the system accommodates them in the same manner, determining
-the same winner on each machine and maintaining single document semantics.
-
-.. seealso::
-    :ref:`replication/conflicts`
-
-Applications
-------------
-
-Using just the basic replication model, many traditionally single server
-database applications can be made distributed with almost no extra work.
-CouchDB replication is designed to be immediately useful for basic database
-applications, while also being extendable for more elaborate and full-featured
-uses.
-
-With very little database work, it is possible to build a distributed
-document management application with granular security and full revision
-histories. Updates to documents can be implemented to exploit incremental
-field and blob replication, where replicated updates are nearly as efficient
-and incremental as the actual edit differences ("diffs").
-
-Implementation
-==============
-
-CouchDB is built on the `Erlang OTP platform`_, a functional,
-concurrent programming language and development platform. Erlang was
-developed for real-time telecom applications with an extreme emphasis on
-reliability and availability.
-
-Both in syntax and semantics, Erlang is very different from conventional
-programming languages like C or Java. Erlang uses lightweight "processes" and
-message passing for concurrency; it has no shared-state threading, and all
-data is immutable. The robust, concurrent nature of Erlang is ideal for a
-database server.
-
-CouchDB is designed for lock-free concurrency, in the conceptual model and
-the actual Erlang implementation. Reducing bottlenecks and avoiding locks
-keeps the entire system working predictably under heavy loads. CouchDB can
-accommodate many clients replicating changes, opening and updating documents,
-and querying views whose indexes are simultaneously being refreshed for
-other clients, without needing locks.
-
-For higher availability and more concurrent users, CouchDB is designed for
-"shared nothing" clustering. In a "shared nothing" cluster, each machine
-is independent and replicates data with its cluster mates, allowing individual
-server failures with zero downtime. And because consistency scans
-and fix-ups aren’t needed on restart,
-if the entire cluster fails -- due to a power outage in a datacenter,
-for example -- the entire CouchDB distributed system becomes immediately
-available after a restart.
-
-CouchDB is built from the start with a consistent vision of a distributed
-document database system. Unlike cumbersome attempts to bolt distributed
-features on top of the same legacy models and databases,
-it is the result of careful ground-up design, engineering and integration.
-The document, view, security and replication models, the special purpose query
-language, the efficient and robust disk layout and the concurrent and reliable
-nature of the Erlang platform are all carefully integrated for a reliable
-and efficient system.
-
-.. _Erlang OTP platform: http://www.erlang.org/
diff --git a/src/intro/security.rst b/src/intro/security.rst
deleted file mode 100644
index 0a4aebb..0000000
--- a/src/intro/security.rst
+++ /dev/null
@@ -1,530 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _intro/security:
-
-========
-Security
-========
-
-In this document, we'll look at the basic security mechanisms in CouchDB:
-`Basic Authentication` and `Cookie Authentication`. This is how CouchDB
-handles users and protects their credentials.
-
-Authentication
-==============
-
-CouchDB has the idea of an *admin user* (e.g. an administrator, a super user,
-or root) that is allowed to do anything to a CouchDB installation. By default,
-one admin user **must** be created for CouchDB to start up successfully.
-
-CouchDB also defines a set of requests that only admin users are allowed to
-do. If you have defined one or more specific admin users, CouchDB will ask for
-identification for certain requests:
-
-- Creating a database (:put:`PUT /database </{db}>`)
-- Deleting a database (:put:`DELETE /database </{db}>`)
-- Setting up database security (:put:`PUT /database/_security
-  </{db}/_security>`)
-- Creating a design document (:put:`PUT /database/_design/app
-  </{db}/_design/{ddoc}>`)
-- Updating a design document (:put:`PUT /database/_design/app?rev=1-4E2
-  </{db}/_design/{ddoc}>`)
-- Deleting a design document (:delete:`DELETE /database/_design/app?rev=2-6A7
-  </{db}/_design/{ddoc}>`)
-- Triggering compaction (:post:`POST /database/_compact </{db}/_compact>`)
-- Reading the task status list (:get:`GET /_active_tasks </_active_tasks>`)
-- Restarting the server on a given node
-  (:post:`POST /_node/{node-name}/_restart </_restart>`)
-- Reading the active configuration
-  (:get:`GET /_node/{node-name}/_config </_config>`)
-- Updating the active configuration
-  (:put:`PUT /_node/{node-name}/_config/section/key </_config/{section}/{key}>`)
-
-Creating a New Admin User
--------------------------
-
-If your installation process did not set up an admin user, you will have to add
-one to the configuration file by hand and restart CouchDB first. For the purposes of
-this example, we'll create a default ``admin`` user with the password ``password``.
-
-.. warning::
-    Don't just type in the following without thinking! Pick a good name for your
-    administrator user that isn't easily guessable, and pick a secure password.
-
-To the end of your ``etc/local.ini`` file, after the ``[admins]`` line, add the text
-``admin = password``, so it looks like this:
-
-.. code-block:: ini
-
-    [admins]
-    admin = password
-
-(Don't worry about the password being in plain text; we'll come back to this.)
-
-Now, restart CouchDB using the method appropriate for your operating system.
-You should now be able to access CouchDB using your new administrator account::
-
-    > curl http://admin:password@127.0.0.1:5984/_up
-    {"status":"ok","seeds":{}}
-
-Great!
-
-Let's create an admin user through the HTTP API. We'll call her ``anna``, and
-her password is ``secret``.  Note the double quotes in the following code; they
-are needed to denote a string value for the :ref:`configuration API
-<api/config>`::
-
-    > HOST="http://admin:password@127.0.0.1:5984"
-    > NODENAME="_local"
-    > curl -X PUT $HOST/_node/$NODENAME/_config/admins/anna -d '"secret"'
-    ""
-
-As per the :ref:`_config <api/config>` API's behavior, we're getting the previous value
-for the config item we just wrote. Since our admin user didn't exist, we get an empty
-string.
-
-Please note that ``_local`` serves as an alias for the local node name, so for
-all configuration URLs, ``NODENAME`` may be set to ``_local`` to interact with
-the local node’s configuration.
-
-.. seealso::
-    :ref:`Node Management <cluster/nodes>`
-
-Hashing Passwords
-^^^^^^^^^^^^^^^^^
-
-Seeing the plain-text password is scary, isn't it? No worries, CouchDB doesn't
-show the plain-text password anywhere. It gets hashed right away. Go ahead and
-look at your ``local.ini`` file now. You'll see that CouchDB has rewritten the
-plain text passwords so they are hashed:
-
-.. code-block:: ini
-
-    [admins]
-    admin = -pbkdf2-71c01cb429088ac1a1e95f3482202622dc1e53fe,226701bece4ae0fc9a373a5e02bf5d07,10
-    anna = -pbkdf2-2d86831c82b440b8887169bd2eebb356821d621b,5e11b9a9228414ab92541beeeacbf125,10
-
-The hash is that big, ugly, long string that starts out with ``-pbkdf2-``.
-
-To compare a plain-text password during authentication with the stored hash,
-the hashing algorithm is run and the resulting hash is compared to the stored
-hash. The probability of two identical hashes for different passwords is too
-insignificant to mention (c.f. `Bruce Schneier`_). Should the stored hash fall
-into the hands of an attacker, it is, by current standards, way too inconvenient
-(i.e., it'd take a lot of money and time) to find the plain-text password from
-the hash.
-
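The stored format ``-pbkdf2-<derived key>,<salt>,<iterations>`` can be reproduced in outline. The sketch below assumes a SHA-1 PRF, a 20-byte derived key, and the salt and iteration count from the example above; the exact derivation settings may differ between CouchDB versions, so treat this as illustrative:

```shell
# Sketch of the "-pbkdf2-<derived key>,<salt>,<iterations>" format.
# Assumption: SHA-1 PRF and a 20-byte key; real settings vary by version.
password='secret'
salt='5e11b9a9228414ab92541beeeacbf125'
iterations=10
dk=$(python3 -c "import hashlib; print(hashlib.pbkdf2_hmac('sha1', b'$password', b'$salt', $iterations, 20).hex())")
echo "-pbkdf2-$dk,$salt,$iterations"
```

Because the salt and iteration count are stored alongside the hash, CouchDB can rerun the same derivation at login time and compare the results.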
-.. _Bruce Schneier: http://en.wikipedia.org/wiki/Bruce_Schneier
-
-When CouchDB starts up, it reads a set of ``.ini`` files with config settings. It
-loads these settings into an internal data store (not a database). The config
-API lets you read the current configuration as well as change it and create new
-entries. CouchDB writes any changes back to the ``.ini`` files.
-
-The ``.ini`` files can also be edited by hand when CouchDB is not running.
-Instead of creating the admin user as we showed previously, you could have
-stopped CouchDB, opened your ``local.ini``, added ``anna = secret`` to the
-:config:section:`admins`, and restarted CouchDB. Upon reading the new line from
-``local.ini``, CouchDB would run the hashing algorithm and write back the hash
-to ``local.ini``, replacing the plain-text password — just as it did for our
-original ``admin`` user. To make sure CouchDB only hashes plain-text passwords
-and not an existing hash a second time, it prefixes the hash with ``-pbkdf2-``,
-to distinguish between plain-text passwords and `PBKDF2`_ hashed passwords. This
-means your plain-text password can't start with the characters ``-pbkdf2-``,
-but that's pretty unlikely to begin with.
-
-.. _PBKDF2: http://en.wikipedia.org/wiki/PBKDF2
-
-.. _intro/security/basicauth:
-
-Basic Authentication
---------------------
-
-CouchDB will not allow us to create new databases unless we give the correct admin user
-credentials. Let's verify::
-
-    > HOST="http://127.0.0.1:5984"
-    > curl -X PUT $HOST/somedatabase
-    {"error":"unauthorized","reason":"You are not a server admin."}
-
-That looks about right. Now we try again with the correct credentials::
-
-    > HOST="http://anna:secret@127.0.0.1:5984"
-    > curl -X PUT $HOST/somedatabase
-    {"ok":true}
-
-If you have ever accessed a website or FTP server that was password-protected,
-the ``username:password@`` URL variant should look familiar.
-
-If you are security conscious, the missing ``s`` in ``http://`` will make you
-nervous. We're sending our password to CouchDB in plain text. This is a bad
-thing, right? Yes, but consider our scenario: CouchDB listens on ``127.0.0.1``
-on a development box that we're the sole user of. Who could possibly sniff our
-password?
-
-If you are in a production environment, however, you need to reconsider. Will
-your CouchDB instance communicate over a public network? Even a LAN shared
-with other colocation customers is public. There are multiple ways to secure
-communication between you or your application and CouchDB that exceed the
-scope of this documentation. CouchDB as of version :ref:`1.1.0 <release/1.1.0>`
-comes with :ref:`SSL built in <config/ssl>`.
-
-.. seealso::
-    :ref:`Basic Authentication API Reference <api/auth/basic>`
-
-.. _intro/security/cookie:
-
-Cookie Authentication
----------------------
-
-Basic authentication that uses plain-text passwords is nice and convenient,
-but not very secure if no extra measures are taken. It is also a very poor
-user experience. If you use basic authentication to identify admins,
-your application's users need to deal with an ugly, unstylable browser modal
-dialog that signals "unprofessional" more than anything else.
-
-To remedy some of these concerns, CouchDB supports cookie authentication.
-With cookie authentication your application doesn't have to include the ugly
-login dialog that the users' browsers come with. You can use a regular HTML
-form to submit logins to CouchDB. Upon receipt, CouchDB will generate a
-one-time token that the client can use in its next request to CouchDB. When
-CouchDB sees the token in a subsequent request, it will authenticate the user
-based on the token without the need to see the password again. By default,
-a token is valid for 10 minutes.
-
-To obtain the first token and thus authenticate a user for the first time,
-the username and password must be sent to the :ref:`_session <api/auth/session>`
-API. The API is smart enough to decode HTML form submissions, so you don't have
-to resort to any smarts in your application.
-
-If you are not using HTML forms to log in, you need to send an HTTP request
-that looks as if an HTML form generated it. Luckily, this is super simple::
-
-    > HOST="http://127.0.0.1:5984"
-    > curl -vX POST $HOST/_session \
-           -H 'Content-Type:application/x-www-form-urlencoded' \
-           -d 'name=anna&password=secret'
-
-CouchDB replies, and we'll give you some more detail::
-
-    < HTTP/1.1 200 OK
-    < Set-Cookie: AuthSession=YW5uYTo0QUIzOTdFQjrC4ipN-D-53hw1sJepVzcVxnriEw;
-    < Version=1; Path=/; HttpOnly
-    > ...
-    <
-    {"ok":true}
-
-A :statuscode:`200` response code tells us all is well, a :header:`Set-Cookie`
-header includes the token we can use for the next request, and the standard JSON
-response tells us again that the request was successful.
-
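To see what a client does with that header, here is a short sketch (Python, standard library only) of pulling the ``AuthSession`` token out of a ``Set-Cookie`` header; the header value is the one from the example response above.

```python
from http.cookies import SimpleCookie

# The Set-Cookie value CouchDB returned for POST /_session
# (token copied from the example response above).
header = ("AuthSession=YW5uYTo0QUIzOTdFQjrC4ipN-D-53hw1sJepVzcVxnriEw; "
          "Version=1; Path=/; HttpOnly")

cookie = SimpleCookie()
cookie.load(header)

# This is the value to send back in the Cookie header of the next request.
token = cookie["AuthSession"].value
print(token)
```

Any HTTP client library will normally handle this for you; the sketch only shows what travels over the wire.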
-Now we can use this token to make another request as the same user without
-sending the username and password again::
-
-    > curl -vX PUT $HOST/mydatabase \
-           --cookie AuthSession=YW5uYTo0QUIzOTdFQjrC4ipN-D-53hw1sJepVzcVxnriEw \
-           -H "X-CouchDB-WWW-Authenticate: Cookie" \
-           -H "Content-Type:application/x-www-form-urlencoded"
-    {"ok":true}
-
-You can keep using this token for 10 minutes by default. After 10 minutes you
-need to authenticate your user again. The token lifetime can be configured
-with the timeout (in seconds) setting in the :ref:`chttpd_auth
-<config/chttpd_auth>` configuration section.
-
-.. seealso::
-    :ref:`Cookie Authentication API Reference <api/auth/cookie>`
-
-Authentication Database
-=======================
-
-You may already have noticed that CouchDB administrators are defined in the
-config file, and be wondering whether regular users are stored there too. They are not.
-CouchDB has a special `authentication database`, named ``_users`` by default,
-that stores all registered users as JSON documents.
-
-This special database is a `system database`. This means that while it shares
-the common :ref:`database API <api/database>`, there are some
-special security-related constraints applied. Below is a list of how the
-`authentication database` is different from the other databases.
-
-- Only administrators may browse the list of all documents
-  (:get:`GET /_users/_all_docs </{db}/_all_docs>`)
-- Only administrators may listen to :ref:`changes feed
-  <changes>` (:get:`GET /_users/_changes </{db}/_changes>`)
-- Only administrators may execute design functions like :ref:`views <viewfun>`.
-- There is a special design document ``_auth`` that cannot be modified
-- Every document except the `design documents` represents a registered
-  CouchDB user and belongs to that user
-- By default, the ``_security`` settings of the ``_users`` database disallow
-  users from accessing or modifying documents
-
-.. note::
-
-    Settings can be changed so that users do have access to the ``_users`` database,
-    but even then they may only access (:get:`GET /_users/org.couchdb.user:Jan
-    </{db}/{docid}>`) or modify (:put:`PUT /_users/org.couchdb.user:Jan
-    </{db}/{docid}>`) documents that they own. This will not be possible in CouchDB 4.0.
-
-These draconian rules are necessary since CouchDB cares about its users'
-personal information and will not disclose it to just anyone. Often, user
-documents contain system information like `login`, `password hash` and `roles`,
-as well as sensitive personal information like real name, email, phone number,
-special internal identifiers and more. This is not information that you
-want to share with the world.
-
-Users Documents
----------------
-
-Each CouchDB user is stored in document format. These documents contain
-several *mandatory* fields that CouchDB needs for authentication:
-
-- **_id** (*string*): Document ID. Contains user's login with special prefix
-  :ref:`org.couchdb.user`
-- **derived_key** (*string*): `PBKDF2`_ key derived from salt/iterations.
-- **name** (*string*): User's name, a.k.a. login. **Immutable**: you cannot
-  rename an existing user - you have to create a new one
-- **roles** (*array* of *string*): List of user roles. CouchDB doesn't provide
-  any built-in roles, so you're free to define your own depending on your needs.
-  However, you cannot set system roles like ``_admin`` there. Also, only
-  administrators may assign roles to users - by default all users have no roles
-- **password** (*string*): A plaintext password can be provided, but will be replaced
-  by hashed fields before the document is actually stored.
-- **password_sha** (*string*): Hashed password with salt. Used for ``simple``
-  `password_scheme`
-- **password_scheme** (*string*): Password hashing scheme. May be ``simple`` or
-  ``pbkdf2``
-- **salt** (*string*): Hash salt. Used for both ``simple`` and ``pbkdf2``
-  ``password_scheme`` options.
-- **iterations** (*integer*): Number of iterations used to derive the key, for the
-  ``pbkdf2`` ``password_scheme``. See the :ref:`configuration API <config/chttpd_auth>` for details.
-- **type** (*string*): Document type. Always has the value ``user``
-
-Additionally, you may specify any custom fields that relate to the target
-user.
-
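To make the relationship between the password-derived fields concrete, here is a sketch of the shape of a ``pbkdf2`` ``derived_key``. It assumes PBKDF2 with HMAC-SHA1 and a 20-byte hex-encoded output, and treats the salt string as raw bytes; the password, salt, and iteration count are illustrative values from this page, and real CouchDB releases may differ in these details.

```python
import hashlib

def derive_key(password: str, salt: str, iterations: int) -> str:
    # PBKDF2 with HMAC-SHA1, 20-byte output, hex-encoded -- the same
    # shape as the derived_key field described above. Treating the salt
    # string as raw bytes is an assumption of this sketch.
    dk = hashlib.pbkdf2_hmac(
        "sha1", password.encode("utf-8"), salt.encode("ascii"),
        iterations, dklen=20,
    )
    return dk.hex()

key = derive_key("apple", "1112283cf988a34f124200a050d308a1", 10)
print(len(key))  # 40 hex characters, i.e. 20 bytes
```

The point of the exercise: the server never stores the plain-text password, only ``derived_key``, ``salt`` and ``iterations``, which together let it re-run the same derivation at login time.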
-.. _org.couchdb.user:
-
-Why the ``org.couchdb.user:`` prefix?
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-The reason there is a special prefix before a user's login name is to have
-namespaces that users belong to. This prefix is designed to prevent
-replication conflicts when you try merging two or more ``_users`` databases.
-
-For current CouchDB releases, all users belong to the same
-``org.couchdb.user`` namespace and this cannot be changed. This may be changed
-in future releases.
-
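Constructing the document ID for a user is therefore purely mechanical; a one-line helper (illustrative, not part of any CouchDB client library) is enough:

```python
def user_doc_id(name: str) -> str:
    # All user documents live in the org.couchdb.user: namespace.
    return "org.couchdb.user:" + name

print(user_doc_id("jan"))  # org.couchdb.user:jan
```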
-Creating a New User
--------------------
-
-Creating a new user is a trivial operation. You just need to do a
-:method:`PUT` request with the user's data to CouchDB. Let's create a user with
-login `jan` and password `apple`::
-
-    curl -X PUT http://localhost:5984/_users/org.couchdb.user:jan \
-         -H "Accept: application/json" \
-         -H "Content-Type: application/json" \
-         -d '{"name": "jan", "password": "apple", "roles": [], "type": "user"}'
-
-This `curl` command will produce the following HTTP request:
-
-.. code-block:: http
-
-    PUT /_users/org.couchdb.user:jan HTTP/1.1
-    Accept: application/json
-    Content-Length: 62
-    Content-Type: application/json
-    Host: localhost:5984
-    User-Agent: curl/7.31.0
-
-And CouchDB responds with:
-
-.. code-block:: http
-
-    HTTP/1.1 201 Created
-    Cache-Control: must-revalidate
-    Content-Length: 83
-    Content-Type: application/json
-    Date: Fri, 27 Sep 2013 07:33:28 GMT
-    ETag: "1-e0ebfb84005b920488fc7a8cc5470cc0"
-    Location: http://localhost:5984/_users/org.couchdb.user:jan
-    Server: CouchDB (Erlang OTP)
-
-    {"ok":true,"id":"org.couchdb.user:jan","rev":"1-e0ebfb84005b920488fc7a8cc5470cc0"}
-
-The document was successfully created! The user `jan` should now exist in our
-database. Let's check if this is true::
-
-    curl -X POST http://localhost:5984/_session -d 'name=jan&password=apple'
-
-CouchDB should respond with:
-
-.. code-block:: javascript
-
-    {"ok":true,"name":"jan","roles":[]}
-
-This means that the username was recognized and the password's hash matches
-the stored one. If we specify an incorrect login and/or password, CouchDB
-will notify us with the following error message:
-
-.. code-block:: javascript
-
-    {"error":"unauthorized","reason":"Name or password is incorrect."}
-
-Password Changing
------------------
-
-Let's define what password changing means from the point of view of CouchDB
-and the authentication database. Since "users" are "documents", this operation
-is just an update of the document with a special ``password`` field that contains
-the *plain text password*. Scared? No need to be. The authentication database
-has a special internal hook on document update which looks for this field and
-replaces it with the *secured hash* depending on the chosen ``password_scheme``.
-
-Summarizing the above process: we need to get the document's content, add
-the ``password`` field with the new password in plain text, and then store the
-JSON result back to the authentication database. ::
-
-    curl -X GET http://localhost:5984/_users/org.couchdb.user:jan
-
-.. code-block:: javascript
-
-    {
-        "_id": "org.couchdb.user:jan",
-        "_rev": "1-e0ebfb84005b920488fc7a8cc5470cc0",
-        "derived_key": "e579375db0e0c6a6fc79cd9e36a36859f71575c3",
-        "iterations": 10,
-        "name": "jan",
-        "password_scheme": "pbkdf2",
-        "roles": [],
-        "salt": "1112283cf988a34f124200a050d308a1",
-        "type": "user"
-    }
-
-Here is our user's document. We may strip hashes from the stored document to
-reduce the amount of posted data::
-
-    curl -X PUT http://localhost:5984/_users/org.couchdb.user:jan \
-         -H "Accept: application/json" \
-         -H "Content-Type: application/json" \
-         -H "If-Match: 1-e0ebfb84005b920488fc7a8cc5470cc0" \
-         -d '{"name":"jan", "roles":[], "type":"user", "password":"orange"}'
-
-.. code-block:: javascript
-
-    {"ok":true,"id":"org.couchdb.user:jan","rev":"2-ed293d3a0ae09f0c624f10538ef33c6f"}
-
-Updated! Now let's check that the password was really changed::
-
-    curl -X POST http://localhost:5984/_session -d 'name=jan&password=apple'
-
-CouchDB should respond with:
-
-.. code-block:: javascript
-
-    {"error":"unauthorized","reason":"Name or password is incorrect."}
-
-Looks like the password ``apple`` is wrong, what about ``orange``? ::
-
-    curl -X POST http://localhost:5984/_session -d 'name=jan&password=orange'
-
-CouchDB should respond with:
-
-.. code-block:: javascript
-
-    {"ok":true,"name":"jan","roles":[]}
-
-Hooray! You may wonder why this was so involved - we had to retrieve the
-user's document, add a special field to it, and post it back.
-
-.. note::
-    There is no password confirmation for API requests: you should implement
-    that in your application layer.
-
-Authorization
-=============
-
-Now that you have a few users who can log in, you probably want to set up some
-restrictions on what actions they can perform based on their identity and their
-roles.  Each database on a CouchDB server can contain its own set of
-authorization rules that specify which users are allowed to read and write
-documents, create design documents, and change certain database configuration
-parameters.  The authorization rules are set up by a server admin and can be
-modified at any time.
-
-Database authorization rules assign a user into one of two classes:
-
-- `members`, who are allowed to read all documents and create and modify any
-  document except for design documents.
-- `admins`, who can read and write all types of documents, modify which users
-  are members or admins, and set certain per-database configuration options.
-
-Note that a database admin is not the same as a server admin -- the actions
-of a database admin are restricted to a specific database.
-
-When a database is first created, there are no members or admins.  HTTP
-requests that have no authentication credentials or have credentials for a
-normal user are treated as members, and those with server admin credentials
-are treated as database admins.  To change the default permissions, you must
-create a :ref:`_security <api/db/security>` document in the database::
-
-    > curl -X PUT http://localhost:5984/mydatabase/_security \
-         -u anna:secret \
-         -H "Content-Type: application/json" \
-         -d '{"admins": { "names": [], "roles": [] }, "members": { "names": ["jan"], "roles": [] } }'
-
-The HTTP request to create the `_security` document must contain the
-credentials of a server admin.  CouchDB will respond with:
-
-.. code-block:: javascript
-
-    {"ok":true}
-
-The database is now secured against anonymous reads and writes::
-
-    > curl http://localhost:5984/mydatabase/
-
-.. code-block:: javascript
-
-    {"error":"unauthorized","reason":"You are not authorized to access this db."}
-
-You declared user "jan" as a member in this database, so he is able to read and
-write normal documents::
-
-    > curl -u jan:apple http://localhost:5984/mydatabase/
-
-.. code-block:: javascript
-
-    {"db_name":"mydatabase","doc_count":1,"doc_del_count":0,"update_seq":3,"purge_seq":0,
-    "compact_running":false,"sizes":{"active":272,"disk":12376,"external":350},
-    "instance_start_time":"0","disk_format_version":6,"committed_update_seq":3}
-
-If Jan attempted to create a design doc, however, CouchDB would return a
-401 Unauthorized error because the username "jan" is not in the list of
-admin names and the `/_users/org.couchdb.user:jan` document doesn't contain
-a role that matches any of the declared admin roles.  If you want to promote
-Jan to an admin, you can update the security document to add `"jan"` to
-the `names` array under `admins`.  Keeping track of individual database
-admin usernames is tedious, though, so you would likely prefer to create a
-database admin role and assign that role to the `org.couchdb.user:jan` user
-document::
-
-    > curl -X PUT http://localhost:5984/mydatabase/_security \
-         -u anna:secret \
-         -H "Content-Type: application/json" \
-         -d '{"admins": { "names": [], "roles": ["mydatabase_admin"] }, "members": { "names": [], "roles": [] } }'
-
-See the :ref:`_security document reference page <api/db/security>` for
-additional details about specifying database members and admins.
diff --git a/src/intro/tour.rst b/src/intro/tour.rst
deleted file mode 100644
index 135eab8..0000000
--- a/src/intro/tour.rst
+++ /dev/null
@@ -1,409 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _intro/tour:
-
-===============
-Getting Started
-===============
-
-In this document, we'll take a quick tour of CouchDB's features.
-We'll create our first document and experiment with CouchDB views.
-
-All Systems Are Go!
-===================
-
-We'll have a very quick look at CouchDB's bare-bones Application Programming
-Interface (API) by using the command-line utility curl. Please note that this
-is not the only way of talking to CouchDB. We will show you plenty more
-throughout the rest of the documents. What's interesting about curl is that it
-gives you control over raw HTTP requests, and you can see exactly what is
-going on "underneath the hood" of your database.
-
-Make sure CouchDB is still running, and then do::
-
-    curl http://127.0.0.1:5984/
-
-This issues a GET request to your newly installed CouchDB instance.
-
-The reply should look something like:
-
-.. code-block:: javascript
-
-    {
-      "couchdb": "Welcome",
-      "version": "3.0.0",
-      "git_sha": "83bdcf693",
-      "uuid": "56f16e7c93ff4a2dc20eb6acc7000b71",
-      "features": [
-        "access-ready",
-        "partitioned",
-        "pluggable-storage-engines",
-        "reshard",
-        "scheduler"
-      ],
-      "vendor": {
-        "name": "The Apache Software Foundation"
-      }
-    }
-
-Not all that spectacular. CouchDB is saying "hello" with the running version
-number.
-
-Next, we can get a list of databases::
-
-    curl -X GET http://admin:password@127.0.0.1:5984/_all_dbs
-
-All we added to the previous request is the _all_dbs string, and our admin user
-name and password (set when installing CouchDB).
-
-The response should look like::
-
-    ["_replicator","_users"]
-
-.. note::
-    In case this returns an empty array for you, it means you haven't finished
-    installation correctly. Please refer to :ref:`setup` for further
-    information on this.
-
-    For the purposes of this example, we'll not be showing the system databases
-    past this point. In *your* installation, any time you ``GET /_all_dbs``,
-    you should see the system databases in the list, too.
-
-Oh, that's right, we didn't create any user databases yet!
-
-.. note::
-    The curl command issues GET requests by default. You can issue POST requests
-    using ``curl -X POST``. To make it easy to work with our terminal history,
-    we usually use the ``-X`` option even when issuing GET requests.
-    If we want to send a POST next time, all we have to change is the method.
-
-    HTTP does a bit more under the hood than you can see in the examples here.
-    If you're interested in every last detail that goes over the wire,
-    pass in the ``-v`` option (e.g., ``curl -vX GET``), which will show you
-    the server curl tries to connect to, the request headers it sends,
-    and response headers it receives back. Great for debugging!
-
-Let's create a database::
-
-    curl -X PUT http://admin:password@127.0.0.1:5984/baseball
-
-CouchDB will reply with::
-
-    {"ok":true}
-
-Retrieving the list of databases again shows some useful results this time::
-
-    curl -X GET http://admin:password@127.0.0.1:5984/_all_dbs
-
-::
-
-    ["baseball"]
-
-.. note::
-    We should mention JavaScript Object Notation (JSON) here, the data format
-    CouchDB speaks. JSON is a lightweight data interchange format based on
-    JavaScript syntax. Because JSON is natively compatible with JavaScript, your
-    web browser is an ideal client for CouchDB.
-
-    Brackets (``[]``) represent ordered lists, and curly braces (``{}``)
-    represent key/value dictionaries. Keys must be strings, delimited by quotes
-    (``"``), and values can be strings, numbers, booleans, lists, or key/value
-    dictionaries. For a more detailed description of JSON, see Appendix E, JSON
-    Primer.
-
-Let's create another database::
-
-    curl -X PUT http://admin:password@127.0.0.1:5984/baseball
-
-CouchDB will reply with::
-
-    {"error":"file_exists","reason":"The database could not be created,
-    the file already exists."}
-
-We already have a database with that name, so CouchDB will respond with an
-error. Let's try again with a different database name::
-
-    curl -X PUT http://admin:password@127.0.0.1:5984/plankton
-
-CouchDB will reply with::
-
-    {"ok":true}
-
-Retrieving the list of databases yet again shows some useful results::
-
-    curl -X GET http://admin:password@127.0.0.1:5984/_all_dbs
-
-CouchDB will respond with::
-
-    ["baseball", "plankton"]
-
-To round things off, let's delete the second database::
-
-    curl -X DELETE http://admin:password@127.0.0.1:5984/plankton
-
-CouchDB will reply with::
-
-    {"ok":true}
-
-The list of databases is now the same as it was before::
-
-    curl -X GET http://admin:password@127.0.0.1:5984/_all_dbs
-
-CouchDB will respond with::
-
-    ["baseball"]
-
-For brevity, we'll skip working with documents here, as the next section
-covers a different and potentially easier way of working with CouchDB that
-provides the same experience. As we work through the example,
-keep in mind that "under the hood" everything is being done by the
-application exactly as you have been doing here manually.
-Everything is done using GET, PUT, POST, and DELETE with a URI.
-
-.. _intro/tour/fauxton:
-
-Welcome to Fauxton
-==================
-
-After having seen CouchDB's raw API, let's get our feet wet by playing with
-Fauxton, the built-in administration interface. Fauxton provides full access
-to all of CouchDB's features and makes it easy to work with some of the more
-complex ideas involved. With Fauxton we can create and destroy databases; view
-and edit documents; compose and run MapReduce views; and trigger replication
-between databases.
-
-To load Fauxton in your browser, visit::
-
-    http://127.0.0.1:5984/_utils/
-
-and log in when prompted with your admin password.
-
-In later documents, we'll focus on using CouchDB from server-side languages
-such as Ruby and Python. As such, this document is a great opportunity to
-showcase an example of natively serving up a dynamic web application using
-nothing more than CouchDB's integrated web server, something you may wish to do
-with your own applications.
-
-The first thing we should do with a fresh installation of CouchDB is run the
-test suite to verify that everything is working properly. This assures us
-that any problems we may run into aren't due to bothersome issues with our
-setup. By the same token, failures in the Fauxton test suite are a red flag,
-telling us to double-check our installation before attempting to use a
-potentially broken database server, saving us the confusion when nothing
-seems to be working quite like we expect!
-
-To validate your installation, click on the `Verify` link on the left-hand
-side, then press the green `Verify Installation` button. All tests should
-pass with a check mark. If any fail, re-check your installation steps.
-
-Your First Database and Document
-================================
-
-Creating a database in Fauxton is simple. From the overview page,
-click "Create Database." When asked for a name, enter ``hello-world`` and click
-the Create button.
-
-After your database has been created, Fauxton will display a list of all its
-documents. This list will start out empty, so let's
-create our first document. Click the plus sign next to "All Documents" and
-select the "New Doc" link. CouchDB will generate a UUID for you.
-
-For demo purposes, having CouchDB assign a UUID is fine. When you write
-your first programs, we recommend assigning your own UUIDs. If you rely on
-the server to generate the UUID and you end up making two POST requests
-because the first POST request bombed out, you might generate two docs and
-never find out about the first one because only the second one will be
-reported back. Generating your own UUIDs makes sure that you'll never end up
-with duplicate documents.
-
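A minimal sketch of generating such a client-side ID with Python's standard library (any UUID generator works; nothing here is CouchDB-specific):

```python
import uuid

# Generating the _id on the client makes a retried PUT idempotent:
# two PUTs to the same _id cannot create two separate documents.
doc_id = uuid.uuid4().hex
print(doc_id)  # 32 lowercase hex characters
```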
-Fauxton will display the newly created document, with its _id field. To create
-a new field, simply use the editor to write valid JSON. Add a new field by
-appending a comma to the ``_id`` value, then adding the text::
-
-    "hello": "my new value"
-
-Click the green Create Document button to finalize creating the
-document.
-
-You can experiment with other JSON values; e.g., ``[1, 2, "c"]`` or
-``{"foo": "bar"}``.
-
-You'll notice that the document's _rev has been added. We'll go into more detail
-about this in later documents, but for now, the important thing to note is
-that _rev acts like a safety feature when saving a document. As long as you
-and CouchDB agree on the most recent _rev of a document, you can successfully
-save your changes.
-
-For clarity, you may want to display the contents of the document in the all
-document view. To enable this, from the upper-right corner of the window,
-select Options, then check the Include Docs option. Finally, press the Run
-Query button. The full document should be displayed along with the ``_id``
-and ``_rev`` values.
-
-Running a Mango Query
-=====================
-
-Now that we have stored documents successfully, we want to be able to query
-them. The easiest way to do this in CouchDB is running a Mango Query. There are
-always two parts to a Mango Query: the index and the selector.
-
-The index specifies which fields we want to be able to query on, and the
-selector includes the actual query parameters that define what we are looking
-for exactly.
-
-Indexes are stored as rows that are kept sorted by the fields you specify. This
-makes retrieving data from a range of keys efficient even when there are
-thousands or millions of rows.
-
-Before we can run an example query, we'll need some data to run it on. We'll
-create documents with information about movies. Let's create documents for
-three movies. (Allow CouchDB to generate the ``_id`` and ``_rev`` fields.) Use
-Fauxton to create documents whose final JSON structure looks like this:
-
-.. code-block:: javascript
-
-    {
-        "_id": "00a271787f89c0ef2e10e88a0c0001f4",
-        "type": "movie",
-        "title": "My Neighbour Totoro",
-        "year": 1988,
-        "director": "miyazaki",
-        "rating": 8.2
-    }
-
-.. code-block:: javascript
-
-    {
-        "_id": "00a271787f89c0ef2e10e88a0c0003f0",
-        "type": "movie",
-        "title": "Kikis Delivery Service",
-        "year": 1989,
-        "director": "miyazaki",
-        "rating": 7.8
-    }
-
-.. code-block:: javascript
-
-    {
-        "_id": "00a271787f89c0ef2e10e88a0c00048b",
-        "type": "movie",
-        "title": "Princess Mononoke",
-        "year": 1997,
-        "director": "miyazaki",
-        "rating": 8.4
-    }
-
-Now we want to be able to find a movie by its release year. For that, we need
-to create a Mango index. To do this, go to “Run A Query with Mango” in the
-Database overview. Then click on “manage indexes”, and change the index field
-on the left to look like this:
-
-.. code-block:: javascript
-
-    {
-       "index": {
-          "fields": [
-             "year"
-          ]
-       },
-       "name": "year-json-index",
-       "type": "json"
-    }
-
-This defines an index on the field ``year`` and allows us to send queries for
-documents from a specific year.
-
-Next, click on “edit query” and change the Mango Query to look like this:
-
-.. code-block:: javascript
-
-  {
-     "selector": {
-        "year": {
-           "$eq": 1988
-        }
-     }
-  }
-
-Then click on “Run Query”.
-
-The result should be a single document, the movie “My Neighbour Totoro”, which
-has a year value of 1988. ``$eq`` here stands for “equal”.
-
-.. note::
-    Note that if you skip adding the index, the query will still return the
-    correct results, although you will see a warning about not using a
-    pre-existing index. Not using an index will work fine on small databases
-    and is acceptable for testing out queries in development or training, but
-    we very strongly discourage doing this in any other case, since an index is
-    absolutely vital to good query performance.
-
-You can also query for all movies during the 1980s, with this selector:
-
-.. code-block:: javascript
-
-  {
-     "selector": {
-        "year": {
-           "$lt": 1990,
-           "$gte": 1980
-        }
-     }
-  }
-
-The result is the two movies from 1988 and 1989. ``$lt`` here means “less
-than”, and ``$gte`` means “greater than or equal to”. The latter currently
-doesn’t have any effect, given that all of our movies are more recent than
-1980, but it makes the query future-proof and allows us to add older
-movies later.
-
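As a way to internalize what the operators do, here is a toy evaluator for the selector above, run against the three sample movie documents. This illustrates the operator semantics only; it is not how CouchDB actually executes Mango queries.

```python
# The three sample documents, reduced to the fields the selector uses.
movies = [
    {"title": "My Neighbour Totoro", "year": 1988},
    {"title": "Kikis Delivery Service", "year": 1989},
    {"title": "Princess Mononoke", "year": 1997},
]

selector = {"year": {"$lt": 1990, "$gte": 1980}}

OPS = {
    "$eq": lambda value, arg: value == arg,
    "$lt": lambda value, arg: value < arg,
    "$gte": lambda value, arg: value >= arg,
}

def matches(doc, selector):
    # A document matches when every operator on every field holds.
    return all(
        OPS[op](doc.get(field), arg)
        for field, conditions in selector.items()
        for op, arg in conditions.items()
    )

print([m["title"] for m in movies if matches(m, selector)])
# ['My Neighbour Totoro', 'Kikis Delivery Service']
```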
-Triggering Replication
-======================
-
-Fauxton can trigger replication between two local databases,
-between a local and remote database, or even between two remote databases.
-We'll show you how to replicate data from one local database to another,
-which is a simple way of making backups of your databases as we're working
-through the examples.
-
-First we'll need to create an empty database to be the target of replication.
-Return to the Databases overview and create a database called
-``hello-replication``. Now click "Replication" in the sidebar and choose
-``hello-world`` as the source and ``hello-replication`` as the target. Click
-"Replicate" to replicate your database.
-
-To view the result of your replication, click on the Databases tab again.
-You should see that the ``hello-replication`` database has the same number of
-documents as the ``hello-world`` database, and that it takes up roughly the
-same amount of space as well.
-
-.. note::
-    For larger databases, replication can take much longer. It is important to
-    leave the browser window open while replication is taking place.
-    As an alternative, you can trigger replication via curl or some other HTTP
-    client that can handle long-running connections. If your client closes the
-    connection before replication finishes, you'll have to retrigger it.
-    Luckily, CouchDB's replication can take over from where it left off
-    instead of starting from scratch.
-
-Wrapping Up
-===========
-
-Now that you've seen most of Fauxton's features, you'll be prepared to dive in
-and inspect your data as we build our example application in the next few
-documents. Fauxton's pure JavaScript approach to managing CouchDB shows how
-it's possible to build a fully featured web application using only CouchDB's
-HTTP API and integrated web server.
-
-But before we get there, we'll have another look at CouchDB's HTTP API -- now
-with a magnifying glass. Let's curl up on the couch and relax.
diff --git a/src/intro/why.rst b/src/intro/why.rst
deleted file mode 100644
index aa4b683..0000000
--- a/src/intro/why.rst
+++ /dev/null
@@ -1,300 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _intro/why:
-
-============
-Why CouchDB?
-============
-
-Apache CouchDB is one of a new breed of database management systems.
-This topic explains why there's a need for new systems as well as the
-motivations behind building CouchDB.
-
-As CouchDB developers, we're naturally very excited to be using CouchDB.
-In this topic we'll share with you the reasons for our enthusiasm.
-We'll show you how CouchDB's schema-free document model is a better fit
-for common applications, how the built-in query engine is a powerful way
-to use and process your data, and how CouchDB's design lends itself
-to modularization and scalability.
-
-Relax
-=====
-
-If there's one word to describe CouchDB, it is *relax*. It is the byline
-to CouchDB's official logo and when you start CouchDB, you see::
-
-    Apache CouchDB has started. Time to relax.
-
-Why is relaxation important? Developer productivity roughly doubled in the
-last five years. The chief reason for the boost is more powerful tools that
-are easier to use. Take Ruby on Rails as an example. It is an infinitely
-complex framework, but it's easy to get started with. Rails is a success
-story because of the core design focus on ease of use. This is one reason why
-CouchDB is relaxing: learning CouchDB and understanding its core concepts
-should feel natural to most everybody who has been doing any work on the Web.
-And it is still pretty easy to explain to non-technical people.
-
-Getting out of the way when creative people try to build specialized
-solutions is in itself a core feature and one thing that CouchDB aims to get
-right. We found existing tools too cumbersome to work with during development
-or in production, and decided to focus on making CouchDB easy, even a pleasure,
-to use.
-
-Another area of relaxation for CouchDB users is the production setting.
-If you have a live running application, CouchDB again goes out of its way
-to avoid troubling you. Its internal architecture is fault-tolerant,
-and failures occur in a controlled environment and are dealt with gracefully.
-Single problems do not cascade through an entire server system but stay
-isolated in single requests.
-
-CouchDB's core concepts are simple (yet powerful) and well understood.
-Operations teams (if you have a team; otherwise, that's you) do not have to
-fear random behavior and untraceable errors. If anything should go wrong,
-you can easily find out what the problem is, but these situations are rare.
-
-CouchDB is also designed to handle varying traffic gracefully. For instance,
-if a website is experiencing a sudden spike in traffic, CouchDB will generally
-absorb a lot of concurrent requests without falling over. It may take a little
-more time for each request, but they all get answered. When the spike is over,
-CouchDB will work with regular speed again.
-
-The third area of relaxation is growing and shrinking the underlying hardware
-of your application. This is commonly referred to as scaling. CouchDB enforces
-a set of limits on the programmer. At first glance, CouchDB might seem
-inflexible, but some features are left out by design for the simple reason
-that if CouchDB supported them, it would allow a programmer to create
-applications that couldn't deal with scaling up or down.
-
-.. note::
-    CouchDB doesn't let you do things that would get you in trouble later on.
-    This sometimes means you'll have to unlearn best practices you might have
-    picked up in your current or past work.
-
-A Different Way to Model Your Data
-==================================
-
-We believe that CouchDB will drastically change the way you build
-document-based applications. CouchDB combines an intuitive document storage
-model with a powerful query engine in a way that's so simple you'll probably
-be tempted to ask, “Why has no one built something like this before?”
-
-    Django may be built for the Web, but CouchDB is built of the Web. I've
-    never seen software that so completely embraces the philosophies behind
-    HTTP. CouchDB makes Django look old-school in the same way that Django
-    makes ASP look outdated.
-
-    -- Jacob Kaplan-Moss, Django developer
-
-CouchDB's design borrows heavily from web architecture and the concepts of
-resources, methods, and representations. It augments this with powerful ways
-to query, map, combine, and filter your data. Add fault tolerance, extreme
-scalability, and incremental replication, and CouchDB defines a sweet spot
-for document databases.
-
-A Better Fit for Common Applications
-====================================
-
-We write software to improve our lives and the lives of others. Usually this
-involves taking some mundane information such as contacts, invoices,
-or receipts and manipulating it using a computer application. CouchDB is a
-great fit for common applications like this because it embraces the natural
-idea of evolving, self-contained documents as the very core of its data model.
-
-Self-Contained Data
--------------------
-
-An invoice contains all the pertinent information about a single transaction:
-the seller, the buyer, the date, and a list of the items or services sold.
-As shown in :ref:`intro/why-01`, there's no abstract reference on this
-piece of paper that points to some other piece of paper with the seller's
-name and address. Accountants appreciate the simplicity of having everything
-in one place. And given the choice, programmers appreciate that, too.
-
-.. _intro/why-01:
-
-.. figure:: ../../images/intro-why-01.png
-    :align: center
-    :alt: Self-contained documents
-
-    Figure 1. Self-contained documents
-
-Yet using references is exactly how we model our data in a relational
-database! Each invoice is stored in a table as a row that refers to other
-rows in other tables: one row for seller information, one for the buyer,
-one row for each item billed, and more rows still to describe the item
-details, manufacturer details, and so on.
-
-This isn't meant to disparage the relational model, which is widely
-applicable and extremely useful for a number of reasons. Hopefully, though, it
-illustrates the point that sometimes your model may not “fit” your data
-in the way it occurs in the real world.
-
-Let's take a look at the humble contact database to illustrate a different
-way of modeling data, one that more closely “fits” its real-world counterpart
--- a pile of business cards. Much like our invoice example, a business card
-contains all the important information, right there on the cardstock.
-We call this “self-contained” data, and it's an important concept
-in understanding document databases like CouchDB.
-
-Syntax and Semantics
---------------------
-
-Most business cards contain roughly the same information -- someone's identity,
-an affiliation, and some contact information. While the exact form of this
-information can vary between business cards, the general information being
-conveyed remains the same, and we're easily able to recognize it as a
-business card. In this sense, we can describe a business card as a *real-world
-document*.
-
-Jan's business card might contain a phone number but no fax number,
-whereas J. Chris's business card contains both a phone and a fax number. Jan
-does not have to make his lack of a fax machine explicit by writing something
-as ridiculous as “Fax: None” on the business card. Instead, simply omitting
-a fax number implies that he doesn't have one.
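
Rendered as JSON documents, the two cards might look like the following sketch;
the IDs, names, and numbers are invented for illustration:

```javascript
// Two business-card documents in the same database. All values here
// are invented for illustration.
const janCard = {
    _id: "card-jan",
    name: "Jan Lehnardt",
    affiliation: "CouchDB",
    phone: "+49 30 555 0123"
    // No fax field: omitting it implies Jan has no fax machine.
};

const jchrisCard = {
    _id: "card-jchris",
    name: "J. Chris Anderson",
    affiliation: "CouchDB",
    phone: "+1 503 555 0456",
    fax: "+1 503 555 0789" // Same semantics, slightly different syntax.
};
```

Both are valid documents in the same database; neither has to declare a schema
that the other must follow.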
-
-We can see that real-world documents of the same type, such as business cards,
-tend to be very similar in *semantics* -- the sort of information they carry,
-but can vary hugely in *syntax*, or how that information is structured. As human
-beings, we're naturally comfortable dealing with this kind of variation.
-
-While a traditional relational database requires you to model your data
-*up front*, CouchDB's schema-free design unburdens you, offering a powerful way to
-aggregate your data *after the fact*, just like we do with real-world
-documents. We'll look in depth at how to design applications with this
-underlying storage paradigm.
-
-Building Blocks for Larger Systems
-==================================
-
-CouchDB is a storage system useful on its own. You can build many applications
-with the tools CouchDB gives you. But CouchDB is designed with a bigger picture
-in mind. Its components can be used as building blocks that solve storage
-problems in slightly different ways for larger and more complex systems.
-
-Whether you need a system that's crazy fast but isn't too concerned with
-reliability (think logging), or one that guarantees storage in two or more
-physically separated locations for reliability, but you're willing to take a
-performance hit, CouchDB lets you build these systems.
-
-There are a multitude of knobs you could turn to make a system work better in
-one area, but you'll affect another area when doing so. One example would be
-the CAP theorem discussed in :ref:`intro/consistency`. To give you an idea of
-other things that affect storage systems, see
-:ref:`Figure 2 <intro/why-figure-02>` and :ref:`Figure 3 <intro/why-figure-03>`.
-
-By reducing latency for a given system (and that is true not only for storage
-systems), you affect concurrency and throughput capabilities.
-
-.. _intro/why-figure-02:
-
-.. figure:: ../../images/intro-why-02.png
-    :align: center
-    :alt: Throughput, latency, or concurrency
-
-    Figure 2. Throughput, latency, or concurrency
-
-.. _intro/why-figure-03:
-
-.. figure:: ../../images/intro-why-03.png
-    :align: center
-    :alt: Scaling: read requests, write requests, or data
-
-    Figure 3. Scaling: read requests, write requests, or data
-
-When you want to scale out, there are three distinct issues to deal with:
-scaling read requests, write requests, and data. Orthogonal to all three and
-to the items shown in :ref:`Figure 2 <intro/why-figure-02>` and :ref:`Figure 3
-<intro/why-figure-03>` are many more attributes like reliability or simplicity.
-You can draw many of these graphs that show how different features or attributes
-pull in different directions and thus shape the system they describe.
-
-CouchDB is very flexible and gives you enough building blocks to create a
-system shaped to suit your exact problem. That's not to say that CouchDB can
-be bent to solve any problem -- CouchDB is no silver bullet -- but in the
-area of data storage, it can get you a long way.
-
-CouchDB Replication
-===================
-
-CouchDB replication is one of these building blocks. Its fundamental function
-is to synchronize two or more CouchDB databases. This may sound simple,
-but the simplicity is key to allowing replication to solve a number of
-problems: reliably synchronize databases between multiple machines for
-redundant data storage; distribute data to a cluster of CouchDB instances
-that share a subset of the total number of requests that hit the cluster
-(load balancing); and distribute data between physically distant locations,
-such as one office in New York and another in Tokyo.
-
-CouchDB replication uses the same REST API all clients use. HTTP is
-ubiquitous and well understood. Replication works incrementally; that is,
-if during replication anything goes wrong, like dropping your network
-connection, it will pick up where it left off the next time it runs. It also
-only transfers data that is needed to synchronize databases.
-
-A core assumption CouchDB makes is that things can go wrong,
-like network connection troubles, and it is designed for graceful error
-recovery instead of assuming all will be well. The replication system's
-incremental design shows that best. The ideas behind “things that can go
-wrong” are embodied in the `Fallacies of Distributed Computing`_:
-
-- The network is reliable.
-- Latency is zero.
-- Bandwidth is infinite.
-- The network is secure.
-- Topology doesn't change.
-- There is one administrator.
-- Transport cost is zero.
-- The network is homogeneous.
-
-Existing tools often try to hide the fact that there is a network and that
-any or all of the above assumptions may not hold for a particular system.
-This usually results in fatal error scenarios when something finally goes
-wrong. In contrast, CouchDB doesn't try to hide the network; it just handles
-errors gracefully and lets you know when actions on your end are required.
-
-.. _Fallacies of Distributed Computing: http://en.wikipedia.org/wiki/Fallacies_of_Distributed_Computing
-
-Local Data Is King
-==================
-
-CouchDB takes quite a few lessons learned from the Web,
-but there is one thing that could be improved about the Web: latency.
-Whenever you have to wait for an application to respond or a website to
-render, you almost always wait for a network connection that isn't as fast as
-you would like at that moment. Waiting a few seconds instead of milliseconds
-greatly affects user experience and thus user satisfaction.
-
-What do you do when you are offline? This happens all the time -- your DSL or
-cable provider has issues, or your iPhone, G1, or Blackberry has no bars,
-and no connectivity means no way to get to your data.
-
-CouchDB can solve this scenario as well, and this is where scaling is
-important again. This time it is scaling down. Imagine CouchDB installed on
-phones and other mobile devices that can synchronize data with centrally
-hosted CouchDBs when they are on a network. The synchronization is not bound
-by user interface constraints like sub-second response times. It is easier to
-tune for high bandwidth and higher latency than for low bandwidth and very
-low latency. Mobile applications can then use the local CouchDB to fetch
-data, and since no remote networking is required for that,
-latency is low by default.
-
-Can you really use CouchDB on a phone? Erlang, CouchDB's implementation
-language, was designed to run on embedded devices orders of magnitude smaller
-and less powerful than today's phones.
-
-Wrapping Up
-===========
-
-The next document, :ref:`intro/consistency`, further explores the distributed
-nature of CouchDB. We hope we've given you enough to whet your appetite.
-Let's go!
diff --git a/src/json-structure.rst b/src/json-structure.rst
deleted file mode 100644
index 2220244..0000000
--- a/src/json-structure.rst
+++ /dev/null
@@ -1,693 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-========================
-JSON Structure Reference
-========================
-
-The following appendix provides a quick reference to all the JSON structures
-that you can supply to CouchDB, or get in return to requests.
-
-All Database Documents
-======================
-
-+-----------------------+------------------------------------------+
-| Field                 | Description                              |
-+=======================+==========================================+
-| total_rows            | Number of documents in the database/view |
-+-----------------------+------------------------------------------+
-| offset                | Offset where the document list started   |
-+-----------------------+------------------------------------------+
-| update_seq (optional) | Current update sequence for the database |
-+-----------------------+------------------------------------------+
-| rows [array]          | Array of document objects                |
-+-----------------------+------------------------------------------+
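
As a sketch, a response carrying these fields might look like the following;
the document IDs and revision strings are invented:

```javascript
// Example shape of a database-wide document listing, e.g. from
// GET /{db}/_all_docs. IDs and revision strings are invented.
const allDocsResponse = {
    total_rows: 2,            // number of documents in the database/view
    offset: 0,                // offset where the document list started
    rows: [                   // array of document objects
        { id: "doc-a", key: "doc-a", value: { rev: "1-aaa" } },
        { id: "doc-b", key: "doc-b", value: { rev: "4-bbb" } }
    ]
};
```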
-
-Bulk Document Response
-======================
-
-+--------------+-----------------------------------+
-| Field        | Description                       |
-+==============+===================================+
-| docs [array] | Bulk Docs Returned Documents      |
-+--------------+-----------------------------------+
-| id           | Document ID                       |
-+--------------+-----------------------------------+
-| error        | Error type                        |
-+--------------+-----------------------------------+
-| reason       | Error string with extended reason |
-+--------------+-----------------------------------+
-
-Bulk Documents
-==============
-
-+---------------------+----------------------------------------+
-| Field               | Description                            |
-+=====================+========================================+
-| docs [array]        | Bulk Documents Document                |
-+---------------------+----------------------------------------+
-| _id (optional)      | Document ID                            |
-+---------------------+----------------------------------------+
-| _rev (optional)     | Revision ID (when updating an existing |
-|                     | document)                              |
-+---------------------+----------------------------------------+
-| _deleted (optional) | Whether the document should be deleted |
-+---------------------+----------------------------------------+
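
A request body for the bulk-documents endpoint combines these fields per
document. In this sketch, the IDs, revisions, and document content are
invented:

```javascript
// Example body for POST /{db}/_bulk_docs. Per document, _id, _rev,
// and _deleted are optional; everything else is the document's own
// content. All values here are invented for illustration.
const bulkDocsRequest = {
    docs: [
        { _id: "invoice-1001", total: 120.5 },                 // create
        { _id: "invoice-0999", _rev: "1-abc", paid: true },    // update
        { _id: "invoice-0042", _rev: "3-def", _deleted: true } // delete
    ]
};
```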
-
-Changes information for a database
-==================================
-
-+-----------------+-------------------------------------------+
-| Field           | Description                               |
-+=================+===========================================+
-| last_seq        | Last update sequence                      |
-+-----------------+-------------------------------------------+
-| pending         | Count of remaining items in the feed      |
-+-----------------+-------------------------------------------+
-| results [array] | Changes made to a database                |
-+-----------------+-------------------------------------------+
-| seq             | Update sequence                           |
-+-----------------+-------------------------------------------+
-| id              | Document ID                               |
-+-----------------+-------------------------------------------+
-| changes [array] | List of changes, field-by-field, for this |
-|                 | document                                  |
-+-----------------+-------------------------------------------+
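
A changes response assembling these fields might look like this sketch;
sequence values and revisions are invented for illustration:

```javascript
// Example shape of a changes feed response, e.g. from GET /{db}/_changes.
// Sequence values and revision strings are invented.
const changesResponse = {
    results: [                // changes made to the database
        { seq: 11, id: "doc-a", changes: [ { rev: "2-aaa" } ] },
        { seq: 12, id: "doc-b", changes: [ { rev: "1-bbb" } ] }
    ],
    last_seq: 12,             // last update sequence
    pending: 0                // count of remaining items in the feed
};
```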
-
-CouchDB Document
-================
-
-+-----------------+----------------------------------------+
-| Field           | Description                            |
-+=================+========================================+
-| _id (optional)  | Document ID                            |
-+-----------------+----------------------------------------+
-| _rev (optional) | Revision ID (when updating an existing |
-|                 | document)                              |
-+-----------------+----------------------------------------+
-
-CouchDB Error Status
-====================
-
-+--------+-----------------------------------+
-| Field  | Description                       |
-+========+===================================+
-| id     | Document ID                       |
-+--------+-----------------------------------+
-| error  | Error type                        |
-+--------+-----------------------------------+
-| reason | Error string with extended reason |
-+--------+-----------------------------------+
-
-.. _dbinfo_object:
-
-CouchDB database information object
-===================================
-
-+----------------------+----------------------------------------------+
-| Field                | Description                                  |
-+======================+==============================================+
-| db_name              | The name of the database.                    |
-+----------------------+----------------------------------------------+
-| committed_update_seq | The number of committed updates.             |
-+----------------------+----------------------------------------------+
-| doc_count            | The number of documents in the database.     |
-+----------------------+----------------------------------------------+
-| doc_del_count        | The number of deleted documents.             |
-+----------------------+----------------------------------------------+
-| compact_running      | Set to true if the database compaction       |
-|                      | routine is operating on this database.       |
-+----------------------+----------------------------------------------+
-| disk_format_version  | The version of the physical format used for  |
-|                      | the data when it is stored on hard disk.     |
-+----------------------+----------------------------------------------+
-| disk_size            | Size in bytes of the data as stored on disk. |
-|                      | View indexes are not included in the         |
-|                      | calculation.                                 |
-+----------------------+----------------------------------------------+
-| instance_start_time  | Timestamp indicating when the database was   |
-|                      | opened, expressed in microseconds since the  |
-|                      | epoch.                                       |
-+----------------------+----------------------------------------------+
-| purge_seq            | The number of purge operations on the        |
-|                      | database.                                    |
-+----------------------+----------------------------------------------+
-| update_seq           | Current update sequence for the database.    |
-+----------------------+----------------------------------------------+
-
-Design Document
-===============
-
-+-------------------+--------------------------+
-| Field             | Description              |
-+===================+==========================+
-| _id               | Design Document ID       |
-+-------------------+--------------------------+
-| _rev              | Design Document Revision |
-+-------------------+--------------------------+
-| views             | View                     |
-+-------------------+--------------------------+
-| viewname          | View Definition          |
-+-------------------+--------------------------+
-| map               | Map Function for View    |
-+-------------------+--------------------------+
-| reduce (optional) | Reduce Function for View |
-+-------------------+--------------------------+
-
-Design Document Information
-===========================
-
-+-----------------+----------------------------------------------+
-| Field           | Description                                  |
-+=================+==============================================+
-| name            | Name/ID of Design Document                   |
-+-----------------+----------------------------------------------+
-| view_index      | View Index                                   |
-+-----------------+----------------------------------------------+
-| compact_running | Indicates whether a compaction routine is    |
-|                 | currently running on the view                |
-+-----------------+----------------------------------------------+
-| disk_size       | Size in bytes of the view as stored on disk  |
-+-----------------+----------------------------------------------+
-| language        | Language for the defined views               |
-+-----------------+----------------------------------------------+
-| purge_seq       | The purge sequence that has been processed   |
-+-----------------+----------------------------------------------+
-| signature       | MD5 signature of the views for the design    |
-|                 | document                                     |
-+-----------------+----------------------------------------------+
-| update_seq      | The update sequence of the corresponding     |
-|                 | database that has been indexed               |
-+-----------------+----------------------------------------------+
-| updater_running | Indicates if the view is currently being     |
-|                 | updated                                      |
-+-----------------+----------------------------------------------+
-| waiting_clients | Number of clients waiting on views from this |
-|                 | design document                              |
-+-----------------+----------------------------------------------+
-| waiting_commit  | Indicates if there are outstanding commits   |
-|                 | to the underlying database that need to be   |
-|                 | processed                                    |
-+-----------------+----------------------------------------------+
-
-Document with Attachments
-=========================
-
-+-------------------------+-----------------------------------------+
-| Field                   | Description                             |
-+=========================+=========================================+
-| _id (optional)          | Document ID                             |
-+-------------------------+-----------------------------------------+
-| _rev (optional)         | Revision ID (when updating an existing  |
-|                         | document)                               |
-+-------------------------+-----------------------------------------+
-| _attachments (optional) | Document Attachment                     |
-+-------------------------+-----------------------------------------+
-| filename                | Attachment information                  |
-+-------------------------+-----------------------------------------+
-| content_type            | MIME Content type string                |
-+-------------------------+-----------------------------------------+
-| data                    | File attachment content, Base64 encoded |
-+-------------------------+-----------------------------------------+
-
-List of Active Tasks
-====================
-
-+---------------+---------------------+
-| Field         | Description         |
-+===============+=====================+
-| tasks [array] | Active Tasks        |
-+---------------+---------------------+
-| pid           | Process ID          |
-+---------------+---------------------+
-| status        | Task status message |
-+---------------+---------------------+
-| task          | Task name           |
-+---------------+---------------------+
-| type          | Operation Type      |
-+---------------+---------------------+
-
-.. _replication-settings:
-
-Replication Settings
-====================
-
-+--------------------------------+---------------------------------------------+
-| Field                          | Description                                 |
-+================================+=============================================+
-| source                         | Source database name or URL.                |
-+--------------------------------+---------------------------------------------+
-| target                         | Target database name or URL.                |
-+--------------------------------+---------------------------------------------+
-| cancel (optional)              | Cancels the replication.                    |
-+--------------------------------+---------------------------------------------+
-| checkpoint_interval (optional) | Specifies the checkpoint interval in ms.    |
-+--------------------------------+---------------------------------------------+
-| continuous (optional)          | Configure the replication to be continuous. |
-+--------------------------------+---------------------------------------------+
-| create_target (optional)       | Creates the target database.                |
-+--------------------------------+---------------------------------------------+
-| doc_ids (optional)             | Array of document IDs to be synchronized.   |
-+--------------------------------+---------------------------------------------+
-| filter (optional)              | Name of the filter function in the form of  |
-|                                | ``ddoc/myfilter``.                          |
-+--------------------------------+---------------------------------------------+
-| source_proxy (optional)        | Address of a proxy server through which     |
-|                                | replication from the source should occur.   |
-+--------------------------------+---------------------------------------------+
-| target_proxy (optional)        | Address of a proxy server through which     |
-|                                | replication to the target should occur.     |
-+--------------------------------+---------------------------------------------+
-| query_params (optional)        | Query parameters that are passed to the     |
-|                                | filter function; the value should be a      |
-|                                | document containing parameters as members.  |
-+--------------------------------+---------------------------------------------+
-| selector (optional)            | Select the documents included in the        |
-|                                | replication. This option provides           |
-|                                | performance benefits compared with using    |
-|                                | the ``filter`` option.                      |
-+--------------------------------+---------------------------------------------+
-| since_seq (optional)           | Sequence from which the replication should  |
-|                                | start.                                      |
-+--------------------------------+---------------------------------------------+
-| use_checkpoints (optional)     | Whether to use replication checkpoints      |
-|                                | or not.                                     |
-+--------------------------------+---------------------------------------------+
-| winning_revs_only (optional)   | Replicate only the winning revisions.       |
-+--------------------------------+---------------------------------------------+
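
A replication request document using a few of the settings above might look
like this sketch; the hostnames and database names are placeholders:

```javascript
// Example replication request document combining a few of the
// settings above. Hostnames and database names are placeholders.
const replicationRequest = {
    source: "http://127.0.0.1:5984/projects",
    target: "http://replica.example.com:5984/projects",
    continuous: true,      // keep replicating as new changes arrive
    create_target: true,   // create the target database if it is missing
    use_checkpoints: true  // allow the job to resume where it left off
};
```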
-
-.. _replication-status:
-
-Replication Status
-==================
-
-+--------------------+-------------------------------------------+
-| Field              | Description                               |
-+====================+===========================================+
-| ok                 | Replication status                        |
-+--------------------+-------------------------------------------+
-| session_id         | Unique session ID                         |
-+--------------------+-------------------------------------------+
-| source_last_seq    | Last sequence number read from the source |
-|                    | database                                  |
-+--------------------+-------------------------------------------+
-| history [array]    | Replication History                       |
-+--------------------+-------------------------------------------+
-| session_id         | Session ID for this replication operation |
-+--------------------+-------------------------------------------+
-| recorded_seq       | Last recorded sequence number             |
-+--------------------+-------------------------------------------+
-| docs_read          | Number of documents read                  |
-+--------------------+-------------------------------------------+
-| docs_written       | Number of documents written to target     |
-+--------------------+-------------------------------------------+
-| doc_write_failures | Number of document write failures         |
-+--------------------+-------------------------------------------+
-| start_time         | Date/Time replication operation started   |
-+--------------------+-------------------------------------------+
-| start_last_seq     | First sequence number in changes stream   |
-+--------------------+-------------------------------------------+
-| end_time           | Date/Time replication operation completed |
-+--------------------+-------------------------------------------+
-| end_last_seq       | Last sequence number in changes stream    |
-+--------------------+-------------------------------------------+
-| missing_checked    | Number of missing documents checked       |
-+--------------------+-------------------------------------------+
-| missing_found      | Number of missing documents found         |
-+--------------------+-------------------------------------------+
-| bulk_get_attempts  | Number of attempted _bulk_get fetches     |
-+--------------------+-------------------------------------------+
-| bulk_get_docs      | Number of documents read with _bulk_get   |
-+--------------------+-------------------------------------------+
-
-.. _request_object:
-
-Request object
-==============
-
-+----------------+---------------------------------------------+
-| Field          | Description                                 |
-+================+=============================================+
-| body           | Request body data as `string`.              |
-|                | If the request method is `GET` this field   |
-|                | contains the value ``"undefined"``. If the  |
-|                | method is `DELETE` or `HEAD` the value is   |
-|                | ``""`` (empty string).                      |
-+----------------+---------------------------------------------+
-| cookie         | Cookies `object`.                           |
-+----------------+---------------------------------------------+
-| form           | Form data `object`.                         |
-|                | Contains the decoded body as key-value      |
-|                | pairs if the `Content-Type` header was      |
-|                | ``application/x-www-form-urlencoded``.      |
-+----------------+---------------------------------------------+
-| headers        | Request headers `object`.                   |
-+----------------+---------------------------------------------+
-| id             | Requested document id `string` if it was    |
-|                | specified or ``null`` otherwise.            |
-+----------------+---------------------------------------------+
-| info           | :ref:`Database information <dbinfo_object>` |
-+----------------+---------------------------------------------+
-| method         | Request method as `string` or `array`.      |
-|                | String value is a method as one of: `HEAD`, |
-|                | `GET`, `POST`, `PUT`, `DELETE`, `OPTIONS`,  |
-|                | and `TRACE`. Otherwise it will be           |
-|                | represented as an array of char codes.      |
-+----------------+---------------------------------------------+
-| path           | List of requested path sections.            |
-+----------------+---------------------------------------------+
-| peer           | Request source IP address.                  |
-+----------------+---------------------------------------------+
-| query          | URL query parameters `object`.              |
-|                | Note that multiple keys are not supported   |
-|                | and the last key value suppresses others.   |
-+----------------+---------------------------------------------+
-| requested_path | List of actual requested path sections.     |
-+----------------+---------------------------------------------+
-| raw_path       | Raw requested path `string`.                |
-+----------------+---------------------------------------------+
-| secObj         | :ref:`security_object`.                     |
-+----------------+---------------------------------------------+
-| userCtx        | :ref:`userctx_object`.                      |
-+----------------+---------------------------------------------+
-| uuid           | UUID generated by the algorithm specified   |
-|                | in the config file.                         |
-+----------------+---------------------------------------------+
-
-.. code-block:: javascript
-
-    {
-        "body": "undefined",
-        "cookie": {
-            "AuthSession": "cm9vdDo1MDZBRjQzRjrfcuikzPRfAn-EA37FmjyfM8G8Lw",
-            "m": "3234"
-        },
-        "form": {},
-        "headers": {
-            "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
-            "Accept-Charset": "ISO-8859-1,utf-8;q=0.7,*;q=0.3",
-            "Accept-Encoding": "gzip,deflate,sdch",
-            "Accept-Language": "en-US,en;q=0.8",
-            "Connection": "keep-alive",
-            "Cookie": "m=3234:t|3247:t|6493:t|6967:t|34e2:|18c3:t|2c69:t|5acb:t|ca3:t|c01:t|5e55:t|77cb:t|2a03:t|1d98:t|47ba:t|64b8:t|4a01:t; AuthSession=cm9vdDo1MDZBRjQzRjrfcuikzPRfAn-EA37FmjyfM8G8Lw",
-            "Host": "127.0.0.1:5984",
-            "User-Agent": "Mozilla/5.0 (Windows NT 5.2) AppleWebKit/535.7 (KHTML, like Gecko) Chrome/16.0.912.75 Safari/535.7"
-        },
-        "id": "foo",
-        "info": {
-            "committed_update_seq": 2701412,
-            "compact_running": false,
-            "db_name": "mailbox",
-            "disk_format_version": 6,
-            "doc_count": 2262757,
-            "doc_del_count": 560,
-            "instance_start_time": "1347601025628957",
-            "purge_seq": 0,
-            "sizes": {
-              "active": 7580843252,
-              "disk": 14325313673,
-              "external": 7803423459
-            },
-            "update_seq": 2701412
-        },
-        "method": "GET",
-        "path": [
-            "mailbox",
-            "_design",
-            "request",
-            "_show",
-            "dump",
-            "foo"
-        ],
-        "peer": "127.0.0.1",
-        "query": {},
-        "raw_path": "/mailbox/_design/request/_show/dump/foo",
-        "requested_path": [
-            "mailbox",
-            "_design",
-            "request",
-            "_show",
-            "dump",
-            "foo"
-        ],
-        "secObj": {
-            "admins": {
-                "names": [
-                    "Bob"
-                ],
-                "roles": []
-            },
-            "members": {
-                "names": [
-                    "Mike",
-                    "Alice"
-                ],
-                "roles": []
-            }
-        },
-        "userCtx": {
-            "db": "mailbox",
-            "name": "Mike",
-            "roles": [
-                "user"
-            ]
-        },
-        "uuid": "3184f9d1ea934e1f81a24c71bde5c168"
-    }
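
The example above was produced by a show function that simply echoes the request object back. A sketch of such a function follows; ``dumpRequest`` is an illustrative name, not CouchDB's own code.

```javascript
// Hypothetical show function: echo the request object back to the client
// as JSON (the `json` key implies Content-Type: application/json).
function dumpRequest(doc, req) {
  return {
    code: 200,
    json: req
  };
}

// Example invocation with a minimal request object:
const resp = dumpRequest(null, { method: "GET", path: ["mailbox"], query: {} });
console.log(resp.json.method); // "GET"
```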
-
-.. _request2_object:
-
-Request2 object
-===============
-
-+----------------+---------------------------------------------+
-| Field          | Description                                 |
-+================+=============================================+
-| body           | Request body data as `string`.              |
-|                | If the request method is `GET` this field   |
-|                | contains the value ``"undefined"``. If the  |
-|                | method is `DELETE` or `HEAD` the value is   |
-|                | ``""`` (empty string).                      |
-+----------------+---------------------------------------------+
-| cookie         | Cookies `object`.                           |
-+----------------+---------------------------------------------+
-| headers        | Request headers `object`.                   |
-+----------------+---------------------------------------------+
-| method         | Request method as `string` or `array`.      |
-|                | String value is a method as one of: `HEAD`, |
-|                | `GET`, `POST`, `PUT`, `DELETE`, `OPTIONS`,  |
-|                | and `TRACE`. Otherwise it will be           |
-|                | represented as an array of char codes.      |
-+----------------+---------------------------------------------+
-| path           | List of requested path sections.            |
-+----------------+---------------------------------------------+
-| peer           | Request source IP address.                  |
-+----------------+---------------------------------------------+
-| query          | URL query parameters `object`.              |
-|                | Note that multiple keys are not supported   |
-|                | and the last key value suppresses others.   |
-+----------------+---------------------------------------------+
-| requested_path | List of actual requested path sections.     |
-+----------------+---------------------------------------------+
-| raw_path       | Raw requested path `string`.                |
-+----------------+---------------------------------------------+
-| secObj         | :ref:`security_object`.                     |
-+----------------+---------------------------------------------+
-| userCtx        | :ref:`userctx_object`.                      |
-+----------------+---------------------------------------------+
-
-.. _response_object:
-
-Response object
-===============
-
-+---------+--------------------------------------------+
-| Field   | Description                                |
-+=========+============================================+
-| code    | HTTP status code `number`.                 |
-+---------+--------------------------------------------+
-| json    | JSON encodable `object`.                   |
-|         | Implicitly sets `Content-Type` header as   |
-|         | ``application/json``.                      |
-+---------+--------------------------------------------+
-| body    | Raw response text `string`.                |
-|         | Implicitly sets `Content-Type` header as   |
-|         | ``text/html; charset=utf-8``.              |
-+---------+--------------------------------------------+
-| base64  | Base64 encoded `string`.                   |
-|         | Implicitly sets `Content-Type` header as   |
-|         | ``application/binary``.                    |
-+---------+--------------------------------------------+
-| headers | Response headers `object`.                 |
-|         | `Content-Type` header from this object     |
-|         | overrides any implicitly assigned one.     |
-+---------+--------------------------------------------+
-| stop    | `boolean` signal to stop iteration over    |
-|         | view result rows (for list functions only) |
-+---------+--------------------------------------------+
-
-.. warning::
-    The ``body``, ``base64`` and ``json`` object keys overlap each other,
-    and the last one specified wins. Since most implementations of key-value
-    objects do not preserve key order, mixing them can produce confusing
-    results. Use only one of them.
-
-.. note::
-    Any custom property makes CouchDB raise an internal exception. Furthermore,
-    the `Response object` could be a simple string value which would be
-    implicitly wrapped into a ``{"body": ...}`` object.
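
The implicit string wrapping described in the note can be sketched as follows; ``normalizeResponse`` is an illustrative helper, not part of CouchDB's API.

```javascript
// Illustrative sketch: a bare string return value is wrapped into a
// response object, mirroring how CouchDB treats string responses.
function normalizeResponse(resp) {
  if (typeof resp === "string") {
    return { body: resp }; // implicit {"body": ...} wrapping
  }
  return resp;
}

// Use exactly one of body/json/base64, as the warning above recommends:
const jsonResp = normalizeResponse({ code: 200, json: { ok: true } });
const textResp = normalizeResponse("Hello, world");
console.log(textResp.body); // "Hello, world"
```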
-
-Returned CouchDB Document with Detailed Revision Info
-=====================================================
-
-+--------------------+-----------------------------------------+
-| Field              | Description                             |
-+====================+=========================================+
-| _id (optional)     | Document ID                             |
-+--------------------+-----------------------------------------+
-| _rev (optional)    | Revision ID (when updating an existing  |
-|                    | document)                               |
-+--------------------+-----------------------------------------+
-| _revs_info [array] | CouchDB document extended revision info |
-+--------------------+-----------------------------------------+
-| rev                | Full revision string                    |
-+--------------------+-----------------------------------------+
-| status             | Status of the revision                  |
-+--------------------+-----------------------------------------+
-
-Returned CouchDB Document with Revision Info
-============================================
-
-+-----------------+-----------------------------------------+
-| Field           | Description                             |
-+=================+=========================================+
-| _id (optional)  | Document ID                             |
-+-----------------+-----------------------------------------+
-| _rev (optional) | Revision ID (when updating an existing  |
-|                 | document)                               |
-+-----------------+-----------------------------------------+
-| _revisions      | CouchDB document revisions              |
-+-----------------+-----------------------------------------+
-| ids [array]     | Array of valid revision IDs, in reverse |
-|                 | order (latest first)                    |
-+-----------------+-----------------------------------------+
-| start           | Prefix number for the latest revision   |
-+-----------------+-----------------------------------------+
-
-Returned Document with Attachments
-==================================
-
-+-------------------------+--------------------------------------------+
-| Field                   | Description                                |
-+=========================+============================================+
-| _id (optional)          | Document ID                                |
-+-------------------------+--------------------------------------------+
-| _rev (optional)         | Revision ID (when updating an existing     |
-|                         | document)                                  |
-+-------------------------+--------------------------------------------+
-| _attachments (optional) | Document attachments object                |
-+-------------------------+--------------------------------------------+
-| filename                | Attachment metadata, keyed by filename     |
-+-------------------------+--------------------------------------------+
-| stub                    | Indicates whether the attachment is a stub |
-+-------------------------+--------------------------------------------+
-| content_type            | MIME Content type string                   |
-+-------------------------+--------------------------------------------+
-| length                  | Length (bytes) of the attachment data      |
-+-------------------------+--------------------------------------------+
-| revpos                  | Revision where this attachment exists      |
-+-------------------------+--------------------------------------------+
-
-.. _security_object:
-
-Security Object
-===============
-
-+---------------+---------------------------------------+
-| Field         | Description                           |
-+===============+=======================================+
-| admins        | Roles/Users with admin privileges     |
-+---------------+---------------------------------------+
-| roles [array] | List of roles with parent privilege   |
-+---------------+---------------------------------------+
-| names [array] | List of users with parent privilege   |
-+---------------+---------------------------------------+
-| members       | Roles/Users with non-admin privileges |
-+---------------+---------------------------------------+
-| roles [array] | List of roles with parent privilege   |
-+---------------+---------------------------------------+
-| names [array] | List of users with parent privilege   |
-+---------------+---------------------------------------+
-
-.. code-block:: javascript
-
-    {
-        "admins": {
-            "names": [
-                "Bob"
-            ],
-            "roles": []
-        },
-        "members": {
-            "names": [
-                "Mike",
-                "Alice"
-            ],
-            "roles": []
-        }
-    }
-
-.. _userctx_object:
-
-User Context Object
-===================
-
-+-------+-------------------------------------+
-| Field | Description                         |
-+=======+=====================================+
-| db    | Database name in the context of the |
-|       | provided operation.                 |
-+-------+-------------------------------------+
-| name  | User name.                          |
-+-------+-------------------------------------+
-| roles | List of user roles.                 |
-+-------+-------------------------------------+
-
-.. code-block:: javascript
-
-    {
-        "db": "mailbox",
-        "name": null,
-        "roles": [
-            "_admin"
-        ]
-    }
-
-.. _view_head_info_object:
-
-View Head Information
-=====================
-
-+------------+----------------------------------------+
-| Field      | Description                            |
-+============+========================================+
-| total_rows | Number of documents in the view        |
-+------------+----------------------------------------+
-| offset     | Offset where the document list started |
-+------------+----------------------------------------+
-
-.. code-block:: javascript
-
-    {
-        "total_rows": 42,
-        "offset": 3
-    }
diff --git a/src/maintenance/backups.rst b/src/maintenance/backups.rst
deleted file mode 100644
index 574b668..0000000
--- a/src/maintenance/backups.rst
+++ /dev/null
@@ -1,90 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _backups:
-
-==================
-Backing up CouchDB
-==================
-
-CouchDB has three different types of files it can create during runtime:
-
-* Database files (including secondary indexes)
-* Configuration files (``*.ini``)
-* Log files (if configured to log to disk)
-
-Below are strategies for ensuring consistent backups of all of these files.
-
-Database Backups
-================
-
-The simplest and easiest approach for CouchDB backup is to use :ref:`CouchDB
-replication <replication>` to another CouchDB installation.  You can choose
-between :ref:`normal (one-shot) or continuous replications <Normal vs Continuous
-Replications>` depending on your need.
-
-However, you can also copy the actual ``.couch`` files from the CouchDB data
-directory (by default, ``data/``) at any time. CouchDB's append-only storage
-format for both databases and secondary indexes ensures that this will work
-without issue.
-
-To ensure reliability of backups, it is recommended that you *back up secondary
-indexes* (stored under ``data/.shards``) *prior to backing up the main database
-files* (stored under ``data/shards`` as well as the system-level databases at the
-parent ``data/`` directory). This is because CouchDB will automatically handle
-views/secondary indexes that are slightly out of date by updating them on the
-next read access, but views or secondary indexes that are *newer* than their
-associated databases will trigger a *full rebuild of the index*. This can be a
-very costly and time-consuming operation, and can impact your ability to
-recover quickly in a disaster situation.
-
-On supported operating systems/storage environments, you can also make use of
-`storage snapshots <https://en.wikipedia.org/wiki/Snapshot_(computer_storage)>`_.
-These have the advantage of being near-instantaneous when working with block
-storage systems such as `ZFS <https://en.wikipedia.org/wiki/ZFS>`_ or `LVM
-<https://en.wikipedia.org/wiki/Logical_Volume_Manager_(Linux)>`_ or `Amazon EBS
-<https://en.wikipedia.org/wiki/Amazon_Elastic_Block_Store>`_. When using
-snapshots at the block-storage level, be sure to quiesce the file system with an
-OS-level utility such as Linux's `fsfreeze
-<https://linux.die.net/man/8/fsfreeze>`_ if necessary. If unsure, consult your
-operating system's or cloud provider's documentation for more detail.
-
-Configuration Backups
-=====================
-
-CouchDB's :ref:`configuration system <config/intro>` stores data in ``.ini`` files
-under the configuration directory (by default, ``etc/``). If changes are made
-to the configuration at runtime, the very last file in the configuration chain
-will be updated with the changes.
-
-Simply back up the entire ``etc/`` directory to ensure a consistent
-configuration after restoring from backup.
-
-If no changes to the configuration are made at runtime through the HTTP API,
-and all configuration files are managed by a configuration management system
-(such as `Ansible <https://en.wikipedia.org/wiki/Ansible_(software)>`_ or
-`Chef <https://en.wikipedia.org/wiki/Chef_(software)>`_), there is no need
-to back up the configuration directory.
-
-Log Backups
-===========
-
-If :ref:`configured to log to a file <config/log>`, you may want to back up the
-log files written by CouchDB. Any backup solution for these files works.
-
-Under UNIX-like systems, if using log rotation software, a copy-then-truncate
-approach is necessary: create a copy of the log file, then truncate the
-original in place to zero size. CouchDB does not recognize any signal telling
-it to close its log file and create a new one. Because of this, and because
-of differences in how file handles function, there is no straightforward log
-rotation solution under Microsoft Windows other than periodic restarts of
-the CouchDB process.
diff --git a/src/maintenance/compaction.rst b/src/maintenance/compaction.rst
deleted file mode 100644
index c15344f..0000000
--- a/src/maintenance/compaction.rst
+++ /dev/null
@@ -1,358 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _compact:
-
-==========
-Compaction
-==========
-
-The `compaction` operation reduces disk space usage by removing unused and
-old data from database or view index files. It is very similar to the
-`vacuum` operation (such as `SQLite`_'s ``VACUUM``) available in other
-database management systems.
-
-.. _SQLite: http://www.sqlite.org/lang_vacuum.html
-
-During compaction, CouchDB re-creates the database or view in a new file
-with the ``.compact`` extension. As this requires roughly twice the disk storage,
-CouchDB first checks for available disk space before proceeding.
-
-When all actual data is successfully transferred to the newly compacted file,
-CouchDB transparently swaps the compacted file into service, and removes the
-old database or view file.
-
-Since CouchDB 2.1.1, automated compaction is enabled by default, and is
-described in the next section. It is still possible to trigger manual
-compaction if desired or necessary. This is described in the subsequent
-sections.
-
-.. _compact/auto:
-
-Automatic Compaction
-====================
-
-CouchDB's automatic compaction daemon, internally known as "smoosh", will
-trigger compaction jobs for both databases and views based on configurable
-thresholds for the sparseness of a file and the total amount of space that can
-be recovered.
-
-Channels
---------
-
-Smoosh works using the concept of channels. A channel is essentially a queue of
-pending compactions. There are separate sets of active channels for databases
-and views. Each channel is assigned a configuration which defines whether a
-compaction ends up in the channel's queue and how compactions are prioritized
-within that queue.
-
-Smoosh takes each channel and works through the compactions queued in each in
-priority order. Each channel is processed concurrently, so the priority levels
-only matter within a given channel. Each channel has an assigned number of
-active compactions, which defines how many compactions happen for that channel
-in parallel. For example, a cluster with a lot of database churn but few views
-might require more active compactions in the database channel(s).
-
-It's important to remember that a channel is local to a CouchDB node; that is,
-each node maintains and processes an independent set of compactions. Channels
-are defined as either "ratio" channels or "slack" channels, depending on the
-type of algorithm used for prioritization:
-
--   Ratio: uses the ratio of sizes.file / sizes.active as its driving
-    calculation. The result X must be greater than some configurable value Y for
-    a compaction to be added to the queue. Compactions are then prioritized for
-    higher values of X.
-
--   Slack: uses the difference of sizes.file - sizes.active as its driving
-    calculation. The result X must be greater than some configurable value Y for
-    a compaction to be added to the queue. Compactions are prioritized for
-    higher values of X.
-
-In both cases, Y is set using the ``min_priority`` configuration variable. CouchDB
-ships with four channels pre-configured: one channel of each type for databases,
-and another one for views.
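
The two prioritization formulas above can be sketched directly; the function names are illustrative, and the sizes are example values, not real data.

```javascript
// Ratio channel: sizes.file / sizes.active must exceed min_priority.
function ratioPriority(sizes) {
  return sizes.file / sizes.active;
}

// Slack channel: sizes.file - sizes.active must exceed min_priority (bytes).
function slackPriority(sizes) {
  return sizes.file - sizes.active;
}

const sizes = { file: 3000000000, active: 1000000000 }; // example values
console.log(ratioPriority(sizes) > 2.0);       // true: enqueued in ratio_dbs
console.log(slackPriority(sizes) > 536870912); // true: enqueued in slack_dbs
```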
-
-Channel Configuration
----------------------
-
-Channels are defined using ``[smoosh.<channel_name>]`` configuration blocks, and
-activated by naming the channel in the ``db_channels`` or ``view_channels``
-configuration setting in the ``[smoosh]`` block. The default configuration is
-
-.. code-block:: ini
-
-    [smoosh]
-    db_channels = upgrade_dbs,ratio_dbs,slack_dbs
-    view_channels = upgrade_views,ratio_views,slack_views
-
-    [smoosh.ratio_dbs]
-    priority = ratio
-    min_priority = 2.0
-
-    [smoosh.ratio_views]
-    priority = ratio
-    min_priority = 2.0
-
-    [smoosh.slack_dbs]
-    priority = slack
-    min_priority = 536870912
-
-    [smoosh.slack_views]
-    priority = slack
-    min_priority = 536870912
-
-The "upgrade" channels are a special pair of channels that only check whether
-the `disk_format_version` for the file matches the current version, and enqueue
-the file for compaction (which has the side effect of upgrading the file format)
-if that's not the case. There are several additional properties that can be
-configured for each channel; these are documented in the :ref:`configuration API
-<config/compactions>`.
-
-Scheduling Windows
-------------------
-
-Each compaction channel can be configured to run only during certain hours of
-the day. The channel-specific `from`, `to`, and `strict_window` configuration
-settings control this behavior. For example
-
-.. code-block:: ini
-
-    [smoosh.overnight_channel]
-    from = 20:00
-    to = 06:00
-    strict_window = true
-
-where `overnight_channel` is the name of the channel you want to configure.
-
-.. note::
-    CouchDB determines time via the UTC (GMT) timezone, so these settings
-    must be expressed in UTC (GMT).
-
-The ``strict_window`` setting will cause the compaction daemon to suspend all
-active compactions in this channel when exiting the window, and resume them when
-re-entering. If ``strict_window`` is left at its default of false, the active
-compactions will be allowed to complete but no new compactions will be started.
-
-.. note::
-    When a channel is created, a 60-second timer is started to check whether
-    the channel should be processing any compactions, based on the time
-    window defined in your configuration.
-
-    The channel starts out pending; after 60 seconds it checks whether it
-    should be running at all, and is set to paused if not. At the end of the
-    check, another 60-second timer is started to schedule the next check.
-
-    Eventually, when inside the time window, the channel starts processing
-    compactions. Because this check continues to run every 60 seconds,
-    running compaction processes are suspended when exiting the time window
-    and resumed when re-entering it.
-
-    This means that for the first 60 seconds after exiting the time window,
-    or when a channel is created outside the time window, compactions may
-    run for up to 60 seconds. This differs from the behavior of the old
-    compaction daemon, which would cancel the compactions outright.
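
The window check can be sketched as follows; ``inWindow`` is an illustrative helper that handles windows spanning midnight, as in the ``overnight_channel`` example above.

```javascript
// Zero-padded "HH:MM" strings compare correctly as plain strings.
function inWindow(nowUtc, from, to) {
  return from < to
    ? nowUtc >= from && nowUtc < to   // same-day window
    : nowUtc >= from || nowUtc < to;  // window spans midnight
}

console.log(inWindow("23:30", "20:00", "06:00")); // true
console.log(inWindow("12:00", "20:00", "06:00")); // false
```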
-
-Migration Guide
----------------
-
-Previous versions of CouchDB shipped with a simpler compaction daemon. The
-configuration system for the new daemon is not backwards-compatible with the old
-one, so users with customized compaction configurations will need to port them
-to the new setup. The old daemon's compaction rules configuration looked like
-
-.. code-block:: ini
-
-    [compaction_daemon]
-    min_file_size = 131072
-    check_interval = 3600
-    snooze_period_ms = 3000
-
-    [compactions]
-    mydb = [{db_fragmentation, "70%"}, {view_fragmentation, "60%"}, {parallel_view_compaction, true}]
-    _default = [{db_fragmentation, "50%"}, {view_fragmentation, "55%"}, {from, "20:00"}, {to, "06:00"}, {strict_window, true}]
-
-Many of the elements of this configuration can be ported over to the new system.
-Examining each in detail:
-
-*   ``min_file_size`` is now configured on a per-channel basis using the
-    min_size config setting.
-
-*   ``db_fragmentation`` is equivalent to configuring a priority = ratio
-    channel with min_priority set to 1.0 / (1 - db_fragmentation/100)
-    and then listing that channel in the [smoosh] db_channels config
-    setting.
-
-*   ``view_fragmentation`` is likewise equivalent to configuring a priority = ratio
-    channel with min_priority set to 1.0 / (1 - view_fragmentation/100)
-    and then listing that channel in the [smoosh] view_channels config
-    setting.
-
-*   ``from`` / ``to`` / ``strict_window``: each of these settings can be applied
-    on a per-channel basis in the new daemon. The one behavior change is that
-    the new daemon will suspend compactions upon exiting the allowed window
-    instead of canceling them outright, and resume them when re-entering.
-
-*   ``parallel_view_compaction``: each compaction channel has a concurrency
-    setting that controls how many compactions will execute in parallel in that
-    channel. The total parallelism is the sum of the concurrency settings of all
-    active channels. This is a departure from the previous behavior, in which
-    the daemon would only focus on one database and/or its views (depending on
-    the value of this flag) at a time.
-
-The ``check_interval`` and ``snooze_period_ms`` settings are obsolete in the
-event-driven design of the new daemon. The new daemon does not support setting
-database-specific thresholds as in the ``mydb`` setting above. Rather, channels
-can be configured to focus on specific classes of files: large databases, small
-view indexes, and so on. Most cases of named database compaction rules can be
-expressed using properties of those databases and/or their associated views.
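
The fragmentation-to-``min_priority`` conversion in the bullets above can be sketched as a one-line helper (the function name is illustrative):

```javascript
// min_priority = 1.0 / (1 - fragmentation/100); e.g. 50% -> 2.0.
function fragmentationToMinPriority(fragPercent) {
  return 1.0 / (1 - fragPercent / 100);
}

console.log(fragmentationToMinPriority(50)); // 2 — the shipped ratio default
console.log(fragmentationToMinPriority(70)); // ≈ 3.33
```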
-
-.. _compact/db:
-
-Manual Database Compaction
-==========================
-
-Database compaction compresses the database file by removing unused file
-sections created during updates. Old document revisions are replaced with a
-small amount of metadata called a `tombstone`, which is used for conflict
-resolution during replication. The number of stored revisions (and their
-`tombstones`) can be configured using the :get:`_revs_limit
-</{db}/_revs_limit>` URL endpoint.
-
-Compaction can be manually triggered per database and runs as a background
-task. To start it for a specific database, send an HTTP POST request to the
-:post:`/{db}/_compact` sub-resource of the target database::
-
-    curl -H "Content-Type: application/json" -X POST http://localhost:5984/my_db/_compact
-
-On success, HTTP status :statuscode:`202` is returned immediately:
-
-.. code-block:: http
-
-    HTTP/1.1 202 Accepted
-    Cache-Control: must-revalidate
-    Content-Length: 12
-    Content-Type: text/plain; charset=utf-8
-    Date: Wed, 19 Jun 2013 09:43:52 GMT
-    Server: CouchDB (Erlang/OTP)
-
-.. code-block:: javascript
-
-    {"ok":true}
-
-Although the request body is not used, you must still set the
-:header:`Content-Type` header to :mimetype:`application/json` for the
-request. If you don't, you will receive an HTTP status
-:statuscode:`415` response:
-
-.. code-block:: http
-
-    HTTP/1.1 415 Unsupported Media Type
-    Cache-Control: must-revalidate
-    Content-Length: 78
-    Content-Type: application/json
-    Date: Wed, 19 Jun 2013 09:43:44 GMT
-    Server: CouchDB (Erlang/OTP)
-
-    {"error":"bad_content_type","reason":"Content-Type must be application/json"}
-
-Once compaction has successfully started and is running, you can get
-information about it via the :ref:`database information resource <api/db>`::
-
-    curl http://localhost:5984/my_db
-
-.. code-block:: http
-
-    HTTP/1.1 200 OK
-    Cache-Control: must-revalidate
-    Content-Length: 246
-    Content-Type: application/json
-    Date: Wed, 19 Jun 2013 16:51:20 GMT
-    Server: CouchDB (Erlang/OTP)
-
-    {
-        "committed_update_seq": 76215,
-        "compact_running": true,
-        "db_name": "my_db",
-        "disk_format_version": 6,
-        "doc_count": 5091,
-        "doc_del_count": 0,
-        "instance_start_time": "0",
-        "purge_seq": 0,
-        "sizes": {
-          "active": 3787996,
-          "disk": 17703025,
-          "external": 4763321
-        },
-        "update_seq": 76215
-    }
-
-Note that the ``compact_running`` field is ``true``, indicating that the
-compaction is actually running. To track its progress you may query the
-:get:`_active_tasks </_active_tasks>` resource::
-
-    curl http://localhost:5984/_active_tasks
-
-.. code-block:: http
-
-    HTTP/1.1 200 OK
-    Cache-Control: must-revalidate
-    Content-Length: 175
-    Content-Type: application/json
-    Date: Wed, 19 Jun 2013 16:27:23 GMT
-    Server: CouchDB (Erlang/OTP)
-
-    [
-        {
-            "changes_done": 44461,
-            "database": "my_db",
-            "pid": "<0.218.0>",
-            "progress": 58,
-            "started_on": 1371659228,
-            "total_changes": 76215,
-            "type": "database_compaction",
-            "updated_on": 1371659241
-        }
-    ]
-
-.. _compact/views:
-
-Manual View Compaction
-======================
-
-`Views` also need compaction. Unlike databases, views are compacted by groups
-per `design document`. To start their compaction, send the HTTP
-:post:`/{db}/_compact/{ddoc}` request::
-
-    curl -H "Content-Type: application/json" -X POST http://localhost:5984/dbname/_compact/designname
-
-.. code-block:: javascript
-
-    {"ok":true}
-
-This compacts the view index from the current version of the specified design
-document. The HTTP response code is :statuscode:`202`
-(like :ref:`compaction for databases <compact/db>`) and a compaction background
-task will be created.
-
-.. _compact/views/cleanup:
-
-Views cleanup
--------------
-
-View index files on disk are named after the `MD5` hash of the view definition.
-When you change a view, old index files remain on disk. To clean up all outdated
-view indexes (files named after MD5 hashes of view definitions that no longer
-exist) you can trigger a :ref:`view cleanup <api/db/view_cleanup>`::
-
-    curl -H "Content-Type: application/json" -X POST http://localhost:5984/dbname/_view_cleanup
-
-.. code-block:: javascript
-
-    {"ok":true}
diff --git a/src/maintenance/index.rst b/src/maintenance/index.rst
deleted file mode 100644
index 449deff..0000000
--- a/src/maintenance/index.rst
+++ /dev/null
@@ -1,21 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-===========
-Maintenance
-===========
-
-.. toctree::
-
-    compaction
-    performance
-    backups
diff --git a/src/maintenance/performance.rst b/src/maintenance/performance.rst
deleted file mode 100644
index 63d25ca..0000000
--- a/src/maintenance/performance.rst
+++ /dev/null
@@ -1,326 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _performance:
-
-===========
-Performance
-===========
-
-With up to tens of thousands of documents you will generally find CouchDB to
-perform well no matter how you write your code. Once you start getting into
-the millions of documents you need to be a lot more careful.
-
-Disk I/O
-========
-
-File Size
----------
-
-The smaller your file size, the fewer `I/O` operations there will be,
-the more of the file can be cached by CouchDB and the operating system,
-and the quicker it is to replicate, back up, and so on. Consequently you
-should carefully examine the data you are storing. For example, it would be
-silly to use keys that are hundreds of characters long, but your program would
-be hard to maintain if you only used single-character keys. Also consider
-carefully how much data is duplicated by putting it into views.
-
-Disk and File System Performance
---------------------------------
-
-Using faster disks, striped RAID arrays and modern file systems can all speed
-up your CouchDB deployment. However, there is one option that can increase
-the responsiveness of your CouchDB server when disk performance is a
-bottleneck. From the Erlang documentation for the file module:
-
-    On operating systems with thread support, it is possible to let file
-    operations be performed in threads of their own, allowing other Erlang
-    processes to continue executing in parallel with the file operations.
-    See the `command line flag +A in erl(1)`_.
-
-Setting this argument to a number greater than zero can keep your CouchDB
-installation responsive even during periods of heavy disk utilization. The
-easiest way to set this option is through the ``ERL_FLAGS`` environment
-variable. For example, to give Erlang four threads with which to perform I/O
-operations add the following to ``(prefix)/etc/defaults/couchdb``
-(or equivalent)::
-
-    export ERL_FLAGS="+A 4"
-
-.. _command line flag +A in erl(1): http://erlang.org/doc/man/erl.html
-
-System Resource Limits
-======================
-
-One of the problems that administrators run into as their deployments become
-large are resource limits imposed by the system and by the application
-configuration. Raising these limits can allow your deployment to grow beyond
-what the default configuration will support.
-
-CouchDB Configuration Options
------------------------------
-
-max_dbs_open
-^^^^^^^^^^^^
-
-In your :ref:`configuration <config>` (local.ini or similar) familiarize
-yourself with the :config:option:`couchdb/max_dbs_open`:
-
-.. code-block:: ini
-
-    [couchdb]
-    max_dbs_open = 100
-
-This option places an upper bound on the number of databases that can be
-open at one time. CouchDB reference counts database accesses internally and
-will close idle databases when it must. Sometimes it is necessary to keep
-more than the default open at once, such as in deployments where many databases
-will be continuously replicating.
-
-Erlang
-------
-
-Even if you've increased the maximum connections CouchDB will allow,
-the Erlang runtime system will not allow more than 65536 connections by
-default. Adding the following directive to ``(prefix)/etc/vm.args`` (or
-equivalent) will increase this limit (in this case to 102400)::
-
-    +Q 102400
-
-Note that on Windows, Erlang will not actually increase the file descriptor
-limit past 8192 (i.e. the system header–defined value of ``FD_SETSIZE``). On
-macOS, the limit may be as low as 1024. See `this tip for a possible
-workaround`_ and `this thread for a deeper explanation`_.
-
-.. _this tip for a possible workaround: http://erlang.org/pipermail/erlang-questions/2011-December/063119.html
-.. _this thread for a deeper explanation: http://erlang.org/pipermail/erlang-questions/2011-October/061971.html
-
-Maximum open file descriptors (ulimit)
---------------------------------------
-
-In general, modern UNIX-like systems can handle very large numbers of file
-handles per process (e.g. 100000) without problem. Don't be afraid to increase
-this limit on your system.
-
-The method of increasing these limits varies, depending on your init system and
-particular OS release. The default value for many OSes is 1024 or 4096. On a
-system with many databases or many views, CouchDB can very rapidly hit this
-limit.
-
-For systemd-based Linuxes (such as CentOS/RHEL 7, Ubuntu 16.04+, Debian 8
-or newer), assuming you are launching CouchDB from systemd, you must
-raise the limit by editing the service override file. The best practice
-for this is the ``systemctl edit couchdb`` command. Add these lines to
-the file in the editor::
-
-    [Service]
-    LimitNOFILE=65536
-
-...or whatever value you like. To increase this value higher than 65536, you
-must also add the Erlang ``+Q`` parameter to your ``etc/vm.args`` file by
-adding the line::
-
-    +Q 102400
-
-The old ``ERL_MAX_PORTS`` environment variable is ignored by the version of
-Erlang supplied with CouchDB.
-
-If your system is set up to use the Pluggable Authentication Modules (`PAM`_),
-and you are **not** launching CouchDB from systemd, increasing this limit
-is straightforward. For example, creating a file named
-``/etc/security/limits.d/100-couchdb.conf`` with the following contents will
-ensure that CouchDB can open up to 65536 file descriptors at once::
-
-    #<domain>    <type>    <item>    <value>
-    couchdb      hard      nofile    65536
-    couchdb      soft      nofile    65536
-
-If you are using our Debian/Ubuntu sysvinit script (``/etc/init.d/couchdb``),
-you also need to raise the limits for the root user::
-
-    #<domain>    <type>    <item>    <value>
-    root         hard      nofile    65536
-    root         soft      nofile    65536
-
-You may also have to edit the ``/etc/pam.d/common-session`` and
-``/etc/pam.d/common-session-noninteractive`` files to add the line::
-
-    session required pam_limits.so
-
-if it is not already present.
-
-If your system does not use PAM, a `ulimit` command is usually available for
-use in a custom script to launch CouchDB with increased resource limits.
-Typical syntax would be something like `ulimit -n 65536`.
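Before raising any limits, it helps to know what the current shell (and any process started from it) is allowed; a minimal check, assuming a POSIX shell:

```shell
# Print the current soft and hard limits for open file descriptors.
ulimit -Sn
ulimit -Hn

# Raising the soft limit for this session (may not exceed the hard limit):
# ulimit -n 65536
```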
-
-.. _PAM: http://www.linux-pam.org/
-
-Network
-=======
-
-There is latency overhead in making and receiving each request and response.
-In general you should do your requests in batches. Most APIs have some
-mechanism to do batches, usually by supplying lists of documents or keys in
-the request body. Be careful what size you pick for the batches: the larger
-the batch, the more time your client spends encoding the items into JSON, and
-the more time is spent decoding that number of responses. Do some benchmarking
-with your own configuration and typical data to find the sweet spot.
-It is likely to be between one and ten thousand documents.
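As a sketch of the client side of batching, documents can be grouped into fixed-size chunks before each chunk is sent (for example, to ``/db/_bulk_docs`` in one request). The helper name here is hypothetical; the batch size is the tunable you would benchmark:

```javascript
// Split an array of documents into batches of at most `size` items,
// e.g. one POST to /db/_bulk_docs per batch.
function toBatches(docs, size) {
    var batches = [];
    for (var i = 0; i < docs.length; i += size) {
        batches.push(docs.slice(i, i + size));
    }
    return batches;
}

// 25 documents in batches of 10 -> three requests of 10, 10 and 5 docs.
var docs = [];
for (var n = 0; n < 25; n++) {
    docs.push({ _id: "doc-" + n });
}
var batches = toBatches(docs, 10);
console.log(batches.length);    // 3
console.log(batches[2].length); // 5
```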
-
-If you have a fast I/O system then you can also use concurrency - have
-multiple requests/responses at the same time. This mitigates the latency
-involved in assembling JSON, doing the networking and decoding JSON.
-
-As of CouchDB 1.1.0, users have often reported lower document write performance
-compared to older releases. The main reason is that this release ships with
-the more recent version of the HTTP server library MochiWeb, which by default
-sets the TCP socket option `SO_NODELAY`_ to false. This means that small data
-sent to the TCP socket, like the reply to a document write request (or reading
-a very small document), will not be sent immediately to the network - TCP will
-buffer it for a while hoping that it will be asked to send more data through
-the same socket and then send all the data at once for increased performance.
-This TCP buffering behaviour can be disabled via
-:config:option:`httpd/socket_options`:
-
-.. code-block:: ini
-
-    [httpd]
-    socket_options = [{nodelay, true}]
-
-.. _SO_NODELAY: http://en.wikipedia.org/wiki/Nagle%27s_algorithm
-
-.. seealso::
-    Bulk :ref:`load <api/db/all_docs>` and :ref:`store <api/db/bulk_docs>` API.
-
-Connection limit
-----------------
-
-`MochiWeb`_ handles CouchDB requests.
-The default maximum number of connections is 2048. To change this limit, use the
-``server_options`` configuration variable. ``max`` indicates the maximum
-number of connections.
-
-.. code-block:: ini
-
-    [chttpd]
-    server_options = [{backlog, 128}, {acceptor_pool_size, 16}, {max, 4096}]
-
-.. _MochiWeb: https://github.com/mochi/mochiweb
-
-CouchDB
-=======
-
-DELETE operation
-----------------
-
-When you :method:`DELETE` a document the database will create a new
-revision which contains the ``_id`` and ``_rev`` fields as well as
-the `_deleted` flag. This revision will remain even after a `database
-compaction` so that the deletion can be replicated. Deleted documents, like
-non-deleted documents, can affect view build times, :method:`PUT` and
-:method:`DELETE` request times, and the size of the database since they
-increase the size of the B+Tree. You can see the number of deleted documents
-in :get:`database information </{db}>`. If your use case creates lots of
-deleted documents (for example, if you are storing short-term data like log
-entries, message queues, etc), you might want to periodically switch to a new
-database and delete the old one (once the entries in it have all expired).
-
-Document's ID
--------------
-
-The database file size is derived from your document and view sizes, but it
-also scales with the size of your ``_id`` values. Not only is the ``_id``
-present in the document, but it and parts of it are duplicated in the binary
-tree structure CouchDB uses to navigate the file to find the document in the
-first place. As a real-world example, one user switching from 16-byte ids to
-4-byte ids saw a 10-million-document database go from 21 GB to 4 GB (the raw
-JSON text went from 2.5 GB to 2 GB).
-
-Inserting with sequential (or at least sorted) ids is faster than with random
-ids. Consequently you should consider generating ids yourself, allocating them
-sequentially and using an encoding scheme that consumes fewer bytes.
-For example, a 64-bit value that takes 16 hex digits to represent fits in
-11 base-62 digits (10 numerals, 26 lower case, 26 upper case).
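A minimal sketch of such an encoder in plain JavaScript (exact only up to ``Number.MAX_SAFE_INTEGER``, i.e. 2^53 - 1; a ``BigInt`` variant would be needed beyond that):

```javascript
// Encode a non-negative integer counter in base 62, producing shorter ids
// than the equivalent hex representation.
var ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";

function toBase62(n) {
    var s = "";
    do {
        s = ALPHABET[n % 62] + s;
        n = Math.floor(n / 62);
    } while (n > 0);
    return s;
}

console.log(toBase62(0));       // "0"
console.log(toBase62(61));      // "Z"
console.log(toBase62(62));      // "10"
console.log(toBase62(1000000)); // "4c92" - 4 digits instead of 5 hex digits ("f4240")
```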
-
-Views
-=====
-
-.. _views/generation:
-
-Views Generation
-----------------
-
-Views with the JavaScript query server are extremely slow to generate when
-there is a non-trivial number of documents to process. The generation process
-won't even saturate a single CPU, let alone your I/O. The cause is the latency
-of communication between the CouchDB server and the separate `couchjs` query
-server, which dramatically illustrates how important it is to take latency out
-of your implementation.
-
-You can allow "stale" view access for a quick response, but it isn't practical
-to predict when a request will be answered quickly and when the view will first
-need updating, which can take a long time. (A 10 million document database took
-about 10 minutes to load into CouchDB but about 4 hours to do view generation).
-
-In a cluster, "stale" requests are serviced by a fixed set of shards in order
-to present users with consistent results between requests. This comes with an
-availability trade-off - the fixed set of shards might not be the most
-responsive / available within the cluster. If you don't need this kind of
-consistency (e.g. your indexes are relatively static), you can tell CouchDB to
-use any available replica by specifying ``stable=false&update=false`` instead of
-``stale=ok``, or ``stable=false&update=lazy`` instead of ``stale=update_after``.
-
-View indexes aren't replicated - they are rebuilt on each database, so you
-can't do the view generation on a separate server.
-
-Built-In Reduce Functions
--------------------------
-
-If you’re using a very simple view function that only performs a sum or count
-reduction, you can call native Erlang implementations of them by simply
-writing ``_sum`` or ``_count`` in place of your function declaration.
-This will speed up things dramatically, as it cuts down on IO between CouchDB
-and the :ref:`JavaScript query server <query-server/js>`. For example, as
-`mentioned on the mailing list`_, the time for outputting an (already indexed
-and cached) view with about 78,000 items went down from 60 seconds to 4 seconds.
-
-Before:
-
-.. code-block:: javascript
-
-    {
-        "_id": "_design/foo",
-        "views": {
-            "bar": {
-                "map": "function (doc) { emit(doc.author, 1); }",
-                "reduce": "function (keys, values, rereduce) { return sum(values); }"
-            }
-        }
-    }
-
-After:
-
-.. code-block:: javascript
-
-    {
-        "_id": "_design/foo",
-        "views": {
-            "bar": {
-                "map": "function (doc) { emit(doc.author, 1); }",
-                "reduce": "_sum"
-            }
-        }
-    }
-
-.. _mentioned on the mailing list: http://mail-archives.apache.org/mod_mbox/couchdb-user/201003.mbox/%3c5E07E00E-3D69-4A8C-ADA3-1B20CF0BA4C8@julianstahnke.com%3e
-
-.. seealso::
-    :ref:`reducefun/builtin`
diff --git a/src/partitioned-dbs/index.rst b/src/partitioned-dbs/index.rst
deleted file mode 100644
index b1c9e90..0000000
--- a/src/partitioned-dbs/index.rst
+++ /dev/null
@@ -1,396 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _partitioned-dbs:
-
-=====================
-Partitioned Databases
-=====================
-
-A partitioned database forms documents into logical partitions by using
-a partition key. All documents are assigned to a partition, and many documents
-are typically given the same partition key. The benefit of partitioned databases
-is that secondary indices can be significantly more efficient when locating
-matching documents since their entries are contained within their partition.
-This means a given secondary index read will only scan a single partition
-range instead of having to read from a copy of every shard.
-
-To introduce partitioned databases, we'll consider a motivating use case
-that describes the benefits of this feature. For this example, we'll
-consider a database that stores readings from a large network of soil
-moisture sensors.
-
-.. note::
-    Before reading this document you should be familiar with the
-    :ref:`theory <cluster/theory>` of :ref:`sharding <cluster/sharding>`
-    in CouchDB.
-
-Traditionally, a document in this database may have something like the
-following structure:
-
-.. code-block:: javascript
-
-    {
-        "_id": "sensor-reading-ca33c748-2d2c-4ed1-8abf-1bca4d9d03cf",
-        "_rev":"1-14e8f3262b42498dbd5c672c9d461ff0",
-        "sensor_id": "sensor-260",
-        "location": [41.6171031, -93.7705674],
-        "field_name": "Bob's Corn Field #5",
-        "readings": [
-            ["2019-01-21T00:00:00", 0.15],
-            ["2019-01-21T06:00:00", 0.14],
-            ["2019-01-21T12:00:00", 0.16],
-            ["2019-01-21T18:00:00", 0.11]
-        ]
-    }
-
-.. note::
-    While this example uses IoT sensors, the main thing to consider is that
-    there is a logical grouping of documents. Similar use cases might be
-    documents grouped by user or scientific data grouped by experiment.
-
-So we've got a bunch of sensors, all grouped by the field they monitor
-along with their readouts for a given day (or other appropriate time period).
-
-Along with our documents, we might expect to have two secondary indexes
-for querying our database that might look something like:
-
-.. code-block:: javascript
-
-    function(doc) {
-        if(doc._id.indexOf("sensor-reading-") != 0) {
-            return;
-        }
-        doc.readings.forEach(function(r) {
-            emit([doc.sensor_id, r[0]], r[1]);
-        });
-    }
-
-and:
-
-.. code-block:: javascript
-
-    function(doc) {
-        if(doc._id.indexOf("sensor-reading-") != 0) {
-            return;
-        }
-        emit(doc.field_name, doc.sensor_id)
-    }
-
-With these two indexes defined, we can easily find all readings for a given
-sensor, or list all sensors in a given field.
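To see what the readings index would contain, the map function can be exercised outside CouchDB by stubbing ``emit``. This is only a sketch (the stub and the harness are not part of any CouchDB API); it assumes ``doc.readings`` is iterated with ``forEach`` so that ``r`` is each ``[timestamp, value]`` pair:

```javascript
// Stub emit() to collect rows, then run the readings map function
// against the sample sensor document from above.
var rows = [];
function emit(key, value) {
    rows.push({ key: key, value: value });
}

var doc = {
    _id: "sensor-reading-ca33c748-2d2c-4ed1-8abf-1bca4d9d03cf",
    sensor_id: "sensor-260",
    field_name: "Bob's Corn Field #5",
    readings: [
        ["2019-01-21T00:00:00", 0.15],
        ["2019-01-21T06:00:00", 0.14],
        ["2019-01-21T12:00:00", 0.16],
        ["2019-01-21T18:00:00", 0.11]
    ]
};

function byReading(doc) {
    if (doc._id.indexOf("sensor-reading-") != 0) {
        return;
    }
    doc.readings.forEach(function(r) {
        emit([doc.sensor_id, r[0]], r[1]);
    });
}

byReading(doc);
console.log(rows.length); // 4 index entries, one per reading
console.log(rows[0].key); // ["sensor-260", "2019-01-21T00:00:00"]
```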
-
-Unfortunately, in CouchDB, reading from either of these indexes requires
-finding a copy of every shard and asking it for any documents related
-to the particular sensor or field. This means that as our database scales
-up the number of shards, every index request must perform more work,
-which is unnecessary since we are only interested in a small number of documents.
-Fortunately for you, dear reader, partitioned databases were created to solve
-this precise problem.
-
-What is a partition?
-====================
-
-In the previous section, we introduced a hypothetical database that contains
-sensor readings from an IoT field monitoring service. In this particular
-use case, it's quite logical to group all documents by their ``sensor_id``
-field. In this case, we would call the ``sensor_id`` the partition key.
-
-A good partition has two basic properties. First, it should have a high
-cardinality. That is, a large partitioned database should have many more
-partitions than documents in any single partition. A database that has
-a single partition would be an anti-pattern for this feature. Secondly,
-the amount of data per partition should be "small". The general
-recommendation is to limit individual partitions to less than ten
-gigabytes (10 GB) of data, which, for the example sensor documents,
-equates to roughly 60,000 years of data.
-
-.. note::
-
-    The ``max_partition_size`` option in the ``[couchdb]`` configuration
-    section dictates the partition size limit. The default value is 10 GiB;
-    setting it to ``0`` disables the limit.
-
-Why use partitions?
-===================
-
-The primary benefit of using partitioned databases is for the performance
-of partitioned queries. Large databases with lots of documents often
-have a similar pattern where there are groups of related documents that
-are queried together.
-
-By using partitions, we can execute queries against these individual groups
-of documents more efficiently by placing the entire group within a specific
-shard on disk. Thus, the view engine only has to consult one copy of the
-given shard range when executing a query instead of executing
-the query across all ``q`` shards in the database. This means that you do
-not have to wait for all ``q`` shards to respond, which is both more
-efficient and faster.
-
-Partitions By Example
-=====================
-
-To create a partitioned database, we simply need to pass a query string
-parameter:
-
-.. code-block:: bash
-
-    shell> curl -X PUT http://127.0.0.1:5984/my_new_db?partitioned=true
-    {"ok":true}
-
-To see that our database is partitioned, we can look at the database
-information:
-
-.. code-block:: bash
-
-    shell> curl http://127.0.0.1:5984/my_new_db
-    {
-      "cluster": {
-        "n": 3,
-        "q": 8,
-        "r": 2,
-        "w": 2
-      },
-      "compact_running": false,
-      "db_name": "my_new_db",
-      "disk_format_version": 7,
-      "doc_count": 0,
-      "doc_del_count": 0,
-      "instance_start_time": "0",
-      "props": {
-        "partitioned": true
-      },
-      "purge_seq": "0-g1AAAAFDeJzLYWBg4M...",
-      "sizes": {
-        "active": 0,
-        "external": 0,
-        "file": 66784
-      },
-      "update_seq": "0-g1AAAAFDeJzLYWBg4M..."
-    }
-
-You'll now see that the ``"props"`` member contains ``"partitioned": true``.
-
-.. note::
-
-    Every document in a partitioned database (except ``_design``
-    and ``_local`` documents) must have an id of the form
-    ``partition:docid``. More specifically, the partition for a given
-    document is everything before the first colon. The document id is
-    everything after the first colon, which may include more colons.
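The split-at-first-colon rule can be sketched as a small helper (the function name here is hypothetical, purely for illustration):

```javascript
// Split a partitioned document id into its partition and docid parts.
// The partition is everything before the FIRST colon; the docid may
// itself contain more colons.
function splitPartitionedId(id) {
    var idx = id.indexOf(":");
    if (idx < 1) {
        throw new Error("not a valid partitioned doc id: " + id);
    }
    return {
        partition: id.substring(0, idx),
        docid: id.substring(idx + 1)
    };
}

var parts = splitPartitionedId("sensor-260:reading:2019-01-21");
console.log(parts.partition); // "sensor-260"
console.log(parts.docid);     // "reading:2019-01-21"
```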
-
-.. note::
-
-    System databases (such as _users) are *not* allowed to be partitioned. This is
-    due to system databases already having their own incompatible
-    requirements on document ids.
-
-Now that we've created a partitioned database, it's time to add some documents.
-Using our earlier example, we could do this as such:
-
-.. code-block:: bash
-
-    shell> cat doc.json
-    {
-        "_id": "sensor-260:sensor-reading-ca33c748-2d2c-4ed1-8abf-1bca4d9d03cf",
-        "sensor_id": "sensor-260",
-        "location": [41.6171031, -93.7705674],
-        "field_name": "Bob's Corn Field #5",
-        "readings": [
-            ["2019-01-21T00:00:00", 0.15],
-            ["2019-01-21T06:00:00", 0.14],
-            ["2019-01-21T12:00:00", 0.16],
-            ["2019-01-21T18:00:00", 0.11]
-        ]
-    }
-    shell> curl -X POST -H "Content-Type: application/json" \
-                http://127.0.0.1:5984/my_new_db -d @doc.json
-    {
-        "ok": true,
-        "id": "sensor-260:sensor-reading-ca33c748-2d2c-4ed1-8abf-1bca4d9d03cf",
-        "rev": "1-05ed6f7abf84250e213fcb847387f6f5"
-    }
-
-The only change required to the first example document is that we are now
-including the partition name in the document id by prepending it to the
-old id separated by a colon.
-
-.. note::
-
-    The partition name in the document id is not magical. Internally,
-    the database is simply using only the partition for hashing
-    the document to a given shard, instead of the entire document id.
-
-Working with documents in a partitioned database is no different from
-working with a non-partitioned database. All APIs are available, and
-existing client code will work seamlessly.
-
-Now that we have created a document, we can get some info about the partition
-containing the document:
-
-.. code-block:: bash
-
-    shell> curl http://127.0.0.1:5984/my_new_db/_partition/sensor-260
-    {
-      "db_name": "my_new_db",
-      "doc_count": 1,
-      "doc_del_count": 0,
-      "partition": "sensor-260",
-      "sizes": {
-        "active": 244,
-        "external": 347
-      }
-    }
-
-And we can also list all documents in a partition:
-
-.. code-block:: bash
-
-    shell> curl http://127.0.0.1:5984/my_new_db/_partition/sensor-260/_all_docs
-    {"total_rows": 1, "offset": 0, "rows":[
-        {
-            "id":"sensor-260:sensor-reading-ca33c748-2d2c-4ed1-8abf-1bca4d9d03cf",
-            "key":"sensor-260:sensor-reading-ca33c748-2d2c-4ed1-8abf-1bca4d9d03cf",
-            "value": {"rev": "1-05ed6f7abf84250e213fcb847387f6f5"}
-        }
-    ]}
-
-Note that we can use all of the normal bells and whistles available to
-``_all_docs`` requests. Accessing ``_all_docs`` through the
-``/dbname/_partition/name/_all_docs`` endpoint is mostly a convenience
-so that requests are guaranteed to be scoped to a given partition. Users
-are free to use the normal ``/dbname/_all_docs`` to read documents from
-multiple partitions. Both query styles have the same performance.
-
-Next, we'll create a design document containing our index for
-getting all readings from a given sensor. The map function is similar to
-our earlier example except we've accounted for the change in the document
-id.
-
-.. code-block:: javascript
-
-    function(doc) {
-        if(doc._id.indexOf(":sensor-reading-") < 0) {
-            return;
-        }
-        doc.readings.forEach(function(r) {
-            emit([doc.sensor_id, r[0]], r[1]);
-        });
-    }
-
-After uploading our design document, we can try out a partitioned query:
-
-.. code-block:: bash
-
-    shell> cat ddoc.json
-    {
-        "_id": "_design/sensor-readings",
-        "views": {
-            "by_sensor": {
-                "map": "function(doc) { ... }"
-            }
-        }
-    }
-    shell> curl -X POST -H "Content-Type: application/json" http://127.0.0.1:5984/my_new_db -d @ddoc.json
-    {
-        "ok": true,
-        "id": "_design/sensor-readings",
-        "rev": "1-4a8188d80fab277fccf57bdd7154dec1"
-    }
-    shell> curl http://127.0.0.1:5984/my_new_db/_partition/sensor-260/_design/sensor-readings/_view/by_sensor
-    {"total_rows":4,"offset":0,"rows":[
-    {"id":"sensor-260:sensor-reading-ca33c748-2d2c-4ed1-8abf-1bca4d9d03cf","key":["sensor-260","2019-01-21T00:00:00"],"value":0.15},
-    {"id":"sensor-260:sensor-reading-ca33c748-2d2c-4ed1-8abf-1bca4d9d03cf","key":["sensor-260","2019-01-21T06:00:00"],"value":0.14},
-    {"id":"sensor-260:sensor-reading-ca33c748-2d2c-4ed1-8abf-1bca4d9d03cf","key":["sensor-260","2019-01-21T12:00:00"],"value":0.16},
-    {"id":"sensor-260:sensor-reading-ca33c748-2d2c-4ed1-8abf-1bca4d9d03cf","key":["sensor-260","2019-01-21T18:00:00"],"value":0.11}
-    ]}
-
-Hooray! Our first partitioned query. For experienced users, that may not
-be the most exciting development, given that the only things that have
-changed are a slight tweak to the document id, and accessing views with
-a slightly different path. However, for anyone who likes performance
-improvements, it's actually a big deal. By knowing that the view results
-are all located within the provided partition name, our partitioned
-queries now perform nearly as fast as document lookups!
-
-The last thing we'll look at is how to query data across multiple partitions.
-For that, we'll implement the example sensors by field query from our
-initial example. The map function will use the same update to account
-for the new document id format, but is otherwise identical to the previous
-version:
-
-.. code-block:: javascript
-
-    function(doc) {
-        if(doc._id.indexOf(":sensor-reading-") < 0) {
-            return;
-        }
-        emit(doc.field_name, doc.sensor_id)
-    }
-
-Next, we'll create a new design doc with this function. Be sure to notice
-that the ``"options"`` member contains ``"partitioned": false``.
-
-.. code-block:: bash
-
-    shell> cat ddoc2.json
-    {
-      "_id": "_design/all_sensors",
-      "options": {
-        "partitioned": false
-      },
-      "views": {
-        "by_field": {
-          "map": "function(doc) { ... }"
-        }
-      }
-    }
-    shell> curl -X POST -H "Content-Type: application/json" http://127.0.0.1:5984/my_new_db -d @ddoc2.json
-    {
-        "ok": true,
-        "id": "_design/all_sensors",
-        "rev": "1-4a8188d80fab277fccf57bdd7154dec1"
-    }
-
-.. note::
-
-    Design documents in a partitioned database default to being
-    partitioned. Design documents that contain views for queries
-    across multiple partitions must contain the ``"partitioned": false``
-    member in the ``"options"`` object.
-
-.. note::
-
-    Design documents are either partitioned or global. They cannot
-    contain a mix of partitioned and global indexes.
-
-And to list all sensors in a given field, we would use a request like:
-
-.. code-block:: bash
-
-    shell> curl http://127.0.0.1:5984/my_new_db/_design/all_sensors/_view/by_field
-    {"total_rows":1,"offset":0,"rows":[
-    {"id":"sensor-260:sensor-reading-ca33c748-2d2c-4ed1-8abf-1bca4d9d03cf","key":"Bob's Corn Field #5","value":"sensor-260"}
-    ]}
-
-Notice that we're not using the ``/dbname/_partition/...`` path for global
-queries. This is because global queries, by definition, do not cover individual
-partitions. Other than having the ``"partitioned": false`` parameter in the
-design document, global design documents and queries are identical in
-behavior to design documents on non-partitioned databases.
-
-.. warning::
-
-    To be clear, this means that global queries perform identically to
-    queries on non-partitioned databases. Only partitioned queries
-    on a partitioned database benefit from the performance improvements.
diff --git a/src/query-server/erlang.rst b/src/query-server/erlang.rst
deleted file mode 100644
index 4478c9b..0000000
--- a/src/query-server/erlang.rst
+++ /dev/null
@@ -1,136 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. default-domain:: js
-
-.. _query-server/erlang:
-
-======
-Erlang
-======
-
-.. note::
-    The Erlang query server is disabled by default.
-    Read :ref:`configuration guide <config/native_query_servers>` about
-    reasons why and how to enable it.
-
-.. function:: Emit(Id, Value)
-
-    Emits `key`-`value` pairs to view indexer process.
-
-    .. code-block:: erlang
-
-        fun({Doc}) ->
-            <<K,_/binary>> = proplists:get_value(<<"_rev">>, Doc, null),
-            V = proplists:get_value(<<"_id">>, Doc, null),
-            Emit(<<K>>, V)
-        end.
-
-.. function:: FoldRows(Fun, Acc)
-
-    Helper to iterate over all rows in a list function.
-
-    :param Fun: Function object.
-    :param Acc: The value previously returned by `Fun`.
-
-    .. code-block:: erlang
-
-        fun(Head, {Req}) ->
-            Fun = fun({Row}, Acc) ->
-                Id = couch_util:get_value(<<"id">>, Row),
-                Send(list_to_binary(io_lib:format("Previous doc id: ~p~n", [Acc]))),
-                Send(list_to_binary(io_lib:format("Current  doc id: ~p~n", [Id]))),
-                {ok, Id}
-            end,
-            FoldRows(Fun, nil),
-            ""
-        end.
-
-.. function:: GetRow()
-
-    Retrieves the next row from a related view result.
-
-    .. code-block:: erlang
-
-        %% FoldRows background implementation.
-        %% https://git-wip-us.apache.org/repos/asf?p=couchdb.git;a=blob;f=src/couchdb/couch_native_process.erl;hb=HEAD#l368
-        %%
-        foldrows(GetRow, ProcRow, Acc) ->
-            case GetRow() of
-                nil ->
-                    {ok, Acc};
-                Row ->
-                    case (catch ProcRow(Row, Acc)) of
-                        {ok, Acc2} ->
-                            foldrows(GetRow, ProcRow, Acc2);
-                        {stop, Acc2} ->
-                            {ok, Acc2}
-                    end
-        end.
-
-.. function:: Log(Msg)
-
-    Logs a message at the `INFO` level.
-
-    :param Msg: Message to be logged
-
-    .. code-block:: erlang
-
-        fun({Doc}) ->
-            <<K,_/binary>> = proplists:get_value(<<"_rev">>, Doc, null),
-            V = proplists:get_value(<<"_id">>, Doc, null),
-            Log(lists:flatten(io_lib:format("Hello from ~s doc!", [V]))),
-            Emit(<<K>>, V)
-        end.
-
-    After the map function has run, the following line can be found in
-    CouchDB logs (e.g. at `/var/log/couchdb/couch.log`):
-
-    .. code-block:: text
-
-        [Sun, 04 Nov 2012 11:33:58 GMT] [info] [<0.9144.2>] Hello from 8d300b86622d67953d102165dbe99467 doc!
-
-.. function:: Send(Chunk)
-
-    Sends a single string `Chunk` in response.
-
-    .. code-block:: erlang
-
-        fun(Head, {Req}) ->
-            Send("Hello,"),
-            Send(" "),
-            Send("Couch"),
-            "!"
-        end.
-
-    The function above produces the following response:
-
-    .. code-block:: text
-
-        Hello, Couch!
-
-.. function:: Start(Headers)
-
-    :param Headers: Proplist of a :ref:`response object <response_object>`.
-
-    Initializes the :ref:`listfun` response. At this point, the response code
-    and headers may be defined. For example, this function redirects to the
-    CouchDB web site:
-
-    .. code-block:: erlang
-
-        fun(Head, {Req}) ->
-            Start({[{<<"code">>, 302},
-                    {<<"headers">>, {[
-                        {<<"Location">>, <<"http://couchdb.apache.org">>}]
-                    }}
-                ]}),
-            "Relax!"
-        end.
diff --git a/src/query-server/index.rst b/src/query-server/index.rst
deleted file mode 100644
index 5673655..0000000
--- a/src/query-server/index.rst
+++ /dev/null
@@ -1,37 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _query-server:
-
-============
-Query Server
-============
-
-The `Query server` is an external process that communicates with CouchDB via a
-JSON protocol over a stdio interface and processes all
-:ref:`design function <ddocs>` calls, such as JavaScript :ref:`views <viewfun>`.
-
-The default query server is written in
-:ref:`JavaScript <query-server/js>`, running via `Mozilla SpiderMonkey`_.
-You can use :ref:`other languages <config/query_servers>` by setting a Query
-server key in the ``language`` property of a design document or the
-`Content-Type` header of a `temporary view`. Design documents that do not
-specify a ``language`` property are assumed to be of type `javascript`.
-
-.. _Mozilla SpiderMonkey: https://spidermonkey.dev/
-
-.. toctree::
-    :maxdepth: 2
-
-    protocol
-    javascript
-    erlang
diff --git a/src/query-server/javascript.rst b/src/query-server/javascript.rst
deleted file mode 100644
index ce48c02..0000000
--- a/src/query-server/javascript.rst
+++ /dev/null
@@ -1,278 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. default-domain:: js
-
-.. _query-server/js:
-
-==========
-JavaScript
-==========
-
-.. note::
-    While every design function has access to all JavaScript objects, the table
-    below describes appropriate usage cases. For example, you may use
-    :func:`emit` in :ref:`mapfun`, but :func:`getRow` is not permitted
-    during :ref:`mapfun`.
-
-+--------------------------------+---------------------------------------------+
-| JS Function                    | Reasonable to use in design doc functions   |
-+================================+=============================================+
-| :func:`emit`                   | :ref:`mapfun`                               |
-+--------------------------------+---------------------------------------------+
-| :func:`getRow`                 | :ref:`listfun`                              |
-+--------------------------------+---------------------------------------------+
-| :data:`JSON`                   | any                                         |
-+--------------------------------+---------------------------------------------+
-| :func:`isArray`                | any                                         |
-+--------------------------------+---------------------------------------------+
-| :func:`log`                    | any                                         |
-+--------------------------------+---------------------------------------------+
-| :func:`provides`               | :ref:`showfun`, :ref:`listfun`              |
-+--------------------------------+---------------------------------------------+
-| :func:`registerType`           | :ref:`showfun`, :ref:`listfun`              |
-+--------------------------------+---------------------------------------------+
-| :func:`require`                | any, except :ref:`reducefun`                |
-+--------------------------------+---------------------------------------------+
-| :func:`send`                   | :ref:`listfun`                              |
-+--------------------------------+---------------------------------------------+
-| :func:`start`                  | :ref:`listfun`                              |
-+--------------------------------+---------------------------------------------+
-| :func:`sum`                    | any                                         |
-+--------------------------------+---------------------------------------------+
-| :func:`toJSON`                 | any                                         |
-+--------------------------------+---------------------------------------------+
-
-Design functions context
-========================
-
-Each design function executes in a special context of predefined objects,
-modules and functions:
-
-.. function:: emit(key, value)
-
-    Emits a `key`-`value` pair for further processing by CouchDB after the map
-    function is done.
-
-    :param key: The view key
-    :param value: The `key`'s associated value
-
-    .. code-block:: javascript
-
-        function(doc){
-            emit(doc._id, doc._rev);
-        }
-
-.. function:: getRow()
-
-    Extracts the next row from a related view result.
-
-    :return: View result row
-    :rtype: object
-
-    .. code-block:: javascript
-
-        function(head, req){
-            send('[');
-            row = getRow();
-            if (row){
-                send(toJSON(row));
-                while(row = getRow()){
-                    send(',');
-                    send(toJSON(row));
-                }
-            }
-            return ']';
-        }
-
-.. data:: JSON
-
-    `JSON2 <https://github.com/apache/couchdb/blob/main/share/server/json2.js>`_
-    object.
-
-.. function:: isArray(obj)
-
-    A helper function to check if the provided value is an `Array`.
-
-    :param obj: Any JavaScript value
-    :return: ``true`` if `obj` is `Array`-typed, ``false`` otherwise
-    :rtype: boolean
-
-.. function:: log(message)
-
-    Log a message to the CouchDB log (at the `INFO` level).
-
-    :param message: Message to be logged
-
-    .. code-block:: javascript
-
-        function(doc){
-            log('Processing doc ' + doc['_id']);
-            emit(doc['_id'], null);
-        }
-
-    After the map function has run, the following line can be found in CouchDB
-    logs (e.g. at `/var/log/couchdb/couch.log`):
-
-    .. code-block:: text
-
-        [Sat, 03 Nov 2012 17:38:02 GMT] [info] [<0.7543.0>] OS Process #Port<0.3289> Log :: Processing doc 8d300b86622d67953d102165dbe99467
-
-.. function:: provides(key, func)
-
-    Registers a callable handler for the specified MIME key.
-
-    :param key: MIME key previously defined by :func:`registerType`
-    :param func: MIME type handler
-
-.. function:: registerType(key, *mimes)
-
-    Registers a list of MIME types under the associated `key`.
-
-    :param key: MIME type key
-    :param mimes: MIME types enumeration
-
-    Predefined mappings (`key`-`array`):
-
-    - **all**: ``*/*``
-    - **text**: ``text/plain; charset=utf-8``, ``txt``
-    - **html**: ``text/html; charset=utf-8``
-    - **xhtml**: ``application/xhtml+xml``, ``xhtml``
-    - **xml**: ``application/xml``, ``text/xml``, ``application/x-xml``
-    - **js**: ``text/javascript``, ``application/javascript``,
-      ``application/x-javascript``
-    - **css**: ``text/css``
-    - **ics**: ``text/calendar``
-    - **csv**: ``text/csv``
-    - **rss**: ``application/rss+xml``
-    - **atom**: ``application/atom+xml``
-    - **yaml**: ``application/x-yaml``, ``text/yaml``
-    - **multipart_form**: ``multipart/form-data``
-    - **url_encoded_form**: ``application/x-www-form-urlencoded``
-    - **json**: ``application/json``, ``text/x-json``
-
-.. function:: require(path)
-
-    Loads a CommonJS module by the specified `path`. The path must not start
-    with a slash.
-
-    :param path: A CommonJS module path starting from the design document root
-    :return: Exported statements
-
-.. function:: send(chunk)
-
-    Sends a single string `chunk` in response.
-
-    :param chunk: Text chunk
-
-    .. code-block:: javascript
-
-        function(head, req){
-            send('Hello,');
-            send(' ');
-            send('Couch');
-            return;
-        }
-
-.. function:: start(init_resp)
-
-    Initiates a chunked response. As an option, a custom
-    :ref:`response <response_object>` object may be sent at this point.
-    For `list` functions only!
-
-    .. note::
-        List functions may set the `HTTP response code` and `headers` by calling
-        this function. It must be called before :func:`send`,
-        :func:`getRow` or a `return` statement; otherwise, the query server will
-        implicitly call this function with an empty object (``{}``).
-
-    .. code-block:: javascript
-
-        function(head, req){
-            start({
-                "code": 302,
-                "headers": {
-                    "Location": "http://couchdb.apache.org"
-                }
-            });
-            return "Relax!";
-        }
-
-.. function:: sum(arr)
-
-    Sums the items of `arr`.
-
-    :param arr: Array of numbers
-    :rtype: number
-
-.. function:: toJSON(obj)
-
-    Encodes `obj` to JSON string. This is an alias for the ``JSON.stringify``
-    method.
-
-    :param obj: JSON-encodable object
-    :return: JSON string
-
-.. _commonjs:
-
-CommonJS Modules
-================
-
-Support for `CommonJS Modules <http://wiki.commonjs.org/wiki/Modules/1.1.1>`_
-(introduced in CouchDB 0.11.0) allows you to create modular design functions
-without duplicating functionality.
-
-Here's a CommonJS module that checks user permissions:
-
-.. code-block:: javascript
-
-    function user_context(userctx, secobj) {
-        var is_admin = function() {
-            return userctx.roles.indexOf('_admin') != -1;
-        };
-        return {'is_admin': is_admin};
-    }
-
-    exports['user'] = user_context;
-
-Each module has access to additional global variables:
-
-- **module** (`object`): Contains information about the stored module
-
-  - **id** (`string`): The module id; a JSON path in ddoc context
-  - **current** (`code`): Compiled module code object
-  - **parent** (`object`): Parent frame
-  - **exports** (`object`): Export statements
-
-- **exports** (`object`): Shortcut to the ``module.exports`` object
-
-The CommonJS module can be added to a design document, like so:
-
-.. code-block:: javascript
-
-    {
-        "views": {
-            "lib": {
-                "security": "function user_context(userctx, secobj) { ... }"
-            }
-        },
-        "validate_doc_update": "function(newdoc, olddoc, userctx, secobj) {
-            user = require('views/lib/security').user_context(userctx, secobj);
-            return user.is_admin();
-        }",
-        "_id": "_design/test"
-    }
-
-Module paths are relative to the design document's ``views`` object, but
-modules can only be loaded from the object referenced via ``lib``. The
-``lib`` structure can still be used for view functions as well, by simply
-storing view functions at e.g. ``views.lib.map``, ``views.lib.reduce``, etc.
diff --git a/src/query-server/protocol.rst b/src/query-server/protocol.rst
deleted file mode 100644
index 4420e9c..0000000
--- a/src/query-server/protocol.rst
+++ /dev/null
@@ -1,1060 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _query-server/protocol:
-
-=====================
-Query Server Protocol
-=====================
-
-A `Query Server` is an external process that communicates with CouchDB via a
-simple, custom JSON protocol over stdin/stdout. It is used to process all
-design function calls: `views`, `shows`, `lists`, `filters`, `updates` and
-`validate_doc_update`.
-
-CouchDB communicates with the Query Server process through stdin/stdout with
-JSON messages that are terminated by a newline character. Messages that are
-sent to the Query Server are always `array`-typed and follow the pattern
-``[<command>, <*arguments>]\n``.
-
-.. note::
-    In the documentation examples, we omit the trailing ``\n`` for greater
-    readability. Also, examples contain formatted JSON values while real data
-    is transferred in compact mode without formatting spaces.
-
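The framing described above can be sketched in JavaScript, the default query server language. Note that ``encodeMessage`` and ``decodeMessage`` are illustrative names for this sketch, not part of the protocol:

```javascript
// Frame a message as a compact JSON array, [<command>, <arguments...>],
// terminated by a newline, as the protocol requires.
function encodeMessage(command, ...args) {
    return JSON.stringify([command, ...args]) + "\n";
}

// Parse one newline-terminated message back into values.
function decodeMessage(line) {
    return JSON.parse(line);
}

const wire = encodeMessage("reset", {reduce_limit: true, timeout: 5000});
// '["reset",{"reduce_limit":true,"timeout":5000}]\n'
```

`JSON.stringify` produces the compact form with no formatting spaces, matching what is actually transferred on the wire.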
-.. _qs/reset:
-
-``reset``
-=========
-
-:Command: ``reset``
-:Arguments: :ref:`Query server state <config/query_server_config>` (optional)
-:Returns: ``true``
-
-This resets the state of the Query Server and makes it forget all previous
-input. If applicable, this is the point to run garbage collection.
-
-CouchDB sends::
-
-    ["reset"]
-
-The Query Server answers::
-
-    true
-
-To set up new Query Server state, an object with configuration data is passed
-as the second argument.
-
-CouchDB sends::
-
-    ["reset", {"reduce_limit": true, "timeout": 5000}]
-
-The Query Server answers::
-
-    true
-
-.. _qs/add_lib:
-
-``add_lib``
-===========
-
-:Command: ``add_lib``
-:Arguments: CommonJS library object by ``views/lib`` path
-:Returns: ``true``
-
-Adds :ref:`CommonJS <commonjs>` library to Query Server state for further usage
-in `map` functions.
-
-CouchDB sends::
-
-    [
-        "add_lib",
-        {
-            "utils": "exports.MAGIC = 42;"
-        }
-    ]
-
-The Query Server answers::
-
-    true
-
-.. note::
-    This library shouldn't have any side effects or track its own state,
-    or you'll be in for a lot of unhappy debugging if something goes wrong.
-    Remember that a complete index rebuild is a heavy operation and is
-    the only way to fix mistakes with shared state.
-
-.. _qs/add_fun:
-
-``add_fun``
-===========
-
-:Command: ``add_fun``
-:Arguments: Map function source code.
-:Returns: ``true``
-
-When creating or updating a view, this is how the Query Server is sent the
-view function for evaluation. The Query Server should parse, compile, and
-evaluate the function it receives to make it callable later. If this fails, the
-Query Server returns an error. CouchDB may store multiple functions before
-sending any documents.
-
-CouchDB sends::
-
-    [
-        "add_fun",
-        "function(doc) { if(doc.score > 50) emit(null, {'player_name': doc.name}); }"
-    ]
-
-The Query Server answers::
-
-    true
-
-.. _qs/map_doc:
-
-``map_doc``
-===========
-
-:Command: ``map_doc``
-:Arguments: Document object
-:Returns: Array of key-value pairs per applied :ref:`function <qs/add_fun>`
-
-When the view function is stored in the Query Server, CouchDB starts sending
-all the documents in the database, one at a time. The Query Server calls the
-previously stored functions one after another with the document and collects
-the results. When all functions have been called, the combined result is
-returned as a JSON string.
-
-CouchDB sends::
-
-    [
-        "map_doc",
-        {
-            "_id": "8877AFF9789988EE",
-            "_rev": "3-235256484",
-            "name": "John Smith",
-            "score": 60
-        }
-    ]
-
-If the function above is the only function stored, the Query Server answers::
-
-    [
-        [
-            [null, {"player_name": "John Smith"}]
-        ]
-    ]
-
-That is, an array with the result for every function for the given document.
-
-If a document is to be excluded from the view, the array should be empty.
-
-CouchDB sends::
-
-    [
-        "map_doc",
-        {
-            "_id": "9590AEB4585637FE",
-            "_rev": "1-674684684",
-            "name": "Jane Parker",
-            "score": 43
-        }
-    ]
-
-The Query Server answers::
-
-    [[]]
-
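The ``map_doc`` exchange above can be sketched as follows; ``mapDoc`` and ``scoreView`` are illustrative stand-ins for the query server internals and the compiled view function, with ``emit`` passed in explicitly for the sketch:

```javascript
// Apply every stored map function to one document; collect one list of
// [key, value] pairs per function, matching the map_doc response shape.
function mapDoc(mapFuns, doc) {
    const results = mapFuns.map(function (fun) {
        const emitted = [];
        fun(doc, function (key, value) { emitted.push([key, value]); });
        return emitted;
    });
    return JSON.stringify(results);
}

// Stand-in for: function(doc) { if(doc.score > 50) emit(null, ...); }
function scoreView(doc, emit) {
    if (doc.score > 50) emit(null, {player_name: doc.name});
}

console.log(mapDoc([scoreView], {_id: "a", name: "John Smith", score: 60}));
// [[[null,{"player_name":"John Smith"}]]]
console.log(mapDoc([scoreView], {_id: "b", name: "Jane Parker", score: 43}));
// [[]]
```

A document the function does not emit for yields an empty inner array, which is how a document is excluded from the view.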
-.. _qs/reduce:
-
-``reduce``
-==========
-
-:Command: ``reduce``
-:Arguments:
-
-    - Reduce function source
-    - Array of :ref:`map function <mapfun>` results where each item is
-      represented in the format ``[[key, id-of-doc], value]``
-
-:Returns: Array with two values: ``true`` and another array with the reduced result
-
-If the view has a reduce function defined, CouchDB will enter into the reduce
-phase. The Query Server will receive a list of reduce functions and some map
-results on which it can apply them.
-
-CouchDB sends::
-
-    [
-        "reduce",
-        [
-            "function(k, v) { return sum(v); }"
-        ],
-        [
-            [[1, "699b524273605d5d3e9d4fd0ff2cb272"], 10],
-            [[2, "c081d0f69c13d2ce2050d684c7ba2843"], 20],
-            [[null, "foobar"], 3]
-        ]
-    ]
-
-The Query Server answers::
-
-    [
-        true,
-        [33]
-    ]
-
-Note that even though the view server receives the map results in the form
-``[[key, id-of-doc], value]``, the function may receive them in a different
-form. For example, the JavaScript Query Server applies the functions to the
-list of keys and the list of values.
-
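The splitting of keys and values described in the note above can be sketched like so; ``handleReduce`` and ``sumValues`` are illustrative stand-ins, with ``sumValues`` mirroring the example reduce function:

```javascript
// Map results arrive as [[key, id-of-doc], value] pairs; like the
// JavaScript query server, split them into a key list and a value list
// before calling each reduce function.
function handleReduce(reduceFuns, mapResults) {
    const keys = mapResults.map(function (pair) { return pair[0]; });
    const values = mapResults.map(function (pair) { return pair[1]; });
    const reductions = reduceFuns.map(function (fun) {
        return fun(keys, values, false);
    });
    return JSON.stringify([true, reductions]);
}

// Stand-in for "function(k, v) { return sum(v); }"
function sumValues(keys, values, rereduce) {
    return values.reduce(function (a, b) { return a + b; }, 0);
}

console.log(handleReduce([sumValues], [
    [[1, "699b524273605d5d3e9d4fd0ff2cb272"], 10],
    [[2, "c081d0f69c13d2ce2050d684c7ba2843"], 20],
    [[null, "foobar"], 3]
]));
// [true,[33]]
```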
-.. _qs/rereduce:
-
-``rereduce``
-============
-
-:Command: ``rereduce``
-:Arguments:
-
-    - Reduce function source
-    - List of values
-
-When building a view, CouchDB will apply the reduce step directly to the output
-of the map step and the rereduce step to the output of a previous reduce step.
-
-CouchDB will send a list of reduce functions and a list of values, with no keys
-or document ids, to the rereduce step.
-
-CouchDB sends::
-
-    [
-        "rereduce",
-        [
-            "function(k, v, r) { return sum(v); }"
-        ],
-        [
-            33,
-            55,
-            66
-        ]
-    ]
-
-The Query Server answers::
-
-    [
-        true,
-        [154]
-    ]
-
-.. _qs/ddoc:
-
-``ddoc``
-========
-
-:Command: ``ddoc``
-:Arguments: Array of objects.
-
-    - First phase (ddoc initialization):
-
-      - ``"new"``
-      - Design document ``_id``
-      - Design document object
-
-    - Second phase (design function execution):
-
-      - Design document ``_id``
-      - Function path as an array of object keys
-      - Array of function arguments
-
-:Returns:
-
-    - First phase (ddoc initialization): ``true``
-    - Second phase (design function execution): custom object depending on
-      executed function
-
-This command acts in two phases: `ddoc` registration and `design function`
-execution.
-
-In the first phase, CouchDB sends the full design document content to the Query
-Server so that it can be cached by ``_id`` for later function execution.
-
-To do this, CouchDB sends::
-
-    [
-        "ddoc",
-        "new",
-        "_design/temp",
-        {
-            "_id": "_design/temp",
-            "_rev": "8-d7379de23a751dc2a19e5638a7bbc5cc",
-            "language": "javascript",
-            "shows": {
-                "request": "function(doc,req){ return {json: req}; }",
-                "hello": "function(doc,req){ return {body: 'Hello, ' + (doc || {})._id + '!'}; }"
-            }
-        }
-    ]
-
-The Query Server answers::
-
-    true
-
-After this, the design document will be ready to serve subcommands in the
-second phase.
-
-.. note::
-    Each ``ddoc`` subcommand is a root design document key, so these are not
-    actually subcommands, but the first elements of the JSON path that may be
-    handled and processed.
-
-    The pattern for subcommand execution is common:
-
-    ``["ddoc", <design_doc_id>, [<subcommand>, <funcname>], [<argument1>, <argument2>, ...]]``
-
-.. _qs/ddoc/shows:
-
-``shows``
----------
-
-.. warning::
-
-    Show functions are deprecated in CouchDB 3.0, and will be removed in CouchDB 4.0.
-
-:Command: ``ddoc``
-:SubCommand: ``shows``
-:Arguments:
-
-    - Document object or ``null`` if the document `id` isn't specified in the request
-    - :ref:`request_object`
-
-:Returns: Array with two elements:
-
-    - ``"resp"``
-    - :ref:`response_object`
-
-Executes :ref:`show function <showfun>`.
-
-CouchDB sends::
-
-    [
-        "ddoc",
-        "_design/temp",
-        [
-            "shows",
-            "doc"
-        ],
-        [
-            null,
-            {
-                "info": {
-                    "db_name": "test",
-                    "doc_count": 8,
-                    "doc_del_count": 0,
-                    "update_seq": 105,
-                    "purge_seq": 0,
-                    "compact_running": false,
-                    "sizes": {
-                      "active": 1535048,
-                      "disk": 15818856,
-                      "external": 15515850
-                    },
-                    "instance_start_time": "1359952188595857",
-                    "disk_format_version": 6,
-                    "committed_update_seq": 105
-                },
-                "id": null,
-                "uuid": "169cb4cc82427cc7322cb4463d0021bb",
-                "method": "GET",
-                "requested_path": [
-                    "api",
-                    "_design",
-                    "temp",
-                    "_show",
-                    "request"
-                ],
-                "path": [
-                    "api",
-                    "_design",
-                    "temp",
-                    "_show",
-                    "request"
-                ],
-                "raw_path": "/api/_design/temp/_show/request",
-                "query": {},
-                "headers": {
-                    "Accept": "*/*",
-                    "Host": "localhost:5984",
-                    "User-Agent": "curl/7.26.0"
-                },
-                "body": "undefined",
-                "peer": "127.0.0.1",
-                "form": {},
-                "cookie": {},
-                "userCtx": {
-                    "db": "api",
-                    "name": null,
-                    "roles": [
-                        "_admin"
-                    ]
-                },
-                "secObj": {}
-            }
-        ]
-    ]
-
-The Query Server sends::
-
-    [
-        "resp",
-        {
-            "body": "Hello, undefined!"
-        }
-    ]
-
-.. _qs/ddoc/lists:
-
-``lists``
----------
-
-.. warning::
-
-    List functions are deprecated in CouchDB 3.0, and will be removed in CouchDB 4.0.
-
-:Command: ``ddoc``
-:SubCommand: ``lists``
-:Arguments:
-
-    - :ref:`view_head_info_object`:
-    - :ref:`request_object`
-
-:Returns: Array. See below for details.
-
-Executes :ref:`list function <listfun>`.
-
-The communication protocol for `list` functions is a bit complex, so let's use
-an example to illustrate it.
-
-Assume we have a view function that emits `id`-`rev` pairs::
-
-    function(doc) {
-        emit(doc._id, doc._rev);
-    }
-
-And we'd like to emulate the ``_all_docs`` JSON response with a list function.
-Our *first* version of the list function looks like this::
-
-    function(head, req){
-        start({'headers': {'Content-Type': 'application/json'}});
-        var resp = head;
-        var rows = [];
-        while(row=getRow()){
-            rows.push(row);
-        }
-        resp.rows = rows;
-        return toJSON(resp);
-    }
-
-The whole communication session during list function execution can be divided
-into three parts:
-
-#. Initialization
-
-   The first returned object from the list function is an array with the
-   following structure::
-
-       ["start", <chunks>, <headers>]
-
-   Where ``<chunks>`` is an array of text chunks that will be sent to the client
-   and ``<headers>`` is an object with response HTTP headers.
-
-   This message is sent from the Query Server to CouchDB on the
-   :js:func:`start` call which initializes the HTTP response to the client::
-
-       [
-           "start",
-           [],
-           {
-               "headers": {
-                   "Content-Type": "application/json"
-               }
-           }
-       ]
-
-   After this, the list function may start to process view rows.
-
-#. View Processing
-
-   Since view results can be extremely large, it is not wise to pass all the
-   rows in a single command. Instead, CouchDB sends view rows one by one
-   to the Query Server, allowing view processing and output generation to
-   proceed as a stream.
-
-   CouchDB sends a special array that carries view row data::
-
-       [
-           "list_row",
-           {
-               "id": "0cb42c267fe32d4b56b3500bc503e030",
-               "key": "0cb42c267fe32d4b56b3500bc503e030",
-               "value": "1-967a00dff5e02add41819138abb3284d"
-           }
-       ]
-
-   If the Query Server has something to return for this row, it returns an
-   array with ``"chunks"`` as the head and an array of data as the tail. For
-   this example it has nothing to return, so the response will be::
-
-       [
-         "chunks",
-         []
-       ]
-
-   When there are no more view rows to process, CouchDB sends a `list_end`
-   message to signify there is no more data to send::
-
-       ["list_end"]
-
-#. Finalization
-
-   The last stage of the communication process is returning the *list tail*:
-   the last data chunk. After this, processing of the list function is
-   complete and the client receives the full response.
-
-   For our example the last message is::
-
-       [
-           "end",
-           [
-               "{\"total_rows\":2,\"offset\":0,\"rows\":[{\"id\":\"0cb42c267fe32d4b56b3500bc503e030\",\"key\":\"0cb42c267fe32d4b56b3500bc503e030\",\"value\":\"1-967a00dff5e02add41819138abb3284d\"},{\"id\":\"431926a69504bde41851eb3c18a27b1f\",\"key\":\"431926a69504bde41851eb3c18a27b1f\",\"value\":\"1-967a00dff5e02add41819138abb3284d\"}]}"
-           ]
-       ]
-
-In this example, we have returned our result in a single message from the Query
-Server. This is okay for small numbers of rows, but for large data sets,
-perhaps with millions of documents or millions of view rows, this would not be
-acceptable.
-
-Let's fix our list function and see the changes in communication::
-
-    function(head, req){
-        start({'headers': {'Content-Type': 'application/json'}});
-        send('{');
-        send('"total_rows":' + toJSON(head.total_rows) + ',');
-        send('"offset":' + toJSON(head.offset) + ',');
-        send('"rows":[');
-        if (row=getRow()){
-            send(toJSON(row));
-        }
-        while(row=getRow()){
-            send(',' + toJSON(row));
-        }
-        send(']');
-        return '}';
-    }
-
-"Wait, what?" you might ask. Yes, we are building the JSON response manually
-from string chunks, but let's take a look at the logs::
-
-    [Wed, 24 Jul 2013 05:45:30 GMT] [debug] [<0.19191.1>] OS Process #Port<0.4444> Output :: ["start",["{","\"total_rows\":2,","\"offset\":0,","\"rows\":["],{"headers":{"Content-Type":"application/json"}}]
-    [Wed, 24 Jul 2013 05:45:30 GMT] [info] [<0.18963.1>] 127.0.0.1 - - GET /blog/_design/post/_list/index/all_docs 200
-    [Wed, 24 Jul 2013 05:45:30 GMT] [debug] [<0.19191.1>] OS Process #Port<0.4444> Input  :: ["list_row",{"id":"0cb42c267fe32d4b56b3500bc503e030","key":"0cb42c267fe32d4b56b3500bc503e030","value":"1-967a00dff5e02add41819138abb3284d"}]
-    [Wed, 24 Jul 2013 05:45:30 GMT] [debug] [<0.19191.1>] OS Process #Port<0.4444> Output :: ["chunks",["{\"id\":\"0cb42c267fe32d4b56b3500bc503e030\",\"key\":\"0cb42c267fe32d4b56b3500bc503e030\",\"value\":\"1-967a00dff5e02add41819138abb3284d\"}"]]
-    [Wed, 24 Jul 2013 05:45:30 GMT] [debug] [<0.19191.1>] OS Process #Port<0.4444> Input  :: ["list_row",{"id":"431926a69504bde41851eb3c18a27b1f","key":"431926a69504bde41851eb3c18a27b1f","value":"1-967a00dff5e02add41819138abb3284d"}]
-    [Wed, 24 Jul 2013 05:45:30 GMT] [debug] [<0.19191.1>] OS Process #Port<0.4444> Output :: ["chunks",[",{\"id\":\"431926a69504bde41851eb3c18a27b1f\",\"key\":\"431926a69504bde41851eb3c18a27b1f\",\"value\":\"1-967a00dff5e02add41819138abb3284d\"}"]]
-    [Wed, 24 Jul 2013 05:45:30 GMT] [debug] [<0.19191.1>] OS Process #Port<0.4444> Input  :: ["list_end"]
-    [Wed, 24 Jul 2013 05:45:30 GMT] [debug] [<0.19191.1>] OS Process #Port<0.4444> Output :: ["end",["]","}"]]
-
-Note that the Query Server now sends the response in lightweight chunks: if
-the communication process were extremely slow, the client would see response
-data appear on their screen chunk by chunk, instead of waiting for the
-complete result as with our previous list function.
-
-.. _qs/ddoc/updates:
-
-``updates``
------------
-
-:Command: ``ddoc``
-:SubCommand: ``updates``
-:Arguments:
-
-    - Document object or ``null`` if the document `id` wasn't specified in the request
-    - :ref:`request_object`
-
-:Returns: Array with three elements:
-
-    - ``"up"``
-    - Document object or ``null`` if nothing should be stored
-    - :ref:`response_object`
-
-Executes :ref:`update function <updatefun>`.
-
-CouchDB sends::
-
-    [
-        "ddoc",
-        "_design/id",
-        [
-            "updates",
-            "nothing"
-        ],
-        [
-            null,
-            {
-                "info": {
-                    "db_name": "test",
-                    "doc_count": 5,
-                    "doc_del_count": 0,
-                    "update_seq": 16,
-                    "purge_seq": 0,
-                    "compact_running": false,
-                    "sizes": {
-                      "active": 7979745,
-                      "disk": 8056936,
-                      "external": 8024930
-                    },
-                    "instance_start_time": "1374612186131612",
-                    "disk_format_version": 6,
-                    "committed_update_seq": 16
-                },
-                "id": null,
-                "uuid": "7b695cb34a03df0316c15ab529002e69",
-                "method": "POST",
-                "requested_path": [
-                    "test",
-                    "_design",
-                    "1139",
-                    "_update",
-                    "nothing"
-                ],
-                "path": [
-                    "test",
-                    "_design",
-                    "1139",
-                    "_update",
-                    "nothing"
-                ],
-                "raw_path": "/test/_design/1139/_update/nothing",
-                "query": {},
-                "headers": {
-                    "Accept": "*/*",
-                    "Accept-Encoding": "identity, gzip, deflate, compress",
-                    "Content-Length": "0",
-                    "Host": "localhost:5984"
-                },
-                "body": "",
-                "peer": "127.0.0.1",
-                "form": {},
-                "cookie": {},
-                "userCtx": {
-                    "db": "test",
-                    "name": null,
-                    "roles": [
-                        "_admin"
-                    ]
-                },
-                "secObj": {}
-            }
-        ]
-    ]
-
-The Query Server answers::
-
-    [
-        "up",
-        null,
-        {"body": "document id wasn't provided"}
-    ]
-
-or in case of successful update::
-
-    [
-        "up",
-        {
-            "_id": "7b695cb34a03df0316c15ab529002e69",
-            "hello": "world!"
-        },
-        {"body": "document was updated"}
-    ]
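As a sketch only (the helper names `handle_update` and `nothing` are inventions, not part of the protocol), a Query Server might build the three-element reply like this:

```python
def handle_update(update_fun, doc, req):
    # Run an update function and build the three-element reply:
    # ["up", new_doc_or_null, response_object].
    new_doc, resp = update_fun(doc, req)
    return ["up", new_doc, resp]

def nothing(doc, req):
    # Mirrors the "nothing" example above: with no document id in the
    # request, doc is None and nothing is stored.
    if doc is None:
        return None, {"body": "document id wasn't provided"}
    doc["hello"] = "world!"
    return doc, {"body": "document was updated"}

no_id = handle_update(nothing, None, {"method": "POST"})
updated = handle_update(nothing,
                        {"_id": "7b695cb34a03df0316c15ab529002e69"},
                        {"method": "POST"})
```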
-
-.. _qs/ddoc/filters:
-
-``filters``
------------
-
-:Command: ``ddoc``
-:SubCommand: ``filters``
-:Arguments:
-
-    - Array of document objects
-    - :ref:`request_object`
-
-:Returns: Array of two elements:
-
-    - ``true``
-    - Array of booleans in the same order as the input documents.
-
-Executes :ref:`filter function <filterfun>`.
-
-CouchDB sends::
-
-    [
-        "ddoc",
-        "_design/test",
-        [
-            "filters",
-            "random"
-        ],
-        [
-            [
-                {
-                    "_id": "431926a69504bde41851eb3c18a27b1f",
-                    "_rev": "1-967a00dff5e02add41819138abb3284d",
-                    "_revisions": {
-                        "start": 1,
-                        "ids": [
-                            "967a00dff5e02add41819138abb3284d"
-                        ]
-                    }
-                },
-                {
-                    "_id": "0cb42c267fe32d4b56b3500bc503e030",
-                    "_rev": "1-967a00dff5e02add41819138abb3284d",
-                    "_revisions": {
-                        "start": 1,
-                        "ids": [
-                            "967a00dff5e02add41819138abb3284d"
-                        ]
-                    }
-                }
-            ],
-            {
-                "info": {
-                    "db_name": "test",
-                    "doc_count": 5,
-                    "doc_del_count": 0,
-                    "update_seq": 19,
-                    "purge_seq": 0,
-                    "compact_running": false,
-                    "sizes": {
-                      "active": 7979745,
-                      "disk": 8056936,
-                      "external": 8024930
-                    },
-                    "instance_start_time": "1374612186131612",
-                    "disk_format_version": 6,
-                    "committed_update_seq": 19
-                },
-                "id": null,
-                "uuid": "7b695cb34a03df0316c15ab529023a81",
-                "method": "GET",
-                "requested_path": [
-                    "test",
-                    "_changes?filter=test",
-                    "random"
-                ],
-                "path": [
-                    "test",
-                    "_changes"
-                ],
-                "raw_path": "/test/_changes?filter=test/random",
-                "query": {
-                    "filter": "test/random"
-                },
-                "headers": {
-                    "Accept": "application/json",
-                    "Accept-Encoding": "identity, gzip, deflate, compress",
-                    "Content-Length": "0",
-                    "Content-Type": "application/json; charset=utf-8",
-                    "Host": "localhost:5984"
-                },
-                "body": "",
-                "peer": "127.0.0.1",
-                "form": {},
-                "cookie": {},
-                "userCtx": {
-                    "db": "test",
-                    "name": null,
-                    "roles": [
-                        "_admin"
-                    ]
-                },
-                "secObj": {}
-            }
-        ]
-    ]
-
-The Query Server answers::
-
-    [
-        true,
-        [
-            true,
-            false
-        ]
-    ]
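A minimal sketch of the dispatch, assuming a hypothetical filter that passes only first-generation revisions (`handle_filters` and `first_generation` are illustrative names):

```python
def handle_filters(filter_fun, docs, req):
    # Apply the filter to each document; the reply is [true, [...]] with
    # one boolean per input document, in the same order.
    return [True, [bool(filter_fun(doc, req)) for doc in docs]]

def first_generation(doc, req):
    # Hypothetical filter: pass only first-generation revisions.
    return doc.get("_rev", "").startswith("1-")

docs = [
    {"_id": "431926a6", "_rev": "1-967a00df"},
    {"_id": "0cb42c26", "_rev": "2-b91bb807"},
]
reply = handle_filters(first_generation, docs, {})
```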
-
-.. _qs/ddoc/views:
-
-``views``
----------
-
-:Command: ``ddoc``
-:SubCommand: ``views``
-:Arguments: Array of document objects
-:Returns: Array of two elements:
-
-    - ``true``
-    - Array of booleans in the same order as the input documents.
-
-.. versionadded:: 1.2
-
-Executes :ref:`view function <viewfilter>` in place of the filter.
-
-Acts in the same way as :ref:`qs/ddoc/filters` command.
-
-.. _qs/ddoc/validate_doc_update:
-
-``validate_doc_update``
------------------------
-
-:Command: ``ddoc``
-:SubCommand: ``validate_doc_update``
-:Arguments:
-
-    - Document object that will be stored
-    - Document object that will be replaced
-    - :ref:`userctx_object`
-    - :ref:`security_object`
-
-:Returns: ``1``
-
-Executes :ref:`validation function <vdufun>`.
-
-CouchDB sends::
-
-    [
-        "ddoc",
-        "_design/id",
-        ["validate_doc_update"],
-        [
-            {
-                "_id": "docid",
-                "_rev": "2-e0165f450f6c89dc6b071c075dde3c4d",
-                "score": 10
-            },
-            {
-                "_id": "docid",
-                "_rev": "1-9f798c6ad72a406afdbf470b9eea8375",
-                "score": 4
-            },
-            {
-                "name": "Mike",
-                "roles": ["player"]
-            },
-            {
-                "admins": {},
-                "members": []
-            }
-        ]
-    ]
-
-The Query Server answers::
-
-    1
-
-.. note::
-    While the only valid success response for this command is ``1``, the
-    Query Server can prevent the document from being saved by raising an
-    error: ``forbidden`` or ``unauthorized``. These errors are turned into
-    proper ``HTTP 403`` and ``HTTP 401`` responses respectively.
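A sketch of how a validation function might behave, assuming a made-up rule that a score may only grow (`Forbidden` and `validate_score_increases` are illustrative names, not part of the protocol):

```python
class Forbidden(Exception):
    """Stands in for the {"forbidden": reason} error reply."""

def validate_score_increases(new_doc, old_doc, user_ctx, sec_obj):
    # Hypothetical rule for the documents shown above: a score may only grow.
    if old_doc is not None and new_doc.get("score", 0) < old_doc.get("score", 0):
        raise Forbidden("score may not decrease")
    return 1  # the only successful reply for validate_doc_update

ok = validate_score_increases(
    {"_id": "docid", "score": 10},
    {"_id": "docid", "score": 4},
    {"name": "Mike", "roles": ["player"]},
    {"admins": {}, "members": []})

try:
    validate_score_increases({"_id": "docid", "score": 1},
                             {"_id": "docid", "score": 4}, {}, {})
    rejected = False
except Forbidden:
    rejected = True
```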
-
-.. _qs/ddoc/rewrites:
-
-``rewrites``
------------------------
-
-:Command: ``ddoc``
-:SubCommand: ``rewrites``
-:Arguments:
-
-    - :ref:`request2_object`
-
-:Returns: Array of two elements:
-
-    - ``"ok"``
-    - Rewritten request object or an immediate response object
-
-Executes :ref:`rewrite function <api/ddoc/rewrite>`.
-
-CouchDB sends::
-
-    [
-        "ddoc",
-        "_design/id",
-        ["rewrites"],
-        [
-            {
-                "method": "POST",
-                "requested_path": [
-                    "test",
-                    "_design",
-                    "1139",
-                    "_update",
-                    "nothing"
-                ],
-                "path": [
-                    "test",
-                    "_design",
-                    "1139",
-                    "_update",
-                    "nothing"
-                ],
-                "raw_path": "/test/_design/1139/_update/nothing",
-                "query": {},
-                "headers": {
-                    "Accept": "*/*",
-                    "Accept-Encoding": "identity, gzip, deflate, compress",
-                    "Content-Length": "0",
-                    "Host": "localhost:5984"
-                },
-                "body": "",
-                "peer": "127.0.0.1",
-                "cookie": {},
-                "userCtx": {
-                    "db": "test",
-                    "name": null,
-                    "roles": [
-                        "_admin"
-                    ]
-                },
-                "secObj": {}
-            }
-        ]
-    ]
-
-The Query Server answers::
-
-    [
-        "ok",
-        {
-            "path": "some/path",
-            "query": {"key1": "value1", "key2": "value2"},
-            "method": "METHOD",
-            "headers": {"Header1": "value1", "Header2": "value2"},
-            "body": ""
-        }
-    ]
-
-or in case of direct response::
-
-    [
-        "ok",
-        {
-            "headers": {"Content-Type": "text/plain"},
-            "body": "Welcome!",
-            "code": 200
-        }
-    ]
-
-or for immediate redirect::
-
-    [
-        "ok",
-        {
-            "headers": {"Location": "http://example.com/path/"},
-            "code": 302
-        }
-    ]
-
-.. _qs/errors:
-
-Returning errors
-================
-
-When something goes wrong, the Query Server can inform CouchDB by sending a
-special message in response to the received command.
-
-Error messages prevent further command execution and return an error description
-to CouchDB. Errors are logically divided into two groups:
-
-- `Common errors`. These errors only break the current Query Server command and
-  return the error info to the CouchDB instance *without* terminating the Query
-  Server process.
-- `Fatal errors`. Fatal errors signal a condition that cannot be recovered
-  from. For instance, if your design function is unable to import a third-party
-  module, it is better to treat such an error as fatal and terminate the whole
-  process.
-
-.. _qs/error:
-
-``error``
----------
-
-To raise an error, the Query Server should respond with::
-
-    ["error", "error_name", "reason why"]
-
-The ``"error_name"`` helps to classify problems by their type, e.g.
-``"value_error"`` to indicate improper data, ``"not_found"`` to indicate a
-missing resource, and ``"type_error"`` to indicate an improper data type.
-
-The ``"reason why"`` explains in human-readable terms what went wrong, and
-possibly how to resolve it.
-
-For example, calling :ref:`updatefun` against a non-existent document could
-produce the error message::
-
-    ["error", "not_found", "Update function requires existent document"]
-
-.. _qs/error/forbidden:
-
-``forbidden``
--------------
-
-The `forbidden` error is widely used by :ref:`vdufun` to stop further function
-processing and prevent storage of the new document revision. Since this is not
-actually an error but an assertion against user actions, CouchDB doesn't log
-it at `"error"` level, but returns an `HTTP 403 Forbidden` response with an
-error information object.
-
-To raise this error, the Query Server should respond with::
-
-    {"forbidden": "reason why"}
-
-.. _qs/error/unauthorized:
-
-``unauthorized``
-----------------
-
-The `unauthorized` error acts much like the `forbidden` one, but carries the
-meaning of *please authorize first*. This small difference helps end users
-understand what they can do to solve the problem. As with `forbidden`,
-CouchDB doesn't log it at `"error"` level, but returns an `HTTP 401
-Unauthorized` response with an error information object.
-
-To raise this error, the Query Server should respond with::
-
-    {"unauthorized": "reason why"}
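The three reply shapes can be captured in small helpers (illustrative names, assuming nothing beyond the formats shown above):

```python
def error_reply(name, reason):
    # Common (non-fatal) error: aborts the current command only; the
    # Query Server process keeps running.
    return ["error", name, reason]

def forbidden_reply(reason):
    # Becomes an HTTP 403 Forbidden response.
    return {"forbidden": reason}

def unauthorized_reply(reason):
    # Becomes an HTTP 401 Unauthorized response.
    return {"unauthorized": reason}

missing = error_reply("not_found", "Update function requires existent document")
```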
-
-.. _qs/log:
-
-Logging
-=======
-
-At any time, the Query Server may send some information that will be saved in
-CouchDB's log file. This is done by sending a special `log` object with a single
-argument, on a separate line::
-
-    ["log", "some message"]
-
-CouchDB does not respond, but writes the received message to the log file::
-
-    [Sun, 13 Feb 2009 23:31:30 GMT] [info] [<0.72.0>] Query Server Log Message: some message
-
-These messages are only logged at :config:option:`info level <log/level>`.
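A minimal sketch of emitting such a message, writing to an arbitrary stream instead of the real stdout pipe:

```python
import io
import json

def send_log(out, message):
    # Emit a ["log", message] line; CouchDB writes it to its log file
    # at info level and sends no reply back.
    out.write(json.dumps(["log", message]) + "\n")

buf = io.StringIO()
send_log(buf, "some message")
line = buf.getvalue()
```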
diff --git a/src/replication/conflicts.rst b/src/replication/conflicts.rst
deleted file mode 100644
index f876400..0000000
--- a/src/replication/conflicts.rst
+++ /dev/null
@@ -1,787 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _replication/conflicts:
-
-==============================
-Replication and conflict model
-==============================
-
-Let's take the following example to illustrate replication and conflict
-handling.
-
-- Alice has a document containing Bob's business card;
-- She synchronizes it between her desktop PC and her laptop;
-- On the desktop PC, she updates Bob's E-mail address;
-- Without syncing again, she updates Bob's mobile number on the laptop;
-- Then she replicates the two to each other again.
-
-So on the desktop the document has Bob's new E-mail address and his old mobile
-number, and on the laptop it has his old E-mail address and his new mobile
-number.
-
-The question is, what happens to these conflicting updated documents?
-
-CouchDB replication
-===================
-
-CouchDB works with JSON documents inside databases. Replication of databases
-takes place over HTTP, and can be either a "pull" or a "push", but is
-unidirectional. So the easiest way to perform a full sync is to do a "push"
-followed by a "pull" (or vice versa).
-
-So, Alice creates v1 and syncs it. She updates to v2a on one side and v2b on
-the other, and then replicates. What happens?
-
-The answer is simple: both versions exist on both sides!
-
-.. code-block:: text
-
-      DESKTOP                          LAPTOP
-    +---------+
-    | /db/bob |                                     INITIAL
-    |   v1    |                                     CREATION
-    +---------+
-
-    +---------+                      +---------+
-    | /db/bob |  ----------------->  | /db/bob |     PUSH
-    |   v1    |                      |   v1    |
-    +---------+                      +---------+
-
-    +---------+                      +---------+  INDEPENDENT
-    | /db/bob |                      | /db/bob |     LOCAL
-    |   v2a   |                      |   v2b   |     EDITS
-    +---------+                      +---------+
-
-    +---------+                      +---------+
-    | /db/bob |  ----------------->  | /db/bob |     PUSH
-    |   v2a   |                      |   v2a   |
-    +---------+                      |   v2b   |
-                                     +---------+
-
-    +---------+                      +---------+
-    | /db/bob |  <-----------------  | /db/bob |     PULL
-    |   v2a   |                      |   v2a   |
-    |   v2b   |                      |   v2b   |
-    +---------+                      +---------+
-
-After all, this is not a file system, so there's no restriction that only one
-document can exist with the name /db/bob. These are just "conflicting" revisions
-under the same name.
-
-Because the changes are always replicated, the data is safe. Both machines have
-identical copies of both documents, so failure of a hard drive on either side
-won't lose any of the changes.
-
-Another thing to notice is that peers do not have to be configured or tracked.
-You can do regular replications to peers, or you can do one-off, ad-hoc pushes
-or pulls. After the replication has taken place, there is no record kept of
-which peer any particular document or revision came from.
-
-So the question now is: what happens when you try to read /db/bob? By default,
-CouchDB picks one arbitrary revision as the "winner", using a deterministic
-algorithm so that the same choice will be made on all peers. The same happens
-with views: the deterministically-chosen winner is the only revision fed into
-your map function.
-
-Let's say that the winner is v2a. On the desktop, if Alice reads the document
-she'll see v2a, which is what she saved there. But on the laptop, after
-replication, she'll also see only v2a. It could look as if the changes she made
-there have been lost - but of course they have not, they have just been hidden
-away as a conflicting revision. But eventually she'll need these changes merged
-into Bob's business card, otherwise they will effectively have been lost.
-
-Any sensible business-card application will, at minimum, have to present the
-conflicting versions to Alice and allow her to create a new version
-incorporating information from them all. Ideally it would merge the updates
-itself.
-
-Conflict avoidance
-==================
-
-When working on a single node, CouchDB will avoid creating conflicting revisions
-by returning a :statuscode:`409` error. This is because, when you
-PUT a new version of a document, you must give the ``_rev`` of the previous
-version. If that ``_rev`` has already been superseded, the update is rejected
-with a :statuscode:`409` response.
-
-So imagine two users on the same node are fetching Bob's business card, updating
-it concurrently, and writing it back:
-
-.. code-block:: text
-
-    USER1    ----------->  GET /db/bob
-             <-----------  {"_rev":"1-aaa", ...}
-
-    USER2    ----------->  GET /db/bob
-             <-----------  {"_rev":"1-aaa", ...}
-
-    USER1    ----------->  PUT /db/bob?rev=1-aaa
-             <-----------  {"_rev":"2-bbb", ...}
-
-    USER2    ----------->  PUT /db/bob?rev=1-aaa
-             <-----------  409 Conflict  (not saved)
-
-User2's changes are rejected, so it's up to the app to fetch /db/bob again,
-and either:
-
-#. apply the same changes as were applied to the earlier revision, and submit
-   a new PUT
-#. redisplay the document so the user has to edit it again
-#. simply overwrite it with the document it was trying to save (not
-   advisable, as user1's changes will be silently lost)
-
-So when working in this mode, your application still has to be able to handle
-these conflicts and have a suitable retry strategy, but these conflicts never
-end up inside the database itself.
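The check-and-update cycle can be simulated with a toy in-memory store (`SingleNodeDB` and its fake rev format are inventions for illustration, not CouchDB internals):

```python
class ConflictError(Exception):
    """Stands in for CouchDB's 409 Conflict response."""

class SingleNodeDB:
    # Toy in-memory store enforcing the rule above: a PUT must name the
    # current _rev of the document, or it is rejected.
    def __init__(self):
        self._docs = {}

    def get(self, docid):
        return dict(self._docs[docid])

    def put(self, docid, doc, rev=None):
        current = self._docs.get(docid)
        if current is not None and current["_rev"] != rev:
            raise ConflictError(docid)
        gen = 1 if current is None else int(current["_rev"].split("-")[0]) + 1
        stored = dict(doc, _rev=f"{gen}-{gen:03x}")  # fake rev, not a real hash
        self._docs[docid] = stored
        return stored["_rev"]

db = SingleNodeDB()
rev1 = db.put("bob", {"email": "old", "mobile": "old"})
# Two users read rev1, then both try to write against it.
rev2 = db.put("bob", {"email": "new", "mobile": "old"}, rev=rev1)  # user1
try:
    db.put("bob", {"email": "old", "mobile": "new"}, rev=rev1)     # user2
    rejected = False
except ConflictError:
    rejected = True
```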
-
-Revision tree
-=============
-
-When you update a document in CouchDB, it keeps a list of the previous
-revisions. In the case where conflicting updates are introduced, this history
-branches into a tree, where the current conflicting revisions for this document
-form the tips (leaf nodes) of this tree:
-
-.. code-block:: text
-
-      ,--> r2a
-    r1 --> r2b
-      `--> r2c
-
-Each branch can then extend its history - for example if you read revision r2b
-and then PUT with ?rev=r2b then you will make a new revision along that
-particular branch.
-
-.. code-block:: text
-
-      ,--> r2a -> r3a -> r4a
-    r1 --> r2b -> r3b
-      `--> r2c -> r3c
-
-Here, (r4a, r3b, r3c) are the set of conflicting revisions. The way you resolve
-a conflict is to delete the leaf nodes along the other branches. So when you
-combine (r4a+r3b+r3c) into a single merged document, you would replace r4a and
-delete r3b and r3c.
-
-.. code-block:: text
-
-      ,--> r2a -> r3a -> r4a -> r5a
-    r1 --> r2b -> r3b -> (r4b deleted)
-      `--> r2c -> r3c -> (r4c deleted)
-
-Note that r4b and r4c still exist as leaf nodes in the history tree, but as
-deleted docs. You can retrieve them but they will be marked ``"_deleted":true``.
-
-When you compact a database, the bodies of all the non-leaf documents are
-discarded. However, the list of historical _revs is retained, for the benefit of
-later conflict resolution in case you meet any old replicas of the database at
-some time in future. There is "revision pruning" to stop this getting
-arbitrarily large.
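The leaf-node bookkeeping can be sketched over a plain parent-to-children map (an illustration only; CouchDB's actual rev-tree representation differs):

```python
def leaf_revisions(tree, deleted=frozenset()):
    # A revision tree as a parent -> children map. Live leaves are the
    # revisions with no children that are not deletion tombstones.
    children = set()
    for kids in tree.values():
        children.update(kids)
    nodes = set(tree) | children
    return sorted(r for r in nodes if not tree.get(r) and r not in deleted)

# The tree from the example: one branch extended to r5a, the two losing
# branches capped with deleted r4b and r4c.
tree = {
    "r1": ["r2a", "r2b", "r2c"],
    "r2a": ["r3a"], "r3a": ["r4a"], "r4a": ["r5a"],
    "r2b": ["r3b"], "r3b": ["r4b"],
    "r2c": ["r3c"], "r3c": ["r4c"],
}
before_resolution = leaf_revisions(tree)          # tombstones count as leaves
after_resolution = leaf_revisions(tree, deleted={"r4b", "r4c"})
```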
-
-Working with conflicting documents
-==================================
-
-The basic :get:`/{db}/{docid}` operation will not show you any
-information about conflicts. You see only the deterministically-chosen winner,
-and get no indication as to whether other conflicting revisions exist or not:
-
-.. code-block:: javascript
-
-    {
-        "_id":"test",
-        "_rev":"2-b91bb807b4685080c6a651115ff558f5",
-        "hello":"bar"
-    }
-
-If you do ``GET /db/test?conflicts=true``, and the document is in a conflict
-state, then you will get the winner plus a _conflicts member containing an array
-of the revs of the other, conflicting revision(s). You can then fetch them
-individually using subsequent ``GET /db/test?rev=xxxx`` operations:
-
-.. code-block:: javascript
-
-    {
-        "_id":"test",
-        "_rev":"2-b91bb807b4685080c6a651115ff558f5",
-        "hello":"bar",
-        "_conflicts":[
-            "2-65db2a11b5172bf928e3bcf59f728970",
-            "2-5bc3c6319edf62d4c624277fdd0ae191"
-        ]
-    }
-
-If you do ``GET /db/test?open_revs=all`` then you will get all the leaf nodes of
-the revision tree. This will give you all the current conflicts, but will also
-give you leaf nodes which have been deleted (i.e. parts of the conflict history
-which have since been resolved). You can remove these by filtering out documents
-with ``"_deleted":true``:
-
-.. code-block:: javascript
-
-    [
-        {"ok":{"_id":"test","_rev":"2-5bc3c6319edf62d4c624277fdd0ae191","hello":"foo"}},
-        {"ok":{"_id":"test","_rev":"2-65db2a11b5172bf928e3bcf59f728970","hello":"baz"}},
-        {"ok":{"_id":"test","_rev":"2-b91bb807b4685080c6a651115ff558f5","hello":"bar"}}
-    ]
-
-The ``"ok"`` tag is an artifact of ``open_revs``, which also lets you list
-explicit revisions as a JSON array, e.g. ``open_revs=[rev1,rev2,rev3]``. In this
-form, it would be possible to request a revision which is now missing, because
-the database has been compacted.
-
-.. note::
-    The order of revisions returned by ``open_revs=all`` is **NOT** related to
-    the deterministic "winning" algorithm. In the above example, the winning
-    revision is 2-b91b... and happens to be returned last, but in other cases it
-    can be returned in a different position.
-
-Once you have retrieved all the conflicting revisions, your application can then
-choose to display them all to the user. Or it could attempt to merge them, write
-back the merged version, and delete the conflicting versions - that is, to
-resolve the conflict permanently.
-
-As described above, you need to update one revision and delete all the
-conflicting revisions explicitly. This can be done using a single `POST` to
-``_bulk_docs``, setting ``"_deleted":true`` on those revisions you wish to
-delete.
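Assuming the revisions from the earlier examples, a helper to build that single `_bulk_docs` body might look like this (`resolution_payload` is an illustrative name):

```python
def resolution_payload(merged_doc, losing_revs):
    # One _bulk_docs body that updates the winning revision and deletes
    # every losing leaf in a single request.
    docs = [merged_doc]
    for rev in losing_revs:
        docs.append({"_id": merged_doc["_id"], "_rev": rev, "_deleted": True})
    return {"docs": docs}

payload = resolution_payload(
    {"_id": "test", "_rev": "2-b91bb807b4685080c6a651115ff558f5",
     "hello": "foo+bar+baz"},
    ["2-65db2a11b5172bf928e3bcf59f728970",
     "2-5bc3c6319edf62d4c624277fdd0ae191"])
```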
-
-Multiple document API
-=====================
-
-Finding conflicted documents with Mango
----------------------------------------
-
-.. versionadded:: 2.2.0
-
-CouchDB's :ref:`Mango system <api/db/_find>` allows easy querying of
-documents with conflicts, returning the full body of each document as well.
-
-Here's how to use it to find all conflicts in a database:
-
-.. code-block:: bash
-
-    $ curl -X POST http://127.0.0.1:5984/dbname/_find \
-        -d '{"selector": {"_conflicts": { "$exists": true}}, "conflicts": true}' \
-        -Hcontent-type:application/json
-
-.. code-block:: javascript
-
-    {"docs": [
-    {"_id":"doc","_rev":"1-3975759ccff3842adf690a5c10caee42","a":2,"_conflicts":["1-23202479633c2b380f79507a776743d5"]}
-    ],
-    "bookmark": "g1AAAABheJzLYWBgYMpgSmHgKy5JLCrJTq2MT8lPzkzJBYozA1kgKQ6YVA5QkBFMgKSVDHWNjI0MjEzMLc2MjZONkowtDNLMLU0NzBPNzc3MTYxTTLOysgCY2ReV"}
-
-The ``bookmark`` value can be used to navigate through additional pages of
-results if necessary. Mango by default only returns 25 results per request.
-
-If you expect to run this query often, be sure to create a Mango secondary
-index to speed the query:
-
-.. code-block:: bash
-
-    $ curl -X POST http://127.0.0.1:5984/dbname/_index \
-        -d '{"index":{"fields": ["_conflicts"]}}' \
-        -Hcontent-type:application/json
-
-Of course, the selector can be enhanced to filter documents on additional
-keys in the document. Be sure to add those keys to your secondary index as
-well, or a full database scan will be triggered.
-
-Finding conflicted documents using the ``_all_docs`` index
-----------------------------------------------------------
-
-You can fetch multiple documents at once using ``include_docs=true`` on a view.
-However, a ``conflicts=true`` request is ignored; the "doc" part of the value
-never includes a ``_conflicts`` member. Hence you would need to do another query
-to determine for each document whether it is in a conflicting state:
-
-.. code-block:: bash
-
-    $ curl 'http://127.0.0.1:5984/conflict_test/_all_docs?include_docs=true&conflicts=true'
-
-.. code-block:: javascript
-
-    {
-        "total_rows":1,
-        "offset":0,
-        "rows":[
-            {
-                "id":"test",
-                "key":"test",
-                "value":{"rev":"2-b91bb807b4685080c6a651115ff558f5"},
-                "doc":{
-                    "_id":"test",
-                    "_rev":"2-b91bb807b4685080c6a651115ff558f5",
-                    "hello":"bar"
-                }
-            }
-        ]
-    }
-
-.. code-block:: bash
-
-    $ curl 'http://127.0.0.1:5984/conflict_test/test?conflicts=true'
-
-.. code-block:: javascript
-
-    {
-        "_id":"test",
-        "_rev":"2-b91bb807b4685080c6a651115ff558f5",
-        "hello":"bar",
-        "_conflicts":[
-            "2-65db2a11b5172bf928e3bcf59f728970",
-            "2-5bc3c6319edf62d4c624277fdd0ae191"
-        ]
-    }
-
-View map functions
-==================
-
-Views only get the winning revision of a document. However, they also get a
-``_conflicts`` member if there are any conflicting revisions. This means you can
-write a view whose job is specifically to locate documents with conflicts.
-Here is a simple map function which achieves this:
-
-.. code-block:: javascript
-
-    function(doc) {
-        if (doc._conflicts) {
-            emit(null, [doc._rev].concat(doc._conflicts));
-        }
-    }
-
-which gives the following output:
-
-.. code-block:: javascript
-
-    {
-        "total_rows":1,
-        "offset":0,
-        "rows":[
-            {
-                "id":"test",
-                "key":null,
-                "value":[
-                    "2-b91bb807b4685080c6a651115ff558f5",
-                    "2-65db2a11b5172bf928e3bcf59f728970",
-                    "2-5bc3c6319edf62d4c624277fdd0ae191"
-                ]
-            }
-        ]
-    }
-
-If you do this, you can have a separate "sweep" process which periodically scans
-your database, looks for documents which have conflicts, fetches the conflicting
-revisions, and resolves them.
-
-Whilst this keeps the main application simple, the problem with this approach
-is that there will be a window between a conflict being introduced and it
-being resolved. From a user's viewpoint, a document they just saved
-successfully may appear to suddenly lose its changes, only for them to be
-resurrected some time later. This may or may not be acceptable.
-
-Also, it's easy to forget to start the sweeper, or not to implement it properly,
-and this will introduce odd behaviour which will be hard to track down.
-
-CouchDB's "winning" revision algorithm may mean that information drops out of a
-view until a conflict has been resolved. Consider Bob's business card again;
-suppose Alice has a view which emits mobile numbers, so that her telephony
-application can display the caller's name based on caller ID. If there are
-conflicting documents with Bob's old and new mobile numbers, and they happen to
-be resolved in favour of Bob's old number, then the view won't be able to
-recognise his new one. In this particular case, the application might have
-preferred to put information from both the conflicting documents into the view,
-but this currently isn't possible.
-
-Suggested algorithm to fetch a document with conflict resolution:
-
-#. Get document via ``GET docid?conflicts=true`` request
-#. For each member in the ``_conflicts`` array call ``GET docid?rev=xxx``.
-   If any errors occur at this stage, restart from step 1.
-   (There could be a race where someone else has already resolved this conflict
-   and deleted that rev)
-#. Perform application-specific merging
-#. Write ``_bulk_docs`` with an update to the first rev and deletes of the other
-   revs.
-
-This could either be done on every read (in which case you could replace all
-calls to GET in your application with calls to a library which does the above),
-or as part of your sweeper code.
-
-And here is an example of this in Ruby using the low-level `RestClient`_:
-
-.. _RestClient: https://rubygems.org/gems/rest-client
-
-.. code-block:: ruby
-
-    require 'rubygems'
-    require 'rest_client'
-    require 'json'
-    DB="http://127.0.0.1:5984/conflict_test"
-
-    # Write multiple documents
-    def writem(docs)
-        JSON.parse(RestClient.post("#{DB}/_bulk_docs", {
-            "docs" => docs,
-        }.to_json))
-    end
-
-    # Write one document, return the rev
-    def write1(doc, id=nil, rev=nil)
-        doc['_id'] = id if id
-        doc['_rev'] = rev if rev
-        writem([doc]).first['rev']
-    end
-
-    # Read a document, return *all* revs
-    def read1(id)
-        retries = 0
-        loop do
-            # FIXME: escape id
-            res = [JSON.parse(RestClient.get("#{DB}/#{id}?conflicts=true"))]
-            if revs = res.first.delete('_conflicts')
-                begin
-                    revs.each do |rev|
-                        res << JSON.parse(RestClient.get("#{DB}/#{id}?rev=#{rev}"))
-                    end
-                rescue
-                    retries += 1
-                    raise if retries >= 5
-                    next
-                end
-            end
-            return res
-        end
-    end
-
-    # Create DB
-    RestClient.delete DB rescue nil
-    RestClient.put DB, {}.to_json
-
-    # Write a document
-    rev1 = write1({"hello"=>"xxx"},"test")
-    p read1("test")
-
-    # Make three conflicting versions
-    write1({"hello"=>"foo"},"test",rev1)
-    write1({"hello"=>"bar"},"test",rev1)
-    write1({"hello"=>"baz"},"test",rev1)
-
-    res = read1("test")
-    p res
-
-    # Now let's replace these three with one
-    res.first['hello'] = "foo+bar+baz"
-    res.each_with_index do |r,i|
-        unless i == 0
-            r.replace({'_id'=>r['_id'], '_rev'=>r['_rev'], '_deleted'=>true})
-        end
-    end
-    writem(res)
-
-    p read1("test")
-
-An application written this way never has to deal with a ``PUT 409``, and is
-automatically multi-master capable.
-
-You can see that it's straightforward enough when you know what you're doing.
-It's just that CouchDB doesn't currently provide a convenient HTTP API for
-"fetch all conflicting revisions", nor "PUT to supersede these N revisions", so
-you need to wrap these yourself. At the time of writing, there are no known
-client-side libraries which provide support for this.
-
-Merging and revision history
-============================
-
-Actually performing the merge is an application-specific function. It depends
-on the structure of your data. Sometimes it will be easy: e.g. if a document
-contains a list which is only ever appended to, then you can perform a union of
-the two list versions.
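For the append-only case, a hedged sketch of such a union merge (assuming each side only ever appends to the common ancestor):

```python
def merge_append_only(ancestor, version_a, version_b):
    # Union merge for a list that is only ever appended to: keep the
    # common ancestor, then each side's additions.
    base = len(ancestor)
    return ancestor + version_a[base:] + version_b[base:]

v1 = ["call Bob"]
v2a = v1 + ["email Bob"]   # appended on the desktop
v2b = v1 + ["text Bob"]    # appended on the laptop
merged = merge_append_only(v1, v2a, v2b)
```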
-
-Some merge strategies look at the changes made to an object, compared to its
-previous version. This is how Git's merge function works.
-
-For example, to merge Bob's business card versions v2a and v2b, you could look
-at the differences between v1 and v2b, and then apply these changes to v2a as
-well.
-
-With CouchDB, you can sometimes get hold of old revisions of a document.
-For example, if you fetch ``/db/bob?rev=v2b&revs_info=true`` you'll get a list
-of the previous revision ids which ended up with revision v2b. Doing the same
-for v2a you can find their common ancestor revision. However if the database
-has been compacted, the content of that document revision will have been lost.
-``revs_info`` will still show that v1 was an ancestor, but report it as
-"missing"::
-
-    BEFORE COMPACTION           AFTER COMPACTION
-
-         ,-> v2a                     v2a
-       v1
-         `-> v2b                     v2b
-
-So if you want to work with diffs, the recommended way is to store those diffs
-within the new revision itself. That is: when you replace v1 with v2a, include
-an extra field or attachment in v2a which says which fields were changed from
-v1 to v2a. This unfortunately does mean additional book-keeping for your
-application.
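A sketch of that book-keeping, assuming a hypothetical ``changed_fields`` field (this is a convention you would define yourself, not a CouchDB feature):

```ruby
# Record, inside the new revision itself, which top-level fields changed
# relative to its parent. Fields removed in the child are not tracked in
# this simplified sketch. "changed_fields" is a hypothetical convention.
def with_recorded_diff(parent, child)
  changed = child.keys
                 .reject { |k| k.start_with?('_') } # skip _id, _rev, etc.
                 .select { |k| parent[k] != child[k] }
  child.merge('changed_fields' => changed)
end

v1  = { '_id' => 'bob', 'email' => 'old@example.com', 'mobile' => '123' }
v2a = { '_id' => 'bob', 'email' => 'new@example.com', 'mobile' => '123' }
p with_recorded_diff(v1, v2a)['changed_fields'] # ["email"]
```

A later merge function can then consult ``changed_fields`` instead of fetching possibly-compacted ancestor revisions.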
-
-Comparison with other replicating data stores
-=============================================
-
-The same issues arise with other replicating systems, so it can be instructive
-to look at these and see how they compare with CouchDB. Please feel free to add
-other examples.
-
-Unison
-------
-
-`Unison`_ is a bi-directional file synchronisation tool. In this case, the
-business card would be a file, say `bob.vcf`.
-
-.. _Unison: http://www.cis.upenn.edu/~bcpierce/unison/
-
-When you run unison, changes propagate both ways. If a file has changed on one
-side but not the other, the new replaces the old. Unison maintains a local state
-file so that it knows whether a file has changed since the last successful
-replication.
-
-In our example it has changed on both sides. Only one file called `bob.vcf`
-can exist within the file system. Unison solves the problem by simply ducking
-out: the user can choose to replace the remote version with the local version,
-or vice versa (both of which would lose data), but the default action is to
-leave both sides unchanged.
-
-From Alice's point of view, at least this is a simple solution. Whenever she's
-on the desktop she'll see the version she last edited on the desktop, and
-whenever she's on the laptop she'll see the version she last edited there.
-
-But because no replication has actually taken place, the data is not protected.
-If her laptop hard drive dies, she'll lose all her changes made on the laptop;
-ditto if her desktop hard drive dies.
-
-It's up to her to copy across one of the versions manually (under a different
-filename), merge the two, and then finally push the merged version to the other
-side.
-
-Note also that the original file (version v1) has been lost at this point.
-So it's not going to be known from inspection alone whether v2a or v2b has the
-most up-to-date E-mail address for Bob, or which version has the most up-to-date
-mobile number. Alice has to remember which one she entered last.
-
-Git
----
-
-`Git`_ is a well-known distributed source control system. Like Unison, Git deals
-with files. However, Git considers the state of a whole set of files as a single
-object, the "tree". Whenever you save an update, you create a "commit" which
-points to both the updated tree and the previous commit(s), which in turn point
-to the previous tree(s). You therefore have a full history of all the states of
-the files. This history forms a branch, and a pointer is kept to the tip of the
-branch, from which you can work backwards to any previous state. The "pointer"
-is an SHA1 hash of the tip commit.
-
-.. _Git: http://git-scm.com/
-
-If you are replicating with one or more peers, a separate branch is made for
-each of those peers. For example, you might have::
-
-    main               -- my local branch
-    remotes/foo/main   -- branch on peer 'foo'
-    remotes/bar/main   -- branch on peer 'bar'
-
-In the regular workflow, replication is a "pull", importing changes from
-a remote peer into the local repository. A "pull" does two things: first "fetch"
-the state of the peer into the remote tracking branch for that peer; and then
-attempt to "merge" those changes into the local branch.
-
-Now let's consider the business card. Alice has created a Git repo containing
-``bob.vcf``, and cloned it across to the other machine. The branches look like
-this, where ``AAAAAAAA`` is the SHA1 of the commit::
-
-    ---------- desktop ----------           ---------- laptop ----------
-    main: AAAAAAAA                        main: AAAAAAAA
-    remotes/laptop/main: AAAAAAAA         remotes/desktop/main: AAAAAAAA
-
-Now she makes a change on the desktop, and commits it into the desktop repo;
-then she makes a different change on the laptop, and commits it into the laptop
-repo::
-
-    ---------- desktop ----------           ---------- laptop ----------
-    main: BBBBBBBB                        main: CCCCCCCC
-    remotes/laptop/main: AAAAAAAA         remotes/desktop/main: AAAAAAAA
-
-Now on the desktop she does ``git pull laptop``. First, the remote objects
-are copied across into the local repo and the remote tracking branch is
-updated::
-
-    ---------- desktop ----------           ---------- laptop ----------
-    main: BBBBBBBB                        main: CCCCCCCC
-    remotes/laptop/main: CCCCCCCC         remotes/desktop/main: AAAAAAAA
-
-.. note::
-    The repo still contains AAAAAAAA because commits BBBBBBBB and CCCCCCCC
-    point to it.
-
-Then Git will attempt to merge the changes in. Knowing that
-the parent commit to ``CCCCCCCC`` is ``AAAAAAAA``, it takes a diff between
-``AAAAAAAA`` and ``CCCCCCCC`` and tries to apply it to ``BBBBBBBB``.
-
-If this is successful, then you'll get a new version with a merge commit::
-
-    ---------- desktop ----------           ---------- laptop ----------
-    main: DDDDDDDD                        main: CCCCCCCC
-    remotes/laptop/main: CCCCCCCC         remotes/desktop/main: AAAAAAAA
-
-Then Alice has to logon to the laptop and run ``git pull desktop``. A similar
-process occurs. The remote tracking branch is updated::
-
-    ---------- desktop ----------           ---------- laptop ----------
-    main: DDDDDDDD                        main: CCCCCCCC
-    remotes/laptop/main: CCCCCCCC         remotes/desktop/main: DDDDDDDD
-
-Then a merge takes place. This is a special case: ``CCCCCCCC`` is one of the
-parent commits of ``DDDDDDDD``, so the laptop can `fast forward` update from
-``CCCCCCCC`` to ``DDDDDDDD`` directly without having to do any complex merging.
-This leaves the final state as::
-
-    ---------- desktop ----------           ---------- laptop ----------
-    main: DDDDDDDD                        main: DDDDDDDD
-    remotes/laptop/main: CCCCCCCC         remotes/desktop/main: DDDDDDDD
-
-Now this is all well and good, but you may wonder how it is relevant when
-thinking about CouchDB.
-
-First, note what happens in the case when the merge algorithm fails.
-The changes are still propagated from the remote repo into the local one, and
-are available in the remote tracking branch. So, unlike Unison, you know the
-data is protected. It's just that the local working copy may fail to update, or
-may diverge from the remote version. It's up to you to create and commit the
-combined version yourself, but you are guaranteed to have all the history you
-might need to do this.
-
-Note that while it is possible to build new merge algorithms into Git,
-the standard ones are focused on line-based changes to source code. They don't
-work well for XML or JSON if it's presented without any line breaks.
-
-The other interesting consideration is multiple peers. In this case you have
-multiple remote tracking branches, some of which may match your local branch,
-some of which may be behind you, and some of which may be ahead of you
-(i.e. contain changes that you haven't yet merged)::
-
-    main: AAAAAAAA
-    remotes/foo/main: BBBBBBBB
-    remotes/bar/main: CCCCCCCC
-    remotes/baz/main: AAAAAAAA
-
-Note that each peer is explicitly tracked, and therefore has to be explicitly
-created. If a peer becomes stale or is no longer needed, it's up to you to
-remove it from your configuration and delete the remote tracking branch.
-This is different from CouchDB, which doesn't keep any peer state in the
-database.
-
-Another difference between CouchDB and Git is that Git maintains all history
-back to time zero. Git compaction keeps diffs between all those versions in
-order to reduce size, but CouchDB discards them. If you are constantly
-updating a document, the size of a Git repo would grow forever. It is
-possible (with some effort) to use "history rewriting" to make Git forget
-commits earlier than a particular one.
-
-.. _replication/conflicts/git:
-
-What is the CouchDB replication protocol? Is it like Git?
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-:Author: Jason Smith
-:Date: 2011-01-29
-:Source: `StackOverflow <http://stackoverflow.com/questions/4766391/what-is-the-couchdb-replication-protocol-is-it-like-git>`_
-
-**Key points**
-
-**If you know Git, then you know how Couch replication works.** Replicating is
-*very* similar to pushing or pulling with distributed source managers like Git.
-
-**CouchDB replication does not have its own protocol.** A replicator simply
-connects to two DBs as a client, then reads from one and writes to the other.
-Push replication is reading the local data and updating the remote DB;
-pull replication is vice versa.
-
-* **Fun fact 1**: The replicator is actually an independent Erlang application,
-  in its own process. It connects to both couches, then reads records from one
-  and writes them to the other.
-* **Fun fact 2**: CouchDB has no way of knowing who is a normal client and who
-  is a replicator (let alone whether the replication is push or pull).
-  It all looks like client connections. Some of them read records. Some of them
-  write records.
-
-**Everything flows from the data model**
-
-The replication algorithm is trivial, uninteresting. A trained monkey could
-design it. It's simple because the cleverness is the data model, which has these
-useful characteristics:
-
-#. Every record in CouchDB is completely independent of all others. That sucks
-   if you want to do a JOIN or a transaction, but it's awesome if you want to
-   write a replicator. Just figure out how to replicate one record, and then
-   repeat that for each record.
-#. Like Git, records have a linked-list revision history. A record's revision ID
-   is the checksum of its own data. Subsequent revision IDs are checksums of:
-   the new data, plus the revision ID of the previous.
-
-#. In addition to application data (``{"name": "Jason", "awesome": true}``),
-   every record stores the evolutionary time line of all previous revision IDs
-   leading up to itself.
-
-   - Exercise: Take a moment of quiet reflection. Consider any two different
-     records, A and B. If A's revision ID appears in B's time line, then B
-     definitely evolved from A. Now consider Git's fast-forward merges.
-     Do you hear that? That is the sound of your mind being blown.
-
-#. Git isn't really a linear list. It has forks, when one parent has multiple
-   children. CouchDB has that too.
-
-   - Exercise: Compare two different records, A and B. A's revision ID does not
-     appear in B's time line; however, one revision ID, C, is in both A's and
-     B's time line. Thus A didn't evolve from B. B didn't evolve from A. But
-     rather, A and B have a common ancestor C. In Git, that is a "fork." In
-     CouchDB, it's a "conflict."
-
-   - In Git, if both children go on to develop their time lines independently,
-     that's cool. Forks totally support that.
-   - In CouchDB, if both children go on to develop their time lines
-     independently, that's cool too. Conflicts totally support that.
-   - **Fun fact 3**: CouchDB "conflicts" do not correspond to Git "conflicts."
-     A Couch conflict is a divergent revision history, what Git calls a "fork."
-     For this reason the CouchDB community pronounces "conflict" with a silent
-     `n`: "co-flicked."
-
-#. Git also has merges, when one child has multiple parents. CouchDB *sort* of
-   has that too.
-
-   - **In the data model, there is no merge.** The client simply marks one
-     time line as deleted and continues to work with the only extant time line.
-   - **In the application, it feels like a merge.** Typically, the client merges
-     the *data* from each time line in an application-specific way.
-     Then it writes the new data to the time line. In Git, this is like copying
-     and pasting the changes from branch A into branch B, then committing to
-     branch B and deleting branch A. The data was merged, but there was no
-     `git merge`.
-   - These behaviors are different because, in Git, the time line itself is
-     important; but in CouchDB, the data is important and the time line is
-     incidental—it's just there to support replication. That is one reason why
-     CouchDB's built-in revisioning is inappropriate for storing revision data
-     like a wiki page.
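The exercises above can be condensed into code. Given two records' revision time lines (arrays of revision IDs, newest first, as ``revs_info`` returns them), a few comparisons distinguish descent from divergence. This is an illustrative sketch, not CouchDB's internal implementation:

```ruby
# Classify the relationship between two revision time lines.
# Each time line is an array of revision IDs, newest first.
def relationship(timeline_a, timeline_b)
  head_a, head_b = timeline_a.first, timeline_b.first
  return :same if head_a == head_b
  return :b_evolved_from_a if timeline_b.include?(head_a)
  return :a_evolved_from_b if timeline_a.include?(head_b)
  # A shared ancestor without descent: a Git "fork", a CouchDB "conflict".
  (timeline_a & timeline_b).empty? ? :unrelated : :conflict
end

p relationship(['r1'], ['r2', 'r1'])         # :b_evolved_from_a
p relationship(['r2a', 'r1'], ['r2b', 'r1']) # :conflict
```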
-
-**Final notes**
-
-At least one sentence in this writeup (possibly this one) is complete BS.
diff --git a/src/replication/index.rst b/src/replication/index.rst
deleted file mode 100644
index d0c64a8..0000000
--- a/src/replication/index.rst
+++ /dev/null
@@ -1,37 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _replication:
-
-===========
-Replication
-===========
-
-Replication is an incremental one way process involving two databases
-(a source and a destination).
-
-The aim of replication is that at the end of the process, all active
-documents in the source database are also in the destination database
-and all documents that were deleted in the source database are also
-deleted in the destination database (if they even existed).
-
-The replication process only copies the last revision of a document, so all
-previous revisions that were only in the source database are not copied to the
-destination database.
-
-.. toctree::
-    :maxdepth: 2
-
-    intro
-    replicator
-    conflicts
-    protocol
diff --git a/src/replication/intro.rst b/src/replication/intro.rst
deleted file mode 100644
index 3f7a581..0000000
--- a/src/replication/intro.rst
+++ /dev/null
@@ -1,140 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _replication/intro:
-
-===========================
-Introduction to Replication
-===========================
-
-One of CouchDB's strengths is the ability to synchronize two copies of the same
-database. This enables users to distribute data across several nodes or
-data centers, but also to move data closer to clients.
-
-Replication involves a source and a destination database, which can be on the
-same or on different CouchDB instances. The aim of replication is that at
-the end of the process, all active documents in the source database are also in
-the destination database and all documents that were deleted in the source
-database are also deleted in the destination database (if they even existed).
-
-Transient and Persistent Replication
-====================================
-
-There are two different ways to set up a replication. The first, which was
-introduced into CouchDB early on, leads to a replication that could be called
-`transient`. Transient means that no documents back up the replication, so
-after a restart of the CouchDB server the replication will disappear. Later,
-the :ref:`_replicator <replicator>` database was introduced, which keeps
-documents containing your replication parameters. Such a replication can be
-called `persistent`. Transient replications were kept for backward
-compatibility. Both kinds of replication can have different
-:ref:`replication states <replicator/states>`.
-
-Triggering, Stopping and Monitoring Replications
-================================================
-
-A persistent replication is controlled through a document in the
-:ref:`_replicator <replicator>` database, where each document describes one
-replication process (see :ref:`replication-settings`). For setting up a
-transient replication, the API endpoint
-:ref:`/_replicate <api/server/replicate>` can be used. A replication is
-triggered either by sending a JSON object to the ``_replicate`` endpoint or
-by storing one as a document in the ``_replicator`` database.
-
-If a replication is currently running, its status can be inspected through the
-active tasks API (see :ref:`api/server/active_tasks`, :ref:`replication-status`
-and :ref:`api/server/_scheduler/jobs`).
-
-For document-based replications, :ref:`api/server/_scheduler/docs` can be used to
-get a complete state summary. This API is preferred as it will show the state of the
-replication document before it becomes a replication job.
-
-For transient replications there is no way to query their state when the job is
-finished.
-
-A replication can be stopped by deleting the document, or by updating it with
-its ``cancel`` property set to ``true``.
-
-Replication Procedure
-=====================
-
-During replication, CouchDB will compare the source and the destination
-database to determine which documents differ between the source and the
-destination database. It does so by following the :ref:`changes` on the source
-and comparing the documents to the destination. Changes are submitted to the
-destination in batches where they can introduce conflicts. Documents that
-already exist on the destination in the same revision are not transferred. As
-the deletion of documents is represented by a new revision, a document deleted
-on the source will also be deleted on the target.
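The comparison step resembles CouchDB's ``_revs_diff`` calculation: for each changed document, only the revisions the destination lacks need to be transferred. A simplified sketch, with plain hashes standing in for the two databases:

```ruby
# For each changed document, work out which source revisions the target
# does not yet have; only those need to be transferred.
# source_changes and target_revs map document IDs to arrays of revision IDs.
def missing_revisions(source_changes, target_revs)
  source_changes.each_with_object({}) do |(doc_id, revs), missing|
    absent = revs - (target_revs[doc_id] || [])
    missing[doc_id] = absent unless absent.empty?
  end
end

src = { 'a' => ['2-x', '1-a'], 'b' => ['1-b'] }
tgt = { 'a' => ['1-a'] }
p missing_revisions(src, tgt) # {"a"=>["2-x"], "b"=>["1-b"]}
```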
-
-A replication task will finish once it reaches the end of the changes feed. If
-its ``continuous`` property is set to true, it will wait for new changes to
-appear until the task is canceled. Replication tasks also create checkpoint
-documents on the destination to ensure that a restarted task can continue from
-where it stopped, for example after it has crashed.
-
-When a replication task is initiated on the sending node, it is called *push*
-replication; if it is initiated by the receiving node, it is called *pull*
-replication.
-
-Master - Master replication
-===========================
-
-One replication task will only transfer changes in one direction. To achieve
-master-master replication, it is possible to set up two replication tasks in
-opposite directions. When a change is replicated from database A to B by the
-first task, the second task from B to A will discover that the new change on
-B already exists in A and will wait for further changes.
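In terms of the ``_replicator`` database, this amounts to two documents, one per direction. A sketch (the database URLs and ``_id`` values are placeholders):

```ruby
# Two persistent, continuous replications in opposite directions give
# master-master behaviour. Each document would be written into the
# _replicator database. URLs are placeholders.
def master_master_docs(db_a, db_b)
  [
    { '_id' => 'a-to-b', 'source' => db_a, 'target' => db_b, 'continuous' => true },
    { '_id' => 'b-to-a', 'source' => db_b, 'target' => db_a, 'continuous' => true }
  ]
end

p master_master_docs('http://a.example:5984/db', 'http://b.example:5984/db')
```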
-
-Controlling which Documents to Replicate
-========================================
-
-There are three options for controlling which documents are replicated,
-and which are skipped:
-
-1. Defining documents as being local.
-2. Using :ref:`selectorobj`.
-3. Using :ref:`filterfun`.
-
-Local documents are never replicated (see :ref:`api/local`).
-
-:ref:`selectorobj` can be included in a replication document (see
-:ref:`replication-settings`). A selector object contains a query expression
-that is used to test whether a document should be replicated.
-
-:ref:`filterfun` can be used in a replication (see
-:ref:`replication-settings`). The replication task evaluates
-the filter function for each document in the changes feed. The document is
-only replicated if the filter returns ``true``.
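As a sketch, a replication document specifying a selector might be built like this (the database URLs and the ``type`` field are placeholder assumptions; the ``selector`` field itself is described in :ref:`replication-settings`):

```ruby
require 'json'

# A replication document that only copies documents whose "type" field
# equals "card". Source and target URLs are placeholders.
replication_doc = {
  'source'   => 'http://localhost:5984/source_db',
  'target'   => 'http://localhost:5984/target_db',
  'selector' => { 'type' => 'card' }
}

puts JSON.generate(replication_doc)
```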
-
-.. note::
-    Using a selector provides performance benefits when compared with using a
-    :ref:`filterfun`. You should use :ref:`selectorobj` where possible.
-
-.. note::
-    When using replication filters that depend on the document's content,
-    deleted documents may pose a problem, since the document passed to the
-    filter will not contain any of the document's content. This can be
-    resolved by adding a ``_deleted:true`` field to the document instead
-    of using the DELETE HTTP method, paired with the use of a
-    :ref:`validate document update <vdufun>` handler to ensure the fields
-    required for replication filters are always present. Take note, though,
-    that the deleted document will still contain all of its data (including
-    attachments)!
-
-Migrating Data to Clients
-=========================
-
-Replication can be especially useful for bringing data closer to clients.
-`PouchDB <http://pouchdb.com/>`_ implements the replication algorithm of CouchDB
-in JavaScript, making it possible to make data from a CouchDB database
-available in an offline browser application, and synchronize changes back to
-CouchDB.
diff --git a/src/replication/protocol.rst b/src/replication/protocol.rst
deleted file mode 100644
index 90101c9..0000000
--- a/src/replication/protocol.rst
+++ /dev/null
@@ -1,1898 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _replication/protocol:
-
-============================
-CouchDB Replication Protocol
-============================
-
-:Version: 3
-
-The `CouchDB Replication Protocol` is a protocol for synchronising JSON
-documents between two peers over HTTP/1.1 by using the public :ref:`CouchDB
-REST API <api>` and is based on the Apache CouchDB MVCC_ Data model.
-
-Preface
-=======
-
-Language
---------
-
-The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
-"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
-document are to be interpreted as described in :rfc:`2119`.
-
-Goals
------
-
-The primary goal of this specification is to describe the `CouchDB Replication
-Protocol` under the hood.
-
-The secondary goal is to provide enough detailed information about the protocol
-to make it easy to build tools, in any language and on any platform, that can
-synchronize data with CouchDB.
-
-Definitions
------------
-
-JSON:
-    :abbr:`JSON (JavaScript Object Notation)` is a text format for the
-    serialization of structured data. It is described in `ECMA-262`_ and
-    :rfc:`4627`.
-
-URI:
-    A URI is defined by :rfc:`3986`. It can be a URL as defined
-    in :rfc:`1738`.
-
-ID:
-    An identifier (could be a UUID) as described in :rfc:`4122`.
-
-Revision:
-    An `MVCC`_ token value of the following pattern: ``N-sig``, where ``N`` is
-    a positive integer and ``sig`` is the Document signature (custom).
-    Don't mix it up with the revision in version control systems!
-
-Leaf Revision:
-    The last Document Revision in a series of changes. Documents may have
-    multiple Leaf Revisions (aka Conflict Revisions) due to concurrent updates.
-
-Document:
-    A document is a JSON object with an ID and Revision defined in ``_id`` and
-    ``_rev`` fields respectively. A Document's ID MUST be unique within
-    the Database where it is stored.
-
-Database:
-    A collection of Documents with a unique URI.
-
-Changes Feed:
-    A stream of Document-changing events (create, update, delete) for
-    the specified Database.
-
-Sequence ID:
-    An ID provided by the Changes Feed. It MUST be incremental,
-    but MAY NOT always be an integer.
-
-Source:
-    Database from where the Documents are replicated.
-
-Target:
-    Database where the Documents are replicated to.
-
-Replication:
-    The one-way directed synchronization process of Source and Target endpoints.
-
-Checkpoint:
-    Intermediate Recorded Sequence ID used for Replication recovery.
-
-Replicator:
-    A service or an application which initiates and runs Replication.
-
-Filter Function:
-    A special function of any programming language that is used to filter
-    Documents during Replication (see :ref:`filterfun`)
-
-Filter Function Name:
-    An ID of a Filter Function that may be used as a symbolic reference (aka
-    callback function) to apply the related Filter Function to Replication.
-
-Filtered Replication:
-    Replication of Documents from Source to Target using a Filter Function.
-
-Full Replication:
-    Replication of all Documents from Source to Target.
-
-Push Replication:
-    Replication process where Source is a local endpoint and Target is remote.
-
-Pull Replication:
-    Replication process where Source is a remote endpoint and Target is local.
-
-Continuous Replication:
-    Replication that "never stops": after processing all events from the
-    Changes Feed, the Replicator doesn't close the connection, but awaits new
-    change events from the Source. The connection is kept alive by periodic
-    heartbeats.
-
-Replication Log:
-    A special Document that holds Replication history (recorded Checkpoints
-    and a few more statistics) between Source and Target.
-
-Replication ID:
-    A unique value that unambiguously identifies the Replication Log.
-
-Replication Protocol Algorithm
-==============================
-
-The `CouchDB Replication Protocol` is not *magical*, but
-an agreement on usage of the public :ref:`CouchDB HTTP REST API <api>` to
-enable Documents to be replicated from Source to Target.
-
-The reference implementation, written in Erlang_, is provided by the
-couch_replicator_ module in Apache CouchDB.
-
-It is RECOMMENDED that one follow this algorithm specification, use the same
-HTTP endpoints, and run requests with the same parameters to provide a
-completely compatible implementation. Custom Replicator implementations MAY use
-different HTTP API endpoints and request parameters depending on their local
-specifics and they MAY implement only part of the Replication Protocol to run
-only Push or Pull Replication. However, while such solutions could also run the
-Replication process, they lose compatibility with the CouchDB Replicator.
-
-Verify Peers
-------------
-
-.. code-block:: text
-
-    + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +
-    ' Verify Peers:                                                             '
-    '                                                                           '
-    '                404 Not Found   +--------------------------------+         '
-    '       +----------------------- |     Check Source Existence     |         '
-    '       |                        +--------------------------------+         '
-    '       |                        |          HEAD /source          |         '
-    '       |                        +--------------------------------+         '
-    '       |                          |                                        '
-    '       |                          | 200 OK                                 '
-    '       |                          v                                        '
-    '       |                        +--------------------------------+         '
-    '       |                        |     Check Target Existence     | ----+   '
-    '       |                        +--------------------------------+     |   '
-    '       |                        |         HEAD /target           |     |   '
-    '       |                        +--------------------------------+     |   '
-    '       |                          |                                    |   '
-    '       |                          | 404 Not Found                      |   '
-    '       v                          v                                    |   '
-    '   +-------+    No              +--------------------------------+     |   '
-    '   | Abort | <----------------- |         Create Target?         |     |   '
-    '   +-------+                    +--------------------------------+     |   '
-    '       ^                          |                                    |   '
-    '       |                          | Yes                                |   '
-    '       |                          v                                    |   '
-    '       |        Failure         +--------------------------------+     |   '
-    '       +----------------------- |          Create Target         |     |   '
-    '                                +--------------------------------+     |   '
-    '                                |           PUT /target          |     |   '
-    '                                +--------------------------------+     |   '
-    '                                  |                                    |   '
-    '                                  | 201 Created                 200 OK |   '
-    '                                  |                                    |   '
-    + - - - - - - - - - - - - - - - -  | - - - - - - - - - - - - - - - - -  | - +
-                                       |                                    |
-    + - - - - - - - - - - - - - - - -  | - - - - - - - - - - - - - - - - -  | - +
-    ' Get Peers Information:           |                                    |   '
-    '                                  +------------------------------------+   '
-    '                                  |                                        '
-    '                                  v                                        '
-    '                                +--------------------------------+         '
-    '                                |     Get Source Information     |         '
-    '                                +--------------------------------+         '
-    '                                                                           '
-    + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +
-
-The Replicator MUST ensure that both Source and Target exist
-by using :head:`/{db}` requests.
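The decision flow in the diagram above can be condensed into a small function. A sketch over plain HTTP status codes (the ``create_target`` flag mirrors the replication option of the same name; actual HEAD requests are omitted):

```ruby
# Given the status codes of HEAD /source and HEAD /target, decide the
# next step of the Verify Peers stage, per the flowchart above.
def verify_peers(source_status, target_status, create_target: false)
  return :abort unless source_status == 200        # source must exist
  return :get_peers_information if target_status == 200
  # Target missing: optionally create it, otherwise abort.
  create_target ? :create_target : :abort
end

p verify_peers(200, 200)                       # :get_peers_information
p verify_peers(200, 404, create_target: true)  # :create_target
```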
-
-Check Source Existence
-^^^^^^^^^^^^^^^^^^^^^^
-
-    **Request**:
-
-    .. code-block:: http
-
-        HEAD /source HTTP/1.1
-        Host: localhost:5984
-        User-Agent: CouchDB
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Type: application/json
-        Date: Sat, 05 Oct 2013 08:50:39 GMT
-        Server: CouchDB (Erlang/OTP)
-
-Check Target Existence
-^^^^^^^^^^^^^^^^^^^^^^
-
-    **Request**:
-
-    .. code-block:: http
-
-        HEAD /target HTTP/1.1
-        Host: localhost:5984
-        User-Agent: CouchDB
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Type: application/json
-        Date: Sat, 05 Oct 2013 08:51:11 GMT
-        Server: CouchDB (Erlang/OTP)
-
-Create Target?
-^^^^^^^^^^^^^^
-
-In case of a non-existent Target, the Replicator MAY make a :put:`/{db}`
-request to create the Target:
-
-    **Request**:
-
-    .. code-block:: http
-
-        PUT /target HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-        User-Agent: CouchDB
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 201 Created
-        Content-Length: 12
-        Content-Type: application/json
-        Date: Sat, 05 Oct 2013 08:58:41 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        {
-            "ok": true
-        }
-
-However, the Replicator's PUT request may fail due to insufficient
-privileges (as granted by the provided credentials), resulting in a
-:statuscode:`401` or a :statuscode:`403` error. Such errors SHOULD be expected
-and well handled:
-
-    .. code-block:: http
-
-        HTTP/1.1 500 Internal Server Error
-        Cache-Control: must-revalidate
-        Content-Length: 108
-        Content-Type: application/json
-        Date: Fri, 09 May 2014 13:50:32 GMT
-        Server: CouchDB (Erlang OTP)
-
-        {
-            "error": "unauthorized",
-            "reason": "unauthorized to access or create database http://localhost:5984/target"
-        }
-
-Abort
-^^^^^
-
-In case of a non-existent Source or Target, Replication SHOULD be aborted with
-an HTTP error response:
-
-    .. code-block:: http
-
-        HTTP/1.1 500 Internal Server Error
-        Cache-Control: must-revalidate
-        Content-Length: 56
-        Content-Type: application/json
-        Date: Sat, 05 Oct 2013 08:55:29 GMT
-        Server: CouchDB (Erlang OTP)
-
-        {
-            "error": "db_not_found",
-            "reason": "could not open source"
-        }
-
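The Verify Peers decisions above (abort on a missing Source, replicate when both peers exist, optionally create a missing Target, abort otherwise) can be sketched as a small decision function. This is a minimal illustration, not part of the protocol; the helper name `verify_peers` and its status-code inputs are assumptions, and a real Replicator drives this from live HEAD/PUT responses:

```python
def verify_peers(source_status, target_status, create_target=False):
    """Map HEAD /source and HEAD /target status codes to the next action.

    Returns one of: "replicate", "create_target", or "abort".
    """
    if source_status != 200:
        return "abort"            # db_not_found: could not open source
    if target_status == 200:
        return "replicate"
    # Target is missing: the Replicator MAY create it if configured to.
    if create_target:
        return "create_target"    # a 401/403 on PUT /target aborts as well
    return "abort"

print(verify_peers(200, 200))                      # replicate
print(verify_peers(200, 404, create_target=True))  # create_target
print(verify_peers(404, 200))                      # abort
```
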
-Get Peers Information
----------------------
-
-.. code-block:: text
-
-    + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -+
-    ' Verify Peers:                                                    '
-    '                         +------------------------+               '
-    '                         | Check Target Existence |               '
-    '                         +------------------------+               '
-    '                                     |                            '
-    '                                     | 200 OK                     '
-    '                                     |                            '
-    + - - - - - - - - - - - - - - - - - - | - - - - - - - - - - - - - -+
-                                          |
-    + - - - - - - - - - - - - - - - - - - | - - - - - - - - - - - - - -+
-    ' Get Peers Information:              |                            '
-    '                                     v                            '
-    '                         +------------------------+               '
-    '                         | Get Source Information |               '
-    '                         +------------------------+               '
-    '                         |      GET /source       |               '
-    '                         +------------------------+               '
-    '                                     |                            '
-    '                                     | 200 OK                     '
-    '                                     v                            '
-    '                         +------------------------+               '
-    '                         | Get Target Information |               '
-    '                         +------------------------+               '
-    '                         |      GET /target       |               '
-    '                         +------------------------+               '
-    '                                     |                            '
-    '                                     | 200 OK                     '
-    '                                     |                            '
-    + - - - - - - - - - - - - - - - - - - | - - - - - - - - - - - - - -+
-                                          |
-    + - - - - - - - - - - - - - - - - - - | - - - - - - - - - - - - - -+
-    ' Find Common Ancestry:               |                            '
-    '                                     |                            '
-    '                                     v                            '
-    '                         +-------------------------+              '
-    '                         | Generate Replication ID |              '
-    '                         +-------------------------+              '
-    '                                                                  '
-    + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -+
-
-The Replicator retrieves basic information both from Source and Target using
-:get:`/{db}` requests. The GET response MUST contain JSON objects with
-the following mandatory fields:
-
-- **instance_start_time** (*string*): Always ``"0"``. (Returned for legacy
-  reasons.)
-- **update_seq** (*number* / *string*): The current database Sequence ID.
-
-Any other fields are optional. The information that the Replicator needs
-is the ``update_seq`` field: this value is used to define a *temporary*
-(because Database data is subject to change) upper bound for listening to
-the Changes Feed and for calculating statistics to show proper Replication
-progress.
-
-Get Source Information
-^^^^^^^^^^^^^^^^^^^^^^
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /source HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-        User-Agent: CouchDB
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Length: 256
-        Content-Type: application/json
-        Date: Tue, 08 Oct 2013 07:53:08 GMT
-        Server: CouchDB (Erlang OTP)
-
-        {
-            "committed_update_seq": 61772,
-            "compact_running": false,
-            "db_name": "source",
-            "disk_format_version": 6,
-            "doc_count": 41961,
-            "doc_del_count": 3807,
-            "instance_start_time": "0",
-            "purge_seq": 0,
-            "sizes": {
-              "active": 70781613961,
-              "disk": 79132913799,
-              "external": 72345632950
-            },
-            "update_seq": 61772
-        }
-
-Get Target Information
-^^^^^^^^^^^^^^^^^^^^^^
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /target/ HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-        User-Agent: CouchDB
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Content-Length: 363
-        Content-Type: application/json
-        Date: Tue, 08 Oct 2013 12:37:01 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        {
-            "compact_running": false,
-            "db_name": "target",
-            "disk_format_version": 5,
-            "doc_count": 1832,
-            "doc_del_count": 1,
-            "instance_start_time": "0",
-            "purge_seq": 0,
-            "sizes": {
-              "active": 50829452,
-              "disk": 77001455,
-              "external": 60326450
-            },
-            "update_seq": "1841-g1AAAADveJzLYWBgYMlgTmGQT0lKzi9KdUhJMtbLSs1LLUst0k"
-        }
-
-Find Common Ancestry
---------------------
-
-.. code-block:: text
-
-    + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +
-    ' Get Peers Information:                                                    '
-    '                                                                           '
-    '                             +-------------------------------------------+ '
-    '                             |           Get Target Information          | '
-    '                             +-------------------------------------------+ '
-    '                               |                                           '
-    + - - - - - - - - - - - - - - - | - - - - - - - - - - - - - - - - - - - - - +
-                                    |
-    + - - - - - - - - - - - - - - - | - - - - - - - - - - - - - - - - - - - - - +
-    ' Find Common Ancestry:         v                                           '
-    '                             +-------------------------------------------+ '
-    '                             |          Generate Replication ID          | '
-    '                             +-------------------------------------------+ '
-    '                               |                                           '
-    '                               |                                           '
-    '                               v                                           '
-    '                             +-------------------------------------------+ '
-    '                             |      Get Replication Log from Source      | '
-    '                             +-------------------------------------------+ '
-    '                             |     GET /source/_local/replication-id     | '
-    '                             +-------------------------------------------+ '
-    '                               |                                           '
-    '                               | 200 OK                                    '
-    '                               | 404 Not Found                             '
-    '                               v                                           '
-    '                             +-------------------------------------------+ '
-    '                             |      Get Replication Log from Target      | '
-    '                             +-------------------------------------------+ '
-    '                             |     GET /target/_local/replication-id     | '
-    '                             +-------------------------------------------+ '
-    '                               |                                           '
-    '                               | 200 OK                                    '
-    '                               | 404 Not Found                             '
-    '                               v                                           '
-    '                             +-------------------------------------------+ '
-    '                             |          Compare Replication Logs         | '
-    '                             +-------------------------------------------+ '
-    '                               |                                           '
-    '                               | Use latest common sequence as start point '
-    '                               |                                           '
-    + - - - - - - - - - - - - - - - | - - - - - - - - - - - - - - - - - - - - - +
-                                    |
-                                    |
-    + - - - - - - - - - - - - - - - | - - - - - - - - - - - - - - - - - - - - - +
-    ' Locate Changed Documents:     |                                           '
-    '                               |                                           '
-    '                               v                                           '
-    '                             +-------------------------------------------+ '
-    '                             |        Listen Source Changes Feed         | '
-    '                             +-------------------------------------------+ '
-    '                                                                           '
-    + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +
-
-Generate Replication ID
-^^^^^^^^^^^^^^^^^^^^^^^
-
-Before Replication is started, the Replicator MUST generate a Replication ID.
-This value is used to track Replication History and to resume and continue a
-previously interrupted Replication process.
-
-The Replication ID generation algorithm is implementation specific. Whatever
-algorithm is used, it MUST uniquely identify the Replication process. CouchDB's
-Replicator, for example, uses the following factors in generating a Replication
-ID:
-
-- Persistent Peer UUID value. For CouchDB, the local
-  :config:option:`Server UUID <couchdb/uuid>` is used
-- Source and Target URI and whether Source or Target are local or remote
-  Databases
-- Whether the Target needed to be created
-- Whether Replication is Continuous
-- Any custom headers
-- :ref:`Filter function <filterfun>` code if used
-- Changes Feed query parameters, if any
-
-.. note::
-    See `couch_replicator_ids.erl`_ for an example of a Replication ID generation
-    implementation.
-
-    .. _couch_replicator_ids.erl: https://github.com/apache/couchdb/blob/main/src/couch_replicator/src/couch_replicator_ids.erl
-
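Since the only hard requirement is that the ID uniquely and deterministically identifies the Replication process, one way to combine the factors above is to hash them. The sketch below is an assumption for illustration only: CouchDB's actual term encoding (see `couch_replicator_ids.erl`) differs, and the function name and parameters are invented here:

```python
import hashlib

def replication_id(uuid, source, target, create_target=False,
                   continuous=False, filter_code="", query_params=()):
    """Hash the Replication ID factors into a stable hex string.

    Only determinism matters for this sketch; the exact encoding is
    implementation specific.
    """
    factors = [uuid, source, target, str(create_target), str(continuous),
               filter_code, repr(sorted(query_params))]
    return hashlib.md5("|".join(factors).encode("utf-8")).hexdigest()

rep_id = replication_id("server-uuid", "http://localhost:5984/source",
                        "http://localhost:5984/target", continuous=True)
```

Because every factor participates in the hash, changing any of them (for example, toggling ``continuous``) yields a different Replication ID and hence a separate Replication History.
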
-Retrieve Replication Logs from Source and Target
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-Once the Replication ID has been generated, the Replicator SHOULD retrieve
-the Replication Log from both Source and Target using
-:get:`/{db}/_local/{docid}`:
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /source/_local/b3e44b920ee2951cb2e123b63044427a HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-        User-Agent: CouchDB
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Length: 1019
-        Content-Type: application/json
-        Date: Thu, 10 Oct 2013 06:18:56 GMT
-        ETag: "0-8"
-        Server: CouchDB (Erlang OTP)
-
-        {
-            "_id": "_local/b3e44b920ee2951cb2e123b63044427a",
-            "_rev": "0-8",
-            "history": [
-                {
-                    "doc_write_failures": 0,
-                    "docs_read": 2,
-                    "docs_written": 2,
-                    "end_last_seq": 5,
-                    "end_time": "Thu, 10 Oct 2013 05:56:38 GMT",
-                    "missing_checked": 2,
-                    "missing_found": 2,
-                    "recorded_seq": 5,
-                    "session_id": "d5a34cbbdafa70e0db5cb57d02a6b955",
-                    "start_last_seq": 3,
-                    "start_time": "Thu, 10 Oct 2013 05:56:38 GMT"
-                },
-                {
-                    "doc_write_failures": 0,
-                    "docs_read": 1,
-                    "docs_written": 1,
-                    "end_last_seq": 3,
-                    "end_time": "Thu, 10 Oct 2013 05:56:12 GMT",
-                    "missing_checked": 1,
-                    "missing_found": 1,
-                    "recorded_seq": 3,
-                    "session_id": "11a79cdae1719c362e9857cd1ddff09d",
-                    "start_last_seq": 2,
-                    "start_time": "Thu, 10 Oct 2013 05:56:12 GMT"
-                },
-                {
-                    "doc_write_failures": 0,
-                    "docs_read": 2,
-                    "docs_written": 2,
-                    "end_last_seq": 2,
-                    "end_time": "Thu, 10 Oct 2013 05:56:04 GMT",
-                    "missing_checked": 2,
-                    "missing_found": 2,
-                    "recorded_seq": 2,
-                    "session_id": "77cdf93cde05f15fcb710f320c37c155",
-                    "start_last_seq": 0,
-                    "start_time": "Thu, 10 Oct 2013 05:56:04 GMT"
-                }
-            ],
-            "replication_id_version": 3,
-            "session_id": "d5a34cbbdafa70e0db5cb57d02a6b955",
-            "source_last_seq": 5
-        }
-
-The Replication Log SHOULD contain the following fields:
-
-- **history** (*array* of *object*): Replication history. **Required**
-
-  - **doc_write_failures** (*number*): Number of failed writes
-  - **docs_read** (*number*): Number of read documents
-  - **docs_written** (*number*): Number of written documents
-  - **end_last_seq** (*number*): Last processed Update Sequence ID
-  - **end_time** (*string*): Replication completion timestamp in :rfc:`5322`
-    format
-  - **missing_checked** (*number*): Number of checked revisions on Source
-  - **missing_found** (*number*): Number of missing revisions found on Target
-  - **recorded_seq** (*number*): Recorded intermediate Checkpoint. **Required**
-  - **session_id** (*string*): Unique session ID. Commonly, a random UUID value
-    is used. **Required**
-  - **start_last_seq** (*number*): Start update Sequence ID
-  - **start_time** (*string*): Replication start timestamp in :rfc:`5322` format
-
-- **replication_id_version** (*number*): Replication protocol version. Defines
-  the Replication ID calculation algorithm, HTTP API calls and other
-  routines. **Required**
-- **session_id** (*string*): Unique ID of the last session. Shortcut to
-  the ``session_id`` field of the latest ``history`` object. **Required**
-- **source_last_seq** (*number*): Last processed Checkpoint. Shortcut to
-  the ``recorded_seq`` field of the latest ``history`` object. **Required**
-
-This request MAY fail with a :statuscode:`404` response:
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /source/_local/b6cef528f67aa1a8a014dd1144b10e09 HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-        User-Agent: CouchDB
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 404 Object Not Found
-        Cache-Control: must-revalidate
-        Content-Length: 41
-        Content-Type: application/json
-        Date: Tue, 08 Oct 2013 13:31:10 GMT
-        Server: CouchDB (Erlang OTP)
-
-        {
-            "error": "not_found",
-            "reason": "missing"
-        }
-
-That's OK. This means that there is no information about the current
-Replication, so it has not been run previously, and the Replicator MUST run
-a Full Replication.
-
-Compare Replication Logs
-^^^^^^^^^^^^^^^^^^^^^^^^
-
-If the Replication Logs are successfully retrieved from both Source and Target
-then the Replicator MUST determine their common ancestry using the following
-algorithm:
-
-- Compare the ``session_id`` values of the chronologically last sessions: if
-  they match, both Source and Target share a valid common Replication history.
-  Use the ``source_last_seq`` value as the startup Checkpoint
-
-- In case of a mismatch, iterate over the ``history`` collection to find
-  the latest (chronologically) ``session_id`` common to Source and Target.
-  Use the value of its ``recorded_seq`` field as the startup Checkpoint
-
-If Source and Target have no common ancestry, the Replicator MUST run a
-Full Replication.
-
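The comparison algorithm above reduces to a small pure function. This is a sketch under stated assumptions: the function name `find_start_checkpoint` is invented here, logs are represented as plain dicts shaped like the `_local` documents shown earlier, and a ``None`` log stands for a :statuscode:`404` response:

```python
def find_start_checkpoint(source_log, target_log):
    """Return the startup Checkpoint per the comparison algorithm above,
    or None when a Full Replication is required (no common ancestry)."""
    if source_log is None or target_log is None:
        return None  # a 404 on either side forces Full Replication
    if source_log["session_id"] == target_log["session_id"]:
        # Last sessions match: shared history is valid as-is.
        return source_log["source_last_seq"]
    # Mismatch: scan histories (newest first) for the latest common session.
    target_sessions = {h["session_id"]: h["recorded_seq"]
                       for h in target_log["history"]}
    for h in source_log["history"]:
        if h["session_id"] in target_sessions:
            return h["recorded_seq"]
    return None

src = {"session_id": "d5a3", "source_last_seq": 5,
       "history": [{"session_id": "d5a3", "recorded_seq": 5},
                   {"session_id": "11a7", "recorded_seq": 3}]}
tgt = {"session_id": "zzz9",
       "history": [{"session_id": "zzz9", "recorded_seq": 9},
                   {"session_id": "11a7", "recorded_seq": 3}]}
checkpoint = find_start_checkpoint(src, tgt)  # latest common session: 11a7
```
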
-Locate Changed Documents
-------------------------
-
-.. code-block:: text
-
-    + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +
-    ' Find Common Ancestry:                                                     '
-    '                                                                           '
-    '             +------------------------------+                              '
-    '             |   Compare Replication Logs   |                              '
-    '             +------------------------------+                              '
-    '                                          |                                '
-    '                                          |                                '
-    + - - - - - - - - - - - - - - - - - - - -  |  - - - - - - - - - - - - - - - +
-                                               |
-    + - - - - - - - - - - - - - - - - - - - -  |  - - - - - - - - - - - - - - - +
-    ' Locate Changed Documents:                |                                '
-    '                                          |                                '
-    '                                          |                                '
-    '                                          v                                '
-    '            +-------------------------------+                              '
-    '   +------> |     Listen to Changes Feed    | -----+                       '
-    '   |        +-------------------------------+      |                       '
-    '   |        |     GET  /source/_changes     |      |                       '
-    '   |        |     POST /source/_changes     |      |                       '
-    '   |        +-------------------------------+      |                       '
-    '   |                                      |        |                       '
-    '   |                                      |        |                       '
-    '   |                There are new changes |        | No more changes       '
-    '   |                                      |        |                       '
-    '   |                                      v        v                       '
-    '   |        +-------------------------------+    +-----------------------+ '
-    '   |        |     Read Batch of Changes     |    | Replication Completed | '
-    '   |        +-------------------------------+    +-----------------------+ '
-    '   |                                      |                                '
-    '   | No                                   |                                '
-    '   |                                      v                                '
-    '   |        +-------------------------------+                              '
-    '   |        |  Compare Documents Revisions  |                              '
-    '   |        +-------------------------------+                              '
-    '   |        |    POST /target/_revs_diff    |                              '
-    '   |        +-------------------------------+                              '
-    '   |                                      |                                '
-    '   |                               200 OK |                                '
-    '   |                                      v                                '
-    '   |        +-------------------------------+                              '
-    '   +------- |     Any Differences Found?    |                              '
-    '            +-------------------------------+                              '
-    '                                          |                                '
-    '                                      Yes |                                '
-    '                                          |                                '
-    + - - - - - - - - - - - - - - - - - - - -  |  - - - - - - - - - - - - - - - +
-                                               |
-    + - - - - - - - - - - - - - - - - - - - -  |  - - - - - - - - - - - - - - - +
-    ' Replicate Changes:                       |                                '
-    '                                          v                                '
-    '            +-------------------------------+                              '
-    '            |  Fetch Next Changed Document  |                              '
-    '            +-------------------------------+                              '
-    '                                                                           '
-    + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +
-
-Listen to Changes Feed
-^^^^^^^^^^^^^^^^^^^^^^
-
-Once the startup Checkpoint has been defined, the Replicator SHOULD read
-the Source's :ref:`Changes Feed <changes>` by using a :get:`/{db}/_changes`
-request. This request MUST be made with the following query parameters:
-
-- ``feed`` parameter defines the Changes Feed response style: for Continuous
-  Replication the ``continuous`` value SHOULD be used, otherwise ``normal``.
-
-- ``style=all_docs`` query parameter tells the Source that it MUST include
-  all Revision leaves for each document's event in the output.
-
-- For Continuous Replication the ``heartbeat`` parameter defines the heartbeat
-  period in *milliseconds*. The RECOMMENDED value by default is ``10000``
-  (10 seconds).
-
-- If a startup Checkpoint was found during the Replication Logs comparison,
-  the ``since`` query parameter MUST be passed with this value.
-  In case of Full Replication it MAY be ``0`` (number zero) or
-  be omitted.
-
-Additionally, the ``filter`` query parameter MAY be specified to enable a
-:ref:`filter function <changes/filter>` on Source side. Other
-custom parameters MAY also be provided.
-
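Assembling the query parameters described above is mechanical; the sketch below shows one way to do it with the standard library. The helper name `changes_feed_path` and its signature are illustrative assumptions, but the parameter names follow the :get:`/{db}/_changes` API:

```python
from urllib.parse import urlencode

def changes_feed_path(db="source", since=None, continuous=False,
                      heartbeat=10000, filter_name=None):
    """Build a /{db}/_changes request path from the parameters above."""
    params = {"feed": "continuous" if continuous else "normal",
              "style": "all_docs"}  # MUST include all Revision leaves
    if continuous:
        params["heartbeat"] = heartbeat  # RECOMMENDED default: 10000 ms
    if since is not None:
        params["since"] = since          # startup Checkpoint, if found
    if filter_name is not None:
        params["filter"] = filter_name   # optional Source-side filter
    return "/%s/_changes?%s" % (db, urlencode(params))

path = changes_feed_path(since=42, continuous=True)
```
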
-Read Batch of Changes
-^^^^^^^^^^^^^^^^^^^^^
-
-Reading the whole feed in a single shot may not be an optimal use of resources.
-It is RECOMMENDED to process the feed in small chunks. However, there is
-no specific recommendation on chunk size since it is heavily dependent on
-available resources: large chunks require more memory but reduce the number
-of I/O operations, and vice versa.
-
-Note that the Changes Feed output format differs for requests with the
-:ref:`feed=normal <changes/normal>` and the
-:ref:`feed=continuous <changes/continuous>` query parameter.
-
-Normal Feed:
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /source/_changes?feed=normal&style=all_docs&heartbeat=10000 HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-        User-Agent: CouchDB
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Type: application/json
-        Date: Fri, 09 May 2014 16:20:41 GMT
-        Server: CouchDB (Erlang OTP)
-        Transfer-Encoding: chunked
-
-        {"results":[
-        {"seq":14,"id":"f957f41e","changes":[{"rev":"3-46a3"}],"deleted":true}
-        {"seq":29,"id":"ddf339dd","changes":[{"rev":"10-304b"}]}
-        {"seq":37,"id":"d3cc62f5","changes":[{"rev":"2-eec2"}],"deleted":true}
-        {"seq":39,"id":"f13bd08b","changes":[{"rev":"1-b35d"}]}
-        {"seq":41,"id":"e0a99867","changes":[{"rev":"2-c1c6"}]}
-        {"seq":42,"id":"a75bdfc5","changes":[{"rev":"1-967a"}]}
-        {"seq":43,"id":"a5f467a0","changes":[{"rev":"1-5575"}]}
-        {"seq":45,"id":"470c3004","changes":[{"rev":"11-c292"}]}
-        {"seq":46,"id":"b1cb8508","changes":[{"rev":"10-ABC"}]}
-        {"seq":47,"id":"49ec0489","changes":[{"rev":"157-b01f"},{"rev":"123-6f7c"}]}
-        {"seq":49,"id":"dad10379","changes":[{"rev":"1-9346"},{"rev":"6-5b8a"}]}
-        {"seq":50,"id":"73464877","changes":[{"rev":"1-9f08"}]}
-        {"seq":51,"id":"7ae19302","changes":[{"rev":"1-57bf"}]}
-        {"seq":63,"id":"6a7a6c86","changes":[{"rev":"5-acf6"}],"deleted":true}
-        {"seq":64,"id":"dfb9850a","changes":[{"rev":"1-102f"}]}
-        {"seq":65,"id":"c532afa7","changes":[{"rev":"1-6491"}]}
-        {"seq":66,"id":"af8a9508","changes":[{"rev":"1-3db2"}]}
-        {"seq":67,"id":"caa3dded","changes":[{"rev":"1-6491"}]}
-        {"seq":68,"id":"79f3b4e9","changes":[{"rev":"1-102f"}]}
-        {"seq":69,"id":"1d89d16f","changes":[{"rev":"1-3db2"}]}
-        {"seq":71,"id":"abae7348","changes":[{"rev":"2-7051"}]}
-        {"seq":77,"id":"6c25534f","changes":[{"rev":"9-CDE"},{"rev":"3-00e7"},{"rev":"1-ABC"}]}
-        {"seq":78,"id":"SpaghettiWithMeatballs","changes":[{"rev":"22-5f95"}]}
-        ],
-        "last_seq":78}
-
-Continuous Feed:
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /source/_changes?feed=continuous&style=all_docs&heartbeat=10000 HTTP/1.1
-        Accept: application/json
-        Host: localhost:5984
-        User-Agent: CouchDB
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Type: application/json
-        Date: Fri, 09 May 2014 16:22:22 GMT
-        Server: CouchDB (Erlang OTP)
-        Transfer-Encoding: chunked
-
-        {"seq":14,"id":"f957f41e","changes":[{"rev":"3-46a3"}],"deleted":true}
-        {"seq":29,"id":"ddf339dd","changes":[{"rev":"10-304b"}]}
-        {"seq":37,"id":"d3cc62f5","changes":[{"rev":"2-eec2"}],"deleted":true}
-        {"seq":39,"id":"f13bd08b","changes":[{"rev":"1-b35d"}]}
-        {"seq":41,"id":"e0a99867","changes":[{"rev":"2-c1c6"}]}
-        {"seq":42,"id":"a75bdfc5","changes":[{"rev":"1-967a"}]}
-        {"seq":43,"id":"a5f467a0","changes":[{"rev":"1-5575"}]}
-        {"seq":45,"id":"470c3004","changes":[{"rev":"11-c292"}]}
-        {"seq":46,"id":"b1cb8508","changes":[{"rev":"10-ABC"}]}
-        {"seq":47,"id":"49ec0489","changes":[{"rev":"157-b01f"},{"rev":"123-6f7c"}]}
-        {"seq":49,"id":"dad10379","changes":[{"rev":"1-9346"},{"rev":"6-5b8a"}]}
-        {"seq":50,"id":"73464877","changes":[{"rev":"1-9f08"}]}
-        {"seq":51,"id":"7ae19302","changes":[{"rev":"1-57bf"}]}
-        {"seq":63,"id":"6a7a6c86","changes":[{"rev":"5-acf6"}],"deleted":true}
-        {"seq":64,"id":"dfb9850a","changes":[{"rev":"1-102f"}]}
-        {"seq":65,"id":"c532afa7","changes":[{"rev":"1-6491"}]}
-        {"seq":66,"id":"af8a9508","changes":[{"rev":"1-3db2"}]}
-        {"seq":67,"id":"caa3dded","changes":[{"rev":"1-6491"}]}
-        {"seq":68,"id":"79f3b4e9","changes":[{"rev":"1-102f"}]}
-        {"seq":69,"id":"1d89d16f","changes":[{"rev":"1-3db2"}]}
-        {"seq":71,"id":"abae7348","changes":[{"rev":"2-7051"}]}
-        {"seq":75,"id":"SpaghettiWithMeatballs","changes":[{"rev":"21-5949"}]}
-        {"seq":77,"id":"6c255","changes":[{"rev":"9-CDE"},{"rev":"3-00e7"},{"rev":"1-ABC"}]}
-        {"seq":78,"id":"SpaghettiWithMeatballs","changes":[{"rev":"22-5f95"}]}
-
-Both Changes Feed formats preserve a record-per-line style to simplify
-iterative fetching and decoding of JSON objects with a smaller memory
-footprint.
-
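The record-per-line style means both formats can be consumed with the same line-oriented decoder. A minimal sketch, with the caveat that it relies on change rows starting with the ``"seq"`` key as in the examples above (real parsers should not depend on JSON key order); the helper name `iter_changes` is an invention for illustration:

```python
import json

def iter_changes(lines):
    """Yield (doc_id, [leaf revs]) per change row, skipping heartbeats
    (empty lines) and the framing rows of a feed=normal response."""
    for line in lines:
        line = line.strip().rstrip(",")
        if not line.startswith('{"seq"'):
            continue  # heartbeat, {"results":[ framing, or last_seq trailer
        row = json.loads(line)
        yield row["id"], [c["rev"] for c in row["changes"]]

feed = [
    '{"results":[',
    '{"seq":14,"id":"f957f41e","changes":[{"rev":"3-46a3"}],"deleted":true}',
    '',  # heartbeat from a continuous feed
    '{"seq":47,"id":"49ec0489","changes":[{"rev":"157-b01f"},{"rev":"123-6f7c"}]}',
    '],',
    '"last_seq":78}',
]
rows = list(iter_changes(feed))
```
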
-Calculate Revision Difference
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-After reading the batch of changes from the Changes Feed, the Replicator forms
-a JSON object mapping each Document ID to its related leaf Revisions and sends
-the result to Target via a :post:`/{db}/_revs_diff` request:
-
-    **Request**:
-
-    .. code-block:: http
-
-        POST /target/_revs_diff HTTP/1.1
-        Accept: application/json
-        Content-Length: 287
-        Content-Type: application/json
-        Host: localhost:5984
-        User-Agent: CouchDB
-
-        {
-            "baz": [
-                "2-7051cbe5c8faecd085a3fa619e6e6337"
-            ],
-            "foo": [
-                "3-6a540f3d701ac518d3b9733d673c5484"
-            ],
-            "bar": [
-                "1-d4e501ab47de6b2000fc8a02f84a0c77",
-                "1-967a00dff5e02add41819138abb3284d"
-            ]
-        }
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Length: 88
-        Content-Type: application/json
-        Date: Fri, 25 Oct 2013 14:44:41 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        {
-            "baz": {
-                "missing": [
-                    "2-7051cbe5c8faecd085a3fa619e6e6337"
-                ]
-            },
-            "bar": {
-                "missing": [
-                    "1-d4e501ab47de6b2000fc8a02f84a0c77"
-                ]
-            }
-        }
-
-In the response the Replicator receives a mapping of Document IDs to
-Revisions, but only for Revisions that do not exist on Target and are
-REQUIRED to be transferred from Source.
-
-If all Revisions in the request match the current state of the Documents then
-the response will contain an empty JSON object:
-
-    **Request**
-
-    .. code-block:: http
-
-        POST /target/_revs_diff HTTP/1.1
-        Accept: application/json
-        Content-Length: 160
-        Content-Type: application/json
-        Host: localhost:5984
-        User-Agent: CouchDB
-
-        {
-            "foo": [
-                "3-6a540f3d701ac518d3b9733d673c5484"
-            ],
-            "bar": [
-                "1-967a00dff5e02add41819138abb3284d"
-            ]
-        }
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Length: 2
-        Content-Type: application/json
-        Date: Fri, 25 Oct 2013 14:45:00 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        {}
-
-Replication Completed
-^^^^^^^^^^^^^^^^^^^^^
-
-When there are no more changes left to process and no more Documents left to
-replicate, the Replicator finishes the Replication process. If the Replication
-wasn't Continuous, the Replicator MAY return a response to the client with
-statistics about the process.
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Cache-Control: must-revalidate
-        Content-Length: 414
-        Content-Type: application/json
-        Date: Fri, 09 May 2014 15:14:19 GMT
-        Server: CouchDB (Erlang OTP)
-
-        {
-            "history": [
-                {
-                    "doc_write_failures": 2,
-                    "docs_read": 2,
-                    "docs_written": 0,
-                    "end_last_seq": 2939,
-                    "end_time": "Fri, 09 May 2014 15:14:19 GMT",
-                    "missing_checked": 1835,
-                    "missing_found": 2,
-                    "recorded_seq": 2939,
-                    "session_id": "05918159f64842f1fe73e9e2157b2112",
-                    "start_last_seq": 0,
-                    "start_time": "Fri, 09 May 2014 15:14:18 GMT"
-                }
-            ],
-            "ok": true,
-            "replication_id_version": 3,
-            "session_id": "05918159f64842f1fe73e9e2157b2112",
-            "source_last_seq": 2939
-        }
-
-Replicate Changes
------------------
-
-.. code-block:: text
-
-    + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +
-    ' Locate Changed Documents:                                                       '
-    '                                                                                 '
-    '               +-------------------------------------+                           '
-    '               |      Any Differences Found?         |                           '
-    '               +-------------------------------------+                           '
-    '                                                   |                             '
-    '                                                   |                             '
-    '                                                   |                             '
-    + - - - - - - - - - - - - - - - - - - - - - - - - - | - - - - - - - - - - - - - - +
-                                                        |
-    + - - - - - - - - - - - - - - - - - - - - - - - - - | - - - - - - - - - - - - - - +
-    ' Replicate Changes:                                |                             '
-    '                                                   v                             '
-    '               +-------------------------------------+                           '
-    '   +---------> |     Fetch Next Changed Document     | <---------------------+   '
-    '   |           +-------------------------------------+                       |   '
-    '   |           |          GET /source/docid          |                       |   '
-    '   |           +-------------------------------------+                       |   '
-    '   |             |                                                           |   '
-    '   |             |                                                           |   '
-    '   |             |                                          201 Created      |   '
-    '   |             | 200 OK                                   401 Unauthorized |   '
-    '   |             |                                          403 Forbidden    |   '
-    '   |             |                                                           |   '
-    '   |             v                                                           |   '
-    '   |           +-------------------------------------+                       |   '
-    '   |   +------ |  Document Has Changed Attachments?  |                       |   '
-    '   |   |       +-------------------------------------+                       |   '
-    '   |   |         |                                                           |   '
-    '   |   |         |                                                           |   '
-    '   |   |         | Yes                                                       |   '
-    '   |   |         |                                                           |   '
-    '   |   |         v                                                           |   '
-    '   |   |       +------------------------+   Yes    +---------------------------+ '
-    '   |   | No    |  Are They Big Enough?  | -------> | Update Document on Target | '
-    '   |   |       +------------------------+          +---------------------------+ '
-    '   |   |         |                                 |     PUT /target/docid     | '
-    '   |   |         |                                 +---------------------------+ '
-    '   |   |         |                                                               '
-    '   |   |         | No                                                            '
-    '   |   |         |                                                               '
-    '   |   |         v                                                               '
-    '   |   |       +-------------------------------------+                           '
-    '   |   +-----> |     Put Document Into the Stack     |                           '
-    '   |           +-------------------------------------+                           '
-    '   |             |                                                               '
-    '   |             |                                                               '
-    '   |             v                                                               '
-    '   |     No    +-------------------------------------+                           '
-    '   +---------- |           Stack is Full?            |                           '
-    '   |           +-------------------------------------+                           '
-    '   |             |                                                               '
-    '   |             | Yes                                                           '
-    '   |             |                                                               '
-    '   |             v                                                               '
-    '   |           +-------------------------------------+                           '
-    '   |           | Upload Stack of Documents to Target |                           '
-    '   |           +-------------------------------------+                           '
-    '   |           |       POST /target/_bulk_docs       |                           '
-    '   |           +-------------------------------------+                           '
-    '   |             |                                                               '
-    '   |             | 201 Created                                                   '
-    '   |             v                                                               '
-    '   |           +-------------------------------------+                           '
-    '   |           |          Ensure in Commit           |                           '
-    '   |           +-------------------------------------+                           '
-    '   |           |  POST /target/_ensure_full_commit   |                           '
-    '   |           +-------------------------------------+                           '
-    '   |             |                                                               '
-    '   |             | 201 Created                                                   '
-    '   |             v                                                               '
-    '   |           +-------------------------------------+                           '
-    '   |           |    Record Replication Checkpoint    |                           '
-    '   |           +-------------------------------------+                           '
-    '   |           |  PUT /source/_local/replication-id  |                           '
-    '   |           |  PUT /target/_local/replication-id  |                           '
-    '   |           +-------------------------------------+                           '
-    '   |             |                                                               '
-    '   |             | 201 Created                                                   '
-    '   |             v                                                               '
-    '   |     No    +-------------------------------------+                           '
-    '   +---------- | All Documents from Batch Processed? |                           '
-    '               +-------------------------------------+                           '
-    '                                                   |                             '
-    '                                               Yes |                             '
-    '                                                   |                             '
-    + - - - - - - - - - - - - - - - - - - - - - - - - - | - - - - - - - - - - - - - - +
-                                                        |
-    + - - - - - - - - - - - - - - - - - - - - - - - - - | - - - - - - - - - - - - - - +
-    ' Locate Changed Documents:                         |                             '
-    '                                                   v                             '
-    '               +-------------------------------------+                           '
-    '               |       Listen to Changes Feed        |                           '
-    '               +-------------------------------------+                           '
-    '                                                                                 '
-    + - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - +
-
-Fetch Changed Documents
-^^^^^^^^^^^^^^^^^^^^^^^
-
-At this step the Replicator MUST fetch all Document Leaf Revisions from Source
-that are missing at Target. This operation is efficient because it reuses the
-Revision differences calculated in the previous step, which identify exactly
-the Documents and Revisions that need to be transferred.
-
-To fetch the Document the Replicator will make a :get:`/{db}/{docid}` request
-with the following query parameters:
-
-- ``revs=true``: Instructs the Source to include the list of all known revisions
-  into the Document in the ``_revisions`` field. This information is needed to
-  synchronize the Document's ancestors history between Source and Target
-
-- The ``open_revs`` query parameter contains a JSON array with a list of
-  Leaf Revisions that need to be fetched. If the specified Revision exists,
-  then the Document MUST be returned for that Revision. Otherwise, Source
-  MUST return an object with the single field ``missing`` whose value is the
-  missed Revision. In case the Document contains attachments, Source MUST
-  return information only for those that have been changed (added or updated)
-  since the specified Revision values. If an attachment was deleted, the
-  Document MUST NOT have stub information for it
-
-- ``latest=true``: Ensures that Source will return the latest Document
-  Revision regardless of which one was specified in the ``open_revs`` query
-  parameter. This parameter solves a race condition where the requested
-  Document may change between this step and the handling of the related
-  events on the Changes Feed
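The query parameters above can be combined into a fetch URL as sketched below; `fetch_doc_url` is a hypothetical helper for illustration, not a documented API:

```python
import json
from urllib.parse import quote

def fetch_doc_url(db, doc_id, open_revs):
    """Build the GET URL used to fetch missing Leaf Revisions of a Document.

    `open_revs` is the list of Leaf Revisions reported missing by
    _revs_diff; it is sent as a URL-encoded JSON array. revs=true asks for
    the ancestry history, latest=true guards against the race condition
    described above.
    """
    revs_param = quote(json.dumps(open_revs, separators=(",", ":")))
    return (f"/{db}/{quote(doc_id, safe='')}"
            f"?revs=true&open_revs={revs_param}&latest=true")
```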
-
-In the response Source SHOULD return :mimetype:`multipart/mixed` content, or
-respond with :mimetype:`application/json` when the :header:`Accept` header
-requests that mime type. The :mimetype:`multipart/mixed` content type allows
-the response data to be handled as a stream, since there could be multiple
-documents (one per Leaf Revision) plus several attachments. These attachments
-are mostly binary, and JSON can only carry such data as base64-encoded
-strings, which are very inefficient to transfer and process.
-
-With a :mimetype:`multipart/mixed` response the Replicator handles multiple
-Document Leaf Revisions and their attachments one by one as raw data, without
-any additional encoding applied. There is also one convention that makes data
-processing more efficient: the Document ALWAYS comes before its attachments,
-so the Replicator never has to scan ahead to match attachments to their
-Documents and can process the response as a stream with a smaller memory
-footprint.
-
-    **Request**:
-
-    .. code-block:: http
-
-        GET /source/SpaghettiWithMeatballs?revs=true&open_revs=[%225-00ecbbc%22,%221-917fa23%22,%223-6bcedf1%22]&latest=true HTTP/1.1
-        Accept: multipart/mixed
-        Host: localhost:5984
-        User-Agent: CouchDB
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 200 OK
-        Content-Type: multipart/mixed; boundary="7b1596fc4940bc1be725ad67f11ec1c4"
-        Date: Thu, 07 Nov 2013 15:10:16 GMT
-        Server: CouchDB (Erlang OTP)
-        Transfer-Encoding: chunked
-
-        --7b1596fc4940bc1be725ad67f11ec1c4
-        Content-Type: application/json
-
-        {
-            "_id": "SpaghettiWithMeatballs",
-            "_rev": "1-917fa23",
-            "_revisions": {
-                "ids": [
-                    "917fa23"
-                ],
-                "start": 1
-            },
-            "description": "An Italian-American delicious dish",
-            "ingredients": [
-                "spaghetti",
-                "tomato sauce",
-                "meatballs"
-            ],
-            "name": "Spaghetti with meatballs"
-        }
-        --7b1596fc4940bc1be725ad67f11ec1c4
-        Content-Type: multipart/related; boundary="a81a77b0ca68389dda3243a43ca946f2"
-
-        --a81a77b0ca68389dda3243a43ca946f2
-        Content-Type: application/json
-
-        {
-            "_attachments": {
-              "recipe.txt": {
-                  "content_type": "text/plain",
-                  "digest": "md5-R5CrCb6fX10Y46AqtNn0oQ==",
-                  "follows": true,
-                  "length": 87,
-                  "revpos": 7
-              }
-            },
-            "_id": "SpaghettiWithMeatballs",
-            "_rev": "7-474f12e",
-            "_revisions": {
-                "ids": [
-                    "474f12e",
-                    "5949cfc",
-                    "00ecbbc",
-                    "fc997b6",
-                    "3552c87",
-                    "404838b",
-                    "5defd9d",
-                    "dc1e4be"
-                ],
-                "start": 7
-            },
-            "description": "An Italian-American delicious dish",
-            "ingredients": [
-                "spaghetti",
-                "tomato sauce",
-                "meatballs",
-                "love"
-            ],
-            "name": "Spaghetti with meatballs"
-        }
-        --a81a77b0ca68389dda3243a43ca946f2
-        Content-Disposition: attachment; filename="recipe.txt"
-        Content-Type: text/plain
-        Content-Length: 87
-
-        1. Cook spaghetti
-        2. Cook meatballs
-        3. Mix them
-        4. Add tomato sauce
-        5. ...
-        6. PROFIT!
-
-        --a81a77b0ca68389dda3243a43ca946f2--
-        --7b1596fc4940bc1be725ad67f11ec1c4
-        Content-Type: application/json; error="true"
-
-        {"missing":"3-6bcedf1"}
-        --7b1596fc4940bc1be725ad67f11ec1c4--
-
-After receiving the response, the Replicator puts all the received data into a
-local stack for a later bulk upload, to utilize network bandwidth effectively.
-The local stack size may be limited by a number of Documents or by the bytes
-of handled JSON data. When the stack is full, the Replicator uploads all the
-collected Documents to Target in bulk mode. While bulk operations are highly
-RECOMMENDED, in certain cases the Replicator MAY upload Documents to Target
-one by one.
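A minimal sketch of such a bounded stack, assuming illustrative limits and a `flush` callback standing in for the ``POST /{db}/_bulk_docs`` upload:

```python
import json

class DocumentStack:
    """Accumulate fetched Documents and flush them in batches.

    The limits mirror the description above: a maximum number of Documents
    or a maximum number of bytes of handled JSON data, whichever is reached
    first. Both limits are illustrative defaults, not protocol mandates.
    """

    def __init__(self, flush, max_docs=100, max_bytes=1_000_000):
        self._flush = flush          # callback standing in for _bulk_docs
        self._max_docs = max_docs
        self._max_bytes = max_bytes
        self._docs, self._bytes = [], 0

    def push(self, doc):
        self._docs.append(doc)
        self._bytes += len(json.dumps(doc))
        if len(self._docs) >= self._max_docs or self._bytes >= self._max_bytes:
            self.close()

    def close(self):
        """Flush any buffered Documents, e.g. at the end of a batch."""
        if self._docs:
            self._flush(self._docs)
            self._docs, self._bytes = [], 0
```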
-
-.. note::
-    Alternative Replicator implementations MAY use alternative ways to retrieve
-    Documents from Source. For instance, `PouchDB`_ doesn't use the Multipart
-    API and fetches only the latest Document Revision with inline attachments
-    as a single JSON object. While this is still valid CouchDB HTTP API usage,
-    such solutions MAY require a different API implementation for non-CouchDB
-    Peers.
-
-.. _PouchDB: https://github.com/pouchdb/pouchdb/blob/master/packages/node_modules/pouchdb-replication/src/replicate.js
-
-Upload Batch of Changed Documents
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-To upload multiple Documents in a single shot the Replicator sends a
-:post:`/{db}/_bulk_docs` request to Target with payload containing a JSON object
-with the following mandatory fields:
-
-- **docs** (*array* of *objects*): List of Document objects to update on Target.
-  These Documents MUST contain the ``_revisions`` field that holds a list of the
-  full Revision history to let Target create Leaf Revisions that correctly
-  preserve ancestry
-- **new_edits** (*boolean*): Special flag that instructs Target to store
-  Documents with the specified Revision (field ``_rev``) value as-is without
-  generating a new revision. Always ``false``
-
-The request MAY also contain the :header:`X-Couch-Full-Commit` header, which
-was used to control CouchDB behavior before version 3.0, when delayed commits
-were enabled. Other Peers MAY ignore this header or use it to control a
-similar local feature.
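A sketch of assembling this payload; the `bulk_docs_payload` helper and its validation are illustrative assumptions, not part of any client library:

```python
def bulk_docs_payload(docs):
    """Build the POST /{db}/_bulk_docs body for a replication upload.

    Every Document must carry its ``_rev`` and full ``_revisions`` history
    so Target can create Leaf Revisions with correct ancestry; ``new_edits``
    is always false so Target stores the supplied Revisions as-is instead
    of generating new ones.
    """
    for doc in docs:
        if "_rev" not in doc or "_revisions" not in doc:
            raise ValueError(f"{doc.get('_id')} lacks revision history")
    return {"docs": list(docs), "new_edits": False}
```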
-
-    **Request**:
-
-    .. code-block:: http
-
-        POST /target/_bulk_docs HTTP/1.1
-        Accept: application/json
-        Content-Length: 826
-        Content-Type: application/json
-        Host: localhost:5984
-        User-Agent: CouchDB
-        X-Couch-Full-Commit: false
-
-        {
-            "docs": [
-                {
-                    "_id": "SpaghettiWithMeatballs",
-                    "_rev": "1-917fa2381192822767f010b95b45325b",
-                    "_revisions": {
-                        "ids": [
-                            "917fa2381192822767f010b95b45325b"
-                        ],
-                        "start": 1
-                    },
-                    "description": "An Italian-American delicious dish",
-                    "ingredients": [
-                        "spaghetti",
-                        "tomato sauce",
-                        "meatballs"
-                    ],
-                    "name": "Spaghetti with meatballs"
-                },
-                {
-                    "_id": "LambStew",
-                    "_rev": "1-34c318924a8f327223eed702ddfdc66d",
-                    "_revisions": {
-                        "ids": [
-                            "34c318924a8f327223eed702ddfdc66d"
-                        ],
-                        "start": 1
-                    },
-                    "servings": 6,
-                    "subtitle": "Delicious with scone topping",
-                    "title": "Lamb Stew"
-                },
-                {
-                    "_id": "FishStew",
-                    "_rev": "1-9c65296036141e575d32ba9c034dd3ee",
-                    "_revisions": {
-                        "ids": [
-                            "9c65296036141e575d32ba9c034dd3ee"
-                        ],
-                        "start": 1
-                    },
-                    "servings": 4,
-                    "subtitle": "Delicious with fresh bread",
-                    "title": "Fish Stew"
-                }
-            ],
-            "new_edits": false
-        }
-
-In its response Target MUST return a JSON array with a list of Document update
-statuses. If a Document has been stored successfully, the list item MUST
-contain the field ``ok`` with the value ``true``. Otherwise it MUST contain
-the ``error`` and ``reason`` fields with the error type and a human-friendly
-description of the reason.
-
-A Document update failure isn't fatal, as Target MAY reject the update for
-its own reasons. It's RECOMMENDED to use the error type ``forbidden`` for
-rejections, but other error types can also be used (such as for an invalid
-field name). The Replicator SHOULD NOT retry uploading rejected documents
-unless there are good reasons for doing so (e.g. there is a special error
-type for that).
-
-Note that even if an update fails for one Document in the response, Target
-can still return a :statuscode:`201` response. The same is true when the
-updates fail for all uploaded Documents.
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 201 Created
-        Cache-Control: must-revalidate
-        Content-Length: 246
-        Content-Type: application/json
-        Date: Sun, 10 Nov 2013 19:02:26 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        [
-            {
-                "ok": true,
-                "id": "SpaghettiWithMeatballs",
-                "rev":" 1-917fa2381192822767f010b95b45325b"
-            },
-            {
-                "ok": true,
-                "id": "FishStew",
-                "rev": "1-9c65296036141e575d32ba9c034dd3ee"
-            },
-            {
-                "error": "forbidden",
-                "id": "LambStew",
-                "reason": "sorry",
-                "rev": "1-34c318924a8f327223eed702ddfdc66d"
-            }
-        ]
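Splitting such a result list into stored and rejected Documents can be sketched as follows (`split_bulk_results` is a hypothetical helper):

```python
def split_bulk_results(results):
    """Partition a _bulk_docs result array into stored and rejected items.

    The overall HTTP status may be 201 even when every item failed, so each
    list entry has to be inspected individually for its "ok" field.
    """
    stored, rejected = [], []
    for item in results:
        (stored if item.get("ok") else rejected).append(item)
    return stored, rejected
```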
-
-Upload Document with Attachments
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-There is a special optimization case in which the Replicator WILL NOT use the
-bulk upload of changed Documents. This case applies when Documents contain a
-lot of attached files or the files are too big to be efficiently encoded with
-Base64.
-
-For this case the Replicator issues a :put:`/{db}/{docid}?new_edits=false
-</{db}/{docid}>` request with the :mimetype:`multipart/related` content type.
-Such a request allows the Document and all its attachments to be easily
-streamed one by one without any serialization overhead.
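A rough sketch of assembling such a multipart/related body; the helper name and its argument shapes are assumptions for illustration, not a documented API:

```python
import json

def multipart_related_body(doc, attachments, boundary):
    """Assemble a multipart/related body for PUT /{db}/{docid}?new_edits=false.

    `doc` is the JSON Document whose ``_attachments`` stubs carry
    ``"follows": true``; `attachments` maps filename -> (content_type, bytes).
    Following the convention above, the Document part comes first, then one
    part per attachment, then the closing boundary.
    """
    parts = [f"--{boundary}\r\nContent-Type: application/json\r\n\r\n".encode()
             + json.dumps(doc).encode()]
    for name, (ctype, data) in attachments.items():
        head = (f"--{boundary}\r\n"
                f'Content-Disposition: attachment; filename="{name}"\r\n'
                f"Content-Type: {ctype}\r\n"
                f"Content-Length: {len(data)}\r\n\r\n").encode()
        parts.append(head + data)
    return b"\r\n".join(parts) + f"\r\n--{boundary}--".encode()
```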
-
-    **Request**:
-
-    .. code-block:: http
-
-        PUT /target/SpaghettiWithMeatballs?new_edits=false HTTP/1.1
-        Accept: application/json
-        Content-Length: 1030
-        Content-Type: multipart/related; boundary="864d690aeb91f25d469dec6851fb57f2"
-        Host: localhost:5984
-        User-Agent: CouchDB
-
-        --2fa48cba80d0cdba7829931fe8acce9d
-        Content-Type: application/json
-
-        {
-            "_attachments": {
-                "recipe.txt": {
-                    "content_type": "text/plain",
-                    "digest": "md5-R5CrCb6fX10Y46AqtNn0oQ==",
-                    "follows": true,
-                    "length": 87,
-                    "revpos": 7
-                }
-            },
-            "_id": "SpaghettiWithMeatballs",
-            "_rev": "7-474f12eb068c717243487a9505f6123b",
-            "_revisions": {
-                "ids": [
-                    "474f12eb068c717243487a9505f6123b",
-                    "5949cfcd437e3ee22d2d98a26d1a83bf",
-                    "00ecbbc54e2a171156ec345b77dfdf59",
-                    "fc997b62794a6268f2636a4a176efcd6",
-                    "3552c87351aadc1e4bea2461a1e8113a",
-                    "404838bc2862ce76c6ebed046f9eb542",
-                    "5defd9d813628cea6e98196eb0ee8594"
-                ],
-                "start": 7
-            },
-            "description": "An Italian-American delicious dish",
-            "ingredients": [
-                "spaghetti",
-                "tomato sauce",
-                "meatballs",
-                "love"
-            ],
-            "name": "Spaghetti with meatballs"
-        }
-        --2fa48cba80d0cdba7829931fe8acce9d
-        Content-Disposition: attachment; filename="recipe.txt"
-        Content-Type: text/plain
-        Content-Length: 87
-
-        1. Cook spaghetti
-        2. Cook meatballs
-        3. Mix them
-        4. Add tomato sauce
-        5. ...
-        6. PROFIT!
-
-        --2fa48cba80d0cdba7829931fe8acce9d--
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 201 Created
-        Cache-Control: must-revalidate
-        Content-Length: 105
-        Content-Type: application/json
-        Date: Fri, 08 Nov 2013 16:35:27 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        {
-            "ok": true,
-            "id": "SpaghettiWithMeatballs",
-            "rev": "7-474f12eb068c717243487a9505f6123b"
-        }
-
-Unlike bulk updating via the :post:`/{db}/_bulk_docs` endpoint, the response
-MAY come with a different status code. For instance, if the Document was
-rejected, Target SHOULD respond with a :statuscode:`403`:
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 403 Forbidden
-        Cache-Control: must-revalidate
-        Content-Length: 39
-        Content-Type: application/json
-        Date: Fri, 08 Nov 2013 16:35:27 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        {
-            "error": "forbidden",
-            "reason": "sorry"
-        }
-
-The Replicator SHOULD NOT retry requests in case of a :statuscode:`401`,
-:statuscode:`403`, :statuscode:`409` or :statuscode:`412` response, since
-repeating the request cannot resolve an issue with user credentials or
-uploaded data.
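This retry rule can be captured in a few lines (illustrative only):

```python
# Statuses where repeating the upload cannot fix bad credentials or bad data.
FATAL_UPLOAD_STATUSES = {401, 403, 409, 412}

def should_retry_upload(status):
    """Return True when an upload failure may be transient and worth retrying."""
    return status not in FATAL_UPLOAD_STATUSES
```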
-
-Ensure In Commit
-^^^^^^^^^^^^^^^^
-
-Once a batch of changes has been successfully uploaded to Target, the
-Replicator issues a :post:`/{db}/_ensure_full_commit` request to ensure that
-every transferred bit is laid down on disk or other *persistent* storage place.
-Target MUST return :statuscode:`201` response with a JSON object containing the
-following mandatory fields:
-
-- **instance_start_time** (*string*): Timestamp of when the database was
-  opened, expressed in *microseconds* since the epoch
-- **ok** (*boolean*): Operation status. Always ``true``
-
-  **Request**:
-
-  .. code-block:: http
-
-      POST /target/_ensure_full_commit HTTP/1.1
-      Accept: application/json
-      Content-Type: application/json
-      Host: localhost:5984
-
-  **Response**:
-
-  .. code-block:: http
-
-      HTTP/1.1 201 Created
-      Cache-Control: must-revalidate
-      Content-Length: 53
-      Content-Type: application/json
-      Date: Wed, 06 Nov 2013 18:20:43 GMT
-      Server: CouchDB (Erlang/OTP)
-
-      {
-          "instance_start_time": "0",
-          "ok": true
-      }
-
-Record Replication Checkpoint
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-After batches of changes have been uploaded and committed successfully, the
-Replicator updates the Replication Log both on Source and Target, recording
-the current Replication state. This operation is REQUIRED so that in case of
-a Replication failure, the Replication can resume from the last point of
-success rather than from the very beginning.
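A sketch of building such a checkpoint Document; the parameter names are illustrative and the history structure is abbreviated:

```python
def checkpoint_doc(replication_id, rev, session_id, source_last_seq, history):
    """Build the ``_local`` checkpoint Document written to Source and Target.

    `rev` is the current ``_rev`` of the existing checkpoint Document, or
    None on the first Replication run (when no checkpoint exists yet).
    """
    doc = {
        "_id": f"_local/{replication_id}",
        "session_id": session_id,
        "source_last_seq": source_last_seq,
        "replication_id_version": 3,
        "history": history,
    }
    if rev is not None:
        doc["_rev"] = rev
    return doc
```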
-
-Replicator updates Replication Log on Source:
-
-    **Request**:
-
-    .. code-block:: http
-
-        PUT /source/_local/afa899a9e59589c3d4ce5668e3218aef HTTP/1.1
-        Accept: application/json
-        Content-Length: 591
-        Content-Type: application/json
-        Host: localhost:5984
-        User-Agent: CouchDB
-
-        {
-            "_id": "_local/afa899a9e59589c3d4ce5668e3218aef",
-            "_rev": "0-1",
-            "_revisions": {
-                "ids": [
-                    "31f36e40158e717fbe9842e227b389df"
-                ],
-                "start": 1
-            },
-            "history": [
-                {
-                    "doc_write_failures": 0,
-                    "docs_read": 6,
-                    "docs_written": 6,
-                    "end_last_seq": 26,
-                    "end_time": "Thu, 07 Nov 2013 09:42:17 GMT",
-                    "missing_checked": 6,
-                    "missing_found": 6,
-                    "recorded_seq": 26,
-                    "session_id": "04bf15bf1d9fa8ac1abc67d0c3e04f07",
-                    "start_last_seq": 0,
-                    "start_time": "Thu, 07 Nov 2013 09:41:43 GMT"
-                }
-            ],
-            "replication_id_version": 3,
-            "session_id": "04bf15bf1d9fa8ac1abc67d0c3e04f07",
-            "source_last_seq": 26
-        }
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 201 Created
-        Cache-Control: must-revalidate
-        Content-Length: 75
-        Content-Type: application/json
-        Date: Thu, 07 Nov 2013 09:42:17 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        {
-            "id": "_local/afa899a9e59589c3d4ce5668e3218aef",
-            "ok": true,
-            "rev": "0-2"
-        }
-
-...and on Target too:
-
-    **Request**:
-
-    .. code-block:: http
-
-        PUT /target/_local/afa899a9e59589c3d4ce5668e3218aef HTTP/1.1
-        Accept: application/json
-        Content-Length: 591
-        Content-Type: application/json
-        Host: localhost:5984
-        User-Agent: CouchDB
-
-        {
-            "_id": "_local/afa899a9e59589c3d4ce5668e3218aef",
-            "_rev": "1-31f36e40158e717fbe9842e227b389df",
-            "_revisions": {
-                "ids": [
-                    "31f36e40158e717fbe9842e227b389df"
-                ],
-                "start": 1
-            },
-            "history": [
-                {
-                    "doc_write_failures": 0,
-                    "docs_read": 6,
-                    "docs_written": 6,
-                    "end_last_seq": 26,
-                    "end_time": "Thu, 07 Nov 2013 09:42:17 GMT",
-                    "missing_checked": 6,
-                    "missing_found": 6,
-                    "recorded_seq": 26,
-                    "session_id": "04bf15bf1d9fa8ac1abc67d0c3e04f07",
-                    "start_last_seq": 0,
-                    "start_time": "Thu, 07 Nov 2013 09:41:43 GMT"
-                }
-            ],
-            "replication_id_version": 3,
-            "session_id": "04bf15bf1d9fa8ac1abc67d0c3e04f07",
-            "source_last_seq": 26
-        }
-
-    **Response**:
-
-    .. code-block:: http
-
-        HTTP/1.1 201 Created
-        Cache-Control: must-revalidate
-        Content-Length: 106
-        Content-Type: application/json
-        Date: Thu, 07 Nov 2013 09:42:17 GMT
-        Server: CouchDB (Erlang/OTP)
-
-        {
-            "id": "_local/afa899a9e59589c3d4ce5668e3218aef",
-            "ok": true,
-            "rev": "2-9b5d1e36bed6ae08611466e30af1259a"
-        }
-
-Continue Reading Changes
-------------------------
-
-Once a batch of changes has been processed and transferred to Target
-successfully, the Replicator can continue to listen to the Changes Feed for
-new changes. If there are no new changes to process, the Replication is
-considered to be done.
-
-For Continuous Replication, the Replicator MUST continue to wait for new changes
-from Source.
-
-Protocol Robustness
-===================
-
-Since the `CouchDB Replication Protocol` works on top of HTTP, which is based
-on TCP/IP, the Replicator SHOULD expect to be working within an unstable
-environment with delays, losses and other bad surprises that may occur. The
-Replicator SHOULD NOT count every HTTP request failure as a *fatal error*. It
-SHOULD be smart enough to detect timeouts, retry failed requests, be ready to
-process incomplete or malformed data, and so on. *Data must flow* -- that's
-the rule.
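One common way to space out such retries, sketched here with illustrative defaults (the protocol itself does not mandate any particular policy):

```python
import random

def backoff_delays(base=0.25, cap=60.0, factor=2.0, attempts=8):
    """Yield jittered, exponentially growing delays between retry attempts.

    Each delay is drawn uniformly from [0, current_ceiling]; the ceiling
    doubles per attempt up to `cap`. All parameter values are assumptions.
    """
    delay = base
    for _ in range(attempts):
        yield random.uniform(0, delay)
        delay = min(cap, delay * factor)
```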
-
-Error Responses
-===============
-
-In case something goes wrong the Peer MUST respond with a JSON object with
-the following REQUIRED fields:
-
-- **error** (*string*): Error type for programs and developers
-- **reason** (*string*): Error description for humans
-
-Bad Request
------------
-
-If a request contains malformed data (like invalid JSON) the Peer MUST respond
-with a HTTP :statuscode:`400` and ``bad_request`` as error type:
-
-.. code-block:: javascript
-
-    {
-        "error": "bad_request",
-        "reason": "invalid json"
-    }
-
-Unauthorized
-------------
-
-If a Peer REQUIRES credentials to be included with the request and the request
-does not contain acceptable credentials, then the Peer MUST respond with an
-HTTP :statuscode:`401` and ``unauthorized`` as the error type:
-
-.. code-block:: javascript
-
-    {
-        "error": "unauthorized",
-        "reason": "Name or password is incorrect"
-    }
-
-Forbidden
----------
-
-If a Peer receives valid user credentials, but the requester does not have
-sufficient permissions to perform the operation, then the Peer
-MUST respond with an HTTP :statuscode:`403` and ``forbidden`` as the error type:
-
-.. code-block:: javascript
-
-    {
-        "error": "forbidden",
-        "reason": "You may only update your own user document."
-    }
-
-Resource Not Found
-------------------
-
-If the requested resource, Database or Document was not found on a Peer, the
-Peer MUST respond with an HTTP :statuscode:`404` and ``not_found`` as the error type:
-
-.. code-block:: javascript
-
-    {
-        "error": "not_found",
-        "reason": "database \"target\" does not exist"
-    }
-
-Method Not Allowed
-------------------
-
-If an unsupported method was used, then the Peer MUST respond with an
-HTTP :statuscode:`405` and ``method_not_allowed`` as the error type:
-
-.. code-block:: javascript
-
-    {
-        "error": "method_not_allowed",
-        "reason": "Only GET, PUT, DELETE allowed"
-    }
-
-Resource Conflict
------------------
-
-A resource conflict error occurs when multiple clients update the same
-resource concurrently. In this case the Peer MUST respond with an HTTP
-:statuscode:`409` and ``conflict`` as the error type:
-
-.. code-block:: javascript
-
-    {
-        "error": "conflict",
-        "reason": "document update conflict"
-    }
-
-Precondition Failed
--------------------
-
-The HTTP :statuscode:`412` response may be sent in case of an attempt to
-create a Database (error type ``db_exists``) that already exists,
-or when some attachment information is missing (error type ``missing_stub``).
-There are no explicit error type restrictions, but it is RECOMMENDED to use
-the error types mentioned above:
-
-.. code-block:: javascript
-
-    {
-        "error": "db_exists",
-        "reason": "database \"target\" exists"
-    }
-
-Server Error
-------------
-
-Raised in case an error is *fatal* and the Replicator cannot do anything to
-continue Replication. In this case the Replicator MUST return an HTTP
-:statuscode:`500` response with an error description (no restrictions on the
-error type apply):
-
-.. code-block:: javascript
-
-    {
-        "error": "worker_died",
-        "reason": "kaboom!"
-    }
-
-Optimisations
-=============
-
-The following approaches are RECOMMENDED to optimize the Replication process:
-
-- Keep the number of HTTP requests at a reasonable minimum
-- Try to work with a connection pool and make parallel/multiple requests
-  whenever possible
-- Don't close sockets after each request: respect the keep-alive option
-- Use continuous sessions (cookies, etc.) to reduce authentication overhead
-- Try to use bulk requests for all Document operations
-- Find the optimal batch size for Changes Feed processing
-- Preserve Replication Logs and resume Replication from the last Checkpoint
-  whenever possible
-- Optimize filter functions: let them run as fast as possible
-- Get ready for surprises: networks are very unstable environments
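The bulk-request and batch-size recommendations above can be sketched as a simple chunking helper; the function name and the default size of 500 (CouchDB's default worker batch size, per the replicator log output) are illustrative:

```python
def batch_changes(changes, batch_size=500):
    """Group rows from the Changes Feed into batches so that Revisions
    can be uploaded with a single _bulk_docs request per batch."""
    for i in range(0, len(changes), batch_size):
        yield changes[i:i + batch_size]

# 1200 pending changes become three bulk requests: 500 + 500 + 200
sizes = [len(b) for b in batch_changes(list(range(1200)))]
print(sizes)  # -> [500, 500, 200]
```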
-
-API Reference
-=============
-
-Common Methods
---------------
-
-- :head:`/{db}` -- Check Database existence
-- :get:`/{db}` -- Retrieve Database information
-- :get:`/{db}/_local/{docid}` -- Read the last Checkpoint
-- :put:`/{db}/_local/{docid}` -- Save a new Checkpoint
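As a sketch of the Checkpoint read/save cycle behind the two ``_local`` endpoints above, the helper below models the Peer's ``_local`` document store as a plain dict; the field names follow the checkpoint documents shown earlier (``session_id``, ``source_last_seq``, ``history``), but the helper and the session ID values are illustrative:

```python
def save_checkpoint(local_docs, doc_id, session_id, source_last_seq):
    """Record a new Checkpoint, prepending this session to the history
    list as the _local checkpoint documents do."""
    doc = local_docs.get(doc_id, {"_id": "_local/" + doc_id, "history": []})
    doc["session_id"] = session_id
    doc["source_last_seq"] = source_last_seq
    doc["history"].insert(0, {"session_id": session_id,
                              "recorded_seq": source_last_seq})
    local_docs[doc_id] = doc
    return doc

docs = {}  # stand-in for the Peer's _local documents
save_checkpoint(docs, "afa899a9", "04bf15bf", 26)
save_checkpoint(docs, "afa899a9", "d5a34cbb", 42)
print(docs["afa899a9"]["source_last_seq"])  # -> 42
```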
-
-For Target
-----------
-
-- :put:`/{db}` -- Create Target if it does not exist and the option was provided
-- :post:`/{db}/_revs_diff` -- Locate Revisions that are not known to Target
-- :post:`/{db}/_bulk_docs` -- Upload Revisions to Target
-- :put:`/{db}/{docid}` -- Upload a single Document with attachments to Target
-- :post:`/{db}/_ensure_full_commit` -- Ensure that all changes are stored
-  on disk
-
-For Source
-----------
-
-- :get:`/{db}/_changes` -- Fetch changes since the last pull of Source
-- :post:`/{db}/_changes` -- Fetch changes for specified Document IDs since
-  the last pull of Source
-- :get:`/{db}/{docid}` -- Retrieve a single Document from Source
-  with attachments
-
-Reference
-=========
-
-* `Refuge RCouch wiki <https://github.com/refuge/rcouch/wiki/Replication-Algorithm>`_
-* `CouchBase Lite IOS wiki <https://github.com/couchbase/couchbase-lite-ios/wiki/Replication-Algorithm>`_
-
-.. _ECMA-262: http://www.ecma-international.org/publications/files/ecma-st/ECMA-262.pdf
-.. _MVCC: http://en.wikipedia.org/wiki/Multiversion_concurrency_control
-.. _CouchDB: http://couchdb.apache.org
-.. _Erlang: http://erlang.org
-.. _couch_replicator: https://github.com/apache/couchdb/tree/main/src/couch_replicator
-.. _change notifications: http://guide.couchdb.org/draft/notifications.html
diff --git a/src/replication/replicator.rst b/src/replication/replicator.rst
deleted file mode 100644
index 8f02587..0000000
--- a/src/replication/replicator.rst
+++ /dev/null
@@ -1,837 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _replicator:
-
-===================
-Replicator Database
-===================
-
-.. versionchanged:: 2.1.0 Scheduling replicator was introduced.
-   Replication states, by default are not written back to documents
-   anymore. There are new replication job states and new API endpoints
-   ``_scheduler/jobs`` and ``_scheduler/docs``.
-
-.. versionchanged:: 3.2.0 Fair share scheduling was introduced. Multiple
-   ``_replicator`` databases get an equal chance (configurable) of running
-   their jobs. Previously replication jobs were scheduled without any regard of
-   their originating database.
-
-.. versionchanged:: 3.3.0 ``winning_revs_only: true`` replicator option to
-   replicate the winning document revisions.
-
-The ``_replicator`` database works like any other in CouchDB, but
-documents added to it will trigger replications. Create (``PUT`` or
-``POST``) a document to start replication. ``DELETE`` a replication
-document to cancel an ongoing replication.
-
-These documents have exactly the same content as the JSON objects we used to
-``POST`` to ``_replicate`` (fields ``source``, ``target``, ``create_target``,
-``create_target_params``, ``continuous``, ``doc_ids``, ``filter``,
-``query_params``, ``use_checkpoints``, ``checkpoint_interval``).
-
-Replication documents can have a user defined ``_id`` (handy for finding
-a specific replication request later). Design Documents (and ``_local``
-documents) added to the replicator database are ignored.
-
-The default replicator database is ``_replicator``. Additional
-replicator databases can be created. To be recognized as such by the
-system, their database names should end with ``/_replicator``.
-
-Basics
-======
-
-Let's say you POST the following document into ``_replicator``:
-
-.. code-block:: javascript
-
-    {
-        "_id": "my_rep",
-        "source": "http://myserver.com/foo",
-        "target": {
-            "url": "http://localhost:5984/bar",
-            "auth": {
-                "basic": {
-                    "username": "user",
-                    "password": "pass"
-                }
-            }
-        },
-        "create_target":  true,
-        "continuous": true
-    }
-
-In the couch log you'll see two entries like these:
-
-.. code-block:: text
-
-    [notice] 2017-04-05T17:16:19.646716Z node1@127.0.0.1 <0.29432.0> -------- Replication `"a81a78e822837e66df423d54279c15fe+continuous+create_target"` is using:
-        4 worker processes
-        a worker batch size of 500
-        20 HTTP connections
-        a connection timeout of 30000 milliseconds
-        10 retries per request
-        socket options are: [{keepalive,true},{nodelay,false}]
-    [notice] 2017-04-05T17:16:19.646759Z node1@127.0.0.1 <0.29432.0> -------- Document `my_rep` triggered replication `a81a78e822837e66df423d54279c15fe+continuous+create_target`
-
-The replication state of this document can then be queried from
-``http://adm:pass@localhost:5984/_scheduler/docs/_replicator/my_rep``:
-
-.. code-block:: json
-
-     {
-         "database": "_replicator",
-         "doc_id": "my_rep",
-         "error_count": 0,
-         "id": "a81a78e822837e66df423d54279c15fe+continuous+create_target",
-         "info": {
-             "revisions_checked": 113,
-             "missing_revisions_found": 113,
-             "docs_read": 113,
-             "docs_written": 113,
-             "changes_pending": 0,
-             "doc_write_failures": 0,
-             "checkpointed_source_seq": "113-g1AAAACTeJzLYWBgYMpgTmHgz8tPSTV0MDQy1zMAQsMckEQiQ1L9____szKYE01ygQLsZsYGqcamiZjKcRqRxwIkGRqA1H-oSbZgk1KMLCzTDE0wdWUBAF6HJIQ",
-             "source_seq": "113-g1AAAACTeJzLYWBgYMpgTmHgz8tPSTV0MDQy1zMAQsMckEQiQ1L9____szKYE01ygQLsZsYGqcamiZjKcRqRxwIkGRqA1H-oSbZgk1KMLCzTDE0wdWUBAF6HJIQ",
-             "through_seq": "113-g1AAAACTeJzLYWBgYMpgTmHgz8tPSTV0MDQy1zMAQsMckEQiQ1L9____szKYE01ygQLsZsYGqcamiZjKcRqRxwIkGRqA1H-oSbZgk1KMLCzTDE0wdWUBAF6HJIQ"
-         },
-         "last_updated": "2017-04-05T19:18:15Z",
-         "node": "node1@127.0.0.1",
-         "source_proxy": null,
-         "target_proxy": null,
-         "source": "http://myserver.com/foo/",
-         "start_time": "2017-04-05T19:18:15Z",
-         "state": "running",
-         "target": "http://localhost:5984/bar/"
-     }
-
-The state is ``running``. That means the replicator has scheduled this
-replication job to run. The replication document contents stay the same;
-previously, before version 2.1, the document was updated with the
-``triggered`` state.
-
-The replication job will also appear in
-
-``http://adm:pass@localhost:5984/_scheduler/jobs``
-
-.. code-block:: json
-
-      {
-          "jobs": [
-              {
-                  "database": "_replicator",
-                  "doc_id": "my_rep",
-                  "history": [
-                      {
-                          "timestamp": "2017-04-05T19:18:15Z",
-                          "type": "started"
-                      },
-                      {
-                          "timestamp": "2017-04-05T19:18:15Z",
-                          "type": "added"
-                      }
-                  ],
-                  "id": "a81a78e822837e66df423d54279c15fe+continuous+create_target",
-                  "info": {
-                      "changes_pending": 0,
-                      "checkpointed_source_seq": "113-g1AAAACTeJzLYWBgYMpgTmHgz8tPSTV0MDQy1zMAQsMckEQiQ1L9____szKYE01ygQLsZsYGqcamiZjKcRqRxwIkGRqA1H-oSbZgk1KMLCzTDE0wdWUBAF6HJIQ",
-                      "doc_write_failures": 0,
-                      "docs_read": 113,
-                      "docs_written": 113,
-                      "missing_revisions_found": 113,
-                      "revisions_checked": 113,
-                      "source_seq": "113-g1AAAACTeJzLYWBgYMpgTmHgz8tPSTV0MDQy1zMAQsMckEQiQ1L9____szKYE01ygQLsZsYGqcamiZjKcRqRxwIkGRqA1H-oSbZgk1KMLCzTDE0wdWUBAF6HJIQ",
-                      "through_seq": "113-g1AAAACTeJzLYWBgYMpgTmHgz8tPSTV0MDQy1zMAQsMckEQiQ1L9____szKYE01ygQLsZsYGqcamiZjKcRqRxwIkGRqA1H-oSbZgk1KMLCzTDE0wdWUBAF6HJIQ"
-                  },
-                  "node": "node1@127.0.0.1",
-                  "pid": "<0.1174.0>",
-                  "source": "http://myserver.com/foo/",
-                  "start_time": "2017-04-05T19:18:15Z",
-                  "target": "http://localhost:5984/bar/",
-                  "user": null
-              }
-          ],
-          "offset": 0,
-          "total_rows": 1
-      }
-
-``_scheduler/jobs`` shows more information, such as a detailed history of
-state changes. If a persistent replication has not yet started,
-has failed, or is completed, information about its state can only be found
-in ``_scheduler/docs``. Keep in mind that some replication documents could be
-invalid and never become replication jobs. Others might be delayed
-because they are fetching data from a slow source database.
-
-If there is an error, for example if the source database is missing, the
-replication job will crash and retry after a wait period. Each
-successive crash will result in a longer waiting period.
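The growing wait can be sketched as a capped exponential backoff; the base interval and exact cap below are illustrative, not CouchDB's actual configuration values (the scheduler's maximum backoff is about a day):

```python
def backoff_interval(consecutive_crashes, base=5, cap=86400):
    """Double the wait (in seconds) after each consecutive crash,
    capped at roughly one day."""
    return min(base * 2 ** consecutive_crashes, cap)

print([backoff_interval(n) for n in range(5)])  # -> [5, 10, 20, 40, 80]
```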
-
-For example, POST-ing this document
-
-.. code-block:: javascript
-
-    {
-        "_id": "my_rep_crashing",
-        "source": "http://myserver.com/missing",
-        "target": {
-            "url": "http://localhost:5984/bar",
-            "auth": {
-                "basic": {
-                    "username": "user",
-                    "password": "pass"
-                }
-            }
-        },
-        "create_target":  true,
-        "continuous": true
-    }
-
-when the source database is missing, will result in periodic starts and
-crashes at increasingly longer intervals. The ``history`` list from
-``_scheduler/jobs`` for this replication would look something like this:
-
-.. code-block:: json
-
-    [
-          {
-              "reason": "db_not_found: could not open http://adm:*****@localhost:5984/missing/",
-              "timestamp": "2017-04-05T20:55:10Z",
-              "type": "crashed"
-          },
-          {
-              "timestamp": "2017-04-05T20:55:10Z",
-              "type": "started"
-          },
-          {
-              "reason": "db_not_found: could not open http://adm:*****@localhost:5984/missing/",
-              "timestamp": "2017-04-05T20:47:10Z",
-              "type": "crashed"
-          },
-          {
-              "timestamp": "2017-04-05T20:47:10Z",
-              "type": "started"
-          }
-    ]
-
-``_scheduler/docs`` shows a shorter summary:
-
-.. code-block:: json
-
-    {
-          "database": "_replicator",
-          "doc_id": "my_rep_crashing",
-          "error_count": 6,
-          "id": "cb78391640ed34e9578e638d9bb00e44+create_target",
-          "info": {
-               "error": "db_not_found: could not open http://myserver.com/missing/"
-          },
-          "last_updated": "2017-04-05T20:55:10Z",
-          "node": "node1@127.0.0.1",
-          "source_proxy": null,
-          "target_proxy": null,
-          "source": "http://myserver.com/missing/",
-          "start_time": "2017-04-05T20:38:34Z",
-          "state": "crashing",
-          "target": "http://localhost:5984/bar/"
-    }
-
-Repeated crashes are reported as a ``crashing`` state. The ``-ing`` suffix
-implies this is a temporary state: a user could at any moment create the
-missing database, and the replication job would then return to normal.
-
-Documents describing the same replication
-=========================================
-
-Let's suppose two documents are added to the ``_replicator`` database in
-the following order:
-
-.. code-block:: javascript
-
-    {
-        "_id": "my_rep",
-        "source": "http://myserver.com/foo",
-        "target":  "http://user:pass@localhost:5984/bar",
-        "create_target":  true,
-        "continuous": true
-    }
-
-and
-
-.. code-block:: javascript
-
-    {
-        "_id": "my_rep_dup",
-        "source": "http://myserver.com/foo",
-        "target":  "http://user:pass@localhost:5984/bar",
-        "create_target":  true,
-        "continuous": true
-    }
-
-Both describe exactly the same replication (only their ``_id`` values
-differ). In this case document ``my_rep`` triggers the replication, while
-``my_rep_dup`` will fail. Inspecting ``_scheduler/docs`` explains
-exactly why it failed:
-
-.. code-block:: json
-
-        {
-            "database": "_replicator",
-            "doc_id": "my_rep_dup",
-            "error_count": 1,
-            "id": null,
-            "info": {
-                "error": "Replication `a81a78e822837e66df423d54279c15fe+continuous+create_target` specified by document `my_rep_dup` already started, triggered by document `my_rep` from db `_replicator`"
-            },
-            "last_updated": "2017-04-05T21:41:51Z",
-            "source": "http://myserver.com/foo/",
-            "start_time": "2017-04-05T21:41:51Z",
-            "state": "failed",
-            "target": "http://user:****@localhost:5984/bar"
-        }
-
-Notice the state for this replication is ``failed``. Unlike
-``crashing``, the ``failed`` state is terminal. As long as both documents
-are present, the replicator will not retry the ``my_rep_dup``
-replication. Another reason for failure could be a malformed document. For
-example, if the worker process count is specified as a string
-(``"worker_processes": "a few"``) instead of an integer, the job will fail.
-
-Replication Scheduler
-=====================
-
-Once replication jobs are created they are managed by the scheduler. The
-scheduler is the replication component which periodically stops some
-jobs and starts others. This behavior makes it possible to have a
-larger number of jobs than the cluster could run simultaneously.
-Replication jobs which keep failing will be penalized and forced to
-wait. The wait time increases exponentially with each consecutive
-failure.
-
-When deciding which jobs to stop and which to start, the scheduler uses
-a round-robin algorithm to ensure fairness. Jobs which have been running
-the longest time will be stopped, and jobs which have been waiting the
-longest time will be started.
-
-.. note:: Non-continuous (normal) replications are treated differently
-          once they start running. See the :ref:`Normal vs Continuous
-          Replications` section for more information.
-
-The behavior of the scheduler can be configured via the ``max_jobs``,
-``interval`` and ``max_churn`` options. See the :ref:`Replicator
-configuration section <config/replicator>` for additional information.
-
-.. _replicator/states:
-
-Replication states
-==================
-
-Replication jobs during their life-cycle pass through various states.
-This is a diagram of all the states and transitions between them:
-
-.. figure:: ../../images/replication-state-diagram.svg
-     :align: center
-     :alt: Replication state diagram
-
-     Replication state diagram
-
-Blue and yellow shapes represent replication job states.
-
-Trapezoidal shapes represent external APIs; that's how users interact
-with the replicator. Writing documents to ``_replicator`` is the
-preferred way of creating replications, but posting to the
-``_replicate`` HTTP endpoint is also supported.
-
-Six-sided shapes are internal API boundaries. They are optional for this
-diagram and are only shown as additional information to help clarify how the
-replicator works. There are two processing stages: the first is where
-replication documents are parsed and become replication jobs, and the second is
-the scheduler itself. The scheduler runs replication jobs, periodically
-stopping and starting some. Jobs posted via the ``_replicate`` endpoint bypass
-the first component and go straight to the scheduler.
-
-States descriptions
--------------------
-
-Before explaining the details of each state, it is worth noting the
-color and shape of each state in the diagram:
-
-`Blue` vs `yellow` partitions states into "healthy" and "unhealthy",
-respectively. Unhealthy states indicate something has gone wrong and
-may need the user's attention.
-
-`Rectangle` vs `oval` separates "terminal" states from "non-terminal"
-ones. Terminal states are those which will not transition to other
-states any more. Informally, jobs in a terminal state will not be
-retried and don't consume memory or CPU resources.
-
- * ``Initializing``: Indicates the replicator has noticed the change from
-   the replication document. Jobs should transition quickly through this
-   state. Being stuck here for a while could mean there is an internal
-   error.
-
- * ``Failed``: Replication document could not be processed and turned
-   into a valid replication job for the scheduler. This state is
-   terminal and requires user intervention to fix the problem. A typical
-   reason for ending up in this state is a malformed document. For
-   example, specifying an integer for a parameter which accepts a
-   boolean. Another reason for failure could be specifying a duplicate
-   replication. A duplicate replication is a replication with identical
-   parameters but a different document ID.
-
- * ``Error``: Replication document update could not be turned into a
-   replication job. Unlike the ``Failed`` state, this one is temporary,
-   and the replicator will keep retrying periodically. An
-   exponential backoff is applied in case of consecutive failures. The main
-   reason this state exists is to handle filtered replications with
-   custom user functions. Filter function content is needed in order to
-   calculate the replication ID, so a replication job cannot be created
-   until the function code is retrieved. Because retrieval happens over
-   the network, temporary failures have to be handled.
-
- * ``Running``: Replication job is running normally. This means there
-   might be a changes feed open, and if changes are noticed, they would
-   be processed and posted to the target. A job is still considered
-   ``Running`` even if its workers are currently not streaming changes
-   from source to target and are just waiting on the changes feed.
-   Continuous replications will most likely end up in this state.
-
- * ``Pending``: Replication job is not running and is waiting its turn.
-   This state is reached when the number of replication jobs added to
-   the scheduler exceeds ``replicator.max_jobs``. In that case the
-   scheduler will periodically stop and start subsets of jobs, trying to
-   give each one a fair chance at making progress.
-
- * ``Crashing``: Replication job has been successfully added to the
-   replication scheduler. However, an error was encountered during the
-   last run. The error could be a network failure, a missing source
-   database, a permissions error, etc. Repeated consecutive crashes
-   result in an exponential backoff. This state is considered temporary
-   (non-terminal) and replication jobs will be periodically retried.
-   The maximum backoff interval is about a day.
-
- * ``Completed``: This is a terminal, successful state for
-   non-continuous replications. Once in this state the replication is
-   "forgotten" by the scheduler and it doesn't consume any more CPU or
-   memory resources. Continuous replication jobs will never reach this
-   state.
-
-.. _Normal vs Continuous Replications:
-
-Normal vs Continuous Replications
----------------------------------
-
-Normal (non-continuous) replications, once started, will be allowed to run
-to completion. That behavior preserves their semantics of
-replicating a snapshot of the source database to the target. For example,
-if new documents are added to the source after the replication is
-started, those updates should not show up on the target database.
-Stopping and restarting a normal replication would violate that
-constraint.
-
-.. warning:: When there is a mix of continuous and normal replications,
-    once normal replications are scheduled to run, they might temporarily
-    starve continuous replication jobs.
-
-However, normal replications will still be stopped and rescheduled if an
-operator reduces the maximum number of replications. This is so that
-an operator can recover a node that is being overwhelmed by
-replications. Any stopped replications will be resubmitted to the
-queue to be rescheduled.
-
-Compatibility Mode
-==================
-
-Previous versions of the CouchDB replicator wrote state updates back to
-replication documents. For cases where user code programmatically reads
-those states, a compatibility mode can be enabled via a configuration
-setting::
-
-  [replicator]
-  update_docs = true
-
-In this mode the replicator will continue to write state updates to the
-documents.
-
-To effectively disable the scheduling behavior, which periodically stops
-and starts jobs, set the ``max_jobs`` configuration setting to a large
-number. For example::
-
-  [replicator]
-  max_jobs = 9999999
-
-See :ref:`Replicator configuration section <config/replicator>` for
-other replicator configuration options.
-
-Canceling replications
-======================
-
-To cancel a replication, simply ``DELETE`` the document which triggered
-the replication. To update a replication, for example to change the number
-of workers or the source, simply update the document with new data. Any
-extra application-specific data in the replication documents is ignored
-by the replicator.
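Canceling therefore reduces to a document ``DELETE``. The helper below only builds the request line, to show the URL shape, including the escaping needed when the database name contains a ``/``; the function itself and the revision value are illustrative:

```python
from urllib.parse import quote

def cancel_replication_request(db, doc_id, rev):
    """Build the DELETE request line that cancels a replication by
    deleting the triggering document; '/' in db names must be escaped."""
    return "DELETE /{}/{}?rev={}".format(quote(db, safe=""), doc_id, rev)

print(cancel_replication_request("another/_replicator", "my_rep", "1-5dd2"))
# -> DELETE /another%2F_replicator/my_rep?rev=1-5dd2
```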
-
-Server restart
-==============
-
-When CouchDB is restarted, it checks its ``_replicator`` databases and
-restarts replications described by documents if they are not already
-in a ``completed`` or ``failed`` state. If they are, they are ignored.
-
-Clustering
-==========
-
-In a cluster, replication jobs are balanced evenly among all the
-nodes such that a replication job runs on only one node at a time.
-
-Every time there is a cluster membership change, that is, when nodes are
-added or removed, as happens during a rolling reboot, the replicator
-application will notice the change, rescan all the documents and running
-replications, and re-evaluate their cluster placement in light of the new
-set of live nodes. This mechanism also provides replication fail-over in
-case a node fails. Replication jobs started from replication documents
-(but not those started from the ``_replicate`` HTTP endpoint) will
-automatically migrate to one of the live nodes.
-
-Additional Replicator Databases
-===============================
-
-Imagine the replicator database (``_replicator``) has these two documents,
-which represent pull replications from servers A and B:
-
-.. code-block:: javascript
-
-    {
-        "_id": "rep_from_A",
-        "source":  "http://aserver.com:5984/foo",
-        "target": {
-            "url": "http://localhost:5984/foo_a",
-            "auth": {
-                "basic": {
-                    "username": "user",
-                    "password": "pass"
-                }
-            }
-        },
-        "continuous":  true
-    }
-
-.. code-block:: javascript
-
-    {
-        "_id": "rep_from_B",
-        "source":  "http://bserver.com:5984/foo",
-        "target": {
-            "url": "http://localhost:5984/foo_b",
-            "auth": {
-                "basic": {
-                    "username": "user",
-                    "password": "pass"
-                }
-            }
-        },
-        "continuous":  true
-    }
-
-Now without stopping and restarting CouchDB, add another replicator
-database. For example ``another/_replicator``:
-
-.. code-block:: bash
-
-    $ curl -X PUT http://user:pass@localhost:5984/another%2F_replicator/
-    {"ok":true}
-
-.. note::
-   A ``/`` character in a database name, when used in a URL, should be escaped.
-
-Then add a replication document to the new replicator database:
-
-.. code-block:: javascript
-
-    {
-        "_id": "rep_from_X",
-        "source":  "http://xserver.com:5984/foo",
-        "target":  "http://user:pass@localhost:5984/foo_x",
-        "continuous":  true
-    }
-
-From now on, there are three replications active in the system: two
-replications from A and B, and a new one from X.
-
-Then remove the additional replicator database:
-
-.. code-block:: bash
-
-    $ curl -X DELETE http://user:pass@localhost:5984/another%2F_replicator/
-    {"ok":true}
-
-After this operation, replication pulling from server X will be stopped
-and the replications in the ``_replicator`` database (pulling from
-servers A and B) will continue.
-
-Fair Share Job Scheduling
-=========================
-
-When multiple ``_replicator`` databases are used, and the total number
-of jobs on any node is greater than ``max_jobs``, replication jobs
-will be scheduled such that each of the ``_replicator`` databases by
-default gets an equal chance of running its jobs.
-
-This is accomplished by assigning a number of "shares" to each
-``_replicator`` database and then automatically adjusting the
-proportion of running jobs to match each database's proportion of
-shares. By default, each ``_replicator`` database is assigned 100
-shares. It is possible to alter the share assignments for each
-individual ``_replicator`` database in the :ref:`[replicator.shares]
-<config/replicator.shares>` configuration section.
-
-The fair share behavior is perhaps most easily described with a set of
-examples. Each example assumes the default of ``max_jobs = 500``, and
-two replicator databases: ``_replicator`` and ``another/_replicator``.
-
-Example 1: If ``_replicator`` has 1000 jobs and
-``another/_replicator`` has 10, the scheduler will run about 490 jobs
-from ``_replicator`` and 10 jobs from ``another/_replicator``.
-
-Example 2: If ``_replicator`` has 200 jobs and ``another/_replicator``
-also has 200 jobs, all 400 jobs will get to run as the sum of all the
-jobs is less than the ``max_jobs`` limit.
-
-Example 3: If both replicator databases have 1000 jobs each, the
-scheduler will run about 250 jobs from each database on average.
-
-Example 4: If both replicator databases have 1000 jobs each, but
-``_replicator`` was assigned 400 shares, then on average the scheduler
-would run about 400 jobs from ``_replicator`` and 100 jobs from
-``another/_replicator``.
-
-The proportions described in the examples are approximate, might
-oscillate a bit, and may take anywhere from tens of minutes to
-an hour to converge.
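The steady-state proportions in the examples above can be sketched with a small allocation function; the two-pass algorithm here is illustrative only, since the actual scheduler converges to these proportions gradually rather than computing them directly:

```python
def fair_share(max_jobs, dbs):
    """Approximate steady-state job allocation: give each _replicator
    database a share-proportional slice capped by its demand, then
    hand unused capacity to databases that still have pending jobs."""
    total_shares = sum(d["shares"] for d in dbs.values())
    alloc, remaining = {}, max_jobs
    for name, d in dbs.items():
        slice_ = max_jobs * d["shares"] // total_shares
        alloc[name] = min(d["jobs"], slice_)
        remaining -= alloc[name]
    for name, d in dbs.items():
        extra = min(d["jobs"] - alloc[name], remaining)
        alloc[name] += extra
        remaining -= extra
    return alloc

# Example 1: 1000 vs 10 jobs, equal shares -> about 490 and 10
print(fair_share(500, {"_replicator": {"jobs": 1000, "shares": 100},
                       "another/_replicator": {"jobs": 10, "shares": 100}}))
# -> {'_replicator': 490, 'another/_replicator': 10}
```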
-
-Replicating the replicator database
-===================================
-
-Imagine that on server C you have a replicator database with the
-following two pull replication documents in it:
-
-.. code-block:: javascript
-
-    {
-         "_id": "rep_from_A",
-         "source":  "http://aserver.com:5984/foo",
-         "target":  "http://user:pass@localhost:5984/foo_a",
-         "continuous":  true
-    }
-
-.. code-block:: javascript
-
-    {
-         "_id": "rep_from_B",
-         "source":  "http://bserver.com:5984/foo",
-         "target":  "http://user:pass@localhost:5984/foo_b",
-         "continuous":  true
-    }
-
-Now you would like to have the same pull replications going on in server
-D, that is, you would like server D to pull replicate from
-servers A and B. You have two options:
-
-- Explicitly add two documents to server D's replicator database
-
-- Replicate server C's replicator database into server D's replicator
-  database
-
-Both alternatives accomplish exactly the same goal.
-
-Delegations
-===========
-
-Replication documents can have a custom ``user_ctx`` property. This
-property defines the user context under which a replication runs. For
-the old way of triggering a replication (POSTing to ``/_replicate/``),
-this property is not needed. That's because information about the
-authenticated user is readily available during the replication, which is
-not persistent in that case. Now, with the replicator database, the
-problem is that information about which user is starting a particular
-replication is only present when the replication document is written.
-The information in the replication document and the replication itself
-are persistent, however. This implementation detail implies that in the
-case of a non-admin user, a ``user_ctx`` property containing the user's
-name and a subset of their roles must be defined in the replication
-document. This is enforced by the document update validation function
-present in the default design document of the replicator database. The
-validation function also ensures that non-admin users are unable to set
-the value of the user context's ``name`` property to anything other than
-their own user name. The same principle applies for roles.
-
-For admins, the ``user_ctx`` property is optional, and if it's missing
-it defaults to a user context with name ``null`` and an empty list of
-roles, which means design documents won't be written to local targets.
-If writing design documents to local targets is desired, the role
-``_admin`` must be present in the user context's list of roles.
-
-Also, for admins the ``user_ctx`` property can be used to trigger a
-replication on behalf of another user. This is the user context that
-will be passed to local target database document validation functions.
-
-.. note:: The ``user_ctx`` property only has effect for local endpoints.
-
-Example delegated replication document:
-
-.. code-block:: javascript
-
-    {
-        "_id": "my_rep",
-        "source":  "http://bserver.com:5984/foo",
-        "target":  "http://user:pass@localhost:5984/bar",
-        "continuous":  true,
-        "user_ctx": {
-            "name": "joe",
-            "roles": ["erlanger", "researcher"]
-        }
-    }
-
-As stated before, the ``user_ctx`` property is optional for admins,
-while being mandatory for regular (non-admin) users. When the roles
-property of ``user_ctx`` is missing, it defaults to the empty list
-``[]``.
-
-.. _selectorobj:
-
-Selector Objects
-================
-
-Including a Selector Object in the replication document enables you to
-use a query expression to determine if a document should be included in
-the replication.
-
-The selector specifies fields in the document, and provides an expression
-to evaluate with the field content or other data. If the expression resolves
-to ``true``, the document is replicated.
-
-The selector object must:
-
--  Be structured as valid JSON.
--  Contain a valid query expression.
-
-The syntax for a selector is the same as the
-:ref:`selectorsyntax <find/selectors>` used for :ref:`_find <api/db/_find>`.
-
-Using a selector is significantly more efficient than using a JavaScript
-filter function, and is the recommended option if filtering on document
-attributes only.
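-
-For example, a replication document that copies only documents whose
-(illustrative) ``type`` field equals ``"recipe"`` could look like this:
-
-.. code-block:: javascript
-
-    {
-        "_id": "my_filtered_rep",
-        "source":  "http://aserver.com:5984/foo",
-        "target":  "http://user:pass@localhost:5984/foo_recipes",
-        "continuous":  true,
-        "selector": {
-            "type": "recipe"
-        }
-    }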
-
-Specifying Usernames and Passwords
-===================================
-
-There are multiple ways to specify usernames and passwords for replication endpoints:
-
- - In an ``{"auth": {"basic": ...}}`` object:
-
-    .. versionadded:: 3.2.0
-
-    .. code-block:: javascript
-
-        {
-            "target": {
-                "url": "http://someurl.com/mydb",
-                "auth": {
-                    "basic": {
-                        "username": "$username",
-                        "password": "$password"
-                     }
-                }
-            },
-            ...
-        }
-
-   This is the preferred format as it allows including characters like ``@``, ``:``
-   and others in the username and password fields.
-
- - In the userinfo part of the endpoint URL. This allows for a more compact
-   endpoint representation; however, it prevents using characters like ``@`` and ``:``
-   in usernames or passwords:
-
-    .. code-block:: javascript
-
-        {
-            "target":  "http://user:pass@localhost:5984/bar"
-            ...
-        }
-
-   Specifying credentials in the userinfo part of the URL is deprecated as per
-   `RFC3986 <https://datatracker.ietf.org/doc/html/rfc3986#section-3.2.1>`_.
-   CouchDB still supports this way of specifying credentials and doesn't yet
-   have a target release when support will be removed.
-
- - In an ``"Authorization: Basic $b64encoded_username_and_password"`` header:
-
-    .. code-block:: javascript
-
-        {
-            "target": {
-                "url": "http://someurl.com/mydb",
-                "headers": {
-                    "Authorization": "Basic dXNlcjpwYXNz"
-                }
-            },
-            ...
-        }
-
-    This method has the downside of requiring an extra base64-encoding
-    step. In addition, it could give the impression that it encrypts or
-    hides the credentials, and so could encourage inadvertent sharing
-    and leaking of credentials.
-
-When credentials are provided in multiple forms, they are selected in the following order:
-
-  - ``"auth": {"basic": {...}}`` object
-  - URL userinfo
-  - ``"Authorization: Basic ..."`` header.
-
-First, the ``auth`` object is checked, and if credentials are defined there,
-they are used. If not, the URL userinfo is checked. If credentials are found
-there, those credentials are used; otherwise, the basic auth header is used.
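-
-For example, given the following (illustrative) target definition, which
-contains credentials both in an ``auth`` object and in the URL userinfo,
-the ``auth`` object takes precedence and ``user1``/``pass1`` are used:
-
-.. code-block:: javascript
-
-    {
-        "target": {
-            "url": "http://user2:pass2@someurl.com/mydb",
-            "auth": {
-                "basic": {
-                    "username": "user1",
-                    "password": "pass1"
-                }
-            }
-        },
-        ...
-    }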
-
-Replicate Winning Revisions Only
-================================
-
-Use the ``winning_revs_only: true`` option to replicate "winning" document
-revisions only. These are the revisions that would be returned by the ``GET
-db/doc`` API endpoint by default, or appear in the ``_changes`` feed with the
-default parameters.
-
-.. code-block:: http
-
-    POST http://couchdb:5984/_replicate HTTP/1.1
-    Accept: application/json
-    Content-Type: application/json
-
-    {
-        "winning_revs_only" : true,
-        "source" : "http://source:5984/recipes",
-        "target" : "http://target:5984/recipes"
-    }
-
-Replication with this mode discards conflicting revisions, so it could be one
-way to remove conflicts through replication.
-
-Replication IDs and checkpoint IDs generated by ``winning_revs_only: true``
-replications will be different from those generated by default, so it is
-possible to first replicate the winning revisions, and then later "backfill"
-the rest of the revisions with a regular replication job.
-
-The ``winning_revs_only: true`` option can be combined with filters or other
-options like ``continuous: true`` or ``create_target: true``.
diff --git a/src/setup/cluster.rst b/src/setup/cluster.rst
deleted file mode 100644
index ea6f6cd..0000000
--- a/src/setup/cluster.rst
+++ /dev/null
@@ -1,373 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _setup/cluster:
-
-==============
-Cluster Set Up
-==============
-
-This section describes everything you need to know to prepare, install, and
-set up your first CouchDB 2.x/3.x cluster.
-
-Ports and Firewalls
-===================
-
-CouchDB uses the following ports:
-
-+-------------+----------+--------------------------+----------------------+
-| Port Number | Protocol | Recommended binding      | Usage                |
-+=============+==========+==========================+======================+
-| 5984        | tcp      | As desired, by           | Standard clustered   |
-|             |          | default ``localhost``    | port for all HTTP    |
-|             |          |                          | API requests         |
-+-------------+----------+--------------------------+----------------------+
-| 4369        | tcp      | ``localhost`` for single | Erlang port mapper   |
-|             |          | node installs. Private   | daemon (epmd)        |
-|             |          | interface if clustered   |                      |
-+-------------+----------+--------------------------+----------------------+
-| Random      | tcp      | Private interface        | Communication with   |
-| above 1024  |          |                          | other CouchDB nodes  |
-| (see below) |          |                          | in the cluster       |
-+-------------+----------+--------------------------+----------------------+
-
-CouchDB in clustered mode uses the port ``5984``, just as in a standalone
-configuration. Port ``5986``, previously used in CouchDB 2.x, has been removed
-in CouchDB 3.x. All endpoints previously accessible at that port are now
-available under the ``/_node/{node-name}/...`` hierarchy via the primary ``5984``
-port.
-
-CouchDB uses Erlang-native clustering functionality to achieve a clustered
-installation.  Erlang uses TCP port ``4369`` (EPMD) to find other nodes, so all
-servers must be able to speak to each other on this port. In an Erlang cluster,
-all nodes are connected to all other nodes, in a mesh network configuration.
-
-Every Erlang application running on that machine (such as CouchDB) then uses
-automatically assigned ports for communication with other nodes. Yes, this
-means random ports. This will obviously not work with a firewall, but it is
-possible to force an Erlang application to use a specific port range.
-
-This documentation will use the range TCP ``9100-9200``, but this range is
-unnecessarily broad. If you only have a single Erlang application running on a
-machine, the range can be limited to a single port: ``9100-9100``, since the
-ports Erlang assigns are for *inbound connections* only. Three CouchDB nodes
-running on a single machine, as in a development cluster scenario, would need
-three ports in this range.
-
-.. warning::
-    If you expose the distribution port to the Internet or any other untrusted
-    network, then the only thing protecting you is the Erlang
-    `cookie`_.
-
-.. _cookie: http://erlang.org/doc/reference_manual/distributed.html
-
-Configure and Test the Communication with Erlang
-================================================
-
-Make CouchDB use the correct IP/FQDN and the open ports
---------------------------------------------------------
-
-In file ``etc/vm.args`` change the line ``-name couchdb@127.0.0.1`` to
-``-name couchdb@<reachable-ip-address|fully-qualified-domain-name>`` which defines
-the name of the node. Each node must have an identifier that allows remote
-systems to talk to it. The node name is of the form
-``<name>@<reachable-ip-address|fully-qualified-domain-name>``.
-
-The name portion can be ``couchdb`` on all nodes, unless you are running more
-than one CouchDB node on the same server with the same IP address or domain
-name. In that case, we recommend names of ``couchdb1``, ``couchdb2``, etc.
-
-The second portion of the node name must be an identifier by which other nodes
-can access this node -- either the node's fully qualified domain name (FQDN) or
-the node's IP address. The FQDN is preferred so that you can renumber the node's
-IP address without disruption to the cluster. (This is common in cloud-hosted
-environments.)
-
-.. warning::
-
-    Tricks with ``/etc/hosts`` and ``libresolv`` don't work with Erlang.
-    Either properly set up DNS and use fully-qualified domain names, or
-    use IP addresses. DNS and FQDNs are preferred.
-
-    Changing the name later is somewhat cumbersome (i.e. moving shards), which
-    is why you will want to set it once and not have to change it.
-
-Open ``etc/vm.args``, on all nodes, and add ``-kernel inet_dist_listen_min 9100``
-and ``-kernel inet_dist_listen_max 9200`` like below:
-
-.. code-block:: erlang
-
-    -name ...
-    -setcookie ...
-    ...
-    -kernel inet_dist_listen_min 9100
-    -kernel inet_dist_listen_max 9200
-
-Again, a small range is fine, down to a single port (set both to ``9100``) if you
-only ever run a single CouchDB node on each machine.
-
-Confirming connectivity between nodes
--------------------------------------
-
-For this test, you need two servers with working hostnames. Let us call them
-server1.test.com and server2.test.com. They reside at ``192.168.0.1`` and
-``192.168.0.2``, respectively.
-
-On server1.test.com:
-
-.. code-block:: bash
-
-    erl -name bus@192.168.0.1 -setcookie 'brumbrum' -kernel inet_dist_listen_min 9100 -kernel inet_dist_listen_max 9200
-
-Then on server2.test.com:
-
-.. code-block:: bash
-
-    erl -name car@192.168.0.2 -setcookie 'brumbrum' -kernel inet_dist_listen_min 9100 -kernel inet_dist_listen_max 9200
-
-An explanation to the commands:
-    * ``erl`` the Erlang shell.
-    * ``-name bus@192.168.0.1`` the name of the Erlang node and its IP address or FQDN.
-    * ``-setcookie 'brumbrum'`` the "password" used when nodes connect to each
-      other.
-    * ``-kernel inet_dist_listen_min 9100`` the lowest port in the range.
-    * ``-kernel inet_dist_listen_max 9200`` the highest port in the range.
-
-This gives us two Erlang shells: shell1 on server1 and shell2 on server2.
-Time to connect them. Enter the following, being sure to end the line with a
-period (``.``):
-
-In shell1:
-
-.. code-block:: erlang
-
-    net_kernel:connect_node('car@192.168.0.2').
-
-This will connect to the node called ``car`` on the server called
-``192.168.0.2``.
-
-If that returns true, then you have an Erlang cluster, and the firewalls are
-open. This means that 2 CouchDB nodes on these two servers will be able to
-communicate with each other successfully. If you get false or nothing at all,
-then you have a problem with the firewall, DNS, or your settings. Try again.
-
-If you're concerned about firewall issues, or having trouble connecting all
-nodes of your cluster later on, repeat the above test between all pairs of
-servers to confirm that connectivity and system configuration are correct.
-
-.. _cluster/setup/prepare:
-
-Preparing CouchDB nodes to be joined into a cluster
-===================================================
-
-Before you can add nodes to form a cluster, you must have them listening on an
-IP address accessible from the other nodes in the cluster. You should also ensure
-that a few critical settings are identical across all nodes before joining them.
-
-The settings we recommend you set now, before joining the nodes into a cluster,
-are:
-
-1. ``etc/vm.args`` settings as described in the
-   :ref:`previous two sections<setup/cluster>`
-2. At least one :ref:`server administrator<config/admins>`
-   user (and password)
-3. Bind the node's clustered interface (port ``5984``) to a reachable IP address
-4. A consistent :config:option:`UUID <couchdb/uuid>`. The UUID is used in identifying
-   the cluster when replicating. If this value is not consistent across all nodes
-   in the cluster, replications may be forced to rewind the changes feed to zero,
-   leading to excessive memory, CPU and network use.
-5. A consistent :config:option:`httpd secret <chttpd_auth/secret>`. The secret
-   is used in calculating and evaluating cookie and proxy authentication, and should
-   be set consistently to avoid unnecessary repeated session cookie requests.
-
-As of CouchDB 3.0, steps 4 and 5 above are automatically performed for you when
-using the setup API endpoints described below.
-
-If you use a configuration management tool, such as Chef, Ansible, Puppet, etc.,
-then you can place these settings in a ``.ini`` file and distribute them to all
-nodes ahead of time. Be sure to pre-encrypt the password (cutting and pasting
-from a test instance is easiest) if you use this route to avoid CouchDB rewriting
-the file.
-
-If you do not use configuration management, or are just experimenting with
-CouchDB for the first time, use these commands *once per server* to perform
-steps 2 through 5 above. Be sure to change the ``password`` to something secure,
-and again, use the same password on all nodes. You may have to run these commands
-locally on each node; if so, replace ``<server-IP|FQDN>`` below with ``127.0.0.1``.
-
-.. code-block:: bash
-
-    # First, get two UUIDs to use later on. Be sure to use the SAME UUIDs on all nodes.
-    curl http://<server-IP|FQDN>:5984/_uuids?count=2
-
-    # CouchDB will respond with something like:
-    #   {"uuids":["60c9e8234dfba3e2fdab04bf92001142","60c9e8234dfba3e2fdab04bf92001cc2"]}
-    # Copy the provided UUIDs into your clipboard or a text editor for later use.
-    # Use the first UUID as the cluster UUID.
-    # Use the second UUID as the cluster shared http secret.
-
-    # Create the admin user and password:
-    curl -X PUT http://<server-IP|FQDN>:5984/_node/_local/_config/admins/admin -d '"password"'
-
-    # Now, bind the clustered interface to all IP addresses available on this machine
-    curl -X PUT http://<server-IP|FQDN>:5984/_node/_local/_config/chttpd/bind_address -d '"0.0.0.0"'
-
-    # If not using the setup wizard / API endpoint, the following 2 steps are required:
-    # Set the UUID of the node to the first UUID you previously obtained:
-    curl -X PUT http://<server-IP|FQDN>:5984/_node/_local/_config/couchdb/uuid -d '"FIRST-UUID-GOES-HERE"'
-
-    # Finally, set the shared http secret for cookie creation to the second UUID:
-    curl -X PUT http://<server-IP|FQDN>:5984/_node/_local/_config/chttpd_auth/secret -d '"SECOND-UUID-GOES-HERE"'
-
-.. _cluster/setup/wizard:
-
-The Cluster Setup Wizard
-========================
-
-CouchDB 2.x/3.x comes with a convenient Cluster Setup Wizard as part of the Fauxton
-web administration interface. For first-time cluster setup, and for
-experimentation, this is your best option.
-
-It is **strongly recommended** that the minimum number of nodes in a cluster is
-3. For more explanation, see the :ref:`Cluster Theory <cluster/theory>` section
-of this documentation.
-
-After installation and initial start-up of all nodes in your cluster, ensuring
-all nodes are reachable, and the pre-configuration steps listed above, visit
-Fauxton at ``http://<server1>:5984/_utils#setup``. You will be asked to set up
-CouchDB as a single-node instance or set up a cluster.
-
-When you click "Setup Cluster" you are asked for admin credentials again, and
-then to add nodes by IP address. To get more nodes, go through the same install
-procedure for each node, using the same machine to perform the setup process.
-Be sure to specify the total number of nodes you expect to add to the cluster
-before adding nodes.
-
-Now enter each node's IP address or FQDN in the setup wizard, ensuring you also
-enter the previously set server admin username and password.
-
-Once you have added all nodes, click "Setup" and Fauxton will finish the
-cluster configuration for you.
-
-To check that all nodes have been joined correctly, visit
-``http://<server-IP|FQDN>:5984/_membership`` on each node. The returned list
-should show all of the nodes in your cluster:
-
-.. code-block:: javascript
-
-    {
-      "all_nodes": [
-        "couchdb@server1.test.com",
-        "couchdb@server2.test.com",
-        "couchdb@server3.test.com"
-      ],
-      "cluster_nodes": [
-        "couchdb@server1.test.com",
-        "couchdb@server2.test.com",
-        "couchdb@server3.test.com"
-      ]
-    }
-
-The ``all_nodes`` section is the list of *expected* nodes; the ``cluster_nodes``
-section is the list of *actually connected* nodes. Be sure the two lists match.
-
-Now your cluster is ready and available! You can send requests to any one of
-the nodes, and all three will respond as if you are working with a single
-CouchDB cluster.
-
-For a proper production setup, you'd now set up an HTTP reverse proxy in front
-of the cluster, for load balancing and SSL termination. We recommend
-`HAProxy`_, but others can be used. Sample configurations are available in the
-:ref:`best-practices` section.
-
-.. _cluster/setup/api:
-
-The Cluster Setup API
-=====================
-
-If you would prefer to manually configure your CouchDB cluster, CouchDB exposes
-the ``_cluster_setup`` endpoint for that purpose. After installation and
-initial setup/config, we can set up the cluster. On each node we need to run
-the following command to set up the node:
-
-.. code-block:: bash
-
-     curl -X POST -H "Content-Type: application/json" http://admin:password@127.0.0.1:5984/_cluster_setup -d '{"action": "enable_cluster", "bind_address":"0.0.0.0", "username": "admin", "password":"password", "node_count":"3"}'
-
-After that we can join all the nodes together. Choose one node as the "setup
-coordination node" to run all these commands on.  This "setup coordination
-node" only manages the setup and requires all other nodes to be able to see it
-and vice versa. *It has no special purpose beyond the setup process; CouchDB
-does not have the concept of a "master" node in a cluster.*
-
-Setup will not work with unavailable nodes. All nodes must be online and properly
-preconfigured before the cluster setup process can begin.
-
-To join a node to the cluster, run these commands for each node you want to add:
-
-.. code-block:: bash
-
-    curl -X POST -H "Content-Type: application/json" http://admin:password@<setup-coordination-node>:5984/_cluster_setup -d '{"action": "enable_cluster", "bind_address":"0.0.0.0", "username": "admin", "password":"password", "port": 5984, "node_count": "3", "remote_node": "<remote-node-ip>", "remote_current_user": "<remote-node-username>", "remote_current_password": "<remote-node-password>" }'
-    curl -X POST -H "Content-Type: application/json" http://admin:password@<setup-coordination-node>:5984/_cluster_setup -d '{"action": "add_node", "host":"<remote-node-ip>", "port": <remote-node-port>, "username": "admin", "password":"password"}'
-
-This will join the two nodes together. Keep running the above commands for each
-node you want to add to the cluster. Once this is done, run the following
-command to complete the cluster setup and add the system databases:
-
-.. code-block:: bash
-
-    curl -X POST -H "Content-Type: application/json" http://admin:password@<setup-coordination-node>:5984/_cluster_setup -d '{"action": "finish_cluster"}'
-
-Verify install:
-
-.. code-block:: bash
-
-    curl http://admin:password@<setup-coordination-node>:5984/_cluster_setup
-
-Response:
-
-.. code-block:: javascript
-
-    {"state":"cluster_finished"}
-
-Verify all cluster nodes are connected:
-
-.. code-block:: bash
-
-    curl http://admin:password@<setup-coordination-node>:5984/_membership
-
-Response:
-
-.. code-block:: javascript
-
-    {
-        "all_nodes": [
-            "couchdb@couch1.test.com",
-            "couchdb@couch2.test.com",
-            "couchdb@couch3.test.com"
-        ],
-        "cluster_nodes": [
-            "couchdb@couch1.test.com",
-            "couchdb@couch2.test.com",
-            "couchdb@couch3.test.com"
-        ]
-    }
-
-If the cluster is enabled and the ``all_nodes`` and ``cluster_nodes`` lists
-don't match, use curl to add nodes with
-PUT ``/_node/_local/_nodes/couchdb@<reachable-ip-address|fully-qualified-domain-name>``
-and remove nodes with
-DELETE ``/_node/_local/_nodes/couchdb@<reachable-ip-address|fully-qualified-domain-name>``.
-
-Your CouchDB cluster is now set up.
-
-.. _HAProxy: http://haproxy.org/
-.. _example configuration for HAProxy: https://github.com/apache/couchdb/blob/main/rel/haproxy.cfg
diff --git a/src/setup/index.rst b/src/setup/index.rst
deleted file mode 100644
index c3f8aa8..0000000
--- a/src/setup/index.rst
+++ /dev/null
@@ -1,27 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _setup:
-
-=====
-Setup
-=====
-
-CouchDB 2.x can be deployed in either a single-node or a clustered
-configuration. This section covers the first-time setup steps required for each
-of these configurations.
-
-.. toctree::
-    :maxdepth: 2
-
-    single-node
-    cluster
diff --git a/src/setup/single-node.rst b/src/setup/single-node.rst
deleted file mode 100644
index 30af61a..0000000
--- a/src/setup/single-node.rst
+++ /dev/null
@@ -1,57 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _setup/single-node:
-
-=================
-Single Node Setup
-=================
-
-Many users simply need a single-node CouchDB 2.x installation. Operationally,
-it is roughly equivalent to the CouchDB 1.x series. Note that a single-node
-setup obviously doesn't take any advantage of the new scaling and
-fault-tolerance features in CouchDB 2.x.
-
-After installation and initial startup, visit Fauxton at
-``http://127.0.0.1:5984/_utils#setup``. You will be asked to set up
-CouchDB as a single-node instance or set up a cluster. When you click
-“Single-Node-Setup”, you will get asked for an admin username and
-password. Choose them well and remember them.
-
-You can also bind CouchDB to a public address, so it is accessible within your
-LAN or the public, if you are doing this on a public VM. Or, you can keep the
-installation private by binding only to 127.0.0.1 (localhost). Binding to
-0.0.0.0 will bind to all addresses. The wizard then configures your admin
-username and password and creates the three system databases ``_users``,
-``_replicator`` and ``_global_changes`` for you.
-
-Another option is to set the configuration parameter ``[couchdb] single_node=true``
-in your ``local.ini`` file. When doing this, CouchDB will create the system
-databases for you on restart.
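-
-The relevant ``local.ini`` fragment would look like this:
-
-.. code-block:: ini
-
-    [couchdb]
-    single_node = true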
-
-Alternatively, if you don't want to use the Setup Wizard or set that value, and
-run 3.x as a single node with a server administrator already configured via
-:ref:`config file<config/admins>`, make sure to create the three system
-databases manually on startup:
-
-.. code-block:: sh
-
-    curl -X PUT http://127.0.0.1:5984/_users
-
-    curl -X PUT http://127.0.0.1:5984/_replicator
-
-    curl -X PUT http://127.0.0.1:5984/_global_changes
-
-Note that the last of these is not necessary if you do not expect to be
-using the global changes feed. Feel free to delete this database if you
-have created it, it has grown in size, and you do not need the function
-(and do not wish to waste system resources on compacting it regularly.)
diff --git a/src/whatsnew/0.10.rst b/src/whatsnew/0.10.rst
deleted file mode 100644
index 1e60fae..0000000
--- a/src/whatsnew/0.10.rst
+++ /dev/null
@@ -1,143 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _release/0.10.x:
-
-=============
-0.10.x Branch
-=============
-
-.. contents::
-    :depth: 1
-    :local:
-
-.. _release/0.10.x/upgrade:
-
-Upgrade Notes
-=============
-
-.. warning::
-    :ref:`release/0.10.2` contains important security fixes. Previous `0.10.x`
-    releases are not recommended for regular usage.
-
-Modular Configuration Directories
----------------------------------
-
-CouchDB now loads configuration from the following places (`glob(7)`_ syntax)
-in order:
-
-- PREFIX/default.ini
-- PREFIX/default.d/*
-- PREFIX/local.ini
-- PREFIX/local.d/*
-
-The configuration options for `couchdb` script have changed to::
-
-  -a FILE     add configuration FILE to chain
-  -A DIR      add configuration DIR to chain
-  -n          reset configuration file chain (including system default)
-  -c          print configuration file chain and exit
-
-.. _glob(7): http://linux.die.net/man/7/glob
-
-Show and List API change
-------------------------
-
-Show and List functions must have a new structure in 0.10.
-See `Formatting_with_Show_and_List`_ for details.
-
-.. _Formatting_with_Show_and_List: http://wiki.apache.org/couchdb/Formatting_with_Show_and_List
-
-Stricter enforcing of reduciness in reduce-functions
-----------------------------------------------------
-
-Reduce functions are now required to reduce the number of values for a key.
-
-View query reduce parameter strictness
---------------------------------------
-
-CouchDB now considers the parameter ``reduce=false`` to be an error for queries
-of map-only views, and responds with status code 400.
-
-.. _release/0.10.2:
-
-Version 0.10.2
-==============
-
-Build and System Integration
-----------------------------
-
-* Fixed distribution preparation for building on Mac OS X.
-
-Security
---------
-
-* Fixed :ref:`cve/2010-0009`
-
-Replicator
-----------
-
-* Avoid leaking file descriptors on automatic replication restarts.
-
-.. _release/0.10.1:
-
-Version 0.10.1
-==============
-
-Build and System Integration
-----------------------------
-
-* Test suite now works with the distcheck target.
-
-Replicator
-----------
-
-* Stability enhancements regarding redirects, timeouts, OAuth.
-
-Query Server
-------------
-
-* Avoid process leaks
-* Allow list and view to span languages
-
-Stats
------
-
-* Eliminate new process flood on system wake
-
-.. _release/0.10.0:
-
-Version 0.10.0
-==============
-
-Build and System Integration
-----------------------------
-
-* Changed `couchdb` script configuration options.
-* Added default.d and local.d configuration directories to load sequence.
-
-HTTP Interface
---------------
-
-* Added optional cookie-based authentication handler.
-* Added optional two-legged OAuth authentication handler.
-
-Storage Format
---------------
-
-* Moved headers, with checksums, to the end of database files for more
-  robust and faster storage.
-
-View Server
------------
-
-* Added native Erlang views for high-performance applications.
diff --git a/src/whatsnew/0.11.rst b/src/whatsnew/0.11.rst
deleted file mode 100644
index 854ab88..0000000
--- a/src/whatsnew/0.11.rst
+++ /dev/null
@@ -1,349 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _release/0.11.x:
-
-=============
-0.11.x Branch
-=============
-
-.. contents::
-    :depth: 1
-    :local:
-
-.. _release/0.11.x/upgrade:
-
-Upgrade Notes
-=============
-
-.. warning::
-    :ref:`release/0.11.2` contains important security fixes. Previous `0.11.x`
-    releases are not recommended for regular usage.
-
-Changes Between 0.11.0 and 0.11.1
----------------------------------
-
-- ``_log`` and ``_temp_views`` are now admin-only resources.
-- ``_bulk_docs`` now requires a valid `Content-Type` header of
-  ``application/json``.
-- `JSONP` is disabled by default. An .ini option was added to selectively
-  enable it.
-- The ``key``, ``startkey`` and ``endkey`` properties of the request object
-  passed to :ref:`list <listfun>` and :ref:`show <showfun>` functions now
-  contain JSON objects representing the URL encoded string values in the query
-  string. Previously, these properties contained strings which needed to be
-  converted to JSON before using.
-
-Changes Between 0.10.x and 0.11.0
----------------------------------
-
-show, list, update and validation functions
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-The ``req`` argument to show, list, update and validation functions now contains
-a ``method`` member holding the HTTP method of the current request.
-Previously, this member was called ``verb``. ``method`` follows :rfc:`2616`
-(HTTP 1.1) more closely.
-
-_admins -> _security
-^^^^^^^^^^^^^^^^^^^^
-
-The `/db/_admins` handler has been removed and replaced with a
-:ref:`/db/_security <api/db/security>` object. Any existing `_admins` will be
-dropped and need to be added to the security object again. The reason for this
-is that the old system made no distinction between names and roles, while the
-new one does, so there is no way to automatically upgrade the old admins list.
-
-The security object has 2 special fields, ``admins`` and ``readers``, which
-contain lists of names and roles which are admins or readers on that database.
-Anything else may be stored in other fields on the security object. The entire
-object is made available to validation functions.
-
-json2.js
-^^^^^^^^
-
-JSON handling in the query server has been upgraded to use `json2.js`_.
-This allows us to use faster native JSON serialization when it is available.
-
-In previous versions, attempts to serialize undefined would throw an exception,
-causing the doc that emitted undefined to be dropped from the view index.
-The new behavior is to serialize undefined as null. Applications depending on
-the old behavior will need to explicitly check for undefined.
-
-Another change is that E4X's XML objects will not automatically be
-stringified. XML users will need to call ``my_xml_object.toXMLString()``
-to return a string value. :commit:`8d3b7ab3`
-
-.. _json2.js: https://github.com/douglascrockford/JSON-js/blob/master/json2.js
-
-WWW-Authenticate
-^^^^^^^^^^^^^^^^
-
-The default configuration has been changed to avoid causing basic-auth popups
-which result from sending the WWW-Authenticate header. To enable basic-auth
-popups, uncomment the config option ``httpd/WWW-Authenticate`` line in
-`local.ini`.
-
-Query server line protocol
-^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-The query server line protocol has changed for all functions except
-:ref:`map <qs/map_doc>`, :ref:`reduce <qs/reduce>`, and
-:ref:`rereduce <qs/rereduce>`. This allows us to cache the entire design
-document in the query server process, which results in faster performance for
-common operations. It also gives more flexibility to query server
-implementers and shouldn't require major changes in the future when adding
-new query server features.
-
-UTF8 JSON
-^^^^^^^^^
-
-JSON request bodies are validated for proper UTF-8 before saving, instead of
-waiting to fail on subsequent read requests.
-
-_changes line format
-^^^^^^^^^^^^^^^^^^^^
-
-Continuous changes are now newline delimited, instead of having each line
-followed by a comma.
-
-.. _release/0.11.2:
-
-Version 0.11.2
-==============
-
-Authentication
---------------
-
-* User documents can now be deleted by admins or the user.
-
-Futon
------
-
-* Add some Futon files that were missing from the Makefile.
-
-HTTP Interface
---------------
-
-* Better error messages on invalid URL requests.
-
-Replicator
-----------
-
-* Fix bug when pushing design docs by non-admins, which was hanging the
-  replicator for no good reason.
-* Fix bug when pulling design documents from a source that requires
-  basic-auth.
-
-Security
---------
-
-* Avoid potential DoS attack by guarding all creation of atoms.
-* Fixed :ref:`cve/2010-2234`.
-
-.. _release/0.11.1:
-
-Version 0.11.1
-==============
-
-Build and System Integration
-----------------------------
-
-* Output of `couchdb --help` has been improved.
-* Fixed compatibility with the Erlang R14 series.
-* Fixed warnings on Linux builds.
-* Fixed build error when aclocal needs to be called during the build.
-* Require ICU 4.3.1.
-* Fixed compatibility with Solaris.
-
-Configuration System
---------------------
-
-* Fixed timeout with large .ini files.
-
-Futon
------
-
-* Use "expando links" for over-long document values in Futon.
-* Added continuous replication option.
-* Added option to replicate test results anonymously to a community
-  CouchDB instance.
-* Allow creation and deletion of config entries.
-* Fixed display issues with doc ids that have escaped characters.
-* Fixed various UI issues.
-
-HTTP Interface
---------------
-
-* Mask passwords in active tasks and logging.
-* Update mochijson2 to allow output of BigNums not in float form.
-* Added support for X-HTTP-METHOD-OVERRIDE.
-* Better error message for database names.
-* Disable jsonp by default.
-* Accept gzip encoded standalone attachments.
-* Made max_concurrent_connections configurable.
-* Made changes API more robust.
-* Send newly generated document rev to callers of an update function.
-
-JavaScript Clients
-------------------
-
-* Added tests for couch.js and jquery.couch.js
-* Added changes handler to jquery.couch.js.
-* Added cache busting to jquery.couch.js if the user agent is msie.
-* Added support for multi-document-fetch (via _all_docs) to jquery.couch.js.
-* Added attachment versioning to jquery.couch.js.
-* Added option to control ensure_full_commit to jquery.couch.js.
-* Added list functionality to jquery.couch.js.
-* Fixed issues where bulkSave() wasn't sending a POST body.
-
-Log System
-----------
-
-* Log HEAD requests as HEAD, not GET.
-* Keep massive JSON blobs out of the error log.
-* Fixed a timeout issue.
-
-Replication System
-------------------
-
-* Refactored various internal APIs related to attachment streaming.
-* Fixed hanging replication.
-* Fixed keepalive issue.
-
-Security
---------
-
-* Added authentication redirect URL to log in clients.
-* Fixed query parameter encoding issue in oauth.js.
-* Made authentication timeout configurable.
-* Temporary views are now admin-only resources.
-
-Storage System
---------------
-
-* Don't require a revpos for attachment stubs.
-* Added checking to ensure when a revpos is sent with an attachment stub,
-  it's correct.
-* Make file deletions async to avoid pauses during compaction and db
-  deletion.
-* Fixed wrong offset when writing headers and converting them to blocks;
-  only triggered when the header is larger than 4k.
-* Preserve _revs_limit and instance_start_time after compaction.
-
-Test Suite
-----------
-
-* Made the test suite overall more reliable.
-
-View Server
------------
-
-* Provide a UUID to update functions (and all other functions) that they can
-  use to create new docs.
-* Upgrade CommonJS modules support to 1.1.1.
-* Fixed erlang filter funs and normalize filter fun API.
-* Fixed hang in view shutdown.
-
-URL Rewriter & Vhosts
----------------------
-
-* Allow more complex keys in rewriter.
-* Allow global rewrites so system defaults are available in vhosts.
-* Allow isolation of databases with vhosts.
-* Fix issue with passing variables to query parameters.
-
-.. _release/0.11.0:
-
-Version 0.11.0
-==============
-
-Build and System Integration
-----------------------------
-
-* Updated and improved source documentation.
-* Fixed distribution preparation for building on Mac OS X.
-* Added support for building a Windows installer as part of 'make dist'.
-* Bug fix for building couch.app's module list.
-* ETap tests are now run during make distcheck. This included a number of
-  updates to the build system to properly support VPATH builds.
-* Gavin McDonald set up a build-bot instance. More info can be found at
-  http://ci.apache.org/buildbot.html
-
-Futon
------
-
-* Added a button for view compaction.
-* JSON strings are now displayed as-is in the document view, without the
-  escaping of new-lines and quotes. That dramatically improves readability of
-  multi-line strings.
-* Same goes for editing of JSON string values. When a change to a field value is
-  submitted, and the value is not valid JSON it is assumed to be a string. This
-  improves editing of multi-line strings a lot.
-* Hitting tab in textareas no longer moves focus to the next form field, but
-  simply inserts a tab character at the current caret position.
-* Fixed some font declarations.
-
-HTTP Interface
---------------
-
-* Provide Content-MD5 header support for attachments.
-* Added URL Rewriter handler.
-* Added virtual host handling.
-
-Replication
------------
-
-* Added option to implicitly create replication target databases.
-* Avoid leaking file descriptors on automatic replication restarts.
-* Added option to replicate a list of documents by id.
-* Allow continuous replication to be cancelled.
-
-Runtime Statistics
-------------------
-
-* Statistics are now calculated for a moving window instead of non-overlapping
-  timeframes.
-* Fixed a problem with statistics timers and system sleep.
-* Moved statistic names to a term file in the priv directory.
-
-Security
---------
-
-* Fixed CVE-2010-0009: Apache CouchDB Timing Attack Vulnerability.
-* Added default cookie-authentication and users database.
-* Added Futon user interface for user signup and login.
-* Added per-database reader access control lists.
-* Added per-database security object for configuration data in validation
-  functions.
-* Added proxy authentication handler.
-
-Storage System
---------------
-
-* Added batching of multiple update requests, to improve throughput with many
-  writers. Removed the now-redundant couch_batch_save module.
-* Added configurable compression of attachments.
-
-View Server
------------
-
-* Added optional 'raw' binary collation for faster view builds where Unicode
-  collation is not important.
-* Improved view index build time by reducing ICU collation callouts.
-* Improved view information objects.
-* Bug fix for partial updates during view builds.
-* Move query server to a design-doc based protocol.
-* Use json2.js for JSON serialization for compatibility with native JSON.
-* Major refactoring of couchjs to lay the groundwork for disabling cURL
-  support. The new HTTP interaction acts like a synchronous XHR. Example usage
-  of the new system is in the JavaScript CLI test runner.
diff --git a/src/whatsnew/0.8.rst b/src/whatsnew/0.8.rst
deleted file mode 100644
index 9443788..0000000
--- a/src/whatsnew/0.8.rst
+++ /dev/null
@@ -1,175 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _release/0.8.x:
-
-============
-0.8.x Branch
-============
-
-.. contents::
-    :depth: 1
-    :local:
-
-.. _release/0.8.1:
-
-Version 0.8.1-incubating
-========================
-
-Build and System Integration
-----------------------------
-
-* The `couchdb` script no longer uses `awk` for configuration checks as this
-  was causing portability problems.
-* Updated `sudo` example in `README` to use the `-i` option, this fixes
-  problems when invoking from a directory the `couchdb` user cannot access.
-
-Database Core
--------------
-
-* Fix for replication problems where the write queues can get backed up if the
-  writes aren't happening fast enough to keep up with the reads. For a large
-  replication, this can exhaust memory and crash, or slow down the machine
-  dramatically. The fix keeps only one document in the write queue at a time.
-* Fix for databases sometimes incorrectly reporting that they contain 0
-  documents after compaction.
-* CouchDB now uses ibrowse instead of inets for its internal HTTP client
-  implementation. This means better replication stability.
-
-Futon
------
-
-* The view selector dropdown should now work in Opera and Internet Explorer
-  even when it includes optgroups for design documents. (:issue:`81`)
-
-JavaScript View Server
-----------------------
-
-* Sealing of documents has been disabled due to an incompatibility with
-  SpiderMonkey 1.9.
-* Improve error handling for undefined values emitted by map functions.
-  (:issue:`83`)
-
-HTTP Interface
---------------
-
-* Fix for chunked responses where chunks were always being split into multiple
-  TCP packets, which caused problems with the test suite under Safari, and in
-  some other cases.
-* Fix for an invalid JSON response body being returned for some kinds of
-  views. (:issue:`84`)
-* Fix for connections not getting closed after rejecting a chunked request.
-  (:issue:`55`)
-* CouchDB can now be bound to IPv6 addresses.
-* The HTTP `Server` header now contains the versions of CouchDB and Erlang.
-
-.. _release/0.8.0:
-
-Version 0.8.0-incubating
-========================
-
-Build and System Integration
-----------------------------
-
-* CouchDB can automatically respawn following a server crash.
-* Database server no longer refuses to start with a stale PID file.
-* System logrotate configuration provided.
-* Improved handling of ICU shared libraries.
-* The `couchdb` script now automatically enables SMP support in Erlang.
-* The `couchdb` and `couchjs` scripts have been improved for portability.
-* The build and system integration have been improved for portability.
-
-Database Core
--------------
-
-* The view engine has been completely decoupled from the storage engine. Index
-  data is now stored in separate files, and the format of the main database
-  file has changed.
-* Databases can now be compacted to reclaim space used for deleted documents
-  and old document revisions.
-* Support for incremental map/reduce views has been added.
-* To support map/reduce, the structure of design documents has changed. View
-  values are now JSON objects containing at least a `map` member, and
-  optionally a `reduce` member.
-* View servers are now identified by name (for example `javascript`) instead of
-  by media type.
-* Automatically generated document IDs are now based on proper UUID generation
-  using the crypto module.
-* The field `content-type` in the JSON representation of attachments has been
-  renamed to `content_type` (underscore).
-
-Futon
------
-
-* When adding a field to a document, Futon now just adds a field with an
-  autogenerated name instead of prompting for the name with a dialog. The name
-  is automatically put into edit mode so that it can be changed immediately.
-* Fields are now sorted alphabetically by name when a document is displayed.
-* Futon can be used to create and update permanent views.
-* The maximum number of rows to display per page on the database page can now
-  be adjusted.
-* Futon now uses the XMLHTTPRequest API asynchronously to communicate with the
-  CouchDB HTTP server, so that most operations no longer block the browser.
-* View results sorting can now be switched between ascending and descending by
-  clicking on the `Key` column header.
-* Fixed a bug where documents that contained a `@` character could not be
-  viewed. (:issue:`12`)
-* The database page now provides a `Compact` button to trigger database
-  compaction. (:issue:`38`)
-* Fixed potential double encoding of document IDs and other URI segments in
-  many instances. (:issue:`39`)
-* Improved display of attachments.
-* The JavaScript Shell has been removed due to unresolved licensing issues.
-
-JavaScript View Server
-----------------------
-
-* SpiderMonkey is no longer included with CouchDB, but rather treated as a
-  normal external dependency. A simple C program (`_couchjs`) is provided that
-  links against an existing SpiderMonkey installation and uses the interpreter
-  embedding API.
-* View functions using the default JavaScript view server can now do logging
-  using the global `log(message)` function. Log messages are directed into the
-  CouchDB log at `INFO` level. (:issue:`59`)
-* The global `map(key, value)` function made available to view code has been
-  renamed to `emit(key, value)`.
-* Fixed handling of exceptions raised by view functions.
-
-HTTP Interface
---------------
-
-* CouchDB now uses MochiWeb instead of inets for the HTTP server
-  implementation. Among other things, this means that the extra configuration
-  files needed for inets (such as `couch_httpd.conf`) are no longer used.
-* The HTTP interface now completely supports the `HEAD` method. (:issue:`3`)
-* Improved compliance of `Etag` handling with the HTTP specification.
-  (:issue:`13`)
-* Etags are no longer included in responses to document `GET` requests that
-  include query string parameters causing the JSON response to change without
-  the revision or the URI having changed.
-* The bulk document update API has changed slightly on both the request and the
-  response side. In addition, bulk updates are now atomic.
-* CouchDB now uses `TCP_NODELAY` to fix performance problems with persistent
-  connections on some platforms due to nagling.
-* Including a `?descending=false` query string parameter in requests to views
-  no longer raises an error.
-* Requests to unknown top-level reserved URLs (anything with a leading
-  underscore) now return a `unknown_private_path` error instead of the
-  confusing `illegal_database_name`.
-* The Temporary view handling now expects a JSON request body, where the JSON
-  is an object with at least a `map` member, and optional `reduce` and
-  `language` members.
-* Temporary views no longer determine the view server based on the Content-Type
-  header of the `POST` request, but rather by looking for a `language` member
-  in the JSON body of the request.
-* The status code of responses to `DELETE` requests is now 200 to reflect
-  that the deletion is performed synchronously.
diff --git a/src/whatsnew/0.9.rst b/src/whatsnew/0.9.rst
deleted file mode 100644
index 15d4eaf..0000000
--- a/src/whatsnew/0.9.rst
+++ /dev/null
@@ -1,263 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _release/0.9.x:
-
-============
-0.9.x Branch
-============
-
-.. contents::
-    :depth: 1
-    :local:
-
-.. _release/0.9.x/upgrade:
-
-Upgrade Notes
-=============
-
-Response to Bulk Creation/Updates
----------------------------------
-
-The response to a bulk creation / update now looks like this
-
-.. code-block:: javascript
-
-    [
-        {"id": "0", "rev": "3682408536"},
-        {"id": "1", "rev": "3206753266"},
-        {"id": "2", "error": "conflict", "reason": "Document update conflict."}
-    ]
-
-Database File Format
---------------------
-
-The database file format has changed. CouchDB itself does not yet provide any
-tools for migrating your data. In the meantime, you can use third-party scripts
-to deal with the migration, such as the dump/load tools that come with the
-development version (trunk) of `couchdb-python`_.
-
-.. _couchdb-python: http://code.google.com/p/couchdb-python/
-
-Renamed "count" to "limit"
---------------------------
-
-The view query API has been changed: ``count`` has become ``limit``.
-This is a better description of what the parameter does, and should be a simple
-update in any client code.
-
-Moved View URLs
----------------
-
-The view URLs have been moved to design document resources. This means that
-paths that used to be like::
-
-    http://hostname:5984/mydb/_view/designname/viewname?limit=10
-
-will now look like::
-
-    http://hostname:5984/mydb/_design/designname/_view/viewname?limit=10
-
-See the `REST, Hypermedia, and CouchApps`_  thread on dev for details.
-
-.. _REST, Hypermedia, and CouchApps: http://mail-archives.apache.org/mod_mbox/couchdb-dev/200902.mbox/%3Ce282921e0902242116n2cd207c4x7a9d0feced3f10d9@mail.gmail.com%3E
-
-Attachments
------------
-
-Names of attachments are no longer allowed to start with an underscore.
-
-Error Codes
------------
-
-Some refinements have been made to error handling. CouchDB will send 400 instead
-of 500 on invalid query parameters. Most notably, document update conflicts now
-respond with `409 Conflict` instead of `412 Precondition Failed`. The error code
-for when attempting to create a database that already exists is now 412
-instead of 409.
-
-ini file format
----------------
-
-CouchDB 0.9 changes sections and configuration variable names in configuration
-files. Old .ini files won't work. Also note that CouchDB now ships with two
-.ini files: where 0.8 used `couch.ini`, there are now `default.ini` and
-`local.ini`. `default.ini` contains CouchDB's standard configuration values;
-`local.ini` is meant for local changes. `local.ini` is not overwritten on
-CouchDB updates, so your edits are safe. In addition, the new runtime
-configuration system persists changes to the configuration in `local.ini`.
-
-.. _release/0.9.2:
-
-Version 0.9.2
-=============
-
-Build and System Integration
-----------------------------
-
-* Remove branch callbacks to allow building couchjs against newer versions of
-  Spidermonkey.
-
-Replication
------------
-
-* Fix replication with 0.10 servers initiated by a 0.9 server (:issue:`559`).
-
-.. _release/0.9.1:
-
-Version 0.9.1
-=============
-
-Build and System Integration
-----------------------------
-
-* PID file directory is now created by the SysV/BSD daemon scripts.
-* Fixed the environment variables shown by the configure script.
-* Fixed the build instructions shown by the configure script.
-* Updated ownership and permission advice in `README` for better security.
-
-Configuration and stats system
-------------------------------
-
-* Corrected missing configuration file error message.
-* Fixed incorrect recording of request time.
-
-Database Core
--------------
-
-* Document validation for underscore prefixed variables.
-* Made attachment storage less sparse.
-* Fixed problems when a database with delayed commits pending is considered
-  idle, and subject to losing changes when shutdown. (:issue:`334`)
-
-External Handlers
------------------
-
-* Fix POST requests.
-
-Futon
------
-
-* Redirect when loading a deleted view URI from the cookie.
-
-HTTP Interface
---------------
-
-* Attachment requests respect the "rev" query-string parameter.
-
-JavaScript View Server
-----------------------
-
-* More useful JavaScript error messages.
-
-Replication
------------
-
-* Added support for Unicode characters transmitted as UTF-16 surrogate pairs.
-* URL-encode attachment names when necessary.
-* Pull specific revisions of an attachment, instead of just the latest one.
-* Work around a rare chunk-merging problem in ibrowse.
-* Work with documents containing Unicode characters outside the Basic
-  Multilingual Plane.
-
-.. _release/0.9.0:
-
-Version 0.9.0
-=============
-
-Build and System Integration
-----------------------------
-
-* The `couchdb` script now supports system chainable configuration files.
-* The Mac OS X daemon script now redirects STDOUT and STDERR like SysV/BSD.
-* The build and system integration have been improved for portability.
-* Added COUCHDB_OPTIONS to etc/default/couchdb file.
-* Remove COUCHDB_INI_FILE and COUCHDB_PID_FILE from etc/default/couchdb file.
-* Updated `configure.ac` to manually link `libm` for portability.
-* Updated `configure.ac` to extend default library paths.
-* Removed inets configuration files.
-* Added command line test runner.
-* Created dev target for make.
-
-Configuration and stats system
-------------------------------
-
-* Separate default and local configuration files.
-* HTTP interface for configuration changes.
-* Statistics framework with HTTP query API.
-
-Database Core
--------------
-
-* Faster B-tree implementation.
-* Changed internal JSON term format.
-* Improvements to Erlang VM interactions under heavy load.
-* User context and administrator role.
-* Update validations with design document validation functions.
-* Document purge functionality.
-* Ref-counting for database file handles.
-
-Design Document Resource Paths
-------------------------------
-
-* Added httpd_design_handlers config section.
-* Moved _view to httpd_design_handlers.
-* Added ability to render documents as non-JSON content-types with _show and
-  _list functions, which are also httpd_design_handlers.
-
-Futon Utility Client
---------------------
-
-* Added pagination to the database listing page.
-* Implemented attachment uploading from the document page.
-* Added page that shows the current configuration, and allows modification of
-  option values.
-* Added a JSON "source view" for document display.
-* JSON data in view rows is now syntax highlighted.
-* Removed the use of an iframe for better integration with browser history and
-  bookmarking.
-* Full database listing in the sidebar has been replaced by a short list of
-  recent databases.
-* The view editor now allows selection of the view language if there is more
-  than one configured.
-* Added links to go to the raw view or document URI.
-* Added status page to display currently running tasks in CouchDB.
-* JavaScript test suite split into multiple files.
-* Pagination for reduce views.
-
-HTTP Interface
---------------
-
-* Added client side UUIDs for idempotent document creation
-* HTTP COPY for documents
-* Streaming of chunked attachment PUTs to disk
-* Remove negative count feature
-* Add include_docs option for view queries
-* Add multi-key view post for views
-* Query parameter validation
-* Use stale=ok to request potentially cached view index
-* External query handler module for full-text or other indexers.
-* Etags for attachments, views, shows and lists
-* Show and list functions for rendering documents and views as developer
-  controlled content-types.
-* Attachment names may use slashes to allow uploading of nested directories
-  (useful for static web hosting).
-* Option for a view to run over design documents.
-* Added newline to JSON responses. Closes bike-shed.
-
-Replication
------------
-
-* Using ibrowse.
-* Checkpoint replications so failures are less expensive.
-* Automatic retry of failed replications.
-* Stream attachments in pull-replication.
diff --git a/src/whatsnew/1.0.rst b/src/whatsnew/1.0.rst
deleted file mode 100644
index 5355d9a..0000000
--- a/src/whatsnew/1.0.rst
+++ /dev/null
@@ -1,269 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _release/1.0.x:
-
-============
-1.0.x Branch
-============
-
-.. contents::
-    :depth: 1
-    :local:
-
-.. _release/1.0.x/upgrade:
-
-Upgrade Notes
-=============
-
-Note: to replicate with a 1.0 CouchDB instance, you must first upgrade your
-current CouchDB in place to 1.0 or 0.11.1 -- backporting so that 0.10.x can
-replicate to 1.0 wouldn't be that hard. All that is required is patching the
-replicator to use the ``application/json`` content type.
-
-- ``_log`` and ``_temp_views`` are now admin-only resources.
-- ``_bulk_docs`` now requires a valid `Content-Type` header of
-  ``application/json``.
-- `JSONP` is disabled by default. An .ini option was added to selectively
-  enable it.
-- The ``key``, ``startkey`` and ``endkey`` properties of the request object
-  passed to :ref:`list <listfun>` and :ref:`show <showfun>` functions now
-  contain JSON objects representing the URL encoded string values in the query
-  string. Previously, these properties contained strings which needed to be
-  converted to JSON before using.
-
-.. warning::
-    :ref:`release/1.0.4` contains important security fixes. Previous `1.0.x`
-    releases are not recommended for regular usage.
-
-.. _release/1.0.4:
-
-Version 1.0.4
-=============
-
-HTTP Interface
---------------
-
-* Fix missing revisions in ``_changes?style=all_docs``.
-* Fix validation of attachment names.
-
-Log System
-----------
-
-* Fix file descriptor leak in ``_log``.
-
-Replicator
-----------
-
-* Fix a race condition where replications can go stale.
-
-Security
---------
-
-* Fixed :ref:`cve/2012-5641`
-* Fixed :ref:`cve/2012-5649`
-* Fixed :ref:`cve/2012-5650`
-
-View System
------------
-
-* Avoid invalidating view indexes when running out of file descriptors.
-
-.. _release/1.0.3:
-
-Version 1.0.3
-=============
-
-General
--------
-
-* Fixed compatibility issues with Erlang R14B02.
-
-Etap Test Suite
----------------
-
-* Etap tests no longer require use of port 5984. They now use a randomly
-  selected port so they won't clash with a running CouchDB.
-
-Futon
------
-
-* Made compatible with jQuery 1.5.x.
-
-HTTP Interface
---------------
-
-* Fix bug that allows invalid UTF-8 after valid escapes.
-* The query parameter `include_docs` now honors the parameter `conflicts`.
-  This applies to queries against map views, _all_docs and _changes.
-* Added support for inclusive_end with reduce views.
-
-Replicator
-----------
-
-* Enabled replication over IPv6.
-* Fixed crashes in continuous and filtered changes feeds.
-* Fixed error when restarting replications in OTP R14B02.
-* Upgrade ibrowse to version 2.2.0.
-* Fixed bug when using a filter and a limit of 1.
-
-Security
---------
-
-* Fixed OAuth signature computation in OTP R14B02.
-* Handle passwords with : in them.
-
-Storage System
---------------
-
-* More performant queries against _changes and _all_docs when using the
-  `include_docs` parameter.
-
-Windows
--------
-
-* Windows builds now require ICU >= 4.4.0 and Erlang >= R14B03. See
-  :issue:`1152`, and :issue:`963` + OTP-9139 for more information.
-
-.. _release/1.0.2:
-
-Version 1.0.2
-=============
-
-Futon
------
-
-* Make test suite work with Safari and Chrome.
-* Fixed animated progress spinner.
-* Fix raw view document link due to overzealous URI encoding.
-* Spell javascript correctly in loadScript(uri).
-
-HTTP Interface
---------------
-
-* Allow reduce=false parameter in map-only views.
-* Fix parsing of Accept headers.
-* Fix for multipart GET APIs when an attachment was created during a
-  local-local replication. See :issue:`1022` for details.
-
-Log System
-----------
-
-* Reduce lengthy stack traces.
-* Allow logging of native <xml> types.
-
-Replicator
-----------
-
-* Updated ibrowse library to 2.1.2 fixing numerous replication issues.
-* Make sure that the replicator respects HTTP settings defined in the config.
-* Fix error when the ibrowse connection closes unexpectedly.
-* Fix authenticated replication (with HTTP basic auth) of design documents
-  with attachments.
-* Various fixes to make replication more resilient for edge-cases.
-
-Storage System
---------------
-
-* Fix leaking file handles after compacting databases and views.
-* Fix databases forgetting their validation function after compaction.
-* Fix occasional timeout errors after successfully compacting large databases.
-* Fix occasional error when writing to a database that has just been compacted.
-* Fix occasional timeout errors on systems with slow or heavily loaded IO.
-* Fix for OOME when compactions include documents with many conflicts.
-* Fix for missing attachment compression when MIME types included parameters.
-* Preserve purge metadata during compaction to avoid spurious view rebuilds.
-* Fix spurious conflicts introduced when uploading an attachment after
-  a doc has been in a conflict. See :issue:`902` for details.
-* Fix for frequently edited documents in multi-master deployments being
-  duplicated in _changes and _all_docs.  See :issue:`968` for details on how
-  to repair.
-* Significantly higher read and write throughput against database and
-  view index files.
-
-View Server
------------
-
-* Don't trigger view updates when requesting `_design/doc/_info`.
-* Fix for circular references in CommonJS requires.
-* Made isArray() function available to functions executed in the query server.
-* Documents are now sealed before being passed to map functions.
-* Force view compaction failure when duplicated document data exists. When
-  this error is seen in the logs, users should rebuild their views from
-  scratch to fix the issue. See :issue:`999` for details.
-
-.. _release/1.0.1:
-
-Version 1.0.1
-=============
-
-Authentication
---------------
-
-* Enable basic-auth popup when required to access the server, to prevent
-  people from getting locked out.
-
-Build and System Integration
-----------------------------
-
-* Included additional source files for distribution.
-
-Futon
------
-
-* User interface element for querying stale (cached) views.
-
-HTTP Interface
---------------
-
-* Expose `committed_update_seq` for monitoring purposes.
-* Show fields saved along with _deleted=true. Allows for auditing of deletes.
-* More robust Accept-header detection.
-
-Replicator
-----------
-
-* Added support for replication via an HTTP/HTTPS proxy.
-* Fix pull replication of attachments from 0.11 to 1.0.x.
-* Make the _changes feed work with non-integer seqnums.
-
-Storage System
---------------
-
-* Fix data corruption bug :issue:`844`. Please see
-  http://couchdb.apache.org/notice/1.0.1.html for details.
-
-.. _release/1.0.0:
-
-Version 1.0.0
-=============
-
-Security
---------
-
-* Added authentication caching, to avoid repeated opening and closing of the
-  users database for each request requiring authentication.
-
-Storage System
---------------
-
-* Small optimization for reordering result lists.
-* More efficient header commits.
-* Use O_APPEND to save lseeks.
-* Faster implementation of pread_iolist(). Further improves performance on
-  concurrent reads.
-
-View Server
------------
-
-* Faster default view collation.
-* Added option to include update_seq in view responses.
diff --git a/src/whatsnew/1.1.rst b/src/whatsnew/1.1.rst
deleted file mode 100644
index 696bb8c..0000000
--- a/src/whatsnew/1.1.rst
+++ /dev/null
@@ -1,170 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _release/1.1.x:
-
-============
-1.1.x Branch
-============
-
-.. contents::
-   :depth: 1
-   :local:
-
-.. _release/1.1.x/upgrade:
-
-Upgrade Notes
-=============
-
-.. warning::
-    :ref:`release/1.1.2` contains important security fixes. Previous `1.1.x`
-    releases are not recommended for regular usage.
-
-.. _release/1.1.2:
-
-Version 1.1.2
-=============
-
-Build System
-------------
-
-* Don't `ln` the `couchjs` install target on Windows
-* Remove ICU version dependency on Windows.
-* Improve SpiderMonkey version detection.
-
-HTTP Interface
---------------
-
-* ETag of attachment changes only when the attachment changes, not
-  the document.
-* Fix retrieval of headers larger than 4k.
-* Allow OPTIONS HTTP method for list requests.
-* Don't attempt to encode invalid json.
-
-Log System
-----------
-
-* Improvements to log messages for file-related errors.
-
-Replicator
-----------
-
-* Fix pull replication of documents with many revisions.
-* Fix replication from an HTTP source to an HTTP target.
-
-Security
---------
-
-* Fixed :ref:`cve/2012-5641`
-* Fixed :ref:`cve/2012-5649`
-* Fixed :ref:`cve/2012-5650`
-
-View Server
------------
-
-* Avoid invalidating view indexes when running out of file descriptors.
-
-.. _release/1.1.1:
-
-Version 1.1.1
-=============
-
-* Support SpiderMonkey 1.8.5
-* Add configurable maximum to the number of bytes returned by _log.
-* Allow CommonJS modules to be an empty string.
-* Bump minimum Erlang version to R13B02.
-* Do not run deleted validate_doc_update functions.
-* ETags for views include current sequence if include_docs=true.
-* Fix bug where duplicates can appear in _changes feed.
-* Fix bug where update handlers break after conflict resolution.
-* Fix bug with _replicator where include "filter" could crash couch.
-* Fix crashes when compacting large views.
-* Fix file descriptor leak in _log
-* Fix missing revisions in _changes?style=all_docs.
-* Improve handling of compaction at max_dbs_open limit.
-* JSONP responses now send "text/javascript" for Content-Type.
-* Link to ICU 4.2 on Windows.
-* Permit forward slashes in path to update functions.
-* Reap couchjs processes that hit reduce_overflow error.
-* Status code can be specified in update handlers.
-* Support provides() in show functions.
-* _view_cleanup when ddoc has no views now removes all index files.
-* max_replication_retry_count now supports "infinity".
-* Fix replication crash when source database has a document with empty ID.
-* Fix deadlock when assigning couchjs processes to serve requests.
-* Fixes to the document multipart PUT API.
-* Fixes regarding file descriptor leaks for databases with views.
-
-.. _release/1.1.0:
-
-Version 1.1.0
-=============
-
-.. note::
-    All CHANGES for 1.0.2 and 1.0.3 also apply to 1.1.0.
-
-Externals
----------
-
-* Added OS Process module to manage daemons outside of CouchDB.
-* Added HTTP Proxy handler for more scalable externals.
-
-Futon
------
-
-* Added a "change password"-feature to Futon.
-
-HTTP Interface
---------------
-
-* Native SSL support.
-* Added support for HTTP range requests for attachments.
-* Added built-in filters for `_changes`: `_doc_ids` and `_design`.
-* Added configuration option for TCP_NODELAY aka "Nagle".
-* Allow POSTing arguments to `_changes`.
-* Allow `keys` parameter for GET requests to views.
-* Allow wildcards in vhosts definitions.
-* More granular ETag support for views.
-* More flexible URL rewriter.
-* Added support for recognizing "Q values" and media parameters in
-  HTTP Accept headers.
-* Validate doc ids that come from a PUT to a URL.
-
-Replicator
-----------
-
-* Added `_replicator` database to manage replications.
-* Fixed issues when an endpoint is a remote database accessible via SSL.
-* Added support for continuous by-doc-IDs replication.
-* Fix issue where revision info was omitted when replicating attachments.
-* Integrity of attachment replication is now verified by MD5.
-
-Storage System
---------------
-
-* Multiple micro-optimizations when reading data.
-
-URL Rewriter & Vhosts
----------------------
-
-* Fix for variable substitution
-
-View Server
------------
-
-* Added CommonJS support to map functions.
-* Added `stale=update_after` query option that triggers a view update after
-  returning a `stale=ok` response.
-* Warn about empty result caused by `startkey` and `endkey` limiting.
-* Built-in reduce function `_sum` now accepts lists of integers as input.
-* Added view query aliases start_key, end_key, start_key_doc_id and
-  end_key_doc_id.
diff --git a/src/whatsnew/1.2.rst b/src/whatsnew/1.2.rst
deleted file mode 100644
index e591216..0000000
--- a/src/whatsnew/1.2.rst
+++ /dev/null
@@ -1,235 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _release/1.2.x:
-
-============
-1.2.x Branch
-============
-
-.. contents::
-    :depth: 1
-    :local:
-
-.. _release/1.2.x/upgrade:
-
-Upgrade Notes
-=============
-
-.. warning::
-    This version drops support for the database format that was introduced in
-    version 0.9.0. Compact your older databases (that have not been compacted
-    for a long time) before upgrading, or they will become inaccessible.
-
-.. warning::
-    :ref:`release/1.2.1` contains important security fixes. Previous `1.2.x`
-    releases are not recommended for regular usage.
-
-Security changes
-----------------
-
-The interfaces to the ``_users`` and ``_replicator`` databases have been
-changed so that non-administrator users can see less information:
-
-* In the ``_users`` database:
-
-  * User documents can now only be read by the respective users, as well as
-    administrators. Other users cannot read these documents.
-  * Views can only be defined and queried by administrator users.
-  * The ``_changes`` feed can only be queried by administrator users.
-
-* In the ``_replicator`` database:
-
-  * Documents now have a forced ``owner`` field that corresponds to the
-    authenticated user that created them.
-  * Non-owner users will not see confidential information like passwords or
-    OAuth tokens in replication documents; they can still see the other
-    contents of those documents. Administrators can see everything.
-  * Views can only be defined and queried by administrators.
-
-Database Compression
---------------------
-
-The new optional (but enabled by default) compression of disk files requires
-an upgrade of the on-disk format (5 -> 6) which occurs on creation for new
-databases and views, and on compaction for existing files. This format is not
-supported in previous releases, so rollback would require replication to the
-previous CouchDB release or restoring from backup.
-
-Compression can be disabled by setting ``compression = none`` in your
-``local.ini`` ``[couchdb]`` section, but the on-disk format will still be
-upgraded.
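-
-As a sketch (the exact configuration file location varies by installation),
-the relevant ``local.ini`` fragment to disable compression would look like:
-
-.. code-block:: ini
-
-    [couchdb]
-    ; disables file compression; note the on-disk format is still upgraded
-    compression = none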
-
-.. _release/1.2.2:
-
-Version 1.2.2
-=============
-
-Build System
-------------
-
-* Fixed issue in `couchdb` script where stopped status returns before process
-  exits.
-
-HTTP Interface
---------------
-
-* Reset rewrite counter on new request, avoiding unnecessary request failures
-  due to bogus rewrite limit reports.
-
-.. _release/1.2.1:
-
-Version 1.2.1
-=============
-
-Build System
-------------
-
-* Fix couchdb start script.
-* Win: fix linker invocations.
-
-Futon
------
-
-* Disable buttons that aren't available for the logged-in user.
-
-HTTP Interface
---------------
-
-* No longer rewrites the ``X-CouchDB-Requested-Path`` during recursive
-  calls to the rewriter.
-* Limit recursion depth in the URL rewriter. Defaults to a maximum
-  of 100 invocations but is configurable.
-
-Security
---------
-
-* Fixed :ref:`cve/2012-5641`
-* Fixed :ref:`cve/2012-5649`
-* Fixed :ref:`cve/2012-5650`
-
-Replication
------------
-
-* Fix potential timeouts.
-
-View Server
------------
-
-* Change use of signals to avoid broken view groups.
-
-.. _release/1.2.0:
-
-Version 1.2.0
-=============
-
-Authentication
---------------
-
-* Fix use of OAuth with VHosts and URL rewriting.
-* OAuth secrets can now be stored in the users system database
-  as an alternative to key value pairs in the .ini configuration.
-  By default this is disabled (secrets are stored in the .ini)
-  but can be enabled via the .ini configuration key `use_users_db`
-  in the `couch_httpd_oauth` section.
-* Documents in the _users database are no longer publicly
-  readable.
-* Confidential information in the _replication database is no
-  longer publicly readable.
-* Password hashes are now calculated by CouchDB. Clients are no
-  longer required to do this manually.
-* Cookies used for authentication can be made persistent by enabling
-  the .ini configuration key `allow_persistent_cookies` in the
-  `couch_httpd_auth` section.
-
-Build System
-------------
-
-* cURL is no longer required to build CouchDB as it is only
-  used by the command line JS test runner. If cURL is available
-  when building CouchJS you can enable the HTTP bindings by
-  passing -H on the command line.
-* Temporarily made `make check` pass with R15B. A more thorough
-  fix is in the works (:issue:`1424`).
-* Fixed --with-js-include and --with-js-lib options.
-* Added --with-js-lib-name option.
-
-Futon
------
-
-* The `Status` screen (active tasks) now displays two new task status
-  fields: `Started on` and `Updated on`.
-* Futon remembers view code every time it is saved, allowing you to save
-  an edit that amounts to a revert.
-
-HTTP Interface
---------------
-
-* Added a native JSON parser.
-* The _active_tasks API now offers more granular fields. Each
-  task type is now able to expose different properties.
-* Added built-in changes feed filter `_view`.
-* Fixes to the `_changes` feed heartbeat option which caused
-  heartbeats to be missed when used with a filter. This caused
-  timeouts of continuous pull replications with a filter.
-* Properly restart the SSL socket on configuration changes.
-
-OAuth
------
-
-* Updated bundled `erlang_oauth` library to the latest version.
-
-Replicator
-----------
-
-* A new replicator implementation. It offers more performance and
-  configuration options.
-* Passing non-string values to query_params now results in a 400 Bad
-  Request error. This avoids the surprise of all parameters being
-  converted to strings internally.
-* Added optional field `since_seq` to replication objects/documents.
-  It allows bootstrapping a replication from a specific source sequence
-  number.
-* Simpler replication cancellation. In addition to the current method,
-  replications can now be canceled by specifying the replication ID
-  instead of the original replication object/document.
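-
-As a hedged example (the replication ID below is hypothetical; real IDs are
-reported by ``_active_tasks``), a cancellation by replication ID is a `POST`
-to ``_replicate`` with a body such as:
-
-.. code-block:: javascript
-
-    {
-        "replication_id": "0a81b645497e6270611ec3419767a584+continuous",
-        "cancel": true
-    }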
-
-Storage System
---------------
-
-* Added optional database and view index file compression (using Google's
-  snappy or zlib's deflate). This feature is enabled by default, but it
-  can be disabled by adapting local.ini accordingly. The on-disk format
-  is upgraded on compaction and new DB/view creation to support this.
-* Several performance improvements, most notably regarding database writes
-  and view indexing.
-* Computation of the size of the latest MVCC snapshot data and all its
-  supporting metadata, both for database and view index files. This
-  information is exposed as the `data_size` attribute in the database and
-  view group information URIs.
-* The size of the buffers used for database and view compaction is now
-  configurable.
-* Added support for automatic database and view compaction. This feature
-  is disabled by default, but it can be enabled via the .ini configuration.
-* Performance improvements for the built-in changes feed filters `_doc_ids`
-  and `_design`.
-
-View Server
------------
-
-* Add CoffeeScript (http://coffeescript.org/) as a first class view server
-  language.
-* Fixed old index file descriptor leaks after a view cleanup.
-* The requested_path property keeps the pre-rewrite path even when no VHost
-  configuration is matched.
-* Fixed incorrect reduce query results when using pagination parameters.
-* Made icu_driver work with Erlang R15B and later.
diff --git a/src/whatsnew/1.3.rst b/src/whatsnew/1.3.rst
deleted file mode 100644
index 589dfdb..0000000
--- a/src/whatsnew/1.3.rst
+++ /dev/null
@@ -1,258 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _release/1.3.x:
-
-============
-1.3.x Branch
-============
-
-.. contents::
-    :depth: 1
-    :local:
-
-.. warning::
-    :ref:`release/1.3.x` is affected by the issue described in
-    :ref:`cve/2014-2668`. Upgrading to a more recent release is strongly
-    recommended.
-
-.. _release/1.3.x/upgrade:
-
-Upgrade Notes
-=============
-
-You can upgrade your existing CouchDB 1.0.x installation to 1.3.0
-without any specific steps or migration. When you run CouchDB, the
-existing data and index files will be opened and used as normal.
-
-The first time you run a compaction routine on your database within 1.3.0,
-the data structure and indexes will be updated to the new version of the
-CouchDB database format that can only be read by CouchDB 1.3.0 and later.
-This step is not reversible. Once the data files have been updated and
-migrated to the new version the data files will no longer work with a
-CouchDB 1.0.x release.
-
-.. warning::
-    If you want to retain support for opening the data files in
-    CouchDB 1.0.x you must back up your data files before performing the
-    upgrade and compaction process.
-
-.. _release/1.3.1:
-
-Version 1.3.1
-=============
-
-Replicator
-----------
-
-* :issue:`1788`: Tolerate missing source and target fields in _replicator docs.
-  :commit:`869f42e2`
-
-Log System
-----------
-
-* :issue:`1794`: Fix bug in WARN level logging from 1.3.0.
-* Don't log about missing .compact files. :commit:`06f1a8dc`
-
-View Server
------------
-
-* :issue:`1792`: Fix the -S option to couchjs to increase memory limits.
-  :commit:`cfaa66cd`
-
-Miscellaneous
--------------
-
-* :issue:`1784`: Improvements to test suite and VPATH build system.
-  :commit:`01afaa4f`
-* Improve documentation: better structure, improve language, less duplication.
-
-.. _release/1.3.0:
-
-Version 1.3.0
-=============
-
-Database core
--------------
-
-* :issue:`1512`: Validate bind address before assignment. :commit:`09ead8a0`
-* Restore ``max_document_size`` protection. :commit:`bf1eb135`
-
-Documentation
--------------
-
-* :issue:`1523`: Import CouchBase documentation and convert them into
-  `Sphinx docs <http://sphinx.pocoo.org/>`_
-
-Futon
------
-
-* :issue:`509`: Added view request duration to Futon. :commit:`2d2c7d1e`
-* :issue:`627`: Support all timezones. :commit:`b1a049bb`
-* :issue:`1383`: Futon view editor won't allow you to save the original view
-  after saving a revision. :commit:`ce48342`
-* :issue:`1470`: Futon raises a pop-up when attempting to navigate to a
-  missing/deleted document. :commit:`5da40eef`
-* :issue:`1473`, :issue:`1472`: Disable buttons for actions that the user
-  doesn't have permission to perform. :commit:`7156254d`
-
-HTTP Interface
---------------
-
-* :issue:`431`: Introduce experimental :ref:`CORS support <cors>`.
-  :commit:`b90e4021`
-* :issue:`764`, :issue:`514`, :issue:`430`: Fix sending HTTP headers from
-  ``_list`` function, :commit:`2a74f88375`
-* :issue:`887`: Fix ``bytes`` and ``offset`` parameters semantic for `_log`
-  resource (`explanation`_)
-  :commit:`ad700014`
-* :issue:`986`: Added Server-Sent Events protocol to db changes API.
-  See http://www.w3.org/TR/eventsource/ for details. :commit:`093d2aa6`
-* :issue:`1026`: Database names are now encoded with respect to special
-  characters in the rewriter. :commit:`272d6415`
-* :issue:`1097`: Allow `OPTIONS` requests for show and list functions.
-  :commit:`9f53704a`
-* :issue:`1210`: Files starting with underscore can be attached and updated now.
-  :commit:`05858792`
-* :issue:`1277`: Better query parameter support and code clarity:
-  :commit:`7e3c69ba`
-
-  * Responses to documents created/modified via form data `POST` to /db/doc or
-    copied with `COPY` should now include `Location` header.
-  * Form data POST to /db/doc now includes an `ETag` response header.
-  * ``?batch=ok`` is now supported for `COPY` and `POST` /db/doc updates.
-  * ``?new_edits=false`` is now supported for more operations.
-
-* :issue:`1285`: Allow configuration of vendor and modules version in CouchDB
-  welcome message. :commit:`3c24a94d`
-* :issue:`1321`: Variables in rewrite rules breaks OAuth authentication.
-  :commit:`c307ba95`
-* :issue:`1337`: Use MD5 for attachment ETag header value. :commit:`6d912c9f`
-* :issue:`1381`: Add jquery.couch support for Windows 8 Metro apps.
-  :commit:`dfc5d37c`
-* :issue:`1441`: Limit recursion depth in the URL rewriter.
-  Defaults to a maximum of 100 invocations but is configurable.
-  :commit:`d076976c`
-* :issue:`1442`: No longer rewrites the `X-CouchDB-Requested-Path` during
-  recursive calls to the rewriter. :commit:`56744f2f`
-* :issue:`1501`: :ref:`Changes feed <changes>` can now take the special parameter
-  ``since=now`` to emit changes from the current point in time. :commit:`3bbb2612`
-* :issue:`1502`: Allow users to delete their own _users doc. :commit:`f0d6f19bc8`
-* :issue:`1511`: CouchDB now checks the `roles` field of `_users` database
-  documents more carefully. :commit:`41205000`
-* :issue:`1537`: Include user name in show/list `ETags`. :commit:`ac320479`
-* Send a 202 response for `_restart`. :commit:`b213e16f`
-* Make password hashing synchronous when using the /_config/admins API.
-  :commit:`08071a80`
-* Add support to serve a single file with CouchDB, :commit:`2774531ff2`
-* Allow any 2xx code to indicate success, :commit:`0d50103cfd`
-* Fix `_session` for IE7.
-* Restore 400 error for empty PUT, :commit:`2057b895`
-* Return ``X-Couch-Id`` header if doc is created, :commit:`98515bf0b9`
-* Support auth cookies with ``:`` characters, :commit:`d9566c831d`
-
-.. _explanation: https://git-wip-us.apache.org/repos/asf?p=couchdb.git;a=blobdiff;f=src/couchdb/couch_log.erl;h=1b05f4db2;hp=0befe7aab;hb=ad700014;hpb=7809f3ca
-
-Log System
-----------
-
-* :issue:`1380`: Minor fixes for logrotate support.
-* Improve file I/O error logging and handling, :commit:`4b6475da`
-* Module Level Logging, :commit:`b58f069167`
-* Log 5xx responses at error level, :commit:`e896b0b7`
-* Log problems opening database at ERROR level except for auto-created
-  system dbs, :commit:`41667642f7`
-
-Replicator
-----------
-
-* :issue:`1248`: `HTTP 500` error no longer occurs when replicating with
-  ``?doc_ids=null``. :commit:`bea76dbf`
-* :issue:`1259`: Stabilize replication id, :commit:`c6252d6d7f`
-* :issue:`1323`: Replicator now acts as standalone application.
-  :commit:`f913ca6e`
-* :issue:`1363`: Fix a rare race condition in the changes feed: if a quick
-  burst of changes happens while replication is starting, the replication
-  can go stale. :commit:`573a7bb9`
-* :issue:`1557`: Upgraded some code to use BIFs, bringing good performance
-  improvements for replication.
-
-Security
---------
-
-* :issue:`1060`: Passwords are now hashed using the PBKDF2 algorithm with a
-  configurable work factor. :commit:`7d418134`
-
-Source Repository
------------------
-
-* The source repository was migrated from `SVN`_ to `Git`_.
-
-.. _SVN: https://svn.apache.org/repos/asf/couchdb
-.. _Git: https://git-wip-us.apache.org/repos/asf/couchdb.git
-
-Storage System
---------------
-
-* Fixed unnecessary conflict when deleting and creating a
-  document in the same batch.
-
-Test Suite
-----------
-
-* :issue:`1321`: Moved the JS test suite to the CLI.
-* :issue:`1338`: Start CouchDB with ``port=0``. While CouchDB might already be
-  running on the default port 5984, port 0 lets the TCP stack pick a free
-  port to run on. :commit:`127cbe3`
-* :issue:`1339`: Use shell trap to catch dying beam processes during test runs.
-  :commit:`2921c78`
-* :issue:`1389`: Improved tracebacks printed by the JS CLI tests.
-* :issue:`1563`: Ensures urlPrefix is set in all ajax requests.
-  :commit:`07a6af222`
-* Fix race condition for test running on faster hardware.
-* Improved the reliability of a number of tests.
-
-URL Rewriter & Vhosts
----------------------
-
-* :issue:`1026`: Database name is encoded during rewriting
-  (allowing embedded /'s, etc). :commit:`272d6415`
-
-UUID Algorithms
----------------
-
-* :issue:`1373`: Added the utc_id algorithm :commit:`5ab712a2`
-
-Query and View Server
----------------------
-
-* :issue:`111`: Improve the errors reported by the JavaScript view server
-  to provide a more friendly error report when something goes wrong.
-  :commit:`0c619ed`
-* :issue:`410`: More graceful error handling for JavaScript validate_doc_update
-  functions.
-* :issue:`1372`: `_stats` built-in reduce function no longer produces error for
-  empty view result.
-* :issue:`1444`: Fix missed_named_view error that occurs on existing design
-  documents and views. :commit:`b59ac98b`
-* :issue:`1445`: CouchDB no longer tries to delete a view file it could not
-  open, even if the error is `emfile`.
-* :issue:`1483`: Update handlers require valid doc ids. :commit:`72ea7e38`
-* :issue:`1491`: Clean up view tables. :commit:`c37204b7`
-* Deprecate E4X support, :commit:`cdfdda2314`
-
-Windows
--------
-
-* :issue:`1482`: Use correct linker flag to build `snappy_nif.dll` on Windows.
-  :commit:`a6eaf9f1`
-* Allows building cleanly on Windows without cURL, :commit:`fb670f5712`
diff --git a/src/whatsnew/1.4.rst b/src/whatsnew/1.4.rst
deleted file mode 100644
index 5f2afbf..0000000
--- a/src/whatsnew/1.4.rst
+++ /dev/null
@@ -1,62 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _release/1.4.x:
-
-============
-1.4.x Branch
-============
-
-.. contents::
-    :depth: 1
-    :local:
-
-.. warning::
-    :ref:`release/1.4.x` is affected by the issue described in :ref:`cve/2014-2668`.
-    Upgrading to a more recent release is strongly recommended.
-
-.. _release/1.4.x/upgrade:
-
-Upgrade Notes
-=============
-
-We now support Erlang/OTP R16B and R16B01; the minimum required version is R14B.
-
-User document role values must now be strings. Other types of values will be
-refused when saving the user document.
-
-.. _release/1.4.0:
-
-Version 1.4.0
-=============
-
-* :issue:`1139`: it is now possible to apply :ref:`list <listfun>`
-  functions to the ``_all_docs`` view. :commit:`54fd258e`
-* :issue:`1632`: Ignore epilogues in ``multipart/related`` MIME attachments.
-  :commit:`2b4ab67a`
-* :issue:`1634`: Reduce PBKDF2 work factor. :commit:`f726bc4d`
-* :issue:`1684`: Support for server-wide changes feed reporting on creation,
-  updates and deletion of databases. :commit:`917d8988`
-* :issue:`1772`: Prevent invalid JSON output when using `all_or_nothing`
-  :ref:`of bulk API <api/db/bulk_docs>`. :commit:`dfd39d57`
-* Add a configurable whitelist of user document properties. :commit:`8d7ab8b1`
-* :issue:`1852`: Support Last-Event-ID header in EventSource changes feeds.
-  :commit:`dfd2199a`
-* Allow storing pre-hashed admin passwords via :ref:`config API <api/config>`.
-  :commit:`c98ba561`
-* Automatic loading of CouchDB plugins. :commit:`3fab6bb5`
-* Much improved documentation, including an :ref:`expanded description
-  <vdufun>` of `validate_doc_update` functions (:commit:`ef9ac469`) and
-  a description of how CouchDB handles JSON :ref:`number values
-  <json/numbers>` (:commit:`bbd93f77`).
-* Split up `replicator_db` tests into multiple independent tests.
diff --git a/src/whatsnew/1.5.rst b/src/whatsnew/1.5.rst
deleted file mode 100644
index 013512e..0000000
--- a/src/whatsnew/1.5.rst
+++ /dev/null
@@ -1,60 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _release/1.5.x:
-
-============
-1.5.x Branch
-============
-
-.. contents::
-    :depth: 1
-    :local:
-
-.. warning::
-    :ref:`release/1.5.1` contains important security fixes. Previous `1.5.x`
-    releases are not recommended for regular usage.
-
-.. _release/1.5.1:
-
-Version 1.5.1
-=============
-
-* Add the ``max_count`` option (:ref:`config/uuids`) to allow rate-limiting
-  the amount of UUIDs that can be requested from the :ref:`api/server/uuids`
-  handler in a single request (:ref:`CVE 2014-2668 <cve/2014-2668>`).
-
-.. _release/1.5.0:
-
-Version 1.5.0
-=============
-
-* :issue:`1781`: The official documentation has been overhauled. A lot of
-  content from other sources has been merged, and the index page
-  has been rebuilt to make the docs much more accessible.
-  :commit:`54813a7`
-* A new administration UI, codenamed Fauxton, has been included as an
-  experimental preview. It can be accessed at ``/_utils/fauxton/``. There
-  are too many improvements here to list them all. We are looking for
-  feedback from the community on this preview release.
-* :issue:`1888`: Fixed an issue where admin users would be restricted by
-  the ``public_fields`` feature.
-* Fixed an issue with the JavaScript CLI test runner. :commit:`be76882`,
-  :commit:`54813a7`
-* :issue:`1867`: An experimental plugin feature has been added. See
-  ``src/couch_plugin/README.md`` for details. We invite the community to
-  test and report any findings.
-* :issue:`1894`: An experimental Node.js-based query server runtime
-  has been added. See :ref:`experimental` for details. We invite the
-  community to test and report any findings.
-* :issue:`1901`: Better retry mechanism for transferring attachments
-  during replication. :commit:`4ca2cec`
diff --git a/src/whatsnew/1.6.rst b/src/whatsnew/1.6.rst
deleted file mode 100644
index cb52c51..0000000
--- a/src/whatsnew/1.6.rst
+++ /dev/null
@@ -1,72 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _release/1.6.x:
-
-============
-1.6.x Branch
-============
-
-.. contents::
-    :depth: 1
-    :local:
-
-.. _release/1.6.x/upgrade:
-
-Upgrade Notes
-=============
-
-The :ref:`Proxy Authentication <api/auth/proxy>` handler was renamed to
-``proxy_authentication_handler`` to follow the ``*_authentication_handler`` form
-of all other handlers. The old ``proxy_authentification_handler`` name is marked
-as deprecated and will be removed in a future release. If you used this
-handler, it is strongly recommended to update the
-``httpd/authentication_handlers`` option to the new value.
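The rename above amounts to a one-token change in the ``[httpd]`` ``authentication_handlers`` value. A minimal migration sketch (illustrative Python, not part of CouchDB; the handler tuple shown is the conventional 1.x form):

```python
def migrate_handlers(value: str) -> str:
    """Replace the deprecated misspelled handler name with the new one."""
    return value.replace(
        "proxy_authentification_handler", "proxy_authentication_handler"
    )

old = ("{couch_httpd_auth, proxy_authentification_handler}, "
       "{couch_httpd_auth, default_authentication_handler}")
print(migrate_handlers(old))
```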
-
-.. _release/1.6.0:
-
-Version 1.6.0
-=============
-
-* :issue:`2200`: support Erlang/OTP 17.0 :commit:`35e16032`
-* Fauxton: many improvements in our experimental new user interface, including
-  switching the code editor from CodeMirror to Ace as well as better support
-  for various browsers.
-* Add the ``max_count`` option (:ref:`config/uuids`) to allow rate-limiting
-  the amount of UUIDs that can be requested from the :ref:`api/server/uuids`
-  handler in a single request (:ref:`CVE 2014-2668 <cve/2014-2668>`).
-* :issue:`1986`: increase socket buffer size to improve replication speed
-  for large documents and attachments, and fix tests on BSD-like systems.
-  :commit:`9a0e561b`
-* :issue:`1953`: improve performance of multipart/related requests.
-  :commit:`ce3e89dc`
-* :issue:`2221`: verify that authentication-related configuration settings
-  are well-formed. :commit:`dbe769c6`
-* :issue:`1922`: fix CORS exposed headers. :commit:`4f619833`
-* Rename ``proxy_authentification_handler`` to ``proxy_authentication_handler``.
-  :commit:`c66ac4a8`
-* :issue:`1795`: ensure the startup script clears the pid file on termination.
-  :commit:`818ef4f9`
-* :issue:`1962`: replication can now be performed without having write access
-  to the source database (:commit:`1d5fe2aa`), the replication checkpoint
-  interval is now configurable (:commit:`0693f98e`).
-* :issue:`2025`: add support for SOCKS5 proxies for replication.
-  :commit:`fcd76c9`
-* :issue:`1930`: redirect to the correct page after submitting a new document
-  with a different ID than the one suggested by Futon. :commit:`4906b591`
-* :issue:`1923`: add support for ``attachments`` and ``att_encoding_info``
-  options (formerly only available on the documents API) to the view API.
-  :commit:`ca41964b`
-* :issue:`1647`: for failed replications originating from a document in the
-  ``_replicator`` database, store the failure reason in the document.
-  :commit:`08cac68b`
-* A number of improvements to the documentation.
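The ``max_count`` rate limit mentioned above can be sketched as a simple request-time check (illustrative Python, not CouchDB code; the default cap of 1000 used here is an assumption):

```python
def check_uuid_request(requested: int, max_count: int = 1000) -> int:
    """Return the number of UUIDs to serve, rejecting requests over the cap."""
    if requested > max_count:
        # CouchDB rejects over-limit requests with an HTTP error.
        raise ValueError("count exceeds uuids/max_count")
    return requested

print(check_uuid_request(10))
```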
diff --git a/src/whatsnew/1.7.rst b/src/whatsnew/1.7.rst
deleted file mode 100644
index e0f9fa4..0000000
--- a/src/whatsnew/1.7.rst
+++ /dev/null
@@ -1,117 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _release/1.7.x:
-
-============
-1.7.x Branch
-============
-
-.. contents::
-   :depth: 1
-   :local:
-
-.. _release/1.7.2:
-
-Version 1.7.2
-=============
-
-Security
---------
-* :ref:`CVE 2018-8007 <cve/2018-8007>`
-
-.. _release/1.7.1:
-
-Version 1.7.1
-=============
-
-Bug Fix
--------
-
-- :ghissue:`974`: Fix access to /db/_all_docs for database members.
-
-.. _release/1.7.0:
-
-Version 1.7.0
-=============
-
-Security
---------
-* :ref:`CVE 2017-12635 <cve/2017-12635>`
-* :ref:`CVE 2017-12636 <cve/2017-12636>`
-
-API Changes
------------
-
-- :issue:`1356`: Return username on :http:post:`/_session`.
-- :issue:`1876`: Fix duplicated Content-Type for show/update functions.
-- :issue:`2310`: Implement :http:post:`/{db}/_bulk_get`.
-- :issue:`2375`: :statuscode:`400` returned when invalid revision specified.
-- :issue:`2845`: :statuscode:`400` returned when `revs` is not a list.
-
-Build
------
-
-- :issue:`1964`: Replace etap test suite with EUnit.
-- :issue:`2225`: Enforce that shared libraries can be built by the system.
-- :issue:`2761`: Support glibc >= 2.20.
-- :issue:`2747`: Support Erlang 18.
-- :commit:`5b9742c`: Support Erlang 19.
-- :commit:`1545bf4`: Remove broken benchmarks.
-
-Database Core
--------------
-
-- :issue:`2534`: Improve checks for db admin/member.
-- :issue:`2735`: Duplicate document _ids created under high edit load.
-
-Documentation
--------------
-
-- :commit:`c3c9588`: Improve documentation of `cacert_file` ssl option.
-- :commit:`3266f23`: Clarify the purpose of tombstones.
-- :commit:`75887d9`: Improve CouchDB Replication Protocol definition.
-- :commit:`3b1dc0f`: Remove mention of `group_level=exact`.
-- :commit:`2a11daa`: Remove mention of "Test Suite" in Futon.
-- :commit:`01c60f1`: Clarify type of key, startkey and endkey params.
-
-Futon
------
-
-- :issue:`241`: Support document copying.
-- :issue:`1011`: Run replication filtered by document ids from Futon.
-- :issue:`1275`: Unescape database names in Futon recently used list.
-- :commit:`f18f82a`: Update jquery.ui to 1.10.4 with fixes of potential
-  XSS issues.
-
-HTTP Server
------------
-
-- :issue:`2430`: Disable Nagle's algorithm by default.
-- :issue:`2583`: Don't drop connections for endpoints which don't require
-  any payload.
-- :issue:`2673`: Properly escape Location: HTTP header.
-- :issue:`2677`: Wrong Expires header weekday.
-- :issue:`2783`: Bind both to IPv4 and IPv6.
-- :commit:`f30f3dd`: Support for user configurable SSL ciphers.
-
-Query Server
-------------
-
-- :issue:`1447`: Custom response headers from design functions get merged with
-  default ones.
-- :commit:`7779c11`: Upgrade Coffeescript to version 1.10.
-
-jquery.couch.js
----------------
-
-- :commit:`f9095e7`: Fix document copying.
diff --git a/src/whatsnew/2.0.rst b/src/whatsnew/2.0.rst
deleted file mode 100644
index 8437b5c..0000000
--- a/src/whatsnew/2.0.rst
+++ /dev/null
@@ -1,151 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _release/2.0.x:
-
-============
-2.0.x Branch
-============
-
-.. contents::
-    :depth: 1
-    :local:
-
-.. _release/2.0.0:
-
-Version 2.0.0
-=============
-
-* Native clustering is now supported. Rather than use CouchDB replication
-  between multiple, distinct CouchDB servers, configure a cluster of CouchDB
-  nodes. These nodes will use an optimized Erlang-driven 'internal replication'
-  to ensure data durability and accessibility. Combine a clustered CouchDB with
-  a load balancer (such as ``haproxy``) to scale CouchDB out horizontally. More
-  details of the clustering feature are available in the :ref:`cluster`.
-* `Futon` replaced by brand-new, completely re-engineered `Fauxton` interface.
-  URL remains the same.
-* The new Mango Query Server provides a simple JSON-based way to perform
-  CouchDB queries without JavaScript or MapReduce. Mango queries have an
-  indexing speed advantage over JavaScript queries similar to that of Erlang
-  queries (2x-10x faster indexing, depending on doc size and system
-  configuration). We recommend that all new apps use Mango by default. Further
-  details are available in the :ref:`_find, _index and _explain API <api/db/_find>`.
-* Mango :ref:`selectors <changes/filter/selector>` can be used in _changes
-  feeds instead of JavaScript MapReduce filters. Mango has been tested to be
-  up to an order of magnitude (10x) faster than JavaScript in this application.
-* :ref:`Rewrite rules <api/ddoc/rewrite>` for URLs can be performed using
-  JavaScript functions.
-* :ref:`Multiple queries <api/ddoc/view/multiple_queries>` can be made of a
-  view with a single HTTP request.
-* Views can be queried with sorting turned off (``sorted=false``) for a
-  performance boost.
-* The global changes feed has been enhanced. It is now resumable and persistent.
-* New endpoints added (documentation forthcoming):
-
-  * :ref:`api/server/membership` shows all nodes in a cluster
-  * ``/_bulk_get`` speeds up the replication protocol over low-latency
-    connections
-  * ``/_node/`` api to access individual nodes' configuration and compaction
-    features
-  * ``/_cluster_setup`` api to set up a cluster from scratch.
-  * ``/_up`` api to signal health of a node to a load-balancer
-  * ``/db/_local_docs`` and ``/db/_design_docs`` (similar to ``/db/_all_docs``)
-
-* The ``/_log`` endpoint was removed.
-* A "backend" interface on port 5986 is used for specific cluster admin tasks.
-  Of interest are the ``_nodes`` and ``_dbs`` databases visible only through
-  this interface.
-* Support added for Erlang/OTP 17.x, 18.x and 19
-* New streamlined build system written for Unix-like systems and Microsoft
-  Windows
-* :ref:`Configuration <api/config>` has moved from ``/_config`` to
-  ``/_node/{node-name}/_config``
-* ``instance_start_time`` now always reports ``"0"``.
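To illustrate the query shape the ``_find`` endpoint accepts, here is a tiny in-process evaluation of a Mango-style selector (a sketch of the semantics only, not CouchDB's implementation; just ``$gt``, ``$eq``, and direct equality are handled):

```python
def matches(doc, selector):
    """Return True if doc satisfies a (tiny subset of a) Mango selector."""
    for field, cond in selector.items():
        if isinstance(cond, dict):
            for op, operand in cond.items():
                if op == "$gt" and not (field in doc and doc[field] > operand):
                    return False
                if op == "$eq" and doc.get(field) != operand:
                    return False
        elif doc.get(field) != cond:
            # A bare value is shorthand for equality.
            return False
    return True

docs = [{"name": "ann", "age": 30}, {"name": "bob", "age": 18}]
selector = {"age": {"$gt": 21}}
print([d["name"] for d in docs if matches(d, selector)])  # ['ann']
```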
-
-.. _release/2.0.x/upgrade:
-
-Upgrade Notes
-=============
-
-* The update sequences returned by the :ref:`api/db/changes` feed are no longer
-  integers. They can be any JSON value. Applications should treat them as opaque
-  values and return them to CouchDB as-is.
-* Temporary views are no longer supported.
-* It is possible to have multiple replicator databases.
-  ``replicator/db`` config option has been removed.
-  Instead ``_replicator`` and any database names ending
-  with the ``/_replicator`` suffix will be recognized as
-  replicator databases by the system.
-* Note that the semantics of some API calls have changed due to the introduction
-  of the clustering feature. Specifically, make note of the difference between
-  receiving a ``201`` and a ``202`` when storing a document.
-* ``all_or_nothing`` is no longer supported by the :ref:`bulk_docs
-  <api/db/bulk_docs>` API
-* After updating a design document containing a ``show``, an immediate GET to
-  that same ``show`` function may still return results from the previous
-  definition. This is due to design document caching, which may take a few
-  seconds to fully evict, or longer (up to ~30s) for a clustered installation.
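A changes-feed client honouring the opaque-sequence rule above can be sketched as follows (illustrative Python; the sequence string is a made-up example of the new format):

```python
def next_request_params(last_response: dict) -> dict:
    """Build the follow-up _changes request from the previous response.

    The sequence may be an integer (1.x) or any JSON value (2.x), so it is
    passed through verbatim, never parsed or compared numerically.
    """
    return {"feed": "normal", "since": last_response["last_seq"]}

resp = {"results": [], "last_seq": "21-g1AAAA"}  # example opaque seq
print(next_request_params(resp)["since"])
```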
-
-.. _release/2.0.x/knownissues:
-
-Known Issues
-============
-
-All `known issues`_ filed against the 2.0 release are contained within the
-official `CouchDB JIRA instance`_ or `CouchDB GitHub Issues`_.
-
-The following are some highlights of known issues for which fixes did not land
-in time for the 2.0.0 release:
-
-* :issue:`2980`: The replicator (whether invoked via ``_replicate`` or a
-  document stored in the ``_replicator`` database) understands two kinds of
-  source and target:
-
-  #. A URL (e.g., ``https://foo:bar@foo.com/db1``), called a "remote" source or
-     target
-  #. A database name (e.g., ``db1``), called a "local" source or target.
-
-  Whenever the latter type is used, this refers to a local unclustered
-  database, not a clustered one.
-
-  In a future release we hope to support "local" source or target specs to
-  clustered databases. For now, we recommend always using the URL format for
-  both source and target specifications.
-* :issue:`3034`: CouchDB will occasionally return 500 errors when multiple
-  clients attempt to PUT or DELETE the same database concurrently.
-* :issue:`3119`: Adding nodes to a cluster fails if the Erlang node name
-  is not ``couchdb`` (of the form ``couchdb@hostname``.)
-* :issue:`3050`: Occasionally the ``dev/run`` script used for development
-  purposes to start a local 3-node cluster will fail to start one or more
-  nodes.
-* :issue:`2817`: The compaction daemon will only compact views for shards
-  that contain the design document.
-* :issue:`2804`: The fast_view optimization is not enabled on the clustered
-  interface.
-* :ghissue:`656`: The OAuth 1.0 support is broken and deprecated. It will be
-  removed in a future version of CouchDB.
-
-.. _known issues: https://s.apache.org/couchdb-2.0-known-issues
-.. _CouchDB JIRA instance: https://issues.apache.org/jira/browse/COUCHDB
-.. _CouchDB GitHub Issues: https://github.com/apache/couchdb/issues
-
-.. _release/2.0.x/breakingchanges:
-
-Breaking Changes
-================
-
-The following changes in 2.0 represent a significant deviation from
-CouchDB 1.x and may alter behaviour of systems designed to work with
-older versions of CouchDB:
-
-* :ghissue:`620`: ``POST /dbname`` no longer returns an ETag response header,
-  in compliance with RFC 7231, Section 7.2.
diff --git a/src/whatsnew/2.1.rst b/src/whatsnew/2.1.rst
deleted file mode 100644
index 1401023..0000000
--- a/src/whatsnew/2.1.rst
+++ /dev/null
@@ -1,454 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _release/2.1.x:
-
-============
-2.1.x Branch
-============
-
-.. contents::
-    :depth: 1
-    :local:
-
-.. _release/2.1.x/upgrade:
-
-Upgrade Notes
-=============
-
-* When upgrading from 2.x to 2.1.1, if you have not customized your
-  node name in ``vm.args``, be sure to retain your original ``vm.args``
-  file. The default node name has changed from ``couchdb@localhost`` to
-  ``couchdb@127.0.0.1``, which can prevent CouchDB from accessing existing
-  databases on the system. You may also change the name option back to the
-  old value by setting ``-name couchdb@localhost`` in ``etc/vm.args`` by
-  hand. The default has changed to meet new guidelines and to provide
-  additional functionality in the future.
-
-  If you receive errors in the logfile, such as
-  ``internal_server_error : No DB shards could be opened.``, or in Fauxton,
-  such as ``This database failed to load.``, you need to make this change.
-
-* The deprecated (and broken) OAuth 1.0 implementation has been removed.
-
-* If user code reads or manipulates replicator document states,
-  consider using the ``[replicator] update_docs = true`` compatibility
-  parameter. In that case the replicator will continue updating documents
-  with transient replication states. However, that will incur a
-  performance cost. Consider instead using the ``_scheduler/docs`` HTTP
-  endpoint.
-
-* The ``stale`` parameter for views and ``_find`` has been deprecated in favour
-  of two new parameters: ``stable`` and ``update``. The old ``stale=ok``
-  behaviour is equivalent to ``stable=true&update=false``, and the old
-  ``stale=update_after`` behaviour is equivalent to ``stable=true&update=lazy``.
-  The deprecated ``stale`` parameter will be removed in CouchDB 3.0.
-
-* The new :config:option:`httpd/max_http_request_size` configuration parameter
-  was added. This has the same behavior as the old
-  :config:option:`couchdb/max_document_size` configuration parameter, which
-  had been unfortunately misnamed, and has now been updated to behave as the
-  name would suggest. Both are documented in the shipped ``default.ini`` file.
-
-  Note that the default for this new parameter is 64MB instead of 4GB. If PUT
-  or POST requests fail and HTTP 413 return codes appear in the CouchDB logs,
-  this could be the culprit. This can affect ``couchup`` in-place upgrades as
-  well.
-
-* :ghissue:`914`: Certain critical config sections are blacklisted from being
-  modified through the HTTP API. These sections can still be modified through
-  the standard ``local.ini`` or ``local.d/*.ini`` files.
-
-* :ghissue:`916`: ``couchjs`` now disables ``eval()`` and the ``Function()``
-  constructor by default. To restore the original behaviour, add the
-  ``--eval`` flag to the definition of the JavaScript query server in your
-  ``local.ini`` file.
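The ``stale`` deprecation above can be sketched as a small translation step for legacy clients (illustrative Python, not part of CouchDB):

```python
def translate_stale(params: dict) -> dict:
    """Map the deprecated stale= view parameter onto stable=/update=."""
    out = {k: v for k, v in params.items() if k != "stale"}
    stale = params.get("stale")
    if stale == "ok":
        out.update(stable="true", update="false")
    elif stale == "update_after":
        out.update(stable="true", update="lazy")
    return out

print(translate_stale({"stale": "ok", "limit": "10"}))
```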
-
-.. _release/2.1.2:
-
-Version 2.1.2
-=============
-
-Security
---------
-* :ref:`CVE 2018-8007 <cve/2018-8007>`
-
-.. _release/2.1.1:
-
-Version 2.1.1
-=============
-
-Security
---------
-* :ref:`CVE 2017-12635 <cve/2017-12635>`
-* :ref:`CVE 2017-12636 <cve/2017-12636>`
-
-General
--------
-
-* :ghissue:`617`: CouchDB now supports compilation and running under Erlang/OTP
-  20.x.
-
-* :ghissue:`756`: The ``couch_peruser`` functionality is now *really* fixed.
-  Really.
-
-* :ghissue:`827`: The cookie domain for AuthSession cookies, used in a
-  proxy authentication configuration, can now be customized via the ini file.
-
-* :ghissue:`858`: It is now possible to modify shard maps for system databases.
-
-* :ghissue:`732`: Due to an Erlang bug (ERL-343_), invalid paths can be
-  returned if volumes are mounted containing whitespace in their name. This
-  problem surfaced primarily on macOS (Time Machine volumes). CouchDB now
-  works around this bug in unpatched versions of Erlang by skipping the free
-  space check performed by the compaction daemon. Erlang itself will
-  correctly perform free space checks in version 21.0.
-
-* :ghissue:`824`: The current node's local interface can now be accessed at
-  ``/_node/_local/{endpoint}`` as well as at
-  ``/_node/<nodename>@<hostname>/{endpoint}``.
-
-* The Dockerfile in the source repository has been retired. For a current
-  Dockerfile, see the `couchdb-docker repository`_.
-
-* Fauxton now uses a version of React with a BSD license.
-
-.. _ERL-343: https://bugs.erlang.org/browse/ERL-343
-.. _couchdb-docker repository: https://github.com/apache/couchdb-docker
-
-Performance
------------
-
-* :ghissue:`835`: CouchDB now no longer decompresses documents just to
-  determine their uncompressed size. In tests, this has led to improvements
-  of 10-40% in both CPU and wall-clock time for database compaction.
-
-* The design document cache (``ddoc_cache``) has been rewritten to improve
-  performance.
-
-Mango
------
-
-* :ghissue:`808`: Mango now supports
-  :ref:`partial indexes <find/partial_indexes>`. Partial indexes allow
-  documents to be filtered at indexing time, potentially offering
-  significant performance improvements for query selectors that don't map
-  cleanly to a range query on an index.
-
-* :ghissue:`740`: Mango queries can now be paginated. Each query response
-  includes a bookmark. The bookmark can be provided on a subsequent query to
-  continue from a specific key.
-
-* :ghissue:`768`: Mango ``_find`` accepts an ``execution_stats``
-  parameter. If present, a new object is included in the response which
-  contains information about the query executed. The object contains the
-  count of total keys examined (0 for json indexes), total documents
-  examined (when ``include_docs=true`` is used), and the total quorum
-  documents examined (when fabric doc lookups are used).
-
-* :ghissue:`816` and :ghissue:`866`: Mango now requires that all of the fields
-  in a candidate index exist in a query's selector. Previously, this check
-  was incorrect, and indexes that might contain only a subset of valid
-  documents could be selected by the query planner if no explicit index was
-  specified at query time. Further, if a sort field is specified at query time,
-  that field must exist (but may be null) in the results returned.
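The effect of a partial index can be simulated locally (illustrative Python sketch, not CouchDB's indexer): the filter runs at indexing time, so non-matching documents never enter the index:

```python
def build_partial_index(docs, field, partial_filter):
    """Index (value, _id) pairs only for docs passing the partial filter."""
    return sorted(
        (doc[field], doc["_id"])
        for doc in docs
        if partial_filter(doc) and field in doc
    )

docs = [
    {"_id": "a", "status": "archived", "year": 2001},
    {"_id": "b", "status": "active", "year": 2005},
    {"_id": "c", "status": "active", "year": 1999},
]
# Index only "active" docs by year; "archived" docs are filtered out.
idx = build_partial_index(docs, "year", lambda d: d["status"] == "active")
print(idx)  # [(1999, 'c'), (2005, 'b')]
```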
-
-Other
------
-
-The 2.1.1 release also includes the following minor improvements:
-
-  * :ghissue:`635`: Stop couch_index processes on ddoc update
-  * :ghissue:`721`: Save migrated replicator checkpoint documents immediately
-  * :ghissue:`688`: Reuse http-based replication checkpoints when upgrading
-    to https
-  * :ghissue:`729`: Recommend the use only of ``-name`` and not ``-sname`` in
-    ``vm.args`` for compatibility.
-  * :ghissue:`738`: Allow replicator application to always update replicator
-    docs.
-  * :ghissue:`605`: Add ``Prefer: return=minimal`` header options from
-    RFC7240 to reduce the number of headers in the response.
-  * :ghissue:`744`: Allow a 503 response to be returned to clients (with
-    metric support)
-  * :ghissue:`746`: Log additional information on crashes from rexi
-  * :ghissue:`752`: Allow Mango $in queries without requiring the index to
-    use an array
-  * (multiple) Additional debugging utilities have been added.
-  * (multiple) Hot code upgrades from 2.0 -> 2.1.1 are now possible.
-  * (multiple) Improvements to the test suite have been made.
-  * :ghissue:`765`: Mango ``_explain`` now includes view parameters as requested
-    by the user.
-  * :ghissue:`653`: `_show` and `_list` should now work for admin-only
-    databases such as ``_users``.
-  * :ghissue:`807`: Mango index selection should occur only once.
-  * :ghissue:`804`: Unhandled Mango errors are now logged.
-  * :ghissue:`659`: Improve accuracy of the ``max_document_size`` check.
-  * :ghissue:`817`: Invalid Base64 in inline attachments is now caught.
-  * :ghissue:`825`: Replication IDs no longer need to be URL encoded when
-    using the ``_scheduler/jobs/<job_id>`` endpoint.
-  * :ghissue:`838`: Do not buffer rexi messages to disconnected nodes.
-  * :ghissue:`830`: The stats collection interval is now configurable in
-    an ini file, not in the application context. The default value is 10,
-    and the setting is reloaded every 600 seconds.
-  * :ghissue:`812`: The ``/{db}`` endpoint now includes a ``cluster`` block
-    with the database's ``q``, ``n``, and default ``w`` and ``r`` values.
-    This supplements the existing ``/{db}/_shards`` and ``/{db}/_shards/{id}``
-    detailed information on sharding and quorum.
-  * :ghissue:`810`: The replicator scheduler crashed counter gauge more
-    reliably detects replication crashes by reducing the default number
-    of retries from 10 to 5 (reducing the duration from 4 mins to 8 secs).
-  * :issue:`3288`: Tolerate mixed clusters for the upcoming pluggable
-    storage engine work.
-  * :ghissue:`839`: Mango python tests now support Python 3 as well as 2.
-  * :ghissue:`845`: A convenience ``remsh`` script has been added to support
-    live debugging of running systems.
-  * :ghissue:`846`: Replicator logging is now less verbose and more informative
-    when replication terminates unexpectedly.
-  * :ghissue:`797`: Reduce overflow errors are now returned to the client,
-    allowing views with a single bad reduce to build while not exhausting the
-    server's RAM usage.
-  * :ghissue:`881`: Mango now allows match on documents where the indexed
-    value is an object if a range query is issued. Previously, query results
-    might change in the presence of an index, and operators/selectors which
-    explicitly depend on a full index scan (such as ``$exists``) would not
-    return a complete result set.
-  * :ghissue:`883`: Erlang time module compatibility has been improved for
-    releases of Erlang newer than 18.0.
-  * :ghissue:`933`: 410 is now returned when attempting to make a temporary
-    view request.
-  * :ghissue:`934`: The replicator now has a configurable delay before
-    retrying to retrieve a document after receiving a ``missing_doc`` error.
-  * :ghissue:`936`: jiffy now deduplicates JSON keys.
-
-.. _release/2.1.0:
-
-Version 2.1.0
-=============
-
-* The Mango ``_find`` endpoint supports a new combination operator,
-  ``$allMatch``, which matches and returns all documents that contain an
-  array field with all its elements matching all the specified query
-  criteria.
-
-* New scheduling replicator. The core of the new replicator is a
-  scheduler which allows running a large number of replication
-  jobs by switching between them, stopping some and starting others
-  periodically. Jobs which fail are backed off exponentially. There is
-  also an improved inspection and querying API: ``_scheduler/jobs`` and
-  ``_scheduler/docs``:
-
-  * ``_scheduler/jobs`` : This endpoint shows active replication
-    jobs. These are jobs managed by the scheduler. Some of them might
-    be running, some might be waiting to run, or backed off
-    (penalized) because they crashed too many times. Semantically this
-    is somewhat equivalent to ``_active_tasks`` but focuses only on
-    replications. Jobs which have completed or which were never
-    created because of malformed replication documents will not be
-    shown here, as they are not managed by the scheduler. ``_replicate``
-    replications, started from the ``_replicate`` endpoint rather than from a
-    document in a ``_replicator`` database, will also show up here.
-
-  * ``_scheduler/docs`` : This endpoint is an improvement over having to go
-    back and read replication documents to query their state. It
-    represents the state of all replications started from documents
-    in a ``_replicator`` database. Unlike ``_scheduler/jobs``, it will also
-    show jobs which have failed or have completed.
-
-  By default, the scheduling replicator no longer updates documents with
-  transient states like ``triggered`` or ``error``; instead, the
-  ``_scheduler/docs`` API should be used to query replication document
-  states.
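The ``$allMatch`` semantics can be sketched locally (illustrative Python, not CouchDB's implementation; empty arrays are simply treated as non-matching in this sketch):

```python
def all_match(doc, field, criteria):
    """True only if every element of doc[field] satisfies all criteria."""
    values = doc.get(field)
    if not isinstance(values, list) or not values:
        return False
    return all(
        all(elem.get(k) == v for k, v in criteria.items())
        for elem in values
    )

doc_ok = {"pages": [{"lang": "en"}, {"lang": "en"}]}
doc_no = {"pages": [{"lang": "en"}, {"lang": "de"}]}
print(all_match(doc_ok, "pages", {"lang": "en"}))  # True
print(all_match(doc_no, "pages", {"lang": "en"}))  # False
```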
-
-Other scheduling replicator improvements
------------------------------------------
-
-  * Network resource usage and performance were improved by
-    implementing a shared connection pool. This should help in cases
-    of a large number of connections to the same sources or
-    targets. Previously, connection pools were shared only within a
-    single replication job.
-
-  * Improved request rate limit handling. Replicator requests will
-    auto-discover rate limit capacity on targets and sources based on
-    a proven Additive Increase / Multiplicative Decrease feedback
-    control algorithm.
-
-  * Improved performance by using exponential backoff for all
-    replication job failures. Previously there were some scenarios
-    where failure led to continuous repeated retries, consuming CPU and
-    disk resources in the process.
-
-  * Improved recovery from long but temporary network failures.
-    Previously, if replication jobs failed to start 10 times in a row,
-    they would not be retried anymore. This is sometimes desirable, but
-    in some cases, for example after a sustained DNS failure which
-    eventually recovers, replications would reach their retry limit,
-    stop retrying, and never recover without user intervention. The
-    scheduling replicator never gives up retrying a valid scheduled
-    replication job, and so it should recover automatically.
-
-  * Better handling of filtered replications. Failed fetches of user
-    filter code from the source no longer block the replicator manager
-    and stall other replications, and failing filter fetches are also
-    backed off exponentially. In addition, when filter code changes on
-    the source, a running replication will detect that and restart
-    itself with a new replication ID automatically.
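The rate-limit handling described above follows the classic AIMD pattern, which can be sketched as (illustrative Python; the constants are arbitrary, not CouchDB's):

```python
def aimd_step(rate, rate_limited, add=1.0, mult=0.5, floor=1.0):
    """One AIMD update: add on success, multiply down when rate-limited."""
    if rate_limited:
        # Multiplicative decrease, e.g. after a 429-style response.
        return max(floor, rate * mult)
    # Additive increase while the peer accepts our request rate.
    return rate + add

rate = 8.0
rate = aimd_step(rate, rate_limited=False)  # 9.0
rate = aimd_step(rate, rate_limited=True)   # 4.5
print(rate)
```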
-
-The 2.1.0 release also includes the following minor improvements:
-
-  * :issue:`1946`: Hibernate couch_stream after each write (up to 70% reduction
-    in memory usage during replication of DBs with large attachments)
-  * :issue:`2964`: Investigate switching replicator manager change feeds to
-    using "normal" instead of "longpoll"
-  * :issue:`2988`: (mango) Allow query selector as changes and replication
-    filter
-  * :issue:`2992`: Add additional support for document size
-  * :issue:`3046`: Improve reduce function overflow protection
-  * :issue:`3061`: Use vectored reads to search for buried headers in .couch
-    files. "On a modern linux system with SSD, we see improvements up to 15x."
-  * :issue:`3063`: "stale=ok" option replaced with new "stable" and "update"
-    options.
-  * :issue:`3180`: Add features list in the welcome message
-  * :issue:`3203`: Make auth handlers configurable (in ini files)
-  * :issue:`3234`: Track open shard timeouts with a counter instead of logging
-  * :issue:`3242`: Make get view group info timeout in couch_indexer
-    configurable
-  * :issue:`3249`: Add config to disable index all fields (text indexes)
-  * :issue:`3251`: Remove hot loop usage of filename:rootname/1
-  * :issue:`3284`: 8Kb read-ahead in couch_file causes extra IO and binary
-    memory usage
-  * :issue:`3298`: Optimize writing btree nodes
-  * :issue:`3302`: (Improve) Attachment replication over low bandwidth network
-    connections
-  * :issue:`3307`: Limit calls to maybe_add_sys_db_callbacks to once per db
-    open
-  * :issue:`3318`: bypass couch_httpd_vhost if there are none
-  * :issue:`3323`: Idle dbs cause excessive overhead
-  * :issue:`3324`: Introduce couch_replicator_scheduler
-  * :issue:`3337`: End-point _local_docs doesn't conform to query params of
-    _all_docs
-  * :issue:`3358`: (mango) Use efficient set storage for field names
-  * :issue:`3425`: Make _doc_ids _changes filter fast-path limit configurable
-  * :ghissue:`457`: TeX/LaTeX/texinfo removed from default docs build chain
-  * :ghissue:`469`: (mango) Choose index based on fields match
-  * :ghissue:`483`: couchup database migration tool
-  * :ghissue:`582`: Add X-Frame-Options support to help protect against
-    clickjacking
-  * :ghissue:`593`: Allow bind address of 127.0.0.1 in ``_cluster_setup`` for
-    single nodes
-  * :ghissue:`624`: Enable compaction daemon by default
-  * :ghissue:`626`: Allow enabling node decommissioning using string "true"
-  * (mango) Configurable default limit, defaults to 25.
-  * (mango) _design documents ignored when querying _all_docs
-  * (mango) add $allMatch selector
-  * Add local.d/default.d directories by default and document
-  * Improved INSTALL.* text files
-
-.. _release/2.1.x/fixes:
-
-Fixed Issues
-============
-
-The 2.1.0 release includes fixes for the following issues:
-
-* :issue:`1447`: X-Couch-Update-NewRev header is missed if custom headers are
-  specified in response of _update handler (missed in 2.0 merge)
-* :issue:`2731`: Authentication DB was not considered a system DB
-* :issue:`3010`: (Superseded fix for replication exponential backoff)
-* :issue:`3090`: Error when handling empty "Access-Control-Request-Headers"
-  header
-* :issue:`3100`: Fix documentation on require_valid_user
-* :issue:`3109`: 500 when include_docs=true for linked documents
-* :issue:`3113`: fabric:open_revs can return {ok, []}
-* :issue:`3149`: Exception written to the log if db deleted while there is a
-  change feed running
-* :issue:`3150`: Update all shards with stale=update_after
-* :issue:`3158`: Fix a crash when connection closes for _update
-* :issue:`3162`: Default ssl settings cause a crash
-* :issue:`3164`: Request fails when using
-  _changes?feed=eventsource&heartbeat=30000
-* :issue:`3168`: Replicator doesn't handle well writing documents to a target
-  db which has a small max_document_size
-* :issue:`3173`: Views return corrupt data for text fields containing non-BMP
-  characters
-* :issue:`3174`: max_document_size setting can be bypassed by issuing
-  multipart/related requests
-* :issue:`3178`: Fabric does not send message when filtering lots of documents
-* :issue:`3181`: function_clause error when adding attachment to doc in _users
-  db
-* :issue:`3184`: couch_mrview_compactor:recompact/1 does not handle errors in
-  spawned process
-* :issue:`3193`: fabric:open_revs returns multiple results when one of the
-  shards has stem_interactive_updates=false
-* :issue:`3199`: Replicator VDU function doesn't account for an already
-  malformed document in replicator db
-* :issue:`3202`: (mango) do not allow empty field names
-* :issue:`3220`: Handle timeout in _revs_diff
-* :issue:`3222`: (Fix) HTTP code 500 instead of 400 for invalid key during
-  document creation
-* :issue:`3231`: Allow fixing users' documents (type and roles)
-* :issue:`3232`: user context not passed down in fabric_view_all_docs
-* :issue:`3238`: os_process_limit documentation wrong
-* :issue:`3241`: race condition in couch_server if delete msg for a db is
-  received before open_result msg
-* :issue:`3245`: Make couchjs -S option take effect again
-* :issue:`3252`: Include main-coffee.js in release artifact (broken
-  CoffeeScript view server)
-* :issue:`3255`: Conflicts introduced by recreating docs with attachments
-* :issue:`3259`: Don't trap exits in couch_file
-* :issue:`3264`: POST to _all_docs does not respect conflicts=true
-* :issue:`3269`: view response can 'hang' with filter and limit specified
-* :issue:`3271`: Replications crash with 'kaboom' exit
-* :issue:`3274`: eof in couch_file can be incorrect after error
-* :issue:`3277`: Replication manager crashes when it finds _replicator db
-  shards which are not part of a mem3 db
-* :issue:`3286`: Validation function throwing unexpected json crashes with
-  function_clause
-* :issue:`3289`: handle error clause when calling fabric:open_revs
-* :issue:`3291`: Excessively long document IDs prevent replicator from making
-  progress
-* :issue:`3293`: Allow limiting length of document ID (for CouchDB proper)
-* :issue:`3305`: (mango) don't crash with invalid input to built in reducer
-  function
-* :issue:`3362`: DELETE attachment on non-existing document creates the
-  document, rather than returning 404
-* :issue:`3364`: Don't crash compactor when compacting process fails.
-* :issue:`3367`: Require server admin user for db/_compact and db_view_cleanup
-  endpoints
-* :issue:`3376`: Fix mem3_shards under load
-* :issue:`3378`: Fix mango full text detection
-* :issue:`3379`: Fix couch_auth_cache reinitialization logic
-* :issue:`3400`: Notify couch_index_processes on all shards when ddoc updated
-* :issue:`3402`: race condition in mem3 startup
-* :ghissue:`511`: (mango) Return false for empty list
-* :ghissue:`595`: Return 409 to PUT attachment with non-existent rev
-* :ghissue:`623`: Ensure replicator _active_tasks entry reports recent pending
-  changes value
-* :ghissue:`627`: Pass UserCtx to fabric's all_docs from mango query
-* :ghissue:`631`: fix couchdb_os_proc_pool eunit timeouts
-* :ghissue:`644`: Make couch_event_sup:stop/1 synchronous
-* :ghissue:`645`: Pass db open options to fabric_view_map for _view and _list
-  queries on _users DB
-* :ghissue:`648`: Fix couch_replicator_changes_reader:process_change
-* :ghissue:`649`: Avoid a race when restarting an index updater
-* :ghissue:`667`: Prevent a terrible race condition
-* :ghissue:`677`: Make replication filter fetch error for _replicate return a
-  404
-* Fix CORS ``max_age`` configuration parameter via Access-Control-Max-Age
-* Chunk missing revisions before attempting to save on target (improves
-  replication for very conflicted, very deep revision tree documents)
-* Allow w parameter for attachments
-* Return "Bad Request" when count in ``/_uuids`` exceeds max
-* Fix crashes when replicator db is deleted
-* Skip internal replication if changes already replicated
-* Fix encoding issues on ``_update/../doc_id`` and PUT attachments
diff --git a/src/whatsnew/2.2.rst b/src/whatsnew/2.2.rst
deleted file mode 100644
index a0a7da1..0000000
--- a/src/whatsnew/2.2.rst
+++ /dev/null
@@ -1,343 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _release/2.2.x:
-
-============
-2.2.x Branch
-============
-
-.. contents::
-    :depth: 1
-    :local:
-
-.. _release/2.2.x/upgrade:
-
-Upgrade Notes
-=============
-
-.. rst-class:: open
-
-* The minimum supported version of Erlang is now 17, not R16B03. Support for Erlang 21
-  is still ongoing and will be provided in a future release.
-
-* The CouchDB replication client can now use the ``/_session`` endpoint when
-  authenticating against remote CouchDB instances, improving performance since
-  re-authorization does not have to be performed with every request. Because of
-  this performance improvement, it is recommended to increase the PBKDF2 work
-  factor beyond the default ``10`` to a modern default such as ``10000``. This is done
-  via the local ini file setting ``[couch_httpd_auth] iterations = 10000``.
-
-  Do **not** do this if an older version of CouchDB is replicating TO this instance or
-  cluster regularly, since CouchDB < 2.2.0 must perform authentication on every request
-  and replication performance will suffer.
-
-  A future version will make this increased number of iterations a default.
-
-* :ghissue:`820`, :ghissue:`1032`: Multiple queries can now be made at the
-  ``POST /{db}/_all_docs/queries``, ``POST /{db}/_design_docs/queries`` and
-  ``POST /{db}/_local_docs/queries`` endpoints. Also, a new endpoint
-  ``POST /{db}/_design/{ddoc}/_view/{view}/queries`` has been introduced to replace
-  the ``?queries`` parameter formerly provided for making multiple queries to a view.
-  The old ``?queries`` parameter *is now deprecated and will be removed in a future
-  release of CouchDB.*
-
-* The maximum http request limit, which had been lowered in 2.1.0, has been re-raised
-  to a 4GB limit for now. (:ghissue:`1446`). Ongoing discussion about the path forward
-  for future releases is available in :ghissue:`1200` and :ghissue:`1253`.
-
-* :ghissue:`1118`: The least recently used (LRU) cache of databases is now only updated
-  on database write, not read. This has led to significant performance enhancements
-  on very busy clusters. To restore the previous behaviour, your local ini file can
-  contain the block ``[couchdb] update_lru_on_read = true``.
-
-* :ghissue:`1153`: The CouchDB replicator can now make use of the ``/_session`` endpoint
-  rather than relying entirely on HTTP basic authentication headers. This can greatly
-  improve replication performance. We encourage you to upgrade any nodes or clusters that
-  regularly act as replication clients to use this new feature, which is enabled by
-  default (:ghissue:`1462`).
-
-* :ghissue:`1283`: The ``[couchdb] enable_database_recovery`` feature, which only
-  soft-deletes databases in response to a ``DELETE /{db}`` call, is now documented in
-  ``default.ini``.
-
-* :ghissue:`1330`: CouchDB externals and OS daemons are now officially deprecated and no
-  longer documented. Support for these features will be completely removed in a future
-  release of CouchDB (probably 3.0.0).
-
-* :ghissue:`1436`: CouchDB proxy authentication now uses a proper ``chttpd_auth``
-  module, simplifying configuration in local ini files. While this change is
-  backward-compatible, it is best to update your local ini files to reference the
-  new ``{chttpd_auth, proxy_authentication_handler}`` handler rather than the
-  ``couch_httpd_auth`` version, as ``couch_httpd`` is in the process of being deprecated
-  completely.
-
-* :ghissue:`1476`, :ghissue:`1477`: The obsolete `update_notification` feature, which
-  was replaced by `/{db}/_changes` feeds circa CouchDB 1.2, has been completely
-  removed. This feature never worked in 2.0 for databases, only for shards, making
-  it effectively useless.
-
-
-.. _release/2.2.0:
-
-Version 2.2.0
-=============
-
-Features
---------
-
-.. rst-class:: open
-
-* Much improved documentation. Highlights include:
-
-  * A complete rewrite of the :ref:`sharding <cluster/sharding>` documentation.
-  * Developer installation notes (``INSTALL.*.rst``)
-  * Much of the content of the original CouchDB Wiki has been imported into the
-    official docs. (The old CouchDB Wiki is in the process of being deprecated.)
-
-* Much improved Fauxton functionality. Highlights include:
-
-  * Search support in the code editor
-  * Support for relative Fauxton URLs (*i.e.*, not always at ``/_utils``)
-  * Replication setup enhancements for various authentication mechanisms
-  * Fixes for IE10, IE11, and Edge (we hope...)
-  * Resolving conflicts of design documents is now allowed
-
-* :ghissue:`496`, :issue:`3287`: A new pluggable storage engine framework has landed in
-  CouchDB. This internal refactor makes it possible for CouchDB to use different backends
-  for storing the base database file itself. The refactor included a full migration of
-  the existing "legacy" storage engine into the new framework.
-* :ghissue:`603`: When creating a new database on a cluster without quorum, CouchDB will
-  now return a ``202 Accepted`` code if possible, indicating that at least one node
-  has written the database record to disk, and that other nodes will be updated as they
-  return to an online state. This replaces the former ``500`` internal error.
-* :ghissue:`1136`, :ghissue:`1139`: When deleting a database in a cluster without
-  quorum, CouchDB will no longer throw a ``500`` error status, but a ``202`` as long as
-  at least one node records the deletion, or a ``200`` when all nodes respond. This fix
-  parallels the one made for :ghissue:`603`.
-* :ghissue:`745`: CouchDB no longer fails to complete replicating databases with
-  large attachments. The fix for this issue included several related changes:
-
-  * The maximum http request limit, which had been lowered in 2.1.0, has been re-raised
-    to a 4GB limit for now. (:ghissue:`1446`). Ongoing discussion about the path forward
-    for future releases is available in :ghissue:`1200` and :ghissue:`1253`.
-  * An update to the replicator http client that improves active socket accounting,
-    without which CouchDB can cease to be responsive over the main http interface
-    (:ghissue:`1117`)
-  * The replicator's http client no longer performs unconditional retries on failure
-    (:ghissue:`1177`)
-  * A path by which CouchDB could lose track of its RPC workers during multipart
-    attachment processing was removed. (:ghissue:`1178`)
-  * When CouchDB transmits a ``413 Payload Too Large`` response on attachment upload,
-    it now correctly flushes the receive socket before closing the connection to avoid
-    a TCP reset, and to give the client a better chance of parsing the 413 response. In
-    tandem, the replicator http client correctly closes its own socket after processing
-    any 413 response. (:ghissue:`1234`)
-  * A ``fabric`` process to receive unchunked attachments can no longer orphan processes
-    that leave unprocessed binaries in memory until all available memory is exhausted.
-    (:ghissue:`1264`).
-  * When using CouchDB's native SSL responder (port 6984 by default), sessions are now
-    timed out by default after 300s. This is to work around RAM explosion in the BEAM VM
-    when using the Erlang-native SSL libraries. (:ghissue:`1321`)
-
-* :ghissue:`822`: A new endpoint :ref:`api/server/dbs_info` has been added to return
-  information about a list of specified databases. This endpoint can take the place of
-  multiple queries to ``/{db}``.
-* :ghissue:`875`, :ghissue:`1030`: ``couch_peruser`` installations can now specify a
-  default ``q`` value for each peruser-created database that is different from the
-  cluster's ``q`` value. Set this in your local ini file, under ``[couch_peruser] q``.
-* :ghissue:`876`, :ghissue:`1068`: The ``couch_peruser`` database prefix is now
-  configurable through your local ini file, under ``[couch_peruser] database_prefix``.
-* :ghissue:`887`: Replicator documents can now include parameters for target database
-  creation, such as ``"create_target_params": {"q": "1"}``. This can assist in
-  database resharding or placement.
-* :ghissue:`977`: When using ``COPY`` to copy a document, CouchDB no longer fails if
-  the new ID includes Unicode characters.
-* :ghissue:`1095`: Recognize the environment variables ``ARGS_FILE``, ``SYSCONFIG_FILE``,
-  ``COUCHDB_ARGS_FILE`` and ``COUCHDB_SYSCONFIG_FILE`` to override where CouchDB looks
-  for the ``vm.args`` and ``sys.config`` files at startup.
-* :ghissue:`1101`, :ghissue:`1425`: Mango can now be used to find conflicted documents
-  in a database by adding ``conflicts: true`` to a mango selector.
-* :ghissue:`1126`: When queried back after saving, replication documents no longer
-  contain sensitive credential information (such as basic authentication headers).
-* :ghissue:`1203`:
-
-    * The compaction daemon now has a snooze period, during which it waits to start
-      the next compaction after finishing the previous one. This value is useful in
-      setups with many databases (e.g. with ``couch_peruser``) or many design docs,
-      which can cause a CPU spike every ``check_interval`` seconds. The setting can
-      be adjusted in your local ini file via ``[compaction_daemon] snooze_period``.
-      The current default is a 3 second pause.
-
-    * The ``check_interval`` has been raised from 300 seconds to 3600 seconds.
-
-    * A ``notice``-level log about closing view indexes has been demoted to the
-      ``debug`` level. In a scenario with many design docs, this would create
-      significant load on the logging subsystem every
-      ``[compaction_daemon] check_interval`` seconds for no discernible benefit.
-
-* :ghissue:`1309`, :ghissue:`1435`: CouchDB now reports the git sha at the time of build
-  in the top-level ``GET /`` version string, in a new ``git_sha`` key. This can be used
-  to help ensure an unmodified version of CouchDB has been built and is running on any
-  given machine.
-* :issue:`2971`, :ghissue:`1346`: CouchDB now includes a new builtin reduce function
-  ``_approx_count_distinct``, that uses a HyperLogLog algorithm to estimate the number of
-  distinct keys in the view index. The precision is currently fixed to 2^11 observables,
-  and therefore uses approximately 1.5KB of memory.
-* :ghissue:`1377`: CouchDB finalization of view reduces now occurs at the coordinator
-  node. This simplified the built-in ``_stats`` function.
-* :ghissue:`1392`: When running CouchDB under Erlang 19.0 or newer, messages can now be
-  stored off the process heap. This is extremely useful for Erlang processes that can
-  have a huge number of messages in their mailboxes, and is now enabled for ``couch_server``,
-  ``couch_log_server``, ``ddoc_cache``, ``mem3_shards``, and ``rexi_server`` whenever
-  possible.
-* :ghissue:`1424`: The CouchDB native SSL/TLS server ``httpsd`` now accepts socket-level
-  configuration options through the ``[httpsd] server_options`` ini file setting.
-* :ghissue:`1440`: CouchDB can now be configured to prevent non-admins from accessing
-  the ``GET /_all_dbs`` method by specifying ``[chttpd] admin_only_all_dbs = true`` in
-  your local ini file(s). The ``true`` setting will become default in future versions.
-* :ghissue:`1171`, :ghissue:`1445`: CouchDB can now be configured to use the internal
-  Erlang MD5 hash function when not available in the external environment (e.g. FIPS
-  enabled CentOS) at compile time with the ``configure`` flag ``--enable-md5``. Because
-  this implementation is slower, it is not recommended in the general case.
-
-Performance
------------
-
-.. rst-class:: open
-
-* :ghissue:`958`: The revision stemming algorithm was optimized down from *O(N^2)* to
-  *O(N)* via a depth-first search approach, and then further improved by calling the
-  stemming operation only when necessary. This new algorithm can be disabled by
-  setting the option ``[couchdb] stem_interactive_updates = false`` if necessary.
-* :ghissue:`1246`: CouchDB now checks for request authorization only once per each
-  database request, improving the performance of any request that requires
-  authorization.
-
-Bugfixes
---------
-
-.. rst-class:: open
-
-* :ghissue:`832`, :ghissue:`1064`: Tracking of Couch logging stats has been added back
-  into the per-node ``/_node/<node-name>/_stats`` endpoint.
-* :ghissue:`953`, :ghissue:`973`: Return ``404 Not Found`` on ``GET /_scheduler``,
-  not ``405 Method Not Allowed``.
-* :ghissue:`955`: The ``/{db}/_bulk_docs`` endpoint now correctly responds with a
-  ``400 Bad Request`` error if the ``new_edits`` parameter is not a boolean.
-* :ghissue:`969`: CouchDB now returns ``offset`` and ``update_seq`` values when ``keys``
-  are provided to the ``GET`` or ``POST`` ``/{db}/_all_docs?update_seq=true`` endpoints.
-  This was affecting PouchDB compatibility.
-* :ghissue:`984`, :ghissue:`1434`: CouchDB views now retain their ``update_seq`` after
-  compaction, preventing potentially expensive client-side view rewinds after compaction.
-* :ghissue:`1012`: Address a theoretical race condition the replication scheduler could
-  encounter when trying to determine if the cluster is "stable" enough to resume
-  handling replication-introduced document updates.
-* :ghissue:`1051`: Return a user-friendly error message when attempting to create a
-  CouchDB user with an invalid password field (non-string).
-* :ghissue:`1059`: DB-specific compaction configurations were not working correctly. The
-  syntax now also supports shard-level custom compaction configuration if desired (which
-  it probably isn't).
-* :ghissue:`1097`: Compaction daemon will not crash out when trying to check specific
-  file system mounts that are not "real" file systems (like ``/run`` on Linux).
-* :ghissue:`1198`: Fauxton is no longer available on the node-local port (5986, by
-  default). The node-local port is only to be used for specific administrative tasks;
-  removing the Fauxton interface prevents mistaking the node-local port as the correct
-  CouchDB port (5984, by default).
-* :ghissue:`1165`: ``validate_doc_update`` view functions can once again be implemented
-  directly in Erlang (after enabling the optional Erlang view server).
-* :ghissue:`1223`: The ``couch_config`` application now correctly handles non-persistent
-  integer and boolean-valued configuration changes.
-* :ghissue:`1242`: ``couch_os_daemons`` may now reside in directories with spaces.
-* :ghissue:`1258`: CouchDB will now successfully login users, even if password encryption
-  is very slow.
-* :ghissue:`1276`: The replication scheduler status for a repeatedly erroring job now
-  correctly reflects the `crashing` state in more scenarios.
-* :ghissue:`1375`: If CouchDB fails authorization but passes authentication, it no longer
-  drops the ``user_ctx`` out of the request.
-* :ghissue:`1390`: The active size of views (as returned in a database info response)
-  is no longer incorrectly calculated in a way that could occasionally make it larger
-  than the actual on-disk file size.
-* :ghissue:`1401`: CouchDB Erlang views no longer crash in the ``couch_native`` process
-  with an unexpected ``function_clause`` error.
-* :ghissue:`1419`: When deleting files during database compaction, CouchDB now
-  properly ignores the ``enable_database_recovery`` configuration flag, rather than
-  always retaining the old, renamed, uncompacted database file.
-* :ghissue:`1439`: The CouchDB setup wizard now correctly validates bind_addresses. It
-  also no longer logs credentials by moving logging of internal wizard setup steps to
-  the ``debug`` level from the ``notice`` level.
-
-Mango
------
-
-.. rst-class:: open
-
-* :ghissue:`816`, :ghissue:`962`, :ghissue:`1038`: If a user specifies a value for
-  ``use_index`` that is not valid for the selector (does not meet coverage requirements
-  or proper sort fields), attempt to fall back to a valid index or full DB scan rather
-  than returning a ``400``. If we fall back, populate a ``warning`` field in the
-  response. Mango also tries to use indexes where ``$or`` may select a field only when
-  certain values are present.
-* :ghissue:`849`: When ``{"seq_indexed": true}`` is specified, a badmatch error was
-  returned. This is now fixed.
-* :ghissue:`927`, :ghissue:`1310`: Error messages when attempting to sort incorrectly are
-  now actually useful.
-* :ghissue:`951`: When using ``GET /{db}/_index``, only use a partial filter selector for
-  an index if it is set to something other than the default.
-* :ghissue:`961`: Do not prefix ``_design/`` to a Mango index name whose user-specified
-  name already starts with ``_design/``.
-* :ghissue:`988`, :ghissue:`989`: When specifying a ``use_index`` value with an invalid
-  index, correctly return a ``400 Bad Request`` showing that the requested index is
-  invalid for the request specified.
-* :ghissue:`998`: The fix for :ref:`CVE 2017-12635 <cve/2017-12635>` presented a breaking
-  change to Mango's ``/{db}/_find``, which would evaluate all instances of all JSON
-  fields in a selector. Mango is now tested to ensure it only considers the last instance
-  of a field, silently ignoring those that appear before it.
-* :ghissue:`1014`: Correctly deduce list of indexed fields in a selector when nested
-  ``$and`` operators are specified.
-* :ghissue:`1023`: Fix an unexpected ``500`` error if ``startkey`` and ``endkey`` in a
-  Mango selector were reversed.
-* :ghissue:`1067`: Prevent an ``invalid_cast`` crash when the ``couch_proc_manager`` soft
-  limit for processes is reached and mango idle processes are stopped.
-* :ghissue:`1336`: The built-in fields ``_id`` and ``rev`` will always be covered by any
-  index, and Mango now correctly ignores their presence in any index that explicitly
-  includes them for selector matching purposes.
-* :ghissue:`1376`: Mango now appropriately selects some indexes as usable for queries,
-  even if not all columns for an index are added to the query's sort field list.
-* Multiple fixes related to using Mango as a front-end for full text indexing (a feature
-  not shipped with CouchDB, but for which support is in place as a compile-time addon).
-
-Other
------
-
-The 2.2.0 release also includes the following minor improvements:
-
-.. rst-class:: open
-
-* Developers can, at build time, enable curl libraries & disable Fauxton and documentation
-  builds by specifying the new ``--dev`` option to the ``configure`` script.
-* The ``mochiweb`` dependency was bumped to version 2.17.0, in part to address the
-  difficult :ghissue:`745` issue.
-* Improved compatibility with newer versions of Erlang (20.x)
-* Improved release process for CouchDB maintainers and PMC members.
-* Multiple test suite improvements, focused on increased coverage, speed, and
-  reliability.
-* Improvements to the Travis CI and Jenkins CI setups, focused on improved long-term
-  project maintenance and automatability.
-* Related improvements to the CouchDB deb/rpm packaging and Docker repositories to
-  make deployment even easier.
-* :ghissue:`1007`: Move ``etc/default.ini`` entries back into ``[replicator]`` section
-  (incorrectly moved to ``[couch_peruser]`` section)
-* :ghissue:`1245`: Increased debug-level logging for shard open errors is now available.
-* :ghissue:`1296`: CouchDB by default now always invokes the SMP-enabled BEAM VM, even
-  on single-processor machines. A future release of Erlang will remove the non-SMP BEAM
-  VM entirely.
-* A pony! OK, no, not really. If you got this far...thank you for reading.
diff --git a/src/whatsnew/2.3.rst b/src/whatsnew/2.3.rst
deleted file mode 100644
index 7e2b2d0..0000000
--- a/src/whatsnew/2.3.rst
+++ /dev/null
@@ -1,316 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _release/2.3.x:
-
-============
-2.3.x Branch
-============
-
-.. contents::
-    :depth: 1
-    :local:
-
-.. _release/2.3.x/upgrade:
-
-Upgrade Notes
-=============
-
-.. rst-class:: open
-
-* :ghissue:`1602`: To improve security, there have been major changes in the
-  configuration of query servers, SSL support, and HTTP global handlers:
-
-    1. Query servers
-
-    Query servers are NO LONGER DEFINED in the .ini files, and can
-    no longer be altered at run-time.
-
-    The JavaScript and CoffeeScript query servers continue to be enabled
-    by default. Setup differences have been moved from default.ini to
-    the ``couchdb`` and ``couchdb.cmd`` start scripts respectively.
-
-    Additional query servers can now be configured using environment
-    variables:
-
-    .. code-block:: bash
-
-        export COUCHDB_QUERY_SERVER_PYTHON="/path/to/python/query/server.py with args"
-        couchdb
-
-    where the last segment in the environment variable (``_PYTHON``) matches
-    the usual lowercase(!) query language in the design doc
-    ``language`` field (here, ``python``).
-
-    Multiple query servers can be configured by using more environment
-    variables.
-
-    You can also override the default servers if you need to set
-    command-line options (such as ``couchjs`` stack size):
-
-    .. code-block:: bash
-
-        export COUCHDB_QUERY_SERVER_JAVASCRIPT="/path/to/couchjs /path/to/main.js -S <STACKSIZE>"
-        couchdb
-
-    2. Native Query Servers
-
-    The mango query server continues to be enabled by default. The Erlang
-    query server continues to be disabled by default. This change adds
-    a ``[native_query_servers] enable_erlang_query_server = BOOL`` setting
-    (defaults to ``false``) to enable the Erlang query server.
-
-    If the legacy configuration for enabling the query server is detected,
-    that is counted as a ``true`` setting as well, so existing configurations
-    continue to work just fine.
-
-    3. SSL Support
-
-    Enabling SSL support in the ini file is now easier:
-
-    .. code-block:: ini
-
-        [ssl]
-        enable = true
-
-    If the legacy httpsd configuration is found in your ini file, this will
-    still enable SSL support, so existing configurations do not need to be
-    changed.
-
-    4. HTTP global handlers
-
-    These are no longer defined in the default.ini file, but have been
-    moved to the couch.app context. If you need to customize your handlers,
-    you can modify the app context using a couchdb.config file as usual.
-
-* :ghissue:`1602`: Also to improve security, the deprecated ``os_daemons`` and
-  ``couch_httpd_proxy`` functionality has been completely removed ahead of the planned
-  CouchDB 3.0 release. We recommend the use of OS-level daemons such as runit, sysvinit,
-  systemd, upstart, etc. to launch and maintain OS daemons instead, and the use of
-  a reverse proxy server in front of CouchDB (such as haproxy) to proxy access to other
-  services or domains alongside CouchDB.
-* :ghissue:`1543`: The node-local (default port 5986) ``/_restart`` endpoint has been
-  replaced by the clustered (default port 5984) ``/_node/$node/_restart`` and
-  ``/_node/_local/_restart`` endpoints. The node-local endpoint has been removed.
-* :ghissue:`1764`: All python scripts shipped with CouchDB, including ``couchup`` and the
-  ``dev/run`` development cluster script, now specify and require Python 3.x.
-* :ghissue:`1396`: CouchDB is now compatible with Erlang 21.x.
-* :ghissue:`1680`: The embedded version of ``rebar`` used to build CouchDB has been
-  updated to the last version of ``rebar2`` available. This assists in building on
-  non-x86 platforms.
-* :ghissue:`1857`: Refuse building with known bad versions of Erlang.
-
-.. _release/2.3.1:
-
-Version 2.3.1
-=============
-
-Features
---------
-
-.. rst-class:: open
-
-* :ghissue:`1811`: Add new ``/{db}/_sync_shards`` endpoint (admin-only).
-* :ghissue:`1870`: Update to mochiweb 2.19.0. See also :ghissue:`1875`.
-* :ghissue:`1857`: Refuse building with known bad versions of Erlang.
-* :ghissue:`1880`: Compaction: Add snooze_period_ms for finer tuning.
-
-Bugfixes
---------
-
-.. rst-class:: open
-
-* :ghissue:`1795`: Filter out empty missing_revs results in ``mem3_rep``.
-* :ghissue:`1384`: Fix ``function_clause`` error on invalid DB ``_security`` objects.
-* :ghissue:`1841`: Fix ``end_time`` field in ``/_replicate`` response.
-* :ghissue:`1860`: Fix read repair in a mixed cluster environment.
-* :ghissue:`1862`: Fix ``fabric_open_doc_revs``.
-* :ghissue:`1865`: Support purge requests with more than 100 doc ids.
-* :ghissue:`1867`: Fix timeout in ``chttpd_purge_tests``.
-* :ghissue:`1766`: Add default fabric request timeouts.
-* :ghissue:`1810`: Requests return 400 Bad Request when URL length exceeds 1460
-  characters. See :ghissue:`1870` for details.
-* :ghissue:`1799`: Restrict ``_purge`` to server admin.
-* :ghissue:`1874`: Fix inability to set keys with regex symbols in them.
-* :ghissue:`1901`: Fix badarg crash on invalid rev for individual doc update.
-* :ghissue:`1897`: Fix ``from_json_obj_validate`` crash when provided rev isn't
-  a valid hex.
-* :ghissue:`1803`: Use the same salt for admin passwords on cluster setup.
-* :ghissue:`1053`: Fix python2 compatibility for ``couchup``.
-* :ghissue:`1905`: Fix python3 compatibility for ``couchup``.
-
-.. _release/2.3.0:
-
-Version 2.3.0
-=============
-
-Features
---------
-
-.. rst-class:: open
-
-* (Multiple) Clustered purge is now available. This feature restores the CouchDB 1.x
-  ability to completely remove any record of a document from a database. Conditions
-  apply; to use the feature safely, and for full details, read the complete
-  :ref:`cluster/purging` documentation.
-* :ghissue:`1658`: A new config setting is available, allowing an administrator to
-  configure an initial list of nodes that should be contacted when a node boots up.
-  Nodes in the ``seedlist`` that are successfully reached will be added to that node's
-  ``_nodes`` database automatically, triggering a distributed Erlang connection and
-  replication of the internal system databases to the new node. This can be used instead
-  of manual config or the cluster setup wizard to bootstrap a cluster. The progress of
-  the initial seeding of new nodes is exposed at the ``GET /_up`` endpoint.
-* Replication supports IPv6-only peers after updating the ibrowse dependency.
-* :ghissue:`1708`: The UUID of the server/cluster is once again exposed in the
-  ``GET /`` response. This was a regression from CouchDB 1.x.
-* :ghissue:`1722`: Stats counts between job runs of the replicator are no longer reset
-  on job restart.
-* :ghissue:`1195`, :ghissue:`1742`: CouchDB's ``_bulk_get`` implementation now supports
-  the ``multipart/mixed`` and ``multipart/related`` content types if requested,
-  extending compatibility with third-party replication clients.
-
-Performance
------------
-
-.. rst-class:: open
-
-* :ghissue:`1409`: CouchDB no longer forces the TCP receive buffer to a fixed size
-  of 256KB, allowing the operating system to dynamically adjust the buffer size. This
-  can lead to significantly improved network performance when transferring large
-  attachments.
-* :ghissue:`1423`: Mango selector matching now occurs at the shard level, reducing the
-  network traffic within a cluster for a mango query.
-* :ghissue:`1423`: Long running operations at the node level could exceed the inter-node
-  timeout, leading to a fabric timeout error in the logfile and a cancellation of the
-  task. Nodes can now ping to stop that from happening.
-* :ghissue:`1560`: The recording of external data sizes for attachments has been
-  optimized.
-* :ghissue:`1586`: When cleaning up outdated secondary index files, the search is limited
-  to the index directory of a specific database.
-* :ghissue:`1593`: The ``couch_server`` ETS table now has the ``read_concurrency``
-  option set, improving access to the global list of open database handles.
-* :ghissue:`1593`: Messages to update the least-recently used (LRU) cache are not
-  sent when the ``[couchdb] update_lru_on_read`` setting is disabled.
-* :ghissue:`1625`: All nodes in a cluster now run their own ``rexi`` server.
-
-Bugfixes
---------
-
-.. rst-class:: open
-
-* :ghissue:`1484`: ``_stats`` now correctly handles the case where a map function emits
-  an array of integers. This bug was introduced in 2.2.0.
-* :ghissue:`1544`: Certain list functions could return a ``render_error`` error
-  intermittently.
-* :ghissue:`1550`: Replicator ``_session`` support was incompatible with CouchDB
-  installations using the ``require_valid_user = true`` setting.
-* :ghissue:`1571`: Under very heavy load, it was possible that ``rexi_server`` could
-  die in such a way that it is never restarted, leaving a cluster without the ability
-  to issue RPC calls, effectively rendering the cluster useless.
-* :ghissue:`1574`: The built-in ``_sum`` reduce function has been improved to check
-  if the objects being summed are not overflowing the view storage. Previously, there
-  was no protection for ``_sum``-introduced overflows.
-* :ghissue:`1582`: Database creation parameters now have improved validation, giving a
-  more readable error on invalid input.
-* :ghissue:`1588`: A missing security check has been restored for the noop
-  ``/db/_ensure_full_commit`` call to restore database validation checks.
-* :ghissue:`1591`: CouchDB now creates missing shard files when accessing a database
-  if necessary. This handles the situation when, on database creation, no nodes were
-  capable of creating any of the shard files required for that database.
-* :ghissue:`1568`: CouchDB now logs a warning if a changes feed is rewound to 0. This
-  can help diagnose problems in busy or malfunctioning clusters.
-* :ghissue:`1596`: It is no longer possible that a busy ``couch_server``, under a
-  specific ordering and timing of events, will incorrectly track ``open_async``
-  messages in its mailbox.
-* :ghissue:`1601`, :ghissue:`1654`: CouchDB now logs better when an error causes it to
-  read past the EOF of a database shard. The check for whether CouchDB is trying to read
-  too many bytes has been correctly separated out from the error indicating it has
-  attempted to read past the EOF.
-* :ghissue:`1613`: Local nodes are now filtered out during read repair operations.
-* :ghissue:`1636`: Fixed a memory leak that occurred when a replication over HTTPS
-  encountered an error.
-* :ghissue:`1635`: ``/_replicate`` jobs are no longer restarted if parameters haven't
-  changed.
-* :ghissue:`1612`: JavaScript rewrite functions now send the body of the request to
-  the rewritten endpoint.
-* :ghissue:`1631`: The replicator no longer crashes if the user has placed an
-  invalid VDU function into one of the ``_replicator`` databases.
-* :ghissue:`1644`, :ghissue:`1647`: It is no longer possible to create illegally-named
-  databases within the reserved system space (``_`` prefix.)
-* :ghissue:`1650`: ``_bulk_get`` is once again operational for system databases such
-  as ``_users``.
-* :ghissue:`1652`: Access to ``/_active_tasks`` is once again restricted to server
-  admins only.
-* :ghissue:`1662`: The ``couch_log`` application no longer crashes when new, additional
-  information is supplied by a crashing application, or when any of its own children are
-  restarted.
-* :ghissue:`1666`: Mango could return an error that would crash the
-  ``couch_query_servers`` application. This is no longer the case.
-* :ghissue:`1655`: Configuration of ``ets_lru`` in ``chttpd`` now performs proper error
-  checking of the specified config value.
-* :ghissue:`1667`: The ``snappy`` dependency has been updated to fix a memory allocation
-  error.
-* :ghissue:`1683`: Attempting to create a local document with an invalid revision no
-  longer throws a ``badarg`` exception. Also, when setting ``new_edits`` to ``false``
-  and performing a bulk write operation, local documents are no longer written into the
-  wrong btree. Finally, it is no longer possible to create a document with an empty
-  ID during a bulk operation with ``new_edits`` set to ``false``.
-* :ghissue:`1721`: The ``couchup`` convenience script for upgrading from CouchDB 1.x
-  now also copies a database's ``_security`` object on migration.
-* :ghissue:`1672`: When checking the status of a view compaction immediately after
-  starting it, the ``total_changes`` and ``changes_done`` fields are now immediately
-  populated with valid values.
-* :ghissue:`1717`: If the ``.ini`` config file is read only, an attempt to update the
-  config through the HTTP API will now result in a proper ``eacces`` error response.
-* :ghissue:`1603`: CouchDB now returns the correct ``total_rows`` result when querying
-  ``/{db}/_design_docs``.
-* :ghissue:`1629`: Internal load validation functions no longer incorrectly hold open
-  a deleted database or its host process.
-* :ghissue:`1746`: HTTP API access by server admins defined in the ini file is no
-  longer recorded by the auth cache as a miss in the statistics.
-* :ghissue:`1607`: The replicator no longer fails to re-authenticate to open a remote
-  database when its session cookie times out due to a VDU function forbidding writes
-  or a non-standard cookie expiration duration.
-* :ghissue:`1579`: The compaction daemon no longer compacts only a single view shard
-  for databases with a ``q`` value greater than 1.
-* :ghissue:`1737`: CouchDB 2.x now performs as well as 1.x when using a ``_doc_ids``
-  or ``_design_docs`` filter on a changes feed.
-
-Mango
------
-
-.. rst-class:: open
-
-Other
------
-
-The 2.3.0 release also includes the following minor improvements:
-
-.. rst-class:: open
-
-* Improved test cases:
-
-  * The Elixir test suite has been merged. These test cases are intended to replace the
-    aging, unmaintainable JavaScript test suite, and help reduce our dependency on
-    Mozilla Spidermonkey 1.8.5. The test suite does not yet cover all of the tests that
-    the JS test suite does. Once it achieves full coverage, the JS test suite will be
-    removed.
-  * Many racy test cases improved for reliable CI runs.
-  * The Makefile targets for ``list-eunit-*`` now work correctly on macOS.
-  * :ghissue:`1732`, :ghissue:`1733`, :ghissue:`1736`: All of the test suites run and
-    pass on the Windows platform once again.
-
-* :ghissue:`1597`: Off-heap messages, a new feature in Erlang 19+, can now be disabled
-  per module if desired.
-* :ghissue:`1682`: A new ``[feature_flags]`` config section exists for the purpose of
-  enabling or disabling experimental features by CouchDB developers.
-* A narwhal! OK, no, not really. If you got this far...thank you for reading.
diff --git a/src/whatsnew/3.0.rst b/src/whatsnew/3.0.rst
deleted file mode 100644
index b70c418..0000000
--- a/src/whatsnew/3.0.rst
+++ /dev/null
@@ -1,706 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _release/3.0.x:
-
-============
-3.0.x Branch
-============
-
-.. contents::
-    :depth: 1
-    :local:
-
-.. _release/3.0.x/upgrade:
-
-Upgrade Notes
-=============
-
-* :ghissue:`2228`: The default maximum document size has been reduced to 8MB. This means
-  that databases with larger documents will not be able to replicate into CouchDB 3.0
-  correctly without modification. This change has been made in preparation for
-  anticipated hard upper limits on document size imposed by CouchDB 4.0. For 3.x,
-  the max document size setting can be relaxed via the ``[couchdb] max_document_size``
-  config setting.
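-
-  For example, to raise the limit back towards the previous default, one might set
-  (value in bytes, illustrative only):
-
-  .. code-block:: ini
-
-    [couchdb]
-    max_document_size = 67108864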
-
-* :ghissue:`2228`: The default database sharding factor ``q`` has been reduced to 2 by
-  default. This, combined with automated database resharding (see below), is a better
-  starting place for new CouchDB databases. As in CouchDB 2.x, specify ``?q=#`` to
-  change the value upon database creation if desired. The default can be changed
-  via the config ``[cluster] q`` setting.
-
-* :ghissue:`1523`, :ghissue:`2092`, :ghissue:`2336`, :ghissue:`2475`: The "node-local"
-  HTTP interface, by default exposed on port 5986, has been removed. All functionality
-  previously available at that port is now available on the main, clustered interface (by
-  default, port 5984). Examples:
-
-  .. code-block:: javascript
-
-    GET /_node/{nodename}/_stats
-    GET /_node/{nodename}/_system
-    GET /_node/{nodename}/_all_dbs
-    GET /_node/{nodename}/_uuids
-    GET /_node/{nodename}/_config
-    GET /_node/{nodename}/_config/couchdb/uuid
-    POST /_node/{nodename}/_config/_reload
-    GET /_node/{nodename}/_nodes/_changes?include_docs=true
-    PUT /_node/{nodename}/_dbs/{dbname}
-    POST /_node/{nodename}/_restart
-    GET /_node/{nodename}/{db-shard}
-    GET /_node/{nodename}/{db-shard}/{doc}
-    GET /_node/{nodename}/{db-shard}/{ddoc}/_info
-
-  ...and so on. Documentation has been updated to reflect this change.
-
-* :ghissue:`2389`: CouchDB 3.0 now requires a server admin user to be defined at
-  startup, or will print an error message and exit. If you do not have one, be sure
-  to :ref:`create an admin user<config/admins>`. (The Admin Party is now over.)
-
-.. figure:: ../../images/23379351593_0c480537de_q.jpg
-  :align: center
-  :alt: Dizzy the cat with a Santa hat.
-
-  CC-BY-NC 2.0: `hehaden @ Flickr <https://www.flickr.com/photos/hellie55/23379351593/>`_
-
-* :ghissue:`2576`: CouchDB 3.0 now requires admin-level access for the ``/_all_dbs``
-  endpoint.
-
-* :ghissue:`2339`: All databases are now created by default as admin-only. That is, the
-  default new database ``_security`` object is now:
-
-  .. code-block:: javascript
-
-    {
-      "members" : { "roles" : [ "_admin" ] },
-      "admins" : { "roles" : [ "_admin" ] }
-    }
-
-  This can be changed after database creation.
-
-* Due to code changes in :ghissue:`2324`, it is not possible to upgrade transparently from
-  CouchDB 1.x to 3.x. In addition, the ``couchup`` utility has been removed from CouchDB
-  3.0 by :ghissue:`2399`. If you are upgrading from CouchDB 1.x, you must first upgrade
-  to CouchDB 2.3.1 to convert your database and indexes, using ``couchup`` if desired.
-  You can then upgrade to CouchDB 3.0. Or, you can start a new CouchDB 3.0 installation
-  and replicate directly from 1.x to 3.0.
-
-* :ghissue:`1833`, :ghissue:`2358`, :ghissue:`1871`, :ghissue:`1857`: CouchDB 3.0 supports
-  running only under the following Erlang/OTP versions:
-
-  * 19.x - "soft" support only. No longer tested, but should work.
-  * 20.x - must be newer than 20.3.8.11 (20.0, 20.1, 20.2 versions all invalid)
-  * 21.x - for 21.2, must be newer than 21.2.3
-  * 22.x - for 22.0, must be newer than 22.0.5
-
-* :ghissue:`1804`: By default, views are limited to return a maximum of 2**28 (268435456)
-  results. This limit can be configured separately for views and partitioned views via
-  the ``query_limit`` and ``partition_query_limit`` values in the ini file
-  ``[query_server_config]`` section.
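-
-  For example (values illustrative only):
-
-  .. code-block:: ini
-
-    [query_server_config]
-    query_limit = 268435456
-    partition_query_limit = 268435456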
-
-* After upgrading all nodes in a cluster to 3.0, add ``[rexi] use_kill_all = true`` to
-  ``local.ini`` to save some intra-cluster network bandwidth.
-
-Deprecated feature removal
---------------------------
-
-The following features, deprecated in CouchDB 2.x, have been removed or replaced in
-CouchDB 3.0:
-
-* :ghissue:`2089`, :ghissue:`2128`, :ghissue:`2251`: Local endpoints for replication
-  targets, which never functioned as expected in CouchDB 2.x, have been completely
-  removed. When replicating databases, always specify a full URL for the source and
-  target. In addition, the node local ``_replicator`` database is no longer automatically
-  created.
-
-* :ghissue:`2163`: The ``disk_size`` and ``data_size`` fields have been retired from the
-  database info object returned by ``GET /{db}/``. These were deprecated in CouchDB 2.x
-  and replaced by the ``sizes`` object, which contains the improved ``file``,
-  ``active`` and ``external`` size metrics. Fauxton has been updated to match.
-
-* :ghissue:`2173`: The ability to submit multiple queries against a view using
-  the :method:`POST` to ``/{db}/_design/{ddoc}/_view/{view}`` with the
-  ``?queries=`` option has been replaced by the new :ref:`queries
-  <api/ddoc/view/multiple_queries>` endpoint. The same is true of the
-  :ref:`_all_docs, _design_docs, and _local_docs <api/db/all_docs>` endpoints.
-  Specify a ``keys`` object when :method:`POST`-ing to these endpoints.
-
-* :ghissue:`2248`: CouchDB externals (``_external/``) have been removed entirely.
-
-* :ghissue:`2208`: CouchDB no longer supports the ``delayed_commits`` option in the
-  configuration file. All writes are now full commits. The ``/_ensure_full_commit``
-  API endpoint has been retained (as a no-op) for backwards compatibility with old
-  CouchDB replicators.
-
-* :ghissue:`2395`: The security object in the ``_users`` database cannot be edited by
-  default. A setting exists in the configuration file to revert this behaviour. The
-  ability to override the disable setting is expected to be removed in CouchDB 4.0.
-
-Deprecated feature warnings
----------------------------
-
-The following features are deprecated in CouchDB 3.0 and will be removed in CouchDB 4.0:
-
-* Show functions (``/{db}/{ddoc}/_show``)
-
-* List functions (``/{db}/{ddoc}/_list``)
-
-* Update functions (``/{db}/{ddoc}/_update``)
-
-* Virtual hosts and ini-file rewrites
-
-* Rewrite functions (``/{db}/{ddoc}/_rewrite``)
-
-.. _release/3.0.1:
-
-Version 3.0.1
-=============
-
-Features and Enhancements
--------------------------
-
-* Fauxton was updated to version `v1.2.3`.
-
-Bugfixes
---------
-
-* :ghissue:`2441`: A memory leak when encoding large binary content was patched.
-  This should resolve a long-standing gradual memory increase bug in CouchDB.
-
-* :ghissue:`2613`: Simultaneous attempts to create the same new database should
-  no longer result in a :statuscode:`500` error.
-
-* :ghissue:`2678`: Defaults for the ``smoosh`` compaction daemon are now consistent
-  with the shipped ``default.ini`` file.
-
-* :ghissue:`2680`: The Windows CouchDB startup batch file will no longer fail to
-  start CouchDB if incompatible versions of OpenSSL are on the ``PATH``.
-
-* :ghissue:`2741`: A small performance improvement in the ``couch_server`` process
-  was made.
-
-* :ghissue:`2745`: The ``require_valid_user`` exception logic was corrected.
-
-* :ghissue:`2643`: The ``users_db_security_editable`` setting is now in the correct
-  section of the ``default.ini`` file.
-
-* :ghissue:`2654`: Filtered changes feeds that need to rewind partially should no
-  longer rewind all the way to the beginning of the feed.
-
-* :ghissue:`2655`: When deleting a session cookie, CouchDB should now respect the
-  operator-specified cookie domain, if set.
-
-* :ghissue:`2690`: Nodes that re-enter a cluster after a database was created (while
-  the node was offline or in maintenance mode) should more correctly handle
-  creating local replicas of that database.
-
-* :ghissue:`2805`: Mango operators more correctly handle being passed empty arrays.
-
-* :ghissue:`2716`, :ghissue:`2738`: The ``remsh`` utility will now try to guess the
-  node name and Erlang cookie of the local installation. It will also respect the
-  ``COUCHDB_ARGS_FILE`` environment variable.
-
-* :ghissue:`2797`: The cluster setup workflow now uses the correct logging module.
-
-* :ghissue:`2818`: Mango now uses a safer method of bookmark creation that prevents
-  unexpectedly creating new Erlang atoms.
-
-* :ghissue:`2756`: SpiderMonkey 60+ will no longer corrupt UTF-8 strings when
-  various JS functions are applied to them.
-
-* Multiple test case improvements, including more ports of JS tests to Elixir.
-
-.. _release/3.0.0:
-
-Version 3.0.0
-=============
-
-Features and Enhancements
--------------------------
-
-.. rst-class:: open
-
-* :ghissue:`1789`: :ref:`User-defined partitioned databases <partitioned-dbs>`.
-
-  These special databases support user-driven placement of documents into the same
-  shard range. :ref:`JavaScript views <api/partitioned/views>` and :ref:`Mango
-  indexes <api/partitioned/find>` have specific optimizations for partitioned databases
-  as well.
-
-  Two tweakable configuration parameters exist:
-
-  * :ghissue:`1842`: Partition size limits. By default, each partition is limited
-    to 10 GiB.
-  * :ghissue:`1684`: Partitioned database support can be disabled via feature
-    flag in ``default.ini``.
-
-* :ghissue:`1972`, :ghissue:`2012`: :ref:`Automated shard splitting
-  <cluster/sharding/splitting_shards>`.  Databases can now be re-sharded *while online*
-  to increase the ``q`` factor to a larger number. This can be configured to
-  require specific node and range parameters upon execution.
-
-* :ghissue:`1910`: :ref:`Automatic background indexing <config/index_builds>`,
-  internally known as ``ken``. This subsystem ensures secondary indexes (such
-  as JavaScript, Mango, and text search) are kept up to date, without requiring
-  an external query to trigger building them. Many configuration parameters are
-  available.
-
-* :ghissue:`1904`: Completely rewritten :ref:`automatic compaction daemon
-  <compact/auto>`, internally known as ``smoosh``. This subsystem automatically
-  triggers background compaction jobs for both databases and views, based on
-  :ref:`configurable thresholds <config/database_compaction>`.
-
-* :ghissue:`1889`, :ghissue:`2408`: New IO Queue subsystem implementation.
-  This is :ref:`highly configurable and well-documented <config/ioq>`.
-
-* :ghissue:`2436`, :ghissue:`2455`: CouchDB now regression tests against, and officially
-  supports, running on the ``arm64v8`` (``aarch64``) and ``ppc64le`` (``ppc64el``)
-  machine architectures. Convenience binaries are generated on these architectures for
-  Debian 10.x ("buster") packages, and for the Docker containers.
-
-* :ghissue:`1875`, :ghissue:`2437`, :ghissue:`2423`: CouchDB now supports linking against
-  SpiderMonkey 60 or SpiderMonkey 1.8.5. SpiderMonkey 60 provides enhanced support for
-  ES5, ES6, and ES2016+. Full compatibility information is available at the
-  `ECMAScript compatibility table`_. Click on "Show obsolete platforms", then look for
-  "FF 60 ESR" in the list of engine types.
-
-  However, it was discovered that on some ARM 64-bit distributions, SM 60 segfaults
-  frequently, including the SM 60 packages on CentOS 8 and Debian 10.
-
-  As a result, CouchDB's convenience binaries **only link against SM 60 on the
-  ``x86_64`` and ``ppc64le`` architectures**. This includes the Docker image for these
-  architectures.
-
-  At present, CouchDB ships with SM 60 linked in on the following binary distributions:
-
-  * Debian buster (10.x)
-  * CentOS / RedHat 8.x
-  * macOS (10.10+)
-  * Windows (7+)
-  * Docker (3.0.0)
-  * FreeBSD (CURRENT)
-
-  We expect to add SM 60 support to Ubuntu with Focal Fossa (20.04 LTS) when it ships in
-  April 2020.
-
-  It is unlikely we will backport SM 60 packages to older versions of Debian, CentOS,
-  RedHat, or Ubuntu.
-
-* The Windows installer has many improvements, including:
-
-  * Prompts for an admin user/password, as CouchDB 3.0 requires, and will not
-    overwrite existing credentials if they are already in place
-  * No longer removes user-modified config files (closing :ghissue:`1989`), and
-    will not overwrite them on install
-  * Checkbox to disable installation of the Windows service
-  * :ref:`Silent install support <install/windows/silent>`.
-  * Friendly link to these online release notes in the exit dialog
-  * Higher resolution icon for HiDPI (500x500)
-
-.. warning::
-    Windows 8, 8.1, and 10 require the `.NET Framework v3.5`_ to be installed.
-
-* :ghissue:`2037`: Dreyfus, the CouchDB side of the Lucene-powered search solution, is now
-  shipped with CouchDB. When one or more Clouseau Java nodes are joined to the cluster,
-  text-based indexes can be enabled in CouchDB. It is recommended to have as many Clouseau
-  nodes as you have CouchDB nodes. Search is advertised in the feature list present at
-  ``GET /`` if configured correctly (:ghissue:`2206`).  :ref:`Configuration
-  <config/search>` and :ref:`installation documentation is available <install/search>`.
-
-* :ghissue:`2411`: The ``/_up`` endpoint no longer requires authentication, even when
-  ``require_valid_user`` is ``true``.
-
-* :ghissue:`2392`: A new ``_metrics`` role can be given to a user. This allows that
-  user access only to the ``/_node/{node}/_stats`` and ``/_node/{node}/_system``
-  endpoints.
-
-* :ghissue:`1912`: A new alternative ``systemd-journald`` logging backend has been
-  added, and can be enabled through the ini file. The new backend does not include
-  CouchDB's microsecond-accurate timestamps, and uses ``sd-daemon(3)`` logging levels.
-
-* :ghissue:`2296`, :ghissue:`1977`: If the configuration file setting ``[couchdb]``
-  ``single_node`` is set to ``true``, CouchDB will automatically create the system
-  databases on startup if they are not present.
-
-* :ghissue:`2338`, :ghissue:`2343`: :method:`POST` request to CouchDB views and the
-  ``/{db}/_all_docs``, ``/{db}/_local_docs`` and ``/{db}/_design_docs`` endpoints now
-  support the same functionality as :method:`GET`.  Parameters are passed in the body as a
-  JSON object, rather than in the URL when using :method:`POST`.
-
-* :ghissue:`2292`: The ``_scheduler/docs`` and ``_scheduler/info`` endpoints now return
-  detailed replication stats for running and pending jobs.
-
-* :ghissue:`2282`, :ghissue:`2272`, :ghissue:`2290`: CouchDB now supports specifying
-  separate proxies for both the ``source`` and ``target`` in a replication via
-  ``source_proxy`` and ``target_proxy`` keys. The :ref:`API documentation
-  <api/server/replicate>` has been updated.
-
-* :ghissue:`2240`: Headers are now returned from the ``/{db}/_changes`` feed
-  immediately, even when there are no changes available. This avoids client
-  blocking.
-
-* :ghissue:`2005`, :ghissue:`2006`: The name of any node can now be retrieved through
-  the :ref:`new API endpoint <api/server/name>` ``GET /_node/{node-name}``.
-
-* :ghissue:`1766`: Timeouts for requests, ``all_docs``, attachments, views, and
-  partitioned view requests can all be specified separately in the ini file under
-  the ``[fabric]`` section. See ``default.ini`` for more detail.
-
-* :ghissue:`1963`: Metrics are now kept on the number of partition and global view
-  queries, along with the number of timeouts that occur.
-
-* :ghissue:`2452`, :ghissue:`2221`: A new configuration field ``[couch_httpd_auth]
-  same_site`` has been added to set the value of the CouchDB auth cookie's ``SameSite``
-  attribute.  It may be necessary to set this to ``strict`` for compatibility with future
-  versions of Google Chrome. If CouchDB CORS support is enabled, set this to ``None``.
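-
-  For example, in ``local.ini`` (choose the value appropriate for your deployment):
-
-  .. code-block:: ini
-
-    [couch_httpd_auth]
-    same_site = strict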
-
-Performance
------------
-
-.. rst-class:: open
-
-* :ghissue:`2277`: The ``couch_server`` process has been highly optimized, supporting
-  significantly more load than before.
-
-* :ghissue:`2360`: It is now possible to make the rexi interface's unacked message
-  limit configurable. A new, more optimized default (5, lowered from 10) has been set.
-  This results in a ~50% improvement on view queries on large clusters with ``q ≥ 8``.
-
-* :ghissue:`2280`: Connection sharing for replication now functions correctly when
-  replicating through a forward proxy. Closes :ghissue:`2271`.
-
-* :ghissue:`2195`, :ghissue:`2207`: Metrics aggregation now supports CouchDB systems
-  that sleep or hibernate, ensuring that wakeup does not trigger thousands of
-  unnecessary function calls.
-
-* :ghissue:`1795`: Avoid calling ``fabric:update_docs`` with empty doc lists.
-
-* :ghissue:`2497`: The setup wizard no longer automatically creates the
-  ``_global_changes`` database, as the majority of users do not need this
-  functionality. This reduces overall CouchDB load.
-
-Bugfixes
---------
-
-.. rst-class:: open
-
-* :ghissue:`1752`, :ghissue:`2398`, :ghissue:`1803`: The cluster setup wizard now ensures
-  a consistent UUID and http secret across all nodes in a cluster. CouchDB admin passwords
-  are also synced when the cluster setup wizard is used. This prevents being logged out
-  when using Fauxton as a server admin user through a load balancer.
-
-* :ghissue:`2388`: A compatibility change has been made to support replication with
-  future databases containing per-document access control fields.
-
-* :ghissue:`2379`: Any replicator error messages will provide an object in the response,
-  or null, but never a string.
-
-* :ghissue:`2244`, :ghissue:`2310`: CouchDB will no longer send more data than is
-  requested when retrieving partial attachment data blocks.
-
-* :ghissue:`2138`: Manual operator updates to a database's shard map will not
-  corrupt additional database properties, such as partitioning values.
-
-* :ghissue:`1877`: The ``_purge`` and ``_purged_infos_limit`` endpoints are now
-  correctly restricted to server admin only.
-
-* :ghissue:`1794`: The minimum purge sequence value for a database is now
-  gathered without a clustered ``_all_docs`` lookup.
-
-* :ghissue:`2351`: A timeout case clause in ``fabric_db_info`` has been normalised
-  to match other case clauses.
-
-* :ghissue:`1897`: The ``/{db}/_bulk_docs`` endpoint now correctly catches invalid
-  (*i.e.*, non-hexadecimal) ``_rev`` values and responds with a :statuscode:`400` error.
-
-* :ghissue:`2321`: CouchDB no longer requires Basic auth credentials to reach the
-  ``/_session`` endpoint for login, even when ``require_valid_user`` is enabled.
-
-* :ghissue:`2295`: CouchDB no longer marks a job as failed permanently if the
-  internal doc processor crashes.
-
-* :ghissue:`2178`: View compaction files are now removed on view cleanup.
-
-* :ghissue:`2179`: The error message logged when CouchDB does not have a ``_users``
-  database is now less scary.
-
-* :ghissue:`2153`: CouchDB no longer returns a ``badmatch`` error when querying
-  ``all_docs`` with a passed ``keys`` array.
-
-* :ghissue:`2137`: If search is not available, return a :statuscode:`400` instead of a
-  :statuscode:`500` status code.
-
-* :ghissue:`2077`: Any failed ``fsync(2)`` calls are now correctly raised to avoid
-  data corruption arising from retry attempts.
-
-* :ghissue:`2027`: Handle epoch mismatch when duplicate UUIDs are created through
-  invalid operator intervention.
-
-* :ghissue:`2019`: If a database is deleted and re-created while internal cluster
-  replication is still active, CouchDB will no longer retry to delete it continuously.
-
-* :ghissue:`2003`, :ghissue:`2438`: CouchDB will no longer automatically reset an index
-  file if any attempt to read its header fails (such as when the couch_file process
-  terminates unexpectedly).  CouchDB now also handles the case when a view file lacks a
-  proper header.
-
-* :ghissue:`1983`: The database "external" size calculation has been improved to
-  be more precise.
-
-* :ghissue:`1971`: Correctly compare ETags using weak comparison methods to support
-  `W/` prefix added by some load balancer configurations.
-
-* :ghissue:`1901`: Invalid revision specified for a document update will no longer result
-  in a ``badarg`` crash.
-
-* :ghissue:`1845`: The ``end_time`` field in ``/_replicate`` now correctly converts time
-  to UTC.
-
-* :ghissue:`1824`: ``rexi`` stream workers are now cleaned up when the coordinator process
-  is killed, such as when the ddoc cache is refreshed.
-
-* :ghissue:`1770`: Invalid database ``_security`` objects no longer return a
-  ``function_clause`` error and stack trace.
-
-* :ghissue:`2412`: Mango execution stats now correctly count documents read which weren't
-  followed by a match within a given shard.
-
-* :ghissue:`2393`, :ghissue:`2143`: It is now possible to override the query server
-  environment variables ``COUCHDB_QUERY_SERVER_JAVASCRIPT`` and
-  ``COUCHDB_QUERY_SERVER_COFFEESCRIPT`` without overwriting the
-  ``couchdb``/``couchdb.cmd`` startup scripts.
-
-* :ghissue:`2426`, :ghissue:`2415`: The replicator now better handles the situation where
-  design document writes to the target fail when replicating with non-admin credentials.
-
-* :ghissue:`2444`, :ghissue:`2413`: Replicator error messages are now significantly
-  improved, reducing ``function_clause`` responses.
-
-* :ghissue:`2454`: The replication auth session plugin now ignores other cookies it may
-  receive without logging an error.
-
-* :ghissue:`2458`: Partitioned queries and dreyfus search functions no longer fail
-  if there is a single failed node or rexi worker error.
-
-* :ghissue:`1783`: Mango text indexes no longer error when given an empty selector or
-  operators with empty arrays.
-
-* :ghissue:`2466`: Mango text indexes no longer error if the indexed document revision
-  no longer exists in the primary index.
-
-* :ghissue:`2486`: The ``$lt``, ``$lte``, ``$gt``, and ``$gte`` Mango operators are
-  correctly quoted internally when used in conjunction with a text index search.
-
-* :ghissue:`2493`: The ``couch_auth_cache`` no longer has a runaway condition in which
-  it creates millions of monitors on the ``_users`` database.
-
-Other
------
-
-The 3.0.0 release also includes the following minor improvements:
-
-.. rst-class:: open
-
-* :ghissue:`2472`: CouchDB now logs the correct, clustered URI at startup (by default,
-  port ``5984``).
-
-* :ghissue:`2034`, :ghissue:`2416`: The path to the Fauxton installation can now be
-  specified via the ``COUCHDB_FAUXTON_DOCROOT`` environment variable.
-
-* :ghissue:`2447`: Replication stats are both persisted when jobs are re-created, as well
-  as properly handled when bulk document batches are split.
-
-* :ghissue:`2410`, :ghissue:`2390`, :ghissue:`1913`: Many metrics were added for Mango
-  use, including counts of unindexed queries, invalid index queries, docs examined that
-  do and don't meet cluster quorum, query time, etc.
-
-* :ghissue:`2152`, :ghissue:`2504`: CouchDB can now be started via a symlink to the
-  binary on UNIX-based platforms.
-
-* :ghissue:`1844`: A new internal API has been added to write custom Erlang
-  request-level metrics reporting plugins.
-
-* :ghissue:`2293`, :ghissue:`1095`: The ``-args_file``, ``-config`` and ``-couch_ini``
-  parameters may now be overridden via the ``COUCHDB_INI_FILES`` environment variable
-  on UNIX-based systems.
-
-* :ghissue:`2352`: The ``remsh`` utility now searches for the Erlang cookie in
-  ``ERL_FLAGS`` as well as ``vm.args``.
-
-* :ghissue:`2324`: All traces of the (never fully functional) view-based ``_changes``
-  feed have been expunged from the code base.
-
-* :ghissue:`2337`: The md5 shim (introduced to support FIPS-compliance) is now
-  used consistently throughout the code base.
-
-* :ghissue:`2270`: Negative and non-integer ``heartbeat`` values now return
-  :statuscode:`400`.
-
-* :ghissue:`2268`: When rescheduling jobs, CouchDB now stops sufficient running jobs
-  to make room for the pending jobs.
-
-* :ghissue:`2186`: CouchDB plugin writers have a new field in which endpoint
-  credentials may be stashed for later use.
-
-* :ghissue:`2183`: ``dev/run`` now supports an ``--extra-args`` flag to modify the
-  Erlang runtime environment during development.
-
-* :ghissue:`2105`: ``dev/run`` no longer fails on unexpected remote end connection
-  close during cluster setup.
-
-* :ghissue:`2118`: Improve ``couch_epi`` process replacement mechanism using map
-  childspecs functionality in modern Erlang.
-
-* :ghissue:`2111`: When more than ``MaxJobs`` replication jobs are defined, CouchDB
-  now correctly handles job rotation when some jobs crash.
-
-* :ghissue:`2020`: Fix full ring assertion in fabric stream shard replacements.
-
-* :ghissue:`1925`: Support list for docid when using ``couch_db:purge_docs/3``.
-
-* :ghissue:`1642`: ``io_priority`` is now set properly on view update and compaction
-  processes.
-
-* :ghissue:`1865`: Purge now supports >100 document IDs in a single request.
-
-* :ghissue:`1861`: The ``vm.args`` file has improved commentary.
-
-* :ghissue:`1808`: Pass document update type for additional checks in
-  ``before_doc_update``.
-
-* :ghissue:`1835`: Module lists are no longer hardcoded in ``.app`` files.
-
-* :ghissue:`1798`, :ghissue:`1933`: Multiple compilation warnings were eliminated.
-
-* :ghissue:`1826`: The ``couch_replicator_manager`` shim has been fully removed.
-
-* :ghissue:`1820`: After restarting CouchDB, JS and Elixir tests now wait up to 30s for
-  it to be ready before timing out.
-
-* :ghissue:`1800`: ``make elixir`` supports specifying individual tests to run with
-  ``tests=``.
-
-* :ghissue:`1805`: ``dev/run`` supports ``--with-haproxy`` again.
-
-* :ghissue:`1774`: ``dev/run`` now supports more than 3 nodes.
-
-* :ghissue:`1779`: Refactor Elixir test suite initialization.
-
-* :ghissue:`1769`: The Elixir test suite uses Credo for static analysis.
-
-* :ghissue:`1776`: All Python code is now formatted using `Python black`_.
-
-* :ghissue:`1786`: ``dev/run``: do not create needless ``dev/data/`` directory.
-
-* :ghissue:`2482`: A redundant ``get_ring_opts`` call has been removed from
-  ``dreyfus_fabric_search``.
-
-* :ghissue:`2506`: CouchDB's release candidates no longer propagate the RC tags
-  into each Erlang application's version string.
-
-* :ghissue:`2511`: `recon`_, the Erlang diagnostic toolkit, has been added to
-  CouchDB's build process and ships in the release + convenience binaries.
-
-* Fauxton updated to v1.2.3, which includes:
-
-  * Support multiple server-generated warnings when running queries
-
-  * Partitioned database support
-
-  * Search index support
-
-  * Remove references to deprecated dbinfo fields
-
-  * Improve accessibility for screen readers
-
-  * Numerous CSS fixes
-
-* Improved test cases:
-
-  * Many, many test race conditions and bugs have been removed (PR list too long to
-    include here!)
-
-  * More test cases were ported to Elixir, including:
-
-    * Cluster with and without quorum tests (:ghissue:`1812`)
-    * ``delayed_commits`` (:ghissue:`1796`)
-    * ``multiple_rows`` (:ghissue:`1958`)
-    * ``invalid_docids`` (:ghissue:`1968`)
-    * ``replication`` (:ghissue:`2090`)
-    * All ``attachment_*`` tests (:ghissue:`1999`)
-    * ``copy_doc`` (:ghissue:`2000`)
-    * ``attachments`` (:ghissue:`1953`)
-    * ``erlang_views`` (:ghissue:`2237`)
-    * ``auth_cache``, ``cookie_auth``, ``lorem*``, ``multiple_rows``, ``users_db``,
-      ``utf8`` (:ghissue:`2394`)
-    * ``etags_head`` (:ghissue:`2464`, :ghissue:`2469`)
-
-  * :ghissue:`2431`: ``chttpd_purge_tests`` have been improved in light of CI failures.
-
-  * :ghissue:`2432`: Address flaky test failure on ``t_invalid_view/1``.
-
-  * :ghissue:`2363`: Elixir tests now run against a single node cluster, in line with
-    the original design of the JavaScript test suite. This is a permanent change.
-
-  * :ghissue:`1893`: Add "w:3" for lots of doc tests.
-
-  * :ghissue:`1939`, :ghissue:`1931`: Multiple fixes to improve support in constrained
-    CI environments.
-  * :ghissue:`2346`: Big-endian support for the ``couch_compress`` tests.
-
-  * :ghissue:`2314`: Do not auto-index when testing ``update=false`` in Mango.
-
-  * :ghissue:`2141`: Fix ``couch_views`` encoding test.
-
-  * :ghissue:`2123`: Timeout added for ``fold_docs-with_different_keys`` test.
-
-  * :ghissue:`2114`: EUnit tests now correctly inherit necessary environment
-    variables.
-
-  * :ghissue:`2122`: ``:meck.unload()`` is now called automatically after every test.
-
-  * :ghissue:`2098`: Fix ``cpse_test_purge_replication`` eunit test.
-
-  * :ghissue:`2085`, :ghissue:`2086`: Fix a flaky ``mem3_sync_event_listener`` test.
-
-  * :ghissue:`2084`: Increase timeouts on two slow btree tests.
-
-  * :ghissue:`1960`, :ghissue:`1961`: Fix for ``chttpd_socket_buffer_size_test``.
-
-  * :ghissue:`1922`: Tests added for shard splitting functionality.
-
-  * :ghissue:`1869`: New test added for doc reads with etag ``If-None-Match`` header.
-
-  * :ghissue:`1831`: Re-introduced the ``cpse_test_purge_seqs`` test.
-
-  * :ghissue:`1790`: Reorganise ``couch_flag_config_tests`` into a proper suite.
-
-  * :ghissue:`1785`: Use ``devclean`` on elixir target for consistency of Makefile.
-
-  * :ghissue:`2476`: For testing, ``Triq`` has been replaced with ``PropEr`` as an
-    optional dependency.
-
-* External dependency updates:
-
-  * :ghissue:`1870`: Mochiweb has been updated to 2.19.0.
-
-  * :ghissue:`1938`: Folsom has been updated to 0.8.3.
-
-  * :ghissue:`2001`: ibrowse has been updated to 4.0.1-1.
-
-  * :ghissue:`2400`: jiffy has been updated to 1.0.1.
-
-* A llama! OK, no, not really. If you got this far...thank you for reading.
-
-.. _Python black: https://github.com/ambv/black
-.. _hehaden: https://www.flickr.com/photos/hellie55/23379351593/
-.. _ECMAScript compatibility table: https://kangax.github.io/compat-table/
-.. _recon: https://github.com/ferd/recon
-.. _.NET Framework v3.5: https://docs.microsoft.com/en-us/dotnet/framework/install/dotnet-35-windows-10
diff --git a/src/whatsnew/3.1.rst b/src/whatsnew/3.1.rst
deleted file mode 100644
index 5715b42..0000000
--- a/src/whatsnew/3.1.rst
+++ /dev/null
@@ -1,147 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _release/3.1.x:
-
-============
-3.1.x Branch
-============
-
-.. contents::
-    :depth: 1
-    :local:
-
-.. _release/3.1.2:
-
-Version 3.1.2
-=============
-
-This is a security release for a *low severity* vulnerability. Details of
-the issue will be published one week after this release. See the `CVE
-database <https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-38295>`_
-for details at a later time.
-
-.. _release/3.1.1:
-
-Version 3.1.1
-=============
-
-Features and Enhancements
--------------------------
-
-.. rst-class:: open
-
-* :ghissue:`3102`, :ghissue:`1600`, :ghissue:`2877`, :ghissue:`2041`: When a
-  client disconnects unexpectedly, CouchDB will no longer log a "``normal :
-  unknown``" error. Bring forth the rainbows.
-
-.. figure:: ../../images/gf-gnome-rainbows.png
-  :align: center
-  :alt: The Gravity Falls gnome pukes some rainbows for us.
-
-* :ghissue:`3109`: Drilldown parameters for text index searches may now be
-  specified as a list of lists, to avoid having to define this redundantly
-  in a single query. (Some languages don't have this facility.)
-
-* :ghissue:`3132`: The new ``[chttpd] buffer_response`` option can be enabled
-  to delay the start of a response until the end has been calculated. This
-  increases memory usage, but simplifies client error handling as it
-  eliminates the possibility that a response may be deliberately
-  terminated midway through, due to a timeout. This config value may be
-  changed at runtime, without impacting any in-flight responses.
-
-Performance
------------
-
-Bugfixes
---------
-
-* :ghissue:`2935`: The replicator now correctly picks jobs to restart during
-  rescheduling; previously, under high load, it may have failed to restart
-  crashed jobs.
-
-* :ghissue:`2981`: When handling extremely large documents (≥50MB), CouchDB
-  can no longer time out on a ``gen_server:call`` if bypassing the IOQ.
-
-* :ghissue:`2941`: CouchDB will no longer fail to compact databases if it
-  finds files from a 2.x compaction process (prior to an upgrade) on disk.
-
-* :ghissue:`2955`: CouchDB now sends the correct CSP header to ensure
-  Fauxton operates correctly with newer browsers.
-
-* :ghissue:`3061`, :ghissue:`3080`: The ``couch_index`` server won't crash
-  and log errors if a design document is deleted while that index is
-  building, or when a ddoc is added immediately after database creation.
-
-* :ghissue:`3078`: CouchDB now checks for and complains correctly about
-  invalid parameters on database creation.
-
-* :ghissue:`3090`: CouchDB now correctly encodes URLs when encoding the
-  ``atts_since`` query string.
-
-* :ghissue:`2953`: Some parameters not allowed for text-index queries on
-  partitioned databases are now properly validated and rejected.
-
-* :ghissue:`3118`: Text-based search indexes may now be cleaned up
-  correctly, even if the design document is now invalid.
-
-* :ghissue:`3121`: ``fips`` is now only reported in the welcome message
-  if FIPS mode was enabled at boot (such as in ``vm.args``).
-
-* :ghissue:`3128`: Using :method:`COPY` to copy a document will no longer
-  return a JSON result with two ``ok`` fields.
-
-* :ghissue:`3138`: Malformed URLs in replication requests or documents
-  will no longer throw an error.
-
-Other
------
-
-* JS tests skip faster now.
-
-* More JS tests ported into elixir: ``reader_acl``, ``reduce_builtin``,
-  ``reduce_false``, ``rev_stemming``, ``update_documents``,
-  ``view_collation_raw``, ``view_compaction``, all the
-  ``view_multi_key`` tests, ``view_sandboxing``,
-  ``view_update_seq``.
-
-.. _release/3.1.0:
-
-Version 3.1.0
-=============
-
-Features and Enhancements
--------------------------
-
-.. rst-class:: open
-
-* :ghissue:`2648`: Authentication via :ref:`JSON Web Token (JWT) <api/auth/jwt>`. Full
-  documentation is at the friendly link.
-
-* :ghissue:`2770`: CouchDB now supports linking against SpiderMonkey 68, the current
-  Mozilla SpiderMonkey ESR release. This provides direct support for packaging on the
-  latest operating system variants, including Ubuntu 20.04 "Focal Fossa."
-
-* A new Fauxton release is included, with updated dependencies, and a new optional
-  CouchDB news page.
-
-Performance
------------
-
-.. rst-class:: open
-
-* :ghissue:`2754`: Optimized compactor performance, resulting in a 40% speed improvement
-  when document revisions approach the ``revs_limit``. The fixes also include additional
-  metrics on size tracking during the sort and copy phases, accessible via the
-  :get:`GET /_active_tasks </active_tasks>` endpoint.
-
-* A big bowl of candy! OK, no, not really. If you got this far...thank you for reading.
diff --git a/src/whatsnew/3.2.rst b/src/whatsnew/3.2.rst
deleted file mode 100644
index d9962c5..0000000
--- a/src/whatsnew/3.2.rst
+++ /dev/null
@@ -1,336 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _release/3.2.x:
-
-============
-3.2.x Branch
-============
-
-.. contents::
-    :depth: 1
-    :local:
-
-.. _release/3.2.2:
-
-Version 3.2.2
-=============
-
-Bugfixes
----------
-
-* Fix for `CVE-2022-24706 <https://docs.couchdb.org/en/stable/cve/2022-24706.html>`_.
-  This is a security release for a *critical* vulnerability.
-
-* :ghissue:`3963`: Optimize compaction and doc updates for conflicted
-  documents on Erlang versions higher than 21.
-
-* :ghissue:`3852`: Add support for SpiderMonkey 91esr.
-
-.. _release/3.2.1:
-
-Version 3.2.1
-=============
-
-Features and Enhancements
--------------------------
-
-* :ghissue:`3746`: The ``couch_icu_driver`` collation driver has been
-  removed. ICU collation functionality is consolidated in the single
-  ``couch_ejson_compare`` module. View performance might slightly
-  increase, as there are fewer corner cases in which the C collation
-  driver fails and falls back to Erlang.
-
-* :ghissue:`3787`: Update sequences generated from DB info and
-  ``_changes?since=now&limit=0`` now contain shard uuids as part of
-  their internal, opaque, representation. As a result, there should be
-  less chance of experiencing changes feed rewinds with these
-  sequences.
-
-* :ghissue:`3798`: ICU driver and collator algorithm versions are
-  returned in the ``_node/$node/_versions`` result.
-
-* :ghissue:`3801`: Users with the ``_metrics`` role can now read
-  ``_prometheus`` metrics.
-
-Bugfixes
---------
-
-* :ghissue:`3780`: Avoid changes feed rewinds after shard moves.
-
-* :ghissue:`3779`, :ghissue:`3785`: Prevent deleted view file cleanup
-  from crashing when database is deleted while the cleanup process is
-  running.
-
-* :ghissue:`3789`: Fix ``badarith`` 500 errors when ``[fabric]
-  request_timeout`` is set to ``infinity``.
-
-* :ghissue:`3786`: Fix off-by-one ``limit`` error for
-  ``_all_dbs``. Also, the auto-injected shard ``_dbs`` design doc is
-  removed and replaced with an Erlang module.
-
-* :ghissue:`3788`: Minimize changes feeds rewinds when a node is down.
-
-* :ghissue:`3807`: Enable ``custodian`` application
-  reporting. Previously, ``custodian`` was accidentally left disabled
-  as it used a hard-coded shards db name different than ``_dbs``.
-
-* :ghissue:`3805`: Cluster setup correctly syncs admin passwords and
-  uses the new (since 3.2.0) ``[chttpd_auth]`` config section instead
-  of the previous ``[couch_httpd_auth]`` section.
-
-* :ghissue:`3810`: Local development ``dev/run`` script now uses the
-  ``[chttpd_auth]`` section in ``local.ini`` instead of
-  ``[couch_httpd_auth]``.
-
-* :ghissue:`3773`: Fix reduce view collation results for unicode
-  equivalent keys.
-
-.. _release/3.2.0:
-
-Version 3.2.0
-=============
-
-Features and Enhancements
--------------------------
-
-* :ghissue:`3364`: CouchDB's replicator now implements a Fair Share replication
-  scheduler. Rather than using a round-robin scheduling mechanism, this update allows
-  specifying the relative priority of jobs via different ``_replicator`` databases.
-  More information is available in the :ref:`_replicator DB docs <replicator>`.
-
-.. figure:: ../../images/fair-enough.png
-  :align: center
-  :alt: Robert Downey, Jr., thinks that's fair enough for him.
-
-* :ghissue:`3166`: Allow custom JWT claims for roles, via the ``[jwt_auth]
-  roles_claim_name`` config setting.
-
-* :ghissue:`3296`, :ghissue:`3312`: CouchDB now includes ``weatherreport`` and its
-  dependency ``custodian``, a diagnostic app forked from Basho's ``riaknostic`` tool.
-  More documentation is available in the :ref:`Cluster Troubleshooting
-  <cluster/troubleshooting>` section.
-
-* :ghissue:`2911`, :ghissue:`3298`, :ghissue:`3425`: CouchDB now returns the version of
-  SpiderMonkey to administrators in the ``GET /_node/{node-name}/_versions`` response.
-
-* :ghissue:`3303`: CouchDB now treats a ``408`` response received by the replicator
-  similar to any ``5xx`` error (by retrying, as opposed to a permanent error). CouchDB
-  will never return a ``408``, but some reverse proxies in front of CouchDB may return
-  this code.
-
-* :ghissue:`3322`: ``_session`` now accepts gzip encoding.
-
-* :ghissue:`3254`: The new ``$keyMapMatch`` operator allows Mango to query on the keys
-  of a map. It is similar to the ``$elemMatch`` operator, but instead of operating on
-  the elements of array, it operates on the keys of a map.
-
-* :ghissue:`3336`: Developers now have access to a ``.devcontainer`` configuration for
-  the 3.x version of CouchDB, right in the source code repository.
-
-* :ghissue:`3347`: The default maximum attachment size has been reduced from
-  ``infinity`` to 1 GiB.
-
-* :ghissue:`3361`: Compaction process suspension now appears in the ``active_tasks``
-  output, allowing administrators to verify that the ``strict_window`` value is being
-  respected.
-
-* :ghissue:`3378`: The ``[admins]`` section and the ``[replicator] password`` are now
-  redacted from all logs. In addition, :ghissue:`3380` removes user credentials,
-  user documents and design documents from logfiles as much as possible. Further,
-  :ghissue:`3489` no longer logs all of the messages received by a terminated internal
-  Erlang process.
-
-* :ghissue:`3421`, :ghissue:`3500`: CouchDB now supports SpiderMonkey 78 and 86.
-
-* :ghissue:`3422`: CouchDB now supports Erlang/OTP 23 and ``error_logger`` reports
-  for Erlang/OTP >= 21.
-
-* :ghissue:`3566`: CouchDB now also supports Erlang/OTP 24.
-
-* :ghissue:`3571`: CouchDB *no longer supports Erlang/OTP 19*.
-
-* :ghissue:`3643`: CouchDB now ships with a custom Erlang network protocol,
-  allowing users to specify whether nodes communicate over TCP or TLS.
-
-.. figure:: ../../images/TLS-Handshake.png
-  :align: center
-  :alt: The SSL/TLS handshake enables the TLS client and server to establish
-        the secret keys with which they communicate.
-
-* :ghissue:`3472`, :ghissue:`3473`, :ghissue:`3609`: Migrate some config options from
-  ``[httpd]`` to ``[chttpd]``, migrate some from ``[couch_httpd_auth]`` to
-  ``[chttpd_auth]``, and comment all out in the ``default.ini``.
-
-  * Config options moved from ``[httpd]`` to ``[chttpd]``:
-    ``allow_jsonp``, ``changes_timeout``, ``config_whitelist``,
-    ``enable_cors``, ``secure_rewrites``, ``x_forwarded_host``,
-    ``x_forwarded_proto``, ``x_forwarded_ssl``,
-    ``enable_xframe_options``, ``max_http_request_size``.
-
-  * Config options moved from ``[couch_httpd_auth]`` to ``[chttpd_auth]``:
-    ``authentication_redirect``, ``timeout``, ``auth_cache_size``,
-    ``allow_persistent_cookies``, ``iterations``, ``min_iterations``, ``max_iterations``,
-    ``password_scheme``, ``proxy_use_secret``, ``public_fields``, ``secret``,
-    ``users_db_public``, ``x_auth_roles``, ``x_auth_token``, ``x_auth_username``,
-    ``cookie_domain``, ``same_site``
-
-* :ghissue:`3586`: We added a new way of specifying basic auth credentials,
-  which can include various characters previously not allowed in the
-  userinfo part of endpoint URLs.
-
-* :ghissue:`3483`: We added a way of specifying requirements for new user passwords
-  using a list of regular expressions.
-
-* :ghissue:`3506`, :ghissue:`3416`, :ghissue:`3377`: CouchDB now provides a Prometheus
-  compatible endpoint at ``GET /_node/{node-name}/_prometheus``. A configuration option
-  allows for scraping via a different port (17986) that does not require authentication,
-  if desired. More information is available at the :ref:`Prometheus API endpoint
-  <api/server/prometheus>` summary.
-
-* :ghissue:`3697`, :issue:`883` (JIRA): As an opt-in policy, CouchDB can now stop
-  encoding the plus sign ``+`` in non-query parts of URLs, in compliance with the
-  original CouchDB standards. The opt-in is via the ``[chttpd] decode_plus_to_space =
-  true`` setting. *In CouchDB 4.x, this is going to be an opt-out policy.*
-
-* :ghissue:`3724`: CouchDB now has new CSP settings for attachments and show/list
-  functions. This deprecates the old ``[csp] enable`` and ``[csp] header_value``
-  settings, replacing them with the new ``[csp] utils_enable`` and ``[csp]
-  utils_header_value`` settings respectively. In addition, new settings for
-  ``attachments_enable``, ``attachments_header_value``, ``showlist_enable`` and
-  ``showlist_header_value`` now are available. Documentation is in the ``default.ini``
-  file.
-
-* :ghissue:`3734`, :ghissue:`3733`: Users with databases that have low ``q`` and ``n``
-  values would often receive the ``No DB shards could be opened`` error when the cluster
-  is overloaded, due to a hard-coded 100ms timeout. CouchDB now calculates a more
-  reasonable timeout, based on the number of shards and the overall maximum fabric
-  request timeout limit, using a geometric series.
-
-Performance
------------
-
-* :ghissue:`3337`: Developer nodes now start faster when using the ``dev/run`` script.
-
-* :ghissue:`3366`: The monolithic ``couch_server`` process has been sharded for
-  performance. Previously, as a single ``gen_server``, the process would
-  have a finite throughput that, in busy clusters, is easily breached -- causing a
-  sizeable backlog in the message queue, ultimately leading to failure and errors. No
-  more! The aggregate message queue info is still available in the ``_system`` output.
-  (:ghissue:`3370`)
-
-* :ghissue:`3208`: CouchDB now uses the latest ibrowse 4.4.2 client for the replicator.
-
-* :ghissue:`3600`, :ghissue:`3047`, :ghissue:`3019`: The default ``slack`` channel for
-  smoosh auto-compaction has been increased to a more reasonable value, reducing load
-  on systems that would have normally been idle in CouchDB 2.x (where no auto-compaction
-  daemon exists).
-
-* :ghissue:`3711`: Changes feeds may no longer rewind after shard moves, assuming the
-  node and range specified by the changes feed nonce can still match an existing node's
-  shard.
-
-Bugfixes
---------
-
-* Complete retirement of the JavaScript test suite - replaced by Elixir. Hooray!
-* :ghissue:`3165`: Allow configurability of JWT claims that require a value. Also fixes
-  :ghissue:`3232`. Further, :ghissue:`3392` no longer validates claims provided that
-  CouchDB does not require.
-* :ghissue:`3160`, :ghissue:`3161`: The ``run_queue`` statistic now returns valid
-  information even when using Erlang BEAM dirty CPU and IO queues.
-* :ghissue:`3162`: Makefiles updated to include local configs & clean configs when
-  running ``make devclean``.
-* :ghissue:`3195`: The ``max_document_size`` parameter now has a clearer explanation in
-  ``default.ini``.
-* :ghissue:`3207`, :ghissue:`2536`: Improve the ``INSTALL.Unix.md`` file.
-* :ghissue:`3212`: Base and extra headers are properly combined when making replicator
-  requests that contain duplicate headers.
-* :ghissue:`3201`: When using a POST with a request body to pass parameters to a
-  view-like request, boolean parameters were accepted only as JSON strings, not
-  as booleans. Now, CouchDB accepts ``true`` and ``false`` for the ``stable``
-  parameter, in addition to ``"true"`` and ``"false"``.
-* :ghissue:`1988`: Attachment operations ``PUT /db/doc`` and ``POST /db`` now perform
-  consistent attachment name validation.
-* :ghissue:`3249`: Documents with lots of conflicts no longer blow up couchjs if the
-  user calls ``_changes`` with a JS filter and with ``style=all_docs``.
-* :ghissue:`3144`: Respawned compaction jobs that catch up with intervening
-  changes are now handled correctly by the smoosh monitor.
-* :ghissue:`3252`: CouchDB now exports the ``couch_util:json_decode/2`` function to
-  support maps instead of the default data structure.
-* :ghissue:`3255`, :ghissue:`2558`: View files that have incorrect ``db_headers``
-  now reset the index forcing a rebuild.
-* :ghissue:`3271`: Attachments that are stored uncompressed but later replicated to
-  nodes that compress the attachment no longer fail an internal md5 check that would
-  break eventual consistency between nodes.
-* :ghissue:`3277`: ``req_body`` requests that have ``req_body`` set already now
-  properly return the field without parsing.
-* :ghissue:`3279`: Some default headers were missing from some responses in replication,
-  including ``X-CouchDB-Body-Time`` and ``X-Couch-Request-ID``.
-* :ghissue:`3329`, :ghissue:`2962`: CouchDB no longer returns broken couchjs processes
-  to the internal viewserver process pool.
-* :ghissue:`3340`, :ghissue:`1943`: ``PUTs`` of ``multipart/related`` attachments now
-  support a ``Transfer-Encoding`` value of ``chunked``. Hooray!
-* :ghissue:`2858`, :ghissue:`3359`: The cluster setup wizard no longer fails when a
-  request to ``/`` is not made before a request to ``finish_cluster``.
-* :ghissue:`3368`: Changing the ``max_dbs_open`` configuration setting correctly
-  ensures that each new ``couch_server_X`` process receives ``1/num_servers()`` of it.
-* :ghissue:`3373`: Requests to ``{db}/_changes`` with a custom filter no longer result
-  in a fabric request timeout if the request body is not available to additional cluster
-  nodes, resulting in a more descriptive exit message and proper JSON object validation
-  in the payload.
-* :ghissue:`3409`: The internal ``chttpd_external:json_req_obj/2`` function now reads
-  the cached ``peer`` before falling back to a socket read operation.
-* :ghissue:`3335`, :ghissue:`3617`, :ghissue:`3708`: The ``COUCHDB_FAUXTON_DOCROOT``
-  environment variable is now introduced to allow its explicit overriding at startup.
-* :ghissue:`3471`: HTTP clients should no longer receive stacktraces unexpectedly.
-* :ghissue:`3491`: libicu tests no longer fail on older OS releases such as CentOS 6
-  and 7.
-* :ghissue:`3541`: Usernames and passwords can now contain ``@`` and not break the
-  CouchDB replicator.
-* :ghissue:`3545`: The ``dreyfus_index_manager`` process now supports offheap message
-  queues.
-* :ghissue:`3551`: The replication worker pool now properly cleans up worker processes
-  as they are done via the ``worker_trap_exits = false`` setting.
-* :ghissue:`3633`, :ghissue:`3631`: All code paths for creating databases now fully
-  respect db creation options, including partitioning options.
-* :ghissue:`3424`, :ghissue:`3362`: When using ``latest=true`` and an old revision with
-  conflicting children as rev is specified, CouchDB no longer returns an ``"error":
-  "case_clause"`` response.
-* :ghissue:`3673`: Requests for non-existent attachments now correctly return
-  a ``404``.
-* :ghissue:`3698`: The ``dev/run`` development script now allows clusters where ``n >
-  5``.
-* :ghissue:`3700`: The ``maybe_close`` message is now sent to the correct internal
-  process.
-* :ghissue:`3183`: The smoosh operator guide now recommends using the
-  ``rpc:multicall`` function.
-* :ghissue:`3712`: Including a payload within a ``DELETE`` operation no longer hangs
-  the next request made to the same mochiweb acceptor.
-* :ghissue:`3715`: For clusters with databases where ``n > [cluster] n``, attachment
-  chunks are no longer dropped on quorum writes.
-* :ghissue:`3507`: If a file is truncated underneath CouchDB, CouchDB will now log
-  the filename if it finds this situation with a ``file_truncate_error``.
-* :ghissue:`3739`: Shards with large purge sequences no longer fail to split in a
-  shard splitting job.
-* :ghissue:`3754`: Always return views meta info when ``limit=0`` and
-  ``sorted=true``.
-* :ghissue:`3757`: Properly sort ``descending=true`` view results with a ``keys``
-  list.
-* :ghissue:`3763`: Stabilize view row sorting order when they are merged by the
-  coordinator.
-
-Other
------
-
-* Donuts for everyone! Er, not really - thank you for reading the 3.2 release notes.
diff --git a/src/whatsnew/index.rst b/src/whatsnew/index.rst
deleted file mode 100644
index 5503512..0000000
--- a/src/whatsnew/index.rst
+++ /dev/null
@@ -1,41 +0,0 @@
-.. Licensed under the Apache License, Version 2.0 (the "License"); you may not
-.. use this file except in compliance with the License. You may obtain a copy of
-.. the License at
-..
-..   http://www.apache.org/licenses/LICENSE-2.0
-..
-.. Unless required by applicable law or agreed to in writing, software
-.. distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
-.. WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
-.. License for the specific language governing permissions and limitations under
-.. the License.
-
-.. _releases:
-
-=============
-Release Notes
-=============
-
-.. toctree::
-    :glob:
-    :maxdepth: 2
-
-    3.2
-    3.1
-    3.0
-    2.3
-    2.2
-    2.1
-    2.0
-    1.7
-    1.6
-    1.5
-    1.4
-    1.3
-    1.2
-    1.1
-    1.0
-    0.11
-    0.10
-    0.9
-    0.8
diff --git a/static/css/rtd_theme.css b/static/css/rtd_theme.css
deleted file mode 100644
index 17b53b8..0000000
--- a/static/css/rtd_theme.css
+++ /dev/null
@@ -1,60 +0,0 @@
-/*
- * Licensed under the Apache License, Version 2.0 (the "License"); you may not
- * use this file except in compliance with the License. You may obtain a copy of
- * the License at
- *
- *   http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
- * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
- * License for the specific language governing permissions and limitations under
- * the License.
- */
-
-@import url("theme.css");
-
-.wy-side-nav-search {
-  background-color: #370000;
-}
-
-.wy-nav-content {
-  background-color: #eee;
-}
-
-.wy-menu h2 {
-  margin: 1em auto 0.3em 0.25em;
-  color: #bbb;
-}
-
-/* -- index page ------------------------------------------------------------ */
-
-table.contentstable {
-    width: 90%;
-}
-
-table.contentstable p.biglink {
-    line-height: 150%;
-}
-
-a.biglink {
-    font-size: 1.3em;
-}
-
-span.linkdescr {
-    font-style: italic;
-    padding-top: 5px;
-    font-size: 90%;
-}
-
-/* https://github.com/rtfd/sphinx_rtd_theme/pull/591/files and
-   https://github.com/rtfd/sphinx_rtd_theme/issues/590 */
-
-.rst-content .section ol li p:last-child,
-.rst-content .section ul li p:last-child {
-    margin-bottom: 24px !important;
-}
-
-ul.simple {
-  margin-bottom: 2ex !important; /* dirty but necessary hack */
-}
\ No newline at end of file
diff --git a/templates/layout.html b/templates/layout.html
deleted file mode 100644
index 3840a61..0000000
--- a/templates/layout.html
+++ /dev/null
@@ -1,55 +0,0 @@
-<!--
-Licensed under the Apache License, Version 2.0 (the "License"); you may not use
-this file except in compliance with the License. You may obtain a copy of the
-License at
-
-   http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software distributed
-under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
-CONDITIONS OF ANY KIND, either express or implied. See the License for the
-specific language governing permissions and limitations under the License.
--->
-
-{%- extends "!layout.html" %}
-{% block menu %}
-<h2>Table of Contents</h2>
-{{ super() }}
-<h2>Quick Reference Guides</h2>
-<ul>
-<li><a href="{{ pathto('http-api') }}">API Quick Reference</a></li>
-<li><a href="{{ pathto('config-ref') }}">Configuration Quick Reference</a></li>
-</ul>
-
-{% if local %}
-<h2>Local Links</h2>
-<ul>
-<li><a href="../">Fauxton</a></li>
-</ul>
-{% endif %}
-
-<h2>More Help</h2>
-<ul>
-<li><a href="https://couchdb.apache.org/"{% if not local %} onclick="_gaq.push(['_link', 'https://couchdb.apache.org/']); return false;"{% endif %}>CouchDB Homepage</a></li>
-<li><a href="https://couchdb.apache.org/#mailing-list"{% if not local %} onclick="_gaq.push(['_link', 'https://couchdb.apache.org/#mailing-list']); return false;"{% endif %}>Mailing Lists</a></li>
-<li><a href="https://couchdb.apache.org/#chat">Realtime Chat</a></li>
-<li><a href="https://github.com/apache/couchdb/issues">Issue Tracker</a></li>
-<li><a href="{{ pathto('download') }}">Download Docs</a></li>
-</ul>
-
-{% if not local %}
-<script type="text/javascript">
-  var _gaq = _gaq || [];
-  _gaq.push(['_setAccount', '{{ ga_code }}']);
-  _gaq.push(['_setDomainName', 'couchdb.org']);
-  _gaq.push(['_setAllowLinker', true]);
-  _gaq.push(['_trackPageview']);
-
-  (function() {
-    var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
-    ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
-    var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
-  })();
-</script>
-{% endif %}
-{% endblock %}
diff --git a/templates/pages/download.html b/templates/pages/download.html
deleted file mode 100644
index 76fe93d..0000000
--- a/templates/pages/download.html
+++ /dev/null
@@ -1,48 +0,0 @@
-<!--
-
-Licensed under the Apache License, Version 2.0 (the "License"); you may not use
-this file except in compliance with the License. You may obtain a copy of the
-License at
-
-   http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software distributed
-under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
-CONDITIONS OF ANY KIND, either express or implied. See the License for the
-specific language governing permissions and limitations under the License.
-
--->
-
-{% extends "layout.html" %}
-{% set title = 'Download' %}
-{% set url = 'https://media.readthedocs.org/%s/couchdb/%s/couchdb.%s' %}
-{% if git_branch == 'master' %}
-  {% set rtd_ver = 'latest' %}
-{% else %}
-  {% set rtd_ver = git_branch %}
-{% endif %}
-
-{% block body %}
-<h1>Download Apache CouchDB™ {{ release }} Documentation</h1>
-
-<p>To download an archive containing all the documents for this version of
-CouchDB in one of various formats, follow one of the links in this table.</p>
-
-<table class="docutils">
-  <tr>
-     <td>PDF (A4 paper size)</td>
-     <td><a href="{{ url|format('pdf', rtd_ver, 'pdf') }}">Download</a> (~1 MB)</td>
-  </tr>
-  <tr>
-     <td>HTML</td>
-     <td><a href="{{ url|format('htmlzip', rtd_ver, 'zip') }}">Download</a> (~5 MB)</td>
-  </tr>
-  <tr>
-    <td>EPUB</td>
-    <td><a href="{{ url|format('epub', rtd_ver, 'epub') }}">Download</a> (~1 MB)</td>
-  </tr>
-</table>
-
-<p>These archives contain all the content of the documentation.</p>
-
-{% endblock %}
diff --git a/templates/pages/index.html b/templates/pages/index.html
deleted file mode 100644
index 0506b60..0000000
--- a/templates/pages/index.html
+++ /dev/null
@@ -1,195 +0,0 @@
-<!--
-
-Licensed under the Apache License, Version 2.0 (the "License"); you may not use
-this file except in compliance with the License. You may obtain a copy of the
-License at
-
-   http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing, software distributed
-under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
-CONDITIONS OF ANY KIND, either express or implied. See the License for the
-specific language governing permissions and limitations under the License.
-
--->
-
-{% extends "layout.html" %}
-{% set title = _('Overview') %}
-{% block body %}
-  <h1>Apache CouchDB<sup>&reg;</sup> {{ release|e }} Documentation</h1>
-  <p>
-    {{ _('Welcome! This is') }}
-    {% block description %}{{ _('the documentation for') }} Apache CouchDB<sup>&reg;</sup>
-    {{ release|e }}{% if last_updated %}, {{ _('last updated') }} {{ last_updated|e }}{% endif %}{% endblock %}.
-</p>
-{% block tables %}
-<h2>Meet CouchDB</h2>
-<table class="contentstable" align="center">
-  <tr>
-    <td width="50%">
-      <p class="biglink">
-        <a class="biglink" href="{{ pathto("whatsnew/" + version) }}">
-          What's new in CouchDB {{ version }}?
-        </a>
-        <br />
-        <span class="linkdescr">
-          or browse <a href="{{ pathto("whatsnew/index") }}">all "What's new" documents</a>
-        </span>
-      </p>
-      <p class="biglink">
-        <a class="biglink" href="{{ pathto("intro/why") }}">
-          Why CouchDB?
-        </a>
-        <br />
-        <span class="linkdescr">why you might want to use CouchDB</span>
-      </p>
-    <td width="50%">
-      <p class="biglink">
-        <a class="biglink" href="{{ pathto("intro/overview") }}">
-          Technical Overview
-        </a>
-        <br />
-      <span class="linkdescr">a quick overview of the technology used</span>
-      </p>
-      <p class="biglink">
-        <a class="biglink" href="{{ pathto("intro/consistency") }}">
-          Eventual Consistency
-        </a>
-        <br />
-        <span class="linkdescr">how CouchDB handles synchronization</span>
-      </p>
-    </td>
-  </tr>
-</table>
-
-<h2>Getting started</h2>
-<table class="contentstable" align="center">
-  <tr>
-    <td width="50%">
-      <p class="biglink">
-        <a class="biglink" href="{{ pathto("install/index") }}">
-          Installation guides
-        </a>
-        <br />
-        <span class="linkdescr">
-          install CouchDB on <a href="{{ pathto("install/windows") }}">Windows</a>,
-          <a href="{{ pathto("install/mac") }}">OS X</a> or
-          <a href="{{ pathto("install/unix") }}">Linux</a>
-        </span>
-      </p>
-      <p class="biglink">
-        <a class="biglink" href="{{ pathto("intro/tour") }}">
-          Tutorial
-        </a>
-        <br />
-        <span class="linkdescr">start using CouchDB with <a href="{{ pathto("fauxton/index") }}">Fauxton</a> and
-          <a href="{{ pathto("intro/curl") }}">cURL</a></span>
-      </p>
-      <p class="biglink">
-        <a class="biglink" href="{{ pathto("intro/api") }}">
-          HTTP API overview
-        </a>
-        <br />
-        <span class="linkdescr">a short walk through the API</span>
-      </p>
-      <p class="biglink">
-        <a class="biglink" href="{{ pathto("cluster/index") }}">
-          Clustering
-        </a>
-        <br />
-        <span class="linkdescr">set up and manage a cluster of nodes</span>
-      </p>
-    <td width="50%">
-      <p class="biglink">
-        <a class="biglink" href="{{ pathto("ddocs/views/intro") }}">
-          Guide to Views
-        </a>
-        <br />
-        <span class="linkdescr">how to query documents with MapReduce</span>
-      </p>
-      <p class="biglink">
-        <a class="biglink" href="{{ pathto("api/database/find") }}">
-          Mango Query Server
-        </a>
-        <br />
-        <span class="linkdescr">
-          simple JSON CouchDB queries without MapReduce
-        </span>
-      </p>
-      <p class="biglink">
-        <a class="biglink" href="{{ pathto("replication/intro") }}">
-          Replication
-        </a>
-        <br />
-        <span class="linkdescr">
-          painless master-master data synchronization
-        </span>
-      </p>
-      <p class="biglink">
-        <a class="biglink" href="{{ pathto("ddocs/ddocs") }}">
-          Design Documents
-        </a>
-        <br />
-        <span class="linkdescr">
-          transform, update or validate your documents
-        </span>
-      </p>
-    </td>
-  </tr>
-</table>
-
-<h2>Reference Documentation &amp; Advanced Topics</h2>
-<table class="contentstable" align="center">
-  <tr>
-    <td width="50%">
-      <p class="biglink">
-        <a class="biglink" href="{{ pathto("api/index") }}">
-          Complete HTTP API Reference
-        </a>
-        <br />
-        <span class="linkdescr">
-          something to come back to
-        </span>
-      </p>
-      <p class="biglink">
-        <a class="biglink" href="{{ pathto("config/index") }}">
-          Configuration Reference
-        </a>
-        <br />
-        <span class="linkdescr">tweak CouchDB to your liking</span>
-      </p>
-    </td>
-    <td width="50%" style="vertical-align: top;">
-      <p class="biglink">
-        <a class="biglink" href="{{ pathto("maintenance/index") }}">
-          Maintenance
-        </a>
-        <br />
-        <span class="linkdescr">
-          how to take care of your CouchDB
-        </span>
-      </p>
-      <p class="biglink">
-        <a class="biglink" href="{{ pathto("partitioned-dbs/index") }}">
-          Partitioned Databases
-        </a>
-        <br />
-        <span class="linkdescr">
-          how to use Partitioned Databases in CouchDB
-        </span>
-      </p>
-    </td>
-  </tr>
-</table>
-
-
-<h2>Links</h2>
-<table class="contentstable" align="center"><tr>
-  <td>
-    <p class="biglink"><a class="biglink" href="https://couchdb.apache.org/#download">Download CouchDB</a></p>
-    <p class="biglink"><a class="biglink" href="{{ pathto("http-api") }}">HTTP API Quick Reference</a></p>
-    <p class="biglink"><a class="biglink" href="{{ pathto("config-ref") }}">Configuration Quick Reference</a></p>
-  </td></tr>
-</table>
-{% endblock %}
-{% endblock %}