Fix replicator handling of max_document_size when posting to _bulk_docs

Currently the `max_document_size` setting is a misnomer: it actually configures
the maximum request body size. For single-document requests that is a good
enough approximation, but a `_bulk_docs` update can fail the total request size
check even when every individual document stays below the limit. For example,
with a 1 MB limit, three 900 KB documents each pass on their own, yet their
combined `_bulk_docs` body does not.
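
To make the distinction concrete, here is a minimal sketch of the two checks
(illustrative Python with made-up names, not CouchDB's actual code or
configuration API):

```python
import json

MAX_DOCUMENT_SIZE = 1_000_000  # bytes; illustrative limit, not a real default

def request_body_ok(docs):
    # What the setting enforces today: the size of the entire
    # _bulk_docs request body.
    body = json.dumps({"docs": docs})
    return len(body.encode("utf-8")) <= MAX_DOCUMENT_SIZE

def each_document_ok(docs):
    # What the name suggests: every individual document under the limit.
    return all(len(json.dumps(doc).encode("utf-8")) <= MAX_DOCUMENT_SIZE
               for doc in docs)
```

A batch of three 400 KB documents passes `each_document_ok` but fails
`request_body_ok`; that mismatch is what this change works around.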

Before this fix, the `_bulk_docs` request made during replication would crash,
eventually leading to an infinite cycle of crashes and restarts (with
potentially large state being dumped to the logs), without the replication job
making progress.

The fix is to binary-split the batch until either all documents fit under the
`max_document_size` limit, or some individual documents fail to replicate.
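
A minimal sketch of that split strategy, assuming a hypothetical
`post_bulk_docs` callback that returns `False` when the server rejects the
request as too large (illustrative Python, not the replicator's actual Erlang
code):

```python
def post_with_split(docs, post_bulk_docs, stats):
    """Binary-split the batch until every sub-batch is accepted, or
    lone oversized documents are counted as write failures."""
    if not docs:
        return
    if post_bulk_docs(docs):
        return  # whole batch fit under the limit
    if len(docs) == 1:
        # A single document that still exceeds the limit can never be
        # replicated; count it instead of retrying forever.
        stats["doc_write_failures"] += 1
        return
    mid = len(docs) // 2
    post_with_split(docs[:mid], post_bulk_docs, stats)
    post_with_split(docs[mid:], post_bulk_docs, stats)
```

Each rejected batch is halved and retried, so the recursion bottoms out either
at sub-batches that fit or at single documents that can never fit, which are
counted rather than retried indefinitely.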

Documents that fail to replicate bump the `doc_write_failures` count.
Effectively, `max_document_size` acts as an implicit replication filter in
this case.

Jira: COUCHDB-3168