Use xxHash for couch_file document and attachment summary checksums

Use the 128-bit variant of xxHash, as it has the same output size as MD5, is
non-cryptographic, and is quite a bit faster [1].
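
As a quick illustration (a shell sketch; `exxhash:xxhash128/1` is the NIF
wrapper used in the benchmark below, assumed here to return the digest as a
binary), both digests are 16 bytes, so the on-disk checksum size is unchanged:
```
1> B = crypto:strong_rand_bytes(4096), ok.
ok
2> byte_size(crypto:hash(md5, B)).
16
3> byte_size(exxhash:xxhash128(B)).
16
```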

Writing xxHash checksums is disabled by default. This lets us make an
intermediate release that can be safely downgraded to, since it can read both
xxHash and MD5 checksums. In a future release the default will flip to `true`,
and it will still be possible to downgrade to the intermediate releases, which
are aware of xxHash checksums and won't interpret them as corrupt data.
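
A minimal sketch of the write side, assuming a boolean config flag; the config
section and key names below are made up and may differ from what the change
actually uses:
```
%% Hypothetical config names; only the shape of the logic is intended to match.
-module(checksum_write_sketch).
-export([checksum/1]).

checksum(Bin) ->
    case config:get_boolean("couchdb", "write_xxhash_checksums", false) of
        true -> exxhash:xxhash128(Bin);
        false -> crypto:hash(md5, Bin)
    end.
```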

If downgrade is not a concern, using xxHash checksums can yield a noticeable
speed improvement when reading larger documents (128KB+ or so).

A stats counter indicates whether any MD5 checksums are still being found
during normal cluster operation after xxHash checksums have been enabled.
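
Roughly, the counter is bumped whenever a checksum only matches via the MD5
fallback; a sketch using the usual `couch_stats:increment_counter/1` call with
a made-up metric name:
```
%% Sketch only; the metric name below is illustrative.
-module(legacy_checksum_sketch).
-export([maybe_count_legacy/2]).

maybe_count_legacy(Bin, Checksum) ->
    case crypto:hash(md5, Bin) of
        Checksum -> couch_stats:increment_counter([couch_file, old_checksums]);
        _ -> ok
    end.
```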

To avoid duplicating the verification logic, the checks for headers and blocks
are combined into one function with a flag indicating whether a header is being
verified. This preserves the previous behavior: for headers we don't want to
emit emergency logs when the check fails, since we may just be reading
left-over uncommitted data at the end of the file.
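
A sketch of that combined verification function, assuming the read-both
behavior described above; the function name, flag, metric name, and log text
are illustrative, not the actual couch_file code:
```
%% Try xxHash first, then fall back to MD5; only block (non-header) mismatches
%% are logged as emergencies, since a bad header may just be uncommitted data
%% at the end of the file that the caller skips over.
-module(verify_sketch).
-export([verify_checksum/3]).

verify_checksum(Bin, Checksum, IsHeader) ->
    case exxhash:xxhash128(Bin) of
        Checksum ->
            ok;
        _ ->
            case crypto:hash(md5, Bin) of
                Checksum ->
                    couch_stats:increment_counter([couch_file, old_checksums]),
                    ok;
                _ when IsHeader =:= true ->
                    {error, checksum_mismatch};
                _ ->
                    couch_log:emergency("~p: block checksum mismatch", [?MODULE]),
                    {error, checksum_mismatch}
            end
    end.
```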


While checking test coverage, we noticed block-level corruption was never
actually tested before, so a test was added for that as well, covering both the
xxHash and legacy MD5 cases.
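
The real test exercises couch_file itself; a simplified EUnit-style sketch of
the idea (module and function names are illustrative, and it checks the hashes
directly rather than going through couch_file):
```
-module(block_corruption_sketch_tests).
-include_lib("eunit/include/eunit.hrl").

%% Flip the first byte so the block no longer matches its stored checksum.
corrupt(<<First, Rest/binary>>) ->
    <<(First bxor 16#FF), Rest/binary>>.

block_corruption_test_() ->
    Block = crypto:strong_rand_bytes(4096),
    Bad = corrupt(Block),
    [
        ?_assertNotEqual(exxhash:xxhash128(Block), exxhash:xxhash128(Bad)),
        ?_assertNotEqual(crypto:hash(md5, Block), crypto:hash(md5, Bad))
    ].
```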

[1] Comparison of hashing a 4KB block 1,000,000 times. The printed values are
total seconds for the loop, i.e. average microseconds per call.
```
(node1@127.0.0.1)20> f(T), {T, ok} = timer:tc(fun() -> lists:foreach(fun (_) -> do_nothing_overhead end, lists:seq(1, 1000000)) end), (T/1000000.0).
0.167425
(node1@127.0.0.1)21> f(T), {T, ok} = timer:tc(fun() -> lists:foreach(fun (_) -> exxhash:xxhash128(B) end, lists:seq(1, 1000000)) end), (T/1000000).
0.770687
(node1@127.0.0.1)22> f(T), {T, ok} = timer:tc(fun() -> lists:foreach(fun (_) -> crypto:hash(md5, B) end, lists:seq(1, 1000000)) end), (T/1000000).
6.205445
```