ARROW-3536: [C++] Add UTF8 validation functions

The baseline UTF8 decoder is adapted from Bjoern Hoehrmann's DFA-based implementation.
The common case of runs of ASCII characters benefits from a fast path that handles 8 bytes at a time.
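
For illustration, here is a minimal sketch of the approach, not the code merged in this PR: it uses Hoehrmann's published byte-class and transition tables, while the merged version uses a larger state table allowing single lookups (per the commit log below). The function name `ValidateUTF8Sketch` is hypothetical.

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

namespace {

constexpr uint32_t kAccept = 0;   // between characters
constexpr uint32_t kReject = 12;  // invalid sequence seen

// Hoehrmann's DFA tables: the first 256 entries map bytes to character
// classes, the remainder is the transition table (states pre-multiplied
// by 12 so that a transition is a single addition plus lookup).
const uint8_t utf8d[] = {
    0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,    // 00..1f
    0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,    // 20..3f
    0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,    // 40..5f
    0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0, 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,    // 60..7f
    1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1, 9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,9,    // 80..9f
    7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7, 7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,7,    // a0..bf
    8,8,2,2,2,2,2,2,2,2,2,2,2,2,2,2, 2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,2,    // c0..df
    10,3,3,3,3,3,3,3,3,3,3,3,3,4,3,3, 11,6,6,6,5,8,8,8,8,8,8,8,8,8,8,8,  // e0..ff
    0,12,24,36,60,96,84,12,12,12,48,72, 12,12,12,12,12,12,12,12,12,12,12,12,
    12, 0,12,12,12,12,12, 0,12, 0,12,12, 12,24,12,12,12,12,12,24,12,24,12,12,
    12,12,12,12,12,12,12,24,12,12,12,12, 12,24,12,12,12,12,12,12,12,24,12,12,
    12,12,12,12,12,12,12,36,12,36,12,12, 12,36,12,12,12,12,12,36,12,36,12,12,
    12,36,12,12,12,12,12,12,12,12,12,12,
};

// Validate that [data, data + size) is well-formed UTF8.
bool ValidateUTF8Sketch(const uint8_t* data, size_t size) {
  const uint8_t* end = data + size;
  uint32_t state = kAccept;
  while (data < end) {
    // Fast path: while between characters, skip runs of ASCII 8 bytes at
    // a time.  memcpy compiles to a single load and avoids the undefined
    // behavior of an unaligned pointer dereference.
    while (state == kAccept && end - data >= 8) {
      uint64_t chunk;
      std::memcpy(&chunk, data, 8);
      if (chunk & 0x8080808080808080ULL) break;  // non-ASCII byte ahead
      data += 8;
    }
    if (data == end) break;
    // Slow path: one DFA transition per byte (byte -> class lookup,
    // then state + class -> next state lookup).
    state = utf8d[256 + state + utf8d[*data++]];
    if (state == kReject) return false;
  }
  // Valid only if we end between characters, not mid-sequence.
  return state == kAccept;
}

}  // namespace
```

The two-lookups-per-byte form above is the baseline; collapsing them into one lookup via an enlarged table is the optimization mentioned in the commits below.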

Benchmark results (on a Ryzen 7 machine with gcc 7.3):
```
-----------------------------------------------------------------------------
Benchmark                                      Time           CPU Iterations
-----------------------------------------------------------------------------
BM_ValidateTinyAscii/repeats:1                 3 ns          3 ns  245245630   3.26202GB/s
BM_ValidateTinyNonAscii/repeats:1              7 ns          7 ns  104679950   1.54295GB/s
BM_ValidateSmallAscii/repeats:1               10 ns         10 ns   66365983   13.0928GB/s
BM_ValidateSmallAlmostAscii/repeats:1         37 ns         37 ns   18755439   3.69415GB/s
BM_ValidateSmallNonAscii/repeats:1            68 ns         68 ns   10267387   1.82934GB/s
BM_ValidateLargeAscii/repeats:1             4140 ns       4140 ns     171331   22.5003GB/s
BM_ValidateLargeAlmostAscii/repeats:1      24472 ns      24468 ns      28565   3.80816GB/s
BM_ValidateLargeNonAscii/repeats:1         50420 ns      50411 ns      13830   1.84927GB/s
```

Tiny strings are probably the most important case for CSV type inference, which validates many short field values.
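
The `/repeats:1` suffix and the GB/s column indicate Google Benchmark output with byte throughput reported. A hypothetical harness for one of the tiny cases might look like the sketch below; the actual input strings are not shown in this PR, so the 8-byte literal and the `ValidateUTF8Sketch` entry point are assumptions.

```cpp
#include <cstdint>
#include <string>

#include "benchmark/benchmark.h"

static void BM_ValidateTinyAscii(benchmark::State& state) {
  const std::string s = "abcdefgh";  // tiny all-ASCII input (length assumed)
  const auto* data = reinterpret_cast<const uint8_t*>(s.data());
  for (auto _ : state) {
    // DoNotOptimize keeps the compiler from eliding the call.
    benchmark::DoNotOptimize(ValidateUTF8Sketch(data, s.size()));
  }
  // SetBytesProcessed makes Google Benchmark print the GB/s column.
  state.SetBytesProcessed(state.iterations() * s.size());
}
BENCHMARK(BM_ValidateTinyAscii)->Repetitions(1);

BENCHMARK_MAIN();
```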

PS: benchmarks on the same machine with clang 6.0:
```
-----------------------------------------------------------------------------
Benchmark                                      Time           CPU Iterations
-----------------------------------------------------------------------------
BM_ValidateTinyAscii/repeats:1                 3 ns          3 ns  213945214   2.84658GB/s
BM_ValidateTinyNonAscii/repeats:1              8 ns          8 ns   90916423   1.33072GB/s
BM_ValidateSmallAscii/repeats:1                7 ns          7 ns   91498265   17.4425GB/s
BM_ValidateSmallAlmostAscii/repeats:1         34 ns         34 ns   20750233   4.08138GB/s
BM_ValidateSmallNonAscii/repeats:1            58 ns         58 ns   12063206   2.14002GB/s
BM_ValidateLargeAscii/repeats:1             3999 ns       3999 ns     175099   23.2937GB/s
BM_ValidateLargeAlmostAscii/repeats:1      21783 ns      21779 ns      31738   4.27822GB/s
BM_ValidateLargeNonAscii/repeats:1         55162 ns      55153 ns      12526   1.69028GB/s
```

Author: Antoine Pitrou <antoine@python.org>

Closes #2916 from pitrou/ARROW-3536-utf8-validation and squashes the following commits:

9c9713b78 <Antoine Pitrou> Improve benchmarks
e6f23963a <Antoine Pitrou> Use a larger state table allowing for single lookups
29d6e347c <Antoine Pitrou> Help clang code gen
e621b220f <Antoine Pitrou> Use memcpy for safe aligned reads, and improve speed of non-ASCII runs
89f6843d9 <Antoine Pitrou> ARROW-3536:  Add UTF8 validation functions