| <div class="section" id="reading-and-writing-the-apache-parquet-format"> |
| <span id="parquet"></span><h1>Reading and Writing the Apache Parquet Format<a class="headerlink" href="#reading-and-writing-the-apache-parquet-format" title="Permalink to this headline">¶</a></h1> |
| <p>The <a class="reference external" href="http://parquet.apache.org/">Apache Parquet</a> project provides a |
| standardized open-source columnar storage format for use in data analysis |
| systems. It was created originally for use in <a class="reference external" href="http://hadoop.apache.org/">Apache Hadoop</a> with systems like <a class="reference external" href="http://drill.apache.org">Apache Drill</a>, <a class="reference external" href="http://hive.apache.org">Apache Hive</a>, <a class="reference external" href="http://impala.apache.org">Apache |
| Impala (incubating)</a>, and <a class="reference external" href="http://spark.apache.org">Apache Spark</a> adopting it as a shared standard for high |
| performance data IO.</p> |
| <p>Apache Arrow is an ideal in-memory transport layer for data that is being read |
| or written with Parquet files. We have been concurrently developing the <a class="reference external" href="http://github.com/apache/parquet-cpp">C++ |
| implementation of Apache Parquet</a>, |
| which includes a native, multithreaded C++ adapter to and from in-memory Arrow |
| data. PyArrow includes Python bindings to this code, which thus enables reading |
| and writing Parquet files with pandas as well.</p> |
| <div class="section" id="obtaining-pyarrow-with-parquet-support"> |
| <h2>Obtaining pyarrow with Parquet Support<a class="headerlink" href="#obtaining-pyarrow-with-parquet-support" title="Permalink to this headline">¶</a></h2> |
| <p>If you installed <code class="docutils literal notranslate"><span class="pre">pyarrow</span></code> with pip or conda, it should be built with Parquet |
| support bundled:</p> |
| <div class="highlight-ipython notranslate"><div class="highlight"><pre><span></span><span class="gp">In [1]: </span><span class="kn">import</span> <span class="nn">pyarrow.parquet</span> <span class="kn">as</span> <span class="nn">pq</span> |
| </pre></div> |
| </div> |
| <p>If you are building <code class="docutils literal notranslate"><span class="pre">pyarrow</span></code> from source, you must use |
| <code class="docutils literal notranslate"><span class="pre">-DARROW_PARQUET=ON</span></code> when compiling the C++ libraries and enable the Parquet |
| extensions when building <code class="docutils literal notranslate"><span class="pre">pyarrow</span></code>. See the <a class="reference internal" href="../developers/python.html#python-development"><span class="std std-ref">Python Development</span></a> page for more details.</p> |
| </div> |
| <div class="section" id="reading-and-writing-single-files"> |
| <h2>Reading and Writing Single Files<a class="headerlink" href="#reading-and-writing-single-files" title="Permalink to this headline">¶</a></h2> |
| <p>The functions <a class="reference internal" href="generated/pyarrow.parquet.read_table.html#pyarrow.parquet.read_table" title="pyarrow.parquet.read_table"><code class="xref py py-func docutils literal notranslate"><span class="pre">read_table()</span></code></a> and <a class="reference internal" href="generated/pyarrow.parquet.write_table.html#pyarrow.parquet.write_table" title="pyarrow.parquet.write_table"><code class="xref py py-func docutils literal notranslate"><span class="pre">write_table()</span></code></a> |
| read and write the <a class="reference internal" href="data.html#data-table"><span class="std std-ref">pyarrow.Table</span></a> object, respectively.</p> |
| <p>Let’s look at a simple table:</p> |
| <div class="highlight-ipython notranslate"><div class="highlight"><pre><span></span><span class="gp">In [2]: </span><span class="kn">import</span> <span class="nn">numpy</span> <span class="kn">as</span> <span class="nn">np</span> |
| |
| <span class="gp">In [3]: </span><span class="kn">import</span> <span class="nn">pandas</span> <span class="kn">as</span> <span class="nn">pd</span> |
| |
| <span class="gp">In [4]: </span><span class="kn">import</span> <span class="nn">pyarrow</span> <span class="kn">as</span> <span class="nn">pa</span> |
| |
| <span class="gp">In [5]: </span><span class="n">df</span> <span class="o">=</span> <span class="n">pd</span><span class="o">.</span><span class="n">DataFrame</span><span class="p">({</span><span class="s1">'one'</span><span class="p">:</span> <span class="p">[</span><span class="o">-</span><span class="mi">1</span><span class="p">,</span> <span class="n">np</span><span class="o">.</span><span class="n">nan</span><span class="p">,</span> <span class="mf">2.5</span><span class="p">],</span> |
| <span class="gp"> ...: </span> <span class="s1">'two'</span><span class="p">:</span> <span class="p">[</span><span class="s1">'foo'</span><span class="p">,</span> <span class="s1">'bar'</span><span class="p">,</span> <span class="s1">'baz'</span><span class="p">],</span> |
| <span class="gp"> ...: </span> <span class="s1">'three'</span><span class="p">:</span> <span class="p">[</span><span class="bp">True</span><span class="p">,</span> <span class="bp">False</span><span class="p">,</span> <span class="bp">True</span><span class="p">]},</span> |
| <span class="gp"> ...: </span> <span class="n">index</span><span class="o">=</span><span class="nb">list</span><span class="p">(</span><span class="s1">'abc'</span><span class="p">))</span> |
| <span class="gp"> ...: </span> |
| |
| <span class="gp">In [6]: </span><span class="n">table</span> <span class="o">=</span> <span class="n">pa</span><span class="o">.</span><span class="n">Table</span><span class="o">.</span><span class="n">from_pandas</span><span class="p">(</span><span class="n">df</span><span class="p">)</span> |
| </pre></div> |
| </div> |
| <p>We write this to Parquet format with <code class="docutils literal notranslate"><span class="pre">write_table</span></code>:</p> |
| <div class="highlight-ipython notranslate"><div class="highlight"><pre><span></span><span class="gp">In [7]: </span><span class="kn">import</span> <span class="nn">pyarrow.parquet</span> <span class="kn">as</span> <span class="nn">pq</span> |
| |
| <span class="gp">In [8]: </span><span class="n">pq</span><span class="o">.</span><span class="n">write_table</span><span class="p">(</span><span class="n">table</span><span class="p">,</span> <span class="s1">'example.parquet'</span><span class="p">)</span> |
| </pre></div> |
| </div> |
| <p>This creates a single Parquet file. In practice, a Parquet dataset may consist |
| of many files in many directories. We can read a single file back with |
| <code class="docutils literal notranslate"><span class="pre">read_table</span></code>:</p> |
| <div class="highlight-ipython notranslate"><div class="highlight"><pre><span></span><span class="gp">In [9]: </span><span class="n">table2</span> <span class="o">=</span> <span class="n">pq</span><span class="o">.</span><span class="n">read_table</span><span class="p">(</span><span class="s1">'example.parquet'</span><span class="p">)</span> |
| |
| <span class="gp">In [10]: </span><span class="n">table2</span><span class="o">.</span><span class="n">to_pandas</span><span class="p">()</span> |
| <span class="gh">Out[10]: </span><span class="go"></span> |
| <span class="go"> one two three</span> |
| <span class="go">a -1.0 foo True</span> |
| <span class="go">b NaN bar False</span> |
| <span class="go">c 2.5 baz True</span> |
| </pre></div> |
| </div> |
| <p>You can pass a subset of columns to read, which can be much faster than reading |
| the whole file (due to the columnar layout):</p> |
| <div class="highlight-ipython notranslate"><div class="highlight"><pre><span></span><span class="gp">In [11]: </span><span class="n">pq</span><span class="o">.</span><span class="n">read_table</span><span class="p">(</span><span class="s1">'example.parquet'</span><span class="p">,</span> <span class="n">columns</span><span class="o">=</span><span class="p">[</span><span class="s1">'one'</span><span class="p">,</span> <span class="s1">'three'</span><span class="p">])</span> |
| <span class="gh">Out[11]: </span><span class="go"></span> |
| <span class="go">pyarrow.Table</span> |
| <span class="go">one: double</span> |
| <span class="go">three: bool</span> |
| </pre></div> |
| </div> |
| <p>When reading a subset of columns from a file that used a Pandas dataframe as the |
| source, we use <code class="docutils literal notranslate"><span class="pre">read_pandas</span></code> to maintain any additional index column data:</p> |
| <div class="highlight-ipython notranslate"><div class="highlight"><pre><span></span><span class="gp">In [12]: </span><span class="n">pq</span><span class="o">.</span><span class="n">read_pandas</span><span class="p">(</span><span class="s1">'example.parquet'</span><span class="p">,</span> <span class="n">columns</span><span class="o">=</span><span class="p">[</span><span class="s1">'two'</span><span class="p">])</span><span class="o">.</span><span class="n">to_pandas</span><span class="p">()</span> |
| <span class="gh">Out[12]: </span><span class="go"></span> |
| <span class="go"> two</span> |
| <span class="go">a foo</span> |
| <span class="go">b bar</span> |
| <span class="go">c baz</span> |
| </pre></div> |
| </div> |
| <p>We need not use a string to specify the origin of the file. It can be any of:</p> |
| <ul class="simple"> |
| <li><p>A file path as a string</p></li> |
| <li><p>A <a class="reference internal" href="memory.html#io-native-file"><span class="std std-ref">NativeFile</span></a> from PyArrow</p></li> |
| <li><p>A Python file object</p></li> |
| </ul> |
| <p>In general, a Python file object will have the worst read performance, while a |
| string file path or an instance of <a class="reference internal" href="generated/pyarrow.NativeFile.html#pyarrow.NativeFile" title="pyarrow.NativeFile"><code class="xref py py-class docutils literal notranslate"><span class="pre">NativeFile</span></code></a> (especially memory |
| maps) will perform the best.</p> |
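<p>For instance, a minimal sketch of reading through a memory map, reusing the
<code class="docutils literal notranslate"><span class="pre">example.parquet</span></code> file created above:</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span>import pyarrow as pa
import pyarrow.parquet as pq

# Memory-map the file; reads then avoid extra copies into process memory
with pa.memory_map('example.parquet', 'r') as source:
    table = pq.read_table(source)
</pre></div>
</div>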
| <div class="section" id="parquet-file-writing-options"> |
| <h3>Parquet file writing options<a class="headerlink" href="#parquet-file-writing-options" title="Permalink to this headline">¶</a></h3> |
| <p><a class="reference internal" href="generated/pyarrow.parquet.write_table.html#pyarrow.parquet.write_table" title="pyarrow.parquet.write_table"><code class="xref py py-func docutils literal notranslate"><span class="pre">write_table()</span></code></a> has a number of options to |
| control various settings when writing a Parquet file.</p> |
| <ul class="simple"> |
| <li><p><code class="docutils literal notranslate"><span class="pre">version</span></code>, the Parquet format version to use, whether <code class="docutils literal notranslate"><span class="pre">'1.0'</span></code> |
| for compatibility with older readers, or <code class="docutils literal notranslate"><span class="pre">'2.0'</span></code> to unlock more |
| recent features.</p></li> |
| <li><p><code class="docutils literal notranslate"><span class="pre">data_page_size</span></code>, to control the approximate size of encoded data |
| pages within a column chunk. This currently defaults to 1MB</p></li> |
| <li><p><code class="docutils literal notranslate"><span class="pre">flavor</span></code>, to set compatibility options particular to a Parquet |
| consumer like <code class="docutils literal notranslate"><span class="pre">'spark'</span></code> for Apache Spark.</p></li> |
| </ul> |
| <p>See the <a class="reference internal" href="generated/pyarrow.parquet.write_table.html#pyarrow.parquet.write_table" title="pyarrow.parquet.write_table"><code class="xref py py-func docutils literal notranslate"><span class="pre">write_table()</span></code></a> docstring for more details.</p> |
<p>There are some additional data type handling-specific options
described below.</p>
</div>
<div class="section" id="omitting-the-dataframe-index">
<h3>Omitting the DataFrame index<a class="headerlink" href="#omitting-the-dataframe-index" title="Permalink to this headline">¶</a></h3>
<p>When using <code class="docutils literal notranslate"><span class="pre">pa.Table.from_pandas</span></code> to convert to an Arrow table, by default
one or more special columns are added to keep track of the index (row
labels). Storing the index takes extra space, so if your index is not valuable,
you may choose to omit it by passing <code class="docutils literal notranslate"><span class="pre">preserve_index=False</span></code>:</p>
<div class="highlight-ipython notranslate"><div class="highlight"><pre><span></span><span class="gp">In [13]: </span><span class="n">df</span> <span class="o">=</span> <span class="n">pd</span><span class="o">.</span><span class="n">DataFrame</span><span class="p">({</span><span class="s1">'one'</span><span class="p">:</span> <span class="p">[</span><span class="o">-</span><span class="mi">1</span><span class="p">,</span> <span class="n">np</span><span class="o">.</span><span class="n">nan</span><span class="p">,</span> <span class="mf">2.5</span><span class="p">],</span>
<span class="gp">   ....: </span>                   <span class="s1">'two'</span><span class="p">:</span> <span class="p">[</span><span class="s1">'foo'</span><span class="p">,</span> <span class="s1">'bar'</span><span class="p">,</span> <span class="s1">'baz'</span><span class="p">],</span>
<span class="gp">   ....: </span>                   <span class="s1">'three'</span><span class="p">:</span> <span class="p">[</span><span class="bp">True</span><span class="p">,</span> <span class="bp">False</span><span class="p">,</span> <span class="bp">True</span><span class="p">]},</span>
<span class="gp">   ....: </span>                  <span class="n">index</span><span class="o">=</span><span class="nb">list</span><span class="p">(</span><span class="s1">'abc'</span><span class="p">))</span>
<span class="gp">   ....: </span>

<span class="gp">In [14]: </span><span class="n">df</span>
<span class="gh">Out[14]: </span><span class="go"></span>
<span class="go">   one  two  three</span>
<span class="go">a -1.0  foo   True</span>
<span class="go">b  NaN  bar  False</span>
<span class="go">c  2.5  baz   True</span>

<span class="gp">In [15]: </span><span class="n">table</span> <span class="o">=</span> <span class="n">pa</span><span class="o">.</span><span class="n">Table</span><span class="o">.</span><span class="n">from_pandas</span><span class="p">(</span><span class="n">df</span><span class="p">,</span> <span class="n">preserve_index</span><span class="o">=</span><span class="bp">False</span><span class="p">)</span>
</pre></div>
</div>
<p>Then we have:</p>
<div class="highlight-ipython notranslate"><div class="highlight"><pre><span></span><span class="gp">In [16]: </span><span class="n">pq</span><span class="o">.</span><span class="n">write_table</span><span class="p">(</span><span class="n">table</span><span class="p">,</span> <span class="s1">'example_noindex.parquet'</span><span class="p">)</span>

<span class="gp">In [17]: </span><span class="n">t</span> <span class="o">=</span> <span class="n">pq</span><span class="o">.</span><span class="n">read_table</span><span class="p">(</span><span class="s1">'example_noindex.parquet'</span><span class="p">)</span>

<span class="gp">In [18]: </span><span class="n">t</span><span class="o">.</span><span class="n">to_pandas</span><span class="p">()</span>
<span class="gh">Out[18]: </span><span class="go"></span>
<span class="go">   one  two  three</span>
<span class="go">0 -1.0  foo   True</span>
<span class="go">1  NaN  bar  False</span>
<span class="go">2  2.5  baz   True</span>
</pre></div>
</div>
<p>Here you see the index did not survive the round trip.</p>
</div>
</div>
<div class="section" id="finer-grained-reading-and-writing">
<h2>Finer-grained Reading and Writing<a class="headerlink" href="#finer-grained-reading-and-writing" title="Permalink to this headline">¶</a></h2>
<p><code class="docutils literal notranslate"><span class="pre">read_table</span></code> uses the <a class="reference internal" href="generated/pyarrow.parquet.ParquetFile.html#pyarrow.parquet.ParquetFile" title="pyarrow.parquet.ParquetFile"><code class="xref py py-class docutils literal notranslate"><span class="pre">ParquetFile</span></code></a> class, which has other features:</p>
<div class="highlight-ipython notranslate"><div class="highlight"><pre><span></span><span class="gp">In [19]: </span><span class="n">parquet_file</span> <span class="o">=</span> <span class="n">pq</span><span class="o">.</span><span class="n">ParquetFile</span><span class="p">(</span><span class="s1">'example.parquet'</span><span class="p">)</span>

<span class="gp">In [20]: </span><span class="n">parquet_file</span><span class="o">.</span><span class="n">metadata</span>
<span class="gh">Out[20]: </span><span class="go"></span>
<span class="go">&lt;pyarrow._parquet.FileMetaData object at 0x7fe9ade1fb48&gt;</span>
<span class="go">  created_by: parquet-cpp version 1.5.1-SNAPSHOT</span>
<span class="go">  num_columns: 4</span>
<span class="go">  num_rows: 3</span>
<span class="go">  num_row_groups: 1</span>
<span class="go">  format_version: 1.0</span>
<span class="go">  serialized_size: 2636</span>

<span class="gp">In [21]: </span><span class="n">parquet_file</span><span class="o">.</span><span class="n">schema</span>
<span class="gh">Out[21]: </span><span class="go"></span>
<span class="go">&lt;pyarrow._parquet.ParquetSchema object at 0x7fe9add51d88&gt;</span>
<span class="go">required group field_id=0 schema {</span>
<span class="go">  optional double field_id=1 one;</span>
<span class="go">  optional binary field_id=2 two (String);</span>
<span class="go">  optional boolean field_id=3 three;</span>
<span class="go">  optional binary field_id=4 __index_level_0__ (String);</span>
<span class="go">}</span>
</pre></div>
</div>
<p>As described in the <a class="reference external" href="https://github.com/apache/parquet-format">Apache Parquet format</a> specification, a Parquet file consists of
multiple row groups. <code class="docutils literal notranslate"><span class="pre">read_table</span></code> will read all of the row groups and
concatenate them into a single table. You can read individual row groups with
<code class="docutils literal notranslate"><span class="pre">read_row_group</span></code>:</p>
| <div class="highlight-ipython notranslate"><div class="highlight"><pre><span></span><span class="gp">In [22]: </span><span class="n">parquet_file</span><span class="o">.</span><span class="n">num_row_groups</span> |
| <span class="gh">Out[22]: </span><span class="go">1</span> |
| |
| <span class="gp">In [23]: </span><span class="n">parquet_file</span><span class="o">.</span><span class="n">read_row_group</span><span class="p">(</span><span class="mi">0</span><span class="p">)</span> |
| <span class="gh">Out[23]: </span><span class="go"></span> |
| <span class="go">pyarrow.Table</span> |
| <span class="go">one: double</span> |
| <span class="go">two: string</span> |
| <span class="go">three: bool</span> |
| <span class="go">__index_level_0__: string</span> |
| </pre></div> |
| </div> |
| <p>We can similarly write a Parquet file with multiple row groups by using |
| <code class="docutils literal notranslate"><span class="pre">ParquetWriter</span></code>:</p> |
| <div class="highlight-ipython notranslate"><div class="highlight"><pre><span></span><span class="gp">In [24]: </span><span class="n">writer</span> <span class="o">=</span> <span class="n">pq</span><span class="o">.</span><span class="n">ParquetWriter</span><span class="p">(</span><span class="s1">'example2.parquet'</span><span class="p">,</span> <span class="n">table</span><span class="o">.</span><span class="n">schema</span><span class="p">)</span> |
| |
| <span class="gp">In [25]: </span><span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="mi">3</span><span class="p">):</span> |
| <span class="gp"> ....: </span> <span class="n">writer</span><span class="o">.</span><span class="n">write_table</span><span class="p">(</span><span class="n">table</span><span class="p">)</span> |
| <span class="gp"> ....: </span> |
| |
| <span class="gp">In [26]: </span><span class="n">writer</span><span class="o">.</span><span class="n">close</span><span class="p">()</span> |
| |
| <span class="gp">In [27]: </span><span class="n">pf2</span> <span class="o">=</span> <span class="n">pq</span><span class="o">.</span><span class="n">ParquetFile</span><span class="p">(</span><span class="s1">'example2.parquet'</span><span class="p">)</span> |
| |
| <span class="gp">In [28]: </span><span class="n">pf2</span><span class="o">.</span><span class="n">num_row_groups</span> |
| <span class="gh">Out[28]: </span><span class="go">3</span> |
| </pre></div> |
| </div> |
<p>Alternatively, Python <code class="docutils literal notranslate"><span class="pre">with</span></code> syntax can also be used:</p>
| <div class="highlight-ipython notranslate"><div class="highlight"><pre><span></span><span class="gp">In [29]: </span><span class="k">with</span> <span class="n">pq</span><span class="o">.</span><span class="n">ParquetWriter</span><span class="p">(</span><span class="s1">'example3.parquet'</span><span class="p">,</span> <span class="n">table</span><span class="o">.</span><span class="n">schema</span><span class="p">)</span> <span class="k">as</span> <span class="n">writer</span><span class="p">:</span> |
| <span class="gp"> ....: </span> <span class="k">for</span> <span class="n">i</span> <span class="ow">in</span> <span class="nb">range</span><span class="p">(</span><span class="mi">3</span><span class="p">):</span> |
| <span class="gp"> ....: </span> <span class="n">writer</span><span class="o">.</span><span class="n">write_table</span><span class="p">(</span><span class="n">table</span><span class="p">)</span> |
| <span class="gp"> ....: </span> |
| </pre></div> |
| </div> |
| </div> |
| <div class="section" id="inspecting-the-parquet-file-metadata"> |
| <h2>Inspecting the Parquet File Metadata<a class="headerlink" href="#inspecting-the-parquet-file-metadata" title="Permalink to this headline">¶</a></h2> |
| <p>The <code class="docutils literal notranslate"><span class="pre">FileMetaData</span></code> of a Parquet file can be accessed through |
| <a class="reference internal" href="generated/pyarrow.parquet.ParquetFile.html#pyarrow.parquet.ParquetFile" title="pyarrow.parquet.ParquetFile"><code class="xref py py-class docutils literal notranslate"><span class="pre">ParquetFile</span></code></a> as shown above:</p> |
| <div class="highlight-ipython notranslate"><div class="highlight"><pre><span></span><span class="gp">In [30]: </span><span class="n">parquet_file</span> <span class="o">=</span> <span class="n">pq</span><span class="o">.</span><span class="n">ParquetFile</span><span class="p">(</span><span class="s1">'example.parquet'</span><span class="p">)</span> |
| |
| <span class="gp">In [31]: </span><span class="n">metadata</span> <span class="o">=</span> <span class="n">parquet_file</span><span class="o">.</span><span class="n">metadata</span> |
| </pre></div> |
| </div> |
| <p>or can also be read directly using <a class="reference internal" href="generated/pyarrow.parquet.read_metadata.html#pyarrow.parquet.read_metadata" title="pyarrow.parquet.read_metadata"><code class="xref py py-func docutils literal notranslate"><span class="pre">read_metadata()</span></code></a>:</p> |
| <div class="highlight-ipython notranslate"><div class="highlight"><pre><span></span><span class="gp">In [32]: </span><span class="n">metadata</span> <span class="o">=</span> <span class="n">pq</span><span class="o">.</span><span class="n">read_metadata</span><span class="p">(</span><span class="s1">'example.parquet'</span><span class="p">)</span> |
| |
| <span class="gp">In [33]: </span><span class="n">metadata</span> |
| <span class="gh">Out[33]: </span><span class="go"></span> |
| <span class="go"><pyarrow._parquet.FileMetaData object at 0x7fe9adea04c0></span> |
| <span class="go"> created_by: parquet-cpp version 1.5.1-SNAPSHOT</span> |
| <span class="go"> num_columns: 4</span> |
| <span class="go"> num_rows: 3</span> |
| <span class="go"> num_row_groups: 1</span> |
| <span class="go"> format_version: 1.0</span> |
| <span class="go"> serialized_size: 2636</span> |
| </pre></div> |
| </div> |
<p>The returned <code class="docutils literal notranslate"><span class="pre">FileMetaData</span></code> object allows you to inspect the
<a class="reference external" href="https://github.com/apache/parquet-format#metadata">Parquet file metadata</a>,
such as the row group and column chunk metadata and statistics:</p>
| <div class="highlight-ipython notranslate"><div class="highlight"><pre><span></span><span class="gp">In [34]: </span><span class="n">metadata</span><span class="o">.</span><span class="n">row_group</span><span class="p">(</span><span class="mi">0</span><span class="p">)</span> |
| <span class="gh">Out[34]: </span><span class="go"></span> |
| <span class="go"><pyarrow._parquet.RowGroupMetaData object at 0x7fe9ae077728></span> |
| <span class="go"> num_columns: 4</span> |
| <span class="go"> num_rows: 3</span> |
| <span class="go"> total_byte_size: 296</span> |
| |
| <span class="gp">In [35]: </span><span class="n">metadata</span><span class="o">.</span><span class="n">row_group</span><span class="p">(</span><span class="mi">0</span><span class="p">)</span><span class="o">.</span><span class="n">column</span><span class="p">(</span><span class="mi">0</span><span class="p">)</span> |
| <span class="gh">Out[35]: </span><span class="go"></span> |
| <span class="go"><pyarrow._parquet.ColumnChunkMetaData object at 0x7fe9adfd8db8></span> |
| <span class="go"> file_offset: 108</span> |
| <span class="go"> file_path: </span> |
| <span class="go"> physical_type: DOUBLE</span> |
| <span class="go"> num_values: 3</span> |
| <span class="go"> path_in_schema: one</span> |
| <span class="go"> is_stats_set: True</span> |
| <span class="go"> statistics:</span> |
| <span class="go"> <pyarrow._parquet.Statistics object at 0x7fe9adfd80e8></span> |
| <span class="go"> has_min_max: True</span> |
| <span class="go"> min: -1.0</span> |
| <span class="go"> max: 2.5</span> |
| <span class="go"> null_count: 1</span> |
| <span class="go"> distinct_count: 0</span> |
| <span class="go"> num_values: 2</span> |
| <span class="go"> physical_type: DOUBLE</span> |
| <span class="go"> logical_type: None</span> |
| <span class="go"> converted_type (legacy): NONE</span> |
| <span class="go"> compression: SNAPPY</span> |
| <span class="go"> encodings: ('PLAIN_DICTIONARY', 'PLAIN', 'RLE')</span> |
| <span class="go"> has_dictionary_page: True</span> |
| <span class="go"> dictionary_page_offset: 4</span> |
| <span class="go"> data_page_offset: 36</span> |
| <span class="go"> total_compressed_size: 104</span> |
| <span class="go"> total_uncompressed_size: 100</span> |
| </pre></div> |
| </div> |
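<p>The individual statistics are also exposed as attributes, so such checks can
be scripted; a small sketch using the <code class="docutils literal notranslate"><span class="pre">metadata</span></code> object read above:</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span># Access row group / column chunk statistics programmatically
stats = metadata.row_group(0).column(0).statistics
print(stats.min, stats.max, stats.null_count)  # -> -1.0 2.5 1
</pre></div>
</div>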
</div>
<div class="section" id="data-type-handling">
<h2>Data Type Handling<a class="headerlink" href="#data-type-handling" title="Permalink to this headline">¶</a></h2>
<div class="section" id="reading-types-as-dictionaryarray">
<h3>Reading types as DictionaryArray<a class="headerlink" href="#reading-types-as-dictionaryarray" title="Permalink to this headline">¶</a></h3>
<p>The <code class="docutils literal notranslate"><span class="pre">read_dictionary</span></code> option in <code class="docutils literal notranslate"><span class="pre">read_table</span></code> and <code class="docutils literal notranslate"><span class="pre">ParquetDataset</span></code> will
cause columns to be read as <code class="docutils literal notranslate"><span class="pre">DictionaryArray</span></code>, which will become
<code class="docutils literal notranslate"><span class="pre">pandas.Categorical</span></code> when converted to pandas. This option is only valid for
string and binary column types, and it can yield significantly lower memory use
and improved performance for columns with many repeated string values.</p>
| <div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="n">pq</span><span class="o">.</span><span class="n">read_table</span><span class="p">(</span><span class="n">table</span><span class="p">,</span> <span class="n">where</span><span class="p">,</span> <span class="n">read_dictionary</span><span class="o">=</span><span class="p">[</span><span class="s1">'binary_c0'</span><span class="p">,</span> <span class="s1">'stringb_c2'</span><span class="p">])</span> |
| </pre></div> |
| </div> |
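<p>As a self-contained sketch (the <code class="docutils literal notranslate"><span class="pre">city</span></code> column and file name are
hypothetical), a column of repeated strings round-trips to a pandas Categorical:</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span>t = pa.table({'city': ['NYC', 'NYC', 'SF', 'NYC']})
pq.write_table(t, 'cities.parquet')
t2 = pq.read_table('cities.parquet', read_dictionary=['city'])
t2.to_pandas()['city'].dtype  # -> category
</pre></div>
</div>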
</div>
<div class="section" id="storing-timestamps">
<h3>Storing timestamps<a class="headerlink" href="#storing-timestamps" title="Permalink to this headline">¶</a></h3>
<p>Some Parquet readers may only support timestamps stored in millisecond
(<code class="docutils literal notranslate"><span class="pre">'ms'</span></code>) or microsecond (<code class="docutils literal notranslate"><span class="pre">'us'</span></code>) resolution. Since pandas uses nanoseconds
to represent timestamps, this can occasionally be a nuisance. By default
(when writing version 1.0 Parquet files), the nanoseconds will be cast to
microseconds (‘us’).</p>
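<p>A small sketch of this default behavior (the <code class="docutils literal notranslate"><span class="pre">ts</span></code> column and file
name are hypothetical):</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span># pandas timestamps are nanosecond resolution
df = pd.DataFrame({'ts': pd.to_datetime(['2020-01-01 00:00:00.123456'])})
t = pa.Table.from_pandas(df)
pq.write_table(t, 'ts_example.parquet')  # default version='1.0' casts ns to us
pq.read_table('ts_example.parquet').column('ts').type  # -> timestamp[us]
</pre></div>
</div>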
<p>In addition, we provide the <code class="docutils literal notranslate"><span class="pre">coerce_timestamps</span></code> option to allow you to select
the desired resolution:</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="n">pq</span><span class="o">.</span><span class="n">write_table</span><span class="p">(</span><span class="n">table</span><span class="p">,</span> <span class="n">where</span><span class="p">,</span> <span class="n">coerce_timestamps</span><span class="o">=</span><span class="s1">'ms'</span><span class="p">)</span>
</pre></div>
</div>
<p>If a cast to a lower resolution would result in a loss of data, an exception
is raised by default. This can be suppressed by passing
<code class="docutils literal notranslate"><span class="pre">allow_truncated_timestamps=True</span></code>:</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="n">pq</span><span class="o">.</span><span class="n">write_table</span><span class="p">(</span><span class="n">table</span><span class="p">,</span> <span class="n">where</span><span class="p">,</span> <span class="n">coerce_timestamps</span><span class="o">=</span><span class="s1">'ms'</span><span class="p">,</span>
               <span class="n">allow_truncated_timestamps</span><span class="o">=</span><span class="bp">True</span><span class="p">)</span>
</pre></div>
</div>
<p>Timestamps with nanosecond resolution can be stored without casting when using
the more recent Parquet format version 2.0:</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="n">pq</span><span class="o">.</span><span class="n">write_table</span><span class="p">(</span><span class="n">table</span><span class="p">,</span> <span class="n">where</span><span class="p">,</span> <span class="n">version</span><span class="o">=</span><span class="s1">'2.0'</span><span class="p">)</span>
</pre></div>
</div>
<p>However, many Parquet readers do not yet support this newer format version, and
therefore the default is to write version 1.0 files. When compatibility across
different processing frameworks is required, it is recommended to use the
default version 1.0.</p>
<p>Older Parquet implementations use <code class="docutils literal notranslate"><span class="pre">INT96</span></code>-based storage of
timestamps, but this is now deprecated. This includes some older
versions of Apache Impala and Apache Spark. To write timestamps in
this format, set the <code class="docutils literal notranslate"><span class="pre">use_deprecated_int96_timestamps</span></code> option to
<code class="docutils literal notranslate"><span class="pre">True</span></code> in <code class="docutils literal notranslate"><span class="pre">write_table</span></code>:</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="n">pq</span><span class="o">.</span><span class="n">write_table</span><span class="p">(</span><span class="n">table</span><span class="p">,</span> <span class="n">where</span><span class="p">,</span> <span class="n">use_deprecated_int96_timestamps</span><span class="o">=</span><span class="bp">True</span><span class="p">)</span>
</pre></div>
</div>
</div>
</div>
| <div class="section" id="compression-encoding-and-file-compatibility"> |
| <h2>Compression, Encoding, and File Compatibility<a class="headerlink" href="#compression-encoding-and-file-compatibility" title="Permalink to this headline">¶</a></h2> |
| <p>The most commonly used Parquet implementations use dictionary encoding when |
| writing files; if the dictionaries grow too large, then they “fall back” to |
| plain encoding. Whether dictionary encoding is used can be toggled using the |
| <code class="docutils literal notranslate"><span class="pre">use_dictionary</span></code> option:</p> |
| <div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="n">pq</span><span class="o">.</span><span class="n">write_table</span><span class="p">(</span><span class="n">table</span><span class="p">,</span> <span class="n">where</span><span class="p">,</span> <span class="n">use_dictionary</span><span class="o">=</span><span class="bp">False</span><span class="p">)</span> |
| </pre></div> |
| </div> |
| <p>The data pages within a column in a row group can be compressed after the |
| encoding passes (dictionary, RLE encoding). In PyArrow we use Snappy |
| compression by default, but Brotli, Gzip, and uncompressed are also supported:</p> |
| <div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="n">pq</span><span class="o">.</span><span class="n">write_table</span><span class="p">(</span><span class="n">table</span><span class="p">,</span> <span class="n">where</span><span class="p">,</span> <span class="n">compression</span><span class="o">=</span><span class="s1">'snappy'</span><span class="p">)</span> |
| <span class="n">pq</span><span class="o">.</span><span class="n">write_table</span><span class="p">(</span><span class="n">table</span><span class="p">,</span> <span class="n">where</span><span class="p">,</span> <span class="n">compression</span><span class="o">=</span><span class="s1">'gzip'</span><span class="p">)</span> |
| <span class="n">pq</span><span class="o">.</span><span class="n">write_table</span><span class="p">(</span><span class="n">table</span><span class="p">,</span> <span class="n">where</span><span class="p">,</span> <span class="n">compression</span><span class="o">=</span><span class="s1">'brotli'</span><span class="p">)</span> |
| <span class="n">pq</span><span class="o">.</span><span class="n">write_table</span><span class="p">(</span><span class="n">table</span><span class="p">,</span> <span class="n">where</span><span class="p">,</span> <span class="n">compression</span><span class="o">=</span><span class="s1">'none'</span><span class="p">)</span> |
| </pre></div> |
| </div> |
| <p>Snappy generally results in better performance, while Gzip may yield smaller |
| files.</p> |
| <p>These settings can also be set on a per-column basis:</p> |
| <div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="n">pq</span><span class="o">.</span><span class="n">write_table</span><span class="p">(</span><span class="n">table</span><span class="p">,</span> <span class="n">where</span><span class="p">,</span> <span class="n">compression</span><span class="o">=</span><span class="p">{</span><span class="s1">'foo'</span><span class="p">:</span> <span class="s1">'snappy'</span><span class="p">,</span> <span class="s1">'bar'</span><span class="p">:</span> <span class="s1">'gzip'</span><span class="p">},</span> |
| <span class="n">use_dictionary</span><span class="o">=</span><span class="p">[</span><span class="s1">'foo'</span><span class="p">,</span> <span class="s1">'bar'</span><span class="p">])</span> |
| </pre></div> |
| </div> |
| </div> |
| <div class="section" id="partitioned-datasets-multiple-files"> |
| <h2>Partitioned Datasets (Multiple Files)<a class="headerlink" href="#partitioned-datasets-multiple-files" title="Permalink to this headline">¶</a></h2> |
| <p>Multiple Parquet files constitute a Parquet <em>dataset</em>. These may present in a |
| number of ways:</p> |
| <ul class="simple"> |
| <li><p>A list of Parquet absolute file paths</p></li> |
| <li><p>A directory name containing nested directories defining a partitioned dataset</p></li> |
| </ul> |
| <p>A dataset partitioned by year and month may look like on disk:</p> |
| <div class="highlight-text notranslate"><div class="highlight"><pre><span></span>dataset_name/ |
| year=2007/ |
| month=01/ |
| 0.parq |
| 1.parq |
| ... |
| month=02/ |
| 0.parq |
| 1.parq |
| ... |
| month=03/ |
| ... |
| year=2008/ |
| month=01/ |
| ... |
| ... |
| </pre></div> |
| </div> |
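<p>Such a directory can be opened with <code class="docutils literal notranslate"><span class="pre">ParquetDataset</span></code>, which discovers
the partition keys from the directory names; a sketch, assuming the layout above
exists on disk:</p>
<div class="highlight-python notranslate"><div class="highlight"><pre><span></span>dataset = pq.ParquetDataset('dataset_name/')
table = dataset.read()  # partition keys 'year' and 'month' become table columns
</pre></div>
</div>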
</div>
<div class="section" id="writing-to-partitioned-datasets">
<h2>Writing to Partitioned Datasets<a class="headerlink" href="#writing-to-partitioned-datasets" title="Permalink to this headline">¶</a></h2>
<p>You can write a partitioned dataset for any <code class="docutils literal notranslate"><span class="pre">pyarrow</span></code> file system that is a
file-store (e.g. local, HDFS, S3). The default behaviour when no filesystem is
passed is to use the local filesystem.</p>
| <div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="c1"># Local dataset write</span> |
| <span class="n">pq</span><span class="o">.</span><span class="n">write_to_dataset</span><span class="p">(</span><span class="n">table</span><span class="p">,</span> <span class="n">root_path</span><span class="o">=</span><span class="s1">'dataset_name'</span><span class="p">,</span> |
| <span class="n">partition_cols</span><span class="o">=</span><span class="p">[</span><span class="s1">'one'</span><span class="p">,</span> <span class="s1">'two'</span><span class="p">])</span> |
| </pre></div> |
| </div> |
| <p>The root path in this case specifies the parent directory to which data will be |
| saved. The partition columns are the column names by which to partition the |
| dataset. Columns are partitioned in the order they are given. The partition |
| splits are determined by the unique values in the partition columns.</p> |
<p>To use another filesystem you only need to add the filesystem parameter; the
individual table writes are wrapped using <code class="docutils literal notranslate"><span class="pre">with</span></code> statements, so the
<code class="docutils literal notranslate"><span class="pre">pq.write_to_dataset</span></code> function does not need to be.</p>
| <div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="c1"># Remote file-system example</span> |
| <span class="n">fs</span> <span class="o">=</span> <span class="n">pa</span><span class="o">.</span><span class="n">hdfs</span><span class="o">.</span><span class="n">connect</span><span class="p">(</span><span class="n">host</span><span class="p">,</span> <span class="n">port</span><span class="p">,</span> <span class="n">user</span><span class="o">=</span><span class="n">user</span><span class="p">,</span> <span class="n">kerb_ticket</span><span class="o">=</span><span class="n">ticket_cache_path</span><span class="p">)</span> |
| <span class="n">pq</span><span class="o">.</span><span class="n">write_to_dataset</span><span class="p">(</span><span class="n">table</span><span class="p">,</span> <span class="n">root_path</span><span class="o">=</span><span class="s1">'dataset_name'</span><span class="p">,</span> |
| <span class="n">partition_cols</span><span class="o">=</span><span class="p">[</span><span class="s1">'one'</span><span class="p">,</span> <span class="s1">'two'</span><span class="p">],</span> <span class="n">filesystem</span><span class="o">=</span><span class="n">fs</span><span class="p">)</span> |
| </pre></div> |
| </div> |
<p>Compatibility note: if using <code class="docutils literal notranslate"><span class="pre">pq.write_to_dataset</span></code> to create a table that
will then be used by Hive, partition column values must be compatible with
the allowed character set of the Hive version you are running.</p>
<div class="section" id="writing-metadata-and-common-medata-files">
<h3>Writing <code class="docutils literal notranslate"><span class="pre">_metadata</span></code> and <code class="docutils literal notranslate"><span class="pre">_common_metadata</span></code> files<a class="headerlink" href="#writing-metadata-and-common-medata-files" title="Permalink to this headline">¶</a></h3>
<p>Some processing frameworks such as Spark or Dask (optionally) use <code class="docutils literal notranslate"><span class="pre">_metadata</span></code>
and <code class="docutils literal notranslate"><span class="pre">_common_metadata</span></code> files with partitioned datasets.</p>
<p>Those files include information about the schema of the full dataset (for
<code class="docutils literal notranslate"><span class="pre">_common_metadata</span></code>) and potentially all row group metadata of all files in the
partitioned dataset as well (for <code class="docutils literal notranslate"><span class="pre">_metadata</span></code>). The actual files are
metadata-only Parquet files. Note this is not a Parquet standard, but a
convention set in practice by those frameworks.</p>
<p>Using those files can give a more efficient creation of a Parquet dataset,
since it can use the stored schema and file paths of all row groups,
instead of inferring the schema and crawling the directories for all Parquet
files (this is especially the case for filesystems where accessing files
is expensive).</p>
| <p>The <a class="reference internal" href="generated/pyarrow.parquet.write_to_dataset.html#pyarrow.parquet.write_to_dataset" title="pyarrow.parquet.write_to_dataset"><code class="xref py py-func docutils literal notranslate"><span class="pre">write_to_dataset()</span></code></a> function does not automatically |
| write such metadata files, but you can use it to gather the metadata and |
| combine and write them manually:</p> |
| <div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="c1"># Write a dataset and collect metadata information of all written files</span> |
| <span class="n">metadata_collector</span> <span class="o">=</span> <span class="p">[]</span> |
| <span class="n">pq</span><span class="o">.</span><span class="n">write_to_dataset</span><span class="p">(</span><span class="n">table</span><span class="p">,</span> <span class="n">root_path</span><span class="p">,</span> <span class="n">metadata_collector</span><span class="o">=</span><span class="n">metadata_collector</span><span class="p">)</span> |
| |
| <span class="c1"># Write the ``_common_metadata`` parquet file without row groups statistics</span> |
| <span class="n">pq</span><span class="o">.</span><span class="n">write_metadata</span><span class="p">(</span><span class="n">table</span><span class="o">.</span><span class="n">schema</span><span class="p">,</span> <span class="n">root_path</span> <span class="o">/</span> <span class="s1">'_common_metadata'</span><span class="p">)</span> |
| |
| <span class="c1"># Write the ``_metadata`` parquet file with row groups statistics of all files</span> |
| <span class="n">pq</span><span class="o">.</span><span class="n">write_metadata</span><span class="p">(</span> |
| <span class="n">table</span><span class="o">.</span><span class="n">schema</span><span class="p">,</span> <span class="n">root_path</span> <span class="o">/</span> <span class="s1">'_metadata'</span><span class="p">,</span> |
| <span class="n">metadata_collector</span><span class="o">=</span><span class="n">metadata_collector</span> |
| <span class="p">)</span> |
| </pre></div> |
| </div> |
| <p>When not using the <a class="reference internal" href="generated/pyarrow.parquet.write_to_dataset.html#pyarrow.parquet.write_to_dataset" title="pyarrow.parquet.write_to_dataset"><code class="xref py py-func docutils literal notranslate"><span class="pre">write_to_dataset()</span></code></a> function, but |
| writing the individual files of the partitioned dataset using |
| <a class="reference internal" href="generated/pyarrow.parquet.write_table.html#pyarrow.parquet.write_table" title="pyarrow.parquet.write_table"><code class="xref py py-func docutils literal notranslate"><span class="pre">write_table()</span></code></a> or <a class="reference internal" href="generated/pyarrow.parquet.ParquetWriter.html#pyarrow.parquet.ParquetWriter" title="pyarrow.parquet.ParquetWriter"><code class="xref py py-class docutils literal notranslate"><span class="pre">ParquetWriter</span></code></a>, |
| the <code class="docutils literal notranslate"><span class="pre">metadata_collector</span></code> keyword can also be used to collect the FileMetaData |
| of the written files. In this case, you need to make sure to set the file path |
| contained in the row group metadata yourself before combining the metadata, and |
| the schemas of all the files and the collected FileMetaData objects must be |
| the same:</p> |
| <div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="n">metadata_collector</span> <span class="o">=</span> <span class="p">[]</span> |
| <span class="n">pq</span><span class="o">.</span><span class="n">write_table</span><span class="p">(</span> |
| <span class="n">table1</span><span class="p">,</span> <span class="n">root_path</span> <span class="o">/</span> <span class="s2">"year=2017/data1.parquet"</span><span class="p">,</span> |
| <span class="n">metadata_collector</span><span class="o">=</span><span class="n">metadata_collector</span> |
| <span class="p">)</span> |
| |
| <span class="c1"># set the file path relative to the root of the partitioned dataset</span> |
| <span class="n">metadata_collector</span><span class="p">[</span><span class="o">-</span><span class="mi">1</span><span class="p">]</span><span class="o">.</span><span class="n">set_file_path</span><span class="p">(</span><span class="s2">"year=2017/data1.parquet"</span><span class="p">)</span> |
| |
| <span class="c1"># combine and write the metadata</span> |
| <span class="n">metadata</span> <span class="o">=</span> <span class="n">metadata_collector</span><span class="p">[</span><span class="mi">0</span><span class="p">]</span> |
| <span class="k">for</span> <span class="n">_meta</span> <span class="ow">in</span> <span class="n">metadata_collector</span><span class="p">[</span><span class="mi">1</span><span class="p">:]:</span> |
| <span class="n">metadata</span><span class="o">.</span><span class="n">append_row_groups</span><span class="p">(</span><span class="n">_meta</span><span class="p">)</span> |
| <span class="n">metadata</span><span class="o">.</span><span class="n">write_metadata_file</span><span class="p">(</span><span class="n">root_path</span> <span class="o">/</span> <span class="s2">"_metadata"</span><span class="p">)</span> |
| |
| <span class="c1"># or use pq.write_metadata to combine and write in a single step</span> |
| <span class="n">pq</span><span class="o">.</span><span class="n">write_metadata</span><span class="p">(</span> |
| <span class="n">table1</span><span class="o">.</span><span class="n">schema</span><span class="p">,</span> <span class="n">root_path</span> <span class="o">/</span> <span class="s2">"_metadata"</span><span class="p">,</span> |
| <span class="n">metadata_collector</span><span class="o">=</span><span class="n">metadata_collector</span> |
| <span class="p">)</span> |
| </pre></div> |
| </div> |
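| <p>Once written, such a <code class="docutils literal notranslate"><span class="pre">_metadata</span></code> file can be read back with |
| <code class="docutils literal notranslate"><span class="pre">read_metadata()</span></code> and passed to <code class="docutils literal notranslate"><span class="pre">ParquetDataset</span></code> through its |
| <code class="docutils literal notranslate"><span class="pre">metadata</span></code> keyword. A minimal sketch, assuming the <code class="docutils literal notranslate"><span class="pre">root_path</span></code> from the |
| snippets above:</p> |
| <div class="highlight-python notranslate"><div class="highlight"><pre><span></span># Inspect the combined metadata written above |
| metadata = pq.read_metadata(root_path / '_metadata') |
| print(metadata.num_row_groups, metadata.num_rows) |
| # Reuse it so ParquetDataset does not have to crawl the directory |
| dataset = pq.ParquetDataset(root_path, metadata=metadata) |
| table = dataset.read() |
| </pre></div> |
| </div> |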
| </div> |
| </div> |
| <div class="section" id="reading-from-partitioned-datasets"> |
| <h2>Reading from Partitioned Datasets<a class="headerlink" href="#reading-from-partitioned-datasets" title="Permalink to this headline">¶</a></h2> |
| <p>The <a class="reference internal" href="generated/pyarrow.parquet.ParquetDataset.html#pyarrow.parquet.ParquetDataset" title="pyarrow.parquet.ParquetDataset"><code class="xref py py-class docutils literal notranslate"><span class="pre">ParquetDataset</span></code></a> class accepts either a directory name or a list |
| or file paths, and can discover and infer some common partition structures, |
| such as those produced by Hive:</p> |
| <div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="n">dataset</span> <span class="o">=</span> <span class="n">pq</span><span class="o">.</span><span class="n">ParquetDataset</span><span class="p">(</span><span class="s1">'dataset_name/'</span><span class="p">)</span> |
| <span class="n">table</span> <span class="o">=</span> <span class="n">dataset</span><span class="o">.</span><span class="n">read</span><span class="p">()</span> |
| </pre></div> |
| </div> |
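| <p>A list of file paths works the same way; a short sketch with hypothetical |
| file names:</p> |
| <div class="highlight-python notranslate"><div class="highlight"><pre><span></span># the same, with an explicit (hypothetical) list of Parquet files |
| dataset = pq.ParquetDataset(['dataset_name/one.parquet', |
|                              'dataset_name/two.parquet']) |
| table = dataset.read() |
| </pre></div> |
| </div> |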
| <p>You can also use the convenience function <code class="docutils literal notranslate"><span class="pre">read_table</span></code> exposed by |
| <code class="docutils literal notranslate"><span class="pre">pyarrow.parquet</span></code>, which avoids the need to create an additional |
| Dataset object:</p> |
| <div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="n">table</span> <span class="o">=</span> <span class="n">pq</span><span class="o">.</span><span class="n">read_table</span><span class="p">(</span><span class="s1">'dataset_name'</span><span class="p">)</span> |
| </pre></div> |
| </div> |
| <p>Note: the partition columns in the original table will have their types |
| converted to Arrow dictionary types (pandas categorical) on load. Ordering of |
| partition columns is not preserved through the save/load process. If reading |
| from a remote filesystem into a pandas DataFrame, you may need to run |
| <code class="docutils literal notranslate"><span class="pre">sort_index</span></code> to maintain row ordering (as long as the <code class="docutils literal notranslate"><span class="pre">preserve_index</span></code> |
| option was enabled on write).</p> |
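| <p>For example, a sketch assuming the dataset was written with |
| <code class="docutils literal notranslate"><span class="pre">preserve_index=True</span></code>:</p> |
| <div class="highlight-python notranslate"><div class="highlight"><pre><span></span># restore the original row order after a (possibly unordered) read |
| df = pq.read_table('dataset_name').to_pandas().sort_index() |
| </pre></div> |
| </div> |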
| <div class="admonition note"> |
| <p class="admonition-title">Note</p> |
| <p>The ParquetDataset is being reimplemented based on the new generic Dataset |
| API (see the <a class="reference internal" href="dataset.html#dataset"><span class="std std-ref">Tabular Datasets</span></a> docs for an overview). This is not yet the |
| default, but can already be enabled by passing the <code class="docutils literal notranslate"><span class="pre">use_legacy_dataset=False</span></code> |
| keyword to <code class="xref py py-class docutils literal notranslate"><span class="pre">ParquetDataset</span></code> or <code class="xref py py-func docutils literal notranslate"><span class="pre">read_table()</span></code>:</p> |
| <div class="highlight-default notranslate"><div class="highlight"><pre><span></span><span class="n">pq</span><span class="o">.</span><span class="n">ParquetDataset</span><span class="p">(</span><span class="s1">'dataset_name/'</span><span class="p">,</span> <span class="n">use_legacy_dataset</span><span class="o">=</span><span class="kc">False</span><span class="p">)</span> |
| </pre></div> |
| </div> |
| <p>Enabling this gives the following new features:</p> |
| <ul class="simple"> |
| <li><p>Filtering on all columns (using row group statistics) instead of only on |
| the partition keys (see the example below this list).</p></li> |
| <li><p>More fine-grained partitioning: support for a directory partitioning scheme |
| in addition to the Hive-like partitioning (e.g. “/2019/11/15/” instead of |
| “/year=2019/month=11/day=15/”), and the ability to specify a schema for |
| the partition keys.</p></li> |
| <li><p>General performance improvement and bug fixes.</p></li> |
| </ul> |
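| <p>For instance, filtering on a non-partition column (a sketch with a |
| hypothetical column name <code class="docutils literal notranslate"><span class="pre">'value'</span></code>):</p> |
| <div class="highlight-default notranslate"><div class="highlight"><pre><span></span># filter on any column using row group statistics |
| pq.read_table('dataset_name/', use_legacy_dataset=False, |
|               filters=[('value', '&gt;', 10)]) |
| </pre></div> |
| </div> |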
| <p>It also has the following changes in behaviour:</p> |
| <ul class="simple"> |
| <li><p>The partition keys need to be explicitly included in the <code class="docutils literal notranslate"><span class="pre">columns</span></code> |
| keyword when you want to include them in the result while reading a |
| subset of the columns (see the example below).</p></li> |
| </ul> |
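| <p>A sketch of this, with hypothetical column and partition key names:</p> |
| <div class="highlight-default notranslate"><div class="highlight"><pre><span></span># 'year' is a hypothetical partition key; it must be listed |
| # explicitly to appear in the result when selecting a column subset |
| pq.read_table('dataset_name/', use_legacy_dataset=False, |
|               columns=['col1', 'year']) |
| </pre></div> |
| </div> |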
| <p>This new implementation is already enabled in <code class="docutils literal notranslate"><span class="pre">read_table</span></code>, and in the |
| future, this will be turned on by default for <code class="docutils literal notranslate"><span class="pre">ParquetDataset</span></code>. The new |
| implementation does not yet cover all existing ParquetDataset features (e.g. |
| specifying the <code class="docutils literal notranslate"><span class="pre">metadata</span></code>, or the <code class="docutils literal notranslate"><span class="pre">pieces</span></code> property API). Feedback is |
| very welcome.</p> |
| </div> |
| </div> |
| <div class="section" id="using-with-spark"> |
| <h2>Using with Spark<a class="headerlink" href="#using-with-spark" title="Permalink to this headline">¶</a></h2> |
| <p>Spark places some constraints on the types of Parquet files it will read. The |
| option <code class="docutils literal notranslate"><span class="pre">flavor='spark'</span></code> will set these options automatically and also |
| sanitize field characters unsupported by Spark SQL.</p> |
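| <p>For example, a minimal sketch (the output file name is arbitrary):</p> |
| <div class="highlight-python notranslate"><div class="highlight"><pre><span></span># write a Spark-compatible file; field names unsupported by |
| # Spark SQL are sanitized automatically |
| pq.write_table(table, 'example_spark.parquet', flavor='spark') |
| </pre></div> |
| </div> |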
| </div> |
| <div class="section" id="multithreaded-reads"> |
| <h2>Multithreaded Reads<a class="headerlink" href="#multithreaded-reads" title="Permalink to this headline">¶</a></h2> |
| <p>Each of the reading functions by default uses multiple threads to read |
| columns in parallel. Depending on the speed of IO |
| and how expensive it is to decode the columns in a particular file |
| (particularly with GZIP compression), this can yield significantly higher data |
| throughput.</p> |
| <p>This can be disabled by specifying <code class="docutils literal notranslate"><span class="pre">use_threads=False</span></code>.</p> |
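| <p>For example (with a hypothetical file name):</p> |
| <div class="highlight-python notranslate"><div class="highlight"><pre><span></span># read serially, e.g. to avoid contention with other workloads |
| table = pq.read_table('example.parquet', use_threads=False) |
| </pre></div> |
| </div> |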
| <div class="admonition note"> |
| <p class="admonition-title">Note</p> |
| <p>The number of threads to use concurrently is automatically inferred by Arrow |
| and can be inspected using the <a class="reference internal" href="generated/pyarrow.cpu_count.html#pyarrow.cpu_count" title="pyarrow.cpu_count"><code class="xref py py-func docutils literal notranslate"><span class="pre">cpu_count()</span></code></a> function.</p> |
| </div> |
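| <p>A short sketch of inspecting (and, if desired, overriding) that thread count:</p> |
| <div class="highlight-python notranslate"><div class="highlight"><pre><span></span>import pyarrow as pa |
| pa.cpu_count()       # number of threads Arrow will use |
| pa.set_cpu_count(4)  # override it explicitly |
| </pre></div> |
| </div> |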
| </div> |
| <div class="section" id="reading-a-parquet-file-from-azure-blob-storage"> |
| <h2>Reading a Parquet File from Azure Blob storage<a class="headerlink" href="#reading-a-parquet-file-from-azure-blob-storage" title="Permalink to this headline">¶</a></h2> |
| <p>The code below shows how to use Azure’s storage SDK along with pyarrow to read |
| a Parquet file into a pandas DataFrame. |
| This is suitable for executing inside a Jupyter notebook running on a Python 3 |
| kernel.</p> |
| <p>Dependencies:</p> |
| <ul class="simple"> |
| <li><p>python 3.6.2</p></li> |
| <li><p>azure-storage 0.36.0</p></li> |
| <li><p>pyarrow 0.8.0</p></li> |
| </ul> |
| <div class="highlight-python notranslate"><div class="highlight"><pre><span></span><span class="kn">import</span> <span class="nn">pyarrow.parquet</span> <span class="kn">as</span> <span class="nn">pq</span> |
| <span class="kn">from</span> <span class="nn">io</span> <span class="kn">import</span> <span class="n">BytesIO</span> |
| <span class="kn">from</span> <span class="nn">azure.storage.blob</span> <span class="kn">import</span> <span class="n">BlockBlobService</span> |
| |
| <span class="n">account_name</span> <span class="o">=</span> <span class="s1">'...'</span> |
| <span class="n">account_key</span> <span class="o">=</span> <span class="s1">'...'</span> |
| <span class="n">container_name</span> <span class="o">=</span> <span class="s1">'...'</span> |
| <span class="n">parquet_file</span> <span class="o">=</span> <span class="s1">'mysample.parquet'</span> |
| |
| <span class="n">byte_stream</span> <span class="o">=</span> <span class="n">io</span><span class="o">.</span><span class="n">BytesIO</span><span class="p">()</span> |
| <span class="n">block_blob_service</span> <span class="o">=</span> <span class="n">BlockBlobService</span><span class="p">(</span><span class="n">account_name</span><span class="o">=</span><span class="n">account_name</span><span class="p">,</span> <span class="n">account_key</span><span class="o">=</span><span class="n">account_key</span><span class="p">)</span> |
| <span class="k">try</span><span class="p">:</span> |
| <span class="n">block_blob_service</span><span class="o">.</span><span class="n">get_blob_to_stream</span><span class="p">(</span><span class="n">container_name</span><span class="o">=</span><span class="n">container_name</span><span class="p">,</span> <span class="n">blob_name</span><span class="o">=</span><span class="n">parquet_file</span><span class="p">,</span> <span class="n">stream</span><span class="o">=</span><span class="n">byte_stream</span><span class="p">)</span> |
| <span class="n">df</span> <span class="o">=</span> <span class="n">pq</span><span class="o">.</span><span class="n">read_table</span><span class="p">(</span><span class="n">source</span><span class="o">=</span><span class="n">byte_stream</span><span class="p">)</span><span class="o">.</span><span class="n">to_pandas</span><span class="p">()</span> |
| <span class="c1"># Do work on df ...</span> |
| <span class="k">finally</span><span class="p">:</span> |
| <span class="c1"># Add finally block to ensure closure of the stream</span> |
| <span class="n">byte_stream</span><span class="o">.</span><span class="n">close</span><span class="p">()</span> |
| </pre></div> |
| </div> |
| <p>Notes:</p> |
| <ul class="simple"> |
| <li><p>The <code class="docutils literal notranslate"><span class="pre">account_key</span></code> can be found under <code class="docutils literal notranslate"><span class="pre">Settings</span> <span class="pre">-></span> <span class="pre">Access</span> <span class="pre">keys</span></code> in the |
| Microsoft Azure portal for a given container</p></li> |
| <li><p>The code above works for a container with private access, Lease State = |
| Available, Lease Status = Unlocked</p></li> |
| <li><p>The Parquet file used was of Blob Type = Block blob</p></li> |
| </ul> |
| </div> |
| </div> |
| |
| |
| </div> |
| |
| </div> |
| <footer> |
| |
| <div class="rst-footer-buttons" role="navigation" aria-label="footer navigation"> |
| |
| <a href="dataset.html" class="btn btn-neutral float-right" title="Tabular Datasets" accesskey="n" rel="next">Next <span class="fa fa-arrow-circle-right"></span></a> |
| |
| |
| <a href="json.html" class="btn btn-neutral float-left" title="Reading JSON files" accesskey="p" rel="prev"><span class="fa fa-arrow-circle-left"></span> Previous</a> |
| |
| </div> |
| |
| |
| <hr/> |
| |
| <div role="contentinfo"> |
| <p> |
| |
| © Copyright 2016-2019 Apache Software Foundation |
| |
| </p> |
| </div> |
| |
| |
| |
| Built with <a href="http://sphinx-doc.org/">Sphinx</a> using a |
| |
| <a href="https://github.com/rtfd/sphinx_rtd_theme">theme</a> |
| |
| provided by <a href="https://readthedocs.org">Read the Docs</a>. |
| |
| </footer> |
| |
| </div> |
| </div> |
| |
| </section> |
| |
| </div> |
| |
| |
| <script type="text/javascript"> |
| jQuery(function () { |
| SphinxRtdTheme.Navigation.enable(true); |
| }); |
| </script> |
| |
| |
| |
| |
| |
| |
| |
| <script type="text/javascript" src="/docs/_static/versionwarning.js"></script></body> |
| </html> |