---
layout: global
title: ORC Files
displayTitle: ORC Files
license: |
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at

     http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
---

* Table of contents
{:toc}

[Apache ORC](https://orc.apache.org) is a columnar format which has more advanced features like native zstd compression, bloom filters, and columnar encryption.

## ORC Implementation

Spark supports two ORC implementations (`native` and `hive`), which is controlled by `spark.sql.orc.impl`. The two implementations share most functionality but have different design goals.

* The `native` implementation is designed to follow Spark's data source behavior like Parquet.
* The `hive` implementation is designed to follow Hive's behavior and uses Hive SerDe.

For example, historically, the `native` implementation handled CHAR/VARCHAR with Spark's native String type, while the `hive` implementation handled it via Hive CHAR/VARCHAR, so the query results differed. Since Spark 3.1.0, [SPARK-33480](https://issues.apache.org/jira/browse/SPARK-33480) removes this difference by supporting CHAR/VARCHAR on the Spark side.

### Vectorized Reader

The `native` implementation supports a vectorized ORC reader and has been the default ORC implementation since Spark 2.3. The vectorized reader is used for native ORC tables (e.g., the ones created using the clause `USING ORC`) when `spark.sql.orc.impl` is set to `native` and `spark.sql.orc.enableVectorizedReader` is set to `true`.
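For illustration, a minimal sketch (the table name is hypothetical, and both settings shown are already the defaults since Spark 2.3):

{% highlight sql %}
-- Both settings below are the defaults since Spark 2.3; shown for clarity.
SET spark.sql.orc.impl=native;
SET spark.sql.orc.enableVectorizedReader=true;

-- A native ORC table: reads of this table use the vectorized reader.
CREATE TABLE vectorized_example (id INT, name STRING) USING ORC;
{% endhighlight %}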

For Hive ORC serde tables (e.g., the ones created using the clause `USING HIVE OPTIONS (fileFormat 'ORC')`), the vectorized reader is used when `spark.sql.hive.convertMetastoreOrc` is also set to `true`; this conversion is turned on by default.
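A sketch of the Hive serde case (the table name is hypothetical); with the conversion enabled, reads go through the vectorized reader as well:

{% highlight sql %}
-- On by default; shown here for clarity.
SET spark.sql.hive.convertMetastoreOrc=true;

-- A Hive ORC serde table: with the conversion above, reads use
-- Spark's vectorized ORC reader instead of Hive SerDe.
CREATE TABLE hive_serde_example (id INT, name STRING)
USING HIVE OPTIONS (fileFormat 'ORC');
{% endhighlight %}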

## Schema Merging

Like Protocol Buffer, Avro, and Thrift, ORC also supports schema evolution. Users can start with a simple schema, and gradually add more columns to the schema as needed. In this way, users may end up with multiple ORC files with different but mutually compatible schemas. The ORC data source is now able to automatically detect this case and merge schemas of all these files.

Since schema merging is a relatively expensive operation, and is not a necessity in most cases, we turned it off by default. You may enable it in either of the following ways, as illustrated after this list:

1. setting the data source option `mergeSchema` to `true` when reading ORC files, or
2. setting the global SQL option `spark.sql.orc.mergeSchema` to `true`.
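Both approaches, sketched below with a hypothetical location `/path/to/orc/files`:

{% highlight sql %}
-- 1. Per-source data source option, passed through OPTIONS.
CREATE TABLE merged_orc
USING ORC
OPTIONS (mergeSchema 'true')
LOCATION '/path/to/orc/files';

-- 2. Global SQL option, applied to all ORC reads in the session.
SET spark.sql.orc.mergeSchema=true;
{% endhighlight %}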

## Zstandard

Since Spark 3.2, you can take advantage of Zstandard compression in ORC files. Please see [Zstandard](https://facebook.github.io/zstd/) for its benefits.

{% highlight sql %}
CREATE TABLE compressed (
  key STRING,
  value STRING
)
USING ORC
OPTIONS (
  compression 'zstd'
)
{% endhighlight %}

## Bloom Filters

You can control bloom filters and dictionary encodings for ORC data sources. The following ORC example will create a bloom filter and use dictionary encoding only for `favorite_color`. To find more detailed information about the extra ORC options, visit the official Apache ORC website.

{% highlight sql %}
CREATE TABLE users_with_options (
  name STRING,
  favorite_color STRING,
  favorite_numbers array<integer>
) USING ORC
OPTIONS (
  orc.bloom.filter.columns 'favorite_color',
  orc.dictionary.key.threshold '1.0',
  orc.column.encoding.direct 'name'
)
{% endhighlight %}

## Columnar Encryption

Since Spark 3.2, columnar encryption is supported for ORC tables with Apache ORC 1.6. The following example uses Hadoop KMS as a key provider with the given location. Please visit [Apache Hadoop KMS](https://hadoop.apache.org/docs/current/hadoop-kms/index.html) for the detail.
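A minimal sketch, assuming a Hadoop KMS running at `kms://http@localhost:9600/kms`; the table, column, and master-key name (`pii`) are placeholders. `orc.encrypt` maps master keys to columns, and `orc.mask` specifies what readers without key access see:

{% highlight sql %}
CREATE TABLE encrypted (
  ssn STRING,
  email STRING,
  name STRING
)
USING ORC
OPTIONS (
  -- Assumed local Hadoop KMS holding the master keys.
  hadoop.security.key.provider.path "kms://http@localhost:9600/kms",
  orc.key.provider "hadoop",
  -- Encrypt the ssn and email columns with the 'pii' master key.
  orc.encrypt "pii:ssn,email",
  -- Readers without key access see ssn nullified and email hashed.
  orc.mask "nullify:ssn;sha256:email"
)
{% endhighlight %}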

## Hive metastore ORC table conversion

When reading from and inserting into Hive metastore ORC tables, Spark SQL will try to use its own ORC support instead of Hive SerDe for better performance. For CTAS statements, only non-partitioned Hive metastore ORC tables are converted. This behavior is controlled by the `spark.sql.hive.convertMetastoreOrc` configuration, and it is turned on by default.
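For example (a sketch), the conversion can be disabled to fall back to Hive SerDe:

{% highlight sql %}
-- On by default; set to false to use Hive SerDe instead of
-- Spark's built-in ORC support for Hive metastore ORC tables.
SET spark.sql.hive.convertMetastoreOrc=false;
{% endhighlight %}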

## Configuration

### Data Source Option

Data source options of ORC can be set via: