Configuration

Catalog properties

Iceberg catalogs support catalog properties to configure catalog behaviors. Here is a list of commonly used catalog properties:

| Property | Default | Description |
| -------- | ------- | ----------- |
| catalog-impl | null | a custom Catalog implementation to use by an engine |
| io-impl | null | a custom FileIO implementation to use in a catalog |
| warehouse | null | the root path of the data warehouse |
| uri | null | a URI string, such as Hive metastore URI |
| clients | 2 | client pool size |

HadoopCatalog and HiveCatalog can access the properties in their constructors. Any other custom catalog can access the properties by implementing Catalog.initialize(catalogName, catalogProperties). The properties can be manually constructed or passed in from a compute engine like Spark or Flink. Spark uses its session properties as catalog properties; see more details in the Spark configuration section. Flink passes in catalog properties through the CREATE CATALOG statement; see more details in the Flink section.
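As a sketch of the Flink path, a CREATE CATALOG statement along these lines passes the uri, warehouse, and clients entries through as catalog properties (the catalog name, URI, and warehouse path below are placeholders, not values from this document):

```sql
-- Illustrative only: catalog name, metastore URI, and warehouse path are placeholders
CREATE CATALOG hive_catalog WITH (
  'type' = 'iceberg',
  'catalog-type' = 'hive',
  'uri' = 'thrift://localhost:9083',
  'warehouse' = 'hdfs://nn:8020/warehouse/path',
  'clients' = '5'
);
```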

Lock catalog properties

Here are the catalog properties related to locking. They are used by some catalog implementations to control the locking behavior during commits.

| Property | Default | Description |
| -------- | ------- | ----------- |
| lock-impl | null | a custom implementation of the lock manager; the actual interface depends on the catalog used |
| lock.table | null | an auxiliary table for locking, such as in the AWS DynamoDB lock manager |
| lock.acquire-interval-ms | 5000 (5 s) | the interval in milliseconds to wait between each attempt to acquire a lock |
| lock.acquire-timeout-ms | 180000 (3 min) | the maximum time in milliseconds to try acquiring a lock |
| lock.heartbeat-interval-ms | 3000 (3 s) | the interval in milliseconds to wait between each heartbeat after acquiring a lock |
| lock.heartbeat-timeout-ms | 15000 (15 s) | the maximum time in milliseconds without a heartbeat to consider a lock expired |

Table properties

Iceberg tables support table properties to configure table behavior, like the default split size for readers.

Read properties

| Property | Default | Description |
| -------- | ------- | ----------- |
| read.split.target-size | 134217728 (128 MB) | Target size when combining data input splits |
| read.split.metadata-target-size | 33554432 (32 MB) | Target size when combining metadata input splits |
| read.split.planning-lookback | 10 | Number of bins to consider when combining input splits |
| read.split.open-file-cost | 4194304 (4 MB) | The estimated cost to open a file, used as a minimum weight when combining splits |

Write properties

| Property | Default | Description |
| -------- | ------- | ----------- |
| write.format.default | parquet | Default file format for the table; parquet, avro, or orc |
| write.parquet.row-group-size-bytes | 134217728 (128 MB) | Parquet row group size |
| write.parquet.page-size-bytes | 1048576 (1 MB) | Parquet page size |
| write.parquet.dict-size-bytes | 2097152 (2 MB) | Parquet dictionary page size |
| write.parquet.compression-codec | gzip | Parquet compression codec |
| write.parquet.compression-level | null | Parquet compression level |
| write.avro.compression-codec | gzip | Avro compression codec |
| write.location-provider.impl | null | Optional custom implementation for LocationProvider |
| write.metadata.compression-codec | none | Metadata compression codec; none or gzip |
| write.metadata.metrics.default | truncate(16) | Default metrics mode for all columns in the table; none, counts, truncate(length), or full |
| write.metadata.metrics.column.col1 | (not set) | Metrics mode for column 'col1' to allow per-column tuning; none, counts, truncate(length), or full |
| write.target-file-size-bytes | Long.MAX_VALUE | Controls the size of files generated to target about this many bytes |
| write.wap.enabled | false | Enables write-audit-publish writes |
| write.summary.partition-limit | 0 | Includes partition-level summary stats in snapshot summaries if the changed partition count is less than this limit |
| write.metadata.delete-after-commit.enabled | false | Controls whether to delete the oldest version metadata files after commit |
| write.metadata.previous-versions-max | 100 | The max number of previous version metadata files to keep before deleting after commit |
| write.spark.fanout.enabled | false | Enables the partitioned fanout writer in Spark |
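Table properties like these are typically set at table creation or afterwards with an ALTER TABLE statement. A minimal Spark SQL sketch (the table name and values are illustrative, not defaults from this document):

```sql
-- Illustrative: switch the default write format and raise the target file size to 512 MB
ALTER TABLE catalog.db.table SET TBLPROPERTIES (
  'write.format.default' = 'avro',
  'write.target-file-size-bytes' = '536870912'
);
```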

Table behavior properties

| Property | Default | Description |
| -------- | ------- | ----------- |
| commit.retry.num-retries | 4 | Number of times to retry a commit before failing |
| commit.retry.min-wait-ms | 100 | Minimum time in milliseconds to wait before retrying a commit |
| commit.retry.max-wait-ms | 60000 (1 min) | Maximum time in milliseconds to wait before retrying a commit |
| commit.retry.total-timeout-ms | 1800000 (30 min) | Total retry timeout period in milliseconds for a commit |
| commit.manifest.target-size-bytes | 8388608 (8 MB) | Target size when merging manifest files |
| commit.manifest.min-count-to-merge | 100 | Minimum number of manifests to accumulate before merging |
| commit.manifest-merge.enabled | true | Controls whether to automatically merge manifests on writes |
| history.expire.max-snapshot-age-ms | 432000000 (5 days) | Default max age of snapshots to keep while expiring snapshots |
| history.expire.min-snapshots-to-keep | 1 | Default min number of snapshots to keep while expiring snapshots |

Compatibility flags

| Property | Default | Description |
| -------- | ------- | ----------- |
| compatibility.snapshot-id-inheritance.enabled | false | Enables committing snapshots without explicit snapshot IDs |

Hadoop configuration

The following properties from the Hadoop configuration are used by the Hive Metastore connector.

| Property | Default | Description |
| -------- | ------- | ----------- |
| iceberg.hive.client-pool-size | 5 | The size of the Hive client pool when tracking tables in HMS |
| iceberg.hive.lock-timeout-ms | 180000 (3 min) | Maximum time in milliseconds to acquire a lock |
| iceberg.hive.lock-check-min-wait-ms | 50 | Minimum time in milliseconds to check back on the status of lock acquisition |
| iceberg.hive.lock-check-max-wait-ms | 5000 | Maximum time in milliseconds to check back on the status of lock acquisition |

Note: iceberg.hive.lock-check-max-wait-ms should be less than the transaction timeout of the Hive Metastore (hive.txn.timeout, or metastore.txn.timeout in newer versions). Otherwise, the heartbeats on the lock (which happen during the lock checks) would end up expiring in the Hive Metastore before the lock is retried from Iceberg.
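As a sketch, these Hadoop configuration properties can be set in a site file such as hive-site.xml (the values below are illustrative, not recommendations):

```xml
<!-- Illustrative values; keep lock-check-max-wait-ms below hive.txn.timeout -->
<property>
  <name>iceberg.hive.lock-timeout-ms</name>
  <value>180000</value>
</property>
<property>
  <name>iceberg.hive.lock-check-max-wait-ms</name>
  <value>5000</value>
</property>
```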

Spark configuration

Catalogs

Spark catalogs are configured using Spark session properties.

A catalog is created and named by adding a property spark.sql.catalog.(catalog-name) with an implementation class for its value.

Iceberg supplies two implementations:

  • org.apache.iceberg.spark.SparkCatalog supports a Hive Metastore or a Hadoop warehouse as a catalog
  • org.apache.iceberg.spark.SparkSessionCatalog adds support for Iceberg tables to Spark's built-in catalog, and delegates to the built-in catalog for non-Iceberg tables

Both catalogs are configured using properties nested under the catalog name:

| Property | Values | Description |
| -------- | ------ | ----------- |
| spark.sql.catalog.catalog-name.type | hive or hadoop | The underlying Iceberg catalog implementation, HiveCatalog or HadoopCatalog |
| spark.sql.catalog.catalog-name.catalog-impl | | The underlying Iceberg catalog implementation. When set, the value of the type property is ignored |
| spark.sql.catalog.catalog-name.default-namespace | default | The default current namespace for the catalog |
| spark.sql.catalog.catalog-name.uri | thrift://host:port | URI for the Hive Metastore; default from hive-site.xml (Hive only) |
| spark.sql.catalog.catalog-name.warehouse | hdfs://nn:8020/warehouse/path | Base path for the warehouse directory (Hadoop only) |
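Putting these together, a Hive-backed catalog could be configured with session properties along these lines (the catalog name hive_prod and the metastore host are placeholders):

```
spark.sql.catalog.hive_prod      = org.apache.iceberg.spark.SparkCatalog
spark.sql.catalog.hive_prod.type = hive
spark.sql.catalog.hive_prod.uri  = thrift://metastore-host:9083
```

The same properties can equally be passed as --conf flags on the Spark command line.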

Read options

Spark read options are passed when configuring the DataFrameReader, like this:

// time travel
spark.read
    .option("snapshot-id", 10963874102873L)
    .table("catalog.db.table")

| Spark option | Default | Description |
| ------------ | ------- | ----------- |
| snapshot-id | (latest) | Snapshot ID of the table snapshot to read |
| as-of-timestamp | (latest) | A timestamp in milliseconds; the snapshot used will be the snapshot current at this time |
| split-size | As per table property | Overrides this table's read.split.target-size and read.split.metadata-target-size |
| lookback | As per table property | Overrides this table's read.split.planning-lookback |
| file-open-cost | As per table property | Overrides this table's read.split.open-file-cost |
| vectorization-enabled | As per table property | Overrides this table's read.parquet.vectorization.enabled |
| batch-size | As per table property | Overrides this table's read.parquet.vectorization.batch-size |
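Time travel by timestamp follows the same pattern as the snapshot-id example above; a sketch (the timestamp value is illustrative):

```scala
// time travel to the snapshot current as of this epoch-millisecond timestamp
spark.read
    .option("as-of-timestamp", "499162860000")
    .table("catalog.db.table")
```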

Write options

Spark write options are passed when configuring the DataFrameWriter, like this:

// write with Avro instead of Parquet
df.write
    .option("write-format", "avro")
    .option("snapshot-property.key", "value")
    .insertInto("catalog.db.table")

| Spark option | Default | Description |
| ------------ | ------- | ----------- |
| write-format | Table write.format.default | File format to use for this write operation; parquet, avro, or orc |
| target-file-size-bytes | As per table property | Overrides this table's write.target-file-size-bytes |
| check-nullability | true | Sets the nullable check on fields |
| snapshot-property.custom-key | null | Adds an entry with custom-key and corresponding value in the snapshot summary |
| fanout-enabled | false | Overrides this table's write.spark.fanout.enabled |
| check-ordering | true | Checks if input schema and table schema are the same |